\section{Introduction.} We consider the kinetic Ising model in ${\mathbb Z}^d$ under a small positive magnetic field in the limit of vanishing temperature and we study the relaxation of the system starting from the metastable state where all the spins are set to minus. An introduction to the metastability problem is presented in section~\ref{back}. In section~\ref{prob}, we explain the three major problems we had to solve to extend the two dimensional results to dimension~$d$. The main results are stated in section~\ref{mr}. The strategy of the proof is explained in section~\ref{sp}. \subsection{Background.} \label{back} This work extends to dimension $d\geq 3$ the main result of Dehghanpour and Schonmann \cite{DS1}. We consider the stochastic Ising model on ${\mathbb Z}^d$ evolving with the Metropolis dynamics under a fixed small positive magnetic field~$h$. We start the system in the minus phase. Let~$\tau_d$ be the typical relaxation time of the system, defined here as the time at which the plus phase invades the origin. We will study the asymptotic behavior of~$\tau_d$ as we scale the temperature to~$0$. The corresponding problem in finite volume (that is, in a box~$\Lambda$ whose size is fixed) has been previously studied in arbitrary dimension by Neves \cite{N2,N1}. In this situation, Neves proved that the relaxation time behaves as $\exp(\beta\Gamma_d)$, where $\beta=1/T$ is the inverse temperature and $\Gamma_d$ is the energy barrier the system has to overcome to go from the metastable state $\mathbf{-1}$ to the stable state $\mathbf{+1}$. An explicit formula is available for $\Gamma_d$; however, it is quite complicated. The energy barrier $\Gamma_d$ is the solution of a minimax problem and it is reached for configurations which are optimal saddles between $\mathbf{-1}$ and $\mathbf{+1}$ in the energy landscape of the Ising model. These results have been refined in dimension~$3$ in \cite{BAC}.
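As a concrete illustration, let us recall the classical low dimensional formulas (a recollection from the finite volume studies cited above, not a statement proved here). With the Hamiltonian normalized as in section~\ref{mr}, a droplet of pluses with perimeter $P$ and volume $A$ has energy $P-hA$ relative to $\mathbf{-1}$, and one finds:

```latex
% In dimension 1, the critical droplet is a single plus spin, hence
\Gamma_1 \,=\, 2-h\,.
% In dimension 2 (for 2/h not an integer), the critical droplet is an
% \ell_c x (\ell_c - 1) rectangle of pluses with a unit protuberance
% attached to one of its longest sides, where \ell_c = \lceil 2/h \rceil.
% Counting the perimeter 4\ell_c and the volume \ell_c(\ell_c-1)+1 gives
\Gamma_2 \,=\, 4\ell_c \,-\, h\big(\ell_c(\ell_c-1)+1\big)\,.
```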
In dimension~$3$, the optimal saddles have been identified: they are configurations called critical droplets, which contain exactly one connected component of pluses of cardinality~$m_3$, whose shape is an appropriate union of a specific quasicube (whose sides depend on~$h$) and a two dimensional critical droplet. In dimension $d\geq 4$, the results of Neves yield that the configurations consisting of the appropriate union of a $d$ dimensional quasicube and a $d-1$ dimensional critical droplet are optimal saddles, but it is currently not proved that they are the only ones. However, it is reasonable to expect that the cases of equality in the discrete isoperimetric inequality on the lattice can be analyzed in dimension~$d\geq 4$ in the same way they were studied in dimension~$d=3$ \cite{AC}, so that the three dimensional results could be extended to higher dimensions. In infinite volume, instead of nucleating locally in a finite box near the origin, a critical droplet of pluses might be created far from the origin, and this droplet can grow, become supercritical and invade the origin. It turns out that this is the most efficient mechanism to relax to equilibrium. This was shown by Dehghanpour and Schonmann in the two dimensional case \cite{DS1} and it required several new ideas and insights compared to the finite volume analysis. Indeed, one has to understand the typical birth place of the first critical droplets which are likely to invade the origin, as well as their growth mechanism. The heuristics given in \cite{DS1} apply in $d$~dimensions as well. Suppose that the nucleation time in a finite box is exponentially distributed with rate $\exp({-{\beta} {\Gamma}_d})$, independently from other boxes, and that the speed of growth of a large supercritical droplet is $v_d$. The droplets which can reach the origin at time~$t$ are the droplets which are born inside the space--time cone whose basis is a $d$ dimensional cube with side length $v_dt$ and whose height is $t$.
The critical space--time cone is such that its volume times the nucleation rate is of order one. Let ${\tau}_d$ be the typical relaxation time in dimension $d$, i.e., the time when the stable plus phase invades the origin. From the previous heuristics, we conclude that ${\tau}_d$ satisfies $$\frac{1}{d+1}{\tau}_d \tonda{v_d {\tau}_d}^d \exp({-{\beta} {\Gamma}_d})\, =\,1\,.$$ Solving this identity and neglecting the factor ${1}/({d+1})$, we get $${\tau}_d\,=\,\exp\Big( \frac{1}{d+1}\big(\beta{\Gamma}_d-d\ln v_d\big)\Big)\,.$$ Since the large supercritical droplets are approximately parallelepipeds, the dynamics on one face behaves like a $d-1$ dimensional stochastic Ising model and the time needed to fill a face with pluses is of order~$\tau_{d-1}$. Thus $v_d$ should behave like the inverse of~$\tau_{d-1}$ and the previous formula becomes $$\ln{\tau}_d\,=\, \frac{1}{d+1}\big(\beta{\Gamma}_d+d\ln {\tau}_{d-1}\big)\,.$$ In this computation, we take into account only the terms on the exponential scale, of order $\exp(\beta\,\text{constant})$. Setting ${\tau}_d=\exp({{\beta} {\kappa}_d})$, the constant ${\kappa}_d$ satisfies $${\kappa}_d\,=\, \frac{1}{d+1}\big({\Gamma}_d+d{\kappa}_{d-1}\big)\,.$$ Solving the recursion, and using that ${\kappa}_0=0$, we get that $${\kappa}_d\,=\,\frac{1}{d+1}\big({\Gamma}_1+\cdots+{\Gamma}_d\big)\,.$$ \subsection{Three major problems.} \label{prob} Although these heuristics are rather convincing, it is a real challenge to prove rigorously that the asymptotics of the relaxation time are indeed of order $\exp({{\beta} {\kappa}_d})$. Our strategy is to implement inductively the scheme of Dehghanpour and Schonmann. To do so, we had to overcome three major problems. \noindent {\bf Speed of growth.} A first major difficulty is to control the speed of growth~$v_d$ of large supercritical droplets. The upper bound on the speed of growth in \cite{DS2} was based on a very detailed analysis of the growth of an infinite interface.
Using a combinatorial argument based on chronological paths, first introduced by Kesten and Schonmann in the context of a simplified growth model \cite{KS}, Dehghanpour and Schonmann were able to prove that $v_2$ is of order $\exp(-\beta \Gamma_1/2)$. Despite considerable efforts, we have not managed to extend this technique of analysis to higher dimensions. Here we consider only interfaces with a size that is exponential in ${\beta}$. In order to control the growth of these interfaces, we use inductively coupling techniques introduced to analyze the finite--size scaling in the bootstrap percolation model \cite{CeCi,CM}. We apply these techniques successively in two distinct ways, first sequentially and then in parallel. This strategy was first elaborated for a simplified growth model \cite{CM2}, yet its application in the context of the Ising model is more troublesome. Contrary to the case of the growth model, we did not manage to compare the dynamics in a strip with a genuine $d-1$ dimensional dynamics, and we perform the induction on the boundary conditions rather than on the dimension. An additional source of trouble is to control the configurations in the metastable regions. To tackle this problem, we introduce an adequate hypothesis describing their law, which is preserved until the arrival of supercritical droplets. A key result to control the speed of growth is theorem~\ref{T2}. \noindent {\bf Energy landscape.} A second major difficulty is that it is very hard to analyze the energy landscape of the Ising model in high dimension, and the results we are able to obtain are very weak compared to the corresponding results in finite volume and in dimensions two and three (see \cite{NS1,NS2,BAC}). For instance, we are not able to determine whether a given cluster of pluses tends to shrink or to grow.
Moreover, we do not know some of the fine details of the energy landscape, such as the depth of the small cycles that could trap the process and increase the relaxation time. In other words, we do not know how to compute the inner resistance of the metastable cycle in $d$ dimensions, that is, the energy barrier that a subcritical configuration has to overcome in order to reach either the plus configuration or the minus configuration in a finite box. This affects the strategies for both the upper and the lower estimates of the relaxation time, since in order to approximate the distribution of the nucleation time by an exponential law with rate $\exp(-{\beta} {\Gamma}_d )$, one has to rule out the possibility that the process is trapped in a deep well. We are able to get the required bounds by using the attractivity and the reversibility of the dynamics, see lemma~\ref{fugaup} and proposition~\ref{nucl}. \noindent {\bf Space--time clusters.} The third major difficulty in extending the analysis of Dehghanpour and Schonmann is to control the space--time clusters adequately. For instance, we cannot proceed as in \cite{DS1} to rule out the possibility that a subcritical cluster crosses a long distance. This question turns out to be much more involved in higher dimension. It is tackled in theorem~\ref{totcontrole}, which is a key ingredient of the whole analysis. To control the diameters of the space--time clusters, we use ideas of recurrence and a decomposition of the space into sets called ``cycle compounds''. A cycle compound is a connected set of states $\overline{\cal A}$ such that the communication energy between two points of $\overline{\cal A}$ is less than or equal to the communication energy between $\overline{\cal A}$ and its complement. A cycle is a cycle compound, yet an appropriate union of cycles might form a cycle compound without being a cycle. \subsection{Main results.} \label{mr} We now describe briefly the model and then state our main result.
We study the $d$ dimensional nearest-neighbor stochastic Ising model at inverse temperature~$\beta$ with a fixed small positive magnetic field $h$, that is, the continuous--time Markov process $(\sigma_t)_{t\geq 0}$ with state space $\smash{\{-1,+1\}^{{\mathbb Z}^d}}$ defined as follows. In the configuration~$\sigma$, the spin at the site $x\in{\mathbb Z}^d$ flips at rate $$c(\sigma,\sigma^x)\,=\, \exp\big(-\beta\big(\Delta_x H(\sigma)\big)^+\big)\,, $$ where $(a)^+=\max(a,0)$ and $$ \Delta_x H(\sigma)\,=\, \sigma(x) \Big(\sum_{\genfrac{}{}{0pt}{1}{y \in {\mathbb Z}^d}{|x-y|=1}} \sigma(y) + h \Big)\,.$$ In other words, the infinitesimal generator of the process $(\sigma_t)_{t\geq 0}$ acts on a local observable $f$ as \begin{equation} \label{genee} (Lf)(\sigma)\,=\, \sum_{x} c(\sigma,\sigma^x) (f(\sigma^x)-f(\sigma))\,, \end{equation} where $\sigma^x$ is the configuration $\sigma$ in which the spin at site~$x$ has been reversed. Formally, we have $$\Delta_x H(\sigma)\,=\,H(\sigma^x)-H(\sigma)$$ where $H$ is the formal Hamiltonian given by \begin{equation} \label{zertyu} H(\sigma ) \,=\, - \frac{1}{2}\sum_{\genfrac{}{}{0pt}{1}{\{x,y\} \subset {\mathbb Z}^d}{|x-y|=1}} \sigma(x) \sigma(y) - \frac h 2 \sum_{x \in {\mathbb Z}^d} \sigma(x)\,. \end{equation} More details on the construction of this process are given in sections~\ref{isisec} and~\ref{sigm}. We denote by $({\sigma}_t^\mathbf{-1})_{t\geq 0}$ the process starting from $\mathbf{-1}$, the configuration in which all the spins are equal to $-1$. A {local observable} is a real valued function~$f$ defined on the configuration space which depends only on a finite set of spin variables. \begin{theorem} \label{main} Let $f$ be a local observable.
If the magnetic field $h$ is positive and sufficiently small, then there exists a value ${\kappa}_d$ such that, letting ${\tau_\beta}=\exp(\beta{\kappa})$, we have \begin{eqnarray*} &&\lim_{\beta\to\infty}E(f({\sigma}_{\scriptstyle {{\tau_\beta}}}^\mathbf{-1}))\,=\,f(\mathbf{-1}) \ \hbox{ if } {\kappa}<{\kappa}_d\,,\\ &&\lim_{\beta\to\infty}E(f({\sigma}_{{\tau_\beta}}^\mathbf{-1}))\,=\,f(\mathbf{+1}) \ \hbox{ if } {\kappa}>{\kappa}_d\,. \end{eqnarray*} The value ${\kappa}_d$ depends only on the dimension~$d$ and the magnetic field~$h$; in fact, if we denote by ${\Gamma}_i$ the energy of the $i$ dimensional critical droplet of the Ising model at zero temperature and magnetic field~$h$, then $${\kappa}_d\,=\,\frac{1}{d+1}\big({\Gamma}_1+\cdots+{\Gamma}_d\big)\,.$$ \end{theorem} Besides the aforementioned technical difficulties, our proof is basically an inductive implementation of the scheme of \cite{DS1}, combined with the strategy of \cite{CM}. Let us give some insight into the scheme of the proof. The first step of the proof consists in reducing the problem to a process defined in a finite volume which is exponential in~$\beta$. Let $\kappa>0$ and let $\tau_\beta=\exp(\beta {\kappa})$. Let $L>\kappa$ and let ${\Lambda}_\beta={\Lambda}(\exp(\beta L))$ be a cubic box of side length $\exp(\beta L)$. We have that \begin{equation*} \lim_{\beta\to{\infty}}\,\, {\mathbb P} \big(f({\sigma}_{\tau_\beta}^\mathbf{-1}) \, =\, f({\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}) \big)\, =\, 1\,, \end{equation*} where $({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}})_{t\geq 0}$ is the process in the box ${\Lambda}_\beta$ with minus boundary conditions starting from $\mathbf{-1}$. This follows from a standard large deviation estimate based on the fact that the maximum rate in the model is~$1$; see lemmas~1 and~2 of \cite{Sch} for the complete proof. We state next the finite volume results that we will prove.
\bt{mainfv} Let $L>0$ and let ${\Lambda}_\beta={\Lambda}(\exp(\beta L))$ be a cubic box of side length $\exp(\beta L)$. Let $\kappa>0$ and let $\tau_\beta=\exp(\beta {\kappa})$. There exists $h_0>0$ such that, for any $h\in ]0,h_0[$, the following holds: \par\noindent $\bullet$ If $\kappa<\max(\Gamma_d-dL,\kappa_d)$, then \begin{equation*} \lim_{\beta\to\infty}\,\, {\mathbb P}\big({\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}(0)=1\big)\,=\,0\,. \end{equation*} \par\noindent $\bullet$ If $\kappa>\max(\Gamma_d-dL,\kappa_d)$, then \begin{equation*} \lim_{\beta\to\infty}\,\, {\mathbb P}\big({\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}(0)=-1\big)\,=\,0\,. \end{equation*} \end{theorem} Recall that $\Gamma_d$ and $\kappa_d$ depend on the magnetic field~$h$. Explicit formulae are available for $\Gamma_d$ and $\kappa_d$; however, they are quite complicated. An important point is that $\Gamma_d$ and $\kappa_d$ are continuous functions of the magnetic field~$h$ (this is proved in lemma~\ref{contga}); this will allow us to reduce the study to irrational values of~$h$. An explicit bound on~$h_0$ can also be computed. In dimension~$d$, the proof works if $h_0\leq 1$ and lemma~\ref{control} holds. Let us denote by $m_{d}$ the volume of the critical droplet in dimension~$d$. Lemma~\ref{control} holds as soon as $$\forall n \leq d\qquad (\Gamma_{n-1})^{{n}}\,\leq\,(m_{n-1})^{n-1}\,.$$ We next shift our attention to finite volumes and perform simple computations to understand why the critical constant appearing in theorem~\ref{mainfv} is equal to $\max(\Gamma_d-dL,\kappa_d)$. We have two possible scenarios for the relaxation to equilibrium in a finite cube. If the cube is small, then the system relaxes via the formation of a single critical droplet that grows until it covers the entire volume. If the cube is large, then a more efficient mechanism consists in creating many critical droplets that grow and eventually coalesce.
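The arithmetic behind these two scenarios can be checked mechanically. The following sketch (in Python, with purely illustrative values for the energies $\Gamma_1,\Gamma_2,\Gamma_3$, not computed from the Hamiltonian) verifies that the recursion ${\kappa}_d=\frac{1}{d+1}({\Gamma}_d+d{\kappa}_{d-1})$ of the heuristics agrees with the closed formula, and that the critical constant $\max(\Gamma_d-dL,\kappa_d)$ of theorem~\ref{mainfv} saturates at $\kappa_d$ for large boxes:

```python
from math import isclose

def kappa_recursive(gammas):
    """Iterate kappa_d = (Gamma_d + d * kappa_{d-1}) / (d + 1), kappa_0 = 0."""
    kappa = 0.0
    for d, gamma in enumerate(gammas, start=1):
        kappa = (gamma + d * kappa) / (d + 1)
    return kappa

def kappa_closed(gammas):
    """Closed form kappa_d = (Gamma_1 + ... + Gamma_d) / (d + 1)."""
    return sum(gammas) / (len(gammas) + 1)

def critical_constant(gammas, L):
    """Threshold max(Gamma_d - d L, kappa_d) of the finite volume theorem."""
    d = len(gammas)
    return max(gammas[-1] - d * L, kappa_closed(gammas))

# Purely illustrative energies Gamma_1, Gamma_2, Gamma_3 (increasing in d):
gammas = [1.9, 5.3, 11.0]
kappa3 = kappa_closed(gammas)
assert isclose(kappa_recursive(gammas), kappa3)

# The two relaxation mechanisms exchange roles at L_3 = (Gamma_3 - kappa_3)/3:
L3 = (gammas[-1] - kappa3) / 3
assert critical_constant(gammas, L3 / 2) > kappa3          # small box regime
assert isclose(critical_constant(gammas, 2 * L3), kappa3)  # large box regime
```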
The critical side length of the cubes separating these two mechanisms scales exponentially with ${\beta}$ as $\exp({\beta} L_d)$, where $$L_d\,=\, \frac{\Gamma_d-\kappa_{d}}{d} \,.$$ This value is the result of the computations; we do not have a simple heuristic explanation for it. There are three main factors controlling the relaxation time, which correspond to the heuristics explained previously: \medskip \noindent {\bf Nucleation.} Within a box of side length $\exp(\beta K)$, the typical time when the first critical droplet appears is of order $\exp(\beta (\Gamma_d-dK))$. \noindent {\bf Initial growth.} The typical time to grow from a critical droplet (which has a diameter of order $2d/h$) into a supercritical droplet (which has a diameter of order $\exp({\beta} L_d)$) travelling at the asymptotic speed $\exp(-\beta \kappa_{d-1})$ is $\exp(\beta \Gamma_{d-1})$. \noindent {\bf Asymptotic growth.} In a time $\exp(\beta (K+\kappa_{d-1}))$, a supercritical droplet having a diameter larger than $\exp({\beta} L_d)$ and travelling at the asymptotic speed $\exp(-\beta \kappa_{d-1})$ covers a distance $\exp(\beta K)$ in each axis direction and its diameter increases by $2\exp(\beta K)$. \medskip \noindent The statement concerning the nucleation time contains no mystery. Let us try to explain the statements on the growth of the droplets. Once a critical droplet is born, it starts to grow at speed $\exp(-\beta \Gamma_{d-1})$. As the droplet grows, the speed of growth increases, because the number of choices for the creation of a new $d-1$ dimensional critical droplet attached to the face of the droplet is of the order of the surface of the droplet. Thus the speed of growth of a droplet of size $\exp(\beta K)$ is $$\exp(\beta (K(d-1)-\Gamma_{d-1}))\,.$$ When $K$ reaches the value $L_{d-1}$, the speed of growth is limited by the inverse of the time needed for the $d-1$ dimensional critical droplet to cover an entire face of the droplet.
This time corresponds to the $d-1$ dimensional relaxation time in infinite volume and the droplet reaches its asymptotic speed, of order $\exp(-\beta \kappa_{d-1})$. The time needed to grow a critical droplet into a supercritical droplet travelling at the asymptotic speed is $$\sum_{1\leq i\leq \exp(\beta L_{d-1})} \exp \beta\Big( \Gamma_{d-1}- \frac{d-1}{\beta}\ln i \Big)$$ and, for $d\geq 2$, this is still of order $\exp(\beta \Gamma_{d-1})$. With the help of the above facts, we can estimate the relaxation time in a box of side length $\exp(\beta L)$. Suppose that the origin is covered by a large supercritical droplet at time $\exp(\beta \kappa)$. If this droplet is born at distance $\frac{1}{2}\exp(\beta K)$, then nucleation has occurred inside the box ${\Lambda}( \exp(\beta K))$ and the initial critical droplet has grown into a droplet of diameter $\frac{1}{2}\exp(\beta K)$ in order to reach the origin. This scenario needs a time $$\displaylines{ \left( \begin{matrix} \text{time for nucleation}\\ \text{in the box ${\Lambda}(\exp(\beta K))$ } \end{matrix} \right) \,+\, \left( \! \begin{matrix} \text{ time to cover }\\ \text{ the box ${\Lambda}(\exp(\beta K))$ } \end{matrix} \!\right) \cr \,\sim\, \exp(\beta (\Gamma_{d}-dK))\,+\, \exp(\beta \Gamma_{d-1}) \,+\, \exp(\beta (K+\kappa_{d-1})) \, }$$ which is of order $$\exp\Big(\beta \max\big(\Gamma_{d}-dK, \Gamma_{d-1}, K+\kappa_{d-1} \big)\Big) \,. $$ To find the most efficient scenario, we optimize over $K<L$ and we conclude that the relaxation time in the box ${\Lambda}(\exp({\beta} L))$ is of order \begin{equation*} \exp\Big( \beta\inf_{K\leq L}\,\max\big(\Gamma_{d}-dK, \Gamma_{d-1}, K+\kappa_{d-1} \big)\Big) \,. \end{equation*} It turns out that, for $h$ small, the above quantity is equal to \begin{equation*} \exp\big( \beta \max(\Gamma_d-dL,\kappa_d) \big) \,. 
\end{equation*} In particular, the time needed to grow a critical droplet into a supercritical droplet is not a limiting factor for the relaxation whenever $h$ is small. \subsection{Strategy of the proof.} \label{sp} The upper bound on the relaxation time, i.e., the second case where $\kappa> \max(\Gamma_d-dL,\kappa_d)$, is proved in section~\ref{relaxa}. The ingredients involved in the upper bound have been known since the works of Neves, Dehghanpour and Schonmann, and this part is considerably easier than the lower bound. The hardest part of theorem~\ref{mainfv} is the lower bound on the relaxation time, i.e., the first case where $\kappa< \max(\Gamma_d-dL,\kappa_d)$. The lower bound is proved in sections~\ref{stc} and~\ref{metare}. Let us explain the strategy of the proof of the lower bound, without stating precisely the definitions and the technical results. Let $L>0$ and let ${\Lambda}_\beta={\Lambda}(\exp(\beta L))$ be a cubic box of side length $\exp(\beta L)$. Let $\kappa>0$ and let $\tau_\beta=\exp(\beta {\kappa})$. We want to prove that it is unlikely that the spin at the origin is equal to $+1$ at time ${\tau_\beta}$ for the process $\smash{({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}})_{t\geq 0}}$. Throughout the proof, we use in a crucial way the notion of space--time cluster. A space--time cluster of the trajectory $(\sigma_{{\Lambda},t},{0\leq t\leq{\tau_\beta}})$ is a maximal connected component of space--time points for the following relation: two space--time points $(x,t)$ and $(y,s)$ are connected if $\sigma_{{\Lambda},t}(x) = \sigma_{{\Lambda},s}(y) = +1$ and either ($s=t$ and $|x-y| \le 1$) or ($x=y$ and $\sigma_{{\Lambda},u}(x)=+1$ for $s\leq u\leq t$). With the space--time clusters, we record the influence of the plus spins throughout the evolution. We can then compare the status of a spin in the dynamics associated with different boundary conditions with the help of the graphical construction (described in section~\ref{sigm}).
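Algorithmically, the relation above is a plain connectivity computation. Here is a minimal sketch (a discrete--time, one dimensional simplification of the definition, with the function name \texttt{stc\_diameters} chosen for illustration) based on a union--find structure:

```python
def stc_diameters(traj):
    """traj[k][x] in {-1, +1}: spin at site x of a one dimensional chain at
    discrete time k.  Returns the sorted spatial diameters of the
    space-time clusters of plus spins."""
    parent = {}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    n, T = len(traj[0]), len(traj)
    points = [(x, k) for k in range(T) for x in range(n) if traj[k][x] == +1]
    for p in points:
        parent[p] = p
    for (x, k) in points:
        # spatial bond: |x - y| = 1 at the same time, both spins plus
        if x + 1 < n and traj[k][x + 1] == +1:
            union((x, k), (x + 1, k))
        # temporal bond: the spin at x stays plus from time k to k + 1
        if k + 1 < T and traj[k + 1][x] == +1:
            union((x, k), (x, k + 1))
    clusters = {}
    for (x, k) in points:
        clusters.setdefault(find((x, k)), []).append(x)
    return sorted(max(xs) - min(xs) for xs in clusters.values())

# A plus spin born at x = 4 dies out, while a cluster born at x = 1 spreads:
traj = [[-1, +1, -1, -1, +1],
        [-1, +1, +1, -1, -1],
        [-1, -1, +1, +1, -1]]
assert stc_diameters(traj) == [0, 2]
```

Note that a spin which flips to minus and back to plus starts a new cluster, since no temporal bond crosses the minus step, in agreement with the continuous--time definition.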
The diameter ${\mathrm{diam }} \,{\cal C}$ of a space--time cluster~${\cal C}$ is the diameter of its spatial projection. We argue as follows. If $\smash{{\sigma}_{{\Lambda}_\beta,{\tau_\beta}}^{-,\mathbf{-1}}}(0)=+1$, then the space--time point $(0,{\tau_\beta})$ belongs to a nonempty space--time cluster, which we denote by ${\cal C}^*$. We then distinguish three cases according to the diameter of ${\cal C}^*$. \newline $\bullet$ If ${\mathrm{diam }} \,{\cal C}^*<\ln\ln\beta$, then ${\cal C}^*$ is also a space--time cluster of the process $\big( \smash{{\sigma}_{{\Lambda}(\ln\beta),t}^{-,\mathbf{-1}}}, 0\leq t\leq{\tau_\beta}\big)$, and the spin at the origin is also equal to $+1$ in this process at time ${\tau_\beta}$. The finite volume estimates obtained for fixed boxes can be readily extended to boxes of side length $\ln\beta$, and we obtain that the probability of the above event is exponentially small if $\kappa<\Gamma_d$, because the entropic contribution to the free energy is negligible with respect to the energy. \newline $\bullet$ If ${\mathrm{diam }} \,{\cal C}^*>\exp(\beta L_d)$ (this case can occur only when $L>L_d$), then we use the main technical estimate of the paper, theorem~\ref{T2}, which states roughly the following: for $\kappa<\kappa_d$, the probability that, in the trajectory $\big( \smash{{\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}}}, 0\leq t\leq{\tau_\beta}\big)$, there exists a space--time cluster of diameter larger than $\exp({{\beta} L_d})$ is a super exponentially small function of $\beta$ (${\hbox{\footnotesize\rm SES}}$ in the following), and it can be neglected. \newline $\bullet$ If $\ln\ln\beta\leq {\mathrm{diam }} \,{\cal C}^*\leq \exp(\beta L_d)$, then ${\cal C}^*$ is also a space--time cluster of the process restricted to the box ${\Lambda}(3\exp(\beta L_d))\cap{\Lambda}_\beta$. A space--time cluster is said to be large if its diameter is larger than or equal to $\ln\ln\beta$.
A box is said to be small if its sides have a length larger than $\ln\ln\beta$ and smaller than $d\ln\beta$. The diameters of the space--time clusters increase with time, and two clusters coalesce when a spin flip joins them. This implies that, if a large space--time cluster is created in the box ${\Lambda}_\beta$, then a large space--time cluster has to be created locally in a small box as well. The number of small boxes included in ${\Lambda}_\beta$ is of order $$\big| {\Lambda}(3\exp(\beta L_d))\cap{\Lambda}_\beta\big|\,=\, \exp\big(\beta d\min(L_d,L)\big) \,.$$ For the dynamics restricted to a small box, we have \begin{multline*} P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is} \\ \text{ created before ${\tau_\beta}$} \end{matrix} \right) \,\leq\,\\ P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created} \\ \text{before nucleation} \end{matrix} \right)\,+\, P \left( \begin{matrix} \text{nucleation occurs} \\ \text{before ${\tau_\beta}$} \end{matrix} \right) \,. \end{multline*} The main result of section \ref{diamostc}, theorem~\ref{totcontrole}, yields that the first term of the right-hand side is ${\hbox{\footnotesize\rm SES}}$. The finite volume estimates in fixed boxes obtained in the previous studies of metastability can be readily extended to small boxes. By lemma~\ref{fugaup}, we have that, up to corrective factors, $$P \left( \begin{matrix} \text{nucleation occurs} \\ \text{before ${\tau_\beta}$} \end{matrix} \right) \,\leq\, \tau_\beta \, \exp(-{\beta}{\Gamma}_{d})\,.
$$ Finally, we have $$\displaylines{ {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big) \,\leq\, \exp\big(\beta d\min(L_d,L)\big) \big( \tau_\beta \,\exp(-{\beta}{\Gamma}_{d}) \,+\,{\hbox{\footnotesize\rm SES}}\big)\cr \,\leq\, \exp\Big(\beta \big(d\min(L_d,L) +\kappa-{\Gamma}_{d}\big) \Big) \,+\,{\hbox{\footnotesize\rm SES}}\cr \,=\, \exp\Big(\beta \big( \kappa- \max({\Gamma}_d-dL_d,{\Gamma}_d-dL) \big) \Big) \,+\,{\hbox{\footnotesize\rm SES}} \, }$$ and the desired result follows easily. From this quick sketch of proof, we see that the most difficult intermediate results are theorems~\ref{totcontrole} and~\ref{T2}. The remainder of the paper is mainly devoted to the proof of these results. In section~\ref{metro}, we consider a general Metropolis dynamics on a finite state space, we recall the formulas for the law of exit in continuous time and we introduce the notions of cycle and cycle compounds in this context. Section~\ref{feat} is devoted to the study of some specific features of the cycle compounds of the Ising model. In section~\ref{energyestmates}, we state several discrete isoperimetric results from~\cite{N2,N1,AC} and the fundamental estimate for the nucleation time in a finite box. Apart from the notion of cycle compounds, the definitions and the results presented in sections~\ref{metro}, \ref{feat} and~\ref{energyestmates} come from the previous literature on metastability, with some rewriting and adaptation to fit the continuous--time framework and our specific $n\pm$ boundary conditions. The main technical contributions of this work are presented in sections~\ref{stc} and~\ref{metare}. In section~\ref{stc}, we prove the key estimate on the diameters of the space--time clusters (theorem~\ref{totcontrole}). Section~\ref{metare} is devoted to the proof of theorem~\ref{T2}. The proof of the lower bound on the relaxation time is completed in section~\ref{concl}. 
The final section~\ref{relaxa} contains the proof of the upper bound on the relaxation time. \section{The Metropolis dynamics.} \label{metro} A very efficient tool to describe the metastable behavior of a process in the low temperature regime is a hierarchical decomposition of the state space known as the cycle decomposition. In the context of a Markov chain with finite state space evolving under a Metropolis dynamics, the cycles can be defined geometrically with the help of the energy landscape. Our context of infinite volume is much more complicated, but since the system is attractive, we will end up with some local problems that we handle with the finite volume techniques. We start by reviewing these techniques. Here we recall some basic facts about the cycle decomposition. For a complete review we refer to \cite{S4,OS3,OS2,CaCe,OS1,OV}. Since we are working here with a continuous--time process defined with the help of transition rates, as opposed to a discrete--time Markov chain defined with transition probabilities, we feel that it is worthwhile to present the exact formulas giving the law of exit of an arbitrary subset in this slightly different framework. This is the purpose of section~\ref{lawexit}. In section~\ref{metrosec}, we define the Metropolis dynamics and we show how to apply the formulas of section~\ref{lawexit} in this context. In section~\ref{cycles}, we recall the definitions of a cycle, the communication energy, the height of a set, its bottom, its depth and its boundary. We introduce also an additional concept, called cycle compound, which turns out to be useful when analyzing the energy landscape of the Ising model. Apart from the notion of cycle compounds, the definitions and the results presented in this section come from the previous literature on metastability and simulated annealing, they are simply adapted to the continuous--time framework. 
\subsection{Law of exit.} \label{lawexit} We will not derive in detail all the results used in this paper concerning the behavior of a Markov process with exponentially vanishing transition rates, because the proofs are essentially the same as in the discrete--time setting. These proofs can be found in the book of Freidlin and Wentzell (\cite{FW}, chapter~$6$, section~$3$), or in the lecture notes of Catoni (\cite{Catoni}, section~$3$). However, for the sake of clarity, we present the two basic formulas in continuous time giving the law of the exit from an arbitrary set. Let ${\cal X}$ be a finite state space. Let $c:{\cal X}\times{\cal X}\to{\mathbb R}$ be a matrix of transition rates on~${\cal X}$, that is, $$\displaylines{ \forall x,y\in{\cal X}\,,\quad x\neq y\,,\qquad c(x,y)\geq 0\,,\cr \forall x\in{\cal X}\qquad \sum_{y\in{\cal X}}c(x,y)\,=\,0\,.}$$ We consider the continuous--time homogeneous Markov process $(X_t)_{t\geq 0}$ on~${\cal X}$ whose infinitesimal generator is $$ \forall f:{\cal X} \to {\mathbb R} \ \ \ \ \ (Lf)(x)\,=\, \sum_{y \in {\cal X}} c(x,y) (f(y)-f(x))\,. $$ For~$C$ an arbitrary subset of~${\cal X}$, we define the time~$\tau(C)$ of exit from~$C$ by $$\tau(C)\,=\,\inf\{\,t\geq 0:X_t\not\in C\,\}\,.$$ \noindent The next lemmas provide useful formulas for the laws of the exit time and exit point for an arbitrary subset of~${\cal X}$. These formulas are rational fractions in the coefficients of the matrix of transition rates, whose numerators and denominators are most conveniently written as sums over particular types of graphs.
\definition{(the graphs~$G(W)$)} \noindent Let~$W$ be an arbitrary non--empty subset of~${\cal X}$.\newline An oriented graph on~${\cal X}$ is called a~$W$--graph if and only if\newline \indent$\bullet$\quad there is no arrow starting from a point of~$W$;\newline \indent$\bullet$\quad each point of~$W^c$ is the initial point of exactly one arrow;\newline \indent$\bullet$\quad for each point~$x$ in~$W^c$, there exists a path in the graph leading from~$x$ to~$W$. \newline The set of all $W$--graphs is denoted by~$G(W)$. \enddefinition If the first two conditions are fulfilled, then the third condition above is equivalent to\newline \indent$\bullet$\quad there is no cycle in the graph. \definition{(the graphs~$G_{x,y}(W)$)} \noindent Let~$W$ be an arbitrary non--empty subset of~${\cal X}$, let~$x$ belong to~${\cal X}$ and $y$ to~$W$. If~$x$ belongs to~$W^c$, then the set~$G_{x,y}(W)$ is the set of all oriented graphs on~${\cal X}$ such that \newline \indent$\bullet$\quad there is no arrow starting from a point of~$W$;\newline \indent$\bullet$\quad each point of~$W^c$ is the initial point of exactly one arrow;\newline \indent$\bullet$\quad for each point~$z$ in~$W^c$, there exists a path in the graph leading from~$z$ to~$W$;\newline \indent$\bullet$\quad there exists a path in the graph leading from~$x$ to~$y$. \newline More concisely, they are the graphs of~$G(W)$ which contain a path leading from~$x$ to~$y$. \newline If~$x$ belongs to~$W$, then the set~$G_{x,y}(W)$ is empty if~$x\neq y$ and is equal to~$G(W)$ if~$x=y$. \enddefinition The graphs in~$G_{x,y}(W)$ have no cycles. For any~$x$ in~${\cal X}$ and~$y$ in~$W$, the set~$G_{x,y}(W)$ is included in~$G(W)$. \definition{(the graphs~$G(x\not\rightarrow W)$)} \noindent Let~$W$ be an arbitrary non--empty subset of~${\cal X}$ and let~$x$ be a point of~${\cal X}$.
\newline If~$x$ belongs to~$W$ then the set~$G(x\not\rightarrow W)$ is empty.\newline If~$x$ belongs to~$W^c$ then the set~$G(x\not\rightarrow W)$ is the set of all oriented graphs on~${\cal X}$ such that \newline \indent$\bullet$\quad there is no arrow starting from a point of~$W$;\newline \indent$\bullet$\quad each point of~$W^c$ except one, say~$y$, is the initial point of exactly one arrow;\newline \indent$\bullet$\quad there is no cycle in the graph;\newline \indent$\bullet$\quad there is no path in the graph leading from~$x$ to~$W$. \medskip \noindent The third condition (no cycle) is equivalent to\newline \indent$\bullet$\quad for each~$z$ in~$W^c\setminus\{y\}$, there is a path in the graph leading from~$z$ to~$W\cup\{y\}$. \enddefinition \bl{aaa} Let~$W$ be an arbitrary non--empty subset of~${\cal X}$ and let~$x$ be a point of~${\cal X}$. The set~$G(x\not\rightarrow W)$ is the union of all the sets $G_{x,y}(W\cup\{y\}),\,y\in W^c$. \end{lemma} In the case~$x\in W^c$, $y\in W$, the definitions of~$G_{x,y}(W)$ and~$G(x\not\rightarrow W)$ are those given by Freidlin and Wentzell~\cite{FW}. We have extended these definitions to cover all possible values of~$x$. With our choice for the definition of the time of exit~$\tau(W^c)$ (the first time greater than or equal to zero when the chain is outside~$W^c$), the formulas for the law of~$X_{\tau(W^c)}$ and for the expectation of~$\tau(W^c)$ will remain valid in all cases.
\medskip \noindent For $g$ a graph on~${\cal X}$, we define $$c(g)\,=\!\!\prod_{(x\rightarrow y)\in g} c(x,y).$$ \bl{dfe} (exit point) \noindent For any non--empty subset~$W$ of~${\cal X}$, any~$y$ in~$W$ and~$x$ in~${\cal X}$, $$P(X_{\tau(W^c)}=y/ X_0=x)\,\,=\,\, \frac{\displaystyle\sum\limits_{g\in G_{x,y}(W)}c(g)}{ \displaystyle\sum\limits_{g\in G(W)} c(g)}.$$ \end{lemma} \bl{ert} (exit time) \noindent For any subset~$W$ of~${\cal X}$ and~$x$ in~${\cal X}$, $$E(\tau(W^c)/ X_0=x)\,\,=\,\, \frac{\displaystyle\sum\limits_{y\in W^c}\quad \displaystyle\sum\limits_{g\in G_{x,y}(W\cup\{y\})}c(g)} {\displaystyle\sum\limits_{g\in G(W)} c(g)} \,\,=\,\, \frac{\displaystyle\sum\limits_{g\in G(x\not\rightarrow W)}c(g)} {\displaystyle\sum\limits_{g\in G(W)} c(g)}.$$ \end{lemma} \noindent For instance, if we apply lemma~\ref{ert} to the case where $W={\cal X}\setminus\{x\}$ and the process starts from~$x\in{\cal X}$, then we get $$E(\tau(\{\,x\,\})/ X_0=x)\,\,=\,\, \frac{1}{\sum_{y\neq x}c(x,y)} \,=\, -\frac{1}{c(x,x)}\,.$$ To prove these formulas in continuous time, we study the quantities involved as functions of the starting point and derive a system of linear equations with the help of the Markov property. For instance, let $$m(x,y)\,=\, P(X_{\tau(W^c)}=y/ X_0=x)\,.$$ Let $T=\tau(\{\,x\,\})$. We have then $$\displaylines{ m(x,y)\,=\, \sum_{z\in W^c} P(X_{\tau(W^c)}=y,\,X_T=z/ X_0=x) + P(X_{T}=y/ X_0=x)\cr \,=\,\!\sum_{z\in W^c}\! P(X_{\tau(W^c)}=y/ X_0=z) \,P(X_{T}=z/ X_0=x) + P(X_{T}=y/ X_0=x) \,.}$$ Let $$p(x,z)\,=\,P(X_{T}=z/ X_0=x)\,=\, \frac{c(x,z)}{\sum_{u\neq x}c(x,u)} \,=\,- \frac{c(x,z)}{c(x,x)}\,.$$ Then $p(\cdot,\cdot)$ is a matrix of transition probabilities, and $$\displaylines{ m(x,y)\,=\, \sum_{z\in W^c} p(x,z) m(z,y) +p(x,y) \,.}$$ This is exactly the same equation as in the case of a discrete--time Markov chain with transition matrix $p(\cdot,\cdot)$. This way the continuous--time formula can be deduced from its discrete--time counterpart.
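\noindent These formulas can be tested numerically on a small chain: the graph--sum expression of lemma~\ref{dfe} must agree with the absorption probabilities of the embedded jump chain $p(\cdot,\cdot)$. A minimal sketch (the state space, the rates and the function names below are ours, chosen only for illustration):

```python
from itertools import product

def endpoint(g, u):
    """Follow the arrows of g from u until a point of W is reached
    (points of W have no outgoing arrow); return None on a cycle."""
    seen = set()
    while u in g:
        if u in seen:
            return None
        seen.add(u)
        u = g[u]
    return u

def exit_law_graphs(c, states, W, x, y):
    """P(X_{tau(W^c)} = y / X_0 = x) as the ratio of graph weights:
    graphs of G_{x,y}(W) on top, graphs of G(W) below."""
    Wc = [u for u in states if u not in W]
    num = den = 0.0
    for targets in product(states, repeat=len(Wc)):
        g = dict(zip(Wc, targets))
        if any(u == g[u] for u in Wc):
            continue
        if any(endpoint(g, u) is None for u in Wc):
            continue                      # not a W-graph
        weight = 1.0
        for u in Wc:
            weight *= c[(u, g[u])]        # c(g) = product of the rates
        den += weight
        if endpoint(g, x) == y:           # g contains a path from x to y
            num += weight
    return num / den

def exit_law_jump_chain(c, states, W, x, y, iters=500):
    """Same probability, from the linear system m = p m + p(., y) of the
    embedded discrete-time chain, solved by fixed-point iteration."""
    Wc = [u for u in states if u not in W]
    tot = {u: sum(c[(u, v)] for v in states if v != u) for u in Wc}
    m = {u: 0.0 for u in Wc}
    for _ in range(iters):
        m = {u: c[(u, y)] / tot[u]
                + sum(c[(u, v)] * m[v] / tot[u] for v in Wc if v != u)
             for u in Wc}
    return m[x]
```

On a four--state space with $W=\{2,3\}$ and arbitrary positive rates, the two computations agree; moreover the graph--sum probabilities automatically add up to $1$ over $y\in W$, since each $W$--graph contains a path from $x$ to exactly one point of~$W$.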
\subsection{The Metropolis dynamics.} \label{metrosec} From now on, we suppose that we deal with a family of continuous--time homogeneous Markov processes $(X_t)_{t\geq 0}$ indexed by a positive parameter~$\beta$ (the inverse temperature). Thus the state space and the transition rates change with~$\beta$. We suppose that these processes evolve under a Metropolis dynamics. More precisely, let $\alpha:{\cal X}\times{\cal X}\to [0,1]$ be a symmetric irreducible transition kernel on~${\cal X}$, that is $\alpha(x,y)=\alpha(y,x)$ for $x,y\in{\cal X}$ and $$\displaylines{ \forall y,z\in {\cal X} \qquad\exists x_0,x_1,\ldots,x_r\qquad x_0=y,\quad x_r=z,\hfill\cr \hfill \alpha(x_0,x_1)\times\cdots\times\alpha(x_{r-1},x_r)\,>\,0.}$$ Let $H:{\cal X}\to{\mathbb R}$ be an energy defined on~${\cal X}$. We suppose that the transition rates $c(x,y)$ are given by $$\displaylines{\forall x,y\in{\cal X} \qquad c(x,y)\,=\,\alpha(x,y) \exp \tonda{-{\beta} \max\big(0,H(y)-H(x)\big)}\,.}$$ The irreducibility hypothesis ensures the existence of a unique invariant probability measure~$\nu$ for the Markov process~$(X_t)_{t\geq 0}$. We have then, for any~$x,y\in{\cal X}$ and $t\geq 0$, $$\nu(x)P(X_t=y/X_0=x)\,\leq\, \sum_{z\in{\cal X}}\nu(z)P(X_t=y/X_0=z)\,=\, \nu(y)\,.$$ In the case where $\alpha(x,y)\in\{\,0,1\,\}$ for $x,y\in{\cal X}$, the invariant measure~$\nu$ is the Gibbs distribution associated to the Hamiltonian~$H$ at inverse temperature~$\beta$, and we have $$\forall x,y\in{\cal X}\quad\forall t\geq 0 \qquad P(X_t=y/X_0=x)\,\leq\, \exp\big( -{\beta} (H(y)-H(x))\big)\,.$$ We will send $\beta$ to $\infty$ and we seek asymptotic estimates on the law of exit from a subset of~${\cal X}$. The exact formulas given in the previous section can be exploited when the cardinality of the space ${\cal X}$ and the degree of the communication graph are not too large, so that the number of terms in the sums is negligible on the exponential scale.
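\noindent The invariance of the Gibbs distribution can be checked through the detailed balance relation $\nu(x)c(x,y)=\nu(y)c(y,x)$: when $\alpha$ is symmetric, both sides are equal to $\alpha(x,y)\exp(-\beta\max(H(x),H(y)))$, which is symmetric in $x$ and $y$. A minimal numerical illustration (the names and the values are ours):

```python
import math
import random

def metropolis_rate(H_x, H_y, beta, alpha_xy=1.0):
    """c(x,y) = alpha(x,y) * exp(-beta * max(0, H(y) - H(x)))."""
    return alpha_xy * math.exp(-beta * max(0.0, H_y - H_x))

# Detailed balance with nu(x) proportional to exp(-beta * H(x)):
# both sides below equal exp(-beta * max(H(x), H(y))).
random.seed(1)
beta = 2.5
for _ in range(1000):
    H_x, H_y = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    lhs = math.exp(-beta * H_x) * metropolis_rate(H_x, H_y, beta)
    rhs = math.exp(-beta * H_y) * metropolis_rate(H_y, H_x, beta)
    assert abs(lhs - rhs) <= 1e-12 * max(lhs, rhs)
```

Downhill moves occur at the bare rate $\alpha(x,y)$, while uphill moves are damped by the factor $\exp(-\beta(H(y)-H(x)))$, which vanishes exponentially fast as $\beta\to\infty$.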
More precisely, let $\deg(\alpha)$ be the degree of the communication kernel $\alpha$, i.e., $$\deg(\alpha)\,=\,\max_{x\in{\cal X}}\,\big|\, \{\,y\in{\cal X}:\alpha(x,y)>0\,\}\,\big|\,.$$ We suppose that $\alpha(x,y)\in\{\,0,1\,\}$ for $x,y\in{\cal X}$ and that $$\lim_{\beta\to\infty} \frac{1}{\beta} {|{\cal X}|\,\ln \deg(\alpha)} \,=\,0\,.$$ Under this hypothesis, for any subset~$W$ of ${\cal X}$, the number of graphs in $G(W)$ is bounded by $$\big|\,G(W)\,\big|\,\leq\, {\deg(\alpha)}^{|{\cal X}|}\,=\,\exp o(\beta)\,.$$ From lemma~\ref{dfe}, we have then for a subset~$W$ of~${\cal X}$, $y$ in~$W$ and~$x$ in~${\cal X}$, $$ {\deg(\alpha)}^{-|{\cal X}|} \frac{c(g^*_{x,y})}{ c(g^*_{W})} \,\leq\, P(X_{\tau(W^c)}=y/ X_0=x)\,\leq \, {\deg(\alpha)}^{|{\cal X}|} \frac{c(g^*_{x,y})}{c(g^*_{W})} \,,$$ where the graphs $g^*_{x,y}$ and $g^*_{W}$ are chosen so that $$\displaylines{{c(g^*_{x,y})} \,=\, \max\,\big\{\, c(g) :g\in G_{x,y}(W)\,\big\}\,,\cr {c(g^*_{W})} \,=\, \max\,\big\{\, c(g) :g\in G(W)\,\big\}\,.}$$ For $g$ a graph over ${\cal X}$ we set $$V(g)\,=\,\sum_{(x\rightarrow y)\in g} \max\big(0,H(y)-H(x)\big)\,$$ so that $c(g)=\exp(-\beta V(g))$. The previous inequalities yield then $$\displaylines{ \lim_{\beta\to\infty} \frac{1}{\beta} \,\ln P(X_{\tau(W^c)}=y/ X_0=x)\,= \,\hfill \cr \min\,\big\{\, V(g) :g\in G_{x,y}(W)\,\big\} \,-\, \min\,\big\{\, V(g) :g\in G(W)\,\big\} \,.}$$ Similarly, from lemma~\ref{ert}, we obtain that $$\displaylines{ \lim_{\beta\to\infty} \frac{1}{\beta} \,\ln E({\tau(W^c)}/ X_0=x)\,= \,\hfill \cr \min\,\big\{\, V(g) :{g\in G( x\not\rightarrow W )} \,\big\} \,-\, \min\,\big\{\, V(g) :g\in G(W )\,\big\} \,.}$$ \subsection{Cycles and cycle compounds.\label{cycles}} We say that two states $x,y$ communicate if either $x=y$ or $\alpha(x,y)>0$. A {path} ${\omega}$ is a sequence ${\omega}=({\omega}_1,\ldots,{\omega}_n)$ of states such that each state of the sequence communicates with its successor. 
A set ${\cal A}$ is said to be connected if any two states in ${\cal A}$ can be joined by a path in ${\cal A}$, i.e., \begin{multline*} \forall x,y\in{\cal A}\quad\exists\,{\omega}_1,\dots,{\omega}_n\in{\cal A}\quad {\omega}_1=x,\,{\omega}_n=y,\quad\\ \alpha({\omega}_1,{\omega}_2)\cdots \alpha({\omega}_{n-1},{\omega}_n) >0 \,. \end{multline*} We define the {communication energy} between two states $x, y$ by $$\F(x,y)=\min\,\big\{ \max_{z \in {\omega}}\, H(z): {\omega}\text{ path from $x$ to $y$}\big\}\,.$$ The {communication energy} between two sets of states ${\cal A},{\cal B}$ is $$\F({\cal A},{\cal B})=\min\,\big\{\, \F(x,y): {x\in {\cal A},y\in{\cal B}}\,\big\} \,.$$ The height of a set of states ${\cal A}$ is \newcommand\height{{\operatorname{height}}} \newcommand\depth{\operatorname{depth}} \newcommand\bottom{\operatorname{bottom}} $$\height({\cal A})\,=\, \max\,\big\{\, E(x,y):x,y\in{\cal A},\,\,x\neq y\,\big\}\,.$$ \bd{cyclesb} A cycle is a connected set of states ${\cal A}$ such that $$ \height({\cal A}) \,< \, \F({\cal A},{\cal X}\setminus{\cal A}) \,.$$ A cycle compound is a connected set of states $\overline{\cal A}$ such that $$ \height(\overline{\cal A}) \,\leq \, \F(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) \,.$$ \end{definition} Let us rewrite these definitions directly in terms of the energy $H$. For any set ${\cal A}$, we have $$\F({\cal A},{\cal X}\setminus{\cal A})=\min\,\big\{\, \max(H(x),H(y)): {x\in {\cal A},\,y\not\in{\cal A}},\,\alpha(x,y)>0\,\big\} \,.$$ Notice that the height of a singleton is $-\infty$. Moreover, if ${\cal A}$ is a connected set having at least two elements, then $$\height({\cal A})\,=\, \max\,\big\{\, H(x): x\in{\cal A}\,\big\}\,.$$ Thus a cycle is either a singleton or a connected set of states ${\cal A}$ such that $$ \forall x,y\in{\cal A}\quad \forall z\not\in{\cal A}\quad \alpha(y,z)>0 \quad\Longrightarrow \quad H(x) \,< \, \max(H(y),H(z)) \,.$$ A cycle compound is either a singleton or a connected set of states $\overline{\cal A}$ such that $$ \forall x,y\in\overline{\cal A}\quad \forall z\not\in\overline{\cal A}\quad \alpha(y,z)>0 \quad\Longrightarrow \quad H(x) \,\leq \, \max(H(y),H(z)) \,.$$ Although a
cycle and a cycle compound have almost the same definitions, the structure of these sets is quite different. Indeed, the communication under a fixed height $\lambda$ is an equivalence relation and the cycles are the equivalence classes of this relation. In particular, two cycles are either disjoint or included one in the other. With our definition, any singleton is also a cycle, of height~$-\infty$. \bp{union} Let $n\geq 2$ and let ${\cal A}_1, \ldots , {\cal A}_n$ be $n$ cycles such that $$ E({\cal A}_1,{\cal X}\setminus{\cal A}_1)\,=\,\cdots\,=\, E({\cal A}_n,{\cal X}\setminus{\cal A}_n)\,.$$ If their union $$\overline{{\cal A}}\,=\, \bigcup_{i=1}^n {\cal A}_i \,$$ is connected, then it is a cycle compound. \end{proposition} \begin{proof} If $\overline{{\cal A}}$ is a singleton, then there is nothing to prove. Let us suppose that $\overline{{\cal A}}$ has at least two elements.
Since $\overline{{\cal A}}$ is connected, we have $$\height(\overline{{\cal A}})\,=\, \max\,\big\{\, H(x): x\in\overline{{\cal A}}\,\big\}\,.$$ Moreover, $$E(\overline{{\cal A}},{\cal X}\setminus\overline{{\cal A}})\,\geq\, \min_{1\leq i\leq n}\,E({\cal A}_i,{\cal X}\setminus{\cal A}_i)\,=\, \max_{1\leq i\leq n}\,E({\cal A}_i,{\cal X}\setminus{\cal A}_i)\,. $$ For $i\in\{\,1,\dots,n\,\}$, since ${\cal A}_i$ is a cycle, we have $$E({{\cal A}_i},{\cal X}\setminus{{\cal A}_i})\,\geq\, \max\,\big\{\, H(x): x\in{{\cal A}_i}\,\big\}\,,$$ whence $$E(\overline{{\cal A}},{\cal X}\setminus\overline{{\cal A}})\,\geq\, \max_{1\leq i\leq n}\, \max\,\big\{\, H(x): x\in{{\cal A}_i}\,\big\}\,=\, \height(\overline{{\cal A}}) \,,$$ so that $\overline{{\cal A}}$ is a
cycle compound. \end{proof} \noindent Thus two distinct cycle compounds might have a nonempty intersection. Let us introduce a few more definitions. The bottom of a set ${\cal G}$ of states is $$\bottom({\cal G}) \,=\,\big\{\,x\in{\cal G}: H(x)\,=\,\min_{y\in{\cal G}}H(y)\,\big\}\,.$$ It is the set of the minimizers of the energy in ${\cal G}$. We denote the energy of the states in $\bottom({\cal G})$ by $H(\bottom({\cal G}))$. The depth of a set ${\cal G}$ is $$\depth({\cal G})\,=\,E({\cal G},{\cal X}\setminus{\cal G})-H(\bottom({\cal G}))\,.$$ The exterior boundary of a subset ${\cal G}$ of ${\cal X}$ is the set $$\partial {\cal G}=\sgraffa{x \not \in {\cal G} : \ \exists y \in {\cal G} \quad \alpha(y,x)>0}\,.$$ Recall that, for $g$ a graph over ${\cal X}$, $$V(g)\,=\,\sum_{(x\rightarrow y)\in g} \max\big(0,H(y)-H(x)\big)\,.$$ The following results are far from obvious; they are consequences of the formulas of section~\ref{lawexit} and of the analysis of the cycle decomposition \cite{S4,OS3,OS2,CaCe,OS1}. \bt{exitcost} Let $\overline{\cal A}$ be a cycle compound, let $x\in\overline{\cal A}$ and let $y\in\partial\overline{\cal A}$. We have the identities $$\displaylines{ \min\,\big\{\, V(g) :g\in G_{x,y}({\cal X}\setminus\overline{\cal A})\,\big\} \,-\, \min\,\big\{\, V(g) :g\in G({\cal X}\setminus\overline{\cal A})\,\big\} \cr \,=\, \max\big(0,H(y)- E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) \big) \,,\cr \min\,\big\{\, V(g) :{g\in G(x\not\rightarrow {\cal X}\setminus\overline{\cal A})} \,\big\} \,-\, \min\,\big\{\, V(g) :g\in G({\cal X}\setminus\overline{\cal A})\,\big\} \cr \,=\, E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) -H(\bottom(\overline{\cal A})) \,.}$$ \end{theorem} Substituting the above identities into the formulas of lemmas~\ref{dfe} and \ref{ert}, we obtain the following estimates. \bc{exitcom} Let $\overline{\cal A}$ be a cycle compound, let $x\in\overline{\cal A}$ and let $y\in\partial\overline{\cal A}$.
We have $$\displaylines{ {\deg(\alpha)}^{-|{\cal X}|} \,\leq\, \frac{P\big( X_{\tau(\overline{\cal A})}=y/X_0=x\big)} { \exp\big(-\beta \max\big(0,H(y)- E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) \big)\big)} \,\leq\, {\deg(\alpha)}^{|{\cal X}|}\,,\cr {\deg(\alpha)}^{-|{\cal X}|} \,\leq\, \frac{ E(\tau(\overline{\cal A})/ X_0=x)}{ \exp\big(\beta \depth(\overline{\cal A}) \big) } \,\leq\, {\deg(\alpha)}^{|{\cal X}|} \,.}$$ \end{corollary} Let ${\cal Y}$ be a subset of ${\cal X}$. A cycle ${\cal A}$ (respectively a cycle compound $\overline{\cal A}$) included in ${\cal Y}$ is said to be maximal if there is no cycle ${\cal A}'$ (respectively no cycle compound $\smash{\overline{\cal A}'}$) included in ${\cal Y}$ such that ${\cal A}\subsetneq{\cal A}'$ (respectively $\overline{\cal A}\subsetneq\smash{\overline{\cal A}'}$). \bl{disjoint} Two maximal cycle compounds in ${\cal Y}$ are either equal or disjoint. \end{lemma} \begin{proof} Let $\overline{\cal A}_1,\overline{\cal A}_2$ be two maximal cycle compounds in ${\cal Y}$ which are not disjoint. Suppose first that $$E(\overline{\cal A}_1,{\cal X}\setminus\overline{\cal A}_1) \,=\, E(\overline{\cal A}_2,{\cal X}\setminus\overline{\cal A}_2)\,.$$ Then $\overline{\cal A}_1\cup\overline{\cal A}_2$ is still a cycle compound included in ${\cal Y}$. By maximality, we must have $\overline{\cal A}_1=\overline{\cal A}_2$. Suppose next that $$E(\overline{\cal A}_1,{\cal X}\setminus\overline{\cal A}_1) \,<\, E(\overline{\cal A}_2,{\cal X}\setminus\overline{\cal A}_2)\,.$$ Let $x$ be a point of $\overline{\cal A}_1\cap\overline{\cal A}_2$.
If $\overline{\cal A}_1\setminus\overline{\cal A}_2\neq\varnothing$, then $$E(x,{\cal X}\setminus\overline{\cal A}_2) \,\leq\, \height(\overline{\cal A}_1)\,\leq\, E(\overline{\cal A}_1,{\cal X}\setminus\overline{\cal A}_1)\,,$$ which is absurd, since $x\in\overline{\cal A}_2$ implies $E(x,{\cal X}\setminus\overline{\cal A}_2)\,\geq\, E(\overline{\cal A}_2,{\cal X}\setminus\overline{\cal A}_2)$. Thus $\overline{\cal A}_1\subset\overline{\cal A}_2$, and by maximality, $\overline{\cal A}_1=\overline{\cal A}_2$. \end{proof} We denote by ${\cal M}({\cal Y})$ the partition of ${\cal Y}$ into maximal cycles, i.e., $${\cal M}({\cal Y})\,=\,\big\{\, {{\cal A}}: {{\cal A}} \text{ is a maximal cycle included in ${\cal Y}$}\,\big\} \,,$$ and by $\overline{{\cal M}}({\cal Y})$ the partition of ${\cal Y}$ into maximal cycle compounds, i.e., $$\overline{{\cal M}}({\cal Y})\,=\,\big\{\, \overline{{\cal A}}: \overline{{\cal A}} \text{ is a maximal cycle compound included in ${\cal Y}$} \,\big\}\,. $$ \bl{exitval} Let $\overline{\cal A}$ be a maximal cycle compound included in a subset~${\cal D}$ of~${\cal X}$ and let $x$ belong to $\partial\overline{\cal A}\cap {\cal D}$. Then $H(x)$ is not equal to $E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$. If $H(x)< E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$, then we have $E(x,{\cal X}\setminus{\cal D})< E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$.
\end{lemma} \begin{proof} If there were a state~$x\in\partial\overline{\cal A}\cap{\cal D}$ such that $H(x)=E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$, then the set $\overline{\cal A}\cup\{\,x\,\}$ would be a cycle compound included in ${\cal D}$, which would be strictly larger than~$\overline{\cal A}$, and this would contradict the maximality of~$\overline{\cal A}$. Similarly, for the second assertion, suppose that $H(x)< E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$ and let $${\cal A}'\,=\,\big\{\,y\in{\cal X}:E(x,y)< E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) \,\big\}\,.$$ The set ${\cal A}'$ is a cycle of height strictly less than $E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$ and such that $E({\cal A}',{\cal X}\setminus{\cal A}')\geq E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})$. Moreover $$\height (\overline{\cal A}\cup{\cal A}')\,\leq\, E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) \,\leq\, E(\overline{\cal A}\cup{\cal A}',{\cal X}\setminus(\overline{\cal A}\cup{\cal A}'))\,.$$ Thus $\overline{\cal A}\cup{\cal A}'$ is still a cycle compound. Because of the maximality of $\overline{\cal A}$, this cycle compound is not included in ${\cal D}$.
Therefore $\smash{{\cal A}'}$ intersects ${\cal X}\setminus{\cal D}$ and $E(x,{\cal X}\setminus{\cal D})< E(\overline{\cal A},{\cal X}\setminus\overline{\cal A}) $. \end{proof} \section{The stochastic Ising model} \label{geom} \label{feat} The material presented in this section is standard and classical. In section~\ref{isisec}, we define the Hamiltonian of the Ising model with various boundary conditions and we show the benefit of working with an irrational magnetic field. In section~\ref{sigm}, we define the stochastic Ising model and we recall the graphical construction, which provides a coupling between the various dynamics associated to different boundary conditions and parameters. \subsection{The Hamiltonian of the Ising model} \label{isisec} With each configuration $\sigma\in\smash{\{-1,+1\}^{{\mathbb Z}^d}}$, we associate a formal Hamiltonian~$H$ defined by \be{ising} H(\sigma ) \,=\, - \frac{1}{2}\sum_{\genfrac{}{}{0pt}{1}{\{x,y\} \subset {\mathbb Z}^d}{|x-y|=1}} \sigma(x) \sigma(y) - \frac h 2 \sum_{x \in {\mathbb Z}^d} \sigma(x). \end{eqnarray*} The value $\sigma(x)$ is the spin at site $x\in {\mathbb Z}^d$ in the configuration $\sigma$. Notice that the first sum runs over the unordered pairs $x,y$ of nearest neighbor sites of ${\mathbb Z}^d$. We denote by $\sigma^x$ the configuration obtained from $\sigma$ by flipping the spin at site $x$.
The variation of energy caused by flipping the spin at site~$x$ is $$H(\sigma^x)-H(\sigma)\,=\, \sigma(x) \Big(\sum_{\genfrac{}{}{0pt}{1}{y \in {\mathbb Z}^d}{|x-y|=1}} \sigma(y) + h \Big)\,.$$ Given a box ${\Lambda} $ included in ${\mathbb Z}^d$ and a boundary condition ${\zeta}\in \smash{\{\,-1,+1\,\}^{{\mathbb Z}^d \setminus {\Lambda}}}$, we define a function $H_{{\Lambda}}^{{\zeta}} : \{\,-1,+1\,\}^{\Lambda} \longrightarrow {\mathbb R}$ by \be{i2} H_{{\Lambda}}^{{\zeta}} (\sigma ) \,=\, - \frac{1}{2}\kern-3pt \sum_{\genfrac{}{}{0pt}{1}{\{x,y\} \subset {\Lambda}}{|x-y|=1}} \kern-4pt\sigma(x) \sigma(y) - \frac h 2 \sum_{x \in {\Lambda}} \sigma(x) - \frac{1}{2}\kern-4pt \sum_{ \genfrac{}{}{0pt}{1}{x \in {\Lambda},y\not \in {\Lambda}}{|x-y|=1}} \kern-3pt \sigma(x) {\zeta}(y)+c_{\Lambda}^{\zeta}\, \end{eqnarray*} where $c_{\Lambda}^{\zeta}$ is a constant depending on ${\Lambda}$ and ${\zeta}$. Since $h$ is positive, for sufficiently large boxes, the configuration with all pluses, denoted by $\mathbf{+1}$, is the absolute minimum of the energy for any boundary condition and it has the maximal Gibbs probability. The configuration with all minuses, denoted by $\mathbf{-1}$, will play the role of the deepest local minimum in our system, representing the metastable state. We choose the constant $c_{\Lambda}^{\zeta}$ so that $$H_{{\Lambda}}^{{\zeta}} (\mathbf{-1} )\,=\,0 \,.$$ Sometimes we remove ${\Lambda}$ and ${\zeta}$ from the notation to lighten the text, writing simply $H$ instead of $H_{{\Lambda}}^{{\zeta}}$. The communication kernel~$\alpha$ on $\{\,-1,+1\,\}^{\Lambda}$ is defined by $$\forall\sigma\in \{\,-1,+1\,\}^{\Lambda}\quad\forall x\in{\Lambda}\qquad \alpha(\sigma,\sigma^x)=1$$ and $\alpha(\sigma,\eta)=0$ if $\sigma$ and $\eta$ differ in two or more sites.
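\noindent The one--spin--flip formula above can be cross--checked against a direct evaluation of $H_{\Lambda}^{\zeta}$, say with minus boundary conditions. A minimal sketch in a small two dimensional box (the normalization $H_{\Lambda}^{\zeta}(\mathbf{-1})=0$ is the one chosen in the text; the function names and the numerical values are ours):

```python
def neighbors(x):
    """The 2d nearest neighbors of the site x in Z^d."""
    d = len(x)
    for i in range(d):
        for s in (-1, 1):
            yield tuple(x[j] + (s if j == i else 0) for j in range(d))

def raw_energy(sigma, h):
    """The Hamiltonian H_Lambda^zeta with minus boundary conditions,
    before normalization; sigma maps the sites of the box to +1/-1."""
    e = 0.0
    for x, s_x in sigma.items():
        e -= 0.5 * h * s_x
        for y in neighbors(x):
            if y in sigma:
                e -= 0.25 * s_x * sigma[y]   # each internal pair is seen twice
            else:
                e += 0.5 * s_x               # boundary term, zeta(y) = -1
    return e

def energy(sigma, h):
    """H_Lambda^zeta normalized so that the all-minus configuration is 0."""
    return raw_energy(sigma, h) - raw_energy({x: -1 for x in sigma}, h)

def flip_cost(sigma, x, h):
    """sigma(x) * (sum of the neighboring spins + h); outside spins are -1."""
    return sigma[x] * (sum(sigma.get(y, -1) for y in neighbors(x)) + h)
```

One recovers for instance that a single plus spin in a large box has energy $2d-h$, and that, for every configuration and every site, the energy variation of a flip coincides with the local formula.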
The space $\smash{\{\,-1,+1\,\}^{\Lambda}}$ is now endowed with a communication kernel~$\alpha$ and an energy $H_{{\Lambda}}^{{\zeta}}$; we define on it an associated Metropolis dynamics as in section~\ref{metrosec}. We shall identify a configuration of spins with the support of the pluses in it; this way, we think of a configuration as a set, and we can perform the usual set operations on configurations. For instance, we denote by ${\eta} \cup \x$ the configuration in which the set of pluses is the union of the sets of pluses in ${\eta}$ and in $\x$. We call volume of a configuration ${\eta}$ the number of pluses in ${\eta}$ and we denote it by $|{\eta}|$. We call perimeter of a configuration ${\eta}$ the number of the interfaces between the pluses and the minuses in ${\eta}$ and we denote it by $p({\eta})$: \[ p(\eta)\,=\,\big|\big\{\,\{x,y\}:\eta(x)=+1,\,\eta(y)=-1,\,|x-y|=1\, \big\}\big|\,. \] The Hamiltonian of the Ising model can then be rewritten conveniently as $$H({\eta})\,=\,p({\eta})-h|{\eta}|\,.$$ Our analysis of the energy landscape will be based on the assumption that $h$ is an irrational number. This hypothesis simplifies our study in a radical way, because of the following lemma. \bl{irrazionale} Let $h$ be an irrational number. Suppose $\sigma,\eta$ are two configurations such that $\sigma\subset\eta$ and $H(\sigma)=H(\eta)$. Then $\sigma=\eta$. \end{lemma} \begin{proof} Since $h$ is irrational, the knowledge of the energy of a configuration determines in a unique way its perimeter and its volume: if $p_1-hv_1=p_2-hv_2$ with $p_1,p_2,v_1,v_2$ integers, then $p_1-p_2=h(v_1-v_2)$, which forces $v_1=v_2$ (otherwise $h$ would be rational) and $p_1=p_2$. Since $\sigma$ is included in $\eta$ and they have the same volume, they are equal. \end{proof} \noindent In the next section, we build a monotone coupling of the dynamics associated to different magnetic fields~$h$. With the help of this coupling, we will show in section~\ref{rer} that it is sufficient to prove theorem~\ref{mainfv} for irrational values of the magnetic field.
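\noindent Lemma~\ref{irrazionale} can be illustrated concretely: for an irrational field, the energy $H({\eta})=p({\eta})-h|{\eta}|$ determines the pair (perimeter, volume), so two distinct nested configurations never have the same energy. A minimal check on the subsets of a $2\times 2$ box (the names and the choice $h=\sqrt 2$ are ours):

```python
from itertools import combinations
from math import sqrt

def perimeter(eta, d=2):
    """Number of plus-minus interfaces of the configuration eta,
    given as the set of its plus sites (spins outside eta are minus)."""
    p = 0
    for x in eta:
        for i in range(d):
            for s in (-1, 1):
                y = tuple(x[j] + (s if j == i else 0) for j in range(d))
                if y not in eta:
                    p += 1
    return p

def H(eta, h):
    """H(eta) = p(eta) - h * |eta|."""
    return perimeter(eta) - h * len(eta)

# With h irrational, configurations with different (perimeter, volume)
# have different energies: here we check all pairs of subsets of a 2x2 box.
h = sqrt(2)
box = [(i, j) for i in range(2) for j in range(2)]
subsets = [frozenset(c) for k in range(5) for c in combinations(box, k)]
for a in subsets:
    for b in subsets:
        if (perimeter(a), len(a)) != (perimeter(b), len(b)):
            assert abs(H(a, h) - H(b, h)) > 0.01
```

In particular, no two distinct configurations $\sigma\subset\eta$ in this box share the same energy, in accordance with the lemma.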
The main point is that the critical constant~$\kappa_d$ depends continuously on~$h$ (this is proved in lemma~\ref{contga}). We believe that the main features of the cycle structure should persist for rational values of $h$. The assumption that $h$ is irrational (or at least that it does not belong to some countable set) is present in most papers to simplify the structure of the energy landscape, with the only exception of \cite{MNOS}. In dimension 2, for $2/h$ integer, there exists a very complicated cycle compound, consisting of cycles with the same depth that communicate at the same energy level (see \cite{MNOS}). This compound is not contained in the metastable cycle, hence it is compatible with our results. Our analysis is based on the following attractive inequality. \bl{attrineq} For any configurations ${\eta}$, $\x$, we have $$H({\eta}\cap\x)+H({\eta}\cup\x)\, \le\, H({\eta}) + H(\x)\,.$$ \end{lemma} \begin{proof} This inequality can be proved by a direct computation (see theorem~5.1 of~\cite{BAC}). \end{proof} \noindent \subsection{Graphical construction} \label{sigm} The time evolution of the model is given by the Metropolis dynamics: when the system is in the configuration ${\eta}$, the spin at a site $x \in {\Lambda} \subset {\mathbb Z}^d$ flips at rate \be{rateising} c_{{\Lambda},{\beta}}^{{\zeta}}(x,{\eta}) \,=\, \exp \tonda{-{\beta} \max\big(0, H_{{\Lambda}}^{{\zeta}} ({\eta}^x)- H_{{\Lambda}}^{{\zeta}} ({\eta})\big)}, \end{eqnarray*} where the parameter ${\beta}$ is the inverse temperature. A standard construction yields a continuous-time Markov process whose generator is defined by \be{generatore} \forall f:\{-1,+1\}^{{\Lambda}} \to {\mathbb R} \ \ \ \ \ (Lf)({\eta})\,=\, \sum_{x \in {\Lambda}} c_{{\Lambda},{\beta}}^{{\zeta}} (x,{\eta}) (f({\eta}^x)-f({\eta}))\,.
\end{eqnarray*} The process in a $d$ dimensional box $\Lambda$, under magnetic field $h$, with initial condition ${\alpha}$ and boundary condition ${\zeta}$ is denoted by \be{defsigma} ({\sigma}^{{\alpha},{\zeta}}_{\Lambda,t},\,t\geq 0)\,. \end{eqnarray*} To define the process in infinite volume, we consider the weak limit of the previous process as ${\Lambda}$ grows to~${\mathbb Z}^d$. This weak limit does not depend on the sequence of the boundary conditions (see \cite{Sch} for the details). Sometimes we omit $\Lambda$, ${\alpha}$ or ${\zeta}$ from the notation if $\Lambda={\mathbb Z}^d$, ${\alpha}=\mathbf{-1}$, or ${\zeta}=\mathbf{-1}$, respectively. \label{graphi} In order to compare different processes, we use a standard construction, known as the graphical construction, which allows us to define on the same probability space all the processes at a given inverse temperature ${\beta}$, in ${\mathbb Z}^d$ and in any of its finite subsets, with any initial and boundary conditions and any magnetic field $h$. We refer to \cite{Sch} for details. We consider two families of i.i.d. Poisson processes with rate one, associated with the sites in ${\mathbb Z}^d$. For $x \in {\mathbb Z}^d$, we denote by $({\tau}^-_{x,n})_{n\geq 1}$ and by $({\tau}^+_{x,n})_{n\geq 1}$ the arrival times of the two Poisson processes associated to $x$. Notice that, almost surely, these random times are all distinct. With each of these arrival times, we associate uniform random variables $(u^-_{x,n})_{n\geq 1}$, $(u^+_{x,n})_{n\geq 1}$, and we assume that these variables are independent of each other and of the Poisson processes. We introduce next an updating procedure in order to define simultaneously all the processes on this probability space. Let $\Lambda$ be a finite subset of ${\mathbb Z}^d$ and let $x\in\Lambda$. Let $\varepsilon=-1$ or $\varepsilon=+1$, let ${\alpha}$ be an initial configuration and let ${\zeta}$ be a boundary condition.
Let ${\sigma}$ denote the configuration just before time ${\tau}^\varepsilon_{x,n}$. The updating rule at time ${\tau}^\varepsilon_{x,n}$ is the following: \medskip $\bullet$ The spins not at $x$ do not change; \smallskip $\bullet$ If ${\sigma}(x)=-\varepsilon$ and $u^\varepsilon_{x,n}<c_{{\Lambda},{\beta}}^{{\zeta}}(x,{\sigma})$, then the spin at $x$ is reversed. \medskip \noindent If the set ${\Lambda}$ is finite, then the above rules define a Markov process $(\sigma^{{\alpha},{\zeta}}_{{\Lambda},t})_{t\geq 0}$. Whenever ${\Lambda}$ is infinite, one has to be more careful, because there is an infinite number of arrival times in any finite time interval and it is not possible to order them in an increasing sequence. However, because the rates are bounded, changes in the system propagate at a finite speed, and a Markov process can still be defined by taking the limit of finite volume processes (see \cite{Sch,L} for more details). In any case our proofs will involve mainly boxes whose side length is finite, although they might grow with ${\beta}$. From now on, we denote by $P$ and $E$ the probability and expectation with respect to the family of the Poisson processes and the uniform random variables. The graphical construction allows us to take advantage of the monotonicity properties of the rates $c_{{\Lambda},{\beta}}^{{\zeta}}(x,{\sigma})$. For any box ${\Lambda}$ and any configurations ${\alpha} \leq {\alpha}'$, ${\zeta} \leq {\zeta}'$, we have \be{fkg} \forall t\geq 0\qquad {\sigma}^{\alpha,{\zeta}}_{{\Lambda},t} \,\leq\, {\sigma}^{\alpha',{\zeta}'}_{{\Lambda},t} \,. \end{eqnarray*} The process is also nondecreasing as a function of the magnetic field~$h$. \noindent \subsection{Reduction to irrational fields} \label{rer} \noindent We show here how the monotonicity of the process as a function of the magnetic field, together with the continuity of $\Gamma_d$ and $\kappa_d$, allows us to reduce the study to irrational values of the magnetic field.
Suppose that theorem~\ref{mainfv} has been proved for irrational values of the magnetic field. Let $h<h_0$ be a positive rational number and let $\kappa<\max(\Gamma_d-dL,\kappa_d)$. As we will see in lemma~\ref{contga}, the constants~$\Gamma_d$ and $\kappa_d$ depend continuously on~$h$, therefore there exists an irrational number $h'$ such that $h<h'<h_0$ and $$\kappa<\max(\Gamma_d'-dL,\kappa_d')\,,$$ where $\Gamma_d'$ and $\kappa_d'$ are the constants associated to the field $h'$. Theorem~\ref{mainfv} applied to the process $\smash{({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1},h'})_{t\geq 0}}$ associated to the field $h'$ yields \begin{equation*} \lim_{\beta\to\infty}\,\, {\mathbb P}\big({\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1},h'}(0)=1\big)\,=\,0\,. \end{equation*} From the graphical construction, we have $${ {\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1},h}(0) \,\leq\, {\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1},h'}(0)}$$ whence \begin{equation*} \lim_{\beta\to\infty}\,\, {\mathbb P}\big({\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1},h}(0)=1\big)\,=\,0\,. \end{equation*} as desired. The second part of theorem~\ref{mainfv} for rational values of~$h$ is proved similarly. Therefore, it is sufficient to prove theorem \ref{main} for $h$ irrational. For the remainder of the paper, we will assume that it is the case. This will allow us to use the result of lemma \ref{irrazionale} which implies the other results on the energy landscape proven in section~\ref{geom}, in particular lemma~\ref{fondo}. \section{Isoperimetric results.} \label{energyestmates} In this section we report some specific results on the energy landscape of the $d$ dimensional Ising model. In the two dimensional case, a very detailed description can be found in \cite{NS2,NS1}. In three dimensions, the cycle structure is known only near the typical transition paths (see \cite{N2,N1,AC,BAC}). 
In higher dimension, we can compute the communication energy between $\mathbf{-1}$ and $\mathbf{+1}$ by using the results of Neves \cite{N1}, but finer details are still unknown. In section~\ref{sie}, we state a discrete isoperimetric inequality which will be used in the proof of lemma~\ref{control}. In section~\ref{sir}, we define the so--called reference path. Thanks to the isoperimetric results of Neves, we can compute the critical energy~$\Gamma_d$ with the help of the reference path. This is done in section~\ref{sitm}. As a by--product, we prove that the energy $\Gamma_d$ depends continuously on~$h$. In the inductive proof of theorem~\ref{T2}, we work with mixed boundary conditions, called $n\pm$ boundary conditions. In section~\ref{six}, we define the $n\pm$ boundary conditions and we prove the required isoperimetric results in boxes with these boundary conditions. \subsection{An isoperimetric inequality.} \label{sie} A $d$ dimensional polyomino is a set which is the finite union of unit $d$ dimensional cubes. There is a natural correspondence between configurations and polyominoes. To a configuration we associate the polyomino which is the union of the unit cubes centered at the sites having a positive spin. The main difference between configurations and polyominoes is that the polyominoes are defined up to translations. Neves \cite{N1} has obtained a discrete isoperimetric inequality in dimension~$d$, which yields the exact value of \[\min\,\big\{\,\text{perimeter}(c): c\text{ is a $d$ dimensional polyomino of volume $v$} \,\big\}\,,\] where $v\in{\mathbb N}$. This value is a quite complicated function of the volume $v$, which is larger than \[ 2d\big\lfloor v^{1/d}\big\rfloor^{d-1}\,. \] We derive from this the following simplified isoperimetric inequality. \noindent {\bf Simplified isoperimetric inequality}. For a $d$ dimensional polyomino $c$, \[\text{perimeter}(c) \,\geq\, 2d\big(\text{volume}(c)\big)^{(d-1)/d}\,. 
\] \begin{proof} We rely on the inequality stated above and we perform a simple scaling with a factor $N=k^d$, where $k$ is a positive integer. If $c$ is a polyomino of volume $v$, then $kc$ is a polyomino of volume $Nv$ and $N^{-1/d}(kc)=c$, whence \begin{multline*} \min\,\big\{\,\text{perimeter}(c): c\text{ $d$ dimensional polyomino of volume $v$} \,\big\} \\ \,\geq\, \min\,\Big\{\,\text{perimeter}\big( N^{-1/d}c\big): c\text{ polyomino of volume $Nv$} \,\Big\}\\ \,=\, N^{\textstyle\frac{1-d}{d}} \min\,\big\{\,\text{perimeter}( c): c\text{ polyomino of volume $Nv$} \,\big\}\\ \,\geq\, N^{\textstyle\frac{1-d}{d}} 2d\big\lfloor (Nv)^{1/d}\big\rfloor^{d-1} \,=\, 2d\,k^{1-d}\big\lfloor k\, v^{1/d}\big\rfloor^{d-1}\,.\hfil \end{multline*} Sending $k$ to $\infty$, we obtain the desired inequality. \end{proof} \noindent If we had applied the classical isoperimetric inequality in $\mathbb{R}^d$, then we would have obtained an inequality with a different constant, namely the perimeter of the unit ball instead of $2d$. The constant $2d$ is sharp: indeed, there is equality when $c$ is a $d$ dimensional cube whose side length is an integer. We believe that, for polyominoes of volume equal to $l^d$ where $l$ is an integer, the cube is the only shape realizing the equality, yet we were unable to locate a proof of this statement in the literature (apart from the three dimensional case \cite{BAC}). We will need the simplified isoperimetric inequality with the correct constant in the main inductive proof. \subsection{The reference path.} \label{sir} Let $R$ be a parallelepiped in ${\mathbb Z}^d$ whose vertices belong to ${\mathbb Z}^d+(1/2,\dots,1/2)$ and whose sides are parallel to the axes. A face of $R$ consists of the set of the sites of ${\mathbb Z}^d$ which are at distance $1/2$ from the parallelepiped and which are contained in a given single hyperplane. With a slight abuse of terminology, we say that a configuration $\eta$ is obtained by attaching a $d-1$ dimensional configuration $\x$ to a face of a $d$ dimensional parallelepiped ${\zeta}$ if ${\eta}={\zeta} \cup \x$ and $\x$ is contained in a face of ${\zeta}$.
It is immediate to see that in this case \be{eqfaccia} H_{{\mathbb Z}^d}({\zeta} \cup \x)\,=\, H_{{\mathbb Z}^d} ({\zeta})\,+\, H_{{\mathbb Z}^{d-1}} (\x)\,. \end{eqnarray*} We call quasicube a parallelepiped in ${\mathbb Z}^d$ such that the shortest and the longest side lengths differ by at most one length unit. Notice that the faces of a quasicube are $d-1$ dimensional quasicubes. From the results of Neves \cite{N1} we see that there exists an optimal path from $\mathbf{-1}$ to $\mathbf{+1}$ made of configurations which are as close as possible to a cube. We call reference path in a box~$\Lambda$ a path $\rho=(\rho_0,\dots,\rho_{|{\Lambda}|})$ going from $\mathbf{-1}$ to $\mathbf{+1}$ built with the following algorithm. In one dimension, $\rho_i$ has exactly $i$ pluses which form an interval of length $i$. In higher dimension, we proceed as follows: \begin{enumerate} \item Put a plus somewhere in the box. \item Fill one of the largest faces of the parallelepiped of pluses (among those contained in the box), following a $d-1$ dimensional reference path. \item Go to step 2 until the entire box is full of pluses. \end{enumerate} With a reference path $\rho=(\rho_0,\dots,\rho_{|{\Lambda}|})$, we associate a re\-fe\-ren\-ce cy\-cle path consisting of the sequence of cycles $(\pi_0,\dots,\pi_{|{\Lambda}|})$, where for $i=0,\dots,|{\Lambda}|$, the cycle $\pi_i$ is the maximal cycle of $\smash{\{\,-1,+1\,\}^\Lambda} \setminus \{\,\mathbf{-1},\mathbf{+1}\,\}$ containing $\rho_i$. A reference path enjoys the following remarkable property: $$\forall i<j\qquad E(\rho_i,\rho_j)\,=\,\max\,\big\{\,H(\rho_k):i\leq k\leq j\,\big\}\,,$$ i.e., it realizes the solution of the minimax problem associated to the communication energy between any two of its configurations. \subsection{The metastable cycle.} \label{sitm} Let ${\Lambda}$ be a box whose sides are larger than $2d/h$. We endow ${\Lambda}$ with minus boundary conditions.
The metastable cycle ${\cal C}_d$ in the box $\Lambda$ is the maximal cycle of $$\smash{\{\,-1,+1\,\}^\Lambda} \setminus \{\,\mathbf{+1}\,\}$$ containing $\mathbf{-1}$ in the energy landscape associated to $H_{\Lambda}^-$, the Hamiltonian in ${\Lambda}$ with minus boundary conditions. We define $${\Gamma}_d\,=\,\depth({\cal C}_d)\,=\, E(\mathbf{-1},\mathbf{+1})\,.$$ Recall that, by convention, $H(\mathbf{-1})=0$. Obviously, a path ${\omega}=({\omega}_0,\dots,{\omega}_l)$ going from $\mathbf{-1}$ to $\mathbf{+1}$ satisfies $$\displaylines{\max_{0\leq i\leq l}H({\omega}_i)\,\geq\, \max_{0\leq k\leq |\Lambda|} \min\,\big\{\,H(\sigma): \sigma\in\smash{\{\,-1,+1\,\}^\Lambda},\,|\sigma|=k\,\big\} \cr \,=\, \max_{0\leq k\leq |\Lambda|} \Big( \min\,\big\{\,p(\sigma): \sigma\in\smash{\{\,-1,+1\,\}^\Lambda},\,|\sigma|=k\,\big\}-hk \Big) }$$ and the reference path~$\rho$ realizes the equality in this inequality. We conclude therefore that $${\Gamma}_d\,=\, \max_{0\leq k\leq |\Lambda|} H(\rho_k)\,.$$ When $h$ is irrational, there exists a unique value $m_d$ such that $${\Gamma}_d\,=\, H(\rho_{m_d})\,,$$ i.e., the value ${\Gamma}_d$ is reached for the configuration of a reference path having volume $m_d$. We call such a configuration a critical droplet. \centerline{ \includegraphics[height=7cm,width=7cm]{f9a.eps} } \noindent From the results of Neves \cite{N2,N1} and a direct computation, we derive the following facts. Let $$l_c(d)\,=\,\Big\lfloor\frac{2(d-1)}{h}\Big\rfloor\,.$$ The configuration of volume $m_d$ is a quasicube with sides of length $l_c(d)$ or $l_c(d)+1$, with a $d-1$ dimensional critical droplet attached on one of its largest sides. The precise shape of the critical droplet depends on the value of~$h$ (see for instance \cite{BAC} for $d=3$); by the precise shape, we mean the number of sides of the quasicube which are equal to $l_c(d)$ and $l_c(d)+1$.
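As a concrete numerical illustration (the values of $d$ and $h$ below are ours, chosen only for the example), the following snippet computes $l_c(d)$ and checks that, among cubic droplets of side $l$ with energy $2dl^{d-1}-hl^d$ (the surface term minus the magnetic term, in the normalization of the bounds on $\Gamma_d$ recalled in this section), the maximal energy is attained at a side length close to $2(d-1)/h$:

```python
import math

def l_c(d, h):
    # Critical side length from the text: l_c(d) = floor(2(d-1)/h).
    return math.floor(2 * (d - 1) / h)

def cube_energy(l, d, h):
    # Energy of a cubic droplet of side l: surface term 2*d*l^(d-1)
    # minus the magnetic term h*l^d.
    return 2 * d * l ** (d - 1) - h * l ** d

# Sample values (ours, for illustration only).
d, h = 3, 0.5
lc = l_c(d, h)                                   # floor(4/0.5) = 8
best = max(range(1, 50), key=lambda l: cube_energy(l, d, h))
```

Here `best` coincides with `lc`, namely $8$: among cubes, the energy barrier is crossed at the critical side length, in agreement with the description of the critical droplet as a quasicube of side $l_c(d)$ or $l_c(d)+1$.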
It is possible to derive exact formulas for~$m_d$ and ${\Gamma}_d$, but they are complicated and it is necessary to consider various cases according to the value of~$h$. However, we have $m_1=1$, ${\Gamma}_1=2-h$ and the following inequalities: $$\displaylines{ \big(l_c(d)\big)^d\,\leq\,m_d \,\leq\, \big(l_c(d)+1\big)^d\,,\cr 2d\big(l_c(d)\big)^{d-1} -h\big(l_c(d)+1\big)^d \,\leq\,\Gamma_d \,\leq\, 2d\big(l_c(d)+1\big)^{d-1} -h\big(l_c(d)\big)^d \,.}$$ This yields the following expansions as $h$ goes to~$0$: $$m_d \sim \tonda{\frac{2 (d-1)}{h}}^d\,,\qquad {\Gamma}_d \sim 2\tonda{\frac{2 (d-1)}{h}}^{d-1}\,. $$ \bl{contga} The energy $\Gamma_d$ of the critical droplet in dimension~$d$ is a continuous function of the magnetic field~$h$. \end{lemma} \begin{proof} Let $h_0>0$. Let ${\Lambda}$ be a box of side length larger than $4d/h_0$. From the previous results, for any $h\geq h_0$, we have the equality $$ \Gamma_d \,=\, \max_{0\leq k\leq |\Lambda|} \min\,\big\{\,H(\sigma): \sigma\in\smash{\{\,-1,+1\,\}^\Lambda},\,|\sigma|=k\,\big\}\,. $$ Given a configuration $\sigma$ of spins in $\Lambda$, the Hamiltonian $H(\sigma)$ is a continuous function of the magnetic field $h$. For $k\leq |\Lambda|$, the number of configurations $\sigma$ such that $|\sigma|=k$ is finite, thus the minimum $$\min\,\big\{\,H(\sigma): \sigma\in\smash{\{\,-1,+1\,\}^\Lambda},\,|\sigma|=k\,\big\} $$ is also a continuous function of $h$. Thus $\Gamma_d$ is also a continuous function of $h$ on $[h_0,+\infty[$. This holds for any $h_0>0$, thus $\Gamma_d$ is a continuous function of $h$ on $]0,+\infty[$. \end{proof} \noindent Our next goal is to prove that the maximal depth of the cycles in a reference cycle path is smaller than ${\Gamma}_{d-1}$. Let $\rho=(\rho_0,\dots,\rho_{|{\Lambda}|})$ be a reference path and let $(\pi_0,\dots,\pi_{|{\Lambda}|})$ be the corresponding reference cycle path. 
We set $$\Delta_d\,=\,\max_{0\leq i<m_d}\depth(\pi_i) \,=\, \max_{0\leq i<m_d} \big(E(\pi_i,\mathbf{-1})-E(\bottom(\pi_i))\big)\,.$$ \bp{dee} The maximal depth~$\Delta_d$ of the cycles in a reference cycle path is strictly less than~${\Gamma}_{d-1}$. \end{proposition} \begin{proof} For $i<m_d$ the configuration $\rho_i$ belongs to~${\cal C}_d$ and we have $$E(\pi_i,\mathbf{-1})\,=\, \max_{0\leq j\leq i}H(\rho_j)\,.$$ Let us define, for $0\leq i\leq |{\Lambda}|$, $$\displaylines{ \underline{v}_i\,=\,\min\,\big\{\,|\sigma|:\sigma\in\pi_i\,\big\}\,,\cr \overline{v}_i\,=\,\max\,\big\{\,|\sigma|:\sigma\in\pi_i\,\big\}\,. }$$ Whenever $i<m_d$, the value $\underline{v}_i$ is the unique integer $v$ such that $$H(\rho_{v-1})\,=\,E(\pi_i,\mathbf{-1})\,.$$ Thanks to the minimax property of the reference path, we have also that $\rho_k\in\pi_i$ for $\underline{v}_i\leq k\leq\overline{v}_i$ whence $$E(\bottom(\pi_i))\,=\,\min\,\big\{\,H(\rho_k): \underline{v}_i\leq k\leq\overline{v}_i\,\big\}\,.$$ From the previous identities, we infer that $$\displaylines{ \Delta_d \,=\, \max_{0\leq i<m_d} \max \big\{\, H(\rho_{\underline{v}_i-1})- H(\rho_{k}): \underline{v}_i\leq k\leq\overline{v}_i\,\big\} \cr \,\leq\, \max_{0\leq j\leq i<m_d} \big(H(\rho_{j})- H(\rho_{i})\big) \,.}$$ The maximum of the energy along a $d-1$ dimensional reference path is reached at the value $m_{d-1}$, while the minimum of the energy is reached at one of the two ends of the path. Therefore the indices $i^*,j^*$ realizing the maximum of the right-hand side correspond respectively to a quasicube $\rho_{i^*}$ and the union $\rho_{j^*}$ of a quasicube $c^*$ and a $d-1$ dimensional critical droplet. Since $j^*\leq i^*$, we have $c^*\subset\rho_{j^*}\subset \rho_{i^*}$.
The quasicubes $c^*$ and $\rho_{i^*}$ being subcritical, we have $H(c^*)<H(\rho_{i^*})$ and therefore $$\Delta_d\,\leq\, H(\rho_{j^*})- H(\rho_{i^*}) \,<\, H(\rho_{j^*})- H({c^*})\,\leq\,{\Gamma}_{d-1}\,.$$ The last inequality holds also when $c^*$ is too small so that a $d-1$ dimensional critical droplet cannot be attached to one of its faces. \end{proof} \subsection{Boxes with $n\pm$ boundary conditions.} \label{six} Unlike in the simplified model studied in~\cite{CM2}, we cannot use here a direct induction on the dimension~$d$. Instead, we introduce special boundary conditions that make a $d$--dimensional system behave like an $n$--dimensional system. For $E$ a subset of ${\mathbb Z}^d$, we define its outer vertex boundary $\partial^{\, out} E$ as $$\partial^{\, out} E\,=\, \big\{\,x\in {\mathbb Z}^d\setminus E:\exists\, y\in E\quad |y-x|=1\,\big\}\,.$$ Let $n\in\{\,0,\dots,d\,\}$. We define next mixed boundary conditions for parallelepipeds with minus on $2n$ faces and plus on $2d-2n$ faces. \noindent {\bf Boundary condition $n\pm$}. Let $R$ be a parallelepiped. We write $R$ as the product $R=\Lambda_1\times\Lambda_2$, where $\Lambda_1,\Lambda_2$ are parallelepipeds of dimensions $n,d-n$ respectively. We consider the boundary conditions on~$R$ defined as \medskip $\bullet$ minus on $\big(\partial^{\, out} {\Lambda}_1\big) \times {\Lambda}_2$, \smallskip $\bullet$ plus on ${\Lambda}_1\times \partial^{\, out} {\Lambda}_2$. \medskip \noindent We denote by $n\pm$ this boundary condition, and by $H^{n\pm}$ the corresponding Hamiltonian in $R$. The $n\pm$ boundary condition on $R$ is obtained by putting minuses on the exterior faces of $R$ orthogonal to the first $n$ axes and pluses on the remaining faces.
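The assignment of the boundary spins can be sketched as follows (an illustrative helper with our own conventions, not the paper's notation: the box is $\prod_i\{0,\dots,L_i-1\}$, and a site of the outer vertex boundary sticks out of the box along exactly one axis):

```python
def npm_boundary_spin(x, n, sides):
    # Spin given by the n+/- boundary condition to a site x of the
    # outer vertex boundary of the box prod_i {0,...,sides[i]-1}:
    # minus if x sticks out along one of the first n axes, plus otherwise.
    out = [i for i, (xi, Li) in enumerate(zip(x, sides)) if xi < 0 or xi >= Li]
    if len(out) != 1:
        raise ValueError("x is not in the outer vertex boundary of the box")
    return -1 if out[0] < n else +1
```

For instance, in dimension $3$ with $n=1$, the two faces orthogonal to the first axis receive minuses and the four remaining faces receive pluses, as in the figure below.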
\begin{pspicture}(-4.1,-5)(3,3) \pstThreeDCoor[xMax=2,zMax=2,yMax=2] \pstThreeDBox[hiddenLine](3,-1,2)(0,0,0.75)(0.75,0,0)(0,0.75,0) \uput[0](-2.8,1.1){$+$} \uput[0](-3.5,1.1){$+$} \uput[0](-2.96,1.4){$+$} \uput[0](-3.1,2.1){$0\pm$} \pstThreeDBox[hiddenLine](3,-1,-2)(0,0,1.5)(1.5,0,0)(0,1.5,0) \uput[0](-2.6,-2.4){$-$} \uput[0](-3.7,-2.4){$-$} \uput[0](-3.5,-1.7){$-$} \uput[0](-3.1,-0.8){$3\pm$} \pstThreeDBox[hiddenLine](-3.5,4,2)(0,0,1)(4,0,0)(0,1,0) \uput[0](4.2,0.8){$+$} \uput[0](4,1.5){$+$} \uput[0](2.4,0.5){$-$} \uput[0](2.7,1.8){$1\pm$} \pstThreeDBox[hiddenLine](-2,3,-1)(0,0,1)(4,0,0)(0,4,0) \uput[0](3.25,-2){$+$} \uput[0](1.7,-2.8){$-$} \uput[0](4.5,-3){$-$} \uput[0](4.75,-0.8){$2\pm$} \end{pspicture} \noindent We will now transfer the isoperimetric results in ${\mathbb Z}^d$ to parallelepipeds with $n\pm$ boundary condition. \bl{prok} Let $n\in\{\,1,\dots,d\,\}$. Let $R$ be a $d$ dimensional parallelepiped and let $l$ be the length of its smallest side. For any configuration $\sigma$ in $R$ such that $|\sigma|<l$, there exists an $n$ dimensional configuration $\rho$ such that \[|\rho|=|{\sigma}|\,,\qquad H_{{\mathbb Z}^n}(\rho)\,\leq\, H^{n\pm}_R(\sigma)\,.\] \end{lemma} \begin{proof} The constraint on the cardinality of $\sigma$ ensures that there is no cluster of $+$ connecting two opposite faces of $R$. We endow ${\mathbb N}^d$ with $n\pm$ boundary conditions by putting minuses on \[\big(\{\,-1\,\}\times {\mathbb N}^{d-1}\big)\cup\dots\cup \big({\mathbb N}^{n-1}\times\{\,-1\,\}\times {\mathbb N}^{d-n}\big)\] and pluses on \[\big({\mathbb N}^n\times\{\,-1\,\}\times {\mathbb N}^{d-n-1}\big)\cup\dots\cup \big({\mathbb N}^{d-1}\times\{\,-1\,\}\big)\,.\] We shall prove the following assertion, which implies the claim of the lemma. Suppose $n<d$. 
For any finite configuration $\sigma$ in ${\mathbb N}^d$, there exists a configuration $\rho$ in ${\mathbb N}^{d-1}$ such that \[|\rho|=|{\sigma}|\,,\qquad H_{{\mathbb N}^{d-1}}^{n\pm}(\rho)\,\leq\, H_{{\mathbb N}^d}^{n\pm}(\sigma)\,.\] If we start with a configuration $\sigma$ in $R$ such that $|\sigma|<l$, then we apply this result iteratively to the connected components of $\sigma$ (since no connected component of $\sigma$ intersects two opposite faces of $R$, up to a rotation, their energies can be computed as if they were in ${\mathbb N}^d$ with $n\pm$ boundary conditions). We end up with a configuration $\eta$ in ${\mathbb N}^n$ with $n\pm$ boundary conditions which satisfies the conclusion of the lemma. We prove next the assertion. Let $\sigma$ be a finite configuration in ${\mathbb N}^d$ and let $c$ be the polyomino associated to $\sigma$. We let $c$ fall by gravity along the $(n+1)$th axis on ${\mathbb N}^{n}\times\{\,-1\,\}\times {\mathbb N}^{d-n-1}$. \smallskip \centerline{ \psset{xunit=1cm,yunit=1cm,runit=1cm} \pspicture(-1,1.5)(10,8.7) \rput(1.5,5){ \includegraphics[height=7cm,width=7cm]{f5a.eps} } \rput(8,5){ \includegraphics[height=7cm,width=7cm]{f5b.eps} } \psline{->}(3,4.5)(6,4.5) \rput(4.4,4.8){Falling along} \rput(4.4,4.2){the third axis} \rput(4.4,6.3){ \pstThreeDCoor[xMin=-0.5,yMin=-0.5,zMin=-0.5,xMax=1,zMax=1,yMax=1] } \endpspicture } \smallskip\noindent The resulting polyomino $\smash{{\widetilde{c}}}$ has the same volume as~$c$ and moreover \[\text{perimeter}(\smash{{\widetilde{c}}})\,\leq\,\text{perimeter}(c)\,,\] because the number of contacts between the unit cubes or with the boundary condition cannot decrease through the ``falling'' operation.
We can think of $\smash{{\widetilde{c}}}$ as a stack of $d-1$ dimensional polyominoes $c_0,\dots,c_k$, which are obtained by intersecting $\smash{{\widetilde{c}}}$ with the layers \[L_i\,=\,\big\{\, x=(x_1,\dots,x_d)\in {\mathbb N}^d: i-\frac{1}{2}\leq x_{n+1}<i+\frac{1}{2}\, \big\}\,,\qquad i\in{\mathbb N}\,.\] Since we have let $c$ fall by gravity to obtain $\smash{{\widetilde{c}}}$, this stack is non--increasing in the following sense: for $i$ in ${\mathbb N}$, the $d-1$ dimensional polyomino $c_i$ associated to the layer $L_i$ contains the $d-1$ dimensional polyomino $c_{i+1}$ associated to the layer $L_{i+1}$. As a consequence, \[ H_{{\mathbb N}^d}^{n\pm}(\smash{{\widetilde{c}}})\,\geq\, \sum_{i\geq 0} H_{{\mathbb N}^{d-1}}^{n\pm}(c_i)\,+\, \text{area}(\text{proj}_{n+1}(\smash{{\widetilde{c}}})) \] where $\text{proj}_{n+1}(\smash{{\widetilde{c}}})$ is the orthogonal projection of $\smash{{\widetilde{c}}}$ on ${\mathbb N}^{n}\times\{\,-1\,\}\times {\mathbb N}^{d-n-1}\!$. Let $\smash{{\widehat{c}}}$ be the $d-1$ dimensional polyomino obtained as the union of disjoint translates of $c_0,\dots,c_k$, where each $c_i$ is shifted along the first coordinate axis, the shifts being chosen so that the translates are far apart from each other. Since the first axis carries a minus boundary condition, these shifts do not change the energies, whence \[H_{{\mathbb N}^{d-1}}^{n\pm}(\smash{{\widehat{c}}})\,=\, \sum_{i\geq 0} H_{{\mathbb N}^{d-1}}^{n\pm}(c_i)\,\leq\, H_{{\mathbb N}^d}^{n\pm}(\smash{{\widetilde{c}}})\,\leq\, H_{{\mathbb N}^d}^{n\pm}(c)\,.\] Since $\smash{{\widehat{c}}}$ has the same volume as $c$, the polyomino $\smash{{\widehat{c}}}$ answers the problem. \end{proof} \noindent Let ${\Lambda}$ be a box whose sides are larger than $m_n$. We construct next a reference path $(\rho^{n\pm}_i,0\leq i\leq |{\Lambda}|)$ in the box ${\Lambda}$ endowed with $n\pm$ boundary conditions with the following algorithm. \begin{enumerate} \item Compute the maximum number $m$ of plus neighbours for a minus site in the box (taking into account the boundary conditions). \item If there is only one site realizing this maximum, put a plus at this site and go to step 1. \item Otherwise, compute the maximal length of a segment of minus sites having all $m$ plus neighbors. \item Put a plus at a site of a segment realizing the previous maximum and go to step 1.
\end{enumerate} \noindent As before, the reference path $(\rho^{n\pm}_i,0\leq i\leq |{\Lambda}|)$ realizes the solution of the minimax problem associated to the communication energy between any two of its configurations. The metastable cycle ${\cal C}_d^{n\pm}$ in the box $\Lambda$ with $n\pm$ boundary conditions is the maximal cycle of $$\smash{\{\,-1,+1\,\}^\Lambda} \setminus \{\,\mathbf{-1},\mathbf{+1}\,\}$$ containing $\mathbf{-1}$ in the energy landscape associated to the Hamiltonian $H^{n\pm}_{{\Lambda}}$. \bc{dep} The depth of the metastable cycle ${\cal C}_d^{n\pm}$ is equal to ${\Gamma}_n$. \end{corollary} \begin{proof} With the help of lemma~\ref{prok}, we can compare the energy along a path in ${\Lambda}$ with $n\pm$ boundary conditions with the energy along a path in ${\mathbb Z}^n$, in such a way that at each index the configurations in each path have the same cardinality. This construction implies immediately that $$\depth({\cal C}_d^{n\pm}) \,\geq\,{\Gamma}_n\,.$$ To get the converse inequality we simply consider the reference path in ${\Lambda}$ with $n\pm$ boundary conditions. \end{proof} \bc{codep} The maximal depth~$\Delta_d^{n\pm}$ of the cycles in a reference cycle path with $n\pm$ boundary conditions is strictly less than~${\Gamma}_{n-1}$. \end{corollary} \begin{proof} We check that, until the index $m_n$, the energy along the reference path $(\rho^{n\pm}_i,i\geq 0)$ is equal to the energy along the reference path in ${\mathbb Z}^n$ computed with $H_{{\mathbb Z}^n}$. The result follows then from proposition~\ref{dee}. \end{proof} \section{The space-time clusters.} \label{stc} The goal of this section is to prove theorem~\ref{totcontrole}, which provides a control on the diameter of the space-time clusters. 
Theorem~\ref{totcontrole} is used in an essential way in the proof of the lower bound on the relaxation time, under the following weaker form: for the dynamics restricted to a small box, the probability of creating a large ${\hbox{\footnotesize\rm STC}}$ before nucleation is SES. We first recall the basic definitions and properties of the space-time clusters in section~\ref{badp}. We next proceed to show that it is very unlikely that large space--time clusters are formed before nucleation. The main theorem of this section, theorem~\ref{totcontrole}, is the analog of Lemma 4 in \cite{DS1}. The proof in \cite{DS1} relies on the fact that in two dimensions the energy needed to grow, i.e., the energy of a protuberance, is larger than the energy needed to shrink a subcritical droplet. In higher dimension, we are not able to prove a corresponding result. Let us give a quick sketch of the proof of theorem~\ref{totcontrole}. We consider a set ${\cal D}$ satisfying a technical hypothesis and we want to control the probability of creating a large space--time cluster before exiting ${\cal D}$. Typically, the set ${\cal D}$ is a cycle or a cycle compound included in the metastable cycle. We use several ideas coming from the theory of simulated annealing \cite{CaCe}. We decompose ${\cal D}$ into its maximal cycle compounds and we show that, before exiting ${\cal D}$, the process is unlikely to make a large number of jumps between these maximal cycle compounds. Thus, if a large space--time cluster is created, then it must be created during a visit to a maximal cycle compound. The problem is therefore reduced to controlling the size of the space--time cluster created inside a cycle compound $\overline{\cal A}$ included in ${\cal D}$. The key estimate is proved by induction over the depth of the cycle compound. Suppose we want to prove the estimate for a cycle compound $\overline{\cal A}$.
A first key fact, proved in lemma \ref{fondo} with the help of the ferromagnetic inequality, is that in the Ising model under irrational magnetic field the bottom of every cycle compound is a singleton. Let $\eta$ be the bottom of the cycle compound~$\overline{\cal A}$. We consider now the trajectory of the process starting from a point of $\overline{\cal A}$ until it exits from $\overline{\cal A}$. In section~\ref{sectriangle}, in order to control the size of the space-time clusters, we define a quantity ${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,t)$ depending on a time interval $[s,t]$. This quantity is larger than the increase of the maximum of the diameters of the space--time clusters created between the times $s$ and $t$. Moreover this quantity is subadditive with respect to the time (see lemma~\ref{triangle}). Our strategy is to look at the successive visits to $\eta$ and the excursions outside of $\eta$. Suppose that $\eta$ has only one connected component. The creation of a large space--time cluster in a fixed direction has to be achieved during an excursion outside of $\eta$. Indeed, each time the process comes back to $\eta$, the growth of the space--time clusters restarts almost from scratch. 
\centerline{ \psset{xunit=1cm,yunit=1cm,runit=1cm} \pspicture(0,0)(11,11) \rput(10,-0.5){time} \psline{->}(0,0)(11,0) \psline{->}(0,0)(0,11) \rput(0.75,10){space} \psline[linestyle=dashed]{-}(0,0)(0,10) \psdots[dotsize=5pt](0,3)(0,7)(0,6)(0,6.5) \psline{-}(0,7)(1.1,7) \psline{-}(1.3,7)(2,7) \psline{-}(1.3,7)(1.3,6.5) \psline{-}(1.5,6)(2,6) \psline{-}(1.5,6)(1.5,6.5) \psline{-}(0,6.5)(2,6.5) \psline[linestyle=dashed]{-}(2,0)(2,10) \psdots[dotsize=5pt](2,3)(2,7)(2,6)(2,6.5) \psline[linestyle=dashed]{-}(5,0)(5,10) \psdots[dotsize=5pt](5,3)(5,7)(5,6)(5,6.5) \psline[linestyle=dashed]{-}(6,0)(6,10) \psdots[dotsize=5pt](6,3)(6,7)(6,6)(6,6.5) \psline[linestyle=dashed]{-}(10,0)(10,10) \psdots[dotsize=5pt](10,3)(10,7)(10,6)(10,6.5) \psline{-}(0,3)(1,3) \psline{-}(0.5,3.5)(0.9,3.5) \psline{-}(0.5,3.5)(0.5,3) \psline{-}(0.6,4)(1.8,4) \psline{-}(0.6,4)(0.6,3.5) \psline{-}(1.2,4.5)(1.7,4.5) \psline{-}(1.2,4.5)(1.2,4) \rput(3.5,-0.5){returns to $\eta$} \psline{->}(2.5,-0.5)(2,0) \psline{->}(4.5,-0.5)(5,0) \psline{->}(4.5,-0.5)(6,0) \psline{->}(4.5,-0.5)(10,0) \psline{-}(2,6)(2.3,6)(2.3,5.5)(2.3,5)(3,5)(3,4.5)(3.2,4.5)(3.2,4) (4,4)(4,3.5)(4.5,3.5)(4.5,3)(5,3) \psline{-}(2.3,6)(2.9,6) \psline{-}(2.3,5.5)(3.9,5.5) \psline{-}(3.2,4)(4.7,4) \psline{-}(3.7,7)(3.7,6.5) \psline{-}(3,4.5)(3.7,4.5) \psline{-}(3.2,4)(3.4,4) \psline{-}(4,3.5)(4.5,3.5) \psline{-}(5,3)(3,3)(3,1) \psline{-}(2,7)(4,7) \psline{-}(2,3)(2.5,3) \psline{-}(5,7)(4.5,7)(4.5,6.5)(5,6.5)(3,6.5) \psline{-}(5,7)(5.1,7)(5.1,7.5)(5.3,7.5)(5.3,8)(5.4,8) (5.4,8.5)(5.4,9)(5.5,9)(5.5,9.5)(5.7,9.5)(5.7,10) \psline{-}(5,6)(7.2,6)(7.2,5.5)(7,5.5)(7,5)(7.8,5)(7.8,4.5)(8.2,4.5) (8.2,4)(9.1,4)(9.1,4.5)(9.7,4.5)(9.7,5) \psline{-}(8.4,4)(8.4,4.5)(8.5,4.5)(8.5,5)(8.8,5) (8.8,5.5)(9.3,5.5)(9.3,6)(10,6) \psline{-}(5,6.5)(8,6.5) \psline{-}(5,3)(6,3) \psline{-}(6,3)(6.2,3)(6.2,3.5)(6.5,3.5)(6.5,3)(6.6,3)(6.6,2.5) (7,2.5)(7,2)(7.7,2)(7.7,1.5)(8,1.5)(8,2)(8.4,2)(8.5,2)(8.5,1.5)(8.7,1.5) (8.7,1)(8.9,1)(8.9,0.5)(9.3,0.5) 
\psline{-}(8.5,2)(8.6,2)(8.6,2.5)(9.1,2.5)(9.1,3)(10,3) \psline{-}(5,7)(8.6,7)(8.6,7.5)(9.3,7.5)(9.3,8)(9.4,8)(9.4,7.5)(9.6,7.5) (9.6,7)(10,7) \endpspicture } \bigskip \bigskip \centerline{Evolution of a STC in dimension 1} \bigskip Thus if a large space--time cluster is created before the exit of $\overline{\cal A}$, then it has to be created during an excursion outside of $\eta$. The situation is more complicated when the bottom $\eta$ has several connected components. Indeed, the space--time clusters associated to one connected component might change between two consecutive visits to $\eta$. We prove in section~\ref{anad} that this does not happen: at each visit to $\eta$, a given connected component of $\eta$ always belongs to the same space--time cluster. This is a consequence of lemma~\ref{stccc}. The figure shows an example of the space--time clusters associated to a configuration $\eta$ having two connected components. In the evolution depicted in the figure, the space--time clusters containing the lower component of $\eta$ at the times of the first two returns are distinct. We will prove that this cannot occur as long as the process stays in the cycle compound~$\overline{\cal A}$ (this is the purpose of lemma~\ref{stccc}). We rely then on a technique going back to the theory of simulated annealing, which consists in removing the bottom $\eta$ from $\overline{\cal A}$, decomposing $\overline{\cal A}\setminus\{\,\eta\,\}$ into its maximal cycle compounds and studying the jumps of the process between these maximal cycle compounds until the exit of $\overline{\cal A}\setminus\{\,\eta\,\}$. As before, we show that, before exiting $\overline{\cal A}\setminus\{\,\eta\,\}$, the process is unlikely to make a large number of jumps between these maximal cycle compounds. This step is very similar to the initial step, when we considered a general set ${\cal D}$.
For the clarity of the exposition, we prefer to repeat the argument rather than to introduce additional notations and make a general statement. Using the subadditivity of ${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,t)$, we conclude that a large space--time cluster has to be created during a visit to a maximal cycle compound of $\overline{\cal A}\setminus\{\,\eta\,\}$. Now each cycle compound included in $\overline{\cal A}\setminus\{\,\eta\,\}$ has a depth strictly smaller than the depth of $\overline{\cal A}$. Using the induction hypothesis, we have a control on the space--time clusters created during each visit to these cycle compounds. Combining the estimate provided by the induction hypothesis and the estimate on the number of cycle compounds of $\overline{\cal A}\setminus\{\,\eta\,\}$ visited by the process, we obtain a control on the size of the space--time clusters created during an excursion in $\overline{\cal A}\setminus\{\,\eta\,\}$. Using the estimates presented in section~\ref{cycles}, we can also control the number of visits to $\eta$ before the exit of $\overline{\cal A}$. The induction step is completed by combining all the previous estimates. \subsection{Basic definitions and properties.} \label{badp} Let $\Lambda$ be a subset of ${\mathbb Z}^d$ and let $(\sigma_{{\Lambda},t})_{t\geq0}$ be a continuous--time trajectory in $\smash{\{\,-1,+1\,\}^\Lambda}$. We endow the set of the space--time points $\Lambda\times{\mathbb R}^+$ with the following connectivity relation: the two space-time points $(x,t)$ and $(y,s)$ are connected if $\sigma_{{\Lambda},t}(x) = \sigma_{{\Lambda},s}(y) = +1$ and \newline $\bullet$ either $s=t$ and $|x-y| \le 1$; \newline $\bullet$ or $x=y$ and $\sigma_{{\Lambda},u}(x)=+1$ for $u\in [\min(s,t),\max(s,t)]$. \newline A space--time cluster of the trajectory $(\sigma_{{\Lambda},t})_{t\geq 0}$ is a maximal connected component of space--time points.
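For a trajectory observed at discrete times, this connectivity relation can be implemented with a union--find structure. The sketch below is our own discrete-time approximation (consecutive observation times play the role of the intervals $[\min(s,t),\max(s,t)]$, and the trajectory lives on a one dimensional segment):

```python
def space_time_clusters(traj):
    # traj[t][x] is the spin (+1 or -1) of site x at observation time t.
    # Two plus space-time points are linked when they are neighbours at
    # the same time (|x-y| <= 1) or occupy the same site at consecutive
    # times; the clusters are the connected components of this relation.
    pts = [(t, x) for t, row in enumerate(traj)
           for x, s in enumerate(row) if s == +1]
    parent = {p: p for p in pts}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    for (t, x) in pts:
        if (t, x + 1) in parent:            # simultaneous neighbours
            union((t, x), (t, x + 1))
        if (t + 1, x) in parent:            # same site, consecutive times
            union((t, x), (t + 1, x))
    clusters = {}
    for p in pts:
        clusters.setdefault(find(p), []).append(p)
    return list(clusters.values())

def diam(cluster):
    # Diameter of the spatial projection of a space-time cluster.
    xs = [x for (t, x) in cluster]
    return max(xs) - min(xs)
```

The function `diam` mirrors the definition of the diameter of a space--time cluster given later in this section: it only sees the spatial projection of the cluster.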
For $u\leq s\in{\mathbb R}^+$, we denote by ${\hbox{\footnotesize\rm STC}}(u,s)$ the space--time clusters of the trajectory restricted to the time interval $[u,s]$. Sometimes we deal with a specific initial condition~$\alpha$ and boundary conditions~${\zeta}$. We denote by $\smash{{\hbox{\footnotesize\rm STC}}({\sigma}^{\alpha,{\zeta}}_{{\Lambda},t},s\leq t\leq u)}$ the space--time clusters of the trajectory of the process $\smash{({\sigma}^{\alpha,{\zeta}}_{{\Lambda},t})_{t\geq 0}}$ restricted to the time interval $[s,u]$. The graphical construction updates the configuration in two different places independently until a space--time cluster connects the two places. We state next a refinement of Lemma 2 of \cite{DS1}, which allows us to compare processes defined in different volumes or with different boundary conditions via the graphical construction described in section~\ref{graphi}. \begin{lemma} \label{cresSTC} Let ${\Lambda}$ be a subset of ${\mathbb Z}^d$ and let ${\zeta}$ be a boundary condition on ${\Lambda}$. Let $x$ be a site of the exterior boundary of ${\Lambda}$ such that ${\zeta}(x)=+1$. If ${\cal C}$ is a ${\hbox{\footnotesize\rm STC}}$ for the dynamics in ${\Lambda}$ with ${\zeta}$ as boundary conditions and ${\cal C}$ is such that $x$ is not the neighbour of a point of ${\cal C}$, then ${\cal C}$ is also a ${\hbox{\footnotesize\rm STC}}$ for the dynamics in ${\Lambda}$ with ${\zeta}^x$ as boundary conditions. \end{lemma} \begin{proof} We denote by $\alpha$ the initial configuration. From the coupling, we have \[ \forall t\geq 0\quad\forall y\in{\Lambda}\qquad {\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t}(y)\,\leq\, {\sigma}^{\alpha,{\zeta}}_{{\Lambda},t}(y)\,. \] Let ${\cal C}$ be a ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}({\sigma}^{\alpha,{\zeta}}_{{\Lambda},t},s\leq t\leq u)$ and suppose that ${\cal C}$ does not belong to ${\hbox{\footnotesize\rm STC}}(\smash{{\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t}},s\leq t\leq u)$.
Necessarily, there exists a space--time point $(y,t)$ such that \[ (y,t)\in{\cal C}\,,\quad {\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t}(y)=-1\,,\quad {\sigma}^{\alpha,{\zeta}}_{{\Lambda},t}(y)=+1\,. \] We consider the set of the space--time points satisfying the above condition and we denote by $(y^*,t^*)$ a space--time point of this set such that $t^*$ is minimal. This is possible since the number of spin flips in a finite box is finite in a finite time interval, and moreover the trajectories are right continuous. At time $t^*$, the spin at site $y^*$ becomes $+1$ in the process $\smash{({\sigma}^{\alpha,{\zeta}}_{{\Lambda},t})_{t\geq 0}}$, and it remains equal to $-1$ in $\smash{({\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t})_{t\geq 0}}$. We examine next the neighbors of $y^*$. Let $z$ be a neighbor of $y^*$ in ${\Lambda}$. If $\smash{{\sigma}^{\alpha,{\zeta}}_{{\Lambda},t^*}(z)=-1}$, then $\smash{{\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t^*}(z)=-1}$ as well. Suppose that $\smash{{\sigma}^{\alpha,{\zeta}}_{{\Lambda},t^*}(z)=+1}$. The spin at $z$ does not change at time $t^*$, thus for $s<t^*$ close enough to $t^*$, we have also $\smash{{\sigma}^{\alpha,{\zeta}}_{{\Lambda},s}(z)=+1}$. This implies that $\{\,z\,\}\times [s,t^*]$ is included in ${\cal C}$. From the definition of $(y^*,t^*)$, we have that \[\forall u\in[s,t^*]\quad \smash{{\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},u}(z)=+1}\,. \] We conclude that the neighbors of $y^*$ in ${\Lambda}$ have the same spins in $\smash{{\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t^*}}$ and in $\smash{{\sigma}^{\alpha,{\zeta}}_{{\Lambda},t^*}}$. Therefore $y^*$ must have a neighbor in ${\mathbb Z}^d\setminus {\Lambda}$ whose spin is different in $\smash{{\sigma}^{\alpha,{\zeta}^x}_{{\Lambda},t^*}}$ and in $\smash{{\sigma}^{\alpha,{\zeta}}_{{\Lambda},t^*}}$. The only possible candidate is $x$. \end{proof} \noindent The next corollary is very close to Lemma 2 of \cite{DS1}.
\bc{deghstc} Let ${\Lambda}_1\subset{\Lambda}_2$ be two subsets of ${\mathbb Z}^d$, let $\alpha$ be an initial configuration in ${\Lambda}_2$ and let ${\zeta}$ be a boundary condition on ${\Lambda}_2$. If no ${\hbox{\footnotesize\rm STC}}$ of the process $({\sigma}^{\alpha,{\zeta}}_{{\Lambda}_2,t},s\leq t\leq u)$ intersects both ${\Lambda}_1$ and the inner boundary of ${\Lambda}_2$, then $$ \forall t\in[s,u]\qquad {\sigma}^{\alpha,{\zeta}}_{{\Lambda}_2,t}|_{{\Lambda}_1}\,=\, {\sigma}^{\alpha,-}_{{\Lambda}_2,t}|_{{\Lambda}_1}\,. $$ \end{corollary} We define the diameter ${\mathrm{diam }} \,{\cal C}$ of a space--time cluster~${\cal C}$ by $${\mathrm{diam }} \,{\cal C}\,= \, \sup\, \big\{\,|x-y|_\infty: (x,s),\,(y,t) \in {\cal C} \,\big\}\, $$ where $|\cdot|_\infty$ is the supremum norm given by $$\forall x=(x_1,\dots,x_d)\in{\mathbb Z}^d\qquad |x|_\infty\,=\, \max_{1\leq i\leq d} |x_i|\,.$$ Thus ${\mathrm{diam }}\,{\cal C}$ is the diameter of the spatial projection of ${\cal C}$. \subsection{The bottom of a cycle compound.} We prove here that, when $h$ is irrational, the bottom of a cycle compound of the Ising model contains a unique configuration. Throughout the section, we consider a finite box $Q$ endowed with a boundary condition $\xi$. To alleviate the formulas, we write simply $H$ instead of $H_Q^\xi$. \bl{nuovo} Suppose that $h$ is irrational. Let ${\eta}$ be a minimizer of the energy in a cycle compound $\overline{{\cal A}}$. Then, for any ${\zeta}\in \overline{{\cal A}}$, ${\zeta} \cup {\eta} \in \overline{{\cal A}}$ and ${\zeta} \cap {\eta} \in \overline{{\cal A}}$. \end{lemma} \begin{proof} Let $\eta$ belong to the bottom of $\overline{\cal A}$.
We assume that $\overline{{\cal A}}$ is not a singleton, otherwise there is nothing to prove. Let ${\omega}=({\omega}_1,\dots,{\omega}_n)$ be a path in $\overline{{\cal A}}$ that goes from ${\eta}$ to ${\zeta}$. We associate with ${\omega}$ a {\emph{slim}} path $${\omega}\cap\eta\,=\,({\omega}_1\cap{\eta},\dots,{\omega}_n\cap{\eta})$$ and a {\emph{fat}} path $${\omega}\cup\eta\,=\,({\omega}_1\cup{\eta},\dots,{\omega}_n\cup{\eta})\,.$$ Suppose that the conclusion of the lemma fails, and let us set \be{defk} {\kappa}^* = \min \big\{\,k\geq 1 : {\omega}_k\cap{\eta} \not \in \overline{{\cal A}} \ \text{ or } \ {\omega}_k\cup{\eta} \not \in \overline{{\cal A}}\,\big\}\,. \end{eqnarray*} Notice that ${\kappa}^*$ is larger than or equal to~$2$, since ${\omega}_1={\eta}$. We will use the attractive inequality \be{attr} H({\omega}_k\cap{\eta})+H({\omega}_k\cup{\eta}) \le H({\omega}_k) + H({\eta}) \end{eqnarray*} and the fact that ${\eta}$ is a minimizer of the energy in $\overline{{\cal A}}$. Let us set $$\lambda\,=\,E\big(\overline{{\cal A}}, \{-1,+1\}^{Q} \setminus\overline{{\cal A}}\big)\,.$$ First, for any $k<{\kappa}^*$, the above inequality yields that $$\max\tonda{H({\omega}_{k}\cap{\eta}),H({\omega}_{k}\cup{\eta})} \leq H({\omega}_{k})\leq \lambda \,.$$ The configurations ${\omega}_{{\kappa}^*}$ and ${\omega}_{{\kappa}^*-1}$ differ by the spin at a single site.
We say that the ${\kappa}^*$-th spin flip is inside (respectively outside) ${\eta}$ if this site has a plus spin (respectively a minus spin) in ${\eta}$, that is, if ${\omega}_{{\kappa}^*} \vartriangle {\omega}_{{{\kappa}^*}-1} \subset {\eta}$ (respectively ${\omega}_{{\kappa}^*} \vartriangle {\omega}_{{{\kappa}^*}-1} \not\subset {\eta}$). We distinguish two cases, according to the position of the ${\kappa}^*$-th spin flip with respect to ${\eta}$: \medskip \noindent i) if the ${\kappa}^*$-th spin flip is inside ${\eta}$, then ${\omega}_{{\kappa}^*} \cup {\eta} ={\omega}_{{\kappa}^*-1} \cup {\eta}$, so that only the slim path moves and exits $\overline{\cal A}$ at index ${\kappa}^*$. Thus $${\omega}_{{\kappa}^*-1}\cap{\eta}\in\overline{\cal A}\,,\quad {\omega}_{{\kappa}^*}\cap{\eta}\not\in\overline{\cal A} $$ and these two configurations communicate, therefore $$ \max( H( {\omega}_{{\kappa}^*-1}\cap{\eta}), H( {\omega}_{{\kappa}^*}\cap{\eta}) )\,\geq\,\lambda\,.$$ We distinguish again two cases. \noindent $\bullet\quad H( {\omega}_{{\kappa}^*-1}\cap{\eta})\geq\lambda$. Since $ H( {\omega}_{{\kappa}^*-1}\cap{\eta})\leq H( {\omega}_{{\kappa}^*-1})\leq\lambda$, then ${\omega}_{{\kappa}^*-1}\cap{\eta}$ and ${\omega}_{{\kappa}^*-1}$ both have energy equal to $\lambda$, and by lemma~\ref{irrazionale}, we conclude that ${\omega}_{{\kappa}^*-1}\cap{\eta}= {\omega}_{{\kappa}^*-1}$ and ${\omega}_{{\kappa}^*-1}$ is included in ${\eta}$. Since we are assuming that the slim path moves at step ${\kappa}^*$, the original path and the slim path undergo the same spin flip so that they must coincide also at step ${\kappa}^*$, contradicting the assumption that ${\omega}_{{\kappa}^*}\cap{\eta} \not \in \overline{{\cal A}}$. \noindent $\bullet\quad H( {\omega}_{{\kappa}^*}\cap{\eta})\geq\lambda$.
By the attractive inequality $$H({\omega}_{{\kappa}^*}) -H({\omega}_{{\kappa}^*}\cap{\eta})\,\ge\, H({\omega}_{{\kappa}^*-1}\cup{\eta})-H({\eta})\,\geq\,0\,,$$ whence $$H({\omega}_{{\kappa}^*}\cap{\eta}) \,\leq\,H({\omega}_{{\kappa}^*})\,\leq\, \lambda\,.$$ Thus ${\omega}_{{\kappa}^*}\cap{\eta}$ and ${\omega}_{{\kappa}^*}$ both have energy equal to $\lambda$. By lemma~\ref{irrazionale}, we conclude that ${\omega}_{{\kappa}^*}\cap{\eta}= {\omega}_{{\kappa}^*}$, contradicting the assumption that ${\omega}_{{\kappa}^*}\cap{\eta} \not \in \overline{{\cal A}}$. \smallskip \noindent We consider next the second case. The argument is very similar in the two dual cases i) and ii), yet it seems necessary to handle them separately. \smallskip \noindent ii) if the ${\kappa}^*$-th spin flip is outside ${\eta}$, then ${\omega}_{{\kappa}^*} \cap {\eta} ={\omega}_{{\kappa}^*-1} \cap {\eta}$, so that only the fat path moves and exits $\overline{\cal A}$ at index ${\kappa}^*$. Thus $${\omega}_{{\kappa}^*-1}\cup{\eta}\in\overline{\cal A}\,,\quad {\omega}_{{\kappa}^*}\cup{\eta}\not\in\overline{\cal A} $$ and these two configurations communicate, therefore $$ \max( H( {\omega}_{{\kappa}^*-1}\cup{\eta}), H( {\omega}_{{\kappa}^*}\cup{\eta}) )\,\geq\,\lambda\,.$$ We distinguish again two cases. \noindent $\bullet\quad H( {\omega}_{{\kappa}^*-1}\cup{\eta})\geq\lambda$. Since $ H( {\omega}_{{\kappa}^*-1}\cup{\eta})\leq H( {\omega}_{{\kappa}^*-1})\leq\lambda$, then ${\omega}_{{\kappa}^*-1}\cup{\eta}$ and ${\omega}_{{\kappa}^*-1}$ both have energy equal to $\lambda$, and by lemma~\ref{irrazionale}, we conclude that ${\omega}_{{\kappa}^*-1}\cup{\eta}= {\omega}_{{\kappa}^*-1}$ and ${\omega}_{{\kappa}^*-1}$ contains ${\eta}$.
Since we are assuming that the fat path moves at step ${\kappa}^*$, the original path and the fat path undergo the same spin flip so that they must coincide also at step ${\kappa}^*$, contradicting the assumption that ${\omega}_{{\kappa}^*}\cup{\eta} \not \in \overline{{\cal A}}$. \noindent $\bullet\quad H( {\omega}_{{\kappa}^*}\cup{\eta})\geq\lambda$. By the attractive inequality $$H({\omega}_{{\kappa}^*}) -H({\omega}_{{\kappa}^*}\cup{\eta})\,\ge\, H({\omega}_{{\kappa}^*-1}\cap{\eta})-H({\eta})\,\geq\,0\,,$$ whence $$H({\omega}_{{\kappa}^*}\cup{\eta}) \,\leq\,H({\omega}_{{\kappa}^*})\,\leq\, \lambda\,.$$ Thus ${\omega}_{{\kappa}^*}\cup{\eta}$ and ${\omega}_{{\kappa}^*}$ both have energy equal to $\lambda$. By lemma~\ref{irrazionale}, we conclude that ${\omega}_{{\kappa}^*}\cup{\eta}= {\omega}_{{\kappa}^*}$, contradicting the assumption that ${\omega}_{{\kappa}^*}\cup{\eta} \not \in \overline{{\cal A}}$. \end{proof} \bl{fondo} Suppose that $h$ is irrational. The bottom $\bottom(\overline{{\cal A}})$ of any cycle compound $\overline{{\cal A}}$ contains a single configuration. \end{lemma} \begin{proof} If ${\eta}_1,{\eta}_2 \in \bottom(\overline{{\cal A}})$, then by lemma \ref{nuovo} we have also ${\eta}_1 \cup {\eta}_2 \in \overline{{\cal A}}$ and ${\eta}_1 \cap {\eta}_2 \in \overline{{\cal A}}$; since ${\eta}_1$ and ${\eta}_2$ are minimizers of the energy in $\overline{{\cal A}}$, it follows that $H({\eta}_1) + H({\eta}_2) \le H({\eta}_1 \cup {\eta}_2)+H({\eta}_1 \cap {\eta}_2) $.
But by the attractive inequality, \be{eqfondo} H({\eta}_1 \cup {\eta}_2)+H({\eta}_1 \cap {\eta}_2) \le H({\eta}_1) + H({\eta}_2), \end{eqnarray*} so that ${\eta}_1\cup{\eta}_2$ and ${\eta}_1\cap{\eta}_2$ are also in $\bottom(\overline{{\cal A}})$. Lemma \ref{irrazionale} implies that ${\eta}_1\cup{\eta}_2={\eta}_1\cap{\eta}_2$, showing that ${\eta}_1={\eta}_2$. \end{proof} \subsection{The space--time clusters in a cycle compound.} \label{anad} In this section, we study some properties of the paths contained in suitable cycle compounds. In order to avoid unnecessary notation, with a slight abuse of terms, we consider space--time clusters associated with a discrete-time trajectory. In other words, in this section the word ``time'' has the meaning of ``index of the configuration in the trajectory'', and the space--time clusters considered here are purely geometrical objects. We will use these geometrical results in order to control the diameter of the space--time clusters of our processes. As in the previous section, we consider a finite box $Q$ endowed with a boundary condition $\xi$. To alleviate the formulas, we write simply $H$ instead of $H_Q^\xi$. A connected component of a configuration~$\sigma$ is a maximal connected subset of the set of the plus sites of $\sigma$ $$\{\,x\in {\mathbb Z}^d:\sigma(x)=+1\,\}\,,$$ two sites being connected if they are nearest neighbors on the lattice. We denote by ${\cal C}(\sigma)$ the connected components of $\sigma$. If $C\in {\cal C}(\sigma)$, then we define its energy as $$H(C)\,=\,\big|\{\,\{\,x,y\,\}:x\not\in C,\,\, y\in C,\,\, |x-y|=1\,\} \big|-h|C|\,.$$ In particular, we have $$H(\sigma)\,=\,\sum_{C\in{\cal C}(\sigma)}H(C)\,.$$ Let ${\omega}=({\omega}_0,\dots,{\omega}_r)$ be a path of configurations in the box~$Q$.
We endow the set of the space--time points $Q\times{\mathbb N}$ with the following connectivity relation associated to~${\omega}$: two space--time points $(x,i)$ and $(y,j)$ are connected if ${\omega}_i(x) = {\omega}_j(y) = +1$ and \newline $\bullet$ either $i=j$ and $|x-y| \le 1$; \newline $\bullet$ or $x=y$ and $|i-j|=1$. \newline A space--time cluster of the path ${\omega}$ is a maximal connected component of space--time points in ${\omega}$. We consider a domain~${\cal D}$, which is a set of configurations satisfying the following hypothesis. \medskip \noindent {\bf Hypothesis on ${\cal D}$.} The configurations in~${\cal D}$ are such that: \noindent $\bullet\quad$There exists $v_{\cal D}$ (independent of $\beta$) such that $|\sigma|\leq v_{\cal D}$ for any $\sigma\in{\cal D}$. \noindent $\bullet\quad$If $\sigma\in{\cal D}$ and $C$ is a connected component of $\sigma$, then we have $H(C)>H(\mathbf{-1})$. \noindent $\bullet\quad$If $\sigma\in{\cal D}$ and $\eta$ is such that $\eta\subset\sigma$ and $H(\eta)\leq H(\sigma)$, then $\eta\in{\cal D}$. \bl{stccc} Let $\overline{{\cal A}}$ be a cycle compound included in~${\cal D}$ and let $\eta$ be the unique configuration of $\bottom(\overline{{\cal A}})$. Let ${\omega}\,=\,({\omega}_0,\dots,{\omega}_r)$ be a path in $\overline{{\cal A}}$ starting at~$\eta$ and ending at~$\eta$. Let $C$ be a connected component of $\eta$. Then the space--time sets $C\times\{0\}$ and $C\times\{r\}$ belong to the same space--time cluster of~${\omega}$. \end{lemma} \begin{proof} From lemma~\ref{fondo}, we know that $\bottom(\overline{{\cal A}})$ is reduced to a single configuration ${\eta}$.
By lemma~\ref{nuovo}, the path $${\omega}\cap\eta\,=\,({\omega}_0\cap{\eta},\dots,{\omega}_r\cap{\eta})$$ is still a path in $\overline{{\cal A}}$ that goes from $\eta$ to $\eta$. Moreover, the space--time clusters of ${\omega}\cap\eta$ are included in those of ${\omega}$, therefore it is enough to prove the result for the path ${\omega}\cap\eta$. Let $\smash{\widetilde{\omega}}$ be the path obtained from ${\omega}\cap\eta$ by removing all the space--time clusters of ${\omega}\cap\eta$ which do not intersect $\eta\times\{0\}$. The path~$\smash{\widetilde{\omega}}$ is still admissible, i.e., it is a sequence of configurations such that each configuration communicates with its successor. Let $i\in\{\,0,\dots,r\,\}$. We have $\smash{\widetilde{\omega}}_i\,\subset\,{\omega}_i\cap\eta$. Since $\smash{\widetilde{\omega}}_i$ is obtained from ${\omega}_i\cap\eta$ by removing some connected components of ${\omega}_i\cap\eta$, the second hypothesis on the domain~${\cal D}$ yields that $H(\smash{\widetilde{\omega}}_i)\,\leq\,H({\omega}_i\cap\eta)$. With the help of the third hypothesis on ${\cal D}$, we conclude that $\smash{\widetilde{\omega}}_i$ is in ${\cal D}$. In particular the whole path $\smash{\widetilde{\omega}}$ stays in ${\cal D}$. Suppose that the path $\smash{\widetilde{\omega}}$ leaves $\overline{\cal A}$ at some index $i$, so that $\smash{\widetilde{\omega}}_{i}\neq{\omega}_{i}\cap\eta$. We consider two cases. \noindent $\bullet\quad\smash{\widetilde{\omega}}_{i-1}={\omega}_{i-1}\cap\eta$. In this case, the spin flip between ${\omega}_{i-1}\cap\eta$ and ${\omega}_{i}\cap\eta$ creates a new STC which does not intersect $\eta\times\{0\}$, hence $\smash{\widetilde{\omega}}_{i}={\omega}_{i-1}\cap\eta$. This contradicts the fact that $\smash{\widetilde{\omega}}$ leaves $\overline{\cal A}$ at index $i$. \noindent $\bullet\quad\smash{\widetilde{\omega}}_{i-1}\neq {\omega}_{i-1}\cap\eta$.
Since we have also $\smash{\widetilde{\omega}}_i\neq {\omega}_{i}\cap\eta$, then by lemma \ref{irrazionale} we have the strict inequality $$\max\big(H(\smash{\widetilde{\omega}}_{i-1}),H(\smash{\widetilde{\omega}}_i)\big)\,<\, \max\big(H({\omega}_{i-1}\cap\eta),H({\omega}_i\cap\eta)\big)\,\leq\, E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})\,.$$ However, since $\smash{\widetilde{\omega}}$ leaves $\overline{\cal A}$ at index $i$, we have also $$\max\big(H(\smash{\widetilde{\omega}}_{i-1}),H(\smash{\widetilde{\omega}}_i)\big)\,\geq\, E(\overline{\cal A},{\cal X}\setminus\overline{\cal A})\,,$$ which is absurd. Thus the path $\smash{\widetilde{\omega}}$ also stays in $\overline{\cal A}$. Since $$H(\smash{\widetilde{\omega}}_r)\,\leq\,H({\omega}_r\cap\eta)\,,\quad \smash{\widetilde{\omega}}_r\subset{\omega}_r\cap\eta=\eta\,, $$ we have $\smash{\widetilde{\omega}}_r=\eta$ by lemma \ref{irrazionale}. The path $\smash{\widetilde{\omega}}$ is included in $\eta\times\{\,0,\dots,r\,\}$, hence, for any connected component $C$ of $\eta$, the space--time cluster of $\smash{\widetilde{\omega}}$ containing $C\times\{\,r\,\}$ is included in $C\times\{\,0,\dots,r\,\}$, so that its intersection with $\eta\times\{\,0\,\}$, which is not empty by construction, must be equal to $C\times\{\,0\,\}$. \end{proof} \subsection{Triangle inequality for the diameters of the STCs.} \label{sectriangle} In the sequel, we consider a trajectory of the process $({\sigma}_{Q,t},t\geq 0)$ in a finite box~$Q$ and we study its space--time clusters.
For $s<t$, we define $$\displaylines{ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,t)\,=\, \max\Big( \kern-3pt \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)}{ \scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C} , \kern-7pt \max_{ \phantom{\Big(}\tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)}{\scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})=\varnothing}} \kern-9pt {\mathrm{diam }}\,{\cal C} \Big)\,.}$$ The main point of this awkward definition is the following triangle inequality. \bl{triangle} For any $s<u<t$, we have $${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,t)\,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,u)\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(u,t)\,.$$ \end{lemma} \begin{proof} When we look at the restriction to the time intervals $(s,u)$ and $(u,t)$ of a ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}(s,t)$ which is alive at time $u$, this ${\hbox{\footnotesize\rm STC}}$ splits into several ${\hbox{\footnotesize\rm STC}}$ belonging to ${\hbox{\footnotesize\rm STC}}(s,u)\cup {\hbox{\footnotesize\rm STC}}(u,t)$. Yet the diameter of the initial ${\hbox{\footnotesize\rm STC}}$ is certainly at most the sum of all the diameters of the ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}(s,u)\cup {\hbox{\footnotesize\rm STC}}(u,t)$ which are alive at time $u$. The proof is quite tedious; however, since this inequality is fundamental for our argument, we provide a detailed verification.
First, we have $$\sum_{ \tatop{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)} {\scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})\neq\varnothing}} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing }} \kern-15pt {\mathrm{diam }}\,{\cal C} \,\leq\, \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,u)} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing} } \kern-15pt {\mathrm{diam }}\,{\cal C}\,\, + \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(u,t)} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C}\,. $$ Next, if ${\cal C}\in{\hbox{\footnotesize\rm STC}}(s,t)$ and ${\cal C}\cap (Q\times\{\,u\,\})=\varnothing$, then ${\cal C}\in{\hbox{\footnotesize\rm STC}}(s,u)\cup{\hbox{\footnotesize\rm STC}}(u,t)$. Thus $$\sum_{ \tatop{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)} {\scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})\neq\varnothing}} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})=\varnothing }} \kern-15pt {\mathrm{diam }}\,{\cal C} \,\leq\, \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,u)} {\tatop{\scriptstyle {\cal C}\cap (Q\times\{\,s\,\})\neq\varnothing} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})=\varnothing}} } \kern-15pt {\mathrm{diam }}\,{\cal C}\,\, + \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(u,t)} {\tatop {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})=\varnothing} {\scriptstyle {\cal C}\cap (Q\times\{\,t\,\})\neq\varnothing}}} \kern-15pt {\mathrm{diam }}\,{\cal C}\,. 
$$ Summing the two previous inequalities, we get $$\displaylines{\sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)} {\scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C} \,\leq\, \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,u)} {\scriptstyle {\cal C}\cap (Q\times\{\,s,u\,\})\neq\varnothing} } \kern-15pt {\mathrm{diam }}\,{\cal C}\,\, + \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(u,t)} {\scriptstyle {\cal C}\cap (Q\times\{\,u,t\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C} \hfill\cr \,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,u)\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(u,t)\,.}$$ Moreover, if ${\cal C}\in{\hbox{\footnotesize\rm STC}}(s,t)$, ${\cal C}\cap (Q\times\{\,s,t\,\})=\varnothing$ and ${\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing$ then $${\mathrm{diam }}\,{\cal C}\,\leq\, \sum_{{ \tatop{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,u)} {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing}} {\scriptstyle {\cal C}\cap (Q\times\{\,s\,\})=\varnothing} }} \kern-15pt {\mathrm{diam }}\,{\cal C} \,+\, \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(u,t)} {\tatop {\scriptstyle {\cal C}\cap (Q\times\{\,u\,\})\neq\varnothing} {\scriptstyle {\cal C}\cap (Q\times\{\,t\,\})=\varnothing}}} \kern-15pt {\mathrm{diam }}\,{\cal C}\,.$$ Finally if ${\cal C}\in{\hbox{\footnotesize\rm STC}}(s,t)$, ${\cal C}\cap (Q\times\{\,s,u,t\,\})=\varnothing$ then ${\cal C}\in{\hbox{\footnotesize\rm STC}}(s,u)\cup{\hbox{\footnotesize\rm STC}}(u,t)$ and $${\mathrm{diam }}\,{\cal C}\,\leq\, \max_{ \phantom{\Big(}\tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,u)}{\scriptstyle {\cal C}\cap (Q\times\{\,s,u\,\})=\varnothing}} \kern-3pt {\mathrm{diam }}\,{\cal C} + \max_{ \phantom{\Big(}\tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(u,t)}{\scriptstyle {\cal 
C}\cap (Q\times\{\,u,t\,\})=\varnothing}} \kern-3pt {\mathrm{diam }}\,{\cal C} \,.$$ The two previous inequalities yield $$\max_{ \phantom{\Big(}\tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(s,t)}{\scriptstyle {\cal C}\cap (Q\times\{\,s,t\,\})=\varnothing}} \kern-3pt {\mathrm{diam }}\,{\cal C} \,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(s,u)\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(u,t)\,$$ and the proof is completed. \end{proof} \subsection{The diameter of the space--time clusters.} \label{diamostc} We consider boxes that grow slowly with $\beta$. This creates a major complication in the description of the energy landscape, but it allows us to obtain very strong estimates that will be used to control entropy effects in the dynamics of growing droplets. We make the following hypothesis on the volume of the box~$Q$. \medskip \noindent {\bf Hypothesis on $Q$.} The box~$Q$ is such that $|Q|\,=\,\exp o(\ln\beta)$, which means that $$\lim_{\beta\to\infty} \frac{\ln |Q|}{\ln\beta}\,=\,0\,.$$ Let $n\in\{\,0,\dots,d\,\}$. As in section~\ref{anad}, we consider a set of configurations ${\cal D}$ in the box~$Q$ satisfying the following hypothesis. \medskip \noindent {\bf Hypothesis on ${\cal D}$.} The configurations in~${\cal D}$ are such that: \noindent $\bullet\quad$There exists $v_{\cal D}$ (independent of $\beta$) such that $|\sigma|\leq v_{\cal D}$ for any $\sigma\in{\cal D}$. \noindent $\bullet\quad$If $\sigma\in{\cal D}$ and $C$ is a connected component of $\sigma$, then we have $$H_Q^{n\pm}(C)\,>\,H_Q^{n\pm}(\mathbf{-1})\,.$$ \noindent $\bullet\quad$If $\sigma\in{\cal D}$ and $\eta$ is such that $\eta\subset\sigma$ and $H_Q^{n\pm}(\eta)\leq H_Q^{n\pm}(\sigma)$, then $\eta\in{\cal D}$. \medskip \noindent The hypothesis on~${\cal D}$ ensures that the number of energy values of the configurations in ${\cal D}$ with $n\pm$ boundary conditions is bounded by a value independent of~$\beta$.
Indeed, for any $\sigma\in{\cal D}$, $$H_Q^{n\pm}(\sigma)\,=\,\sum_{C\in{\cal C}(\sigma)}H_Q^{n\pm}(C)\,,$$ where ${\cal C}(\sigma)$ is the set of the connected components of $\sigma$. Yet there are at most $v_{\cal D}$ elements in ${\cal C}(\sigma)$ and any element of ${\cal C}(\sigma)$ has volume at most $v_{\cal D}$, hence the number of possible values for~$H$ is at most $\smash{c(d)^{(v_{\cal D})^2}}$ where $c(d)$ is a constant depending on the dimension~$d$ only. Let next $$\delta_0<\delta_1<\cdots<\delta_p$$ be the possible values for the difference of the energies of two configurations of ${\cal D}$, i.e., $$\{\, \delta_0,\cdots,\delta_p\,\} \,=\, \{\,\big|H_Q^{n\pm}(\sigma)-H_Q^{n\pm}(\eta)\big|_+ :\sigma,\eta\in{\cal D}\,\}\,.$$ {\bf Notation}. We will study the space--time clusters associated with different processes. For $\alpha$ an initial configuration and $\zeta$ a boundary condition, we denote by $$ {\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,{\zeta}}_{Q,t}, s\leq t\leq u)$$ the ${\hbox{\footnotesize\rm STC}}$ associated with the trajectory of the process $({\sigma}^{\alpha,{\zeta}}_{Q,t})_{t\geq 0}$ during the time interval $[s,u]$. Accordingly, $$ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,{\zeta}}_{Q,t}, s\leq t\leq u)$$ is equal to ${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( s,u)$ computed for the ${\hbox{\footnotesize\rm STC}}$ of the process $({\sigma}^{\alpha,{\zeta}}_{Q,t})_{t\geq 0}$ on the time interval $[s,u]$. \bt{totcontrole} Let $n\in\{\,1,\dots,d\,\}$. For any $K>0$, there exists a value $D$ which depends only on~$v_{\cal D}$ and $K$ such that, for $\beta$ large enough, we have $$ \forall \alpha\in{\cal D} \qquad {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq\tau({\cal D}))\geq D \big) \,\leq\, \exp(-\beta K)\,.
$$ \end{theorem} \medskip \noindent To alleviate the formulas, we drop the superscripts which do not vary, like the boundary conditions $n\pm$ and sometimes the initial configuration $\alpha$. Throughout the proof we fix an integer $n\in\{\,1,\dots,d\,\}$ and ${\sigma}_{Q,t}$ stands for ${\sigma}^{\alpha,n\pm}_{Q,t}$. For~${\cal A}$ an arbitrary set and $t\geq 0$, we define the time~$\tau({\cal A},t)$ of exit from~${\cal A}$ after time~$t$ $$\tau({\cal A},t)\,=\,\inf\,\{\,s\geq t:\sigma_{Q,s}\not\in {\cal A}\,\}\,.$$ Let ${\cal E}$ be a subset of ${\cal D}$. We consider the decomposition of ${\cal E}$ into its maximal cycle compounds $\overline{\cal M}({\cal E})$ and we look at the successive jumps between the elements of $\overline{\cal M}({\cal E})$. For $\gamma\in{\cal E}$, we denote by $$\overline{\pi}(\gamma,{\cal E})$$ the maximal cycle compound of ${\cal E}$ containing $\gamma$. Let $\alpha\in{\cal E}$ be the initial configuration.
We define recursively a sequence of random times and maximal cycle compounds included in ${\cal E}$: $$\begin{matrix} \tau_0=0,&&\overline{\pi}_0=\overline{\pi}(\alpha,{\cal E}),\\ \tau_1=\tau(\overline{\pi}_0,\tau_0),&&\overline{\pi}_1=\overline{\pi}(\sigma_{Q,\tau_1}, {\cal E}),\\ \quad\vdots&&\quad\vdots\\ \tau_k=\tau(\overline{\pi}_{k-1},\tau_{k-1}),&& \overline{\pi}_k=\overline{\pi}(\sigma_{Q,\tau_k},{\cal E}),\\ \quad\vdots &&\quad\vdots\\ \tau_R=\tau(\overline{\pi}_{R-1},\tau_{R-1}),&& \overline{\pi}_R=\overline{\pi}(\sigma_{Q,\tau_R},{\cal E}),\\ \tau_{R+1}=\tau({\cal E}).&&\\ \end{matrix}$$ The sequence~$(\overline{\pi}_0,\ldots,\overline{\pi}_{R-1},\overline{\pi}_R)$ is the path of the maximal cycle compounds in ${\cal E}$ visited by~$(\sigma_{Q,t})_{t\geq 0}$ and is denoted by $\overline{\pi}({\cal E})$. We first obtain a control on the random length~$R({\cal E})$ of $\overline{\pi}({\cal E})$.
\bp{long} There exists a constant $c>0$ depending only on~$v_{\cal D}$ such that, for any subset ${\cal E}$ of ${\cal D}$, for $\beta$ large enough, $$\forall \alpha\in{\cal E} \quad\forall r\geq 1 \qquad {\mathbb P}\big(R({\cal E})\geq r\big) \,\leq\, \frac{1}{c}\exp(-\beta cr)\,.$$ \end{proposition} \begin{proof} Let us set $\overline{\cal A}_0=\overline{\pi}(\alpha,{\cal E})$. We write $${\mathbb P}(R({\cal E})=r)\,=\, \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_r \in\overline{\cal M}({\cal E})} {\mathbb P}\big( \overline{\pi}({\cal E})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_r)\big)\,. $$ Let $\overline{\cal A}_1,\dots,\overline{\cal A}_r$ be a fixed path in $\overline{\cal M}({\cal E})$.
With the help of the Markov property, we have $$\displaylines{ {\mathbb P}\big( \overline{\pi}({\cal E})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_r)\big) \hfill\cr \,=\,\sum_{\alpha_1\in\overline{\cal A}_1\cap\partial\overline{\cal A}_0,\dots, \alpha_r\in\overline{\cal A}_r\cap\partial\overline{\cal A}_{r-1}} \kern-7pt {\mathbb P}\bigg( \lower7pt\vbox{ \hbox{ $\overline{\pi}({\cal E})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_r)$} \hbox{ $\sigma_{Q,\tau_1}=\alpha_1,\,\dots,\, \sigma_{Q,\tau_{r}}=\alpha_r$ }}\bigg) \cr \,=\,\sum_{\alpha_1\in\overline{\cal A}_1\cap\partial\overline{\cal A}_0,\dots, \alpha_r\in\overline{\cal A}_r\cap\partial\overline{\cal A}_{r-1}} {\mathbb P}\big( \sigma^\alpha_{Q,\tau_1}=\alpha_1\big) \,\cdots\, {\mathbb P}\big( \sigma^{\alpha_{r-1}}_{Q,\tau_{1}}=\alpha_r\big) \,.}$$ Using the hypothesis on $Q$ and ${\cal D}$, for $\varepsilon>0$ and for $\beta$ large enough, we can bound the prefactor appearing in corollary~\ref{exitcom} by $${\deg(\alpha)}^{|{\cal X}|}\,\leq\, \exp(\beta{\varepsilon})\,.$$ For $i\in\{\,1,\dots,r\,\}$, let $a_i$ in ${\cal E}$ be such that $H(a_i)= E(\overline{\cal A}_{i-1},{\cal X}\setminus\overline{\cal A}_{i-1})$.
Applying next corollary~\ref{exitcom}, we obtain $$\displaylines{ {\mathbb P}\big( \overline{\pi}({\cal E})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_r)\big) \hfill\cr \,\leq\, \kern -7pt \sum_{\alpha_1\in\overline{\cal A}_1\cap\partial\overline{\cal A}_0,\dots, \alpha_r\in\overline{\cal A}_r\cap\partial\overline{\cal A}_{r-1}} \kern -12pt \exp(r\beta{\varepsilon}) \prod_{i=1}^r \exp\Big(-\beta\max \big(0,H(\alpha_i)- H(a_i)\big) \Big) \hfill\cr \,\leq\, \kern -7pt \sum_{\alpha_1\in\overline{\cal A}_1\cap\partial\overline{\cal A}_0,\dots, \alpha_r\in\overline{\cal A}_r\cap\partial\overline{\cal A}_{r-1}} \kern -17pt \exp(r\beta{\varepsilon}) \exp\Big(-\beta \delta_1\big|\big\{\,i\leq r: H(\alpha_i)>H(a_i) \,\big\}\big|\Big)\,. }$$ For $1\leq i\leq r$, the point $\alpha_i$ belongs to $\partial \overline{\cal A}_{i-1}$; by lemma~\ref{exitval}, this implies that $H(\alpha_i)\neq H(a_i)$. Moreover there is no strictly decreasing sequence of energy values of length larger than~$p+2$ (recall that $\delta_0<\delta_1<\cdots<\delta_p$ are the possible values for the difference of the energies of two configurations of ${\cal D}$).
Therefore $$\big|\big\{\,i\leq r: H(\alpha_i)>H(a_i) \,\big\}\big| \,\geq\,\left\lfloor\frac{r}{p+2}\right\rfloor \,.$$ We conclude that $$\displaylines{ {\mathbb P}\big( \overline{\pi}({\cal E})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_r)\big) \,\leq\, |{\cal E}|^r \exp\Big( r\beta{\varepsilon} -{\beta \delta_1} \left\lfloor\frac{r}{p+2}\right\rfloor \Big) ,}$$ and $${\mathbb P}(R({\cal E})=r)\,\leq\, \big|\overline{\cal M}({\cal E})\big|^r |{\cal E}|^r \exp\Big( r\beta{\varepsilon} -{\beta \delta_1} \left\lfloor\frac{r}{p+2}\right\rfloor \Big) \,.$$ By lemmas~\ref{disjoint} and~\ref{fondo}, the map which associates to each maximal cycle compound its bottom is one to one, hence $\big|\overline{\cal M}({\cal E})\big|\leq |{\cal E}|$. The hypothesis on ${\cal D}$ yields that, for $\varepsilon>0$ and for $\beta$ large enough, $$ |{\cal E}|\,\leq\, {v_{\cal D}} \big|Q\big|^{v_{\cal D}} \,\leq\, \exp(\beta{\varepsilon})\,, $$ whence $$ {\mathbb P}(R({\cal E})=r)\,\leq\, \exp\Big( 3r\beta{\varepsilon} -{\beta \delta_1} \left\lfloor\frac{r}{p+2}\right\rfloor \Big) \,.$$ Choosing $\varepsilon$ small enough and resumming this inequality, we obtain the desired estimate. \end{proof} \noindent We start now the proof of theorem~\ref{totcontrole}.
We consider the decomposition of ${\cal D}$ into its maximal cycle compounds $\overline{\cal M}({\cal D})$ in order to reduce the problem to the case where ${\cal D}$ is a cycle compound. We decompose $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau({\cal D}) )\geq D \big)\,\leq\, {\mathbb P}(R ( {\cal D}) \geq r) \hfill\cr \,+\, \sum_{0\leq k<r} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau ( {\cal D}) )\geq D ,\,R ( {\cal D}) =k\big) \,. }$$ Let us fix $k<r$. We write, using the notation defined before proposition~\ref{long}, and setting $\overline{\cal A}_0=\overline{\pi}(\alpha,{\cal D})$, $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau({\cal D}) )\geq D ,\,R ( {\cal D}) =k\big) \hfill\cr \,\leq\, \kern-9pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}({\cal D})} {\mathbb P}\bigg( \raise-7pt\vbox{\hbox{ $\sum_{0\leq j\leq k}{\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_j, \tau_{j+1})\geq D$} \hbox{$ \quad\overline{\pi}({\cal D})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_k)$} }\bigg) \hfill\cr \,\leq\, \kern-9pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}({\cal D})} \sum_{j=0}^{k} \sum_{\alpha_j\in\overline{\cal A}_j} {\mathbb P}\bigg( \kern-3pt \raise-7pt\vbox{\hbox{ $ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, \tau_j\leq t\leq \tau_{j+1})\geq D/r$ } \hbox{$\,\,\, {\sigma}^{\alpha,n\pm}_{Q,\tau_j}=\alpha_j,\,\,\, \overline{\pi}({\cal D})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_k)$} }\bigg) \hfill\cr \,\leq\, \kern-9pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}({\cal D})} \sum_{j=0}^{k} \sum_{\alpha_j\in\overline{\cal A}_j} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha_j,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}_{j}))\geq D/r \big) .}$$ 
Given a value $K$, we choose $r$ such that $cr>2K$, where $c$ is the constant appearing in proposition~\ref{long}. We choose then $\varepsilon>0$ such that $r\varepsilon < K$. By lemmas~\ref{disjoint} and~\ref{fondo}, the map which associates to each maximal cycle compound its bottom is one to one, hence $$\big| \overline{\cal M}({\cal D}) \big|\,\leq\, |{\cal D}|\,\leq\, \exp(\beta\varepsilon)\,.$$ The last inequality holds for $\beta$ large, thanks to the hypothesis on ${\cal D}$. Combining the previous estimates, we obtain, for $\beta$ large enough, $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau({\cal D}) )\geq D \big)\,\leq\, \frac{1}{c} \exp(-2\beta K) \,+\, \hfill\cr r^2\exp(\beta r\varepsilon) \max_{\tatop{\scriptstyle \overline{\cal A}\in\overline{\cal M}({\cal D})} {\scriptstyle\alpha\in\overline{\cal A}}} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}))\geq D/r \big) \,. }$$ To conclude, we need to control the size of the space--time clusters created inside a cycle compound $\overline{\cal A}$ included in ${\cal D}$. More precisely, we need to prove the statement of theorem~\ref{totcontrole} for a cycle compound. We shall prove the following result by induction on the depth of the cycle compound. \medskip\noindent {\bf Induction hypothesis at step $i$:} For any $K>0$, there exists $D_i$ depending only on $v_{\cal D}$ and $K$ such that, for $\beta$ large enough, for any cycle compound $\overline{\cal A}$ included in ${\cal D}$ having depth less than or equal to $\delta_i$, $$ \forall \alpha\in\overline{\cal A} \qquad {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq\tau(\overline{\cal A})) \geq D_i \big)\,\leq\, \exp(-\beta K)\,. 
$$ Once this result is proved, to conclude the proof of theorem~\ref{totcontrole}, we simply choose $D$ such that $$\frac{D}{r}\,>\,\max\,\big\{\,D_i(2K):0\leq i\leq p\,\big\}$$ where $D_i(2K)$ is the constant associated to $2K$ in the induction hypothesis. We proceed next to the inductive proof. Suppose that $\overline{\cal A}$ is a cycle compound of depth~$0$. Then $\overline{\cal A}=\{\,\eta\,\}$ is a singleton and therefore $${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau(\overline{\cal A}))\,\leq\, \sum_{C\in{\cal C}(\eta)}{\mathrm{diam }}\,C+1 \,\leq\,v_{\cal D}+1 \,.$$ Let $i\geq 0$. Suppose that the result has been proved for all the cycle compounds included in ${\cal D}$ of depth less than or equal to $\delta_i$. Let now $\overline{\cal A}$ be a cycle compound of depth $\delta_{i+1}$. By lemma~\ref{fondo} the bottom of~$\overline{\cal A}$ consists of a unique configuration $\eta$. Let $\alpha\in\overline{\cal A}$ be a starting configuration. We study next the process $({\sigma}^{\alpha,n\pm}_{Q,t})_{t\geq 0}$, and unless stated otherwise, the ${\hbox{\footnotesize\rm STC}}$ and the quantities like ${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}$ are those associated to this process. We define the time~$\theta$ of the last visit to~$\eta$ before the time~$\tau(\overline{\cal A})$, i.e., $$\theta\,=\,\sup\{\,s\leq \tau(\overline{\cal A}):\sigma_{Q,s}=\eta\,\}$$ (if the process does not visit~$\eta$ before~$\tau(\overline{\cal A})$, then we take~$\theta=0$). 
Considering the random times $\tau(\overline{\cal A}\setminus\{\,\eta\,\})$, $\theta$ and $\tau(\overline{\cal A})$, we have by lemma~\ref{triangle} $$\displaylines{{\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau(\overline{\cal A}))\,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau(\overline{\cal A}\setminus\{\,\eta\,\}))\hfill\cr \,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau(\overline{\cal A}\setminus\{\,\eta\,\}), \theta)\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\theta,\tau(\overline{\cal A}))\,. }$$ Indeed, if $\tau(\overline{\cal A}\setminus\{\,\eta\,\})<\tau(\overline{\cal A})$, then $\tau(\overline{\cal A}\setminus\{\,\eta\,\})\leq \theta\leq\tau(\overline{\cal A})$ and the above inequality holds. Otherwise, if $\tau(\overline{\cal A}\setminus\{\,\eta\,\})=\tau(\overline{\cal A})$, then $\theta=0$ and the second term of the right--hand side vanishes. Let $D>0$ and let us write $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau(\overline{\cal A})) \geq D\big)\,\leq\, {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau(\overline{\cal A}\setminus\{\,\eta\,\}))\geq D/3\big) \hfill\cr \,+\, {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau(\overline{\cal A}\setminus\{\,\eta\,\}), \theta)\geq D/3\big)\cr \,+\, {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\theta,\tau(\overline{\cal A}))\geq D/3\big)\,. }$$ We will now consider different starting points, hence we use the more explicit notation for the ${\hbox{\footnotesize\rm STC}}$.
From the Markov property, we have \begin{multline*} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, \tau(\overline{\cal A}\setminus\{\,\eta\,\})\leq t\leq \theta) \geq D/3 \big) \,\leq\,\\ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\theta) \geq D/3 \big) \end{multline*} and $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, \theta\leq t\leq\tau(\overline{\cal A})) \geq D/3 \big) \hfill\cr \,\leq\, {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\tau(\overline{\cal A})) \geq D/3,\, \tau(\overline{\cal A})= \tau(\overline{\cal A}\setminus\{\,\eta\,\}) \big) \cr \,\leq\, {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\tau(\overline{\cal A}\setminus\{\,\eta\,\})) \geq D/3 \big)\, }$$ whence $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A})) \geq D\big)\,\leq\,\hfill\cr 2\sup_{\gamma\in\overline{\cal A}}{\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\gamma,n\pm}_{Q,t}, 0\leq t\leq\tau(\overline{\cal A}\setminus\{\,\eta\,\}))\geq D/3\big) \cr + {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\theta) \geq D/3 \big)\,. }$$ We first control the size of the space--time clusters created during an excursion outside the bottom~$\eta$. 
\bl{srcex} For any $K'>0$, there exists $D'$ depending only on $v_{\cal D},K'$ such that, for $\beta$ large enough, for any $\alpha\in\overline{\cal A}$, $$ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}\setminus\{\,\eta\,\}) )\geq D' \big)\,\leq\, \exp(-\beta K')\,.$$ \end{lemma} \begin{proof} The argument is very similar to the initial step of the proof of theorem~\ref{totcontrole}, i.e., we reduce the problem to the maximal cycle compounds included in $\overline{\cal A}\setminus\{\,\eta\,\}$. Although it is possible to include these two steps in a more general result, for the clarity of the exposition, we prefer to repeat the argument rather than to introduce additional notations. We consider the decomposition of $\overline{\cal A}\setminus\{\,\eta\,\}$ into its maximal cycle compounds $\overline{\cal M}(\overline{\cal A}\setminus\{\,\eta\,\})$. Each cycle compound of $\overline{\cal M}(\overline{\cal A}\setminus\{\,\eta\,\})$ has a depth strictly less than $\delta_{i+1}$, hence we can apply the induction hypothesis and control the size of the space--time clusters created inside such a cycle compound. We decompose next $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau(\overline{\cal A}\setminus\{\,\eta\,\}) )\geq D' \big)\,\leq\, {\mathbb P}(R ( \overline{\cal A}\setminus\{\,\eta\,\}) \geq r) \hfill\cr \,+\, \sum_{0\leq k<r} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau ( \overline{\cal A}\setminus\{\,\eta\,\}) )\geq D' ,\,R ( \overline{\cal A}\setminus\{\,\eta\,\}) =k\big) \,. 
}$$ Let us fix $k<r$ and, denoting simply $\overline{\cal M}=\overline{\cal M}(\overline{\cal A}\setminus\{\,\eta\,\})$, we write, using the notation defined before proposition~\ref{long}, and setting $\overline{\cal A}_0=\overline{\pi}(\alpha, \overline{\cal A}\setminus\{\,\eta\,\} )$, $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau(\overline{\cal A}\setminus\{\,\eta\,\}) )\geq D' ,\,R ( \overline{\cal A}\setminus\{\,\eta\,\}) =k\big) \hfill\cr \,\leq\, \kern-7pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}} {\mathbb P}\bigg( \raise-7pt\vbox{\hbox{ $\sum_{0\leq j\leq k}{\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_j, \tau_{j+1})\geq D'$} \hbox{$ \quad\overline{\pi}(\overline{\cal A}\setminus\{\,\eta\,\})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_k)$} }\bigg) \hfill\cr \,\leq\, \kern-7pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}} \sum_{0\leq j\leq k} \sum_{\alpha_j\in\overline{\cal A}_j} \kern-2pt {\mathbb P}\bigg( \raise-7pt\vbox{\hbox{ $ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, \tau_j\leq t\leq \tau_{j+1})\geq D'/r$ } \hbox{$\,\, {\sigma}^{\alpha,n\pm}_{Q,\tau_j}=\alpha_j,\, \overline{\pi}(\overline{\cal A}\setminus\{\,\eta\,\})= (\overline{\cal A}_0,\overline{\cal A}_1,\dots,\overline{\cal A}_k)$} }\bigg) \hfill\cr \,\leq\, \kern-9pt \sum_{\overline{\cal A}_1,\dots,\overline{\cal A}_k \in\overline{\cal M}} \sum_{0\leq j\leq k} \sum_{\alpha_j\in\overline{\cal A}_j} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha_j,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}_{j}))\geq D'/r \big) .}$$ Given a value $K'$, we choose $r$ such that $cr>2K'$, where $c$ is the constant appearing in proposition~\ref{long} and $D'$ such that $D'/r>D_i(2K')$ where $D_i(2K')$ is the value given by the induction hypothesis at step~$i$ associated to $2K'$. 
Notice that this value is uniform with respect to the cycle compound $\overline{\cal A}\subset{\cal D}$ of depth $\delta_{i+1}$, because all the cycle compounds of $\overline{\cal M}$ are included in ${\cal D}$ and have a depth at most equal to $\delta_i$. We choose then $\varepsilon>0$ such that $r\varepsilon < K'$. By lemmas~\ref{disjoint} and~\ref{fondo}, the map which associates to each maximal cycle compound its bottom is one to one, hence $$\big| \overline{\cal M}(\overline{\cal A}\setminus\{\,\eta\,\}) \big|\,\leq\, \big| \overline{\cal A}\setminus\{\,\eta\,\} \big|\,\leq\, |{\cal D}|\,\leq\, \exp(\beta\varepsilon)\,.$$ The last inequality holds for $\beta$ large, thanks to the hypothesis on ${\cal D}$. Combining the previous estimates, we obtain, for $\beta$ large enough, $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \tau(\overline{\cal A}\setminus\{\,\eta\,\}) )\geq D' \big)\,\leq\, \hfill\cr \big|\overline{\cal M}(\overline{\cal A}\setminus\{\,\eta\,\})\big|^{r-1}r^2 \big|\overline{\cal A}\setminus\{\,\eta\,\}\big| \exp(-2\beta K') \,+\, \frac{1}{c}\exp(-\beta cr) \cr \,\leq\, r^2\exp(\beta(r\varepsilon-2 K')) + \frac{1}{c} \exp(-2\beta K') \,. }$$ The last quantity is less than $\exp(-\beta K')$ for $\beta$ large enough. \end{proof} \noindent The remaining task is to control the space--time clusters between $\tau(\overline{\cal A}\setminus\{\,\eta\,\})$ and $\theta$, which amounts to controlling $${\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\theta) \geq D/3 \big)\,.$$ We suppose that $\tau(\overline{\cal A}\setminus\{\,\eta\,\})<\tau(\overline{\cal A})$ (otherwise $\theta=0$) and that the process is in~$\eta$ at time~$0$.
To the continuous--time trajectory $\big( {\sigma}^{\eta,n\pm}_{Q,t}, \,0\leq t\leq \theta\big)$, we associate a discrete path ${\omega}$ as follows: $$\begin{matrix} T_0=0\,,&&{\omega}_0=\sigma_{Q,0}=\eta\,,\\ T_1=\min\,\{\,t>T_0:\sigma_{Q,t}\neq{\omega}_0\,\}\,, && {\omega}_1=\sigma_{Q,T_1}\,,\\ T_2=\min\,\{\,t>T_1:\sigma_{Q,t}\neq{\omega}_1\,\}\,, && {\omega}_2=\sigma_{Q,T_2}\,,\\ \quad\vdots&&\quad\vdots\\ T_{k}=\min\,\{\,t>T_{k-1}:\sigma_{Q,t}\neq{\omega}_{k-1}\,\}\,,&& {\omega}_k=\sigma_{Q,T_{k}}\,,\\ \quad\vdots &&\quad\vdots\\ T_{S-1}=\min\,\{\,t>T_{S-2}:\sigma_{Q,t}\neq{\omega}_{S-2}\,\}\,,&& {\omega}_{S-1}=\sigma_{Q,T_{S-1}}\,,\\ T_S=\theta\,, &&{\omega}_{S}=\sigma_{Q,T_{S}}=\eta\,. \end{matrix}$$ Let $R$ be the number of visits of the path ${\omega}$ to $\eta$, i.e., $$R\,=\,\big|\,\{\,1\leq i\leq S:{\omega}_i=\eta\,\}\,\big|\,.$$ We define then the indices $\phi(0),\dots,\phi(R)$ of the successive visits to~$\eta$ by setting $\phi(0)=0$ and for $i\geq 1$, $$\phi(i)\,=\,\min\,\{\,k: k>\phi(i-1)\,,\quad {\omega}_k=\eta\,\}\,.$$ The times $\tau_0,\dots,\tau_R$ corresponding to these indices are $$\tau_i=T_{\phi(i)}\,,\quad 0\leq i\leq R\,.$$ Each subpath $$\smash{\widetilde{\omega}}^i=({\omega}_k,\phi(i)\leq k\leq\phi(i+1))$$ is an excursion outside $\eta$ inside $\overline{\cal A}$. We denote by ${\cal C}(\eta)$ the connected components of~$\eta$. Let $C$ belong to ${\cal C}(\eta)$. By lemma~\ref{stccc}, the space--time sets $C\times\{\phi(i)\}$ and $C\times\{\phi(i+1)\}$ belong to the same space--time cluster of~$\smash{\widetilde{\omega}}^i$, therefore they are also in the same space--time cluster of ${\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})$. Thus the space--time set $$C\times\{\,\tau_0,\cdots,\tau_R\,\}$$ belongs to one space--time cluster of ${\hbox{\footnotesize\rm STC}}(0,\theta)$. The following computations deal with the process $({\sigma}^{\eta,n\pm}_{Q,t})_{t\geq 0}$ starting from $\eta$ at time~$0$. 
Hence all the ${\hbox{\footnotesize\rm STC}}$ and the exit times are those associated to this process. Let ${\cal C}$ belong to ${\hbox{\footnotesize\rm STC}}(0,\theta)$. We consider two cases: \smallskip \noindent $\bullet\quad$ If ${\cal C}\cap\big(\eta\times\{\,\tau_0,\cdots,\tau_R\,\}\big)\,=\, \varnothing$, then there exists $i\in\{\,0,\dots, R-1\,\}$ such that $${\cal C}\in {\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})\,, \qquad{\cal C}\cap (\eta\times\{\,\tau_i,\tau_{i+1}\,\})=\varnothing\,.$$ Therefore $$ {\mathrm{diam }}\,{\cal C} \,\leq\, \max_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})}{ \scriptstyle {\cal C}\cap (Q\times\{\,\tau_i,\tau_{i+1}\,\})=\varnothing}} \kern-7pt {\mathrm{diam }}\,{\cal C}\,. $$ \smallskip \noindent $\bullet\quad$ If $\smash{{\cal C}\cap\big(\eta\times\{\,\tau_0,\cdots,\tau_R\,\}\big)\,\neq\, \varnothing}$, then there exists a connected component $C\in{\cal C}(\eta)$ and $i\in\{\,0,\dots,R\,\}$ such that ${\cal C}\cap\big(C\times\{\,\tau_i\,\}\big)\,\neq\, \varnothing$. From the previous discussion, we conclude that $C\times\{\,\tau_0,\cdots,\tau_R\,\}$ is included in ${\cal C}$. In fact, for any $C$ in ${\cal C}(\eta)$, we have $$\text{either}\quad {\cal C}\,\cap\,\big(C\times\{\,\tau_0,\cdots,\tau_R\,\}\big)\,=\,\varnothing \quad\text{or}\quad C\times\{\,\tau_0,\cdots,\tau_R\,\}\,\subset\,{\cal C}\,.$$ For $C$ in ${\cal C}(\eta)$ and $i\in\{\,0,\dots,R-1\,\}$, we denote by ${\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C)$ the space--time cluster of ${\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})$ containing $C\times\{\,\tau_i,\tau_{i+1}\,\}$. 
The space--time cluster ${\cal C}$ is thus included in the set $$\bigcup_{\tatop{\scriptstyle C\in{\cal C}(\eta)} {\scriptstyle C\times\{\,0,\,\theta\,\}\subset{\cal C}}} \bigcup_{0\leq i<R} {\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C)\,.$$ For any $C\in{\cal C}(\eta)$, the space--time set $$\bigcup_{0\leq i<R} {\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C)\,$$ is connected, and its diameter is bounded by $$2\max_{0\leq i <R} {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C) \,.$$ The factor~$2$ is due to the fact that the two sites realizing the diameter might belong to two different excursions outside~$\eta$. Therefore $${\mathrm{diam }}\,{\cal C}\,\leq\, \sum_{\tatop{\scriptstyle C\in{\cal C}(\eta)} {\scriptstyle C\times\{\,0,\,\theta\,\}\subset{\cal C}}} 2\max_{0\leq i <R} {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C) \,.$$ From the inequality obtained in the first case, we conclude that $$\max_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(0,\theta)}{ \scriptstyle {\cal C}\cap (Q\times\{\,0,\,\theta\,\})=\varnothing}} \kern-7pt {\mathrm{diam }}\,{\cal C} \,\leq\, \max_{0\leq i<R} \max_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})}{ \scriptstyle {\cal C}\cap (Q\times\{\,\tau_i,\tau_{i+1}\,\})=\varnothing}} \kern-7pt {\mathrm{diam }}\,{\cal C}\,.$$ We sum next the inequality of the second case over all the elements of ${\hbox{\footnotesize\rm STC}}(0,\theta)$ intersecting $Q\times \{\,0,\,\theta\,\}$. 
Since two distinct ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(0,\theta)$ do not intersect at time $0$, they don't meet the same connected components of $\eta$ and we obtain $$\displaylines{ \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}(0,\theta)}{ \scriptstyle {\cal C}\cap (Q\times\{\,0,\theta\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C} \,\leq\, \sum_{C\in{\cal C}(\eta)} 2\max_{0\leq i <R} {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1})(C) \,. }$$ Putting together the two previous inequalities, we conclude that $${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0, \theta)\,\leq\, 2|\eta| \max_{0\leq i <R} {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1}) \,.$$ We write $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3 \big) \,\leq\, \hfill\cr {\mathbb P}\big(R\geq r\big) +\sum_{0\leq k< r} {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3,\,R=k \big) \,.}$$ For a fixed integer~$k$, the previous inequalities and the Markov property yield $$\displaylines{{\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3,\,R=k \big) \hfill\cr \,\leq\, {\mathbb P}\big( 2|\eta| \max_{0\leq i <k} {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(\tau_i,\tau_{i+1}) \geq D/3,\,R=k \big) \cr \,\leq\, k {\mathbb P}\big( 2|\eta| {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\tau_1) \geq D/3 ,\,\tau_1< \tau(\overline{\cal A}) \big) \,.}$$ Recalling that $$T_1=\min\,\{\,t>T_0:\sigma_{Q,t}\neq\eta\,\}\,,\quad \tau_1=\min\,\{\,t>T_1:\sigma_{Q,t}=\eta\,\}\,,$$ we claim that, on the event $\tau_1 <\tau(\overline{\cal A})$, we have $$ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,\tau_{1})\,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(T_1,\tau_{1})+1\,. $$ Indeed, let ${\cal C}$ belong to ${\hbox{\footnotesize\rm STC}}(0,\tau_{1})$. 
If ${\cal C}$ is in ${\hbox{\footnotesize\rm STC}}(T_1,\tau_{1})$, then obviously $${\mathrm{diam }}\,{\cal C}\,\leq\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(T_1,\tau_{1})\,. $$ Otherwise, the set ${\cal C}\cap(Q\times [T_1,\tau_1])$ is the union of several elements of ${\hbox{\footnotesize\rm STC}}(T_1,\tau_{1})$, say ${\cal C}_1,\dots,{\cal C}_r$, which all intersect $Q\times\{\,T_1\,\}$. The spin flip leading from $\eta$ to $\sigma_{Q,T_1}$ can change the sum of the diameters of the ${\hbox{\footnotesize\rm STC}}$ present at time $0$ by at most one. This spin flip occurred in ${\cal C}$ if and only if $${\cal C}\cap(Q\times \{\,0\,\})\neq {\cal C}\cap(Q\times \{\,T_1\,\})\,,$$ thus $$ {\mathrm{diam }}\,{\cal C}\,\leq\, \sum_{1\leq i\leq r} {\mathrm{diam }}\,{\cal C}_i\,+\, 1_{ {\cal C}\cap(Q\times \{\,0\,\})\neq {\cal C}\cap(Q\times \{\,T_1\,\}) }\,.$$ Summing over all the elements of ${\hbox{\footnotesize\rm STC}}(0,\tau_1)$ which intersect $Q\times\{\,0\,\}$, we obtain the desired inequality.
Reporting in the previous computation and conditioning with respect to ${\sigma}^{\eta,n\pm}_{Q,T_1}$, we get $$\displaylines{{\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3,\,R=k \big) \hfill\cr \,\leq\, k {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( T_1,\tau_1) \geq \frac{D}{6 |\eta|}-1 ,\,\tau_1< \tau(\overline{\cal A}) \big) \cr \,\leq\, \sum_{\gamma\in\overline{\cal A}\setminus \{\,\eta\,\}} k {\mathbb P}\big( {\sigma}^{\eta,n\pm}_{Q,T_1}=\gamma,\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( T_1,\tau_1) \geq \frac{D}{6 |\eta|}-1 ,\,\tau_1< \tau(\overline{\cal A}) \big) \cr \,\leq\, \big|\overline{\cal A} \big|k\, \max_{\gamma\in \overline{\cal A}} {\mathbb P}\Big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\gamma,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}\setminus\{\,\eta\,\})) \geq \frac{D}{6 |\eta|}-1 \Big) \,.}$$ Summing over $k$, we arrive at $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3 \big) \,\leq\, {\mathbb P}\big(R\geq r\big) \hfill\cr + r^2 \big|\overline{\cal A} \big|\, \max_{\gamma\in \overline{\cal A}}\, {\mathbb P}\Big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\gamma,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}\setminus\{\,\eta\,\})) \geq \frac{D}{6 |\eta|}-1 \Big) \,.}$$ By the Markov property, the variable $R$ satisfies for any $n,m\geq 0$, \begin{multline*} {\mathbb P}\big(R\geq n+m)\,=\, {\mathbb P}\big(\phi(n+m)<\tau(\overline{\cal A})\big) \\ \,=\, {\mathbb P}\big(\phi(n)<\tau(\overline{\cal A}),\, \phi(n+m)<\tau(\overline{\cal A})\big) \\ \,=\, {\mathbb P}\big(\phi(n)<\tau(\overline{\cal A})\big) {\mathbb P}\big(\phi(m)<\tau(\overline{\cal A})\big) \\ \,=\, {\mathbb P}\big(R\geq n) {\mathbb P}\big(R\geq m)\,.
\end{multline*} Therefore the law of~$R$ is the discrete geometric distribution and $$\forall n\geq 0\qquad {\mathbb P}\big(R\geq n)\,=\, \bigg( \frac{E(R)} {1+E(R)} \bigg)^n\,\leq\, \exp-\frac {n}{1+E(R)}\,.$$ By corollary~\ref{exitcom}, or more precisely its discrete--time counterpart, for $\beta$ large enough, $${E(R)}\,\leq\, \exp\big(\frac{3}{2}\beta\depth(\overline{\cal A})\big)\,\leq\, \exp(2\beta\delta_{i+1})-1 \,.$$ Choosing $$r=\beta^2 \exp(2\beta\delta_{i+1})\,,$$ we obtain from the previous inequalities that $$\displaylines{ {\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( 0,\theta) \geq D/3 \big) \,\leq\, \exp-\beta^2 + \beta^4 \exp(4\beta\delta_{i+1}) \,\big|\overline{\cal A} \big|\,\times \hfill\cr \hfill \max_{\gamma\in \overline{\cal A}}\, {\mathbb P}\Big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\gamma,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A}\setminus\{\,\eta\,\})) \geq \frac{D}{6v_{\cal D}}-1\Big) \,.}$$ We complete now the induction step at rank $i+1$. Let $K>0$ be given. Let $K'>0$ be such that $4\delta_{i+1}-K'<-3K$ and let $D'$ be associated to $K'$ as in lemma~\ref{srcex}. Let $D''$ be such that $$\frac{D''}{6v_{\cal D}}-1> D'\,,\quad \frac{D''}{3}> D' \,.$$ Thanks to the hypothesis on ${\cal D}$ and $Q$, for $\beta$ large enough, $$\big|\overline{\cal A} \big|\, \,\leq\, \big|{\cal D} \big|\, \leq\,\exp(\beta K)\,.$$ From the previous computation, we have $$\displaylines{{\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\eta,n\pm}_{Q,t}, 0\leq t\leq\theta) \geq D''/3 \big) \,\leq\, \exp-\beta^2 +\beta^4 \exp(-2\beta K)\,.}$$ Since $D''/3>D'$, we have also for any $\gamma\in\overline{\cal A}$, $${\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\gamma,n\pm}_{Q,t}, 0\leq t \leq \tau(\overline{\cal A}\setminus\{\,\eta\,\}))\geq D''/3\big) \,\leq\,\exp(-3\beta K)\,.
$$ Substituting the previous inequalities into the inequality obtained before lemma~\ref{srcex}, we conclude that, for any $\alpha\in\overline{\cal A}$, $${\mathbb P}\big( {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{\alpha,n\pm}_{Q,t}, 0\leq t\leq \tau(\overline{\cal A})) \geq D''\big)\,\leq\, (\beta^4 +3)\exp(-2\beta K)$$ and the induction is completed. \section{The metastable regime.} \label{metare} The goal of this section is to prove theorem~\ref{T2}, which states roughly the following. Under an appropriate hypothesis on the initial law and on the initial ${\hbox{\footnotesize\rm STC}}$, for any $\kappa<\kappa_d$, the probability that a space--time cluster of diameter larger than $\exp({{\beta} L_d})$ is created before time $\exp(\beta\kappa)$ is ${\hbox{\footnotesize\rm SES}}$. The hypothesis is satisfied by the law of a typical configuration in the metastable regime. This result allows us to control the speed of propagation of large supercritical droplets. As already pointed out by Dehghanpour and Schonmann, the control of this speed is a crucial point for the study of metastability in infinite volume. This estimate is quite delicate and it is performed by induction over the dimension. More precisely, we consider a set of the form $${\Lambda}^{n}(\exp(\beta L)) \times {\Lambda}^{d-n}(\ln\beta) \,$$ with $n\pm$ boundary conditions and we do the proof by induction over $n$. The process in this set and with these boundary conditions behaves roughly like the process in dimension~$n$. Proposition~\ref{PT2} handles the case $n=0$. A difficult point is that the growth of the supercritical droplet is more complicated than a simple growth process. Indeed, supercritical droplets might be helped when they touch some clusters of pluses, which were created independently. Therefore we cannot proceed as in the simpler growth model handled in~\cite{CM2}. To tackle this problem, we introduce an hypothesis on the initial law and on the initial space--time clusters.
The hypothesis on the initial law guarantees that regions which are sufficiently far away are decoupled. The hypothesis on the initial space--time clusters provides a control on the space--time clusters initially present in the configuration. The point is that these two hypotheses are satisfied by the law of the process in a fixed good region until the arrival of the first supercritical droplets. The key ingredient in this part of the proof is the lower bound on the time needed to cross parallelepipeds of the above kind. Heuristically, we will take into account the effect of the growing supercritical droplet by using suitable boundary conditions, i.e., by using the Hamiltonian $H^{n\pm}$ instead of $H^-$. Moreover, at the time when the configuration in the parallelepiped starts to feel this effect, it is rather likely that the parallelepiped is not void, so that we have to consider more general initial configurations. In any fixed $n$--small parallelepiped, it is very unlikely that nucleation occurs before ${\tau_\beta}$, or that a large space--time cluster is created before nucleation. However, the region under study contains an exponential number of $n$--small parallelepipeds, thus the previous events will occur somewhere. In proposition~\ref{nuclei}, we show that these events occur in at most $\ln\ln\beta$ places. The proof uses the hypothesis on the initial law and a simple counting argument. The proof of theorem~\ref{T2} relies on a notion already used in bootstrap percolation, namely boxes crossed by a space--time cluster (see definition~\ref{cro}). An $n$ dimensional box $\Phi$ is said to be crossed by a ${\hbox{\footnotesize\rm STC}}$ before time $t$ if, for the dynamics restricted to $\Phi\times {\Lambda}^{d-n}(\ln\beta)$, there exists a space--time cluster whose projection on the first $n$ coordinates intersects two opposite faces of~$\Phi$. 
The point is that, if a box is crossed by a space--time cluster in some time interval, then it is also crossed in the dynamics restricted to the box with appropriate boundary conditions. These appropriate boundary conditions are obtained as follows. We put $n\pm$ boundary conditions on the restricted box exactly as on the large box, and we put $+$ boundary conditions on the faces which are normal to the direction which is crossed. The induction step is long and it is decomposed into eleven steps. We will use the notation defined in sections~\ref{energyestmates} and~\ref{stc}. Our main objective is to control the maximal diameter of the ${\hbox{\footnotesize\rm STC}}$ created in a finite volume before the relaxation time. Let $d\geq 1$, let $n\in\{\,0,\dots, d\,\}$ and let us consider a parallelepiped $\Sigma$ in ${\mathbb Z}^{d}$ of the form $$\Sigma\,=\,{\Lambda}^{n}(L_\beta) \times {\Lambda}^{d-n}(\ln\beta) \,$$ where ${\Lambda}^{n}(L_\beta)$ is an $n$ dimensional cubic box of side length $L_\beta$, ${\Lambda}^{d-n}(\ln\beta)$ is a $d-n$ dimensional cubic box of side length $\ln\beta$, and the length $L_\beta$ satisfies $$\displaylines{ L_\beta\,\geq\,\ln\beta\,, \qquad \limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln L_\beta\,<\,+\infty\,.}$$ We set ${\kappa}_0=L_0=\Gamma_0=0$ and for $n\geq 1$ \[ {\kappa}_n\,=\,\frac{1}{n+1}\big({\Gamma}_1+\cdots+{\Gamma}_n\big)\,,\qquad L_n\,=\, \frac{\Gamma_n-\kappa_{n}}{n} \,.\] In the sequel we consider a time ${\tau_\beta}$ satisfying $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \tau_\beta\,<\,{\kappa}_n\,.$$ We say that a probability ${\mathbb P}(\cdot)$ is super--exponentially small in $\beta$ (written in short ${\hbox{\footnotesize\rm SES}}$) if it satisfies $$\lim_{\beta\to\infty}\,\, \frac{1}{\beta}\ln {\mathbb P}(\cdot)\,=\,-\infty\,.$$ \subsection{Initial law} We estimate the speed of growth of exponentially large droplets by bounding from below the time needed by a large droplet to cross some tiles.
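For later use, let us record two elementary consequences of the definitions of ${\kappa}_n$ and $L_n$, obtained by direct algebra from the formulas above: $$nL_n\,=\,{\Gamma}_n-{\kappa}_n\,,\qquad (n+1)\,{\kappa}_n\,=\,n\,{\kappa}_{n-1}+{\Gamma}_n\,.$$ In particular $nL_n+{\kappa}_n-{\Gamma}_n=0$, an identity which will be used at the end of the proof of proposition~\ref{nuclei}, and for $n=1$ we simply have ${\kappa}_1=L_1={\Gamma}_1/2$.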
In each tile, we use $n\pm$ boundary conditions in order to take into account the effect of the droplet. A major difficulty is to control the configuration until the arrival of the supercritical droplets. We introduce an adequate hypothesis on the initial law describing the configuration in the tile when the droplet enters. This is achieved with the help of the following definitions. \noindent {\bf $n$--small parallelepipeds}. Let $n\geq 1$. A parallelepiped is $n$--small if all its sides have a length larger than $\ln \ln \beta$ and smaller than $n\ln\beta$. A parallelepiped is $0$--small if all its sides have a length larger than $\ln \ln \beta$ and smaller than $2\ln\ln\beta$. \noindent {\bf Restricted ensemble}. Let $n\geq 0$. We denote by $m_n$ the volume of the $n$ dimensional critical droplet. Let $Q$ be an $n$--small parallelepiped. The restricted ensemble ${\cal R}_n(Q)$ is the set of the configurations $\sigma$ in $Q$ such that $|\sigma|\,\leq\,m_n$ and $H_Q^{n\pm}(\sigma)\leq \Gamma_n$, i.e., $$ {\cal R}_n(Q)\,=\,\big\{\,\sigma\in \smash{\{\,-1,+1\,\}^Q}:|\sigma|\leq m_n,\, H_Q^{n\pm}(\sigma)\leq \Gamma_n \,\big\}\,.$$ We observe that ${\cal R}_n(Q)$ is a cycle compound and that $$ E\big({\cal R}_n(Q), \smash{\{\,-1,+1\,\}^Q}\setminus {\cal R}_n(Q)\big)\,=\,\Gamma_n\,.$$ Notice that the restricted ensemble satisfies the hypothesis on the domain ${\cal D}$ stated at the beginning of section~\ref{diamostc}. We introduce next the hypothesis on the initial law, which is preserved until the arrival of the supercritical droplets and which allows us to perform the induction. \noindent {\bf Hypothesis on the initial law at rank $n$.} At rank $n=0$ we simply assume that the initial law $\mu$ is the Dirac mass on the configuration equal to $-1$ everywhere on $\Sigma$. At rank $n\geq 1$, we will work with an initial law $\mu$ on the configurations in $\Sigma$ satisfying the following condition.
For any family $(Q_i,i\in I)$ of $n$--small parallelepipeds included in $\Sigma$ such that any two parallelepipeds of the family are at distance larger than $$5(d-n+1)\ln\ln\beta\,,$$ we have the following estimates: for any family of configurations $(\sigma_i,i\in I)$ in the parallelepipeds $(Q_i,i\in I)$, \[ \mu\big(\,\forall i\in I\quad\sigma|_{Q_i}=\sigma_i\,\big) \,\leq\, \prod_{i\in I} \big(\phi_n(\beta)\, \rho_{Q_i}^{n\pm}(\sigma_i)\big)\,, \] where \[ \rho_{Q_i}^{n\pm}(\sigma_i)\,=\, \begin{cases} \exp\big(-\beta H^{n\pm}_{Q_i}(\sigma_i) \big) \quad\text{ if }\quad \sigma_i\in {\cal R}_n(Q_i) \\ \exp(-\beta\Gamma_n) \quad\text{ if }\quad \sigma_i\not\in {\cal R}_n(Q_i) \end{cases} \] and $\phi_n(\beta)$ is a function depending only upon $\beta$ which is $\exp o(\beta)$, meaning that $$\lim_{\beta\to\infty} \frac{1} {\beta} \ln \phi_n(\beta) \,=\,0\,.$$ \noindent {\bf Hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ at rank $n$}. We also take into account the presence of ${\hbox{\footnotesize\rm STC}}$ in the initial configuration~$\xi$. These ${\hbox{\footnotesize\rm STC}}$ are unions of clusters of pluses present in $\xi$; we denote them by ${\hbox{\footnotesize\rm STC}}(\xi)$. We suppose that for any $n$--small parallelepiped $Q$ included in $\Sigma$ \[\sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\xi)} {\scriptstyle {\cal C}\cap Q\neq\varnothing}}{\mathrm{diam }}{\cal C}\,\leq\, (d-n+1)\ln\ln\beta\,. \] \subsection{Lower bound on the nucleation time.} \label{finitebox} In this section we give a lower bound on the nucleation time in a finite box. The proof rests on a coupling with the dynamics conditioned in the restricted ensemble, which we define next. \noindent {\bf Dynamics conditioned to stay in ${\cal R}_n(Q)$.} We denote by $(\widetilde\sigma^{n\pm,\xi}_{Q,t},t\geq 0)$ the process $\smash{({\sigma}^{n\pm,\xi}_{Q,t},t\geq 0)}$ conditioned to stay in ${\cal R}_n(Q)$.
Its rates \smash{$\widetilde c^{n\pm}_{Q}(x,{\sigma})$} are identical to those of the process \smash{$({\sigma}^{n\pm,\xi}_{Q,t},t\geq 0)$} whenever ${\sigma}^x$ belongs to ${\cal R}_n(Q)$ and they are equal to $0$ whenever ${\sigma}^x\not\in{\cal R}_n(Q)$. As usual, we couple the processes $$(\widetilde\sigma^{n\pm,\xi}_{Q,t},t\geq 0)\,,\qquad ({\sigma}^{n\pm,\xi}_{Q,t},t\geq 0)$$ so that $$\forall \xi\in{\cal R}_n(Q)\quad \forall t\leq\tau({\cal R}_n(Q))\qquad \widetilde\sigma^{n\pm,\xi}_{Q,t}= {\sigma}^{n\pm,\xi}_{Q,t}\,.$$ Finally the measure $\widetilde\mu_{Q}^{n\pm}$ defined by \[ \forall {\sigma}\in{\cal R}_n(Q)\qquad \widetilde\mu_{Q}^{n\pm}({\sigma})\,=\, \frac{ \mu_{Q}^{n\pm}({\sigma})} {\mu_{Q}^{n\pm}({\cal R}_n(Q))} \] is a stationary measure for the process $(\widetilde\sigma^{n\pm,\xi}_{Q,t},t\geq 0)$. \noindent {\bf Local nucleation}. We say that local nucleation occurs before ${\tau_\beta}$ in the parallelepiped $Q$ starting from $\xi$ if the process $\smash{({\sigma}^{n\pm,\xi}_{Q,t}, t\geq 0)}$ exits ${\cal R}_n(Q)$ before ${\tau_\beta}$. In words, local nucleation occurs if the process creates a configuration of energy larger than $\Gamma_n$ or of volume larger than $m_n$ before ${\tau_\beta}$, i.e., $$\max\Big\{\, \smash{H_Q^{n\pm}\big({\sigma}^{n\pm,\xi}_{Q,t}}\big):t\leq{\tau_\beta} \,\Big\}\,>\,\Gamma_n \quad\text{or}\quad \max\Big\{\, \smash{\big|{\sigma}^{n\pm,\xi}_{Q,t}}\big|:t\leq{\tau_\beta} \,\Big\}\,>\,m_n \,.$$ \bl{fugaup} Let $n\geq 0$ and let $Q$ be a parallelepiped. We consider the process \smash{$({\sigma}^{n\pm,\widetilde\mu}_{Q,t},t\geq 0)$} in the box~$Q$ with $n\pm$ boundary conditions and initial law the measure \smash{$\widetilde\mu_{Q}^{n\pm}$}. 
For any deterministic time ${\tau_\beta}$, we have for ${\beta}\geq 1$, \begin{multline*} P \left( \begin{matrix} \text{local nucleation occurs before $\tau_\beta$} \\ \text{ in the process } ({\sigma}^{n\pm,\widetilde\mu}_{Q,t}, t\geq 0)\\ \end{matrix} \right) \,\leq\,\cr 4{\beta} (m_n+2)^2 |Q|^{2m_n+2}\tau_\beta \, \exp(-{\beta}{\Gamma}_{n}) + \exp(-{\beta}|Q|{\tau_\beta}\ln{\beta})\,. \end{multline*} \end{lemma} \begin{proof} To lighten the notation, we drop $\widetilde\mu$, writing $\smash{{\sigma}^{n\pm}_{Q,t}}$ instead of $\smash{{\sigma}^{n\pm,\widetilde\mu}_{Q,t}}$. To the continuous--time Markov process \smash{$(\widetilde\sigma^{n\pm}_{Q,t},t\geq 0)$}, we associate in a standard way a discrete--time Markov chain \[ \smash{(\widetilde\sigma^{n\pm}_{Q,k},k\in{\mathbb N} )} \,.\] We first define the jump times. We set $\tau_0=0$ and for $k\geq 1$, \[\tau_{k}\,=\, \inf\,\big\{\,t>\tau_{k-1}: \widetilde\sigma^{n\pm}_{Q,t}\neq \widetilde\sigma^{n\pm}_{Q,\tau_{k-1}}\,\big\}\,. \] We define then \[\forall k\in{\mathbb N}\qquad\widetilde\sigma^{n\pm}_{Q,k}\,=\, \widetilde\sigma^{n\pm}_{Q,\tau_{k}}\,. \] Let $X$ be the total number of arrival times less than ${\tau_\beta}$ of all the Poisson processes associated to the sites of the box $Q$. The law of $X$ is Poisson with parameter $\lambda=|Q|\tau_\beta$. Next, for any $N\geq \lambda$, \begin{multline*} P(X\geq N) \,=\,\sum_{i\geq N}\frac{\lambda^i}{i!}\exp(-\lambda) \,\leq\, \lambda^N \exp(-\lambda) \sum_{i\geq N}\frac{N^{i-N}}{i!}\\ \,=\, \left(\frac{\lambda}{N}\right)^N \exp(-\lambda) \sum_{i\geq N}\frac{N^{i}}{i!} \,\leq\, \left(\frac{\lambda}{N}\right)^N \exp(N-\lambda)\,. \end{multline*} Thus \[ P(X\geq 4{\beta}\lambda)\,\leq\,\exp(-{\beta}\lambda\ln{\beta})\,.
\] The measure \smash{$\widetilde\mu_{Q}^{n\pm}$} is a stationary measure for the Markov chain $(\widetilde\sigma^{n\pm}_{Q,k})_{k\geq 0}$, thus \begin{multline*} P\big(\tau({\cal R}_n(Q))\leq {\tau_\beta}\big)\,\leq\, P\big(\exists\,t\leq{\tau_\beta}\quad {\sigma}^{n\pm}_{Q,t}\not\in {\cal R}_n(Q)\big) \\ \,\leq\, P\big(X\leq 4{\beta}\lambda,\, \exists\,t\leq{\tau_\beta}\quad {\sigma}^{n\pm}_{Q,t}\not\in {\cal R}_n(Q)\big) \,+\, P\big(X> 4{\beta}\lambda)\,. \end{multline*} The second term is already controlled. Let us estimate the first term: \begin{multline*} P\big(X\leq 4{\beta}\lambda,\, \exists\,t\leq{\tau_\beta}\quad {\sigma}^{n\pm}_{Q,t}\not\in {\cal R}_n(Q)\big) \,\leq\,\\ P\big(X\leq 4{\beta}\lambda,\, \exists\,k\leq X\quad {\sigma}^{n\pm}_{Q,0},\dots, {\sigma}^{n\pm}_{Q,k-1}\in {\cal R}_n(Q),\, {\sigma}^{n\pm}_{Q,k}\not\in {\cal R}_n(Q) \big) \\ \,\leq\, \sum_{1\leq k\leq 4\beta\lambda} \sum_{\eta\in{\cal R}_n(Q)} \sum_{\rho\in\partial{\cal R}_n(Q)} P\big( {\sigma}^{n\pm}_{Q,k-1}= \widetilde\sigma^{n\pm}_{Q,k-1}=\eta,\, {\sigma}^{n\pm}_{Q,k}=\rho \big)\,. \end{multline*} Next, for any $\eta\in{\cal R}_n(Q),\,\rho\in\partial{\cal R}_n(Q)$, \begin{multline*} P\big( {\sigma}^{n\pm}_{Q,k-1}= \widetilde\sigma^{n\pm}_{Q,k-1}=\eta,\, {\sigma}^{n\pm}_{Q,k}=\rho \big) \\ \,\leq\, \widetilde\mu_{Q}^{n\pm}( \eta) \exp \tonda{-{\beta} \max\big(0, H^{n\pm}_Q(\rho)-H^{n\pm}_Q(\eta)\big)} \\ \,\leq\, \exp \tonda{-{\beta} \max\big( H^{n\pm}_Q(\rho),H^{n\pm}_Q(\eta)\big)} \\ \,\leq\, \exp \tonda{-{\beta} E\big({\cal R}_n(Q), \smash{\{\,-1,+1\,\}^Q}\setminus {\cal R}_n(Q)\big) } \,\leq\, \exp (-{\beta} \Gamma_n)\,.
\end{multline*} Coming back to the previous inequalities, we get \begin{multline*} P\big(X\leq 4{\beta}\lambda,\, \exists\,t\leq{\tau_\beta}\quad {\sigma}^{n\pm}_{Q,t}\not\in {\cal R}_n(Q)\big) \\ \,\leq\, 4{\beta} \lambda \big|{\cal R}_n(Q)\big|\, \big|\partial{\cal R}_n(Q)\big|\, \exp (-{\beta} \Gamma_n) \\ \,\leq\, 4{\beta} \lambda\, (m_n+1) |Q|^{m_n} (m_n+2) |Q|^{m_n+1} \exp (-{\beta} \Gamma_n)\,, \end{multline*} since the number of pluses in a configuration of $\partial{\cal R}_n(Q)$ is at most $m_n+1$. Putting together the previous inequalities, we arrive at \begin{multline*} P\big(\tau({\cal R}_n(Q))\leq {\tau_\beta}\big)\,\leq\, \\ 4{\beta} (m_n+2)^2 |Q|^{2m_n+2}\tau_\beta \, \exp(-{\beta}{\Gamma}_{n}) + \exp(-{\beta}|Q|{\tau_\beta}\ln{\beta}) \end{multline*} as required. \end{proof} \subsection{Local nucleation or creation of a large ${\hbox{\footnotesize\rm STC}}$} The condition on the initial law and the initial ${\hbox{\footnotesize\rm STC}}$ implies that the process is initially in a metastable state. We will need to control the ${\hbox{\footnotesize\rm STC}}$ created until the arrival of the supercritical droplets. Let $Q$ be a parallelepiped included in $\Sigma$. To build the ${\hbox{\footnotesize\rm STC}}$ of the process $\smash{({\sigma}^{n\pm,\xi}_{Q,t}, t\geq 0)}$ we take into account the ${\hbox{\footnotesize\rm STC}}$ initially present in $\xi$ and we denote by ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ the resulting ${\hbox{\footnotesize\rm STC}}$ on the time interval $[0,t]$. Hence an element of ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ is either a ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(0,t)$ which is born after time $0$ or it is the union of the ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(0,t)$ which intersect an initial ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(\xi)$.
We define then ${\mathrm{diam }} {\hbox{\footnotesize\rm STC}}_\xi(0,t)$ as in section~\ref{sectriangle} by $$\displaylines{ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}_\xi(0,t)\,=\, \max\Big( \kern-3pt \sum_{ \tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0,t)}{ \scriptstyle {\cal C}\cap (Q\times\{\,0,t\,\})\neq\varnothing}} \kern-15pt {\mathrm{diam }}\,{\cal C} , \kern-7pt \max_{ \phantom{\Big(}\tatop{\scriptstyle {\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0,t)}{\scriptstyle {\cal C}\cap (Q\times\{\,0,t\,\})=\varnothing}} \kern-9pt {\mathrm{diam }}\,{\cal C} \Big)\,.}$$ To control this quantity, we will rely on the following inequality: $$\displaylines{ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}_\xi(0,t)\,\leq\, \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\xi)} {\scriptstyle {\cal C}\cap Q\neq\varnothing}}{\mathrm{diam }}{\cal C}\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}(0,t)\,. }$$ The first term will be controlled with the help of the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$, the second term with the help of theorem~\ref{totcontrole}. \noindent {\bf Creation of large ${\hbox{\footnotesize\rm STC}}$}.
We say that the dynamics creates a large ${\hbox{\footnotesize\rm STC}}$ before time ${\tau_\beta}$ in the parallelepiped $Q$ starting from $\xi$ if for the process $\smash{({\sigma}^{n\pm,\xi}_{Q,t}, t\geq 0)}$, we have $${\mathrm{diam }} {\hbox{\footnotesize\rm STC}}(0,{\tau_\beta})\,\geq \, \ln\ln\beta\,.$$ We denote by ${\cal R}(Q)$ the event: \[{\cal R}(Q)\,=\, \left( \begin{matrix} \text{ neither local nucleation nor creation}\\ \text{of a large ${\hbox{\footnotesize\rm STC}}$ occurs before time ${\tau_\beta}$ }\\ \text{ in the parallelepiped $Q$ starting from $\xi$} \end{matrix} \right) \] \bigskip The next proposition gives a control on the number of these events in a box of subcritical volume until time ${\tau_\beta}$. \begin{proposition} \label{nuclei} Let $n\in\{\,1,\dots, d\,\}$. We suppose that the hypothesis on the initial law at rank $n$ is satisfied. Let $R_\beta$ be a parallelepiped whose volume satisfies $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln |R_\beta|\,\leq\, nL_n \,.$$ The probability that for the process \smash{$({\sigma}^{n\pm,\xi}_{\Sigma,t}, t\geq 0)$} $\ln\ln\beta$ local nucleations or creations of a large ${\hbox{\footnotesize\rm STC}}$ occur before time $\tau_\beta$ in $n$--small parallelepipeds included in $R_\beta$ which are pairwise at distance larger than $5(d-n+1)\ln\ln\beta$ is super--exponentially small in $\beta$. \end{proposition} \begin{proof} Let us rephrase more precisely the event described in the statement of the proposition: there exists a family $(Q_i,i\in I)$ of $\ln\ln\beta$ $n$--small parallelepipeds included in $R_\beta$ such that: $$\displaylines{ \forall i,j\in I\,,\qquad i\neq j\quad\Rightarrow\quad d(Q_i,Q_j) > 5(d-n+1)\ln\ln\beta\,, }$$ and for $i\in I$, the event ${\cal R}(Q_i)$ does not occur for the process $({\sigma}^{n\pm,\xi}_{Q_i,t}, t\geq 0)$. 
Denoting this event by ${\cal E}$, we have \begin{displaymath} P({\cal E})\,\leq\, \sum_{(Q_i)_{i\in I}} P \left( \bigcap_{i\in I}\,\, {\cal R}(Q_i )^c \right) \end{displaymath} where the sum runs over all the possible choices of boxes $(Q_i)_{i\in I}$. We condition next on the initial configurations $({\sigma}_i,i\in I)$ in the boxes $(Q_i,{i\in I})$: \begin{multline*} P({\cal E})\,\leq\, \sum_{(Q_i)_{i\in I}} \sum_{({\sigma}_i)_{i\in I}} P \left( \bigcap_{i\in I}\,\, {\cal R}(Q_i)^c \,\big|\,\forall i\in I\quad \xi|_{Q_i}={\sigma}_i \right) \\ \times \,\mu\big(\,\forall i\in I\quad \xi|_{Q_i}={\sigma}_i\,\big) \,. \end{multline*} Once the initial configurations $({\sigma}_i,i\in I)$ are fixed, the nucleation events in the boxes $(Q_i,i\in I)$ become independent because they depend on Poisson processes associated to disjoint boxes. Thanks to the geometric condition imposed on the boxes, we can apply the estimates given by the hypothesis on the initial law $\mu$: \begin{multline*} P({\cal E})\,\leq\, \sum_{(Q_i)_{i\in I}} \sum_{({\sigma}_i)_{i\in I}} \prod_{i\in I} \,\, P \Big({\cal R}(Q_i )^c \,\big|\, \xi|_{Q_i}={\sigma}_i \Big) \, \phi_n(\beta) \, \rho_{Q_i}^{n\pm}({\sigma}_i) \\ \,=\,\sum_{(Q_i)_{i\in I}} \prod_{i\in I}\left( \phi_n(\beta) \sum_{{\sigma}_i} \,\, P \Big({\cal R}(Q_i)^c \,\big|\, \xi|_{Q_i}={\sigma}_i \Big) \, \rho_{Q_i}^{n\pm}({\sigma}_i) \right)\,. \end{multline*} Let us fix $i\in I$ and let us estimate the term inside the parentheses. Let $Q$ be an $n$--small box.
We write \begin{multline*} \sum_{\eta} \,\, P \Big({\cal R}(Q)^c \,\big|\, \xi|_{Q}=\eta \Big) \, \rho_{Q}^{n\pm}(\eta) \\ \,\leq\, \sum_{\eta} P \left( \begin{matrix} \text{the process } ({\sigma}^{n\pm,\eta}_{Q,t}, t\geq 0)\\ \text{nucleates before time ${\tau_\beta}$} \\ \end{matrix} \right) \, \rho_{Q}^{n\pm}(\eta) \\ + \sum_{\eta} P \left( \begin{matrix} \text{the process } ({\sigma}^{n\pm,\eta}_{Q,t}, t\geq 0) \text{ creates}\\ \text{a large ${\hbox{\footnotesize\rm STC}}$ before nucleating} \\ \end{matrix} \right) \, \rho_{Q}^{n\pm}(\eta)\,. \end{multline*} First, by theorem~\ref{totcontrole}, the probability that the process $({\sigma}^{n\pm,\eta}_{Q,t}, t\geq 0)$ creates a large ${\hbox{\footnotesize\rm STC}}$ before nucleating is ${\hbox{\footnotesize\rm SES}}$. Second, \[ \kern-7pt \sum_{\eta \not\in{\cal R}_n(Q)} \kern-7pt P \left( \begin{matrix} \text{the process } ({\sigma}^{n\pm,\eta}_{Q,t}, t\geq 0)\\ \text{nucleates before time ${\tau_\beta}$} \\ \end{matrix} \right) \, \rho_{Q}^{n\pm}(\eta) \,\leq\, 2^{|Q|} \exp(-\beta\Gamma_n)\,. \] Third, for $ \eta \in{\cal R}_n(Q)$, using the notation of section~\ref{finitebox}, \[\rho_{Q}^{n\pm}(\eta) \,\leq\, |{\cal R}_n(Q)|\, \widetilde{\mu}_{Q}^{n\pm}(\eta) \,\leq\, (m_n+1)|Q|^{m_n} \widetilde{\mu}_{Q}^{n\pm}(\eta) \] whence, using lemma~\ref{fugaup}, \begin{multline*} \kern-7pt \sum_{\eta \in{\cal R}_n(Q)} \kern-7pt P \left( \begin{matrix} \text{the process } ({\sigma}^{n\pm,\eta}_{Q,t}, t\geq 0)\\ \text{nucleates before time ${\tau_\beta}$} \\ \end{matrix} \right) \, \rho_{Q}^{n\pm}(\eta) \\ \,\leq\, (m_n+1)|Q|^{m_n} P \left( \begin{matrix} \text{the process } ({\sigma}^{n\pm,\widetilde\mu}_{Q,t}, t\geq 0)\\ \text{nucleates before time ${\tau_\beta}$} \\ \end{matrix} \right) \\ \,\leq\, 4{\beta} (m_n+2)^3 (n\ln\beta)^{d(2m_n+3)} \tau_\beta \, \exp(-{\beta}{\Gamma}_{n}) + {\hbox{\footnotesize\rm SES}} \,. 
\end{multline*} Substituting these estimates into the last inequality on $P({\cal E})$, we obtain \begin{multline*} P({\cal E})\,\leq\, \Big( \big|R_\beta \big| (n\ln\beta)^d \big(2^{(n\ln\beta)^d} \\ +4\beta{(m_n+2)^3(n\ln\beta)^{2dm_n+3d}} \tau_\beta \big) \phi_n(\beta) \exp(-\beta\Gamma_n) +{\hbox{\footnotesize\rm SES}} \Big)^{\textstyle |I|} \kern -5pt . \end{multline*} Since $|I|= \ln\ln\beta$ and $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln\Big( |R_\beta| \,\tau_\beta \, \phi_n(\beta) \,\exp(-\beta\Gamma_n) \Big) \,<\, nL_n+{\kappa}_n-{\Gamma}_n \,=\,0 \,,$$ we conclude that the above quantity is ${\hbox{\footnotesize\rm SES}}$. \end{proof} \subsection{Control of the metastable space--time clusters} \label{fvs} The key result is the following control on the size of the space--time clusters in the configuration. The next proposition states the result at rank 0; the theorem thereafter states the result at rank $n\geq 1$. \bp{PT2} We suppose that the law $\mu$ of the initial configuration $\xi$ satisfies the hypothesis at rank $0$. Let $\tau_\beta$ be a time satisfying $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \tau_\beta\,<\,{\kappa}_0=0\,.$$ The probability that a ${\hbox{\footnotesize\rm STC}}$ of diameter larger than $\ln\ln\beta$ is created in the process \smash{$({\sigma}^{0\pm,\xi}_{\Sigma,t}, 0\leq t\leq \tau_\beta )$} is ${\hbox{\footnotesize\rm SES}}$. \end{proposition} \begin{proof} With $n=0$, we have $$\Sigma\,=\,{\Lambda}^{d}(\ln\beta)\,,\quad \Gamma_0={\kappa}_0=L_0=m_0=0\,,$$ the boundary condition is plus on $\partial^{\, out} \Sigma$ and ${\cal R}_0(Q)=\{\,\mathbf{-1}\,\}$ for any box $Q$. By the hypothesis on $\mu$ at rank $0$, the initial law $\mu$ is the Dirac mass on the configuration equal to $-1$ everywhere on $\Sigma$.
Now $$\displaylines{ P\big(\exists\,{\cal C}\in {\hbox{\footnotesize\rm STC}}(0, \tau_\beta )\text{ with }{\mathrm{diam }}{\cal C}\geq \ln\ln\beta\big) \,\leq\,\hfill\cr P\left( \begin{matrix} \text{there are at least $\ln\ln\beta$ arrival times less than ${\tau_\beta}$ }\\ \text{ for the Poisson processes associated to the sites of $\Sigma$} \end{matrix} \right)\cr \,=\, P(X\geq \ln\ln\beta)\, }$$ where $X$ is a variable whose law is Poisson with parameter $$\lambda\,=\,|\Sigma|\tau_\beta \,=\,(\ln\beta)^d{\tau_\beta}\,.$$ So \[ P(X\geq \ln\ln\beta)\,=\,\sum_{k\geq\ln\ln\beta} \exp(-\lambda) \frac{\lambda^k}{k!}\, \leq\,3\lambda^{\ln\ln\beta}\] which is ${\hbox{\footnotesize\rm SES}}$. \end{proof} \noindent In the case $n=0$, the initial configuration is $-1$ everywhere and all the ${\hbox{\footnotesize\rm STC}}$ born before time $0$ are dead. In the case $n\geq 1$, the situation is more delicate and we must deal with ${\hbox{\footnotesize\rm STC}}$ born in the past. To build the ${\hbox{\footnotesize\rm STC}}$ of the process $\smash{({\sigma}^{n\pm,\xi}_{\Sigma,t}, t\geq 0)}$ we take into account the ${\hbox{\footnotesize\rm STC}}$ initially present in $\xi$ and we denote by ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ the resulting ${\hbox{\footnotesize\rm STC}}$ on the time interval $[0,t]$. Hence an element of ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ is either a ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(0,t)$ which is born after time $0$ or it is the union of the ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(0,t)$ which intersect an initial ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(\xi)$. We recall that ${\hbox{\footnotesize\rm STC}}(\xi)$ denotes the initial ${\hbox{\footnotesize\rm STC}}$ present in~$\xi$; these ${\hbox{\footnotesize\rm STC}}$ are unions of clusters of pluses of $\xi$. \bt{T2} Let \smash{$n\in\{\,1,\dots,d\,\}$}.
We suppose that both the hypothesis on the initial law at rank $n$ and on the initial ${\hbox{\footnotesize\rm STC}}$ present in $\xi$ are satisfied. Let $\tau_\beta$ be a time satisfying $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \tau_\beta\,<\,{\kappa}_n\,.$$ The probability that, for the process \smash{$({\sigma}^{n\pm,\xi}_{\Sigma,t})_{t\geq 0}$}, there exists a space--time cluster in ${\hbox{\footnotesize\rm STC}}_\xi(0,{\tau_\beta})$ of diameter larger than $\exp({{\beta} L_n})$ is ${\hbox{\footnotesize\rm SES}}$. \end{theorem} Theorem~\ref{T2} is proved by induction over~$n$. We suppose that the result at rank $n-1$ has been proved and that a ${\hbox{\footnotesize\rm STC}}$ of diameter larger than $\exp(\beta L_n)$ is formed before time ${\tau_\beta}$. The induction step is long and it is decomposed into the following eleven steps. \medskip \noindent {\bf Step 1: Reduction to a box $R_{i,j}$ of side length of order $\exp(\beta L_n)$. } \noindent By a trick going back to the work of Aizenman and Lebowitz on bootstrap percolation \cite{AL}, there exists a ${\hbox{\footnotesize\rm STC}}$ of diameter between $\exp(\beta L_n)/2$ and $\exp(\beta L_n)+1$ which is formed before time ${\tau_\beta}$. In particular there exists a box $R_{i,j}$ of side length of order $\exp(\beta L_n)$ which is crossed by a ${\hbox{\footnotesize\rm STC}}$ before time ${\tau_\beta}$. \medskip \noindent {\bf Step 2: Reduction to a box $S_{i}$ of side length of order $\exp(\beta L_n)/\ln\beta$ devoid of bad events. } Thanks to proposition~\ref{nuclei}, the number of bad events, like local nucleation or creation of a large ${\hbox{\footnotesize\rm STC}}$, is at most $\ln\ln\beta$, up to a ${\hbox{\footnotesize\rm SES}}$ event. By a simple counting argument, there exists a box $S_i$ of side length of order $\exp(\beta L_n)/\ln\beta$ in the $n$--th direction which is crossed vertically before ${\tau_\beta}$ and in which no bad events occur.
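The counting argument behind step~2 can be sketched as follows (this is only a heuristic, the precise version being part of the detailed proof below). Slice the box $R_{i,j}$ in the $n$--th direction into boxes of height $\exp(\beta L_n)/\ln\beta$; there are of order $\ln\beta$ such slices, and each of them is crossed vertically as soon as $R_{i,j}$ is. Up to a ${\hbox{\footnotesize\rm SES}}$ event, the bad events meet at most of order $\ln\ln\beta$ of these slices, and since $$\lim_{{\beta}\to\infty}\, \frac{\ln\ln\beta}{\ln\beta}\,=\,0\,,$$ for $\beta$ large at least one slice $S_i$ is crossed vertically and contains no bad event.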
We consider next the dynamics in this box $S_i$ with either $n\pm$ or $n-1\pm$ boundary conditions. \medskip \noindent {\bf Step 3: Control of the diameters of the ${\hbox{\footnotesize\rm STC}}$ born in $S_i$ with $n\pm$ boundary conditions. } By construction, for the dynamics in the box $S_i$ with $n\pm$ boundary conditions, no bad events occur before time ${\tau_\beta}$, therefore the process stays in the metastable state. Until time ${\tau_\beta}$, only small droplets are created, and they survive for a short time. We quantify this in lemma~\ref{essx}, where we prove that any ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}_\xi ({\sigma}^{n\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )$ has a diameter at most $(d-n+2)\ln\ln\beta$. \medskip \noindent {\bf Step 4: Reduction to a flat box $\Delta_{i,j}\subset S_i$ of height $\ln\beta$ crossed vertically in a time $\exp\big(\beta (\kappa-L_n)\big)/ (\ln\beta)^2$. } The box $S_{i}$ has height of order $\exp(\beta L_n)/\ln\beta$. In the dynamics restricted to $S_i$ with $n-1\pm$ boundary conditions, the box $S_i$ is vertically crossed by a ${\hbox{\footnotesize\rm STC}}$ in a time ${\tau_\beta}$. From the result of step~3, we conclude that the crossing ${\hbox{\footnotesize\rm STC}}$ emanates either from the bottom or the top of $S_i$, because the vertical crossing can occur only with the help of the boundary conditions. This ${\hbox{\footnotesize\rm STC}}$ has to be born close to the top or the bottom of $S_i$ and it then propagates towards the middle plane of $S_i$. We partition $S_i$ into slabs of height $\ln\beta$; the number of these slabs is of order $\exp(\beta L_n)/(\ln\beta)^2$. By summing the crossing times of each of these slabs, we obtain that one slab, denoted by $\Delta_{i,j}$, has to be crossed vertically in a time $\exp\big(\beta (\kappa-L_n)\big)/ (\ln\beta)^2$.
We denote by $\cT_a$ the event: At time $a$, the set $\Delta_{i,j}$ has not been touched by a ${\hbox{\footnotesize\rm STC}}$ emanating from top or bottom of $S_i$ in the process \smash{$({\sigma}^{n-1\pm,\xi}_{S_i,t}, t\geq 0)$}. We denote by ${\cal V}_b$ the event: At time $b$, the set $\Delta_{i,j}$ is vertically crossed in the process \smash{$({\sigma}^{n-1\pm,\xi}_{S_{i},t},t\geq 0)$}. We show that there exist two integer values $a<b$ such that $b-a< \exp\big(\beta (\kappa-L_n)\big)/ (\ln\beta)^2$ and the events $\cT_a$ and ${\cal V}_b$ both occur. \noindent \medskip \noindent {\bf Step 5: Conditioning on the configuration at the time of arrival of the large ${\hbox{\footnotesize\rm STC}}$. } We want to estimate the probability of the event $\cT_a\cap {\cal V}_b$. This event will have a low probability, because it requires that the slab $\Delta_{i,j}$ is vertically crossed too quickly, before it had time to relax to equilibrium. To this end, we condition with respect to the configuration in $\Delta_{i,j}$ at time $a$ and we estimate the probability of the vertical crossing in a time $b-a$. We first replace the condition that no bad events occur before time ${\tau_\beta}$ by the weaker condition that no bad events occur before time $a$ (otherwise the conditioned dynamics after time $a$ would be much more complicated). We then perform the conditioning with respect to the configuration in $\Delta_{i,j}$ at time $a$. We denote by $\zeta$ this configuration, by $\nu$ its law and by ${\hbox{\footnotesize\rm STC}}(\zeta)$ the ${\hbox{\footnotesize\rm STC}}$ present in $\zeta$. The idea is to apply the induction hypothesis to the process in $\Delta_{i,j}$ between times $a$ and $b$. To this end, we check that $\nu$ and ${\hbox{\footnotesize\rm STC}}(\zeta)$ satisfy the hypothesis at rank $n-1$. \medskip \noindent {\bf Step 6: Check of the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ at rank $n-1$}.
We use the initial hypothesis on the ${\hbox{\footnotesize\rm STC}}$ at rank~$n$ and the fact that no bad events, like nucleation or creation of a large ${\hbox{\footnotesize\rm STC}}$, occur until time~$a$ to obtain the appropriate control on the ${\hbox{\footnotesize\rm STC}}$ at time~$a$. The factor $(d-n+1)\ln\ln\beta$ is tuned adequately to perform the induction step. The condition is stronger at step~$n$ than at step $n-1$. Indeed, the hypothesis is made at rank $n$ on the initial ${\hbox{\footnotesize\rm STC}}$, and because of the metastable dynamics, the diameters of the ${\hbox{\footnotesize\rm STC}}$ might increase by $\ln\ln\beta$ until the arrival of the supercritical droplets. Thus the hypothesis on the ${\hbox{\footnotesize\rm STC}}$ at rank $n-1$ is still fulfilled. \medskip \noindent {\bf Step 7: Check of the hypothesis on the initial law at rank $n-1$.} Similarly, we use the hypothesis on the initial law at rank~$n$ and the fact that no bad events, like nucleation or creation of a large ${\hbox{\footnotesize\rm STC}}$, occur until time~$a$ to obtain the appropriate decoupling on the law of the configuration at time~$a$. The hypothesis on the law at rank $n$ implies that small boxes at distance larger than $5(d-n+1)\ln\ln\beta$ are independent. Until time $a$, no bad events occur, hence the metastable dynamics inside a small box $Q$ can only be influenced by events happening at distance $\ln\ln\beta$ from the small box, i.e., inside a slightly larger box~$R$. This way we obtain the appropriate decoupling on boxes which are at distance larger than $5(d-n+2)\ln\ln\beta$. \medskip \noindent {\bf Step 8: Comparison of $\widetilde\mu_{R}^{n\pm}|_{Q}$ and $\rho_{Q}^{n-1\pm}$.
} To obtain the appropriate bounding factor we have to prove that, if $Q,R$ are two parallelepipeds which are $n$--small and such that $Q\subset R$, then for any configuration $\eta$ in $Q$, \[ \widetilde\mu_{R}^{n\pm}\big(\, \sigma|_{Q} =\eta \, \big)\,\leq\, \phi_{n-1}(\beta)\, \rho_{Q}^{n-1\pm}(\eta)\,, \] where $\phi_{n-1}(\beta)$ is a function depending only upon $\beta$. This is done with the help of three geometric lemmas. First we show that a configuration $\sigma$ having at most $m_n$ pluses and such that $H^{n\pm}_{R}(\sigma) \,\leq\, \Gamma_{n-1} $ can have at most $m_{n-1}$ pluses. The next point is that, when the number of pluses in the configuration $\eta$ is less than $m_{n-1}$, the Hamiltonian in $R$ with $n\pm$ boundary conditions is always larger than the Hamiltonian in $Q$ with $n-1\pm$ boundary conditions, up to a polynomial correcting factor. \medskip \noindent {\bf Step 9: Reduction to a box $\Phi$ of side length of order $\exp(\beta L_{n-1})$.} We are now able to apply the induction hypothesis at rank $n-1$: Up to a ${\hbox{\footnotesize\rm SES}}$ event, there is no space--time cluster of diameter larger than $\exp({{\beta} L_{n-1}})$ for the process in $\Delta$ with $n-1\pm$ boundary conditions. Therefore the vertical crossing of $\Delta$ has to occur in a box $\Phi$ of side length of order $\exp({{\beta} L_{n-1}})$. \medskip \noindent {\bf Step 10: Reduction to boxes $\Phi_i\subset\Phi$ of vertical side length of order $\ln\beta/\ln\ln\beta$.} We partition $\Phi$ into slabs $\Phi_i$ of height $\ln\beta/\ln\ln\beta$; the number of these slabs is of order $\ln\ln\beta$. We can choose a subfamily of slabs such that two slabs of the subfamily are at distance larger than $5(d-n+2)\ln\ln\beta$. Since $\Phi$ endowed with $n-1\pm$ boundary conditions is vertically crossed before time $\exp\big(\beta (\kappa-L_n)\big)/ (\ln\beta)^2$, so is each of these slabs $\Phi_i$.
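To fix ideas, the vertical side of $\Phi$ has length of order $\ln\beta$, the height of the slab $\Delta_{i,j}$ containing $\Phi$, so that the partition into slabs of height $\ln\beta/\ln\ln\beta$ produces
\[
\frac{\ln\beta}{\ln\beta/\ln\ln\beta}\,=\,\ln\ln\beta
\]
slabs, in accordance with the above count.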
\medskip \noindent {\bf Step 11: Conclusion of the induction step.} These crossings imply that a large ${\hbox{\footnotesize\rm STC}}$ is created. The dynamics in each slab $\Phi_i$ with $n-1\pm$ boundary conditions are essentially independent, thanks to the boundary conditions and the hypothesis on the initial law. It follows that the probability of creating simultaneously these $\ln\ln\beta$ large ${\hbox{\footnotesize\rm STC}}$ is ${\hbox{\footnotesize\rm SES}}$. \vfill\eject \phantom{a} \vfill \smallskip \centerline{ \psset{xunit=1cm,yunit=1cm,runit=1cm} \pspicture(-1,-1)(11,11) \psline{-}(0,0)(10,0)(10,10)(0,10)(0,0) \rput(-0.5,1){${\Lambda}^n(k)$} \rput(5,1){$n$--th direction} \psline{->}(6.25,0.5)(6.25,1.5) \psline{<->}(1,0)(1,10) \rput(2,1){$\exp(\beta L_n)$} \psset{linewidth=1pt} \psset{linewidth=2pt} \psline{-}(0,5)(10,5) \psline{-}(0,8)(10,8) \psline{-}(0,5)(0,8) \psline{-}(10,5)(10,8) \psline[linestyle=dashed]{-}(0,6.5)(10,6.5) \psset{linewidth=1pt} \psline{<->}(2,5)(2,8) \rput(3.5,7.5){$\exp(\beta L_n)/\ln\beta$} \rput(8,8.25){top} \rput(8,5.25){bottom} \rput(8,6.75){middle} \rput(-0.5,6.5){$S_i$} \psline{->}(-0.25,6.5)(1.5,5) \psline{->}(-0.25,6.5)(1.5,8) \psset{linewidth=3.5pt} \psline{-}(0,5.5)(10,5.5) \psline{-}(0,6)(10,6) \psline{-}(0,5.5)(0,6) \psline{-}(10,5.5)(10,6) \psset{linewidth=1pt} \psline{-}(5,5.7)(7,5.7) \psline{-}(5,5.8)(7,5.8) \psline{>-<}(6,5.93)(6,5.57) \rput(6,4.25){$\ln\beta/\ln\ln\beta$} \psset{linewidth=1pt} \psline[linestyle=dashed]{-}(6,4.5)(6,5.7) \psline{<->}(5,6.25)(7,6.25) \psline{-}(5,6.25)(5,6) \psline{-}(7,6.25)(7,6) \rput(7.5,5.75){$\Phi_i$} \psline{-}(5,5.5)(5,6) \psline{-}(7,5.5)(7,6) \rput(4.5,5.75){$\Phi$} \rput(3.5,6.25){$\exp(\beta L_{n-1})$} \psline{<->}(3,5.5)(3,6) \rput(3.5,5.75){$\ln\beta$} \rput(-0.5,5.75){$\Delta_{i,j}$} \endpspicture } \bigskip \centerline{Reduction from ${\Lambda}^n(k)$ to $\Phi_i$} \vfill\eject \noindent We start now the precise proof, which follows the above strategy. 
We suppose that the result at rank $n-1$ has been proved and that a ${\hbox{\footnotesize\rm STC}}$ of diameter larger than $\exp(\beta L_n)$ is formed before time ${\tau_\beta}$. \medskip \noindent {\bf Step 1: Reduction to a box $R_{i,j}$ of side length of order $\exp(\beta L_n)$. } \smallskip \noindent Let us consider the function \[f(t)\,=\,\max\,\big\{\,{\mathrm{diam }}{\cal C}:{\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0,t)\,\big\}\,.\] This function is non--decreasing; it jumps when a spin flip creates a larger ${\hbox{\footnotesize\rm STC}}$ by merging two or more existing ${\hbox{\footnotesize\rm STC}}$. Suppose there is a spin flip at time $t$. Just before the spin flip, the largest ${\hbox{\footnotesize\rm STC}}$ had diameter at most \[f(t-)\,=\,\lim_{\tatop{\scriptstyle s<t}{\scriptstyle s\to t}}f(s)\] hence after the spin flip, the largest ${\hbox{\footnotesize\rm STC}}$ has diameter at most $2f(t-)+1$: indeed, the spin flip can only merge clusters of diameter at most $f(t-)$ through the flipped site, and any two points of the resulting cluster are connected through this site. Therefore \[\forall t\geq 0\qquad f(t)\,\leq\,2f(t-)+1\,.\] With the same reasoning applied to a specific ${\hbox{\footnotesize\rm STC}}$, we get the following result. \begin{lemma} \label{reason} Let $D$ be such that \[ D\,\geq \, \max\,\big\{\,{\mathrm{diam }}{\cal C}:{\cal C}\in {\hbox{\footnotesize\rm STC}}(\xi)\,\big\}\,. \] Let ${\cal C}$ be a ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ having diameter larger than $D$. There exist $s\leq t$ and a ${\hbox{\footnotesize\rm STC}}$ ${\cal C}'$ in ${\hbox{\footnotesize\rm STC}}_\xi(0,s)$ such that $${\cal C}'\subset {\cal C}\,,\quad D\leq{\mathrm{diam }} {\cal C}'\leq 2D\,.$$ \end{lemma} The hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ present in $\xi$ implies that \[ \max\,\big\{\,{\mathrm{diam }}{\cal C}:{\cal C}\in {\hbox{\footnotesize\rm STC}}(\xi)\,\big\}\,\leq \,(d-n+1)\ln\ln\beta\,.
\] Therefore, if \[f( \tau_\beta) \,\geq\, \exp(\beta L_n)\,,\] then, by lemma~\ref{reason}, there exists a random time $T\leq \tau_\beta$ and ${\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0,T)$ such that \[\exp(\beta L_n)\,\leq\, {\mathrm{diam }}{\cal C} \,\leq\, 2\exp(\beta L_n)\,. \] Let $\Phi$ be the smallest $n$ dimensional box such that \[{\cal C}\,\subset\, \big(\Phi\times {\Lambda}^{d-n}(\ln\beta)\big)\times [0,T]\,.\] With the help of lemma~\ref{cresSTC}, we observe that the box $\Phi$ is crossed by a ${\hbox{\footnotesize\rm STC}}$ before time $\tau_\beta$, where the meaning of ``crossed'' is explained next. \definition \label{cro} An $n$ dimensional box $\Phi$ is said to be crossed by a ${\hbox{\footnotesize\rm STC}}$ before time $t$ if, for the dynamics restricted to $\Phi\times {\Lambda}^{d-n}(\ln\beta)$ with initial configuration $\xi$ and $n\pm$ boundary condition, there exists ${\cal C}$ in ${\hbox{\footnotesize\rm STC}}_\xi(0,t)$ whose projection on the first $n$ coordinates intersects two opposite faces of~$\Phi$. 
\enddefinition With this definition, we have $$\displaylines{ P\big(\exists\,{\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0, \tau_\beta )\text{ with }{\mathrm{diam }}{\cal C}\geq \exp(\beta L_n)\big) \hfill \,\cr \,\leq\, P \left( \begin{matrix} \exists\, \Phi\text{ $n$ dimensional box $\subset {\Lambda}^{n}(L_\beta)$,} \\ \exp(\beta L_n)\leq {\mathrm{diam }} \Phi\leq 2\exp(\beta L_n),\\ \text{$\Phi$ is crossed by a ${\hbox{\footnotesize\rm STC}}$ before time } \tau_\beta \\ \end{matrix} \right) \,\cr \,\leq\, \big|{\Lambda}^n( L_\beta )|\times 2\exp(\beta L_n) \times \max_{x,k}\, P \left( \begin{matrix} \text{the box } \big(x+{\Lambda}^n(k)\big)\times {\Lambda}^{d-n}(\ln\beta) \\ \text{is } \text{crossed by a ${\hbox{\footnotesize\rm STC}}$ before time } \tau_\beta \end{matrix} \right) }$$ where the maximum is taken over $x,k$ such that \[ \exp(\beta L_n)\leq k \leq 2\exp(\beta L_n)\,,\qquad \big(x+{\Lambda}^n(k)\big)\,\subset\, {\Lambda}^n( L_\beta ) \,. \] Let us now fix $x,k$ as above, for simplicity we take $x=0$, and let us suppose that ${\Lambda}^n(k)\times {\Lambda}^{d-n}(\ln\beta)$ is crossed by a ${\hbox{\footnotesize\rm STC}}$ before time $\tau_\beta$ for the process with initial configuration $\xi$ and $n\pm$ boundary condition. We can suppose for instance that ${\Lambda}^n(k)\times {\Lambda}^{d-n}(\ln\beta)$ is crossed vertically, i.e., that the crossing occurs along the $n$th coordinate. Using the monotonicity with respect to the boundary conditions, we observe that, for any $i,j$ such that $-k/2\leq i\leq j\leq k/2$, the parallelepiped \[ R_{i,j} \,=\,{\Lambda}^{n-1}(k)\times [i,j]\times {\Lambda}^{d-n}(\ln\beta) \] is also crossed vertically before time $\tau_\beta$ for the process with initial configuration $\xi|_{ R_{i,j}} $ and $n-1\pm$ boundary condition on $R_{i,j}$. \medskip \noindent {\bf Step 2: Reduction to a box $S_{i}$ of side length of order $\exp(\beta L_n)/\ln\beta$ devoid of bad events. 
} \smallskip \noindent With $k$ defined above, we consider next the collection of the sets \[ S_i\,=\, {\Lambda}^{n-1}(k)\times \left[ \frac{(2i)k}{4\ln\beta}, \frac{(2i+1)k}{4\ln\beta} \right]\times {\Lambda}^{d-n}(\ln\beta)\,,\quad |i|<\ln\beta-\frac{1}{2}\,.\] These sets are pairwise at distance larger than $\ln\beta$. By proposition~\ref{nuclei}, up to a ${\hbox{\footnotesize\rm SES}}$ event, there exists a set $S_i$ in which the event \[{\cal R}(S_i) \,=\, \bigcap_{ \tatop{\scriptstyle Q \text{ $n$--small}} {\scriptstyle Q\subset S_i} } {\cal R}(Q )\, \] occurs. This means that neither local nucleation nor the creation of a large ${\hbox{\footnotesize\rm STC}}$ occurs before time $\tau_\beta$ for the process in $S_i$ with initial configuration $\xi|_{ S_{i}} $ and $n\pm$ boundary condition. From now onwards we will study what is happening in this particular set $S_i$. Let us define \begin{align*} \text{bottom}\,=\, {\Lambda}^{n-1}(k)\times \left\{ \frac{(2i)k}{4\ln\beta} \right\} \times {\Lambda}^{d-n}(\ln\beta)\,,\\ \text{top}\,=\, {\Lambda}^{n-1}(k)\times \left\{ \frac{(2i+1)k}{4\ln\beta} \right\} \times {\Lambda}^{d-n}(\ln\beta)\,.\\ \end{align*} By lemma~\ref{cresSTC}, any ${\hbox{\footnotesize\rm STC}}$ of the process $$({\sigma}^{n-1\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )$$ which intersects neither top nor bottom is also a ${\hbox{\footnotesize\rm STC}}$ of the process $$({\sigma}^{n\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )\,,$$ because it has not been ``helped'' by the $n-1\pm$ boundary condition. \medskip \noindent {\bf Step 3: Control of the diameters of the ${\hbox{\footnotesize\rm STC}}$ born in $S_i$ with $n\pm$ boundary conditions. } \smallskip \noindent \bl{essx} On the event ${\cal R}(S_i)$, any ${\hbox{\footnotesize\rm STC}}$ in ${\hbox{\footnotesize\rm STC}}_\xi ({\sigma}^{n\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )$ has a diameter at most $(d-n+2)\ln\ln\beta$. 
\end{lemma} \begin{proof} Indeed, suppose that there exists ${\cal C}$ in ${\hbox{\footnotesize\rm STC}}_\xi ({\sigma}^{n\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )$ with $${\mathrm{diam }}{\cal C} \,>\, (d-n+2)\ln\ln\beta\,.$$ By lemma~\ref{reason}, there exists $T\leq{\tau_\beta}$ and ${\cal C}'$ in ${\hbox{\footnotesize\rm STC}}_\xi ({\sigma}^{n\pm,\xi}_{S_i,t}, 0 \leq t\leq T )$ such that $$ (d-n+2)\ln\ln\beta\,\leq\, {\mathrm{diam }}{\cal C}' \,\leq\,\frac{1}{3}\ln\beta \,.$$ Let $Q'$ be a box of side length $\ln\beta$ included in $S_i$ and centered on a point of ${\cal C}'$. By lemma~\ref{cresSTC}, ${\cal C}'$ is also a ${\hbox{\footnotesize\rm STC}}$ of the process $({\sigma}^{n\pm,\xi}_{Q',t}, 0 \leq t\leq \tau_\beta )$. Yet \begin{multline*} {\mathrm{diam }}{\cal C}'\,\leq\,{\mathrm{diam }} {\hbox{\footnotesize\rm STC}}_\xi ({\sigma}^{n\pm,\xi}_{Q',t}, 0 \leq t\leq T )\\ \,\leq\, \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\xi)} {\scriptstyle {\cal C}\cap Q'\neq\varnothing}}{\mathrm{diam }}{\cal C}\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n\pm,\xi}_{Q',t}, 0\leq t\leq T)\,\\ \,\leq\, (d-n+1)\ln\ln\beta+ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n\pm,\xi}_{Q',t}, 0\leq t\leq T)\,. \end{multline*} We have used the hypothesis on the initial clusters present in $\xi$ to bound the sum. This inequality implies that \[{\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n\pm,\xi}_{Q',t}, 0\leq t\leq T)\,\geq\,\ln\ln\beta\,,\] hence the events ${\cal R}(Q')$ and ${\cal R}(S_i)$ would not occur. \end{proof} \medskip \noindent {\bf Step 4: Reduction to a flat box $\Delta_{i,j}\subset S_i$ crossed vertically in a time $(\ln\beta)^2$. } \smallskip \noindent By lemma~\ref{cresSTC}, any ${\hbox{\footnotesize\rm STC}}$ in \smash{$({\sigma}^{n-1\pm,\xi}_{S_i,t}, 0 \leq t\leq \tau_\beta )$} of diameter strictly larger than $(d-n+2)\ln\ln\beta$ intersects top or bottom. 
Since $S_i$ is vertically crossed by time ${\tau_\beta}$, the middle set, defined by \[\text{middle}\,=\, {\Lambda}^{n-1}(k)\times \left\{ \frac{(2i+\frac{1}{2})k}{4\ln\beta} \right\} \times {\Lambda}^{d-n}(\ln\beta)\,, \] is hit before time ${\tau_\beta}$ by a ${\hbox{\footnotesize\rm STC}}$ emanating either from the bottom or from the top of $S_i$. Let us define \begin{multline*} \tau_{\text{bot}}(h)\,=\,\inf\,\big\{\,u\geq 0: \exists\,{\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi( {\sigma}^{n-1\pm,\xi}_{S_i,t}, 0 \leq t\leq u)\,, \\ {\cal C}\cap\text{bottom}\neq\varnothing\,,\quad \exists\,x=(x_1,\dots,x_d)\in{\cal C}\quad x_n=h\,\big\}\,. \end{multline*} Suppose for instance that the first ${\hbox{\footnotesize\rm STC}}$ hitting middle emanates from the bottom. We have then \[\tau_{\text{bot}} \left( \frac{(2i+\frac{1}{2})k}{4\ln\beta} \right)\,\leq\,{\tau_\beta}\,. \] Moreover, setting $h=(2i)k/(4\ln\beta)$, we have \begin{multline*} \tau_{\text{bot}} \left( h+\frac{k}{8\ln\beta} \right)\,\geq\,\\ \sum_{1\leq j\leq J} \Big( \tau_{\text{bot}}\left( h+j\ln\beta \right)- \tau_{\text{bot}}\left( h+(j-1)\ln\beta \right) \Big) \end{multline*} where \[J\,=\, \frac{k}{8(\ln\beta)^2} \,.\] Indeed, the sum telescopes to $\tau_{\text{bot}}(h+J\ln\beta)-\tau_{\text{bot}}(h)$, with $J\ln\beta=k/(8\ln\beta)$ and $\tau_{\text{bot}}(h)\geq 0$. Therefore there exists an index $j\leq J$ such that \[ \tau_{\text{bot}}\left( h+j\ln\beta \right)- \tau_{\text{bot}}\left( h+(j-1)\ln\beta \right) \,\leq\,\frac{{\tau_\beta}}{J}\,. \] Let $\Delta_{i,j}$ be the set $$\displaylines{ \Delta_{i,j}= {\Lambda}^{n-1}(k)\times \left[ h+(j-1)\ln\beta, h+j\ln\beta \right] \times {\Lambda}^{d-n}(\ln\beta). }$$ The set $\Delta_{i,j}$ is isometric to a set of the form $${\Lambda}^{n-1}(k) \times {\Lambda}^{d-n+1}(\ln\beta) \,.$$ We conclude that there exist two indices $i,j$ and two times $a,b$ such that: \smallskip \noindent $\bullet$ $i,j$ are integers and satisfy $0\leq |i|\leq \ln\beta$, $0\leq j\leq J$. \smallskip \noindent $\bullet$ $a,b$ are integers and satisfy $0\leq b-a\leq {\tau_\beta}/J+2$.
\smallskip \noindent $\bullet$ The event ${\cal R}(S_i)$ occurs. \smallskip \noindent $\bullet$ At time $a$, the set $\Delta_{i,j}$ has not been touched by a ${\hbox{\footnotesize\rm STC}}$ emanating from top or bottom of $S_i$ in the process \smash{$({\sigma}^{n-1\pm,\xi}_{S_i,t}, t\geq 0)$}. We denote this event by $\cT_a$. \smallskip \noindent $\bullet$ At time $b$, the set $\Delta_{i,j}$ is vertically crossed in the process \smash{$({\sigma}^{n-1\pm,\xi}_{S_{i},t},t\geq 0)$}. We denote this event by ${\cal V}_b$. \smallskip \noindent From the previous discussion, we see that \[ P \left( \begin{matrix} {\Lambda}^n(k)\times {\Lambda}^{d-n}(\ln\beta) \text{ is crossed}\\ \text{vertically before time } \tau_\beta \end{matrix} \right) \leq \sum_{i,j}\,\, \sum_{a,b}\,\, P({\cal R}(S_i),\, \cT_a,\, {\cal V}_b)\, \] with the summation running over indices $i,j,a,b$ satisfying the above conditions. \medskip \noindent {\bf Step 5: Conditioning on the configuration at the time of arrival of the large ${\hbox{\footnotesize\rm STC}}$. } \smallskip \noindent We next estimate the probability appearing in the summation. To alleviate the formulas, we drop $i,j$ from the notation, writing $S,\Delta, \zeta $ instead of $S_i,\Delta_{i,j}, \zeta_{i,j}$. For $Q$ an $n$--small parallelepiped, we denote by ${\cal R}(Q,a)$ the event \[{\cal R}(Q,a)\,=\, \left( \begin{matrix} \text{ neither local nucleation nor creation}\\ \text{of a large ${\hbox{\footnotesize\rm STC}}$ occurs before time $a$ }\\ \text{ for the process $({\sigma}^{n\pm,\xi}_{Q,t},{t\geq 0})$ } \end{matrix} \right)\,. \] We define the event ${\cal R}(S,a)$ as \[{\cal R}(S,a) \,=\, \bigcap_{ \tatop{\scriptstyle Q \text{ $n$--small}} {\scriptstyle Q\subset S} } {\cal R}(Q,a )\, \] and we estimate its probability as in proposition~\ref{nuclei}. 
For $a\leq{\tau_\beta}$, we obtain that \begin{multline*} P({\cal R}(S,a)^c)\,\leq\, {\hbox{\footnotesize\rm SES}}\,+\\ \big|S \big| (n\ln\beta)^d \big(2^{(n\ln\beta)^d} +{4\beta (m_n+2)^3(n\ln\beta)^{2dm_n+3d}} \tau_\beta \big) \phi_n(\beta) \exp(-\beta\Gamma_n) \,. \end{multline*} Since $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \Big( |S| \,\tau_\beta \,\exp(-\beta\Gamma_n) \Big) \,<\, nL_n+{\kappa}_n-{\Gamma}_n \,=\,0 \,,$$ we conclude that $$\lim_{{\beta}\to\infty} P({\cal R}(S,a))\,=\, 1\,. $$ We will next condition on the configuration at time $a$ in $\Delta$ in order to estimate the probability of the event ${\cal V}_b$: \begin{multline*} P({\cal R}(S),\, \cT_a,\, {\cal V}_b)\,=\, \sum_\zeta P({\cal R}(S),\,\cT_a, \,{\cal V}_b,\, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta\big)\\ \,\leq\, \sum_\zeta P({\cal R}(S,a),\,\cT_a, \,{\cal V}_b,\, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta\big)\,. \end{multline*} Yet the knowledge of the configuration at time $a$ is not enough to decide whether the event ${\cal V}_b$ will occur: we need also to take into account the ${\hbox{\footnotesize\rm STC}}$ present at time $a$ in $\Delta$ to determine whether a vertical crossing occurs in $\Delta$ before time $b$. Thus we record the ${\hbox{\footnotesize\rm STC}}$ which are present in the configuration ${\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}$. We write \[ {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta\,,\qquad {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \] to express that the configuration in $\Delta$ at time $a$ is $\zeta$ and that the trace at time $a$ of the ${\hbox{\footnotesize\rm STC}}$ created before time $a$ in $\zeta$ is given by ${\hbox{\footnotesize\rm STC}}(\zeta)$. 
We condition next on this information: \begin{multline*} \sum_\zeta P({\cal R}(S,a),\, \cT_a,\,{\cal V}_b,\, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta\big) \\ \,=\, \sum_{\zeta,{\hbox{\footnotesize\rm STC}}(\zeta)} P \left( \begin{matrix} {\cal R}(S,a),\,\cT_a, \,{\cal V}_b,\, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta, \\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \right) \\ \,=\, \sum_{\zeta,{\hbox{\footnotesize\rm STC}}(\zeta)} P\bigg( {\cal V}_b\,\,\Big|\,\, \begin{matrix} {\cal R}(S,a),\,\cT_a, \, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta,\,\\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \bigg)\\ \,\times\, P\bigg( \begin{matrix} {\cal R}(S,a),\,\cT_a, \, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta,\,\\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \bigg) \,. 
\end{multline*} On the event $\cT_a$, by lemma~\ref{cresSTC}, \begin{multline*} {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}\,=\, {\sigma}^{n\pm,\xi}_{S,a}|_{\Delta}\,,\\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}= {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}\,, \end{multline*} whence \begin{multline*} P\bigg( \begin{matrix} {\cal R}(S,a),\,\cT_a, \, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta,\,\\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n-1\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \bigg) \,\leq\,\\ P\bigg( \begin{matrix} {\cal R}(S,a),\, {\sigma}^{n\pm,\xi}_{S,a}|_{\Delta}=\zeta,\,\\ {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a\big)|_{\Delta\times\{a\}}={\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \bigg)\,. \end{multline*} Let us set $$\nu(\zeta)\,=\, P\big( {\sigma}^{n\pm,\xi}_{S,a}|_{\Delta}=\zeta \,\big|\, {\cal R}(S,a) \big)\,. $$ Thus $\nu$ is the law of the configuration ${\sigma}^{n\pm,\xi}_{S,a}|_{\Delta}$ conditioned on the event ${\cal R}(S,a)$. This configuration, denoted by $\zeta$, comes equipped with the trace of the ${\hbox{\footnotesize\rm STC}}$ created before time $a$, which is denoted by ${\hbox{\footnotesize\rm STC}}(\zeta)$. Formally, the law $\nu$ should be a law on the traces of the ${\hbox{\footnotesize\rm STC}}$ at time $a$; however, to alleviate the text, we make a slight abuse of notation and deal with $\nu$ as if it were a law on the configurations.
With this convention and using the Markov property, we rewrite the previous inequalities as $$\displaylines{ P({\cal R}(S),\, \cT_a,\, {\cal V}_b) \,\leq\,\hfill\cr \sum_{\zeta,{\hbox{\footnotesize\rm STC}}(\zeta)} P\big( {\cal V}_b\,\,|\,\, {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta,\,{\hbox{\footnotesize\rm STC}}(\zeta)\big) \,\nu(\zeta)\, P( {\cal R}(S,a)) \cr \,\leq\, \kern -3pt \sum_{\zeta,{\hbox{\footnotesize\rm STC}}(\zeta)} \kern -3pt P\left( \begin{matrix} \text{ there is a vertical crossing between }\\ \text{ times $a$ and $b$ in $({\sigma}^{n-1\pm,\xi}_{\Delta,t}, t\geq 0)$ } \end{matrix} \,\,\Big|\,\, \begin{matrix} {\sigma}^{n-1\pm,\xi}_{S,a}|_{\Delta}=\zeta\\ {\hbox{\footnotesize\rm STC}}(\zeta) \end{matrix} \right) \,\nu(\zeta)\, \cr \,\leq\, \sum_{\zeta} P\left( \begin{matrix} \text{there exists a vertical crossing in}\\ {\hbox{\footnotesize\rm STC}}_\zeta\big({\sigma}^{n-1\pm,\zeta}_{\Delta,t}, 0\leq t\leq b-a\big) \end{matrix} \right) \,\nu(\zeta)\,. }$$ We check next that the hypothesis on the initial law at rank $n-1$ is satisfied by the law $\nu$ of $\zeta$ and that the hypothesis on the initial clusters is satisfied by ${\hbox{\footnotesize\rm STC}}(\zeta)$, the ${\hbox{\footnotesize\rm STC}}$ present in $\zeta$. \medskip \noindent {\bf Step 6: Check of the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ at rank $n-1$}. \smallskip \noindent Let ${\cal C}$ belong to $\smash{{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$. Then ${\cal C}$ is the union of ${\hbox{\footnotesize\rm STC}}$ belonging to $\smash{{\hbox{\footnotesize\rm STC}}({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$ and to ${\hbox{\footnotesize\rm STC}}(\xi)$. Since the event ${\cal R}(S,a)$ occurs, any ${\cal C}$ in $\smash{{\hbox{\footnotesize\rm STC}}({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$ has diameter at most $\ln\ln\beta$. 
Thus any path in ${\cal C}$ having diameter strictly larger than $\ln\ln\beta$ has to meet a ${\hbox{\footnotesize\rm STC}}$ of ${\hbox{\footnotesize\rm STC}}(\xi)$. Suppose there exists ${\cal C}$ in $\smash{{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$ such that \[{\mathrm{diam }}{\cal C}\,\geq\, \frac{1}{4}\ln\beta\,.\] By lemma~\ref{reason}, there would exist ${\cal C}'\subset{\cal C}$ and $a'\leq a$ such that \[{\cal C}'\in\smash{{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a')} \,,\quad \frac{1}{4}\ln\beta\,\leq {\mathrm{diam }}{\cal C}'\,\leq\, \frac{1}{2}\ln\beta\,. \] Let $Q'$ be an $n$--small box containing ${\cal C}'$. The previous discussion implies that ${\cal C}'$ would meet at least $\frac{1}{4}(\ln\beta)/\ln\ln\beta$ elements of ${\hbox{\footnotesize\rm STC}}(\xi)$, thus we would have \[ \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\xi)} {\scriptstyle {\cal C}\cap Q'\neq\varnothing}}{\mathrm{diam }}{\cal C}\,\geq\, \frac{\ln\beta}{4\ln\ln\beta} \,>\,(d-n+1) \ln\ln\beta\,, \] and this would contradict the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ present in $\xi$. Therefore any ${\hbox{\footnotesize\rm STC}}$ in $\smash{{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$ has a diameter less than $\frac{1}{4}\ln\beta$. Let now $Q$ be an $(n-1)$--small parallelepiped included in $\Delta$. Let $Q'$ be an $n$--small parallelepiped containing $Q$ and such that \[ d(S\setminus Q',Q)\,>\, \frac{1}{3}\ln\beta\,. \] From the previous discussion, we see that a ${\hbox{\footnotesize\rm STC}}$ of $\smash{{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)}$ which intersects the box $Q$ does not meet the inner boundary of $Q'$. 
By lemma~\ref{cresSTC}, such a ${\hbox{\footnotesize\rm STC}}$ belongs also to $\smash{{\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n\pm,\xi}_{Q',t},0\leq t\leq a\big)}$. It follows that $$ \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\zeta)} {\scriptstyle {\cal C}\cap Q\neq\varnothing}}{\mathrm{diam }}{\cal C}\,\leq\, {\mathrm{diam }}\, {\hbox{\footnotesize\rm STC}}_\xi\big({\sigma}^{n\pm,\xi}_{Q',t},0\leq t\leq a\big)\,. $$ Since the event ${\cal R}(S,a)$ occurs, any ${\cal C}$ in ${\hbox{\footnotesize\rm STC}}({\sigma}^{n\pm,\xi}_{S,t},0\leq t\leq a)$ has diameter at most $\ln\ln\beta$. From the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ at rank $n$, we have \begin{multline*} {\mathrm{diam }}{\hbox{\footnotesize\rm STC}}_\xi({\sigma}^{n\pm,\xi}_{Q',t},0\leq t\leq a)\,\leq\,\\ \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\xi)} {\scriptstyle {\cal C}\cap Q'\neq\varnothing}}{\mathrm{diam }}{\cal C}\,+\, {\mathrm{diam }}\, {\hbox{\footnotesize\rm STC}}\big({\sigma}^{n\pm,\xi}_{Q',t},0\leq t\leq a\big)\\ \,\leq\, (d-n+1)\ln\ln\beta+\ln\ln\beta \,=\, (d-n+2)\ln\ln\beta\,, \end{multline*} and the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ present in $\zeta$ is fulfilled. \medskip \noindent {\bf Step 7: Check of the hypothesis on the initial law at rank $n-1$.} \smallskip \noindent Let $(Q_i,i\in I)$ be a family of $(n-1)$--small parallelepipeds included in $\Delta$ such that $$\displaylines{ \forall i,j\in I\,,\qquad i\neq j\quad\Rightarrow\quad d(Q_i,Q_j) > 5(d-n+2)\ln\ln\beta\,, }$$ and let $(\sigma_i,i\in I)$ be a family of configurations in the parallelepipeds $(Q_i,i\in I)$. For $i\in I$, let $R_i$ be the box $Q_i$ enlarged by a distance $2\ln\ln\beta$ along the first $n$ axes. The boxes $(R_i,i\in I)$ are $n$--small and satisfy $$\displaylines{ \forall i,j\in I\,,\qquad i\neq j\quad\Rightarrow\quad d(R_i,R_j) > 5(d-n+1)\ln\ln\beta\,.
}$$ On the event ${\cal R}(S,a)$, we have by lemma~\ref{cresSTC} \[\forall i\in I\qquad {\sigma}^{n\pm,\xi}_{S,a}|_{Q_i}\,=\, {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} \,. \] Therefore \begin{multline*} \nu\big(\,\forall i\in I\quad\sigma|_{Q_i}=\sigma_i\,\big) \,=\, P\big(\,\forall i\in I\quad {\sigma}^{n\pm,\xi}_{S,a}|_{Q_i} =\sigma_i\,\big|\, {\cal R}(S,a) \,\big)\\ \,=\,P\big(\,\forall i\in I\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i\,\big|\, {\cal R}(S,a) \,\big)\,. \end{multline*} We condition next on the initial configurations in the boxes $R_i, i\in I$: \begin{multline*} P\big(\, {\cal R}(S,a) ,\, \forall i\in I\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\big) \\ \,=\, \sum_{\zeta_i,\,i\in I} P\big(\, {\cal R}(S,a),\, \forall i\in I\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i,\quad \xi|_{R_i} =\zeta_i \,\big) \\ \,\leq\, \sum_{\zeta_i,\,i\in I} P \big(\forall i\in I\quad {\cal R}(R_i,a),\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i,\quad \xi|_{R_i} =\zeta_i \big) \\ \,=\, \sum_{\zeta_i,\,i\in I} P\big(\, \forall i\in I\quad {\cal R}(R_i,a),\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\,\big|\,\, \forall i\in I\quad \xi|_{R_i} =\zeta_i \,\big) \\ \hfill \times P\big(\, \forall i\in I\quad \xi|_{R_i} =\zeta_i \,\big)\,. \end{multline*} We next use the hypothesis on the law of $\xi$ and the fact that, once the initial configurations in the boxes $R_i$ are fixed, the dynamics in these boxes with $n\pm$ boundary conditions are independent. We obtain: \begin{multline*} P\big(\, {\cal R}(S,a) ,\, \forall i\in I\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\big)\\ \,\leq\, \sum_{\zeta_i,\,i\in I} \,\prod_{i\in I} \,\, P\Big(\, {\cal R}(R_i,a),\, {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\,\Big|\,\, \xi|_{R_i} =\zeta_i \,\Big)\, \phi_n(\beta)\, \rho_{R_i}^{n\pm}(\zeta_i)\,. \end{multline*} We recall that $\smash{(\widetilde\sigma^{n\pm,\xi}_{R_i,t})_{t\geq 0}}$ is the process conditioned to stay in ${\cal R}_n(R_i)$. 
On the event ${\cal R}(R_i,a)$, the initial configuration $\zeta_i$ belongs to ${\cal R}_n(R_i)$ and \[ {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i}\,=\, \widetilde\sigma^{n\pm,\xi}_{R_i,a}|_{Q_i}\,,\qquad \rho_{R_i}^{n\pm}(\zeta_i)\,\leq\, (m_n+1)|R_i|^{m_n} \widetilde{\mu}_{R_i}^{n\pm}(\zeta_i) \,. \] Moreover $|R_i| \leq (n\ln\beta)^{d}$ and $P({\cal R}(S,a))\geq 1/2$ for $\beta$ large enough. Thus \begin{multline*} \nu\big(\,\forall i\in I\quad\sigma|_{Q_i}=\sigma_i\,\big)\,\leq\, \\ \frac{1}{P({\cal R}(S,a))} P\big(\, {\cal R}(S,a) ,\, \forall i\in I\quad {\sigma}^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\big)\\ \,\leq\, 2\prod_{i\in I} \, \left( \sum_{\zeta_i\in {\cal R}_n(R_i) } \,\, P\Big(\, \widetilde\sigma^{n\pm,\xi}_{R_i,a}|_{Q_i} =\sigma_i \,\,\Big|\,\, \xi|_{R_i} =\zeta_i \,\Big)\, \phi_n(\beta)\, \rho_{R_i}^{n\pm}(\zeta_i) \right) \\ \,\leq\, 2\,\prod_{i\in I} \, \Big( (m_n+1) (n\ln\beta)^{dm_n} \,\phi_n(\beta) \,\, \widetilde{\mu}_{R_i}^{n\pm} \big(\, \sigma|_{Q_i} =\sigma_i \,\big) \Big)\,. \\ \end{multline*} \noindent {\bf Step 8: Comparison of $\widetilde\mu_{R}^{n\pm}|_{Q}$ and $\rho_{Q}^{n-1\pm}$. } \smallskip \noindent To conclude we need to prove that, if $Q,R$ are two parallelepipeds which are $n$--small and such that $Q\subset R$, then for any configuration $\eta$ in $Q$, \[ \widetilde\mu_{R}^{n\pm}\big(\, \sigma|_{Q} =\eta \, \big)\,\leq\, \phi_{n-1}(\beta)\, \rho_{Q}^{n-1\pm}(\eta)\,, \] where $\phi_{n-1}(\beta)$ is a function depending only upon $\beta$ which is $\exp o(\beta)$. This is the purpose of the next three lemmas. \begin{lemma} \label{control} Let $R$ be an $n$--small parallelepiped. There exists $h_0>0$ such that, for $h\in]0,h_0[$, the following result holds. If $\sigma$ is a configuration in $R$ satisfying \[ |\sigma|\,\leq\, m_{n}\,,\qquad H^{n\pm}_{R}(\sigma) \,\leq\, \Gamma_{n-1} \] then $|\sigma| \,\leq\, m_{n-1}$. 
\end{lemma} \begin{proof} Let $\sigma$ be a configuration satisfying the hypothesis of the lemma and let us set $m=|\sigma|$. By lemma~\ref{prok}, there exists an $n$ dimensional configuration $\rho$ such that $|\rho|=m$ and $H_{{\mathbb Z}^n}(\rho) =H^{n\pm}_{R}(\sigma)$. We apply next the simplified isoperimetric inequality stated in section~\ref{sie}: $$\displaylines{ H_{{\mathbb Z}^n}(\rho) \,=\,\text{perimeter}(\rho)-h|\rho|\hfill\cr \geq\, \inf\,\big\{\,\text{perimeter}(A):A\text{ is the finite union of $m$ unit cubes}\, \big\}\,-\,hm\cr \geq\, 2n\,m^{(n-1)/{n}}-hm\,. }$$ Therefore the number $m$ of pluses in $\sigma$ satisfies \[ m \,\leq\, m_{n}\,,\qquad 2n\,m^{(n-1)/{n}}-hm \,\leq\, \Gamma_{n-1}\,.\] Thus, for $h\leq 1$, $$m\,\leq\,\big(l_c(n)+1\big)^n\,\leq\, \Big(\frac{2(n-1)}{h}+1\Big)^n\,\leq\, \Big(\frac{2n-1}{h}\Big)^n \,,$$ whence $$2n\,m^{(n-1)/{n}}-hm \,=\, m^{(n-1)/{n}}\big(2n-hm^{1/n}\big) \,\geq\, m^{(n-1)/{n}} \,$$ and we conclude that \[m^{(n-1)/{n}}\,\leq\, \Gamma_{n-1}\,.\] We have the following expansions as $h\to 0$: $$m_n \sim \tonda{\frac{2 (n-1)}{h}}^n\,,\qquad {\Gamma}_n \sim 2\tonda{\frac{2 (n-1)}{h}}^{n-1}\,. $$ Thus, for $h$ small enough, \[m^{(n-1)/{n}}\,\leq\, \Gamma_{n-1}\,\leq\, (2n)^{n-1}h^{-(n-2)} \,,\] whence \[m\,\leq\, (2n)^nh^{-\textstyle\frac{n(n-2)}{n-1}} \,\leq\, m_{n-1}\,, \] the last inequality being valid for $h$ small enough, since $n(n-2)<(n-1)^2$ and $m_{n-1}$ is of order $h^{-(n-1)}$ as $h$ goes to $0$.\end{proof} \begin{lemma} \label{cutting} Let $Q\subset R$ be two $n$--small parallelepipeds. If $\eta$ is a configuration in $R$ satisfying $|\eta|\leq m_{n-1}$ then \[H^{n\pm}_{R}(\eta)\,\geq\, H^{n\pm}_{Q}(\eta|_Q)\,.\] \end{lemma} \begin{proof} We will prove the following intermediate result. If $\pi$ is a half--space, then \[H^{n\pm}_{R}(\eta)\,\geq\, H^{n\pm}_{R\cap \pi}(\eta\cap\pi)\,.\] Re\-pea\-ted applications of the above inequality will yield the result stated in the lemma. 
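To see why repeated applications suffice, note that the parallelepiped $Q$ can be written as an intersection of half--spaces,
\[
Q\,=\,R\cap\pi_1\cap\cdots\cap\pi_{2d}\,,
\]
where each half--space $\pi_l$ is orthogonal to one of the coordinate axes. We apply the intermediate inequality successively to $\pi_1,\dots,\pi_{2d}$; at each step the number of pluses can only decrease, so the hypothesis $|\eta|\leq m_{n-1}$ remains in force, and we obtain $H^{n\pm}_{R}(\eta)\,\geq\, H^{n\pm}_{Q}(\eta|_Q)$.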
We consider first the case where $\pi$ is orthogonal to one of the first $n$ axes, say the $n$-th, and is given by the equation \[ \pi\,=\,\big\{\,x=(x_1,\dots,x_n,\dots,x_d):x_n\leq k+1/2\,\big\}\] where $k\in{\mathbb Z}$. We think of $\eta$ as the union of $d-1$ dimensional configurations which are obtained by intersecting $\eta$ with the layers \[L_i\,=\,\big\{\,x=(x_1,\dots,x_d)\in {\mathbb Z}^d: i-\frac{1}{2}\leq x_{n}<i+\frac{1}{2}\,\big\}\,,\qquad i\in{\mathbb Z}\,.\] Let us define the hyperplanes \[P_i\,=\,\big\{\,x=(x_1,\dots,x_d)\in {\mathbb Z}^d: x_{n}=i+ \frac{1}{2}\,\big\}\,,\qquad i\in{\mathbb Z}\,.\] We have \[ H^{n\pm}_{R}(\eta)\,=\, \sum_{i} H_{{\mathbb Z}^{d-1}}^{n-1\pm}(\eta\cap L_i)\,+\, \sum_{i} \,\text{area}( \partial\eta\cap P_i)\,. \] Yet, for any $i>k$, we have $\big|\eta\cap L_i\big|\leq m_{n-1}$ whence \[H_{{\mathbb Z}^{d-1}}^{n-1\pm}(\eta\cap L_i)\,\geq 0\,.\] Moreover \[ \sum_{i\geq k} \,\text{area}( \partial\eta\cap P_i)\,\geq\, \,| \eta\cap L_k| \,. \] This is because the boundary conditions are minus on the faces orthogonal to the $n$-th axis, hence there must be at least one unit interface above each plus site of the layer $L_k$. We conclude that \begin{multline*} H^{n\pm}_{R}(\eta)\,\geq\, \sum_{i\leq k} H_{{\mathbb Z}^{d-1}}^{n-1\pm}(\eta\cap L_i)\,+\, \sum_{i< k} \,\text{area}( \partial\eta\cap P_i) \,+\, \,| \eta\cap L_k| \\ \,=\, H^{n\pm}_{R\cap\pi}(\eta\cap\pi) \end{multline*} as requested. The case where $\pi$ is orthogonal to one of the last $d-n$ axes can be handled similarly. This case is even easier because the boundary conditions become plus along $\pi$ and contribute to lowering the energy. \end{proof} \begin{lemma} Let $Q,R$ be two parallelepipeds which are $n$--small and such that $Q\subset R$.
If $\eta\in{\cal R}_{n-1}(Q)$ then \[ \widetilde\mu_{R}^{n\pm}\big(\, \sigma|_{Q} =\eta \, \big)\,\leq\, (m_n+1) (n\ln\beta)^{dm_n}\exp\big(-\beta H^{n-1\pm}_{Q}(\eta) \big)\,.\] If $\eta\not\in {\cal R}_{n-1}(Q)$ then \[ \widetilde\mu_{R}^{n\pm}\big(\, \sigma|_{Q} =\eta \, \big)\,\leq\, (m_n+1) (n\ln\beta)^{dm_n} \exp(-\beta\Gamma_{n-1})\,. \] \end{lemma} \begin{proof} For any configuration $\eta$ in $Q$, \begin{multline*} \widetilde\mu_{R}^{n\pm}\big(\, \sigma|_{Q} =\eta \, \big)\,\leq\, \sum_{ \tatop{\scriptstyle \rho\in{\cal R}_n(R)} {\scriptstyle \rho|_{Q}=\eta}} \widetilde\mu_{R}^{n\pm}(\rho) \\ \,\leq\, \big|{\cal R}_n(R)\big|\, \max \,\big\{\,\widetilde\mu_{R}^{n\pm}(\rho): \rho\in{\cal R}_n(R),\, \rho|_{Q}=\eta\,\big\}\\ \,\leq\, (m_n+1)(n\ln\beta)^{dm_n} \exp\Big(-\beta \min \,\big\{\,H_{R}^{n\pm}(\rho): \rho\in{\cal R}_n(R),\, \rho|_{Q}=\eta\,\big\}\Big)\,. \end{multline*} If the minimum in the exponential is larger than or equal to $\Gamma_{n-1}$, then we have the desired inequality. Suppose that the minimum is less than $\Gamma_{n-1}$. Let $\rho\in{\cal R}_n(R)$ be such that $H^{n\pm}_{R}(\rho)\leq \Gamma_{n-1}$ and $\rho|_{Q}=\eta$. By lemma~\ref{control}, we have then also $|\rho| \leq m_{n-1}$. Let ${\mathcal C}(\rho)$ be the set of the connected components of $\rho$. 
Since $\rho\in{\cal R}_n(R)$, we have \[ \forall C \in{\mathcal C}(\rho)\qquad H^{n\pm}_{R}(C)\geq 0\, \] hence \[ H^{n\pm}_{R}(\rho)\,\geq\, \sum_{ \tatop{\scriptstyle C\in {\mathcal C}(\rho)} {\scriptstyle C\cap Q\neq\varnothing} } H^{n\pm}_{R}(C) \,.\] Let $C\in {\mathcal C}(\rho)$ be such that ${C\cap Q\neq\varnothing}.$ Since $|\rho|\leq m_{n-1}$, lemma~\ref{cutting} yields that \[ H^{n\pm}_{R}(C)\,\geq\, H^{n\pm}_{Q}(C\cap Q) \] whence \begin{multline*} H^{n\pm}_{R}(\rho)\,\geq\, \sum_{ \tatop{\scriptstyle C\in {\mathcal C}(\rho)} {\scriptstyle C\cap Q\neq\varnothing} } H^{n\pm}_{Q}(C\cap Q) \\ \,=\, H^{n\pm}_{Q}(\rho\cap Q) \,=\, H^{n\pm}_{Q}(\eta) \,\geq\, H^{n-1\pm}_{Q}(\eta) \,.\hfil \end{multline*} The last inequality is a consequence of the attractivity of the boundary conditions. It follows that $H^{n-1\pm}_{Q}(\eta)\leq\Gamma_{n-1}$ so that $\eta$ belongs to ${\cal R}_{n-1}(Q)$. In addition, we conclude that $$ \min \,\big\{\,H_{R}^{n\pm}(\rho): \rho\in{\cal R}_n(R),\, \rho|_{Q}=\eta\,\big\}\,\geq\, H^{n-1\pm}_{Q}(\eta) $$ which yields the desired inequality. \end{proof} \medskip \noindent {\bf Step 9: Reduction to a box $\Phi$ of side length of order $\exp(\beta L_{n-1})$.} \smallskip \noindent Thus the measure $\nu$ on the configurations in $\Delta$ satisfies the initial hypothesis at rank $n-1$. Let us set $\tau'_\beta=b-a$. We have then $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \tau'_\beta\,<\,{\kappa}_n-L_n\,=\,{\kappa}_{n-1}\,.$$ We are in position to apply the induction hypothesis at rank $n-1$. 
We define the box $$\Phi_0\,=\, \begin{cases} {\Lambda}^{1}(2\ln\ln\beta) \times {\Lambda}^{d-1}(\ln\beta) \qquad \text{if }n=1 \\ {\Lambda}^{n-1}( 2\exp({{\beta} L_{n-1}}) ) \times {\Lambda}^{d-n+1}(\ln\beta) \qquad \text{if }n\geq 2 \end{cases} $$ Up to a ${\hbox{\footnotesize\rm SES}}$ event, there is no space--time cluster of diameter larger than \[ \begin{cases} \ln\ln\beta \qquad \text{if }n=1 \\ \exp({{\beta} L_{n-1}}) \qquad \text{if }n\geq 2 \end{cases} \] in \[ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Delta,t}, 0\leq t\leq \tau'_\beta)\,.\] It follows that any ${\hbox{\footnotesize\rm STC}}$ of the above process is included in a translate of the box $\Phi_0$ and the vertical crossing of $\Delta$ can only occur in such a set. Thus $$\displaylines{ \sum_\zeta P\left( \begin{matrix} \text{there is a vertical crossing in} \\ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Delta,t}, 0\leq t\leq \tau'_\beta) \end{matrix} \right) \,\nu(\zeta)\, \leq\,\hfill\cr \sum_\Phi \sum_\zeta P\left( \begin{matrix} \text{there is a vertical crossing in} \\ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta) \end{matrix} \right) \,\nu(\zeta)\,+{\hbox{\footnotesize\rm SES}} }$$ where the sum over $\Phi$ runs over the translates of $\Phi_0$ included in $\Delta$. We estimate \[ \sum_\zeta P\left( \begin{matrix} \text{there is a vertical crossing in} \\ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta) \end{matrix} \right) \,\nu(\zeta) \] for $\Phi=x+\Phi_0$ a fixed translate of $\Phi_0$. \medskip \noindent {\bf Step 10: Reduction to boxes $\Phi_i\subset\Phi$ of vertical side length of order $\ln\beta/\ln\ln\beta$.} \smallskip \noindent We consider the following subsets of $\Phi$. Let us set $I=\ln\ln\beta$. 
If $n=1$, then we define for $1\leq i\leq I$ $$\Phi_i\,=\, x+{\Lambda}^{1}( 2\ln\ln\beta) \times \left[ \frac{i\ln\beta}{2\ln\ln\beta}, \frac{ (i+1/2) \ln\beta}{2\ln\ln\beta} \right] \times {\Lambda}^{d-2}(\ln\beta) \,. $$ If $n\geq 2$, then we define for $1\leq i\leq I$ $$\Phi_i\,=\, x+{\Lambda}^{n-1}( 2\exp({{\beta} L_{n-1}}) ) \times \left[ \frac{i\ln\beta}{2\ln\ln\beta}, \frac{ (i+1/2) \ln\beta}{2\ln\ln\beta} \right] \times {\Lambda}^{d-n}(\ln\beta) \,.$$ These sets are pairwise disjoint and satisfy, for $\beta$ large enough, $$\displaylines{ \forall i,j\leq I \,,\qquad i\neq j\quad\Rightarrow\quad d(\Phi_i,\Phi_j) \geq \frac{\ln\beta}{4\ln\ln\beta} > 5(d-n+2)\ln\ln\beta\,. }$$ If the set $\Phi$ endowed with $n-1\pm$ boundary conditions is vertically crossed before time $\tau'_\beta$, so are the sets $\Phi_i,\, 1\leq i\leq I$. The vertical side of $\Phi_i$ has length \[\frac{\ln\beta}{4\ln\ln\beta}\,>\, (d-n+3)\ln\ln\beta \] hence the vertical crossing of $\Phi_i$ implies that a ${\hbox{\footnotesize\rm STC}}$ of diameter larger than $(d-n+3)\ln\ln\beta$ has been created in $\Phi_i$.
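The two displayed inequalities hold only for $\beta$ quite large. The following snippet is a purely illustrative numerical sanity check, not part of the proof: the values $d=3$, $n=1$ are arbitrary sample choices, and we manipulate $S=\ln\beta$ directly, since $\beta$ itself would overflow floating--point arithmetic.

```python
import math

# Illustrative check, not part of the proof: the separation bound
#   d(Phi_i, Phi_j) >= ln(beta) / (4 ln ln(beta)) > 5 (d - n + 2) ln ln(beta)
# only holds once beta is very large.  We work with S = ln(beta) directly;
# d = 3 and n = 1 are sample values.
d, n = 3, 1

def separation_ok(S):
    """True when S / (4 ln S) exceeds 5 (d - n + 2) ln S, with S = ln(beta)."""
    return S / (4 * math.log(S)) > 5 * (d - n + 2) * math.log(S)

print(separation_ok(700.0))    # beta ~ exp(700): not yet large enough
print(separation_ok(1.0e6))    # beta ~ exp(10^6): the bound holds
```

The check confirms that "for $\beta$ large enough" is not vacuous but does require very large scales.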
\medskip \noindent {\bf Step 11: Conclusion of the induction step.} \smallskip \noindent By lemma~\ref{reason}, there exists an $(n-1)$--small box $Q_i$ included in $\Phi_i$ and a ${\hbox{\footnotesize\rm STC}}$ ${\cal C}'_i$ in ${\hbox{\footnotesize\rm STC}}_\zeta({\sigma}^{n-1\pm,\zeta}_{Q_i,t}, 0 \leq t\leq \tau'_\beta )$ such that $${\mathrm{diam }}{\cal C}'_i\,\geq\, (d-n+3)\ln\ln\beta\,.$$ Taking into account the hypothesis on the initial ${\hbox{\footnotesize\rm STC}}$ present in $\zeta$, \begin{multline*} {\mathrm{diam }}{\cal C}'_i\,\leq\,{\mathrm{diam }} {\hbox{\footnotesize\rm STC}}_\zeta ({\sigma}^{n-1\pm,\zeta}_{Q_i,t}, 0 \leq t\leq \tau'_\beta )\\ \,\leq\, \sum_{\tatop{\scriptstyle{\cal C}\in{\hbox{\footnotesize\rm STC}}(\zeta)} {\scriptstyle {\cal C}\cap Q_i\neq\varnothing}}{\mathrm{diam }}{\cal C}\,+\, {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n-1\pm,\zeta}_{Q_i,t}, 0\leq t\leq \tau'_\beta )\,\\ \,\leq\, (d-n+2)\ln\ln\beta+ {\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n-1\pm,\zeta}_{Q_i,t}, 0\leq t\leq \tau'_\beta )\,. \end{multline*} Therefore $${\mathrm{diam }}\,{\hbox{\footnotesize\rm STC}}( {\sigma}^{n-1\pm,\zeta}_{Q_i,t}, 0\leq t\leq \tau'_\beta )\,\geq\,\ln\ln\beta$$ and a large STC is formed in the process $({\sigma}^{n-1\pm,\zeta}_{\Phi_i,t}, 0\leq t\leq \tau'_\beta)$. 
We have thus \begin{multline*} P\left( \begin{matrix} \text{there is a vertical crossing in} \\ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta) \end{matrix} \right) \\ \,\leq\, P\left( \begin{matrix} \text{ each set $\Phi_i$ is vertically crossed} \\ \text{ in $({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta)$} \end{matrix} \right)\\ \,\leq\, P\left( \begin{matrix} \text{ for each $i\leq I$, a large ${\hbox{\footnotesize\rm STC}}$ is formed} \\ \text{ in the process $({\sigma}^{n-1\pm,\zeta}_{\Phi_i,t}, 0\leq t\leq \tau'_\beta)$} \end{matrix} \right)\\ \,\leq\, P\left( \begin{matrix} \text{for the process $({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta)$} \\ \text{ $\ln\ln\beta$ large ${\hbox{\footnotesize\rm STC}}$ are created in $(n-1)$--small } \\ \text{ parallelepipeds which are pairwise at } \\ \text{ distance larger than $5(d-n+2)\ln\ln\beta$ } \end{matrix} \right)\,.\\ \end{multline*} Since $\nu$ satisfies the hypothesis on the initial law at rank $n-1$, and since the volume $\Phi$ and the time $\tau'_\beta$ satisfy $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln |\Phi|\,\leq\, (n-1)L_{n-1} \,,\qquad \limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln \tau'_\beta\,<\,{\kappa}_{n-1}\,,$$ we can apply proposition~\ref{nuclei} to conclude that \[ \sum_\zeta P\left( \begin{matrix} \text{there is a vertical crossing in} \\ {\hbox{\footnotesize\rm STC}}_{\zeta}({\sigma}^{n-1\pm,\zeta}_{\Phi,t}, 0\leq t\leq \tau'_\beta) \end{matrix} \right) \,\nu(\zeta) \] is ${\hbox{\footnotesize\rm SES}}$. Coming back along the chain of inequalities, we see that $$P({\cal R}(S),\, \cT_a,\, {\cal V}_b)$$ is also ${\hbox{\footnotesize\rm SES}}$, as well as \[ \sum_{i,j}\,\, \sum_{a,b}\,\, P({\cal R}(S_i),\, \cT_a,\, {\cal V}_b)\, \] since the number of terms in the sums is of order exponential in $\beta$.
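The last assertion uses the fact that a sum of exponentially many (in $\beta$) terms, each of which is ${\hbox{\footnotesize\rm SES}}$, is still ${\hbox{\footnotesize\rm SES}}$. Here is a toy numerical illustration of this fact, not part of the proof; the rate $K=10$ is an arbitrary sample value.

```python
# Illustration, not part of the proof: a sum of exp(beta*K) terms, each
# bounded by a super-exponentially small quantity (here exp(-beta**2)),
# is still super-exponentially small.  K = 10 is an arbitrary sample
# value standing for the exponential order of the number of terms.
K = 10.0

def log_sum_bound(beta):
    """Logarithm of the bound exp(beta*K - beta**2) on the whole sum."""
    return beta * K - beta ** 2

# The logarithm eventually decreases faster than any linear -c*beta:
print(all(log_sum_bound(b) < -100.0 * b for b in (1000.0, 2000.0, 4000.0)))
```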
Coming back one more step, we obtain that $$\displaylines{ P\big(\exists\,{\cal C}\in {\hbox{\footnotesize\rm STC}}_\xi(0, \tau_\beta )\text{ with }{\mathrm{diam }}{\cal C}\geq \exp(\beta L_n)\big) }$$ is also ${\hbox{\footnotesize\rm SES}}$, as required. \subsection{Proof of the lower bound in theorem~\ref{mainfv}.} \label{concl} For technical convenience, we consider here boxes of side length $c\exp(\beta L)$. The statement of theorem~\ref{mainfv} corresponds to the special case where $c=1$. Let $L,c>0$ and let ${\Lambda}_\beta={\Lambda}(c\exp(\beta L))$ be a cubic box of side length $c\exp(\beta L)$. Let $\kappa$ be such that \[\kappa<\max(\Gamma_d-dL,\kappa_d)\] and let $\tau_\beta=\exp(\beta {\kappa})$. We have \begin{equation*} {\mathbb P}\tonda{{\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}(0)=1}\,=\, {\mathbb P}\left( \begin{matrix} ( 0,{\tau_\beta})\text{ belongs to a non void ${\hbox{\footnotesize\rm STC}}$}\\ \text{of the process $({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}}, 0\leq t\leq \tau_\beta)$} \end{matrix} \right)\,. \end{equation*} Let us denote by ${\cal C}^*$ the ${\hbox{\footnotesize\rm STC}}$ of the process $\smash{({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}}, 0\leq t\leq \tau_\beta)}$ containing the space--time point $( 0,{\tau_\beta})$. If $\smash{{\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}(0)}=-1$, then ${\cal C}^*=\varnothing$. We then write \begin{multline*} {\mathbb P}\tonda{{\sigma}_{{\Lambda}_\beta,\tau_\beta}^{-,\mathbf{-1}}(0)=1}\,=\,\\ {\mathbb P}\big({\cal C}^*\neq\varnothing,\,{\mathrm{diam }} {\cal C}^*<\ln\ln\beta\big) \,+\, {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,. \end{multline*} By lemma~\ref{cresSTC}, if ${\mathrm{diam }} {\cal C}^*<\ln\ln\beta$, then ${\cal C}^*$ is also a ${\hbox{\footnotesize\rm STC}}$ of the process $({\sigma}_{{\Lambda}(\ln\beta),t}^{-,\mathbf{-1}}, 0\leq t\leq \tau_\beta)$.
Thus \[ {\mathbb P}\big({\cal C}^*\neq\varnothing,\,{\mathrm{diam }} {\cal C}^*<\ln\ln\beta\big) \,\leq\, {\mathbb P}\big({\sigma}_{{\Lambda}(\ln\beta),\tau_\beta}^{-,\mathbf{-1}}(0)=1\big)\,. \] We use the processes $\smash{({\sigma}^{d\pm,\widetilde\mu}_{ {\Lambda}(\ln\beta),t},t\geq 0)}$ and $\smash{(\widetilde\sigma^{d\pm,\widetilde\mu}_{ {\Lambda}(\ln\beta),t},t\geq 0)}$ to estimate the last quantity: \begin{multline*} {\mathbb P}\big({\sigma}_{{\Lambda}(\ln\beta),\tau_\beta}^{-,\mathbf{-1}}(0)=1\big) \,\leq\, P \left( \begin{matrix} \text{nucleation occurs before $\tau_\beta$} \\ \text{ in the process } ({\sigma}_{{\Lambda}(\ln\beta),t}^{-,\mathbf{-1}}, t\geq 0)\\ \end{matrix} \right) \\ \,+\, P \left( \begin{matrix} {\sigma}_{{\Lambda}(\ln\beta),\tau_\beta}^{-,\mathbf{-1}}(0)=1 ,\, \text{ nucleation does not occur} \\ \text{before $\tau_\beta$ in the process } ({\sigma}_{{\Lambda}(\ln\beta),t}^{-,\mathbf{-1}}, t\geq 0) \\ \end{matrix} \right) \\ \,\leq\, P \left( \begin{matrix} \text{nucleation occurs before $\tau_\beta$} \\ \text{ in the process } ({\sigma}_{{\Lambda}(\ln\beta),t}^{d\pm,\widetilde\mu}, t\geq 0)\\ \end{matrix} \right) \,+\, P\big(\widetilde\sigma_{{\Lambda}(\ln\beta),{\tau_\beta}}^{d\pm,\widetilde\mu}(0)=1\big)\,. \end{multline*} Thanks to lemma~\ref{fugaup}, the first term is exponentially small in $\beta$. The second term is less than $\smash{\widetilde\mu_{{\Lambda}(\ln\beta)}^{d\pm}({\sigma}(0)=1)}$ which is also exponentially small in $\beta$. It remains to estimate \[ {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,. \] We distinguish two cases. \noindent $\bullet$ $L>L_d$. In this case, we write \begin{multline*} {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,=\,\\ {\mathbb P}\big(\ln\ln\beta\leq {\mathrm{diam }} {\cal C}^*\leq \exp(\beta L_d)\big) \,+\, {\mathbb P}\big({\mathrm{diam }} {\cal C}^*> \exp(\beta L_d)\big)\,. \end{multline*} We estimate separately each term. 
First \begin{multline*} {\mathbb P}\big({\mathrm{diam }} {\cal C}^*> \exp(\beta L_d)\big)\,\leq\,\\ P \left( \begin{matrix} \text{the process } ({\sigma}_{{\Lambda}_\beta,t}^{-,\mathbf{-1}}, 0\leq t\leq {\tau_\beta}) \text{ creates}\\ \text{a ${\hbox{\footnotesize\rm STC}}$ of diameter larger than $\exp({{\beta} L_d})$} \end{matrix} \right) \end{multline*} which is ${\hbox{\footnotesize\rm SES}}$ by theorem~\ref{T2}. Second, we have by lemma~\ref{cresSTC}, \begin{multline*} {\mathbb P}\big(\ln\ln\beta\leq {\mathrm{diam }} {\cal C}^*\leq \exp(\beta L_d)\big) \,\leq\,\\ P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created before time ${\tau_\beta}$ in} \\ \text{the process } ({\sigma}_{ {\Lambda}(3\exp(\beta L_d)) ,t}^{-,\mathbf{-1}}, t\geq 0) \end{matrix} \right)\,. \end{multline*} We have reduced the problem to the second case, which we handle next. \noindent $\bullet$ $L\leq L_d$. In this case, we write, with the help of lemma~\ref{cresSTC}, \begin{multline*} {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,\leq\, P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created before ${\tau_\beta}$} \\ \text{in the process } ({\sigma}^{-,\mathbf{-1}}_{ {\Lambda}_\beta ,t}, t\geq 0) \end{matrix} \right)\\ \,\leq\, \sum_{ \tatop{\scriptstyle Q \text{ $d$--small}} {\scriptstyle Q\subset {\Lambda}_\beta} } P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created before ${\tau_\beta}$} \\ \text{in the process } ({\sigma}^{-,\mathbf{-1}}_{ Q ,t}, t\geq 0) \end{matrix} \right) \,. \end{multline*} This inequality holds because the first large STC has to be created in a $d$--small box, by lemma~\ref{reason}. 
Finally, the term inside the summation is estimated as follows: \begin{multline*} P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created before ${\tau_\beta}$} \\ \text{in the process } ({\sigma}^{-,\mathbf{-1}}_{ Q ,t}, t\geq 0) \end{matrix} \right) \,\leq\,\\ P \left( \begin{matrix} \text{a large ${\hbox{\footnotesize\rm STC}}$ is created before nucleation} \\ \text{in the process } ({\sigma}^{-,\mathbf{-1}}_{ Q ,t}, t\geq 0) \end{matrix} \right)\,+\, \\ P \left( \begin{matrix} \text{nucleation occurs before ${\tau_\beta}$} \\ \text{in the process } ({\sigma}^{-,\mathbf{-1}}_{ Q ,t}, t\geq 0) \end{matrix} \right) \,. \end{multline*} By theorem~\ref{totcontrole} applied with ${\cal D}={\cal R}_d(Q)$, the first term of the right--hand side is ${\hbox{\footnotesize\rm SES}}$. By lemma~\ref{fugaup}, the second term is less than \[ 4{\beta} (m_d+2)^2 |Q|^{2m_d+2}\tau_\beta \, \exp(-{\beta}{\Gamma}_{d}) \,+\,{\hbox{\footnotesize\rm SES}}\,, \] whence \[ {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,\leq\, \big|{\Lambda}_\beta\big| 4{\beta} (d\ln\beta)^{d(2m_d+4)} \tau_\beta \,\exp(-{\beta}{\Gamma}_{d}) \,+\,{\hbox{\footnotesize\rm SES}}\,. \] It follows that $$\limsup_{{\beta}\to\infty} \frac{1}{\beta} \ln {\mathbb P}\big({\mathrm{diam }} {\cal C}^*\geq\ln\ln\beta\big)\,\leq\, dL+{\kappa}-{\Gamma}_d \,<\,0 \,,$$ and we are done. \section{The relaxation regime.} \label{relaxa} In this section, we prove the upper bound on the relaxation time stated in theorem~\ref{mainfv}. This part is considerably easier than the lower bound. The argument relies on the construction of an infection process, as done by Dehghanpour and Schonmann \cite{DS1} in dimension two, together with an induction on the dimension and a simple computation involving the associated growth model \cite{CM2}. Let us give a quick outline of the structure of the proof. To each site $x$ of the lattice, we associate the box of side length $\ln\beta$ centered at~$x$.
A site becomes infected once all the spins in the associated box are equal to $+1$. The site remains infected as long as the associated box contains at most $\ln\ln\beta$ minus spins (section~\ref{inucl}). We give a lower bound for the probability that a site becomes infected; this corresponds to a nucleation event. We estimate the probability that a neighbor of an infected site becomes infected; this corresponds to the spreading of the infection (section~\ref{ispread}). Finally, we define a simple scenario for the invasion of a box of side length $\exp(\beta L)$, starting from a single infected site (section~\ref{iinva}). We combine all these estimates and we obtain the required upper bound on the relaxation time. \subsection{The infection process.} \label{inucl} Let ${\Lambda}(\exp(\beta L))$ be a cubic box of side length $\exp(\beta L)$. Following the strategy of Dehghanpour and Schonmann \cite{DS1}, we define a renormalized process $(\mu_t)_{t\geq 0}$ on ${\Lambda}(\exp(\beta L))$ as follows.
For $x\in {\Lambda}(\exp(\beta L)) $, we set \[{\Lambda}_x\,=\,x+{\Lambda}^d(\ln\beta)\] and we define $T_x$ to be the first time when all the spins of the sites of the box ${\Lambda}_x$ are equal to $+1$ in the process $({\sigma}_{ {\Lambda}(\exp(\beta L)) ,t}^{-,\mathbf{-1}})_{t\geq 0}$: \[ T_x\,=\,\inf\,\big\{\,t\geq 0: \forall y\in {\Lambda}_x\quad {\sigma}_{ {\Lambda}(\exp(\beta L)) ,t}^{-,\mathbf{-1}}(y)=+1\,\big\}\,.\] For ${\Lambda}$ a box, we define the set ${\cal E}({\Lambda})$ to be the set of the configurations in ${\Lambda}$ having at most $\ln\ln\beta$ minus spins: $$ {\cal E}({\Lambda})\,=\,\Big\{\,\eta\in \smash{\{\,-1,+1\,\}^{\Lambda}}: \sum_{x\in{\Lambda}}\eta(x)\geq |{\Lambda}|-2\ln\ln\beta\,\Big\}\,.$$ We set finally \[ T'_x\,=\,\inf\,\big\{\,t\geq T_x: {\sigma}_{ {\Lambda}(\exp(\beta L)) ,t}^{-,\mathbf{-1}}|_{{\Lambda}_x}\not\in {\cal E}({\Lambda}_x) \,\big\}\,.\] The infection process $(\mu_t)_{t\geq 0}$ is given by \[\forall x\in {\Lambda}(\exp(\beta L)) \qquad \mu_t(x)\,=\, \begin{cases} 0 \qquad \text{if }t< T_x \\ 1 \qquad \text{if }T_x\leq t< T'_x \\ 0 \qquad \text{if }t\geq T'_x \\ \end{cases} \] We first show that, once a site is infected, with very high probability, it remains infected until time ${\tau_\beta}$. \bl{eaz} For any $x$ in $ {\Lambda}(\exp(\beta L)) $, \[ \forall C>0\qquad P\big(T'_x-T_x\leq\exp(\beta C)\big)\,=\,{\hbox{\footnotesize\rm SES}}\,.
\] \end{lemma} \begin{proof} From the Markov property and the monotonicity with respect to the boundary conditions, we have \begin{multline*} P\big(T'_x-T_x\leq\exp(\beta C)\big)\\ \,\leq\, P\Big( \text{for the process $({\sigma}_{{\Lambda}_x,t}^{-,\mathbf{+1}})_{t\geq 0}$} \quad \tau({\cal E}({\Lambda}_x))\leq \exp(\beta C)\,\Big)\,. \end{multline*} We consider the dynamics in ${\Lambda}_x$ starting from $\mathbf{+1}$ and restricted to the set ${\cal E}({\Lambda}_x)$, with $-$ boundary conditions on ${\Lambda}_x$. We denote by $(\widehat{{\sigma}}_{{\Lambda}_x,t}^{-,\mathbf{+1}})_{t\geq 0}$ the corresponding process. The invariant measure of this process is the Gibbs measure restricted to ${\cal E}({\Lambda}_x)$, which we denote by $\widehat{\mu}_{{\Lambda}_x}$: \[ \forall {\sigma}\in{\cal E}({\Lambda}_x)\qquad \widehat{\mu}_{{\Lambda}_x} ({\sigma})\,=\, \frac{ \mu_{{\Lambda}_x}^{-}({\sigma})} {\mu_{{\Lambda}_x}^{-}({\cal E}({\Lambda}_x))}\,. \] We use the graphical construction described in section~\ref{sigm} to couple the processes \[ ({{\sigma}}_{{\Lambda}_x,t}^{-,\mathbf{+1}})_{t\geq 0}\,,\qquad (\widehat{{\sigma}}_{{\Lambda}_x,t}^{-,\widehat{\mu}})_{t\geq 0} \,.
\] We define \[\partial^{\, in}{\cal E}({\Lambda}_x)\,=\,\big\{\,\sigma\in {\cal E}({\Lambda}_x):\exists\, y\in{\Lambda}_x\quad{\sigma}^y\not\in {\cal E}({\Lambda}_x)\,\big\}\,.\] Proceeding as in lemma~\ref{fugaup}, we obtain that \begin{multline*} P\Big( \text{for the process $({\sigma}_{{\Lambda}_x,t}^{-,\mathbf{+1}})_{t\geq 0}$} \quad \tau({\cal E}({\Lambda}_x))\leq \exp(\beta C)\,\Big)\\ \,\leq\, P\big(\exists\, t\leq\exp(\beta C)\quad \widehat{{\sigma}}_{{\Lambda}_x,t}^{-,\widehat{\mu}}\in \partial^{\, in}{\cal E}({\Lambda}_x) \big)\\ \,\leq\, 4{\beta}\lambda \, \widehat\mu_{{\Lambda}_x}^{-}\big( \partial^{\, in} {\cal E}({\Lambda}_x)\big) +\exp(-{\beta}\lambda\ln{\beta}) \end{multline*} where $\lambda=(\ln \beta)^d \exp(\beta C)$.
Next, if $\eta\in \partial^{\, in}{\cal E}({\Lambda}_x)$, then \[ \sum_{y\in{\Lambda}_x}\eta(y)\leq |{\Lambda}_x|-2\ln\ln\beta+1\,,\] and \[ H_{{\Lambda}_x}^{-}(\eta)\,-\, H_{{\Lambda}_x}^{-}(\mathbf{+1})\,\geq\,h(\ln\ln\beta-1) \] so that \[ \widehat\mu_{{\Lambda}_x}^{-}(\eta) \,\leq\, \exp\big(-\beta h(\ln\ln\beta-1)\big)\,.\] Thus \begin{multline*} \widehat\mu_{{\Lambda}_x}^{-}\big( \partial^{\, in} {\cal E}({\Lambda}_x)\big) \,\leq\, \big| \partial^{\, in} {\cal E}({\Lambda}_x) \big|\,\max\,\big\{\, \widehat\mu_{{\Lambda}_x}^{-}(\eta): \eta\in \partial^{\, in}{\cal E}({\Lambda}_x)\,\big\}\\ \,\leq\, \Big((\ln \beta)^d\Big)^{\ln\ln\beta} \exp\big(-\beta h(\ln\ln\beta-1)\big)\,. \end{multline*} This last quantity is ${\hbox{\footnotesize\rm SES}}$ and the lemma is proved.\end{proof} \subsection{Spreading of the infection.} \label{ispread} We show first that any configuration in ${\cal E}({\Lambda}_x)$ can reach the configuration $\mathbf{+1}$ through a downhill path. \bl{down} Let $\eta$ belong to ${\cal E}({\Lambda}_x)$. There exists a sequence of $r\leq\ln\ln\beta$ distinct sites $x_1,\dots,x_r$ such that, if we set ${\sigma}_0=\eta$ and \[\forall i\in\{\,1,\dots,r\,\}\quad {\sigma}_i\,=\,{\sigma}_{i-1}^{x_i}\,,\] then we have ${\sigma}_r=\mathbf{+1}$ and, for $i\in\{\,1,\dots,r\,\}$, $\eta(x_i)={\sigma}_{i-1}(x_i)= -1$ and $x_i$ has at least $d$ plus neighbors in ${\sigma}_{i-1}$. \end{lemma} \begin{proof} We prove the result by induction over the dimension $d$. Suppose first that $d=1$. Let $\eta$ be a configuration in ${\cal E}({\Lambda}^1(\ln\beta))$.
Let $x_0\in{\Lambda}^1(\ln\beta)$ be such that $\eta(x_0)=+1$. We define then \begin{align*} x_1\,&=\,\max\,\big\{\,y< x_0:\eta(y)=-1\,\big\}\,,\\ &\dots\\ x_k\,&=\,\max\,\big\{\,y< x_{k-1}:\eta(y)=-1\,\big\}\,,\\ x'_1\,&=\,\min\,\big\{\,y> x_0:\eta(y)=-1\,\big\}\,,\\ &\dots\\ x'_l\,&=\,\min\,\big\{\,y> x'_{l-1}:\eta(y)=-1\,\big\}\,. \end{align*} The sequence of sites $x_1,\dots,x_k,x'_1,\dots,x'_l$ has the required properties. Suppose that the result has been proved at rank $d-1$. Let $\eta$ be a configuration in ${\cal E}({\Lambda}^d(\ln\beta))$. We consider the hyperplanes \[P_i\,=\,\{\,x=(x_1,\dots,x_d)\in {\mathbb Z}^d: x_{d}=i\,\}\,,\qquad i\in{\mathbb Z}\,\] and we denote by $\eta_i$ the restriction of $\eta$ to $P_i$. The configuration $\eta_i$ can naturally be identified with a $d-1$ dimensional configuration. Since there are at most $\ln\ln\beta$ minuses in the configuration $\eta$, there exists an index $i^*$ such that $\eta_{i^*}=\mathbf{+1}$. We apply next the induction result at rank $d-1$ to $\eta_{i^*+1}$. This way, we can fill $P_{i^*+1}\cap{\Lambda}^d(\ln\beta)$ with a sequence of positive spin flips which never increase the $d-1$ dimensional energy. Each site which is flipped in $\eta_{i^*+1}$ has at least $d-1$ plus neighbors in $P_{i^*+1}$, hence, since the layer $P_{i^*}$ is entirely plus, at least $d$ plus neighbors in ${\Lambda}^d(\ln\beta)$. Thus no spin flip of this sequence increases the $d$ dimensional energy. We iterate the argument, filling successively the sets $P_i\cap{\Lambda}^d(\ln\beta)$ above and below $i^*$ until the box ${\Lambda}^d(\ln\beta)$ is completely filled. \end{proof} \noindent This result leads directly to a lower bound on the probability of reaching the configuration $\mathbf{+1}$ within a short time, starting from a configuration of ${\cal E}({\Lambda}^d(\ln\beta))$.
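The filling order constructed in this proof is completely explicit. The following sketch is purely illustrative and not part of the paper's argument: it implements the construction in dimension $d=2$ on a small grid and replays the flips, checking the conclusion of the lemma, namely that every flipped site carries a minus spin and has at least $d=2$ plus neighbors at the moment of its flip.

```python
# Illustrative sketch (not from the paper): the downhill filling order of
# the lemma above, for d = 2.  A fully plus row "istar" is located, then
# the rows are filled outward, each row by the one-dimensional left/right
# sweep of the d = 1 case.

def fill_order(grid):
    """Return the flip sequence turning grid (lists of +-1) into all +1."""
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    istar = next(i for i in range(rows) if all(s == 1 for s in g[i]))
    flips = []

    def fill_row(i):
        # 1D sweep: start from a plus site, spread left then right.
        j0 = next(j for j in range(cols) if g[i][j] == 1)
        for j in list(range(j0 - 1, -1, -1)) + list(range(j0 + 1, cols)):
            if g[i][j] == -1:
                flips.append((i, j))
                g[i][j] = 1

    for i in range(istar + 1, rows):
        fill_row(i)
    for i in range(istar - 1, -1, -1):
        fill_row(i)
    return flips

def check_downhill(grid):
    """Replay the flips; each must hit a minus spin with >= 2 plus neighbors."""
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for (i, j) in fill_order(grid):
        plus = sum(1 for (a, b) in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                   if 0 <= a < rows and 0 <= b < cols and g[a][b] == 1)
        if g[i][j] != -1 or plus < 2:
            return False
        g[i][j] = 1
    return all(s == 1 for row in g for s in row)

# A 5x5 configuration with three minus spins (row 2 is fully plus):
eta = [[1, 1, -1, 1, 1],
       [1, 1, 1, 1, -1],
       [1, 1, 1, 1, 1],
       [-1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1]]
print(check_downhill(eta))
```

The same recursive scheme extends to higher dimensions by filling the layers $P_i$ one at a time.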
\bc{zddz} For any configuration $\eta$ in ${\cal E}({\Lambda}_x)$, we have \[ P\big( {\sigma}_{{\Lambda}_x, \ln\ln\beta }^{-,\eta}=\mathbf{+1} \,\big) \,\geq\, 7^{-|{\Lambda}_x|\ln\ln\beta}\,. \] \end{corollary} \begin{proof} Let $\eta\in {\cal E}({\Lambda}_x)$ and let $x_1,\dots,x_r$, $r\leq\ln\ln\beta$, be a sequence of sites as given by lemma~\ref{down}. We evaluate the probability that, starting from $\eta$, the successive spin flips at $x_1,\dots,x_r$ occur. For $i\in\{\,1,\dots,r\,\}$, let $E_i$ be the event: during the time interval $[i-1,i]$, there is a time arrival for the Poisson process associated to the site $x_i$, and none for the other sites of the box ${\Lambda}_x$. Let $F$ be the event that there is no arrival for the Poisson processes in the box ${\Lambda}_x$ during $[r,\ln\ln\beta]$. We have then $$\displaylines{ P(F)\,=\, \exp\big(-|{\Lambda}_x|(\ln\ln\beta-r)\big)\,,\cr \forall i \in\{\,1,\dots,r\,\}\qquad P(E_i)\,=\,\Big(1- \frac{1}{e}\Big) \exp\big(-(|{\Lambda}_x|-1)\big) \,\geq\, \exp\big(-|{\Lambda}_x|\big) }$$ and \[ P\Big(F\cap\bigcap_{1\leq i\leq r}E_i\Big) \,=\, P(F)\times \prod_{1\leq i\leq r}P(E_i) \,\geq\, \exp\big(-|{\Lambda}_x|\ln\ln\beta\big) \,\geq\, 7^{-|{\Lambda}_x|\ln\ln\beta}\,. \] Indeed, the event $E_1\cap\cdots\cap E_r\cap F$ implies that, at time $r$, the process starting from $\eta$ has reached the configuration $\mathbf{+1}$ and that it does not move until time $\ln\ln\beta$. \end{proof} \noindent For $x\in {\Lambda}(\exp(\beta L)) $, we define the enlarged neighborhood ${\Lambda}'_x$ of ${\Lambda}_x$ as \[ {\Lambda}'_x\,=\,\bigcup_{y:|y-x|=1} {\Lambda}_y\,. \] \bp{nucl} Let $n\in\{\,1,\dots,d\,\}$.
Let $\eta$ be a configuration in ${\Lambda}'_x$ such that there exist $d-n$ neighbors $y_1,\dots,y_{d-n}$ of $x$ in $d-n$ distinct directions for which the restriction $\eta|_{{\Lambda}_{y_i}}$ is in ${\cal E}({\Lambda}_{y_i})$ for $i\in\{\,1,\dots,d-n\,\}$. We have the following estimates: \noindent {\bf Nucleation:} For any $\kappa$ such that ${\Gamma}_{n-1}<\kappa<{\Gamma}_{n}$ and $\varepsilon>0$, we have for $\beta$ large enough \[ P\left( \begin{matrix} \text{in the process }({\sigma}_{{\Lambda}'_x,t}^{-,\eta})_{t\geq 0},\text{ the site $x$}\\ \text{becomes infected before time $\exp(\beta\kappa)$} \end{matrix} \right)\,\geq\,\exp\big(\beta(\kappa-{\Gamma}_{n}-\varepsilon)\big)\,. \] {\bf Spreading:} For any $\kappa$ such that $\kappa>{\Gamma}_{n}$, we have \[ P\left( \begin{matrix} \text{in the process }({\sigma}_{{\Lambda}'_x,t}^{-,\eta})_{t\geq 0},\text{ the site $x$ has}\\ \text{not become infected by time $\exp(\beta\kappa)$} \end{matrix} \right)\,=\,{\hbox{\footnotesize\rm SES}}\,. \] \end{proposition} \begin{proof} We consider the process $({\sigma}_{{\Lambda}'_x,t}^{-,\eta})_{t\geq 0}$ and we set $$\tau_\mathbf{+1}\,=\,{\tau}({\{\,\mathbf{+1}|_{{\Lambda}'_x}\,\}^c})\,=\, \inf\,\big\{\,t\geq 0:\forall y\in{\Lambda}'_x\quad {\sigma}_{{\Lambda}'_x,t}^{-,\eta}(y)=+1\,\big\} $$ the hitting time of the configuration equal to $+1$ everywhere in ${\Lambda}'_x$. Let $$I\,=\,\exp({\beta}\kappa)- \exp({\beta}{\Gamma}_{n-1})$$ and let $\theta$ be the time of the last visit to $\mathbf{-1}|_{{\Lambda}_x}$ before reaching $\mathbf{+1}|_{{\Lambda}'_x}$: $$\theta\,=\,\sup\,\big\{\,t\leq \tau_{\mathbf{+1}} :\forall y\in{\Lambda}_x\quad {\sigma}_{{\Lambda}'_x,t}^{-,\eta}(y)=-1\,\big\} \,.$$ In case the process does not visit $\mathbf{-1}|_{{\Lambda}_x}$ before $\tau_{\mathbf{+1}}$, we set $\theta=0$.
Let $\alpha$ be the configuration in ${\Lambda}'_x$ such that \[\forall y\in{\Lambda}'_x\qquad \alpha(y)\,=\, \begin{cases} +1 \qquad \text{if }y\in\bigcup_{1\leq i\leq d-n}{\Lambda}_{y_i} \\ -1 \qquad \text{if }y\in{\Lambda}_x \\ \end{cases} \] We write, using the Markov property, $$\displaylines{ {\mathbb P}\big({\tau}_\mathbf{+1}< \exp({\beta}\kappa)\big) \,\geq\,\hfill\cr \sum_{0\leq i \leq I} {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\eta}=\alpha,\,i\leq\theta<i+1,\, {\tau}_\mathbf{+1}< i+ \exp({\beta}{\Gamma}_{n-1})\big) \cr \,\geq\, \Big(\sum_{0\leq i \leq I} {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\eta}=\alpha,\, {\tau}_\mathbf{+1}> i \big)\Big) \, {\mathbb P} \left( \begin{matrix} \text{for the process } ({\sigma}_{{\Lambda}'_x,t}^{-,\alpha})_{t\geq 0}\\ 0\leq\theta<1,\, {\tau}_\mathbf{+1}< \exp({\beta}{\Gamma}_{n-1}) \end{matrix} \right) \,. }$$ By proposition~\ref{dee}, the maximal depth in the reference cycle path in the box ${\Lambda}_x$ with $n\pm$ boundary conditions is strictly less than~${\Gamma}_{n-1}$, so that we have for ${\varepsilon}>0$ and ${\beta}$ large enough $${\mathbb P} \left( \begin{matrix} \text{for the process } ({\sigma}_{{\Lambda}'_x,t}^{-,\alpha})_{t\geq 0}\\ 0\leq\theta<1,\, {\tau}_\mathbf{+1}< \exp({\beta}{\Gamma}_{n-1}) \end{matrix} \right) \,\geq\,\exp\big(-{\beta}({\Gamma}_n+{\varepsilon})\big) \,.$$ This estimate is a continuous--time analog of theorem~$5.2$ and proposition~$10.9$ of \cite{CaCe}. It relies on a continuous--time formula giving the expected exit time given the exit point, which is the analog of lemma~$10.2$ of \cite{CaCe}. Let ${\cal C}_n^\alpha$ be the largest cycle included in $\{\,-1,+1\,\}^{{\Lambda}'_x}$ containing $\alpha$ and not $\mathbf{+1}$. 
For $i\leq I$, we have $$\displaylines{ {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\eta}=\alpha,\, {\tau}_\mathbf{+1}> i \big)\,\geq \, {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\eta}=\alpha,\, {\tau}({\cal C}_n^\alpha)> i \big)\hfill\cr \,\geq \, {\mathbb P}\big( \text{for the process } ({\sigma}_{{\Lambda}'_x,t}^{-,\alpha})_{t\geq 0},\quad {\tau}({\cal C}_n^\alpha)> I \big) {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\alpha}=\alpha\,|\, {\tau}({\cal C}_n^\alpha)>i \big)\,. }$$ Since $\kappa<{\Gamma}_n$, we have $$\lim_{{\beta}\to\infty}{\mathbb P}\big( \text{for the process } ({\sigma}_{{\Lambda}'_x,t}^{-,\alpha})_{t\geq 0},\quad {\tau}({\cal C}_n^\alpha)> I \big)\,=\,1\,. $$ This follows from the continuous--time analog of corollary~$10.8$ of \cite{CaCe}. We compare next the process starting from $\alpha$ with the process starting from $\widetilde\mu_{{\cal C}_n^\alpha}$, the Gibbs measure restricted to the metastable cycle ${\cal C}_n^\alpha$. We have \begin{multline*} \widetilde\mu_{{\cal C}_n^\alpha}(\alpha) \,=\, \sum_{\eta\in {\cal C}_n^\alpha} \widetilde\mu_{{\cal C}_n^\alpha}(\eta) {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\eta}=\alpha\,\big|\, {\tau}({\cal C}_n^\alpha)>i \big) \\ \,\leq\, \widetilde\mu_{{\cal C}_n^\alpha}(\alpha) {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\alpha}=\alpha\,\big|\, {\tau}({\cal C}_n^\alpha)>i \big) + \sum_{\eta\in {\cal C}_n^\alpha,\eta\neq\alpha} \widetilde\mu_{{\cal C}_n^\alpha}(\eta)\,. \end{multline*} The configuration $\alpha$ is the bottom of the cycle ${\cal C}_n^\alpha$, thus there exists $\delta>0$ such that $$\forall \eta\in{\cal C}_n^\alpha\,,\quad \eta\neq\alpha\quad\Longrightarrow\quad \widetilde\mu_{{\cal C}_n^\alpha}(\eta)\,\leq\, \widetilde\mu_{{\cal C}_n^\alpha}(\alpha) \exp(-\beta\delta) \,. $$ For $\beta$ large enough, we have also $\big|{\cal C}_n^\alpha\big|\,\leq\,\exp(\beta\delta/2)$, so that the last sum is at most $\widetilde\mu_{{\cal C}_n^\alpha}(\alpha)\exp(-\beta\delta/2)$.
We conclude that $$ {\mathbb P}\big(\sigma_{{\Lambda}'_x,i}^{-,\alpha}=\alpha\,\big|\, {\tau}({\cal C}_n^\alpha)>i \big)\,\geq\, 1-\exp(-\beta\delta/2)\,. $$ Combining these estimates, we obtain for ${\beta}$ large enough $$\displaylines{ {\mathbb P}\big({\tau}_\mathbf{+1}< \exp({\beta}\kappa)\big) \,\geq\, I \exp\big(-{\beta}({\Gamma}_n+{\varepsilon})\big)\,. }$$ Sending successively ${\beta}$ to $\infty$ and ${\varepsilon}$ to~$0$ yields the desired lower bound. The second estimate stated in the proposition is a standard consequence of the first. \end{proof} \subsection{Invasion.} \label{iinva} We denote by $e_1,\dots,e_d$ the canonical orthonormal basis of $\mathbb R^d$. We will prove the following result by induction on $n$. \bp{inva} Let $n\in\{\,0,\dots,d\,\}$ and let $L\geq 0$. Let ${\Lambda}_\beta^n$ be the parallelepiped $${\Lambda}_\beta^n\,=\,{\Lambda}^{n}(\exp(\beta L))\times {\Lambda}^{d-n}(1)\,.$$ For any $s\geq 0$ and any $\kappa> \max\big( \Gamma_{n}-nL, \kappa_{n}\big)$, we have \[ P\!\left( \begin{matrix} \text{all the sites of ${\Lambda}_\beta^n$ are} \\ \text{infected at time} \\ \text{$s +\exp(\beta \kappa)$} \end{matrix} \,\, \bigg| \,\,\, \begin{matrix} \text{all the sites of} \\ \text{$e_{n+1}+{\Lambda}_\beta^n,\dots, e_{d}+{\Lambda}_\beta^n$} \\ \text{ are infected at time $s$ } \end{matrix} \right)\,=\,1-{\hbox{\footnotesize\rm SES}}\,. \] \end{proposition} \begin{proof} Thanks to the Markovian character of the process, we only need to consider the case $s=0$. Let us consider first the case $n=0$. We have then $\kappa_0=\Gamma_0=0$. The box ${\Lambda}_\beta^0$ is reduced to the singleton $\{\,0\,\}$, and the result is an immediate consequence of proposition~\ref{nucl}. We suppose now that $n\geq 1$ and that the result has been proved at rank $n-1$. Let $L>0$, let ${\Lambda}_\beta^n$ be a parallelepiped as in the statement of the proposition, and let $\kappa> \max( \Gamma_{n}-nL, \kappa_{n})$.
We define the nucleation time $\tau_{\text{nucleation}}$ in $\Lambda_\beta^n$ as \begin{equation*} \tau_{\text{nucleation}} \,=\,\inf\,\big\{\,t\geq 0:\exists\, x\in\Lambda^n_\beta \quad \mu_{t}(x)=1\,\big\}\,. \end{equation*} Let $c> \max( \Gamma_{n}-nL, \Gamma_{n-1})$. Let $(x_i)_{i\in I}$ be a family of sites of ${\Lambda}_\beta^n$ which are pairwise at distance larger than $4\ln\beta$ and such that $$|I|\,\geq\, \frac{\exp(\beta L n)}{(6\ln\beta)^{n}}\,.$$ We can for instance consider the sites of the sublattice $(5\ln\beta){\mathbb Z^n}\times {\Lambda}^{d-n}(1)$ which are included in ${\Lambda}_\beta^n$. For $i\in I$, let $\eta_i$ be the initial configuration restricted to the box ${{\Lambda}'_{x_i}}$. We write \begin{multline*} {\mathbb P}\big( \tau_{\text{nucleation}} \,>\,\exp(\beta c)\big) \,\leq\, P\left( \begin{matrix} \text{no site $x$ in ${\Lambda}_\beta^n$ has become}\\ \text{infected by time $\exp(\beta c)$} \end{matrix} \right)\\ \,\leq\, P\left( \begin{matrix} \text{for any $i$ in $I$,} \text{ the site $x_i$ has not} \\ \text{become infected by time $\exp(\beta c)$}\\ \text{in the process }({\sigma}_{{\Lambda}'_{x_i},t}^{-,\eta_i})_{t\geq 0} \end{matrix} \right)\\ \,\leq\, \prod_{i\in I}\,\, P\left( \begin{matrix} \text{ the site $x_i$ has not become infected by}\\ \text{time $\exp(\beta c)$} \text{ in the process }({\sigma}_{{\Lambda}'_{x_i},t}^{-,\eta_i})_{t\geq 0} \end{matrix} \right)\,. \end{multline*} Since all the sites of $\text{$e_{n+1}+{\Lambda}_\beta^n,\dots, e_{d}+{\Lambda}_\beta^n$}$ are initially infected, by proposition~\ref{nucl} we have for any $\varepsilon>0$ $$ {\mathbb P}\big( \tau_{\text{nucleation}} \,>\,\exp(\beta c)\big) \,\leq\, \Big(1-\exp\big(\beta(c-{\Gamma}_{n}-\varepsilon)\big)\Big)^{ \textstyle\frac{\exp(\beta L n)}{(6\ln\beta)^{n}} }\,. $$ Therefore, up to a SES event, the first infected site in the box $\smash{{\Lambda}^n_\beta}$ appears before time $\exp(\beta c)$. 
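To spell out why this last bound is SES (a step left implicit in the display), one can use the elementary inequality $1-u\leq e^{-u}$:

```latex
\Big(1-\exp\big(\beta(c-{\Gamma}_{n}-\varepsilon)\big)\Big)^{
\textstyle\frac{\exp(\beta L n)}{(6\ln\beta)^{n}}}
\,\leq\,
\exp\bigg(-\,\frac{\exp\big(\beta(c-{\Gamma}_{n}+nL-\varepsilon)\big)}
{(6\ln\beta)^{n}}\bigg)\,.
```

Since $c>\Gamma_n-nL$, choosing $\varepsilon<c-\Gamma_n+nL$ makes the exponent grow like a positive exponential of $\beta$, so the right-hand side is indeed super-exponentially small.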
For $i\geq 1$, we define the first time $\tau^i$ when there is an $n$-dimensional parallelepiped of infected sites of diameter larger than or equal to $i$ in $\Lambda_\beta^n$, i.e., $$\displaylines{ \tau^i\,=\,\inf\, \left\{ \,t\geq 0: \begin{matrix} \text{there is an $n$-dimensional parallelepiped} \\ \text{of infected sites included in $\Lambda_\beta^n$ whose}\\ \text{$\, {\rm d}_{\infty}\, $ diameter is larger than or equal to $i$} \end{matrix} \right\} }$$ The face of an $n$-dimensional parallelepiped is an $(n-1)$-dimensional parallelepiped. The sites of a face of an infected parallelepiped in ${\Lambda}_\beta^n$ already have $d-n+1$ infected neighbours. From the induction hypothesis, up to a SES event, an $(n-1)$-dimensional box of sidelength $\exp(\beta K)$ whose sites already have $d-n+1$ infected neighbours is fully infected within a time $$\exp\Big(\beta\big( \max(\Gamma_{n-1}-(n-1)K,\kappa_{n-1})+\varepsilon\big) \Big)\,. $$ This implies that, up to a SES event, the box ${\Lambda}_\beta^n$ is fully occupied by time $\tau^{\exp(\beta L)}$, and $$\displaylines{ \tau^ {\exp(\beta L)} \,\leq\, \tau_{\text{nucleation}} \,+\, \sum_{1\leq i< {\exp(\beta L)} } (\tau^{i+1} - \tau^{i}) \hfill\cr \,\leq\, \exp(\beta c) + \sum_{1\leq i< \exp(\beta L) } 2n\exp\Big(\beta\big( \max(\Gamma_{n-1}-\frac{n-1}{\beta}\ln i,\kappa_{n-1})+\varepsilon\big) \Big) }$$ We consider two cases. \noindent $\bullet$ First case: $L\leq L_{n-1}$. Notice that $L_0=0$, hence this case can happen only when $n\geq 2$. In this case, we have $$ \forall i< \exp(\beta L)\qquad \kappa_{n-1}\,\leq\, \Gamma_{n-1}-\frac{n-1}{\beta}\ln i$$ and $$\displaylines{ \sum_{1\leq i< \exp(\beta L)} \exp\Big(\beta \max(\Gamma_{n-1}-\frac{n-1}{\beta}\ln i,\kappa_{n-1}) \Big) \hfill\cr \,\leq\, \exp(\beta \Gamma_{n-1}) \sum_{1\leq i< \exp(\beta L)} \frac{1}{i^{n-1}} \cr \,\leq\, \exp(\beta \Gamma_{n-1}) \sum_{1\leq i< \exp(\beta L)} \frac{1}{i} \,\leq\, \beta L \exp(\beta \Gamma_{n-1})\,. }$$ \noindent $\bullet$ Second case: $L> L_{n-1}$.
We have then $$\displaylines{ \sum_{ \exp(\beta L_{n-1}) \leq i< \exp(\beta L)} \exp\Big(\beta \max(\Gamma_{n-1}-\frac{n-1}{\beta}\ln i,\kappa_{n-1}) \Big) \hfill\cr \,\leq\, \Big( \exp(\beta L)- \exp(\beta L_{n-1})\Big) \exp(\beta \kappa_{n-1}) \cr \,\leq\, \exp\Big(\beta (L+\kappa_{n-1})\Big)\,. }$$ We conclude that, in both cases, for any $\varepsilon>0$, up to a SES event, the box ${\Lambda}_\beta^n$ is fully occupied at time $$\displaylines{ 2n\beta L\exp(\beta\varepsilon) \left( \exp\Big(\beta( \Gamma_{n}-nL) \Big) + \exp(\beta \Gamma_{n-1}) + \exp\Big(\beta (L+\kappa_{n-1})\Big) \right)\,. }$$ Therefore, for any $\kappa$ such that $$\kappa\,>\, \max\big( \Gamma_{n}-nL, \Gamma_{n-1}, L+\kappa_{n-1}\big)$$ the probability that the box ${\Lambda}_\beta^n$ is not fully occupied at time $\exp(\beta\kappa)$ is SES. If $L\leq L_n$, then $$\max\big( \Gamma_{n}-nL, \Gamma_{n-1}, L+\kappa_{n-1}\big) \,=\, \Gamma_{n}-nL $$ and we have the desired estimate. Suppose next that $L> L_n$. By the previous result applied with $L=L_n$, we know that, for any $\kappa>\kappa_n$, up to a SES event, a box of sidelength $\exp(\beta L_n)$ is fully occupied at time $\exp(\beta\kappa)$. Set $\tau_\beta=\exp(\beta\kappa)$. We cover ${\Lambda}_\beta^n$ by boxes of sidelength $\exp(\beta L_n)$. Such a cover contains at most $\exp(\beta nL)$ boxes, thus $$\displaylines{ {\mathbb P}\tonda{ \text{${\Lambda}_\beta^n$ is not fully occupied at time $\tau_\beta$} }\hfill\cr \,\leq\, {\mathbb P}\left( \begin{matrix} \text{there exists a box included in ${\Lambda}_\beta^n$ of sidelength }\\ \text{ $\exp(\beta L_n)$ which is not fully occupied at time $\tau_\beta$} \end{matrix} \right)\cr \,\leq\, \exp(\beta nL)\, {\mathbb P}\tonda{ \begin{matrix} \text{the box ${\Lambda}^n( \exp(\beta L_n))$ is not}\\ \text{fully occupied at time $\tau_\beta$} \end{matrix} }\,. }$$ The last probability being SES, we are done.
\end{proof} \noindent Proposition~\ref{inva} with $n=d$ readily yields the upper bound on the relaxation time stated in theorem~\ref{mainfv}. \bibliographystyle{alpha}
\section*{Results} \subsection*{Model Description} \begin{figure*}[h] \centering \includegraphics[width=0.9\textwidth]{fig_Schematic.pdf} \caption{Schematic representation of the model with its main geometrical parameters. (\textit{A}) Two adjacent stereocilia are represented with their basal elastic linkages to the cuticular plate. (\textit{B} and \textit{C}) The transduction unit, encircled in dashed red, is detailed for two different geometries: when the tip link’s central axis is perpendicular to the stereociliary membrane (\textit{B}) and in the generic case where the stereociliary membrane makes an angle with the perpendicular to the tip link’s central axis (\textit{C}). In \textit{B}, two different realizations are displayed: when tension in the tip link is low, in which case the channels are most likely to be closed (\textit{B, Left}), and when tension in the tip link is high, in which case the channels are most likely to be open (\textit{B, Right}). In \textit{C}, only the case of low tip-link tension is represented. (\textit{D}) We plot here the elastic membrane potentials in units of $k_{\rm B}T$ and as functions of the distance $a$ between the channels and the tip link's central axis. Each curve corresponds to a different configuration of the channel pair: CC (blue), OC (red), and OO (green). The analytic expressions of these potentials, together with the values of the associated parameters, are given in \textit{Materials and Methods}. }\label{FigSchematicModel} \end{figure*} We describe here the basic principles of our model, illustrated in Fig.~\ref{FigTwoChannelModel}. Structural data indicate that the tip link is a dimeric, string-like protein that branches at its lower end into two single strands, which anchor to the top of the shorter stereocilium~\cite{kachar_high-resolution_2000,kazmierczak_cadherin_2007}. The model relies on three main hypotheses. First, each strand of the tip link connects to one MET channel, mobile within the membrane. 
Second, an intracellular spring---referred to as the adaptation spring---anchors each channel to the cytoskeleton, in agreement with the published literature~\cite{fettiplace_physiology_2014, howard_hypothesis:_2004, zhang_ankyrin_2015, powers_stereocilia_2012}. Third, and most importantly, the two MET channels interact via membrane-mediated elastic forces, which are generated by the mismatch between the thickness of the hydrophobic core of the bare bilayer and that of each channel~\cite{nielsen_energetics_1998}. Such interactions have been observed in a variety of transmembrane proteins, including the bacterial mechanosensitive channels of large conductance (MscL)~\cite{wiggins_analytic_2004, ursell_cooperative_2007, phillips_emerging_2009, grage_bilayer-mediated_2011, haselwandter_connection_2013}. Since the thickness of the channel’s hydrophobic region changes during gating, this hydrophobic mismatch induces a local deformation of the membrane that depends on the channel’s state~\cite{wiggins_analytic_2004, ursell_cooperative_2007, phillips_emerging_2009, haselwandter_connection_2013}. For a closed channel, the hydrophobic mismatch is small, and the membrane is barely deformed. An open channel's hydrophobic core, however, is substantially thinner, and the bilayer deforms accordingly~\cite{ursell_cooperative_2007, phillips_emerging_2009}. When the two channels are sufficiently near each other, the respective bilayer deformations overlap, and the overall membrane shape depends both on the states of the channels as well as on the distance between them. As a result, the pair of MET channels is subjected to one of three different energy landscapes: open--open (OO), open--closed (OC), or closed--closed (CC)~\cite{ursell_cooperative_2007}. 
The effects of this membrane-mediated interaction are most apparent at short distances: The potentials strongly disfavor the OC state, favor the OO state, and generate an attractive force between the two channels when they are both open. Channel motion as a function of the imposed external force can be pictured as follows (Fig.~\ref{FigTwoChannelModel} and Movie~S1). When tip-link tension is low, the two channels are most likely to be closed, and they are kept apart by the adaptation springs; at this large inter-channel distance, the membrane-mediated interaction between them is negligible (Fig.~\ref{FigTwoChannelModel}\textit{A}). When a positive deflection is applied to the hair bundle, tension in the tip link rises. Consequently, the channels move toward one another and their open probabilities increase (Fig.~\ref{FigTwoChannelModel}\textit{B}). When the inter-channel distance is sufficiently small, the membrane's elastic energy favors the OO state, and both channels open cooperatively. As a result, the attractive membrane interaction in the OO state enhances their motion toward one another (red horizontal arrows, Fig.~\ref{FigTwoChannelModel}\textit{B} and Movie~S1), which provides an effective gating swing that is larger than the conformational change of a single channel (red vertical arrow, Fig.~\ref{FigTwoChannelModel}\textit{B}). Eventually, the channels close---for example due to Ca\textsuperscript{2+} binding~\cite{choe_model_1998, cheung_ca2+_2006}---and the membrane-mediated interactions become negligible (Fig.~\ref{FigTwoChannelModel}\textit{C}). Now the adaptation springs can pull the channels apart. Their lateral movement away from each other increases tip-link tension and produces the twitch, a hair-bundle movement associated with fast adaptation~\cite{cheung_ca2+_2006, benser_rapid_1996, ricci_active_2000}. 
\begin{table*}[h] \centering \caption{Parameters of the model} \includegraphics[width=0.95\textwidth]{fig_Table.pdf} \label{TableParameters} \end{table*} \subsection*{Mathematical Formulation} We represent schematically our model in Fig.~\ref{FigSchematicModel}. Fig.~\ref{FigSchematicModel}\textit{A} illustrates the geometrical arrangement of a pair of adjacent stereocilia. They have individual pivoting stiffness $k_{\rm SP}$ at their basal insertion points. The displacement coordinate $X$ of the hair bundle's tip along the axis of mechanosensitivity and the coordinate $x$ along the tip link's axis are related by a geometrical factor $\gamma$. With $H$ the height of the tallest stereocilium in the hair bundle and $D$ the distance between its rootlet and that of its neighbor, $\gamma$ is approximately equal to $D/H$~\cite{howard_compliance_1988}. The transduction unit schematized in Fig.~\ref{FigSchematicModel}\textit{A} is represented in more detail in Fig.~\ref{FigSchematicModel} \textit{B} and \textit{C}. In Fig.~\ref{FigSchematicModel}\textit{B}, the stereociliary membrane is orthogonal to the tip link’s central axis. Depending on tip-link tension, the channels are likely to be closed (small tip-link tension, Fig.~\ref{FigSchematicModel}\textit{B}, \textit{Left}) or open (large tip-link tension, Fig.~\ref{FigSchematicModel}\textit{B}, \textit{Right}), and positioned at different locations. The tip link is modeled as a spring of constant stiffness $k_{\rm t}$ and resting length $l_{\rm t}$. It has a current length $x_{\rm t}$ and branches into two rigid strands of length $l$, a distance $d$ away from the membrane. Each strand connects to one MET channel. 
Due to the global geometry of the hair bundle (Fig.~\ref{FigSchematicModel} \textit{A} and \textit{B}), $x_{\rm t} + d=\gamma(X-X_0)$, where $X_0$ is a reference position of the hair-bundle tip related to the position of the adaptation motors, to which the upper part of the tip link is anchored (see also Hair-Bundle Force and Stiffness and Fig.~S2). The channels' positions are symmetric relative to the tip-link axis, with their attachments to the tip link a distance $2a$ from each other. The channels have cylindrical shapes with axes perpendicular to the membrane plane. They have a diameter $2\rho$ when closed and $2\rho + \delta$ when open, where $\delta$ corresponds to the conformational change of each channel along the membrane plane; we refer to it as the single-channel gating swing. Each tip-link branch inserts a distance $\rho/2$ from the inner edge of each channel. The adaptation springs are parallel to the direction of channel motion. They have stiffness $k_{\rm a}$ and resting length $l_{\rm a}$ and are anchored to two fixed reference positions a distance $L$ away from the tip-link axis. Under tension, the stereociliary membrane can present different degrees of tenting~\cite{kachar_high-resolution_2000, assad_tip-link_1991}. To account for this geometry, and more generally for the non-zero curvature of the membrane at the tips of stereocilia, we introduce in Fig.~\ref{FigSchematicModel}\textit{C} an angle $\alpha$ between the perpendicular to the tip link’s axis and each of the half membrane planes, along which the channels move. The simpler, flat geometry of Fig.~\ref{FigSchematicModel}\textit{B} is recovered in the case where $\alpha = 0$. The inter-channel forces mediated by the membrane are described by three elastic potentials $V_{{\rm b},n}(a)$, one for each state $n$ of the channel pair (OO, OC, and CC), and are functions of the distance $a$ (Fig.~\ref{FigSchematicModel}D). 
The index $n$ can be 0, 1, or 2, corresponding to the number of open channels in the transduction unit. We choose analytic expressions and parameters that mimic the shapes of the potentials used to model similar interactions between bacterial MscL channels~\cite{ursell_cooperative_2007, haselwandter_connection_2013} (\textit{Materials and Methods}). In addition to the membrane-mediated elastic force $f_{{\rm b},n} = - {\rm d}V_{{\rm b},n}/{\rm d}a$, force balance on the channels depends on the force $f_{\rm t} = k_{\rm t}(x_{\rm t} - l_{\rm t})$ exerted by the tip link on its two branches and on the force $f_{\rm a} = k_{\rm a}(a_{\rm adapt} - a - n\delta/2)$ exerted by the adaptation springs, where $a_{\rm adapt} = L - l_{\rm a} - 3\rho/2$ is the value of $a$ for which the adaptation springs are relaxed when both channels are closed. Taking into account the geometry and the connection between $x_{\rm t}$ and $X$ given previously, force balance on either of the two channels reads: \begin{equation}\label{EqForceBalance} \begin{aligned} k_{\rm t}[\gamma(X - X_0) - d - l_{\rm t}] = 2\frac{d + a \sin{\alpha}}{a + d \sin{\alpha}} \times \\ \times \left [k_{\rm a} \left ( a_{\rm adapt} - a -\frac{n}{2}\delta \right ) - \frac{{\rm d}V_{{\rm b},n}(a)}{{\rm d}a} \right ]\, . \end{aligned} \end{equation} In addition, the geometry implies: \begin{equation}\label{EqGeometry} d = \sqrt{l^2 - (a \cos{\alpha})^2} - a \sin{\alpha}\, . \end{equation} Putting the expressions of $d$ and $V_{{\rm b},n}$ as functions of $a$ into Eq.~\ref{EqForceBalance} allows us to solve for $X$ as a function of $a$, for each state $n$. Inverting these three functions numerically gives three relations $a_n(X)$, which are then used to express all the relevant quantities as functions of the displacement coordinate $X$ of the hair bundle, taking into account the probabilities of the different states. Further details about this procedure are presented in \textit{Materials and Methods}. 
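As a concrete illustration of this inversion procedure, the following is a minimal numerical sketch, not the implementation used for the figures. The membrane potential below is a placeholder shape, and the values of $l_{\rm t}$, $X_0$, and the well depths are hypothetical; only the geometry and the force balance of Eqs.~1 and 2 are taken from the text. The inversion $a_n(X)$ is done by bisection for the CC state ($n=0$), for which $X(a)$ is monotone.

```python
import math

# Units: nm for lengths, pN/nm for stiffnesses (1 mN/m = 1 pN/nm).
GAMMA, K_T, K_A = 0.14, 1.0, 1.0        # gamma, k_t, k_a
L_BRANCH, RHO, DELTA = 13.0, 2.5, 2.0   # l, rho, delta
A_ADAPT = 2.0 * L_BRANCH                # a_adapt = 2*l
L_T, X0, ALPHA = 10.0, 0.0, 0.0         # hypothetical l_t and X_0; flat membrane
KBT = 4.1                               # k_B T in pN*nm (zJ)

def v_membrane(a, n):
    """Placeholder bilayer potential V_{b,n}(a) for n open channels
    (hypothetical shape: OC penalized, OO attractive at short range)."""
    depth = {0: 0.0, 1: 10.0, 2: -15.0}[n] * KBT
    return depth * math.exp(-(a - RHO) / 2.0)

def x_of_a(a, n, h=1e-4):
    """Hair-bundle position X as a function of a in state n (Eqs. 1 and 2)."""
    d = math.sqrt(L_BRANCH**2 - (a * math.cos(ALPHA))**2) - a * math.sin(ALPHA)
    g = (d + a * math.sin(ALPHA)) / (a + d * math.sin(ALPHA))
    dv = (v_membrane(a + h, n) - v_membrane(a - h, n)) / (2.0 * h)
    f_branch = K_A * (A_ADAPT - a - n * DELTA / 2.0) - dv
    return X0 + (d + L_T + 2.0 * g * f_branch / K_T) / GAMMA

def a_of_x(x, n, lo=RHO, hi=L_BRANCH - 0.1):
    """Invert X(a) by bisection; for the CC state, X(a) is decreasing in a."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if x_of_a(mid, n) > x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these placeholder values, a round trip $a \to X(a) \to a$ recovers the starting point; the OC and OO branches would require root bracketing adapted to their possibly non-monotone $X(a)$.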
Finally, global force balance is imposed at the level of the whole hair bundle, taking into account the pivoting stiffness of the stereocilia at their insertion points into the cuticular plate of the cell (Fig.~\ref{FigTwoChannelModel}\textit{B}, \textit{Inset}, and Fig.~\ref{FigSchematicModel}\textit{A}): \begin{equation} F_{\rm ext} = K_{\rm sp}(X - X_{\rm sp}) + F_{\rm t}\, , \label{EqBundleForceBalance} \end{equation} where $F_{\rm ext}$ is the total external force exerted at the tip of the hair bundle along the $X$ axis, $K_{\rm sp}$ is the combined stiffness of the stereociliary pivots along the same axis, $X_{\rm sp}$ is the position of the hair-bundle tip for which the pivots are at rest, and $F_{\rm t} = N\gamma f_{\rm t}$ is the combined force of the tip links projected onto the $X$ axis, with $N$ being the number of tip links. In these equations, two related reference positions appear: $X_0$ and $X_{\rm sp}$. As the origin of the $X$ axis is arbitrary, only their difference is relevant. The interpretation of $X_{\rm sp}$ is given just above. As for $X_0$, it sets the amount of tension exerted by the tip links, since the force exerted by the tip link on its two branches reads $f_{\rm t} = k_{\rm t}[\gamma(X - X_0) - d - l_{\rm t}]$. To fix $X_0$---or equivalently the combination $X_0+l_{\rm t}/\gamma$, which appears in this expression---we rely on the experimentally observed hair-bundle movement that occurs when tip links are cut, and which is typically on the order of 100~nm~\cite{assad_tip-link_1991, jaramillo_displacement-clamp_1993}. Therefore, imposing $X = 0$ as the resting position of the hair-bundle tip with intact tip links, Eq.~\ref{EqBundleForceBalance} must be satisfied with $F_{\rm ext} = 0$, $X = 0$, and $X_{\rm sp} = 100$~nm, which formally sets the value of $X_0$ for any predefined $l_{\rm t}$. Solving for $X_0$, however, requires a numerical procedure, the details of which are presented in {\it Materials and Methods}. 
All parameters characterizing the system together with their default values are listed in Table~\ref{TableParameters}. The geometrical projection factor $\gamma$ and number of stereocilia $N$ are set, respectively, to 0.14 and 50~\cite{howard_compliance_1988}. The combined stiffness of the stereociliary pivots $K_{\rm sp}$ is set to 0.65~mN$\cdot$m$^{-1}$~\cite{jaramillo_displacement-clamp_1993}. We use a tip-link stiffness $k_{\rm t}$ and an adaptation-spring stiffness $k_{\rm a}$ of 1~mN$\cdot$m$^{-1}$ to obtain a total hair-bundle stiffness in agreement with experimental observations~\cite{howard_compliance_1988}. The length $l$ of the tip-link branch can be estimated by analyzing the structure of protocadherin-15, a protein constituting the tip link’s lower end. Three extracellular cadherin (EC) repeats are present after the kink at the EC8--EC9 interface, which suggests that $l$ is $\sim$12--14~nm~\cite{araya-secchi_elastic_2016}. This estimate agrees with studies based on high-resolution electron microscopy of the tip link~\cite{kachar_high-resolution_2000}. We allow for the branch to fully relax the adaptation springs by choosing $a_{\rm adapt}=2\cdot l$. The parameters $\delta$ and $\rho$ correspond respectively to the amplitude of the conformational change of a single channel in the membrane plane upon gating and to the radius of the closed channel (see Fig.~\ref{FigSchematicModel}\textit{B}). Since the hair-cell MET channel has not yet been crystallized, we rely on the crystal structures of another mechanosensitive protein, the bacterial MscL channel, and choose $\delta=2$~nm and $\rho=2.5$~nm~\cite{ursell_cooperative_2007}. Finally, the channel gating energy $E_{\rm g}$ is estimated in the literature to be on the order of 5--20~$k_{\rm B}T$~\cite{corey_kinetics_1983,hudspeth_hair-bundle_1992, ricci_mechano-electrical_2006}. We use 9~$k_{\rm B}T$ as a default value. 
We now focus on the predictions of this model regarding the main biophysical characteristics of hair-bundle mechanics: open probability, force and stiffness as functions of displacement, the twitch during fast adaptation, and effects of Ca\textsuperscript{2+} concentration on hair-bundle mechanics. \subsection*{Open Probability} To determine the accuracy of the model and to investigate the effect of its parameters, we first focus on the predicted open probability ($P_{\rm open}$) as a function of the hair-bundle displacement $X$, for four sets of parameters (Fig.~\ref{FigOpenprobability}). \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig_Popen.pdf} \caption{Open probability curves as functions of hair-bundle displacement. All curves share a common set of parameters, whose values are specified in Table~\ref{TableParameters}. Parameter values that are not common to all curves are specified below. In addition, for each curve, the value of $X_0$ is set such that the external force $F_{\rm ext}$ applied to the hair bundle vanishes at $X = 0$. (Orange) ($k_{\rm t} = 1$~mN$\cdot$m$^{-1}$, $\alpha = 0 \degree$, $k_{\rm a} = 1$~mN$\cdot$m$^{-1}$, $E_{\rm g} = 9$~$k_{\rm B}T$). The curve is roughly sigmoidal and typical of experimental measurements. (Orange dashed) Fit to a two-state Boltzmann distribution as resulting from the classical gating-spring model, with expression $1/(1 + \exp[z(X_0 - X)/{k_{\rm B}T}])$, where $z \simeq 0.36$~pN, $X_0 \simeq 35$~nm, and $k_{\rm B}T \simeq 4.1$~zJ. Here, $z$ corresponds to the gating force in the framework of the classical gating-spring model. (Blue) ($k_{\rm t} = 2$~mN$\cdot$m$^{-1}$, $\alpha = 15 \degree$, $k_{\rm a} = 1$~mN$\cdot$m$^{-1}$, $E_{\rm g} = 8.8$~$k_{\rm B}T$). The values of $X_0$ and $E_{\rm g}$ have been chosen so that the force is zero at $X = 0$ within the region of negative stiffness, which is required for a spontaneously oscillating hair bundle~\cite{martin_negative_2000}. 
Channel gating occurs here over a narrower range of hair-bundle displacements. (Blue dashed) Fit to a two-state Boltzmann distribution, with $z \simeq 1.0$~pN, $X_0 \simeq 1.4$~nm, and $k_{\rm B}T \simeq 4.1$~zJ. (Red) ($k_{\rm t} = 1$~mN$\cdot$m$^{-1}$, $\alpha = 0 \degree$, $k_{\rm a} = 1$~mN$\cdot$m$^{-1}$, $E_{\rm g} = 9$~$k_{\rm B}T$, no membrane potentials). The channels remain closed over the whole range of displacements shown in the figure. (Green) ($k_{\rm t} = 1$~mN$\cdot$m$^{-1}$, $\alpha = 0 \degree$, $k_{\rm a} = 200$~mN$\cdot$m$^{-1}$, $E_{\rm g} = 9$~$k_{\rm B}T$, no membrane potentials). The curve presents a plateau around $P_{\rm open} = 0.5$. }\label{FigOpenprobability} \end{figure} For our default parameter set defined above (see also Table~\ref{TableParameters}), the open probability as a function of hair-bundle displacement is a sigmoid that matches the typical curves measured experimentally (orange, continuous curve). It is well fit by a two-state Boltzmann distribution (orange, dashed curve). In this case, the range of displacements over which the channels gate is $\sim$100~nm, in line with experimental measurements~\cite{ricci_mechanisms_2002, he_mechanoelectrical_2004, jia_mechanoelectric_2007, hudspeth_integrating_2014}. Recorded ranges vary, however, from several tens to hundreds of nanometers, depending on whether the hair bundle moves spontaneously or is stimulated, and depending on the method of stimulation (ref.~\cite{meenderink_voltage-mediated_2015} and reviewed in ref.~\cite{fettiplace_physiology_2014}). Increasing the tip-link stiffness~$k_{\rm t}$ and the angle~$\alpha$ compresses this range to a few tens of nanometers (blue curve), matching that measured for spontaneously oscillating hair bundles~\cite{meenderink_voltage-mediated_2015}. Decreasing the amplitude of the single-channel gating swing~$\delta$ instead broadens the range and shifts it to larger hair-bundle displacements (Fig.~S1). 
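The two-state Boltzmann distribution used for the dashed fits can be written down directly; a minimal sketch, with the orange-fit values quoted above as defaults:

```python
import math

def p_open_boltzmann(x, z=0.36, x0=35.0, kbt=4.1):
    """Two-state Boltzmann open probability: 1/(1 + exp[z*(x0 - x)/kBT]).
    z is in pN, x and x0 in nm, kbt in zJ, so z*(x0 - x)/kbt is dimensionless."""
    return 1.0 / (1.0 + math.exp(z * (x0 - x) / kbt))
```

At $x = x_0$ the open probability is $1/2$, and the 10--90\% width of the sigmoid is $\ln(81)\,k_{\rm B}T/z \approx 50$~nm for the orange parameters, versus $\approx 18$~nm for the blue ones ($z \simeq 1.0$~pN), consistent with the narrower gating range of the blue curve.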
In contrast to the classical model of mechanotransduction, where channel gating is intimately linked to the existence of the single-channel gating swing, here, gating still takes place when $\delta = 0$ due to the membrane elastic potentials. To demonstrate the crucial role played by these potentials, we compare in Fig.~\ref{FigOpenprobability} the open probability curves obtained using the default set of parameters with (orange) and without (red) the bilayer-mediated interaction. Without the membrane contribution, the channels remain closed over the whole range of hair-bundle displacements. The associated curve (red) is barely visible close to the horizontal axis of Fig.~\ref{FigOpenprobability}. It is possible, however, to have the channels gate over this range of displacements without the membrane contribution by choosing a value of $k_{\rm a}$ sufficiently large for the lateral channel motion to be negligible. This configuration mimics the case of immobile channels, as in the classical gating-spring model on timescales that are smaller than the characteristic time of slow adaptation. The resulting curve (green) does not match any experimentally measured open-probability relations: It displays a plateau at $P_{\rm open} = 0.5$ corresponding to the OC state. This state is prevented in the complete model with mobile channels by the membrane-mediated forces. Reintroducing these forces while keeping the same large value of $k_{\rm a}$ hardly changes the open-probability relation, because the channels are maintained too far from each other by the adaptation springs to interact via the membrane. Therefore, we only display one of the two curves here. We illustrate further the influence of the value of $k_{\rm a}$ as well as of the amplitude of the elastic membrane potentials in Fig.~S5. 
We conclude from these results that our model can reproduce the experimentally observed open-probability relations using only realistic parameters, and that the membrane-mediated interactions as well as the ability of the MET channels to move within the membrane are essential features of the model. \subsection*{Hair-Bundle Force and Stiffness} Two other classical characteristics of hair-cell mechanics are the force-- and stiffness--displacement relations. In Fig.~\ref{FigForceStiffness}, we display them using the same sets of parameters and color coding as in Fig.~\ref{FigOpenprobability}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig_Force_Stiffness.pdf} \caption{Hair-bundle force (\textit{A}) and stiffness (\textit{B}) as functions of hair-bundle displacement. The different sets of parameters are the same as the ones used in Fig.~\ref{FigOpenprobability}, following the same color code. (Orange) The force--displacement curve shows a region of gating compliance, characterized by a decrease in its slope over the gating range of the channels, recovered as a decrease in stiffness over the same range. (Blue) The force--displacement curve shows a region of negative slope, characteristic of a region of mechanical instability. The corresponding stiffness curve shows associated negative values. (Red) Without the membrane elastic potentials, the channels are unable to open and the hair-bundle mechanical properties are roughly linear, except for geometrical nonlinearities. (Green) The curves display two regions of gating compliance, more clearly visible in the stiffness curve. }\label{FigForceStiffness} \end{figure} The predicted forces necessary to move the hair bundle by tens of nanometers are on the order of tens of piconewtons, in line with the literature~\cite{howard_compliance_1988,van_netten_channel_2003}. With our reference set of parameters, the force is weakly nonlinear, associated with a small drop in stiffness (orange curves).
When the range of displacements over which the channels gate is sufficiently narrow, a nonmonotonic trend appears in the force, corresponding to a region of negative stiffness (blue curves). In the absence of membrane-mediated interactions---in which case the channels do not gate---the force-displacement curve is nearly linear and the stiffness nearly constant (red curves). The relatively small stiffness variation along the curve is due to the geometry, which imposes a nonlinear relation between hair-bundle displacement and channel motion (Eqs.~\ref{EqForceBalance} and \ref{EqGeometry}). When channel motion is prevented by a large value of $k_{\rm a}$, two separate regions of gating compliance appear, corresponding to the two transitions between the three states (CC, OC, and OO) (green curves). The red curves of this figure demonstrate no contribution from the channels whereas the green curves are again unlike any experimentally measured ones. These results confirm the importance in our model of both the lateral mobility of the channels and the membrane-mediated elastic forces. We next investigate whether we can reproduce the effects on the force--displacement relation of the slow and fast adaptation~(reviewed in refs.~\cite{eatock_adaptation_2000} and \cite{holt_two_2000}). Slow adaptation is attributed to a change in the position of myosin motors that are connected to the tip link’s upper end and regulate its tension~\cite{howard_compliance_1988,corey_analysis_1983,eatock_adaptation_1987,howard_mechanical_1987,crawford_activation_1989,hacohen_regulation_1989,assad_active_1992,wu_two_1999,kros_reduced_2002}. Here, this phenomenon corresponds to a change in the value of the reference position $X_0$. This parameter affects tip-link tension via the force exerted by the tip link on its two branches: $f_{\rm t} = k_{\rm t}[\gamma(X - X_0) - d - l_{\rm t}]$ (Mathematical Formulation). 
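As a minimal illustration, the tip-link tension above can be sketched in Python; all names are ours, the units are left to the caller, and the clamp to zero for a slack link follows the condition given in Materials and Methods.

```python
def tip_link_tension(X, X0, k_t, gamma, d, l_t):
    """Tension f_t in a single tip link, in consistent units.

    Implements f_t = k_t * [gamma * (X - X0) - d - l_t] when the link is
    taut; a slack link (non-positive extension) carries no tension, per
    the condition stated in Materials and Methods.
    """
    extension = gamma * (X - X0) - d - l_t
    return k_t * extension if extension > 0.0 else 0.0
```

Slow adaptation then amounts to shifting $X_0$ in this expression, which rescales the tension at every displacement $X$.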
Starting from the parameters associated with the blue curve and varying $X_0$, we obtain force-displacement relations that are in agreement with experimental measurements (Fig.~S2)~\cite{martin_negative_2000,le_goff_adaptive_2005}. Fast adaptation is thought to be due to an increase in the gating energy $E_{\rm g}$ of the MET channels, for example, due to Ca\textsuperscript{2+} binding to the channels, which decreases their open probability~\cite{choe_model_1998,cheung_ca2+_2006,ricci_active_2000,wu_two_1999,kennedy_fast_2003}. Starting from the same default curve and changing $E_{\rm g}$ by 1~$k_{\rm B}T$, we obtain a shift in the force--displacement relation (Fig. S3). In this case, the amplitude of displacements over which channel gating occurs remains roughly the same, but the associated values of the external force required to produce these displacements change. Such a shift has been measured in a spontaneously oscillating, weakly slow-adapting cell by triggering acquisition of force--displacement relations after rapid positive or negative steps~\cite{le_goff_adaptive_2005}. During a rapid negative step, the channels close, which we attribute to fast adaptation with an increase in $E_{\rm g}$. In Fig.~S3, increasing $E_{\rm g}$ by 1~$k_{\rm B}T$ increases the value of the force for the same imposed displacement. This mirrors the results in ref.~\cite{le_goff_adaptive_2005}, where a similar outcome is observed when comparing the curve measured after rapid negative steps with that measured after rapid positive steps. From Figs.~S2 and S3, we conclude that our model is capable of reproducing the effects of both slow and fast adaptation on the force--displacement relation. In summary, the model reproduces realistic force--displacement relations when both lateral channel mobility and membrane-mediated interactions are present. 
These relations exhibit a region of gating compliance and can even show a region of negative stiffness while keeping all parameters realistic. \subsection*{A Mechanical Correlate of Fast Adaptation, the Twitch} Next, we investigate whether our model can reproduce the hair-bundle negative displacement induced by rapid reclosure of the MET channels, known as the twitch~\cite{cheung_ca2+_2006, benser_rapid_1996, ricci_active_2000}. It is a mechanical correlate of fast adaptation, an essential biophysical property of hair cells, which is believed to allow for rapid cycle-by-cycle stimulus amplification~\cite{choe_model_1998}. To reproduce the twitch observed experimentally~\cite{cheung_ca2+_2006, benser_rapid_1996, ricci_active_2000}, we compute the difference in the positions of the hair bundle before and after an increment of $E_{\rm g}$ by 1~$k_{\rm B}T$, and plot it as a function of the external force (Fig. \ref{FigTwitch}). \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig_Twitch.pdf} \caption{Twitch as a function of the external force exerted on the hair bundle (main image) and normalized twitch as a function of the open probability (\textit{Inset}). The different sets of parameters are the same as the ones in Figs.~\ref{FigOpenprobability} and \ref{FigForceStiffness} for the orange, blue and green curves. The additional purple curve is associated with the same parameter set as that of the orange curve, except for the number of intact tip links, set to $N=25$ rather than $N=50$. (Orange) The maximal twitch amplitude for the standard set of parameters is $\sim$5~nm. (Blue) Because of the region of mechanical instability associated with negative stiffness, the corresponding curve for the twitch is discontinuous, as shown by the two regions of near verticality in the blue curve. This corresponds to the two regions of almost straight lines in the normalized twitch. Both of these linear parts are displayed as guides for the eye. 
(Green) The channels gate independently, producing two distinct maxima of the twitch amplitude. (Purple) The twitch peaks at a smaller force and its amplitude is reduced compared with the orange curve. Plotted as a function of the open probability, however, the two curves are virtually identical. }\label{FigTwitch} \end{figure} With the same parameters as in Figs.~\ref{FigOpenprobability} and \ref{FigForceStiffness}, we find twitch amplitudes within the range reported in the literature~\cite{cheung_ca2+_2006, benser_rapid_1996, ricci_active_2000}. They reach their maxima for intermediate, positive forces and drop to zero for large negative or positive forces, as experimentally observed. The twitch is largest and peaks at the smallest force when the hair bundle displays negative stiffness (blue curve), since the channels open then at the smallest displacements. For the green curve, the channels gate independently, producing two distinct maxima of the twitch amplitude, mirroring the biphasic open-probability relation. Note that no curve is shown with the parameter set corresponding to the red curves of Figs.~\ref{FigOpenprobability} and \ref{FigForceStiffness}, since the twitch is nearly nonexistent in that case. Twitch amplitudes reported in the literature are variable, ranging from $\sim$4~nm in single, isolated hair cells~\cite{cheung_ca2+_2006}, to $>$30~nm in presumably more intact cells within the sensory epithelium~\cite{benser_rapid_1996, ricci_active_2000}. A potential source of variability is the number of intact tip links, since these can be broken during the isolation procedure. We show that decreasing the number of tip links in our model shifts the twitch to smaller forces and decreases its amplitude (Fig.~\ref{FigTwitch}, purple vs. orange curves). Twitch amplitudes are further studied for different values of the adaptation-spring stiffness and amplitudes of the elastic membrane potentials in Fig.~S5. 
To compare further with experimental data~\cite{benser_rapid_1996, ricci_active_2000}, we also present the twitch amplitude normalized by its maximal value, and plot it as a function of the channels’ open probability (Fig.~\ref{FigTwitch}, \textit{Inset}). The twitch reaches its maximum for an intermediate level of the open probability and drops to zero for smaller or larger values, as measured experimentally~\cite{benser_rapid_1996, ricci_active_2000}. Another factor that strongly affects both the amplitude and force dependence of the twitch is the length $l$ of the tip-link branching fork. For a long time, the channels were suspected to be located at the tip link's upper end, where the tip-link branches appear much longer~\cite{kachar_high-resolution_2000}. With long branches, the twitch is tiny and peaks at forces that are too large (Fig.~S4), unlike what is experimentally measured. This observation provides a potential physiological reason why the channels are located at the tip link's lower end rather than at the upper end as previously assumed~\cite{zhao_elusive_2015,spinelli_bottoms_2009}. There are two more reasons why our model requires the channels to be located at the lower end of the tip link. First, as shown in Figs.~\ref{FigOpenprobability}--\ref{FigTwitch}, some degree of membrane tenting increases the sensitivity and nonlinearity of the system, as well as the amplitude of the twitch. While it is straightforward to obtain the necessary membrane curvature at the tip of a stereocilium, this is not the case on its side. Second, while pulling on the channels located at the tip compels them to move toward one another, doing so with the channels located on the side would instead make them slide down the stereocilium, impairing the efficiency of the mechanism proposed in this work. In summary, our model reproduces correctly the hair-bundle twitch as well as its dependence on several key parameters. 
It therefore includes the mechanism that can mediate the cycle-by-cycle sound amplification by hair cells. \subsection*{Effect of Ca\textsuperscript{2+} Concentration on Hair-Bundle Mechanics} Our model can also explain the following important results that have so far evaded explanation. First, it is established that, with increasing Ca\textsuperscript{2+} concentration, the receptor current vs. displacement curve shifts to more positive displacements, while its slope decreases~\cite{corey_kinetics_1983}. Second, within the framework of the classical gating-spring model, Ca\textsuperscript{2+} concentration appears to affect the magnitude of the gating swing~\cite{tinevez_unifying_2007}: When a hair bundle is exposed to a low, physiological, Ca\textsuperscript{2+} concentration of 0.25~mM, the force--displacement relation presents a pronounced region of negative slope, and the estimated gating swing is large, on the order of 9--10~nm. But when the same hair bundle is exposed to a high Ca\textsuperscript{2+} concentration of $\sim$1~mM, the region of negative stiffness disappears, and the estimated gating swing becomes only half as large. In our model, it is the decrease of the interchannel distance following channel opening that, transmitted onto the tip link's main axis, effectively plays the role of the classical gating swing (Figs.~\ref{FigTwoChannelModel} and \ref{FigSchematicModel} and Movie~S1). To quantify the change of tip-link extension as the channels open, we introduce a new quantity, which we call the gating-associated tip-link extension (GATE). It is defined mathematically as $d_{\rm OO}-d_{\rm CC}$, where $d_{\rm OO}$ and $d_{\rm CC}$ are the respective values of the distance $d$ in the OO and CC states. 
To study the influence of Ca\textsuperscript{2+} concentration on the GATE, we hypothesize that Ca\textsuperscript{2+} ions favor the closed conformation of the channels over the open one, that is, that the energy difference $E_{\rm g}$ between the two states increases with Ca\textsuperscript{2+} concentration~\cite{choe_model_1998, cheung_ca2+_2006}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig_GATE.pdf} \caption{GATE and open probability as functions of hair-bundle displacement (\textit{A}) and GATE as a function of the open probability (\textit{B}), for different values of the channel gating energy $E_{\rm g}$. (\textit{A}) Open-probability curves are generated by using the default parameter set of the blue curves of Figs.~\ref{FigOpenprobability}--\ref{FigTwitch}, and otherwise different values of the channel gating energy $E_{\rm g}$, as indicated directly on \textit{B}. The GATE as a function of $X$ (red curve) depends only on the geometry of the system, such that only one curve appears here. We indicate, in addition, directly on the image the values of the single-channel gating force $z$ obtained by fitting each open-probability relation with a two-state Boltzmann distribution, as done in Fig.~\ref{FigOpenprobability} for the orange and blue curves. We report, together with these values, the relative magnitudes of the single-channel gating swing $g_{\rm swing}$ obtained with the classical model. (\textit{B}) The GATE values are plotted as functions of the open probability, for each chosen value of $E_{\rm g}$. }\label{FigGATE} \end{figure} Within this framework, we expect to see the following effect of Ca\textsuperscript{2+} on the GATE, via the change of $E_{\rm g}$: Higher Ca\textsuperscript{2+} concentrations correspond to higher gating energies, causing the channels to open at greater positive hair-bundle displacements. Greater displacements in turn correspond to smaller values of the inter-channel distance before channel opening. 
Since the final position of the open channels is always the same (at $a=a_{\rm min}$, where the OO membrane potential is minimum; Fig.~\ref{FigSchematicModel}\textit{D}), the change of the interchannel distance induced by channel opening is smaller for higher Ca\textsuperscript{2+} concentrations. As a result, the GATE experienced by the tip link is smaller for higher Ca\textsuperscript{2+} concentrations, in agreement with the experimental findings cited above. We study this effect quantitatively in Fig.~\ref{FigGATE}. In Fig.~\ref{FigGATE}\textit{A}, we plot simultaneously the GATE and $P_{\rm open}$ as functions of the hair-bundle displacement $X$, for five values of $E_{\rm g}$. Although the function ${\rm GATE}(X)$ spans the whole range of displacements, the relevant magnitudes of the GATE are constrained by the displacements for which channel opening is likely to happen; we use as a criterion that $P_{\rm open}$ must be between 0.05 and 0.95. The corresponding range of displacements depends on the position of the $P_{\rm open}$ curve along the horizontal axis, which ultimately depends on $E_{\rm g}$. We display in Fig.~\ref{FigGATE}\textit{A} the two ranges of hair-bundle displacements (dashed vertical lines) associated with the smallest and largest values of $E_{\rm g}$, together with the amplitudes of the GATE within these intervals (dashed horizontal lines). For $E_{\rm g} = 6$~$k_{\rm B}T$ (blue curve and GATE interval), the size of the GATE is on the order of $4.1\text{--}5.3$~nm, whereas it is on the order of $1.3\text{--}2.3$~nm for $E_{\rm g} = 14$~$k_{\rm B}T$ (orange curve and GATE interval). In general, larger values of the channel gating energy $E_{\rm g}$ cause smaller values of the GATE. To compare directly with previous analyses, we next fit the open-probability relations of Fig.~\ref{FigGATE}\textit{A} with the gating-spring model, obtaining the corresponding single-channel gating forces $z$. 
This procedure allows us to quantify the change of the magnitude of an effective gating swing $g_{\rm swing}$ with $E_{\rm g}$ by the formula $z=g_{\rm swing}k_{\rm gs}\gamma$, where $k_{\rm gs}$ is the stiffness of the gating spring. We give directly on the panel the relative values of $g_{\rm swing}$ obtained by this procedure. Taking, for example, $k_{\rm gs}=1$~mN$\cdot$m$^{-1}$, $g_{\rm swing}$ ranges from 8.3~nm for $E_{\rm g} = 6$~$k_{\rm B}T$ to 3.6~nm for $E_{\rm g} = 14$~$k_{\rm B}T$. In Fig.~\ref{FigGATE}\textit{B}, we show the GATE as a function of $P_{\rm open}$ for the different values of $E_{\rm g}$. For each curve, the amplitude of the GATE is a decreasing function of $P_{\rm open}$ that presents a broad region of relatively weak dependence for most $P_{\rm open}$ values. These results demonstrate that the GATE defined within our model decreases with increasing values of $E_{\rm g}$, corresponding to increasing Ca\textsuperscript{2+} concentrations. In addition, the same dependence is observed for the effective gating swing estimated from fitting the classical gating-spring model to our results, as it is when fit to experimental data~\cite{tinevez_unifying_2007}. Finally, we can see from Fig.~\ref{FigGATE}\textit{A} that the predicted open-probability vs. displacement curves shift to the right and their slopes decrease with increasing values of $E_{\rm g}$, a behavior in agreement with experimental data (see above and ref.~\cite{corey_kinetics_1983}). Together with the decrease in the slope, the region of negative stiffness becomes narrower (Fig.~S3) and even disappears for a sufficiently large value of $E_{\rm g}$ (Fig.~S3, yellow curve). This weakening of the gating compliance has been measured in hair bundles exposed to a high Ca\textsuperscript{2+} concentration~\cite{tinevez_unifying_2007}. 
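The conversion between the fitted gating force $z$ and the effective gating swing is simple arithmetic, as the following Python sketch illustrates; only $k_{\rm gs} = 1$~mN$\cdot$m$^{-1}$ is taken from the text, while the value of the geometrical gain $\gamma$ is an illustrative assumption.

```python
K_GS = 1e-3    # N/m: gating-spring stiffness, value quoted in the text
GAMMA = 0.14   # geometrical gain; illustrative assumption, not from the text

def effective_gating_swing(z):
    """Effective gating swing (m) from a fitted single-channel gating
    force z (N), inverting z = g_swing * k_gs * gamma."""
    return z / (K_GS * GAMMA)
```

With these numbers, a fitted gating force of about 1.2~pN corresponds to a swing of roughly 8.6~nm, comparable to the values quoted above.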
In summary, our model explains the shift in the force-displacement curve as well as the changes of the effective gating swing and stiffness as functions of Ca\textsuperscript{2+} concentration. \section*{Discussion} We have designed and analyzed a two-channel, cooperative model of hair-cell mechanotransduction. The proposed geometry includes two MET channels connected to one tip link. The channels can move relative to each other within the stereociliary membrane and interact via its induced deformations, which depend on whether the channels are open or closed. This cross-talk produces cooperative gating between the two channels, a key feature of our model. Most importantly, because the elastic membrane potentials are affected by channel gating on length scales larger than the proteins’ conformational rearrangements, and because the channels can move in the membrane over distances greater than their own size, the model generates an appropriately large effective gating swing without invoking unrealistically large conformational changes. Moreover, even when the single-channel gating swing vanishes, the effective gating swing determined by fitting the classical model to our results does not. In this case, the conformational change of the channel is orthogonal to the membrane plane and its gating is triggered only by the difference in membrane energies between the OO and CC states. We have shown that our model reproduces the hair bundle’s characteristic current-- and force--displacement relations as well as the existence and characteristics of the twitch, the mechanical correlate of fast adaptation. It also explains the puzzling effects of the extracellular Ca\textsuperscript{2+} concentration on the magnitude of the estimated gating swing and on the spread of the negative-stiffness region, features that are not explained by the classical gating-spring model. 
In addition to reproducing these classical features of hair-cell mechanotransduction, our model may be able to account for other phenomena that so far have had no---or only unsatisfactory---explanation. One of them is the flick, a small, voltage-driven hair-bundle motion that requires intact tip links but does not rely on channel gating~\cite{cheung_ca2+_2006, ricci_active_2000, meenderink_voltage-mediated_2015}. It is known that changes in membrane voltage modulate the membrane mechanical tension and potentially the membrane shape by changing the interlipid distance~\cite{zhang_voltage-induced_2001, breneman_hair_2009}, but it is not clear how this property can produce the flick. This effect could be explained within our framework as a result of a change in the positions of the channels following the change in interlipid distance driven by voltage. This would in turn change the extension of the tip link and thus cause a hair-bundle motion corresponding to the flick. Another puzzling observation from the experimental literature is the recordings of transduction currents that appear as single events but with conductances twofold to fourfold that of a single MET channel~\cite{pan_tmc1_2013, beurg_conductance_2014}. Because tip-link lower ends were occasionally observed to branch into three or four strands at the membrane insertion~\cite{kachar_high-resolution_2000}, one tip link could be connected to as many channels. According to our model, these large-conductance events could therefore reflect the cooperative openings of coupled channels. Our model predicts that changing the membrane properties must affect the interaction between the MET channels, potentially disrupting their cooperativity and in turn impairing the ear’s sensitivity and frequency selectivity. 
For example, if the bare bilayer thickness were to match more closely the hydrophobic thickness of the open state of the channel rather than that of the closed state, the whole shape of the elastic membrane potentials would be different. In such a case, the open probability vs. displacement curves would be strongly affected, and gating compliance and fast adaptation would be compromised. Potentially along these lines, it was observed that chemically removing the long-chain---but not the short-chain---phospholipid PiP2 blocked fast adaptation~\cite{hirono_hair_2004}. With a larger change of membrane thickness, one could even imagine reversing the roles of the OO and CC membrane-mediated interactions. This would potentially change the direction of fast adaptation, producing an “anti-twitch”, a positive hair-bundle movement due to channel reclosure. Such a movement has indeed been measured in rat outer hair cells~\cite{kennedy_force_2005}. Whether it was produced by this or a different mechanism remains to be investigated. Our model fundamentally relies on the hydrophobic mismatch between the MET channels and the lipid bilayer. Several studies have demonstrated that the lipids with the greatest hydrophobic mismatch with a given transmembrane protein are depleted from the protein's surroundings. The timescale of this process is on the order of 100~ns for the first shell of annular lipids~\cite{beaven_gramicidin_2017}. It is much shorter than the timescales of MET-channel gating and fast adaptation. Therefore, it is possible that lipid rearrangement around a MET channel reduces the hydrophobic mismatch and thus decreases the energy cost of the elastic membrane deformations, lowering in turn the importance of the membrane-mediated interactions in hair-bundle mechanics. However, such lipid demixing in the fluid phase of a binary mixture is only partial, on the order of 5--10\%~\cite{yin_hydrophobic_2012}. 
Furthermore, ion channels are known to preferentially bind specific phospholipids such as PiP2~\cite{suh_pip2_2008}, further suggesting that the lipid composition around a MET channel does not vary substantially on short timescales. We therefore expect the effect of this fast lipid mobility to be relatively minor. Slow, biochemical changes of the bilayer composition around the channels, however, could have a stronger effect. It would be interesting for future studies to investigate the role played by lipid composition around a MET channel on its gating properties and how changes in this composition affect hair-cell mechanotransduction. \matmethods{ \subsection*{Membrane-Mediated Interaction Potentials} The one-dimensional interaction potentials mediated by the membrane between two mechanosensitive channels of large conductance (MscL) in \textit{Escherichia coli} have been modeled by Ursell {\it et al.}~\cite{ursell_cooperative_2007, phillips_emerging_2009}. Here, we mimic the shape of the potentials used in that study with the following analytic expressions: \begin{equation} {\footnotesize \begin{aligned} V_{{\rm b},0}(a) = & E_{\rm CC} \left ( \frac{a - a_{\rm cross, CC}}{a_{\rm min} - a_{\rm cross, CC}} \right ) \exp \left [ - \left ( \frac{a - a_{\rm min}}{l_{\rm V}} \right)^2 \right]\\ V_{{\rm b},1}(a) = & E_{\rm OC} \left [ \frac{(a_{\rm cross,OC} - a) (a_{\rm cross, OC} - a_{\rm min})^2}{(a - a_{\rm min})^3} \right ]\\ & \times \exp \left [ - \left ( \frac{a - a_{\rm min}}{l_{\rm V}} \right)^2 \right]\\ V_{{\rm b},2}(a) = & E_{\rm OO} \left ( \frac{a - a_{\rm cross, OO}}{a_{\rm min} - a_{\rm cross, OO}} \right ) \exp \left [ - \left ( \frac{a - a_{\rm min}}{l_{\rm V}} \right)^2 \right]\, , \end{aligned}} \end{equation} where the coordinate $a$ corresponds to the distance between either of the two anchoring points of the tip link and the tip link's central axis (Fig.~\ref{FigSchematicModel}). 
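As a concrete check, these expressions can be coded directly. The Python sketch below uses the parameter values quoted in this subsection; for $V_{{\rm b},2}$ we take $E_{\rm OO}$ as the prefactor, so that $V_{{\rm b},2}(a_{\rm min}) = E_{\rm OO}$ as stated in the parameter description. All function and variable names are ours.

```python
import math

# Parameter values quoted in this subsection (lengths in nm, energies in k_B T).
A_MIN = 1.25        # minimal value reached by the coordinate a
L_V = 1.5           # decay length of the membrane-mediated interaction
A_CROSS_CC = 3.0    # zero crossing of V_b0
A_CROSS_OC = 2.75   # zero crossing of V_b1
A_CROSS_OO = 2.5    # zero crossing of V_b2
E_CC = -2.5         # value of V_b0 at a = A_MIN
E_OO = -25.0        # value of V_b2 at a = A_MIN
E_OC = 50.0         # global amplitude of V_b1

def gaussian_decay(a):
    """Common Gaussian envelope exp[-((a - a_min)/l_V)^2]."""
    return math.exp(-((a - A_MIN) / L_V) ** 2)

def v_b0(a):
    """CC-state membrane potential V_{b,0}(a)."""
    return E_CC * (a - A_CROSS_CC) / (A_MIN - A_CROSS_CC) * gaussian_decay(a)

def v_b1(a):
    """OC-state membrane potential V_{b,1}(a); diverges as a -> A_MIN."""
    return (E_OC * (A_CROSS_OC - a) * (A_CROSS_OC - A_MIN) ** 2
            / (a - A_MIN) ** 3 * gaussian_decay(a))

def v_b2(a):
    """OO-state membrane potential V_{b,2}(a); prefactor E_OO so that
    v_b2(A_MIN) = E_OO, per the parameter description."""
    return E_OO * (a - A_CROSS_OO) / (A_MIN - A_CROSS_OO) * gaussian_decay(a)
```

Evaluating these functions reproduces the stated anchor values, e.g. `v_b0(A_MIN)` returns $E_{\rm CC}$ and `v_b1(A_CROSS_OC)` returns zero.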
Note that this choice of coordinate is different from that of Ursell {\it et al.}~\cite{ursell_cooperative_2007}, who chose to represent their potentials as functions of the channels’ center-to-center distance. The different parameters entering these expressions, with their associated numerical values used to generate the results presented in this work, are as follows: $a_{\rm min} = 1.25$~nm represents the minimal value reached by the variable $a$; $l_{\rm V} = 1.5$~nm is the characteristic length over which the membrane-mediated interaction decays; $a_{\rm cross, CC} = 3$~nm, $a_{\rm cross,OC} = 2.75$~nm, and $a_{\rm cross,OO} = 2.5$~nm are the respective values of the variable $a$ for which the membrane potentials $V_{{\rm b},0}$, $V_{{\rm b},1}$, and $V_{{\rm b},2}$ cross zero; $E_{\rm CC} = -2.5$ $k_{\rm B}T$ and $E_{\rm OO} = -25$ $k_{\rm B}T$ represent, respectively, the values of the potentials $V_{{\rm b},0}$ and $V_{{\rm b},2}$ at $a = a_{\rm min}$; and finally, $E_{\rm OC} = 50$ $k_{\rm B}T$ is an energy scale that describes the global amplitude of $V_{{\rm b},1}$. A graphical representation of the resulting elastic potentials is shown in Fig.~\ref{FigSchematicModel}\textit{D}. \subsection*{Open Probability} The open probability of the channels ($P_{\rm open}$) depends on a total energy that is the sum of the following contributions: the elastic energy of the two adaptation springs $E_{{\rm a},n} = 2 \cdot k_{\rm a} ({a}_{\rm adapt} - a - n\delta/2)^2/2$ (for $a < {a}_{\rm adapt} - n\delta/2$, zero otherwise), the elastic energy of the tip link $E_{{\rm t},n} = k_{\rm t}(\gamma(X-X_0) - d - l_{\rm t})^2/2$ (for $\gamma(X-X_0) > d + l_{\rm t}$, zero otherwise), the membrane mechanical energy $V_{{\rm b},n}(a)$ detailed above, and the energy due to channel gating $n \times E_{\rm g}$, where $E_{\rm g}$ is the gating energy of a single channel. 
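Given the total energy $E_{{\rm tot},n}(X)$ of each configuration, expressed in units of $k_{\rm B}T$, the overall open probability follows from a Boltzmann average with a degeneracy factor of 2 for the OC state; a minimal Python sketch (names ours) reads:

```python
import math

def open_probability(e_tot):
    """Overall open probability from the three total energies (in k_B T).

    e_tot: (E_tot_CC, E_tot_OC, E_tot_OO) at a given displacement X.
    The factor 2 on the OC weight accounts for its two canonical
    realizations (open-closed and closed-open).
    """
    e_cc, e_oc, e_oo = e_tot
    w_cc = math.exp(-e_cc)
    w_oc = 2.0 * math.exp(-e_oc)
    w_oo = math.exp(-e_oo)
    total = w_cc + w_oc + w_oo
    p_oc = w_oc / total
    p_oo = w_oo / total
    return p_oo + p_oc / 2.0
```

When the three total energies are equal, this expression gives $P_{\rm open} = 1/2$, as expected by symmetry.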
Adding these four contributions and using the computed relations $a_n(X)$ (see Numerical Solution of the Model for further details), one can compute the total energy $E_{{\rm tot},n}(X)$ associated with each channel configuration $n$ at every displacement $X$ of the hair bundle. The probability weights of the different channel states are, respectively, $W_{\rm CC} = \exp(- E_{{\rm tot},0}(X))$, $W_{\rm OC} = 2\exp(- E_{{\rm tot},1}(X))$, and $W_{\rm OO} = \exp(- E_{{\rm tot},2}(X))$, the factor two in $W_{\rm OC}$ reflecting the fact that the OC state comprises two canonical states, open-closed and closed-open. Furthermore, the probability of each configuration is equal to its associated probability weight divided by the sum $W_{\rm OO} + W_{\rm OC} + W_{\rm CC}$. At the level of the whole hair bundle, and under the hypothesis that all channel pairs are identical, the overall open probability of the channels finally reads: $P_{\rm open} = P_{\rm OO} + P_{\rm OC}/2$. \subsection*{Numerical Solution of the Model} To compute the model outcomes---including the open probability as discussed above---we first need to solve Eq.~\ref{EqForceBalance} to find the three relations $a_n(X)$. This equation, however, cannot be solved analytically for $a$. To obtain numerical solutions, we first solve it analytically for $X$ and obtain three expressions for $X(a,n)$. We then produce a set of three tables of numerical values $X_{n,i} = X(a_i, n)$, where $a_i = a_{\rm min} + i \cdot \Delta a$ is a set of values of the coordinate $a$, equispaced by a length $\Delta a$. To produce tables $a_{n,j} = a_n(X_j)$ that share a common set of entries $X_j$, we first generate a table of entries for the variable $X$, regularly spaced: $X_j = X_{\rm min} + j \cdot \Delta X$. For each entry $X_j$, we then take from the original table $X_{n,i} = X(a_i,n)$ the value $X_{n,i}$ closest to $X_j$. We then choose for $a_{n,j}$ the corresponding value $a_i$. 
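The table construction just described can be sketched with NumPy; the callable, grid bounds, and step sizes below are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def invert_on_common_grid(x_of_a, a_min=1.25, a_max=4.0, da=1e-3,
                          x_min=-100.0, x_max=100.0, dx=0.5):
    """Tabulate X(a) for one channel state, then build the stepwise
    inverse a(X) on a regular grid of X values.

    x_of_a: callable returning X as a function of a (for one state n).
    Grid bounds and step sizes are illustrative only.
    """
    a_i = np.arange(a_min, a_max, da)      # regularly spaced values of a
    x_i = x_of_a(a_i)                      # table X_{n,i} = X(a_i, n)
    x_j = np.arange(x_min, x_max, dx)      # common, regularly spaced X entries
    # For each X_j, pick the a_i whose X_{n,i} is closest (stepwise inverse).
    nearest = np.abs(x_i[None, :] - x_j[:, None]).argmin(axis=1)
    return x_j, a_i[nearest]

# The smoothing step (linear interpolation between neighboring plateaus)
# can then be performed, e.g., with np.interp on the resulting pairs.
```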
As a result, we obtain three stepwise functions $a_n(X)$ such that $a_n(X) = a_{n,j}$ for all $X$ in $[X_j, X_{j+1})$. We further smooth these functions by linearly interpolating the values of $a$ between neighboring plateaus, avoiding discontinuities and yielding continuous, piecewise-linear functions. \subsection*{Reference Tip-Link Tension} As described in Mathematical Formulation, the parameter $X_0$, which sets the reference tension in the tip link, is determined by imposing the force-balance Eq.~\ref{EqBundleForceBalance} with $F_{\rm ext} = 0$~N, $X = 0$~m, and $X_{\rm sp} = 100$~nm. To impose this condition, however, one needs the global tip-link force $F_{\rm t} = N \gamma f_{\rm t}$, which itself depends on $X_0$ via the respective probabilities of the three channel states (OO, OC, and CC). In addition, as discussed above, the curves $a_n(X)$ can only be computed numerically, meaning that no closed analytic expression can be obtained for $X_0$. We therefore proceed numerically according to the following scheme: We first generate a table of numerical values of $X_0$ and compute the associated table of tip-link forces $f_{\rm t}$ that satisfy force balance at the level of the whole hair bundle (namely, that solve Eq.~\ref{EqBundleForceBalance} with $F_{\rm ext} = 0$~N, $X = 0$~m, and $X_{\rm sp} = 100$~nm). We then insert these values into the force-balance condition at the level of the individual MET channels (Eq.~\ref{EqForceBalance}), evaluated at $X = 0$. The proper value for $X_0$ corresponds to the element for which this condition is satisfied. This ensures force balance both at the level of the individual MET channels and at the level of the whole hair bundle. This procedure corresponds to determining the reference tension in the tip links for a given hair-bundle displacement. } \acknow{We thank the members of the A.S.K. laboratory for comments on the manuscript. Work on this project in the A.S.K. 
laboratory was supported by The Royal Society Grant RG140650, Wellcome Trust Grant 108034/Z/15/Z, and the Imperial College Network of Excellence Award. T.R. was supported by the LabEx CelTisPhyBio ANR-10-LABX-0038.} \showmatmethods{} \showacknow{}
\section{Introduction} Cygnus X-1, the only Galactic X-ray binary with a high mass companion where existing observations require a black hole for the compact object, was first discovered in 1962 (Cowley 1992, and references therein). In addition to the well-established high mass function found with optical observations, the X-ray data of Cyg X-1 display transitions from a high flux state in the 2-10 keV band (where a strong soft component dominates) to a low flux state (where the soft component largely disappears) that has been interpreted as characteristic of black hole systems. It also displays highly broadened Fe K$\alpha$ emission (Miller~{\it et~al.}~2002) that is consistent with models for X-ray reflection in Galactic black holes and AGNs. The broad-line shape of Fe K$\alpha$ may be caused by Doppler shifts and the gravitational field of the black hole. The Cyg X-1 system consists of a supergiant star and a compact object. The mass of the compact object is in the range 7-20 $M_{\odot}$ (Shaposhnikov \& Titarchuk 2007; Ziolkowski 2005) the mass of the visible star is in the range 18-40$M_{\odot}$ (Ziolkowski 2005; Tarasov~{\it et~al.}~2003; Brocksopp~{\it et~al.}~1999). The binary orbital period is 5.6 days (Bolton 1972). Miller~{\it et~al.}~(2005) using Chandra/HETG observations find that the X-ray spectrum of Cygnus X-1 at phase 0.76 is dominated by absorption lines, in strong contrast to spectra of other HMXBs such as Vela X-1 and Cen X-3. Schulz~{\it et~al.}~(2002) report marginal evidence for ionized Fe transitions with P-Cygni type profiles at orbital phase 0.93 whereas Marshall~{\it et~al.}~(2001) find no evidence for such line profiles at phase 0.84. 
Miller~{\it et~al.}~(2005) suggest that, while the spectra of Cen X-3 can be modelled by a spherically symmetric wind (Wojdowski~{\it et~al.}~2003), the X-ray absorption spectrum of Cyg X-1 requires dense material preferentially along the line of sight; considered together, the Chandra spectra provide evidence in X-rays for a focused wind in Cygnus X-1. Our Space Telescope Imaging Spectrograph (STIS) observations of Cygnus X-1 were obtained when Cyg X-1 was in its soft/high X-ray state and show line profiles that change significantly between orbital phases 0.0 and 0.5. We interpret these changes in terms of models that include the effects of X-ray photoionization on the stellar wind of the normal companion or of a focused wind. We test our model predictions with contemporaneous X-ray observations. The observations and analysis are described in Section 2, our models of the line profiles are presented in Section 3, and our interpretation and conclusions are discussed in Section 4. \\ \section{Observations and Analysis} Cyg X-1 was observed with the STIS on the Hubble Space Telescope (HST) when the X-ray source was behind the normal star and again half an orbit later, at two separate epochs roughly one year apart. Figure \ref{rxtehst} shows the times of our observations in comparison with the 1-day averaged light curves obtained from the all-sky monitor on the Rossi X-Ray Timing Explorer (RXTE; Levine~{\it et~al.}~1996). The RXTE light curves show that the HST observations were taken when the binary was at relatively high X-ray flux, which is associated with the X-ray soft/high state. The STIS instrument design and in-orbit performance have been described by Woodgate~{\it et~al.}~(1998) and Kimble~{\it et~al.}~(1998). The E140M grating provided a velocity resolution of 6 km s$^{-1}$ in the wavelength range 1150--1740~\AA. The data were processed through the standard HST/STIS pipeline and further reduced using the STSDAS routines available through IRAF.
Table 1 lists the HST observation identifiers, along with the start dates, exposure lengths, and orbital phase at the start of each observation. Phases were calculated using an ephemeris that places phase zero at JD2441874.707$\pm$0.009, with an orbital period of 5.599829$\pm$0.000016 days (Brocksopp~{\it et~al.}~1999). Phase zero corresponds to supergiant inferior conjunction. Figure \ref{cygx1hstall} shows the eight datasets in increasing orbital phase order regardless of epoch. These data are not smoothed, are not corrected for reddening, and have had no instrumental quality control applied (e.g., removal of hot pixels). It is clear that the changes in the line profiles of highly ionized material are orbital-phase dependent, with stronger absorption near orbital phase 0.0. The close-ups of the N~V, Si~IV, and C~IV regions in Figure 3 demonstrate that this orbital phase dependence persists for observations taken a year apart. Figure \ref{cygx1lines} shows a spectrum taken at orbital phase 0.96, with the stronger spectral lines labeled. Table 2 lists the features identified using the program SpecView \footnote{$http://www.stsci.edu/resources/software\_hardware/specview$} and tables supplied by NIST \footnote{$http://www.physics.nist.gov/PhysRefData/Handbook/index.html$}. We note that some of the lines are saturated and several are blended with other features. Figures \ref{cygx1nv}-\ref{cygx1heii} show profiles of both high (N~V, C~IV, Si~IV) and low (Si~II, C~II) ionization state material in each of two orbital phases at two epochs. The low ionization material and He~II are likely to be from the stellar surface rather than the wind. \\ \section{Line Profile Models} We have implemented a model that uses the Sobolev method with Exact Integration (SEI; Lamers~{\it et~al.}~1987) to predict the P Cygni line profiles from a wind ionized by an embedded X-ray source. The details of the wind ionization are identical to those given by Boroson~{\it et~al.}~(1999; equations 11-16).
The SEI method extends the escape probability method of McCray~{\it et~al.}~(1984) by including integration along the line of sight. This allows scattering from a range of points around the ``resonant point'', as will result when there is small-scale turbulence (microturbulence). The SEI method also allows a more exact treatment of the interaction between P~Cygni line doublet components. We take note of a complementary analysis of the same STIS observations by Gies~{\it et~al.}~(2007). Gies~{\it et~al.}~also use the SEI method to analyze the wind, but assume that the ions that form the P~Cygni lines are {\it only} present in the region of the wind in which the primary blocks X-ray ionization from the black hole. Following McCray~{\it et~al.}~(1984), we compute the local ionization fraction by combining ambient photoionization (from shocks in the wind, etc.) that would be present in an isolated O~star with the ionization rate computed using the XSTAR code, assuming a local ionization parameter $\xi=L_x/(n r_x^2)$ (with X-ray luminosity $L_x$, number density $n$, and distance from the black hole $r_x$). We use the X-ray spectral shape modeled by Wilms~{\it et~al.}~(2006) for observations taken near our HST observations. Where the O~star shadows the wind from X-ray photoionization by the black hole, we assume only the ambient photoionization rate, which gives optical depths in the line parameterized by $\alpha_1$ and $\alpha_2$, or $\tau_{\rm wind}$, often used for winds of isolated OB stars (Lamers~{\it et~al.}~1987). The XSTAR ionization code is described in Bautista \&\ Kallman (2001). We fit the model, which includes adjustable free parameters, to the spectra at phases near both superior and inferior conjunction of the black hole. Thus, the fits are sensitive to both the blue-shifted absorption trough and the red-shifted emission peak. Although the lines are saturated, the optical depths of the lines in the absence of X-ray photoionization still affect the resultant fits.
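To give a sense of scale for the quantity that controls the XSTAR calculation, the ionization parameter can be evaluated directly. The sketch below is illustrative only: the density and distance are assumed round numbers (not fitted quantities from this paper), and the luminosity is the Si\,IV best-fit value quoted later.

```python
import math

def ionization_parameter(L_x, n, r_x):
    """xi = L_x / (n r_x^2) in erg cm s^-1, the local ionization
    parameter fed to the XSTAR-based calculation."""
    return L_x / (n * r_x**2)

# Illustrative (assumed) values: a round-number wind density and a
# distance of order the orbital separation.
xi = ionization_parameter(L_x=1.6e37, n=1.0e9, r_x=3.0e12)
print(f"log10(xi) = {math.log10(xi):.2f}")  # 3.25 for these assumed values
```

Values of $\log\xi$ of this order are in the regime where X-rays strip Si to charge states above Si\,{\sc IV}, which is the qualitative behavior exploited by the fits.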
We attribute the sharp features (velocity widths of order 30 km~s$^{-1}$) to interstellar absorption lines, the narrower absorption lines (widths of 94 km~s$^{-1}$) to the stellar photosphere, and the broadest features to the wind (terminal velocity from our model of 1420 km~s$^{-1}$). In addition, when we model the N\,V lines, we incorporate narrow interstellar Mg\,II lines at 1239.925 and 1240.397~\AA. The SEI calculation of the line profiles near the rest velocity can require integration to start extremely close to the stellar surface in order to agree with the results of comoving frame methods (Lamers~{\it et~al.}~1987). Hence we expect the P~Cygni line fits to be least reliable near the rest wavelengths of the lines. We assumed fixed values for parameters describing the O~star and orbit as given in Table 3. Our results are not sensitive to the abundance values used, except for the determination of $\dot{M}$. The parameters describing the wind and X-ray illumination are allowed to vary until a $\chi^2$ minimum is found through the downhill simplex (``amoeba'') method of Nelder \& Mead (1965). The errors we quote are the rms variation in the final simplex, in which the $\chi^2_\nu$ varies by less than 1\%. There may be other local $\chi^2$ minima, and some of the parameters may interact. For example, different values of $\tau_{\rm wind}$, $\alpha_1$, and $\alpha_2$ may produce similar functions of optical depth versus radius in the wind. The Si\,IV doublet is generally the most reliable indicator of wind behavior in stars of this spectral type (Lamers~{\it et~al.}~1999), as the N\,V and C\,IV lines are saturated. We show that the lines can be fit using a wind acceleration law that is standard for OB stars: $v=v_\infty(1-1/r)^{\beta}$, with $r$ the radius in the wind normalized to the stellar radius, and $v_\infty$ and $\beta$ free parameters of our model.
(We note that our code used $v=0.01\,v_\infty + 0.99\,v_\infty(1-1/r)^{\beta}$ in order to avoid singularities.) The best-fit value of $\beta$ is $\approx0.75$, which is standard for OB star winds. The best-fit value of $v_\infty\approx1420$~km~s$^{-1}$ is lower than the 2300~km~s$^{-1}$ found by Davis \& Hartmann (1983). It is possible that the higher terminal velocity found by Davis \& Hartmann was due to line blending on the blue side of the feature that was hard to discern in the weak IUE spectrum. We also note that Davis \& Hartmann used IUE data of mostly C~IV to determine their terminal velocity, whereas we use STIS data of Si~IV. We also allow for microturbulence. For the C\,IV lines we fix $\beta$ to that found for the Si\,IV fits, and then for N\,V, which has a lower signal-to-noise ratio, we also fix $v_\infty$. The best-fit parameters for the fits shown in Figures \ref{nvmod}-\ref{civmod} are given in Table 4. \section{Accretion Rate and X-ray Luminosity} \subsection{Mass Accretion Rate} The fits to the changing P~Cygni lines by a model of X-ray ionization within a spherically symmetric wind determine parameters that may be used to estimate the accretion rate onto the black hole. For accretion purely through capture of stellar wind material, Bondi \&\ Hoyle (1944) show that the rate at which mass is captured from a stellar wind by a compact object ($\dot M_{\rm capture}$) is given by \begin{equation} \dot M_{\rm capture}=\frac{4 \pi G^2 M_{\rm bh}^2 \rho}{V_{\rm rel}^3} \end{equation} where $G$ is the gravitational constant, $M_{\rm bh}$ is the mass of the compact object, $\rho$ is the density of the undisturbed gas flow near the compact object, and $V_{\rm rel}=(v_{\rm wind}^2+v_{\rm orbit}^2)^{1/2}$ is the velocity of the wind relative to the compact object, which has velocity $v_{\rm orbit}=2\pi R/P$ for orbital radius $R$ and period $P$ in a wind we take to have $v_{\rm wind}=v_\infty(1-R_*/R)^\beta$.
From mass conservation we have \begin{equation} \rho=\frac{\dot{M}}{4 \pi R^2 v_{\rm wind}} \end{equation} We can relate $\dot M_{\rm capture}$ to the X-ray luminosity if we assume accretion releases energy with some efficiency $e\approx0.1$, so that $L_x=e \dot M_{\rm capture} c^2$. For disk accretion, we expect $e=0.057$ for a nonrotating black hole and $e=0.42$ for a maximally rotating black hole (Shapiro \&\ Teukolsky 1983, p. 429). Thus our best-fit values for $\dot{M}$, $\beta$, and $v_{\infty}$ provide an independent prediction of $L_x$. This prediction is based on the idealization that the accretion proceeds entirely through the gravitational capture of stellar wind material. The prediction also depends on other uncertain parameters ($e$, $M_{\rm bh}$). Our estimate, $L_x\approx8\times10^{37}$~erg~s$^{-1}$ depends sensitively on the velocity of the stellar wind near the black hole. As $v_\infty$ varies from $1000$ to $2000$~km~s$^{-1}$, $L_x$ goes from 3$\times10^{37}$ to $3\times10^{38}$~erg~s$^{-1}$. The model dependence on the properties of the X-ray ionized region of the wind also leads to a determination of $L_x/\dot{M}$ and $\dot{M}$, which together give $L_x=1.6\times10^{37}$~erg~s$^{-1}$ and $\dot M = 4.8\times10^{-6}M_{\odot}~yr^{-1}$ in the fit to the Si\,IV lines. The C\,IV and N\,V lines, which we expect to be less reliable, give $L_x=1.8\times10^{38}$ and $L_x=8.0\times10^{37}$~erg~s$^{-1}$, respectively. We can compare these models for the X-ray luminosity with the observed contemporaneous RXTE ASM count rate, $\approx80$~counts~s$^{-1}$ (Figure~\ref{rxtehst}). Schulz et al. (2002) observed Cygnus X-1 with {\it Chandra} during a period in which the RXTE ASM showed flares. 
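As a rough sanity check, the Bondi-Hoyle estimate above is straightforward to evaluate numerically. In the sketch below, $v_\infty$, $\beta$, and $\dot M$ are taken from the Si\,IV fit, but the black-hole mass, stellar radius, orbital separation, and efficiency $e$ are illustrative assumptions; with these numbers the result comes out at a few $\times10^{37}$ erg s$^{-1}$, the order of magnitude quoted above.

```python
import math

# CGS constants
G, c = 6.674e-8, 2.998e10
Msun, yr = 1.989e33, 3.156e7

def v_wind(R, Rstar, v_inf, beta):
    # beta-law with the small floor used in our code to avoid v = 0 at R = R*
    return 0.01 * v_inf + 0.99 * v_inf * (1.0 - Rstar / R)**beta

def L_x_capture(M_bh, Mdot_wind, R, Rstar, v_inf, beta, P, e=0.1):
    vw = v_wind(R, Rstar, v_inf, beta)
    vorb = 2.0 * math.pi * R / P
    vrel = math.hypot(vw, vorb)                  # (v_wind^2 + v_orbit^2)^(1/2)
    rho = Mdot_wind / (4.0 * math.pi * R**2 * vw)        # mass conservation
    mdot_cap = 4.0 * math.pi * G**2 * M_bh**2 * rho / vrel**3  # Bondi-Hoyle
    return e * mdot_cap * c**2

# Fitted quantities: v_inf, beta, Mdot; everything else is an assumed value.
Lx = L_x_capture(M_bh=10 * Msun,
                 Mdot_wind=4.8e-6 * Msun / yr,
                 R=3.0e12,          # cm, of order the orbital separation
                 Rstar=1.2e12,      # cm, ~17 Rsun
                 v_inf=1.42e8,      # 1420 km/s
                 beta=0.75,
                 P=5.6 * 86400.0)
print(f"L_x ~ {Lx:.1e} erg/s")      # a few x 10^37 erg/s with these assumptions
```

Because $L_x \propto V_{\rm rel}^{-3} v_{\rm wind}^{-1}$, modest changes in the assumed wind velocity move the answer by the factor-of-several range discussed below.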
An upper estimate for the ASM count rate during their observation, $\approx50$~cts~s$^{-1}$, together with their measure of the 0.5-10 keV X-ray luminosity as $1.6\times10^{37}$~erg~s$^{-1}$ (for a distance of 2.5 kpc), would imply an X-ray luminosity of $\ge 2.6\times10^{37}$~erg~s$^{-1}$ during the STIS observations. \subsection{X-ray Luminosity} One further uncertainty in our model of the wind is that we assume a constant X-ray luminosity, whereas the X-rays from Cygnus X-1 often flare and display a complex power spectrum. The saturated P~Cygni lines may respond nonlinearly to flares, although light travel times may diminish the response. In Figures~\ref{changlum1} and \ref{changlum2}, we show how the P~Cygni lines may change in our model in response to a change in X-ray luminosity. If the change were linear, the graph of the model with the best-fit value of $L_{\rm x}$ would be identical to the average of the other two graphs; instead, we see a clear asymmetry. In Figures \ref{conplot1}, \ref{conplot2}, and \ref{conplot3}, we show contours of constant values of $\log_{10}(a_{\rm Si\,IV})$, where $a_{\rm Si\,IV}$ is the fraction of Si that is in the form Si\,{\sc IV}. The lines extending radially from the center of the O~star show the lines of sight at $\phi=0.55$ (extending to the bottom) and $\phi=0.96$ (extending to the top), adjusted for orbital inclination $i$ by the use of an effective phase $\phi^\prime$ such that $\cos 2 \pi \phi^{\prime}=\cos 2 \pi \phi \sin i$. Figures \ref{conplot1} through \ref{conplot3} show that the black hole is very effective at removing the Si\,{\sc IV} ion from the wind. However, it is difficult to infer the optical depth in the wind intuitively from the ion fraction alone. Therefore, we show in Figures~\ref{contau1} through \ref{contau3} the radial optical depths of Si\,{\sc IV} in the wind, given the same three X-ray luminosities, $L_x=(1/3, 1, 5/3)\times 1.6\times 10^{37}$ erg~s$^{-1}$, with other parameters fixed to the best-fit values.
From these plots it should be apparent that the global ionization in the wind is sensitive to the X-ray luminosity. This is particularly true at the low end of the X-ray luminosity range. In that case, a large region outside of the X-ray shadow remains at an optical depth that causes noticeable signatures in the line profiles, in spite of X-rays from the black hole. In these figures, we use our parameterization of the background optical depth in terms of $\alpha_1$, $\alpha_2$, and $\tau_{\rm wind}$ to determine the optical depth in the shadow region. The resulting contours of constant optical depth are ellipsoidal, compressed toward the compact object. \section{Discussion and Conclusions} The Space Telescope Imaging Spectrograph on Hubble provides the highest resolution ultraviolet spectra taken of Cyg X-1 to date. Observations were taken at two epochs roughly a year apart; each epoch covered orbital phases when the compact object was behind the stellar companion and when it was in front of the companion star. We find P~Cygni profiles from high ionization (N\,V, C\,IV, Si\,IV) gas. For both epochs the P~Cygni profiles show significantly less absorption at phases when the compact object is in the line of sight. RXTE observations indicate that the X-ray flux of the system was at a similar level at each epoch, so the observed changes can be attributed entirely to orbital effects. We interpret this to mean that X-rays from the compact object photoionize the wind from the massive companion, resulting in reduced absorption by the wind material. The P Cygni profiles of the selected species are thus consistent with the Hatchett-McCray effect, which also appears in UV observations of LMC X-4 and SMC X-1 (Boroson et al. 1999; Vrtilek et al. 1997; Treves et al. 1980).
SEI models can fit the observed P Cygni profiles and provide measurements of the stellar wind parameters. The Si\,IV fits are the most reliable and we use them to determine $L_x/\dot{M} = (4.8\pm0.3) \times 10^{42}$ erg s$^{-1}$ M$_{\odot}^{-1}$ yr, where $L_x$ is the X-ray luminosity and $\dot{M}$ is the mass-loss rate of the star. The results from the C\,IV and N\,V lines are less reliable because they are saturated and the C\,IV fit does not match the data well. For these fits we fixed the terminal velocity $v_{\infty}$ and the microturbulent velocity to those given by our fits to Si\,IV. The best-fit values for the optical depth in the ambient wind are high ($\ge 10$). Once saturated, the OB star wind lines hardly change with increasing optical depth, but when much of the wind is ionized by the black hole, the ion fraction in the remaining regions can have a significant effect on the line profile. The reduced absorption when the compact object is in the line of sight is inconsistent with focusing of the wind toward the compact object, as has been suggested by several authors (e.g., Sowers~{\it et~al.}~1998; Tarasov~{\it et~al.}~2003; Miller~{\it et~al.}~2005), as then we would expect more absorbers in the line of sight and hence increased P Cygni absorption. Also, IUE observations taken at 8 orbital phases show a continuous variation in the P Cygni profiles with maximum absorption at phase 0.0 and minimum at 0.5 (Treves~{\it et~al.}~1980; van Loon~{\it et~al.}~2001). Further high spectral resolution ultraviolet observations of Cyg X-1 will be necessary to study the behavior of the P Cygni lines during different X-ray states. In an analysis of 2 years of RXTE/ASM data, Wen~{\it et~al.}~(1999) found the 5.6 day orbital period of Cyg X-1 during the X-ray low/hard state, but no evidence of the orbital period during the high/soft state.
Wen~{\it et~al.}~suggest that absorption of X-rays by a stellar wind from the companion star can reproduce the observed X-ray orbital modulations in the hard state: the lack of modulation in the soft state could be due either to a reduction of the wind density during the soft state or to partial covering of a central hard X-ray emitting region by an accretion stream. Gies~{\it et~al.}~(2003) used the results of a four-year spectroscopic monitoring program of the H$\alpha$ emission strength of HDE226868, the normal companion to Cyg X-1, to argue that the low/hard X-ray state occurs when there is a strong, fast wind and the accretion rate is low, while in the high/soft state a weaker, highly ionized wind attains only a moderate velocity and the accretion rate increases. The interpretations of both Wen~{\it et~al.}~(1999) and Gies~{\it et~al.}~(2003) are inconsistent with the fact that the {\it total} X-ray luminosity from 1.3-200 keV remains constant during both the X-ray soft and hard states (Wen~{\it et~al.}~1999): the designation of X-ray high or X-ray low during these states is only applicable to the 1.5-12 keV ASM band. Since the 1.3-200 keV X-ray luminosity is unchanged from the hard to the soft state, fluctuations in the narrow ASM band cannot be due to a reduction in accretion or to obscuration of the X-ray source; rather, it is a physical change that causes the dominant emission mechanism to switch from thermal to power-law. We note that the orbital modulation observed by Wen~{\it et~al.}~in the hard state ($\pm 1.6$ ASM cts/sec around the average) is less than the errors on the counts during the soft state ($\pm 3$ ASM cts/sec); we suggest that the quality of the RXTE ASM data is not sufficient to detect this low level of orbital modulation during the soft state. Ultraviolet observations clearly show orbital modulation during both X-ray hard and soft states (Gies~{\it et~al.}~2007). Our wind models explain both the X-ray and ultraviolet flux during the soft/high state.
We need high spectral resolution ultraviolet observations during the hard/low state to determine if there is a change in wind density between states. Our determination of the mass-accretion rate can be considered as a positive check on the Hatchett-McCray models, as the Si\,IV line gave $L_x=1.6\times10^{37}$~erg~s$^{-1}$, and the model is subject to systematic uncertainties. However, this value of $L_x$ is a factor of 3 lower than the best estimate of the accretion rate according to Bondi-Hoyle-Littleton spherical wind accretion. We suggest that some of the energy of accretion may go into powering the jet. Our test of the dependence of our results on X-ray luminosity confirms the utility of our models and demonstrates that the wind outside the shadow zone is still sensitive to X-ray illumination, an arrangement which allows us to fit $L_{\rm x}$ as a free parameter in our model. In future observations of the time-variability of the wind lines, the light travel-time effects may be used to advantage, as the wind may act as a ``low-pass filter" to the X-ray observations, with the filter cutoff indicating the size of the ionized region (Kallman, McCray, \&\ Voit 1987). \acknowledgments AH would like to acknowledge support from the REU program NSF grant 9731923 awarded to SAO. DG would like to acknowledge support provided by NASA through a grant (GO-9840) from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. The X-ray results were provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF at NASA's GSFC. The contour graphs of ion fraction benefitted from the programming of Corey Casto, who helped increase the resolution of the contours.
\section{Introduction} A number of authors have argued that, at sub-Eddington accretion rates, the gravitational potential energy released by turbulent stresses in an accretion flow may be stored as thermal energy, rather than being radiated (Ichimaru 1977; Rees et al. 1982; Narayan \& Yi 1994, 1995; Abramowicz et al. 1995; Chen et al. 1995; see Narayan, Mahadevan, \& Quataert 1998b, and Kato, Fukue, \& Mineshige 1998 for reviews). Narayan \& Yi (1994,1995) noted that such advection-dominated accretion flows (ADAFs) have the interesting property that their Bernoulli parameter, a measure of the sum of the kinetic energy, gravitational potential energy, and enthalpy, is positive; since, in the absence of viscosity, the Bernoulli parameter is conserved on streamlines, the gas can, in principle, escape to ``infinity'' with positive energy. Narayan \& Yi speculated that this might make ADAFs a natural candidate for launching the outflows/jets seen to originate from a number of accretion systems. Blandford \& Begelman (1998; hereafter BB98) have recently suggested that mass loss via winds in ADAFs may be both dynamically crucial and quite substantial. They construct self-similar ADAF solutions in which the mass accretion rate in the flow varies with radius $R$ as $\dot M \propto R^p$. If the wind carries away roughly the specific angular momentum and energy appropriate to the radius from which it is launched, they show that the remaining (accreting) gas has a negative Bernoulli parameter only for large values of $p \sim 1$. They therefore propose that the majority of the mass originating at large radii is lost to a wind. For example, for $p=1$, only a fraction $\sim(R_{in}/R_{out})\ll1$ of the mass would accrete onto the central object, where $R_{in}$ and $R_{out}$ are the inner and outer radii of the ADAF. In a separate study, Di Matteo et al. 
(1998; hereafter D98) measured the flux of radio and submillimeter emission from the nuclei of nearby elliptical galaxies and found fluxes significantly below the values predicted by the ADAF model. Their observations are difficult to reconcile with Fabian \& Rees's (1995) proposal that these galactic nuclei contain ADAFs. D98 discuss a number of explanations for the ``missing'' flux; one of their suggestions is that a significant wind may carry off much of the accreting mass in the ADAF. Spectral models of ADAFs without mass loss have been applied to a number of low luminosity accreting black hole systems. They give a satisfying description of the spectral characteristics of several quiescent black hole binaries (Narayan, McClintock, \& Yi 1996, Narayan, Barret, \& McClintock 1997; Hameury et al. 1997) and low luminosity galactic nuclei, e.g., Sgr A* (Narayan, Mahadevan, \& Yi 1995; Manmoto et al. 1997; Narayan et al. 1998) and NGC 4258 (Lasota et al. 1996a, Gammie, Narayan, \& Blandford 1998). Our goal in this paper is to use broad-band spectral observations to test for the presence of mass loss in low luminosity accreting black holes, paying special attention to the implications of uncertainties in the microphysics of the accretion flow. Specifically, we attempt to answer the following question: are the no-mass loss ADAF models in the literature, which fit the observations reasonably well, unique ``ADAF'' fits to the data, or are models with substantial mass loss also viable? If the latter, since it is unlikely that purely theoretical arguments will be definitive, can we distinguish between no-wind and wind models with future observations? As a first step toward addressing these questions, we calculate spectral models of ADAFs with $\dot M \propto R^p$, and compare them with observations of the X-ray binary V404 Cyg in quiescence, the Galactic center source Sgr A*, and the nucleus of the elliptical galaxy NGC 4649. 
We assume throughout that all observed radiation from the systems under consideration is due to the accretion flow, i.e., the wind/outflow does not radiate significantly. In the next section (\S2), we discuss our modeling techniques. We then show models for V404 Cyg (\S3) and Sgr A* (\S4) and compare the models to observations, focusing on the available theoretical parameter space. In \S5 we discuss D98's results on the radio emission in nearby ellipticals. We then propose several future observations which may help clarify the physics of ADAFs (\S6). Finally, in \S7 we summarize and discuss our results. \section{Modeling Techniques} Over the last few years, ADAF models have seen a series of improvements such that the modeling techniques used currently are much superior to earlier methods. The first published spectral models of ADAFs used the self-similar solution of Narayan \& Yi (1994) to model the dynamics, but this was soon replaced by global models, initially for a pseudo-Newtonian potential (Narayan, Kato, \& Honma 1997, Chen, Abramowicz, \& Lasota 1997), and more recently in the full Kerr metric (Abramowicz et al. 1996, Peitz \& Appl 1997, Gammie \& Popham 1998). The spectral modeling too has seen improvements, particularly in the treatment of the electron energy equation and the Comptonization. The electron energy equation was originally taken to be local (e.g., Narayan, McClintock, \& Yi 1996), with heating due to Coulomb collisions and turbulent heating balancing cooling. As emphasized by Nakamura et al. (1997), however, the electron entropy gradient (electron advection) generally cannot be neglected, and so this is now included (see eq. [\ref{ee}]). Narayan et al. (1998a) discuss how the predicted spectra have changed as the modeling techniques have improved. The changes have generally been fairly modest, at least compared to the large changes we see in the present paper when we include mass loss from the accretion flow. 
In this paper, we use the latest techniques for numerically calculating spectral models of ADAFs (plus any thin disk at large radii), as described in detail by Narayan et al. (1998a; see also Esin, McClintock, \& Narayan 1998 and Narayan, Barret, \& McClintock 1997). Here we mention only the relevant differences. As in Narayan et al. (1998a) and Esin et al. (1997), we solve the full electron energy equation, including the electron entropy gradient. The equation takes the form \begin{equation} n_e v {d \over dR } \left({k T_e \over \gamma_e - 1}\right)= k T_e v {d n_e \over d R}+H_e +q_{ie}-q_e^-, \label{ee} \end{equation} where $T_e$ is the electron temperature, $\gamma_e$ is the adiabatic index of the electrons, $n_e$ is the electron number density, $v$ is the radial velocity, $H_e$ is the turbulent heating rate of the electrons, $q_{ie}$ is the energy transferred to the electrons from the ions by Coulomb collisions, and $q_e^-$ is the radiative cooling rate of the electrons. The first term on the right hand side of equation (\ref{ee}) describes the increase in the electron internal energy due to $PdV$ work, and is the volumetric version of $q_c$ defined in equation (\ref{comp}) below. A difference in this paper, relative to earlier work, is that we take $\gamma_e$ to be that of a monatomic ideal gas ($5/3$ in the non-relativistic limit, decreasing to $4/3$ in the relativistic limit). Esin et al. (1998) argued that $\gamma_e$ should include contributions from the magnetic energy density in the flow. As discussed in Quataert \& Narayan (1998; their Appendix A), this is incorrect if MHD adequately describes the accretion flow. This is of some significance for models of low luminosity systems. For example, in the ``standard'' ADAF model of Sgr A*, the electrons are, to good approximation, adiabatically compressed. The larger $\gamma_e$ used in this paper yields higher electron temperatures (by a factor of $\sim 3$) and significantly more synchrotron emission. 
As a result, to produce a radio flux comparable to that in Narayan et al. (1998a), we require a noticeably weaker magnetic field. We describe the turbulent heating of the electrons via a parameter $\delta$, defined by $H_e \equiv \delta q^+$, where $q^+$ is the usual ``viscous'' dissipation rate of accretion theory (e.g., Kato et al. 1998). Thus, $\delta$ is the fraction of the total energy generated by turbulent stresses in the fluid ($q^+$) that directly heats the electrons. As discussed in Quataert \& Narayan (1998), there is a subtlety in interpreting $q^+$ in ADAFs which is not present in thin disks; namely, only a fraction $\eta$ ($\sim 1/2$) of $q^+$ is likely to end up in the particles; the rest is used to build up the magnetic field and turbulence as the accreting gas flows in.\footnote{Essentially, the parameter $\eta$ reflects the fact that, just as one must account for advection by the particles, one must also account for advection by the turbulence.} Of the fraction $\eta$, a fraction $\delta_H$ goes into electrons and $(1-\delta_H)$ goes into ions. Thus, in terms of $\eta$ and $\delta_H$, the $\delta$ we use in this paper is $\delta \equiv \delta_H\eta$. Accounting for a variable mass accretion rate in the flow, the continuity equation becomes \begin{equation} \dot M = - 4 \pi R^2 H_\theta \rho v = \dot M_{\rm out} \left({R \over R_{\rm out}}\right)^p, \label{cont} \end{equation} where $H_\theta$, $\rho$, and $v$, are, respectively, the angular scale height, mass density, and radial velocity in the flow. The quantity $\dot M_{\rm out}$ is the accretion rate at the radius $R_{\rm out}$, where winds become important. We take the radial velocity, angular velocity, and sound speed of the flow from the global, relativistic, models of Gammie \& Popham (1998), and then use equation (\ref{cont}) to calculate the density, $\rho$. This is, strictly speaking, inconsistent, as Gammie \& Popham's models were derived under the assumption of constant $\dot M$. 
The error made in this approximation should, however, be of order unity. From a spectral modeling point of view, the primary importance of the wind is that it modifies the density in the flow; this is correctly captured by equation (\ref{cont}). Generically, ADAFs with winds will rotate more quickly than those without winds. This is seen in the self-similar solution of BB98, where the rotational support enables the enthalpy of the gas to decrease, thus permitting the Bernoulli parameter to become negative. The shear and the viscous dissipation per unit mass in the flow are therefore expected to be larger in the presence of a wind.\footnote{This is actually true only for certain ``types'' of winds (in particular, depending on BB98's parameters $\epsilon$ and $\lambda$). As discussed in, e.g., Blandford \& Payne (1982), it is possible for a wind to take away all of the angular momentum and energy flux from the disk, leaving it cold and dissipationless.} We have crudely accounted for this effect as follows. In non-wind models, the flow structure is, among other things, a function of $\gamma_g$, the adiabatic index of the fluid. In calculating models of systems with winds and high $\delta$ (Figures 2b, 4b, 7, \& 8), we have chosen $\gamma_g$ such that it yields a rotation rate in the interior of the flow which is comparable to that expected from the self-similar wind solution of BB98. In particular, a self-similar, non-relativistic, ADAF has a Bernoulli parameter equal to zero only if $\Omega/\Omega_K \approx [2p/(p + 5/2)]^{1/2}$. For our typical value of $p = 0.4$, this yields $\Omega/\Omega_K \approx 0.53$. In this case we take $\gamma_g \approx 1.5$ in our global calculations, since it reproduces this rotation rate well. Note that it is important to get the right $\Omega$ only for high $\delta$. For low $\delta$, since turbulent heating of electrons is unimportant, the exact $\gamma_g$ we use is not important. 
We have confirmed this by calculating models with various choices of $\gamma_g$ at low $\delta$ and making sure that the spectral models are only weakly modified. We are reasonably confident that, even though we have used an ad hoc prescription in choosing the global solutions, our parameter estimates are fairly accurate. Ultimately, of course, global, relativistic, models of ADAFs with winds will be needed to correctly assess some of the issues addressed in this paper. \subsection{Choice of parameters} We measure black hole masses in solar units and (radially varying) accretion rates in Eddington units: $M = m M_{\odot}$ and $\dot M = \dot m \dot M_{edd}$. We take $\dot M_{edd} = 10L_{edd}/c^2 = 2.2 \times 10^{-8} m M_{\odot} {\rm yr}^{-1}$, i.e., with a canonical 10 \% efficiency. We measure radii in the flow in Schwarzschild units: $R = r R_s$, where $R_s = 2GM/c^2$ is the Schwarzschild radius of the black hole. The parameters of our models are $m$, $\dot m_{\rm out} = \dot M_{\rm out}/\dot M_{\rm edd}$, $\beta$, $\alpha$, $\delta$, $r_{\rm out}$, and $p$. Our primary focus is to consider the effects of winds via the parameter $p$ (defined in eq. [\ref{cont}]). As we show, however, variations in $p$ are qualitatively degenerate with variations in other parameters of the problem. The mass of the central black hole, $m$, is estimated from observations. As in all previous work, we fix $\dot m_{\rm out}$ by adjusting it so that the X-ray flux in the model fits the available data. For all of the models presented here, $r_{\rm out}=10^4$. Note that $r_{\rm out}$ and $p$ are, roughly speaking, degenerate; what is of primary importance is $r_{\rm out}^{-p}$, the fraction of the incoming mass accreted onto the central object. Typical values of $p$ considered are $p = 0$ (no winds) and $p = 0.4$ (moderately strong wind). 
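The interplay between $p$ and $r_{\rm out}$ in equation (\ref{cont}) is easy to make concrete. The sketch below evaluates the radially varying accretion rate and the fraction $r_{\rm out}^{-p}$ of the supplied mass that reaches the black hole; the value of $\dot m_{\rm out}$ used is illustrative, not a fitted one.

```python
# Radially varying accretion rate, mdot(r) = mdot_out * (r / r_out)**p
# (eq. [cont]), in Eddington units.
r_out = 1e4        # outer radius in Schwarzschild units
mdot_out = 1e-2    # accretion rate at r_out (illustrative, not a fitted value)

def mdot(r, p):
    """Accretion rate at radius r for wind strength parameter p."""
    return mdot_out * (r / r_out) ** p

# The fraction of the incoming mass accreted onto the central object is
# r_out**(-p), which is why r_out and p are roughly degenerate:
for p in (0.0, 0.4):
    print(p, mdot(1.0, p) / mdot_out)   # p=0 gives 1; p=0.4 gives 10**-1.6 ~ 0.025
```

For the typical values $p = 0.4$ and $r_{\rm out} = 10^4$, only a few percent of the mass supplied at $r_{\rm out}$ is actually accreted.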
The quantities $\beta \equiv P_{\rm gas}/P_{\rm mag}$, $\alpha$, and $\delta$ are microphysical parameters representing the magnetic field strength in the flow, the efficiency of angular momentum transport, and the fraction of the turbulent energy which heats the electrons, respectively.\footnote{Our definition of $\beta$ is that utilized in the plasma physics literature. A number of workers in the accretion literature define a ``$\beta$'' via $\beta_{\rm adv} \equiv P_{gas}/P_{tot}$, with $P_{tot} = P_{gas} + P_{mag}$. This is related to our $\beta$ by $\beta_{\rm adv} = 3\beta/(3\beta + 1)$ or $\beta_{\rm adv} = \beta/(\beta + 1)$, depending on whether one defines the magnetic pressure to be $B^2/24 \pi$ or $B^2/8 \pi$ (as we do here).} Recent ADAF models in the literature have favored the values $\beta = 1$, $\alpha = 0.25$, and $\delta = 10^{-3}$ (cf. Narayan et al. 1998a), and have considered only factor of few variations in $\alpha$ and $\beta$ and factor of $\sim 10$ variations in $\delta$. There is, however, considerable uncertainty in the microphysics of ADAFs; each of the above parameters must be regarded as uncertain to at least an order of magnitude, likely more. As we will show in this paper, mass loss from the accretion flow has a dramatic effect on theoretically predicted spectra. If we were to restrict ourselves to the values of the microphysics parameters given above, significant mass loss would be all but ruled out by the observations. Such a restriction would, however, be an inaccurate reflection of the theoretical uncertainty in the microphysics of the flow. The philosophy adopted in this paper is therefore somewhat different from previous studies. We allow $\alpha$, $\beta$ and $\delta$ to vary over a much larger range, but one which we believe correctly encompasses the theoretical uncertainties. 
For purely theoretical reasons (see below) we take our ``canonical'' values to be different from those of previous studies, namely, $\beta = 10$, $\alpha = 0.1$, and $\delta = 10^{-2}$. By canonical we mean (only) that, when one of the parameters is varied (e.g., $p$), the others (e.g., $\delta$) are typically fixed at their canonical values. A major point of this paper will be that, depending on the importance of winds, these values may or may not be consistent with observations. Theoretical work on particle heating in ADAFs (Gruzinov 1998, Quataert 1998, Quataert \& Gruzinov 1998) and ``fluid'' models for the evolution of the turbulent energy in an ADAF (Quataert \& Narayan 1998) suggest that subthermal magnetic fields may be likely; we consider $\beta = 10$ to be a plausible value. We take, however, a range of $\beta$, from $\beta=1$ (strict equipartition of gas and magnetic pressure) to $\beta=100$ (weak fields). If the turbulent stresses arise solely from magnetic fields, we expect the viscosity parameter to scale roughly as $\alpha\sim1/\beta$ (Hawley, Gammie \& Balbus 1996). We do not always enforce this relation in our models, but sometimes vary $\alpha$ and $\beta$ independently. We consider values of $\alpha$ ranging from 0.03 to 0.3. We should note, however, that large values of $\alpha \approx 0.25$ are needed in applications of the ADAF model to X-ray binaries such as Nova Muscae 1991, Cyg X--1 and GRO J0422+20 in the low/hard state (Narayan 1996, Esin, McClintock \& Narayan 1997, Esin et al. 1998). If $\alpha$ is much smaller than 0.25, the maximum accretion rate, $\dot m_{crit}$, up to which the ADAF solution is possible decreases significantly, and the maximum luminosity of the models becomes much smaller than the observed luminosities. We have confirmed that this limit on $\alpha$ is not modified if winds are included in the models. The value of the parameter $\delta$ is uncertain. 
Traditional ADAF models have taken $\delta$ to be small ($\sim 10^{-3}$) and never considered $\delta \mathrel{\mathpalette\@versim>} 0.03$. A number of studies have been carried out to investigate the heating of protons and electrons in hot plasmas. Quataert (1998) and Gruzinov (1998; see also Blackman 1998, Quataert \& Gruzinov 1998) considered particle heating by MHD turbulence and concluded that $\delta$ might be small so long as $\beta$ is greater than $\sim 10$. Bisnovatyi-Kogan \& Lovelace (1997; see also Quataert \& Gruzinov 1998), however, argue that magnetic reconnection, and its presumed electron heating, may lead to large values of $\delta \sim 1$.\footnote{Despite our disagreement with some of Bisnovatyi-Kogan \& Lovelace's arguments (see Blackman 1998), their basic point, that reconnection may be crucial for ADAF models, is nonetheless important.} In this paper, we avoid theoretical prejudice and consider values of $\delta$ ranging from 0 to 0.75. Since the maximum value of $\delta$ is $\eta$, the fraction of the turbulent energy that goes into the particles, the value $\delta = 0.75$ likely corresponds to a situation where electrons are heated much more strongly than ions. \subsection{Description of Spectra} In the following sections we compare theoretical spectra of ADAFs, both with and without winds, to observations of low-luminosity systems. In preparation for this we introduce here the main features of the calculated spectra. Three radiation processes are of importance in ADAF spectra: synchrotron emission, Compton scattering, and bremsstrahlung. Each of these produces distinct and easily recognized features in the spectrum. The relative importance of each mechanism is a function of the temperature and density of the plasma, and thus of the model parameters, $\alpha$, $\beta$, $\delta$, $p$, $m$, and $\dot m$.
Thermal synchrotron emission in ADAFs is invariably self-absorbed and produces a sharply cutoff peak, with a peak frequency that depends on the mass of the black hole: $\nu_{s} \sim10^{15}m^{-1/2}$ Hz. The synchrotron peak is in the optical band for stellar-mass black holes, and in the radio for supermassive black holes. Synchrotron emission from different radii in the flow occurs at different frequencies. The peak emission, however, is always from close to the black hole and reflects the properties of the accreting gas near $r \sim 1$. In spectra of quiescent systems of the sort we discuss in this paper ($\dot m_{in}\lsim10^{-3}$), and especially in the absence of winds, the synchrotron peak is the most luminous feature in the spectrum. The maximum value of $\nu L_\nu$ is given by (Mahadevan 1997) \begin{equation} \nu_s L_{\nu,s} \propto B^3 T^7_e \propto \beta^{-3/2} \dot m_{in}^{3/2} T^7_e, \label{lsynch} \end{equation} where all quantities should be evaluated at $r \sim 1$ and $\dot m_{in}$ is the accretion rate near $r \sim 1$. Note the very steep dependence on the electron temperature. In writing equation (\ref{lsynch}), we have taken $\rho \propto \dot m$, but independent of $\alpha$, as is appropriate near the central object. This can be understood by noting that, near the central object, the self-similar scaling $\rho \propto \dot m/\alpha$ fails. The dynamics in the synchrotron and Compton emitting regimes is dominated by the presence of a sonic point at $r \sim 3-5$ (Narayan, Kato, \& Honma 1997, Chen et al. 1997, Gammie \& Popham 1998). Near this radius the flow velocity is $\sim$ the sound speed, independent of $\alpha$. By the continuity equation, then, the density in the interior scales as $\rho \propto \dot m$, with only a weak dependence on $\alpha$. The density on the outside, however, does scale as $\rho \propto \dot m/\alpha$ because, away from the sonic point, self-similarity is reasonably valid. 
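The steepness of the temperature dependence in equation (\ref{lsynch}) is worth quantifying. The sketch below evaluates the scaling only (arbitrary normalization; the parameter values are illustrative):

```python
# Peak synchrotron power scaling (eq. [lsynch]):
#   nu_s * L_nu,s  scales as  beta**-1.5 * mdot_in**1.5 * T_e**7
# Absolute normalization is arbitrary here; T_e is in arbitrary units.
def lsynch(beta, mdot_in, T_e):
    return beta ** -1.5 * mdot_in ** 1.5 * T_e ** 7

base = lsynch(10.0, 1e-3, 1.0)

# Halving T_e suppresses the peak by 2**7 = 128, far more than a
# factor-of-two change in mdot_in (2**1.5 ~ 2.8) or in beta:
print(lsynch(10.0, 1e-3, 0.5) / base)    # ~ 1/128
print(lsynch(10.0, 0.5e-3, 1.0) / base)  # ~ 0.35
```

This is why modest changes in the electron temperature (e.g., through the heating parameter $\delta$) can dominate over order-of-magnitude changes in the other parameters.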
Compton scattering of synchrotron photons by the hot electrons in the accreting gas produces one, or sometimes two peaks in the spectrum at frequencies higher than the synchrotron peak. The peaks correspond to successive scatterings by the electrons. As with the synchrotron peak, the Compton features are again sensitive to the properties of the gas near the black hole. The frequency of the first Compton peak $\nu_{c}$ is related to $\nu_{s}$ by the Compton boost factor $A$, which is a function only of the electron temperature \begin{equation} {\nu_{c}\over\nu_{s}}=A=1+4\theta_e+16\theta_e^2, \qquad \theta_e={kT_e\over m_ec^2}={T_e\over 5.9\times10^9~{\rm K}}. \label{A} \end{equation} The power in the Compton peak relative to that in the synchrotron peak depends on both $A$ and the optical depth of the flow to electron scattering ($\tau$) \begin{equation} \nu_c L_{\nu,c} \approx \nu_s L_{\nu,s} \left({\nu_c \over \nu_s}\right)^{\alpha_c}, \qquad \alpha_c = 1 + {\ln \tau \over \ln A}. \label{compt} \end{equation} The relative power in the synchrotron and Compton peaks therefore provides some information on the density in the inner regions of the flow, and thus on $\dot m_{in}$. Note that for the low luminosity systems considered here, $\tau \ll 1$ and $\alpha_c < 0$, so that the synchrotron luminosity dominates the Compton luminosity. Finally, bremsstrahlung emission produces a peak that typically extends from a few to a few hundred keV. In contrast to the other two processes discussed above, this emission arises from all radii in the flow. To see this, consider a self-similar ADAF with a wind, for which $\rho \propto r^{-3/2 + p}$ and $T_e \propto r^{-\epsilon}$ ($\epsilon \sim 1$ at large radii). Let the minimum flow temperature be $T_{min}$ (which occurs at $r_{out}$) and the maximum temperature be $T_{max}$ (near $r\sim1$). 
At photon energies $\ll kT_{min}$, the bremsstrahlung emission is given roughly by $\nu L_\nu \propto \nu$ (the spectral index is a little different from unity because of the Gaunt factor, which we ignore for simplicity), while for $kT_{min} \ll h \nu \ll kT_{max}$, it is $\nu L_\nu \propto \nu^{1/2 - 2p/\epsilon}$. In each case, the emission comes from the largest radius which satisfies $h \nu \sim k T(r)$. In our models, $T_{min}$ is $\sim (10^{12}/r_{\rm out})$ K; therefore, for $r_{\rm out}=10^4$, $kT_{min}$ is $\sim 10$ keV. For X-ray observations in the range $0.1-10$ keV, $h \nu \mathrel{\mathpalette\@versim<} kT_{min}$. By the above, $\nu L_{\nu}$ should be roughly proportional to $\nu$, but should flatten beyond about 10 keV; $\nu L_\nu$ will vary as $\nu^{1/2}$ beyond 10 keV if there is no wind ($p = 0$) and it will be flatter or even turn over (in $\nu L_\nu$) if there is a strong wind (large $p$). In all cases, the hardest emission, at $\mathrel{\mathpalette\@versim>} 100$ keV, occurs from the inner $r \mathrel{\mathpalette\@versim<} 100$, while the softer emission comes from $\sim r_{\rm out}$ (this is particularly true for $p > 0$). Observations in the 1--10 keV X-ray band are therefore most sensitive to the outer regions of the ADAF. In these regions, the electron temperature is fairly well-determined since the gas is essentially virial and one-temperature. Therefore, observations of the bremsstrahlung emission at a few keV give direct information on the density of the outer flow, and thereby the accretion rate on the outside, $\dot m_{out}$. In the sources that we consider below, the synchrotron peak is isolated and well observed (in X-ray binaries, the companion must be subtracted out); it occurs in softer bands, either in the optical or radio. The Compton and bremsstrahlung peaks, however, can sometimes be superposed in the X-ray band. 
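The Compton and bremsstrahlung features described in this subsection can be summarized in a short numerical sketch. Note that equations (\ref{A}) and (\ref{compt}) together imply $(\nu_c L_{\nu,c})/(\nu_s L_{\nu,s}) = A^{\alpha_c} = A\tau$; the temperature and optical depth below are illustrative values, not fitted ones.

```python
import math

def boost(theta_e):
    """Compton boost factor A = 1 + 4*theta_e + 16*theta_e**2 (eq. [A])."""
    return 1.0 + 4.0 * theta_e + 16.0 * theta_e ** 2

def compton_to_synch(theta_e, tau):
    """(nu_c L_c)/(nu_s L_s) = A**alpha_c with alpha_c = 1 + ln(tau)/ln(A)
    (eq. [compt]); algebraically this reduces to A*tau, so tau << 1 means
    the synchrotron peak dominates the first Compton peak."""
    A = boost(theta_e)
    alpha_c = 1.0 + math.log(tau) / math.log(A)
    return A ** alpha_c

def brems_slope(p, epsilon=1.0):
    """Slope d(ln nu L_nu)/d(ln nu) = 1/2 - 2p/epsilon for kT_min << h nu << kT_max."""
    return 0.5 - 2.0 * p / epsilon

# theta_e = 1 (T_e ~ 5.9e9 K) and tau = 0.01 are illustrative:
print(compton_to_synch(1.0, 0.01))          # ~ A*tau = 21 * 0.01 = 0.21
print(brems_slope(0.0), brems_slope(0.4))   # 0.5 for no wind; negative (turnover) for p = 0.4
```

The negative hard X-ray slope for $p = 0.4$ illustrates how the high-energy bremsstrahlung emission probes the wind strength.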
In particular, an important consequence of the $m$ dependence of the frequency of the synchrotron peak ($\nu_s$) is that, without winds, the X-ray spectrum of ADAF models of low luminosity galactic black hole candidates is usually dominated by the first Compton peak. In low luminosity AGN, however, the precise behavior in the X-ray band is sensitive to the details (microphysics, accretion rate) of the model, being a competition between the second Compton peak and bremsstrahlung. This is because the peak synchrotron emission is at substantially lower frequencies and a synchrotron photon must be scattered more than once (or off of hotter electrons) in order to be scattered into the X-ray band; this tends to suppress the importance of Comptonization. Note that bremsstrahlung and Comptonization can be readily distinguished by their different spectral slopes in the X-ray band. If there is a thin disk outside the ADAF, as in our models of X-ray binaries (\S3), the emission of the disk is seen as a blackbody-like feature in the spectrum. This emission is in the red or near infrared for quiescent X-ray binaries in which the disk is restricted to $r>r_{\rm out}\sim10^4$. \subsection{The Effects of Winds on Spectral Models} Bremsstrahlung emission at $\sim 1-10$ keV is rather insensitive to the presence of a wind (i.e., to $p$) since it originates in the outer regions of the flow and essentially measures $\dot m_{\rm out}$. At higher energies, $\mathrel{\mathpalette\@versim>} 10$ keV, however, the bremsstrahlung emission decreases with increasing $p$ ($\nu L_\nu \propto \nu^{1/2 - 2p/\epsilon}$) and thus provides a powerful probe of the flow density and the value of the parameter $p$ (see \S 6.2). By contrast, the predicted synchrotron emission decreases strongly with increasing $p$. There are two reasons for this. First, increasing $p$ decreases the density of the plasma near $r \sim 1$, where the high frequency synchrotron emission originates. 
This implies a lower gas pressure and hence a weaker magnetic field (for fixed $\beta$). Perhaps more importantly, the electron temperature decreases as $p$ increases. For the low luminosity systems considered in this paper, and for small $\delta$, the electrons are nearly adiabatic, i.e., $T_e \propto \rho^{\gamma_e - 1} \propto r^{(-1.5 + p)(\gamma_e -1)}$. When $p$ is large, the density profile is flatter, adiabatic compression is less efficient, and hence $T_e$ is smaller. By equation (\ref{lsynch}), the synchrotron emission is particularly sensitive to the electron temperature. Therefore, the synchrotron emission falls very rapidly with increasing $p$. This effect can, as we show explicitly below, be countered by increasing $\delta$, since a larger $\delta$ means stronger turbulent heating of the electrons and thus larger $T_e$. The Compton power decreases with increasing $p$ even more strongly than the synchrotron does. As equation (\ref{compt}) shows, $\nu_c L_{\nu,c}$ depends on both $\nu_s L_{\nu,s}$ and $\alpha_c$, both of which decrease because of the wind ($\alpha_c$ decreases because $\tau$ and $\theta_e$ both decrease). Increasing $\delta$ to restore the synchrotron power also increases the Compton power, as discussed in the following sections. \section{Soft X-ray Transients in Quiescence} Soft X-ray transients (SXTs) are mass transfer binaries which occasionally enter a high luminosity, ``outburst,'' phase, but most of the time remain in a very low luminosity, ``quiescent,'' phase. The spectra of quiescent SXTs are not consistent with a thin accretion disk model, which is unable to account for the fluxes and spectral slopes in the optical and X-ray bands consistently (e.g., McClintock et al. 1995). Narayan, McClintock, \& Yi (1996; see also Narayan et al. 1997a, Hameury et al. 1997) showed that this problem can be resolved if quiescent SXTs accrete primarily via ADAFs, with the thin disk confined to large radii, $r>r_{\rm out} \sim 10^4$. 
(Note that $r_{out}$ is taken here to be the same as the transition radius, $r_{tr}$, defined in previous papers. However, it need not be if winds only become important well inside the outer boundary of the ADAF.) In this section, we give a detailed description of models of the SXT V404 Cyg in quiescence. The X-ray data on V404 Cyg (Narayan et al. 1997a) are much superior to the data on other SXTs, which makes this system better suited for the parameter study we present. Table 1 lists the various parameter combinations we have tried for modeling V404 Cyg, and some of the characteristics of these models, including the microphysical parameters, the maximum electron temperature, and the radiative efficiency. Following Shahbaz et al. (1994), we have taken the mass of the black hole to be $m = 12$. Outbursts in SXTs are believed to be triggered by a thermal-viscous instability in the thin disk (enhanced mass transfer from the companion may also be important). Initial ADAF models of black hole SXTs in quiescence (Narayan, McClintock, \& Yi 1996) assumed that the observed optical emission from these systems was blackbody emission from a steady state outer thin disk. Wheeler (1996) and Lasota, Narayan, \& Yi (1996) pointed out that this was inconsistent because quiescent thin disks are not likely to be in steady state. Furthermore, in non-steady quiescent disks, the mass accretion rate decreases rapidly with radius (so as to maintain a roughly constant effective temperature $\sim$ a few thousand K; e.g., Cannizzo 1993). This implies a limit on $r_{\rm out}$; if $r_{\rm out}$ is too small, then the disk cannot supply sufficient mass to the inner ADAF to fit the X-ray observations. Quantitatively, the limit is $r_{\rm out} \mathrel{\mathpalette\@versim>} 10^4$ for V404 Cyg (it is slightly smaller for A0620-00). We fix $r_{\rm out} = 10^4$ in all the models presented here, and we take the thin disk to extend from $r = 10^4$ to $10^5$. 
\subsection{Spectral Models of V404 Cyg} Figure 1a shows spectral models of V404 Cyg for different $p$ for our standard microphysics parameters: $\alpha = 0.1$, $\beta = 10$, and $\delta = 0.01$. We see two important effects of changing $p$. First, in the X-ray band, models with weak winds (small $p$) have Compton-dominated X-ray spectra, while models with strong winds (large $p$) are bremsstrahlung dominated. The reason for this has already been explained in \S2. The Compton emission comes from near the black hole, while the bremsstrahlung comes from the outer regions of the ADAF. As the wind becomes stronger, the inner mass accretion rate $\dot m_{in}=\dot m_{out}r_{\rm out}^{-p}$ becomes significantly smaller than $\dot m_{out}$, reducing the importance of Comptonization relative to bremsstrahlung. Associated with this switch is another interesting feature. For weak winds (small values of $p$), we see that $\dot m_{in}$ remains roughly constant when we change $p$ (e.g. $\dot m_{in}= 10^{-3}, 9 \times 10^{-4}$, for $p=0$, 0.2), while for large values of $p$, it is $\dot m_{out}$ that remains roughly constant (e.g. $\dot m_{out}=0.016$, 0.02, for $p=0.4$, 0.6). This is again easy to understand once we realize that the mass accretion rate is adjusted so as to reproduce the X-ray flux. When the model is Compton-dominated, the X-ray flux depends on $\dot m_{in}$, and so this quantity remains roughly the same as $p$ varies. However, when bremsstrahlung dominates, the X-ray flux depends on $\dot m_{out}$ and so it is $\dot m_{out}$ that remains constant. The second effect that is seen in Figure 1a (and even more clearly in Fig. 4a for Sgr A$^*$) is that the synchrotron emission becomes weaker as the wind becomes stronger. Once $p$ is large enough ($\gsim0.2$) for the model to become bremsstrahlung-dominated, $\dot m_{out}$ is more or less frozen at a fixed value. For yet larger $p$, $\dot m_{in}$ decreases rapidly with increasing $p$. 
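The freeze-out of $\dot m_{out}$ can be checked directly from $\dot m_{in}=\dot m_{out}r_{\rm out}^{-p}$; the $\dot m_{out}$ values in the sketch below are the fitted ones quoted above for the bremsstrahlung-dominated models.

```python
# mdot_in = mdot_out * r_out**(-p), with r_out = 1e4; the mdot_out values
# are those quoted in the text for the p = 0.4 and p = 0.6 models.
r_out = 1e4
for p, mdot_out in [(0.4, 0.016), (0.6, 0.02)]:
    print(p, mdot_out * r_out ** (-p))
# While mdot_out barely changes, mdot_in falls by a factor ~5
# (about 4.0e-4 at p = 0.4 versus 8.0e-5 at p = 0.6).
```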
Since the synchrotron emission depends primarily on $\dot m_{in}$, the synchrotron peak drops significantly in magnitude. The decrease in the synchrotron power at large $p$ is actually more dramatic than is apparent in Figure 1a (see Fig. 4a). Most of the optical/infrared flux in the $p=0.4$, 0.6 models in Figure 1a is blackbody emission from the outer disk, which depends only on $\dot m_{out}$, and does not change with $p$ at large $p$. This emission is cool (it is limited by the disk's effective temperature, which is about 5000 K) and the peak occurs at lower frequencies. The above analysis hinges on the change in the flow density with $p$. How is it modified if the microphysical parameters are varied from the canonical values taken above? Figure 1b shows models with a moderate wind, $p = 0.4$, for various values of the parameter $\beta$, which determines the strength of the magnetic field ($\alpha$ and $\delta$ are fixed at their canonical values of 0.1 and 0.01, respectively). Changing $\beta$ has little effect in the X-ray band since bremsstrahlung emission does not depend on the magnetic field strength. Increasing $\beta$ to $\sim 1$ naturally increases the synchrotron flux (eq. [\ref{lsynch}]). Even for $\beta = 1$, however, the synchrotron luminosity is too low by a factor of $\sim 2-3$. Note that, for $\beta = 1$, the optical emission in the models of Figure 1b is primarily synchrotron, while for $\beta \mathrel{\mathpalette\@versim>} 10$ it is primarily disk emission. Figure 2a shows models of V404 Cyg for $p = 0.4$ for several $\alpha$ ($\beta$ and $\delta$ are fixed at their canonical values of 10 and $10^{-2}$). These models show little variation in X-ray behavior with $\alpha$, but there is a decrease in optical emission as $\alpha$ decreases. This can be understood as follows. In the self-similar regime (reasonably valid at large radii), the flow density in an ADAF is $\rho\propto {\dot m}/\alpha$. 
Furthermore, the X-ray flux, which we fix to the observed value, arises from the outer regions of the ADAF via bremsstrahlung. Since the bremsstrahlung luminosity is $\propto \rho^2$, $\dot m_{\rm out}/\alpha$ remains roughly constant as $\alpha$ varies ($\dot m_{\rm out} = 5 \times 10^{-3}, 1.6 \times 10^{-2}$, $2.6 \times 10^{-2}$ for $\alpha = 0.03, 0.1$, $0.3$). All three models therefore have nearly the same density and temperature on the outside, which accounts for the lack of significant change in the X-ray band. For these models, however, the optical is dominated by disk emission, which is proportional to $\dot m_{out}$, rather than $\dot m_{out}/\alpha$. For smaller $\alpha$, $\dot m_{\rm out}$ is smaller and thus the disk emission decreases, as seen in Figure 2a. Finally, Figure 2b shows models of V404 Cyg for $p = 0.4$ for several $\delta$ ($\beta = 10$, $\alpha = 0.1$). For small $\delta\lsim10^{-2}$, the electrons are heated primarily by adiabatic compression (the first term on the right in eq. [\ref{ee}]) and so the results are nearly independent of the value of $\delta$. However, once $\delta\gsim10^{-2}$, turbulent heating ($H_e$) becomes the dominant heating mechanism. In this regime, increasing $\delta$ causes the electrons to become hotter (see Table 1), thereby increasing the synchrotron emission and Comptonization. For sufficiently large $\delta \mathrel{\mathpalette\@versim>} 0.1$, Comptonization dominates bremsstrahlung in the X-ray band, and the spectra begin to resemble the no-wind model shown by the solid line in Figure 1a. The above results are for ADAF models with winds, since that is the primary focus of this paper. For completeness, we have considered the sensitivity of no-wind (or weak wind) models to variations in $\alpha, ~\beta$ and $\delta$. Figure 3a shows models of V404 Cyg with $p = 0$ taking, for brevity, $\alpha \sim \beta^{-1}$, as suggested by numerical simulations of thin accretion disks (Hawley, Gammie, \& Balbus 1996). 
We see that larger values of $\alpha$ (and lower $\beta$) lead to more synchrotron emission. Figure 3b shows models for various $\delta$. Increasing $\delta$ leads to a noticeable increase in the electron temperature. This is seen explicitly in Table 1 and also in the larger ``displacement'' of the Compton peak relative to the synchrotron peak (see eq. [\ref{A}] for the Compton A parameter). Since the synchrotron and Compton emission increase strongly with temperature, the model with the largest $\delta$ has a significantly lower $\dot m$ (Table 1). \subsection{Comparison with Observations} Figures 1-3 show the available observational constraints on the spectrum of V404 Cyg (taken from Narayan et al. 1997a). The optical data give the luminosity of the source and constrain the effective temperature of the radiation to be $\gsim10^4$ K. There is an upper limit on the EUV flux, which is not very interesting since it is easily satisfied by all the models considered here. Thanks to an excellent ASCA observation, the luminosity in the X-ray band is known accurately, and the spectral index is also well constrained; in terms of $\nu L_{\nu} \propto \nu^{2-\Gamma}$, the 2 $\sigma$ error bars on the photon index $\Gamma$ are $2.1^{+0.5}_{-0.3}$. The observations give a few important constraints. First, the $>10^4$ K temperature of the optical argues against the outer thin disk as the source of this radiation (Lasota, Narayan, \& Yi 1996b; see below). Thus, the optical has to come from synchrotron and this emission must be stronger than the disk emission. Second, the observed photon index in X-rays in V404 Cyg is incompatible with the $\Gamma = 1$ expected for thermal bremsstrahlung (\S2). This means that the X-ray emission has to be Compton-dominated. There is preliminary evidence that the same is also true for A0620-00 (Narayan et al. 1996, 1997a), but the ROSAT data on that source (McClintock et al. 
1995) are not sufficiently good to trust this conclusion; on the other hand, for GRO J1655-40, preliminary ASCA data in quiescence indicate a much harder X-ray spectrum than in V404 Cyg and A0620-00 (Hameury et al. 1997). Finally, the data show that the optical emission is about an order of magnitude larger (in $\nu L_\nu$) than the X-ray flux, another constraint that has to be satisfied by models. The baseline no-wind ($p=0$) model of V404 Cyg, with canonical values for the microphysics parameters, is shown by the solid line in Figure 1a. This model fits the observations well, as emphasized by Narayan et al. (1997a). It has roughly the right luminosity and effective temperature in the optical and is consistent with the X-ray data. The model shown here differs somewhat in the X-ray band from that shown in Narayan et al. (1997a). The difference is due to the different energy equation used here, which leads to hotter electrons and more pronounced Compton bumps. The value of $\dot m$ is also lower by a factor of a few. The observed X-ray spectral index in V404 Cyg places interesting constraints on models. For weak winds ($p \sim 0$) the models are in agreement with the observed slope for a wide range of microphysical parameters (Figure 3). For strong winds, however, most of the models are too bremsstrahlung-dominated to fit the X-ray slope. For small $\delta \sim 10^{-2}$, the observed slope rules out $p \mathrel{\mathpalette\@versim>} 0.3$, for any $\alpha$ and $\beta$ (Figures 1-2). For the value of $r_{\rm out} = 10^4$ we have taken, this means that at least $\sim 10 \%$ of the mass supplied from the companion must reach the central object. As discussed by Lasota et al. (1996b), a thin disk cannot account for the observed optical emission in quiescent SXTs. This is because thin disk annuli with effective temperatures comparable to the observed values, $\sim 10^4$ K, are thermally and viscously unstable. 
In fact, within the context of the disk instability model, quiescent disks in black hole SXTs have effective temperatures $\sim 3000-5000$ K (Lasota et al. 1996b), too low to account for the observations. This is an independent argument against high $p$, low $\delta$ ADAF models, since the optical emission in these models is always dominated by the disk (the synchrotron being strongly suppressed by the large $p$). Perhaps the most interesting result to come out of these comparisons is that wind models agree with the data for larger values of the electron heating parameter $\delta$. The $p=0.4$, $\delta=0.3$ model in Figure 2b is as good as the no-wind low-$\delta$ model shown in Figure 1a. The increase in $T_e$ associated with increasing $\delta$ brings the synchrotron emission into rough agreement with the observed optical flux, despite the low value of $\dot m_{in}$;\footnote{The synchrotron peak is a little too cool to fit the data; given the model uncertainties, however, the difference is not large enough to argue against these models.} at the same time, it shifts the balance in the X-ray band from bremsstrahlung to Comptonization, as required by observations. \section{The Galactic Center} Observations of the Galactic Center indicate that the mass of the black hole in Sgr A* is $m \sim (2.5 \pm 0.4) \times 10^6$ (Haller et al. 1996; Eckart \& Genzel 1997; Genzel et al. 1997). The accretion rate is estimated to lie in the range $10^{-4} \mathrel{\mathpalette\@versim<} \dot m_{\rm out} \mathrel{\mathpalette\@versim<} {\rm few}\,\times 10^{-3}$ (Genzel et al. 1994; Melia 1992), with the upper end of the range considered more likely (Coker \& Melia 1997). For a radiative efficiency of 10\%, and assuming that $\dot m$ is constant in the accretion flow, the implied luminosity is between $\sim 10^{40}$ erg s$^{-1}$ and $\sim 10^{42}$ erg s$^{-1}$. 
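The luminosity estimate above is a one-line calculation: with $\dot M_{edd} = 10L_{edd}/c^2$ (\S2.2), a 10\% efficiency gives $L = \dot m L_{edd}$. A sketch, using the standard value of the Eddington luminosity per solar mass:

```python
# L = 0.1 * mdot * Mdot_edd * c**2 = mdot * L_edd, since Mdot_edd = 10 L_edd / c**2.
L_edd_per_msun = 1.3e38   # erg/s, standard Eddington luminosity per solar mass
m = 2.5e6                 # Sgr A* black hole mass in solar units

def L_implied(mdot):
    """Luminosity (erg/s) implied by a 10% radiative efficiency at rate mdot."""
    return mdot * L_edd_per_msun * m

print(L_implied(1e-4))   # lower end of the mdot range: ~3e40 erg/s
print(L_implied(3e-3))   # upper end ("few" x 1e-3): ~1e42 erg/s
```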
This is well above the bolometric luminosity of $\mathrel{\mathpalette\@versim<} 10^{37}$ erg s$^{-1}$ inferred from observations in the radio to $\gamma$-rays (see Narayan et al. 1998a for a review of the observations). An optically thin, two-temperature ADAF model is a possible explanation for the low luminosity of Sgr A* (Rees 1982; Narayan, Yi, \& Mahadevan 1995; Manmoto et al. 1997; Narayan et al. 1998a; Mahadevan 1998). An alternative explanation is that most of the gas supplied at large radii is lost to a wind and very little reaches the central black hole (BB98). We consider both possibilities in this section. There is little observational evidence in Sgr A$^*$ for (or against) a particular value of $r_{\rm out}$. In addition, there is little evidence that the accretion outside $r_{\rm out}$ occurs via a thin disk. In our models, we set $r_{\rm out} = 10^4$ and assume that, whatever form the plasma takes at larger radii, it is non-radiating. \subsection{Spectral Models of Sgr A$^*$} The parameters of each of our models of Sgr A* are given in Table 2. Figure 4a shows spectral models of Sgr A* for various $p$, taking $\alpha = 0.1$, $\beta = 10$, and $\delta = 0.01$. As usual, the value of $\dot m_{out}$ in each model has been adjusted to fit the X-ray flux (even though the ROSAT measurement used in the fits is really only an upper limit; cf. Narayan et al. 1998a). The results in Figure 4a are similar to those shown in Figure 1a for V404 Cyg, but the effects are somewhat more pronounced. At $\sim 1$ keV, the baseline no-wind ($p=0$) model in Figure 4a corresponds to an interesting situation: there are roughly equal contributions from the second Compton bump and bremsstrahlung. Recall that increasing $p$ always shifts the balance in favor of bremsstrahlung. Therefore, once $p$ is increased above zero, the Compton flux decreases, and the spectrum becomes bremsstrahlung-dominated in the X-ray band. 
This switch is already evident at $p=0.2$, and it becomes more pronounced for larger $p$. The three bremsstrahlung-dominated models with $p=0.2$, 0.4 and 0.6 all have nearly the same value of $\dot m_{out} \approx 2 \times 10^{-4}$, while there is a modest change in $\dot m_{out}$ between $p=0$ and 0.2 (see Table 2). Another effect seen very clearly in Figure 4a is the decrease in the synchrotron emission in the radio with increasing $p$. This is due to a decrease in both the magnetic field strength and $T_e$ (\S2). Note, in particular, that $T_e$ decreases by a factor of $\approx 5$ from $p = 0$ to $p = 0.6$ (Table 2). The dependence of wind models of Sgr A* on the microphysical parameters is very similar to that of V404 Cyg. The one exception is that all of the $p \mathrel{\mathpalette\@versim>} 0.2$ models in Figures 4-6 are bremsstrahlung-dominated in X-rays; we practically never see Comptonized power in the X-ray band. This is simply because the source of soft photons -- the synchrotron peak -- is at substantially lower frequencies in Sgr A* compared to the SXTs (recall that $\nu_s \propto m^{-1/2}$; \S2.2). Figures 4b, 5a and 5b show models of Sgr A* for $p = 0.4$ and different values of $\beta$, $\alpha$ and $\delta$. We reach two conclusions from these calculations. First, no combination of $\alpha$ and $\beta$ alone is sufficient to bring the synchrotron emission of wind models back to the level seen in the baseline no-wind model (Figures 4b \& 5a). Second, just as in V404 Cyg, increasing $\delta$ has a very strong effect on the synchrotron emission. Indeed, a $p=0.4$, $\delta=0.3$ model has roughly the same synchrotron power as the $p=0$, $\delta=0.01$ no-wind model. The reason is clear --- increasing $\delta$ causes a substantial increase in $T_e$ (Table 2), which compensates for the reduced density and field strength due to the wind. By contrast, neither $\alpha$ nor $\beta$ has a comparable effect.
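The $\nu_s \propto m^{-1/2}$ scaling recalled above makes the difference between the SXTs and Sgr A* easy to quantify; in the rough estimate below, the SXT mass is an illustrative round number, not a fitted value.

```python
import math

# How far the synchrotron peak shifts between a stellar-mass SXT and
# Sgr A*, using the scaling nu_s ∝ m^(-1/2) quoted in the text.
# The SXT mass here is an illustrative round number, not a fitted value.

m_sxt = 10.0     # solar masses, typical SXT black hole (illustrative)
m_sgra = 2.5e6   # solar masses, Sgr A*

ratio = math.sqrt(m_sgra / m_sxt)  # nu_s(SXT) / nu_s(Sgr A*)
print(f"nu_s is lower in Sgr A* by a factor ~{ratio:.0f} "
      f"(~{math.log10(ratio):.1f} decades)")
```

A shift of nearly three decades in the soft-photon supply is why Comptonization loses out to bremsstrahlung so readily in the wind models of Sgr A*.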
As Figure 5a shows, decreasing $\alpha$ decreases the radio emission in Sgr A*. This is because, near the central object, $\rho \propto \dot m$, and is only a weak function of $\alpha$. The density on the outside, however, scales as $\rho \propto \dot m/\alpha$. If we fix the X-ray flux, $\dot m_{\rm out}$ has to decrease as $\alpha$ decreases in order to keep $\rho$ the same on the outside and thereby produce the same level of bremsstrahlung radiation. This causes a decrease in the density in the interior of the flow and thus a decrease in the synchrotron emission in the radio (Figure 5a). Small values of $\alpha$ therefore add to the decrease in synchrotron emission that is associated with a strong wind. Finally, Figure 6 shows models of Sgr A* with no winds ($p = 0$) for several $\alpha \sim \beta^{-1}$ (Fig. 6a) and for several $\delta$ (Fig. 6b). \subsection{Comparison with Observations} Figures 4-6 show the observational data on Sgr A$^*$. The source has been reliably detected only in the radio and mm bands, where there is a good spectrum available (see Narayan et al. 1998a for original references to the data). It has been convincingly demonstrated that there is a break in the radio spectrum at around 50--100 GHz, so that the source apparently has two components, one which produces the emission below the break and the other above (Serabyn et al. 1997, Falcke et al. 1998). The latter component, which cuts off steeply somewhere between $10^{12}$ and $10^{13}$ Hz, has been fitted with the ADAF model (Narayan et al. 1998a). The model does not, however, fit the low frequency radio emission. This emission may be from an outflow (e.g. Falcke 1996), or, as in the model of Mahadevan (1998), may be due to non-thermal electrons (in Mahadevan's model, these, along with positrons, are created by the decay of charged pions created in proton-proton collisions). In the following we consider a model to be satisfactory if it fits the high frequency radio data. 
In the infrared, Menten et al. (1997) obtained a conservative 2.2 micron flux limit of 9 mJy, after accounting for extinction. The source may, however, be variable, since in later epochs Genzel et al. (1997) observed a $K\sim15$ source at the location of Sgr A$^*$; this corresponds to $F_\nu \approx 13$ mJy (Andreas Eckart, private communication). If verified, this would suggest that the infrared flux varies around a mean value of order a few mJy. This is a potentially stringent constraint on theoretical models. We, however, adopt a more conservative approach and treat the IR data as an upper limit. The implications of an IR detection are discussed in \S6. Although we fit our models to the ROSAT X-ray observations of the galactic center, they too should be treated as an upper limit because of ROSAT's poor angular resolution ($\approx 20$'') and the presence of diffuse emission at the Galactic Center. This is again the conservative approach, since a decrease in the X-ray flux would necessitate a decrease in the importance of mass loss; see \S6. Vargas et al. (1998) have recently provided new SIGMA upper limits on hard X-ray emission from the Galactic Center: between $40-75$ keV the luminosity is $\mathrel{\mathpalette\@versim<} 1.4 \times 10^{35}$ ergs s$^{-1}$ while between $75-150$ keV it is $\mathrel{\mathpalette\@versim<} 2.0 \times 10^{35}$ ergs s$^{-1}$. We have converted these to limits in $\nu L_\nu$ by assuming that the spectrum is flat in $L_\nu$, as would be appropriate for a no-wind bremsstrahlung spectrum. The solid line in Figure 4a (and the dotted line in Figure 6) shows our standard, no-wind ($p=0$), model, with $\beta = 10$, $\alpha = 0.1$, and $\delta = 0.01$. Figure 6 shows no-wind models for a number of other microphysics parameters. All of the no-wind models are in reasonable agreement with the data. 
In particular, they explain the mm fluxes fairly well as synchrotron emission, and produce Compton emission in the infrared roughly consistent with the Menten et al. (1997) limit. Relatively lower $\delta$, larger $\beta$, and smaller $\alpha$ are favored if the IR limit is taken to be stringent; if, however, the Genzel et al. (1997) observations are interpreted as a detection, the opposite is true --- larger $\delta$ and/or smaller $\beta$ are favored. In addition, the small $\beta$, large $\delta$ models tend to slightly overproduce the synchrotron emission at $\sim 10^{12}$ Hz. Note that these conclusions are somewhat different from those of Narayan et al. (1998a), who advocated strict equipartition ($\beta = 1$). As discussed in \S2, this is due to our use of a monatomic ideal gas adiabatic index in the electron energy equation. For small $\delta$, the electrons in Sgr A* are nearly adiabatic; since our adiabatic index is larger than that of Narayan et al., the electrons are hotter in our models. This accounts for the increased synchrotron emission and the need for weaker fields (larger $\beta$) for a fixed radio flux. To obtain a radio flux comparable to Narayan et al.'s $\beta = 1$ model, we require $\beta \approx 30$ for $p = 0$, or else $p \approx 0.2$. In fact, our no-wind, low-$\delta$ models of Sgr A* are rather similar to those of Manmoto et al. (1997), who noted that smaller $\alpha$ were favored if the IR limit in Sgr A* is taken to be stringent. This is because our treatment of the electron energy equation is similar to Manmoto et al.'s. They took the electron adiabatic index to be $\gamma_e = 5/3$, which is correct in not including a magnetic contribution.\footnote{It is incorrect, however, in neglecting the change to $\gamma_e \approx 4/3$ for $r \mathrel{\mathpalette\@versim<} 10^2$ when the electrons become semi-relativistic.} What about large $p$, dynamically important winds?
Such winds decrease the density and electron temperature in the interior of the flow, thereby severely suppressing the synchrotron and Compton emission (Figure 4a). Requiring wind models to produce the observed $10^{11}- 10^{12}$ Hz emission imposes the following strong constraints on the parameters. For small $\delta$, we require $p \mathrel{\mathpalette\@versim<} 0.2$ if we allow $\beta \sim 1$, $\alpha \sim 0.3$. If, for theoretical reasons, we were to favor larger $\beta \sim 10-100$, the constraint is even stronger. For the value of $r_{\rm out} = 10^4$ used in our models, this corresponds to at least 15 \% of the mass supplied at large radii reaching the central object. As in V404 Cyg, the strongest degeneracy in the problem is with $\delta$. For $\delta \mathrel{\mathpalette\@versim>} 0.3$, large $p$ models of Sgr A* are in good agreement with the data (Figure 5b). All ADAF models of Sgr A* in the literature have $\dot m_{\rm out} \sim 10^{-4}$. This is at the lower end of the values considered plausible from Bondi capture of stellar winds in the Galactic center, and may be $\sim 10-100$ times smaller than favored values (Coker \& Melia 1997). It is interesting to see that winds do not alter this conclusion (see Table 2). Neither wind nor non-wind models can have $\dot m_{\rm out}$ much greater than $\sim 10^{-4}$ because, if they did, the bremsstrahlung emission would yield an X-ray luminosity well above the observed limits. Since the bremsstrahlung emission at $\sim 1$ keV is from the largest radii in the accretion flow, this conclusion is independent of the strength of winds in the system. In this context, it is important to note that, although $p = 0$, large $\delta$ models produce spectra reasonably consistent with the observations (Fig. 6b), they require small accretion rates, $\dot m_{\rm out} \sim 10^{-5}$ (Table 2). This argues against them as viable models. 
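The mass fractions quoted in this section follow from the wind parameterization $\dot M(r) \propto r^p$; a minimal sketch, taking the inner edge of the flow at $r \sim 1$ (in Schwarzschild units) for simplicity:

```python
# Fraction of the externally supplied mass reaching the black hole,
# assuming mdot(r) = mdot_out * (r/r_out)^p, with the inner edge of
# the flow taken at r ~ 1 in Schwarzschild units (a simplification).

def inner_fraction(p, r_out=1e4, r_in=1.0):
    """mdot(r_in)/mdot_out for wind strength p."""
    return (r_in / r_out) ** p

for p in (0.0, 0.2, 1.0):
    print(f"p = {p:.1f}: fraction accreted ~ {inner_fraction(p):.2g}")
```

For $p = 0.2$ and $r_{\rm out} = 10^4$ this gives $\approx 16\%$, in line with the $\mathrel{\mathpalette\@versim>} 15\%$ quoted above; for $p = 1$ only $\sim 10^{-4}$ of the supplied mass survives to small radii.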
\section{Nuclei of Nearby Ellipticals} D98 recently measured high-frequency radio fluxes from the nuclei of several nearby giant elliptical galaxies. These galactic nuclei are known to be unusually dim in X-rays compared to the accretion rates inferred from Bondi capture (Fabian \& Canizares 1988). Fabian \& Rees (1995) explained the low X-ray luminosities by invoking accretion via ADAFs. D98 found, however, that the predicted radio emission, based on the ADAF model (for $\beta=1$), exceeded their measured fluxes by $2-3$ orders of magnitude. They suggested several explanations for this large discrepancy, including the presence of strong winds or highly subthermal magnetic fields. If, as we are inclined to believe is the case, Sgr A* is simply a scaled version of the systems observed by D98, why is the predicted emission in Sgr A* roughly in accord with the observations while that in the nearby ellipticals is so discrepant? We might expect both theoretical predictions to be wrong, or both to be right, if the same physics operates in each system. We see two potential answers to this question. One possibility lies in the X-ray constraints in Sgr A* versus those in D98's sample. In Eddington units, i.e., scaled with respect to the mass of the black hole, the X-ray detection/upper limit in Sgr A* is $\sim 2.5$ orders of magnitude below the upper limits in D98's sample. This means that we have a significantly stronger constraint on the accretion rate in Sgr A$^*$. If the X-ray luminosities (in Eddington units) of the ellipticals were as low as in Sgr A$^*$, then models similar to those that work for Sgr A$^*$ would work for the ellipticals as well. It would mean, however, that D98's estimate of $\dot m_{\rm out}$ is too large, by a factor $\sim 30$ (see below). The other possibility is that the high-frequency ($>10^{11}$ Hz) radio observations of Sgr A*, which the ADAF model fits reasonably well, do not probe the accretion flow at all.
If the high resolution VLBI observations at 86 GHz represent the true synchrotron emission from the ADAF in Sgr A*, and the higher frequency radio emission is from a completely different source, then typical no-wind (e.g. $p = 0$, $\beta \sim 1$) models would overpredict the synchrotron luminosity by $\sim 3$ orders of magnitude, just as D98 found for the ellipticals. To investigate these issues further, Figure 7 shows a series of models of NGC 4649, which D98 consider to be the most convincing member of their sample. The data are taken from their Table 5. We take $m = 8 \times 10^9$ (slightly higher than D98's $4 \times 10^9$ because we find this mass fits the location of the radio peak better), $r_{\rm out} = 10^4$, and assume a distance of $15.8$ Mpc. All calculations were done with $\alpha = 0.1$ and $\beta = 10$. Table 3 shows the parameters for the models. The solid line in Figure 7a corresponds to our ``standard'' ADAF model: $p = 0$, $\delta = 0.01$, and $\dot m_{\rm out} = 10^{-3}$. The latter value corresponds to the Bondi mass accretion rate estimated by D98. In agreement with D98, we find that, at this accretion rate, the model overpredicts the radio emission by $\sim$ 3 orders of magnitude. To make matters worse, our model is also in violation of the X-ray upper limit, in contrast to D98, whose ``standard'' ADAF model just satisfies the upper limits. The difference is primarily because our electrons are hotter --- D98 used Esin et al's (1998) electron adiabatic index. We have varied $p$ and $\dot m_{\rm out}$ in our models to judge their sensitivity to these parameters.\footnote{Initially, we found important quantitative differences between our models with varying $p$ and $\dot m_{\rm out}$ and the models in the original version of D98's paper on astro-ph. 
We have determined, however, that this was due to the fact that they did not use the electron temperature profile appropriate for the given $\dot m_{\rm out}$ and $p$ (Di Matteo, private communication). In particular, they originally required $p = 1$ and $r_{\rm out} = 300$ to fit the radio flux at $\dot m_{\rm out} = 10^{-3}$, while their new calculations give $p \approx 0.8$ and $r_{\rm out} \approx 80$ (since they take the inner radius of the flow to be $r = 3$, this corresponds to $\approx 7 \%$ of the incoming mass accreted, comparable to our value of $\approx 10 \%$). In addition, at $p = 0$, they originally required $\dot m_{\rm out} = 10^{-6}$ to fit the radio flux, while they now require $\dot m_{\rm out} \approx 10^{-5}$, again in reasonable agreement with our value of $\dot m_{\rm out} \approx 10^{-4.5}$.} The dotted line in Figure 7a is a model with $p = 0.25$ and $\delta = 10^{-2}$. This is roughly the $p$ we need to account for the observed radio flux at low $\delta$ (note that this model is also in agreement with the X-ray upper limit). In Figure 7b we show several models of NGC 4649 for $\dot m_{\rm out} = 10^{-4.5}$. This accretion rate is $\sim 30$ times smaller than the value D98 infer from Bondi capture. The solid line shows a standard no-wind model: $p = 0$ and $\delta = 0.01$. This model is in reasonable agreement with the radio flux. If $T_e$ and $\beta$ are fixed, equation (\ref{lsynch}) shows that the peak synchrotron luminosity scales like $\nu L_\nu \propto \dot m^{3/2}$. Thus, to decrease $\nu L_\nu$ by a factor of $10^3$, as required by the observations, $\dot m_{\rm out}$ must decrease by $\sim 100$. In fact, due to other factors, the required decrease is even less, $\sim 30$. The above argument requires that $T_e(r)$ should be roughly the same for $\dot m_{\rm out} = 10^{-3}$ and for $\dot m_{\rm out} = 10^{-4.5}$. 
This is confirmed by the numerical results shown in Table 3, but it can also be understood simply by noting that in both models the electrons adiabatically compress as the gas flows in. To see this, it is sufficient to estimate the PdV energy gained per unit time by the electrons in a spherical shell of radius $R$ and thickness $dR \sim R$ as they accrete onto the central object (cf eq. [\ref{ee}]), \begin{equation} q_c \approx k T_e v {d n_e \over dR} 4 \pi R^3 \approx {m_e \over m_p} \theta_e(r) \dot M(r) c^2 \approx 10^{43} \left({\theta_e \over 1}\right) \left({\dot m \over 10^{-3}}\right)\left({m \over 8 \times 10^9}\right) {\rm ergs \ s^{-1}}. \label{comp} \end{equation} Our most luminous model (solid line, $\dot m_{\rm out} = 10^{-3}$; Figure 7a) has $\nu L_\nu \approx 10^{41.5} {\rm ergs \ s^{-1}}$ at the synchrotron peak (and a bolometric luminosity of $\approx 10^{42} {\rm ergs \ s^{-1}}$), which is $\ll q_c$. For lower $\dot m_{\rm out}$, the ratio of $\nu_s L_{\nu,s}$ to $q_c$ is even smaller. Therefore, the electrons in all of our low $\delta$ models are nearly adiabatic, and thus $T_e(r)$ is essentially unchanged as $\dot m_{\rm out}$ decreases from $10^{-3}$ to $10^{-4.5}$. The short dashed lines in Figure 7 show $p = 0.25$, $\delta = 0.3$ models for $\dot m_{\rm out} = 10^{-3}$ (Figure 7a) and $\dot m_{\rm out} = 10^{-4.5}$ (Figure 7b). As suggested by the previous results on V404 Cyg and Sgr A$^*$, these models are comparable to the $p = 0$, $\delta = 0.01$ models. In particular, for $\dot m_{\rm out} = 10^{-4.5}$, the wind model gives reasonably good agreement with D98's radio data, while for $\dot m_{\rm out} = 10^{-3}$ it is in disagreement. The results of Figure 7 thus lead to two scenarios for understanding NGC 4649, depending on which value of $\dot m_{\rm out}$ we take, $10^{-4.5}$ or $10^{-3}$. (There is, of course, a range of intermediate scenarios if we take intermediate values of $\dot m_{\rm out}$.) 
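Two of the numerical estimates above are simple to verify: the drop in $\dot m$ implied by the $\nu L_\nu \propto \dot m^{3/2}$ scaling of equation (\ref{lsynch}), and the normalization of the compressive heating rate in equation (\ref{comp}). The Eddington constants in the sketch below are the standard ones, assumed here rather than taken from this paper.

```python
# (1) With nu L_nu ∝ mdot^(3/2), reducing the peak synchrotron
#     luminosity by 10^3 requires mdot to drop by (10^3)^(2/3) = 100.
mdot_drop = 1e3 ** (2.0 / 3.0)
print(f"required drop in mdot: ~{mdot_drop:.0f}x")

# (2) Normalization of q_c ~ (m_e/m_p) * theta_e * Mdot * c^2, using
#     Mdot c^2 = 10 * mdot * L_Edd (10% efficiency convention) and
#     L_Edd ~ 1.3e38 * m erg/s; both are standard assumed constants.
m = 8e9           # black hole mass in solar masses (NGC 4649 model)
mdot = 1e-3       # Eddington-scaled accretion rate
theta_e = 1.0     # dimensionless electron temperature kT_e / m_e c^2
me_over_mp = 1.0 / 1836.0
q_c = me_over_mp * theta_e * 10.0 * mdot * 1.3e38 * m
print(f"q_c ~ {q_c:.1e} erg/s")
```

The second estimate reproduces the $\sim 10^{43}$ erg s$^{-1}$ normalization of equation (\ref{comp}) to within a factor of 2, which is adequate for the order-of-magnitude argument.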
If $\dot m_{\rm out} \approx 10^{-4.5}$, then we require $0 \mathrel{\mathpalette\@versim<} p \mathrel{\mathpalette\@versim<} 0.25$ for $0 \mathrel{\mathpalette\@versim<} \delta \mathrel{\mathpalette\@versim<} 0.3$. As we saw for V404 Cyg and Sgr A$^*$, increasing $p$ requires a corresponding increase in $\delta$, though the precise mapping between the two parameters in the case of NGC 4649 is slightly different. As in V404 Cyg and Sgr A$^*$, strong wind, low $\delta$ models are ruled out as they cannot explain the radio data (dotted line; Figure 7b). If, on the other hand, $\dot m_{\rm out} \approx 10^{-3}$, as proposed by D98, then we require $0.25 \mathrel{\mathpalette\@versim<} p \mathrel{\mathpalette\@versim<} 0.55$ for $0 \mathrel{\mathpalette\@versim<} \delta \mathrel{\mathpalette\@versim<} 0.3$. Low $\delta$, low $p$ is ruled out by the observed radio flux (Figure 7a), which is a different result from that obtained in V404 Cyg and Sgr A$^*$. In addition, the region of $p-\delta$ space available for NGC 4649 at $\dot m_{\rm out} = 10^{-3}$, if applied to our models of V404 Cyg and Sgr A$^*$, is somewhat uncomfortable. For example, at $p \approx 0.25$ and low $\delta$ (which gives an acceptable fit in NGC 4649 if $\dot m_{\rm out} \approx 10^{-3}$), the predicted X-ray spectral index in V404 Cyg is only marginally compatible with the 2 $\sigma$ ASCA measurements (Figure 1a). Similarly, the radio luminosity of Sgr A* for $p = 0.25$ and $\delta = 0.01$ is $1-2$ orders of magnitude below the peak observed luminosity. One might therefore have to abandon the claim that the $10^{11}- 10^{12}$ Hz emission in Sgr A$^*$ is synchrotron emission from the ADAF.
If we believe that Sgr A*, V404 Cyg, and NGC 4649 are simply scaled versions of each other (in $m$ and $\dot m_{\rm out}$, and perhaps somewhat in $p$, $\delta$, $\beta$, $\alpha$, and $r_{\rm out}$), the above considerations are suggestive, if only weakly, of $\dot m_{\rm out} \sim 10^{-4.5}$ rather than $10^{-3}$ in NGC 4649. This conclusion is independent of the importance of winds. \section{Key Future Observations} There are two main conclusions from the previous sections: (i) If the electron heating parameter $\delta$ is small, current observations rule out ADAF models with moderate winds (say $p \mathrel{\mathpalette\@versim>} 0.25$ as an average for V404 Cyg and Sgr A*). (ii) If $\delta$ is allowed to have large values --- given the uncertain role of magnetic reconnection there is no strong theoretical argument against this --- current observations provide no information on the importance of winds in ADAFs; large $\delta$, strong wind models are in as good agreement with the data as low $\delta$, weak wind models. Figure 8 shows the $p/\delta$ degeneracy explicitly for V404 Cyg (Fig. 8a) and Sgr A* (Fig. 8b). We see that the two $p = 0.4, ~\delta \approx 0.3$ models are very similar to the $p = 0, ~\delta = 0.01$ models. Indeed, there is a family of intermediate solutions with values of $p$ and $\delta$ in between these two extremes. Note, however, that the very large $p = 0.8, ~\delta = 0.75$ models shown in Figure 8 differ more noticeably. For such large $p$, the electron temperatures needed to make the X-ray spectrum of V404 Cyg Compton dominated, rather than bremsstrahlung dominated, are so large that the Compton peak moves well into the X-ray band. This is discussed further in the next subsection. In the case of Sgr A*, for $p \sim 1$, $r_{\rm out} \sim 10^4$ and $\dot m_{\rm out} \sim 10^{-4}$, the inner mass accretion rate is $\dot m_{in} \sim 10^{-8}$.
The density in the interior is then so low that the synchrotron emission is no longer highly self-absorbed; this accounts for the substantially broader synchrotron peak. Leaving aside the $p=0.8$ models, we conclude that there is a degeneracy between $p$ and $\delta$ such that any model in the range $0 < p \mathrel{\mathpalette\@versim<} 0.5$ and $0 < \delta \mathrel{\mathpalette\@versim<} 0.4$ is viable, so long as $p$ and $\delta$ are chosen in some proportionate manner. Current observations are insufficient to break this degeneracy and additional observational tests are clearly needed to improve the situation. Below we suggest a number of such tests, some of which will be feasible in the near future (e.g., AXAF; \S6.3-\S6.5), while others will require a more concerted observational effort (e.g., \S6.1 \& \S6.2). \subsection{Position of the Compton Peak} Generically, strong wind models have a lower density and optical depth than weak wind models. If these models are to fit the synchrotron and X-ray flux, and have a Compton-dominated X-ray spectrum (as required in SXTs for instance), they must have a larger $T_e$. The larger $T_e$ is necessary to boost the synchrotron luminosity, and also to reproduce the required $\alpha_c$ (cf eq. [\ref{compt}]), despite the smaller optical depth. A larger $T_e$ implies a larger amplification factor $A$. Thus, the ``distance'' between the synchrotron and Compton peaks in large-$\delta$, wind-dominated systems will generally be larger by a factor of a few than in weak wind systems. This is seen explicitly in Figure 8a. Note in particular the $p = 0.8$, $\delta = 0.75$ model, where $A$ is so large that the synchrotron peak has moved substantially to the left and at the same time the peak of the Compton emission is well into the X-ray band. This is, however, not a unique property of wind models, but rather is characteristic of any model with a very high $T_e$, as can be seen from the $\delta = 0.3$, $p = 0$ model in Figure 3b.
In principle, the Compton A parameter could be measured in SXTs, with observations in the optical and soft X-ray ($\sim 0.1$ keV) bands. The strong Galactic absorption below a keV, however, makes direct detection of the Compton peak problematic, except in very high $\delta$ models. A more encouraging possibility is that the peak's position could be inferred by detection of curvature at $\sim 1$ keV. This should be feasible with AXAF or XMM (\S 6.4). \subsection{Shape of the Bremsstrahlung Spectrum} Detailed measurements of the bremsstrahlung spectrum of an ADAF system can explicitly probe the density profile and outer radius of the accretion flow. This constitutes one of the most direct tests for the presence of winds. As discussed in \S2, at photon energies $\mathrel{\mathpalette\@versim>}$ the minimum electron thermal energy in the flow, which is a function of the outer radius, bremsstrahlung should give rise to a $\nu L_\nu \propto \nu^{1/2 - 2p/\epsilon}$ spectrum (where $\epsilon \approx 1$; see \S2.2). This behavior is clearly seen in Figures 1a (V404 Cyg), 4a (Sgr A*), and, in particular, Figure 8b (Sgr A*), where the bremsstrahlung emission cuts off strongly at $\mathrel{\mathpalette\@versim>} 10$ keV for large $p$. More importantly, the details of the cutoff, e.g., the slope at $\sim 10$ keV, are a strong function of $p$, thus providing the opportunity to study winds directly through their effect on the density profile of the flow. While energies $\mathrel{\mathpalette\@versim>} 10$ keV are somewhat high for observations of quiescent systems with current X-ray detectors, they may still be observationally accessible. Sgr A* is an excellent source in which to apply this technique since a bremsstrahlung-dominated X-ray spectrum is expected above a few keV for a wide range of the microphysical parameters. The SXTs are probably less useful, since the Compton peak is generally more important. 
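The diagnostic power of the cutoff is easy to see by tabulating the high-energy slope $\nu L_\nu \propto \nu^{1/2 - 2p/\epsilon}$ quoted above; the sketch below simply sets $\epsilon = 1$.

```python
# Logarithmic slope of the high-energy bremsstrahlung spectrum,
# nu L_nu ∝ nu^(1/2 - 2p/eps), with eps set to 1 (see Section 2.2).

def brems_slope(p, eps=1.0):
    """d ln(nu L_nu)/d ln(nu) above the minimum electron thermal energy."""
    return 0.5 - 2.0 * p / eps

for p in (0.0, 0.2, 0.4, 0.6):
    print(f"p = {p:.1f}: nu L_nu ∝ nu^({brems_slope(p):+.1f})")
```

The slope changes sign near $p = 0.25$, so even a modest wind converts a rising high-energy spectrum into a falling one; this is what makes the shape of the $\sim 10$ keV cutoff a direct probe of the density profile.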
One potential complication is that, if winds only become important well inside the outer boundary of the flow (for which, at least within the context of BB98's proposal, we see no obvious theoretical reason), the bremsstrahlung emitting region will be unaffected by the wind. Non-detection of a strong $\sim 10$ keV bremsstrahlung cutoff therefore would not rule out winds, although it would place interesting constraints on the radius at which mass loss becomes important. \subsection{Observations of SXTs in Quiescence} Measurements of the X-ray spectral index in SXTs by AXAF and, in particular, XMM (with its larger collecting area) will clearly be important. Bremsstrahlung predicts a photon spectral index of $\Gamma \sim 1$, while Comptonization predicts a less hard spectrum. ASCA observations rule out $\Gamma \sim 1$ in V404 Cyg; therefore, models in which Comptonization dominates are favored (Narayan et al. 1997a). Confirmation of this in additional systems would be of considerable interest. Note that detection of a bremsstrahlung spectrum would all but rule out weak wind ($p \approx 0$) models of quiescent SXTs. As Figure 3 shows, for no combination of $\alpha, \beta$ and $\delta$ can such a spectrum be produced. When Comptonization dominates, the $0.5-10$ keV X-ray band is often the energy range where the first and second Compton bumps meet. Therefore, most models predict significant curvature, i.e., an energy dependent spectral index, within the band. The spectrum would be softer at lower photon energies and harder at higher energies. Detection of curvature of this sign would be an important confirmation of the Compton origin of the X-ray emission. Curvature of the opposite sign will be seen only if $\delta$, and therefore the Compton $A$-parameter, is so large that the Compton peak moves into the middle of the X-ray band (\S6.1). It is equally important to measure the optical/UV spectrum of quiescent SXTs. 
An unambiguous determination that the temperature of the optical radiation is above the maximum temperature $\sim 5000$ K of the outer thin disk (Lasota et al. 1996) would immediately suggest that the optical emission is synchrotron emission from the ADAF. This information would be very useful, since only a subset of models will be able to fit both the observed synchrotron power and the position of the peak. Although significantly tighter observational constraints on the energy-dependent spectral index in X-rays and the synchrotron peak in the visible/UV would narrow the available theoretical parameter space, it will be difficult to infer unique model parameters from these observations. There is simply too much degeneracy in the models. In particular, strong wind, large $\delta$ models of SXTs can readily produce X-ray and optical behavior similar (though not identical) to that of weak wind models (see Figure 8a). The variations introduced by $\alpha$ and $\beta$ only complicate things further (Figure 3). \subsection{Observations of Sgr A*} \subsubsection{X-ray Observations} Observations of Sgr A* may be particularly helpful in discriminating among theoretical models. An AXAF/XMM detection of Sgr A* will help in two ways. First, a significantly higher-resolution detection will determine whether the ROSAT observation was an upper limit. If the true flux is significantly lower, $\dot m_{\rm out}$ will need to decrease in order to fit the reduced X-ray flux; the synchrotron emission in the radio will decrease as well (all other parameters being held fixed). In the case of no winds ($p = 0$), this would argue for stronger magnetic fields and/or larger $\delta$. In wind scenarios, the amount of mass loss in the flow would be further constrained and/or the models would be pushed towards larger $\delta \sim 1$. As Figures 4--5 and Figure 8b show, all of the models of Sgr A* with noticeable winds have a bremsstrahlung dominated X-ray spectrum with $\Gamma \sim 1$.
This is true even for the large $\delta$ models which agree with the radio observations (Fig. 8b). Confirmation of this prediction would be of considerable interest. It would not, however, rule out weak wind scenarios, although strictly $p = 0$ models would be constrained to have small $\alpha$ and $\delta$ and large $\beta$ (see Figure 6).\footnote{Even at $p = 0.2$ (a relatively weak wind in the scheme of things) the X-ray band would be bremsstrahlung dominated for a wide range of microphysical parameters, eliminating these constraints.} On the other hand, an observed spectral index deviating significantly from bremsstrahlung, e.g., $\Gamma \mathrel{\mathpalette\@versim>} 2$, would strongly constrain mass loss via winds, arguing for $p \mathrel{\mathpalette\@versim<} 0.2$ and/or larger $\delta$. \subsubsection{Infrared Observations} Additional constraints on the accretion in Sgr A* may come from infrared observations. Menten et al. (1997) obtained strong upper limits on the 2.2 $\mu m$ flux from Sgr A* at an angular resolution of $0.15''$, while Genzel et al. (1997) reported a possible infrared detection of Sgr A* at a level above the Menten et al. limit, indicating possible variability in the source. As seen in Figures 4-6 \& 8b, the 2.2 micron K band corresponds to the location of the first Compton peak (if present) in ADAF models. If Genzel et al.'s detection is confirmed, and interpreted as the Compton peak, it would argue against significant mass loss due to winds in Sgr A*. As seen in Figures 4, 5, \& 8b, large $p$ models have difficulty, even at large $\delta$, accounting for a Compton luminosity at the level of the Menten et al. (1997) upper limit (only $p \mathrel{\mathpalette\@versim<} 0.2$ with large $\delta \sim 1$ is tenable). This is because the Compton power decreases more rapidly with $p$ than the synchrotron power.
Increasing $\delta$ to bring the synchrotron power back into agreement still leaves the Compton power smaller than in no-wind models (see Fig. 8b). Equally interesting constraints on models will arise if the infrared limits decrease substantially. This would argue for weak magnetic fields and/or small $\delta$ in no wind scenarios (see Figure 6) or for the presence of a reasonable wind. If the IR limits decrease substantially, it may be difficult for theoretical models to reconcile the absence of a Compton peak (the source of IR emission in the models) with the interpretation of the $\sim 10^{12} $ Hz emission as synchrotron emission from the flow. An alternative to the Compton interpretation of the infrared flux is that it is due to synchrotron emission from non-thermal electrons. Mahadevan (1998) has described a specific model of this kind in which the infrared flux is from electron/positron pairs created by the decay of charged pions in the accretion flow. \subsubsection{Gamma-ray Observations} Mahadevan et al. (1997) showed that gamma-ray emission, due to the decay of neutral pions created in proton-proton collisions, may be detectable from an ADAF in Sgr A*. They argued that it may account for the EGRET source 2EG J1746-2852, although current theoretical predictions suggest that the ADAF contribution to these observations may be small (Narayan et al. 1998a, Mahadevan 1998; see Markoff et al. 1997 for an alternative discussion of gamma-ray emission from Sgr A*). Since the gamma-ray luminosity is $\propto \rho^2$, the confirmed detection of gamma-rays from Sgr A$^*$ (i.e., a detection with reasonable angular resolution which shows variability) would directly constrain the density of the plasma and thus the strength of winds. GLAST may provide such observations. A complication in this analysis is the unknown, and almost certainly non-thermal (Mahadevan \& Quataert 1997, Blackman 1998), proton distribution function. 
If there is a significant population of relativistic protons at all radii in the flow, the emission will contain important contributions from large radii (Mahadevan et al. 1997), and will thus be a complicated convolution of the density profile of the flow and the (radially varying) proton distribution function. If, on the other hand, relativistic protons are only present in the interior of the flow, e.g., if the protons are heated by incompressible turbulence (Gruzinov \& Quataert 1998), gamma-ray observations would impose particularly strong constraints on the flow properties near $r \sim 1$. Specifically, they will provide a lower limit on the mass accretion rate near the black hole, which, when combined with the estimate of $\dot m_{\rm out}$ obtained from measurements of the bremsstrahlung emission, would strongly constrain the wind parameter $p$. \subsubsection{Measurements of Radio Brightness Temperatures} VLBI observations of Sgr A* at 43 GHz and 86 GHz indicate brightness temperatures in excess of $10^{10}$ K (Backer et al. 1993, Rogers et al. 1994). This was a problem for Narayan et al.'s (1998a) no wind model since their electron temperature was everywhere below $10^{10}$ K. With the revised electron adiabatic index used in this paper (\S2), we find $T_e \gtrsim 10^{10}$ K for $r \lesssim 10$, so this is less of a concern. Small $\delta$, large $p$ models give $T_e \lesssim 5 \times 10^9$ K at all radii in the flow (see Table 2), incompatible with the observations. This is because, as noted in \S2.3, large $p$ implies less compression and hence lower $T_e$. These low values of $T_e$ are an independent argument against high $p$, low $\delta$ models. For $\delta \gtrsim 0.1$, turbulent heating of the electrons is sufficiently strong that $T_e$ is again $\gtrsim 10^{10}$ K, compatible with the observations.
For a fixed radio flux, large $p$ models must have higher electron temperatures to compensate for the lower flow densities; the difference is, however, only a factor of a few. Nonetheless, a careful comparison of theoretical and observed temperature profiles may help constrain the models. If brightness temperatures $\gtrsim 3 \times 10^{10}$ K are observed, they would argue for larger $\delta$, in either wind or non-wind scenarios, or for non-thermal electrons. \subsection{Observations of the Nuclei of Nearby Ellipticals} As is clear from Figure 7, one way to resolve the degeneracy between $\dot m_{\rm out} \approx 10^{-3}$ and $\dot m_{\rm out} \approx 10^{-4.5}$ models of NGC 4649, and the associated $p$/$\delta$ degeneracy, is through better X-ray limits in this and other nearby ellipticals. The $\dot m_{\rm out} \approx 10^{-3}$ scenario predicts an X-ray flux comparable to the current limits, so AXAF or XMM should be able to detect these systems. On the other hand, the $\dot m_{\rm out} \approx 10^{-4.5}$ scenario predicts X-ray fluxes $\sim 3$ orders of magnitude beneath current limits. D98's observations in the radio and mm bands with the VLA and SCUBA demonstrate the usefulness of such data for testing ADAF models. They allow both the synchrotron power and the position of the peak to be measured. Additional observations of low luminosity nuclei, including LINERs (Lasota et al. 1996a), will further constrain theoretical models. \section{Summary} The goal of this paper has been to explore spectral models of ADAFs with winds/mass loss, by applying the models to the soft X-ray transient V404 Cyg, the Galactic Center source Sgr A*, and the nucleus of the nearby elliptical galaxy NGC 4649. The first two of these systems are explained fairly well by the ADAF model without any winds.
However, recent theoretical arguments (BB98; see also Narayan \& Yi 1994, 1995), as well as observations of NGC 4649 (D98), suggest that mass loss via winds may be important (even dynamically crucial) in sub-Eddington, radiatively inefficient accretion flows. We have therefore investigated under what conditions ADAF models with mass loss might account for the observations, and the extent to which observations can distinguish between the various proposals for the physics of the accretion flow. A fundamental assumption of our analysis is that, in spite of the possibility of substantial mass loss, the observed radiation is due only to the matter that accretes onto the central object, with no contribution from the outflow. In assessing the importance of mass loss, considerable care must be taken in how one treats the microphysics of the accretion flow; this is parameterized by $\alpha$, the viscosity parameter; $\delta$, the fraction of the turbulent energy which heats the electrons; and $\beta$, the ratio of the gas to the magnetic pressure. As we have shown in this paper, mass loss from the accretion flow has a dramatic effect on theoretically predicted spectra. If we were to restrict ourselves to the microphysics parameters considered in previous treatments of ADAFs ($\alpha \sim 0.3$, $\beta \sim 1$, and $\delta \sim 10^{-3}$), significant mass loss would be incompatible with observations. Such a restriction would not, however, correctly reflect the uncertainty in the microphysics of the flow. We have therefore varied $\alpha$, $\beta$ and $\delta$ over a large range ($\alpha \in [0.03, 0.3]$, $\beta \in [1, 100]$, and $\delta \in [0, 1]$), which we believe generously encompasses the theoretical uncertainties. Despite this large parameter space, firm and interesting constraints on theoretical models can still be drawn.
Spectral models of ADAFs without mass loss provide a reasonable description of the observations of a number of low luminosity black hole systems. In fact, it is this success which has led to the recent wide interest in ADAFs. From our study of spectral models that include mass loss/winds, we reach two principal conclusions. First, if the turbulent heating of the electrons is weak ($\delta \lesssim 0.01$), then winds must also be relatively weak ($p \lesssim 0.25$); specifically, the mass accreted by the central black holes in V404 Cyg and Sgr A$^*$ must be at least $\sim 10\%$ of the mass supplied at large radii (for any $\alpha$ and $\beta$). Second, for larger values of $\delta$, current observations do not readily assess the importance of winds in ADAFs. Strong wind ($p=0.4$), large $\delta$ ($\sim0.3$) models of V404 Cyg and Sgr A* are in as good agreement with the data as weak wind ($p\sim0$), low $\delta$ ($\lesssim 0.01$) models; indeed, there is a family of acceptable models in which $p$ and $\delta$ are assigned intermediate values in a proportionate manner. In \S6, we have proposed a number of observational tests which should help to resolve the $p/\delta$ degeneracy. Radio, infrared, optical/UV, X-ray and gamma-ray observations of quiescent soft X-ray transients, the Galactic Center, and the nuclei of other galaxies provide powerful and complementary information on the various emission processes in the accretion flow and can potentially pin down the importance of winds in ADAFs. D98's observations of radio emission in nearby ellipticals such as NGC 4649 seem to suggest that mass loss may be important; confirmation of this interpretation of their observations, however, requires substantially better X-ray observations than are presently available. Current X-ray upper limits and radio observations alone are compatible with weak (or no) mass loss at accretion rates $\sim 30$ times smaller than the typical Bondi value inferred by D98 (\S5).
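The quantitative link between $p$ and the accreted mass fraction quoted above follows directly from the assumed accretion rate profile, $\dot m \propto r^p$ inside $r_{\rm out} = 10^4$ (this is just the relation $\dot M_{\rm in} \equiv \dot M_{\rm out}\,10^{-4p}$ used in Tables 1-3):

```latex
\frac{\dot M_{\rm in}}{\dot M_{\rm out}} \; = \; r_{\rm out}^{-p} \; = \; 10^{-4p} ,
\qquad \mbox{so} \qquad
p = 0.25 \;\Rightarrow\; 10\% , \quad
p = 0.4 \;\Rightarrow\; 2.5\% , \quad
p = 0.8 \;\Rightarrow\; 0.06\%
```

of the mass supplied at $r_{\rm out}$ reaches the black hole; the low $\delta$ constraint $p \lesssim 0.25$ is thus equivalent to requiring an accreted fraction of at least $\sim 10\%$.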
X-ray detection of these systems near current limits would favor high accretion rates and substantial mass loss while noticeably stronger upper limits would favor low accretion rates and weaker mass loss (\S6.5; Figure 7). In this context, it is interesting to note that Sgr A* itself may have a discrepancy between the accretion rate favored by ADAF models and that favored by hydrodynamical simulations of Bondi capture. All non-wind ADAF models in the literature have $\dot m_{\rm out} \sim 10^{-4}$, while Bondi capture estimates are often $10-100$ times larger (e.g., Coker \& Melia 1997). Winds do not alleviate this discrepancy (see Table 2); neither strong wind nor non-wind ADAF models of Sgr A* can have $\dot m_{\rm out}$ much greater than $\sim 10^{-4}$ because, if they did, the bremsstrahlung emission would yield an X-ray luminosity well above the observed limits. \subsection{Radiative Efficiency} Previous discussions of ADAFs in the literature have emphasized the low radiative efficiency of these accretion flows and the connection between the low efficiency and the presence of an event horizon in the central object (Narayan et al. 1997ab, Narayan et al. 1998a, Menou, Quataert \& Narayan 1998). How is this modified when there is a wind? Tables 1-3 give the radiative efficiencies of our models, defined both with respect to the accretion rate at the outer edge of the flow ($\eta_o \equiv L/\dot M_{\rm out} c^2$) and the accretion rate at the horizon ($\eta_i \equiv L/\dot M_{\rm in} c^2$). The former is perhaps observationally accessible, while the latter is of more physical interest from the point of view of inferring the presence of a horizon. In the absence of winds, $\eta_o = \eta_i \equiv \eta$. Qualitatively, a viable spectral model with strong winds weakens the argument for the presence of an event horizon. 
This is because, in the presence of a strong wind, the central object accretes less mass, but if the microphysical parameters can be adjusted so that the accretion flow produces the same luminosity, then $\eta_i$ can be noticeably larger. What remains to be seen is, quantitatively, how $\eta_i$ varies with $p$ and the microphysical parameters. The models of most interest are those in Figure 8, the sequence of roughly similar $p/\delta$ models of V404 Cyg and Sgr A$^*$. The standard no wind models of V404 Cyg and Sgr A$^*$ have $\eta\approx10^{-3}$ and $2.6 \times 10^{-5}$, respectively, which makes both sources highly inefficient radiators. As we increase $p$, and correspondingly increase $\delta$ to keep a roughly similar luminosity, $\eta_i$ increases. For the intermediate values of $p = 0.4$ and $\delta \sim 0.3$, $\eta_i$ has increased by nearly an order of magnitude for both systems. The radiative efficiency is, however, still $\ll 1$, particularly for Sgr A*. For the larger values of $p = 0.8$ and $\delta = 0.75$, $\eta_i \approx 0.1$ for V404 Cyg and $\eta_i \approx 0.01$ for Sgr A*. This $p = 0.8$ model of V404 Cyg, however, appears to be incompatible with the observations. The synchrotron peak in the optical has shifted to much lower frequencies in this model and we believe that this discrepancy is serious and hard to overcome. Nevertheless, if we are willing to allow models with this level of deviation from the data (there are, after all, uncertainties in our modeling techniques in this extremal range of parameter space), then it means that there are viable ADAF models of V404 Cyg in which the luminosity of the accretion flow is comparable to the rest mass energy accreted by the central object. This weakens the argument for an event horizon in this source.
In the case of Sgr A$^*$, the $p = 0.8$, $\delta = 0.75$ model is in moderate agreement with the data (given some allowance for modeling uncertainties), but it is still a relatively inefficient radiator ($\eta_i \approx 0.01$). The difference between Sgr A* and V404 Cyg in this context is, of course, that $\dot m_{\rm out}$ is quite a bit smaller in Sgr A* (because of the constraints imposed by X-ray observations), so that it naturally has a lower radiative efficiency if all other model parameters are the same. In fact, we find that all of our models of Sgr A* with $\dot m_{\rm out} \sim 10^{-4}$, which fit the observations reasonably well, have $\eta_i \lesssim 0.01$. At small $p$, this is because only small values of $\delta$ are allowed if $\dot m_{\rm out} \sim 10^{-4}$ (see Table 2). At large $p$, the density in the interior of the flow is so low that, even if $\delta \equiv 1$ or the plasma is one temperature, the electron cooling time is $\gtrsim$ the inflow time of the gas. As a result, the accretion flow is always reasonably advection dominated, because the electrons themselves are. Therefore, although the argument for the event horizon in Sgr A* is significantly less dramatic because of the possibility of substantial mass loss ($\eta_i \approx 0.01$ for $p \approx 0.8$ vs. $\eta_i \approx 2.6 \times 10^{-5}$ for $p \approx 0$), it nonetheless appears that all viable ADAF models of Sgr A* have $\eta_i$ substantially smaller than the usual thin disk value of $\eta_i \approx 0.1$. This problem should be investigated in more detail, in particular, with a better treatment of the dynamics of large $p$ models. BB98 noted that the presence of winds would reduce the difference between the quiescent luminosities of black hole and neutron star soft X-ray transients. This is discussed further in Menou et al. (1998).
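The two radiative efficiencies defined above are related through the accreted mass fraction; for the $\dot m \propto r^p$ profile assumed throughout this paper,

```latex
\eta_i \; \equiv \; \frac{L}{\dot M_{\rm in} c^2}
\; = \; \frac{L}{\dot M_{\rm out} c^2} \, \frac{\dot M_{\rm out}}{\dot M_{\rm in}}
\; = \; \eta_o \, 10^{4p} .
```

For example, the $p = 0.8$, $\delta = 0.75$ model of V404 Cyg in Table 1 has $\eta_o \approx 6.3 \times 10^{-5}$; multiplying by $10^{3.2} \approx 1600$ recovers the quoted $\eta_i \approx 0.1$.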
\subsection{Discussion} In all of our models we have assumed that winds lead to an $\dot m \propto r^p$ profile inside $r_{\rm out} = 10^4$. This corresponds to $10^{-4p}$ of the incoming mass being accreted. The spectral models are, however, primarily sensitive to the fraction of the mass accreted, rather than to $p$ and $r_{\rm out}$ separately. If one favors a smaller range of radii over which winds are important, this corresponds roughly to a new $p' = 4p/\log_{10} r_w$ (chosen so that the accreted mass fraction is unchanged), where $r_w$ is the radial extent of the region where winds are important. Note, however, that if this approach is taken, our values of $\delta$ should be regarded only as upper limits, since a model with a larger $p' > p$ will rotate more quickly, and thus have a larger viscous dissipation per unit mass. In particular, then, our conclusion that low $\delta$ models of Sgr A* and V404 Cyg are compatible with the observations only for $p \lesssim 0.25$ need not conflict with theoretical estimates that $p \sim 1$ is needed for the Bernoulli parameter of the accreting gas to be negative. It may simply mean that winds are important only over $\sim 1-1.5$ decades of radius ($r_w \approx 10-30$), instead of 4 decades as we have assumed here. Nonetheless, it is clear that for winds to be both dynamically crucial (large $p$) and take away the majority of the mass (large $r_w$), large values of $\delta$ are required. \noindent{\it Acknowledgments.} We thank Jeff McClintock, Kristen Menou, and Jun-Hui Zhao for useful discussions. Jean-Pierre Lasota provided a number of useful comments on an earlier version of this paper. This work was supported by an NSF Graduate Research Fellowship and by NASA Grant NAG 5-2837. \newpage { \footnotesize \hyphenpenalty=10000 \raggedright \noindent {\large \bf References} \\ \hangindent=20pt \hangafter=1 \noindent Abramowicz, M., Chen, X., Granath, M., \& Lasota, J.-P.
1996, ApJ, 471, 762 \\ \hangindent=20pt \hangafter=1 \noindent Abramowicz, M., Chen, X., Kato, S., Lasota, J.-P., \& Regev, O., 1995, ApJ, 438, L37 \\ \hangindent=20pt \hangafter=1 \noindent Backer, D. C., 1982, in Proc. IAU Symposium, eds. D.S. Heeschen \& C. M. Wade, 97, 389 \\ \hangindent=20pt \hangafter=1 \noindent Blackman, E., 1998, MNRAS in press (astro-ph/9710137) \\ \hangindent=20pt \hangafter=1 \noindent Blandford, R. D. \& Begelman, M. C., 1998, MNRAS submitted (astro-ph/9809083) (BB98) \\ \hangindent=20pt \hangafter=1 \noindent Blandford, R. D. \& Payne, D.G., 1982, MNRAS, 199, 883 \\ \hangindent=20pt \hangafter=1 \noindent Bisnovatyi-Kogan, G. S., \& Lovelace, R. V. E., 1997, ApJ, 486, L43 \\ \hangindent=20pt \hangafter=1 \noindent Cannizzo, J. K., 1993, in ``Accretion Disks in Compact Stellar Systems,'' ed. J. Wheeler (Singapore: World Scientific), p. 6 \\ \hangindent=20pt \hangafter=1 \noindent Chen, X., Abramowicz, M.A., Lasota, J.-P., Narayan, R., \& Yi, I. 1995, ApJ, 443, L61 \\ \hangindent=20pt \hangafter=1 \noindent Chen, X., Abramowicz, M.A., \& Lasota, J.-P., 1997, ApJ, 476, 61 \\ \hangindent=20pt \hangafter=1 \noindent Coker, R. \& Melia, F., 1997, ApJ, 488, L149 \\ \hangindent=20pt \hangafter=1 \noindent Di Matteo, T., Fabian, A. C., Rees, M. J., Carilli, C. L., \& Ivison, R. J., 1998, MNRAS, submitted (astro-ph/9807245) (D98)\\ \hangindent=20pt \hangafter=1 \noindent Eckart, A., \& Genzel, R. 1997, MNRAS, 284, 576 \\ \hangindent=20pt \hangafter=1 \noindent Esin, A. A., McClintock, J. E., \& Narayan, R., 1997, ApJ, 489, 86 \\ \hangindent=20pt \hangafter=1 \noindent Esin, A. A., Narayan, R., Cui, W., Grove, E., \& Zhang, S., 1998, ApJ in press (astro-ph/9711167) \\ \hangindent=20pt \hangafter=1 \noindent Fabian, A. C. \& Canizares, C. R., 1988, Nature, 333, 829 \\ \hangindent=20pt \hangafter=1 \noindent Fabian, A. C. \& Rees, M. J., 1995, MNRAS, 277, L55 \\ \hangindent=20pt \hangafter=1 \noindent Falcke, H., 1996, in IAU Symp.
169, Unsolved Problems of the Milky Way, eds. L. Blitz \& P. J. Teuben (Dordrecht: Kluwer), 163 \\ \hangindent=20pt \hangafter=1 \noindent Falcke, H., Goss, W. M., Matsuo, H., Teuben, P., Zhao, J., \& Zylka, R., 1998, ApJ, 499, 731 \\ \hangindent=20pt \hangafter=1 \noindent Gammie, C.F., Narayan, R., \& Blandford, R., 1998, MNRAS submitted (astro-ph/9808036) \\ \hangindent=20pt \hangafter=1 \noindent Gammie, C.F. \& Popham, R.G., 1998, ApJ, 498, 313 \\ \hangindent=20pt \hangafter=1 \noindent Genzel, R., Eckart, A., Ott, T., \& Eisenhauer, F., 1997, MNRAS, 291, 219 \\ \hangindent=20pt \hangafter=1 \noindent Genzel, R., Hollenbach, D., \& Townes, C. H. 1994, Rep. Prog. Phys., 57, 417 \\ \hangindent=20pt \hangafter=1 \noindent Gruzinov, A. 1998, ApJ, 501, 787 \\ \hangindent=20pt \hangafter=1 \noindent Gruzinov, A. \& Quataert, E., 1998, ApJ submitted (astro-ph/9808278)\\ \hangindent=20pt \hangafter=1 \noindent Haller, J. W., Rieke, M.J., Rieke, G.H., Tamblyn, P., Close, L., \& Melia, F., 1996, ApJ, 456, 194 (Erratum, 1996, ApJ, 468, 955) \\ \hangindent=20pt \hangafter=1 \noindent Hameury, J. M., Lasota, J. P., McClintock, J. E., \& Narayan, R., 1997, ApJ, 489, 234 \\ \hangindent=20pt \hangafter=1 \noindent Hawley, J. F., Gammie, C. F., \& Balbus, S. A., 1996, ApJ, 464, 690 \\ \hangindent=20pt \hangafter=1 \noindent Ichimaru, S. 1977, ApJ, 214, 840 \\ \hangindent=20pt \hangafter=1 \noindent Kato, S., Fukue, J., \& Mineshige, S., 1998, {\em Black-Hole Accretion Disks} (Japan: Kyoto University Press) \\ \hangindent=20pt \hangafter=1 \noindent Lasota, J. P., Abramowicz, M.A., Chen, X., Krolik, J., Narayan, R., \& Yi, I., 1996a, ApJ, 462, 142 \\ \hangindent=20pt \hangafter=1 \noindent Lasota, J.
P., Narayan, R., \& Yi, I., 1996b, A\&A, 314, 813 \\ \hangindent=20pt \hangafter=1 \noindent Mahadevan, R., 1997, ApJ, 477, 585\\ \hangindent=20pt \hangafter=1 \noindent Mahadevan, R., 1998, Nature, 394, 651 \\ \hangindent=20pt \hangafter=1 \noindent Mahadevan, R., Narayan, R., \& Krolik, J. 1997, ApJ, 486, 268 \\ \hangindent=20pt \hangafter=1 \noindent Mahadevan, R. \& Quataert, E. 1997, ApJ, 490, 605 \\ \hangindent=20pt \hangafter=1 \noindent Manmoto, T., Mineshige, S., \& Kusunose, M., 1997, ApJ, 489, 791\\ \hangindent=20pt \hangafter=1 \noindent Markoff, S., Melia, F., \& Sarcevic, I., 1997, ApJ, 489, L47 \\ \hangindent=20pt \hangafter=1 \noindent McClintock, J. E., Horne, K., \& Remillard, R. A., 1995, ApJ, 442, 35 \\ \hangindent=20pt \hangafter=1 \noindent Melia, F. 1992, ApJ, 387, L25 \\ \hangindent=20pt \hangafter=1 \noindent Menou, K., Esin, A. A., Narayan, R., Garcia, M., Lasota, J.-P., \& McClintock, J. E., 1998, in preparation \\ \hangindent=20pt \hangafter=1 \noindent Menou, K., Quataert, E., \& Narayan, R., 1998, in Proc. of the 8th Marcel Grossmann Meeting on General Relativity (astro-ph/9712015) \\ \hangindent=20pt \hangafter=1 \noindent Menten, K. M., Reid, M. J., Eckart, A., \& Genzel, R. 1997, ApJ, 475, L111 \\ \hangindent=20pt \hangafter=1 \noindent Nakamura, K. E., Kusunose, M., Matsumoto, R., \& Kato, S. 1997, PASJ, 49, 503 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R. 1996, ApJ, 462, 136 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., Barret, D., \& McClintock, J. E., 1997a, ApJ, 482, 448\\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., Kato, S., \& Honma, F. 1997, ApJ, 476, 49 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., Mahadevan, R., Grindlay, J.E., Popham, R.G., \& Gammie, C., 1998a, ApJ, 492, 554 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., Mahadevan, R., \& Quataert, E., 1998b, in {\em The Theory of Black Hole Accretion Discs}, eds. M.A. Abramowicz, G. Bjornsson, and J.E.
Pringle (Cambridge: Cambridge University Press) (astro-ph/9803131) \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., McClintock, J.E., \& Yi, I., 1996, ApJ, 457, 821 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., \& Yi, I., 1994, ApJ, 428, L13 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., \& Yi, I., 1995, ApJ, 444, 231 \\ \hangindent=20pt \hangafter=1 \noindent Narayan, R., Yi, I., \& Mahadevan, R., 1995, Nature, 374, 623 \\ \hangindent=20pt \hangafter=1 \noindent Peitz, J. \& Appl, S. 1997, MNRAS, 286, 681 \\ \hangindent=20pt \hangafter=1 \noindent Quataert, E. 1998, ApJ, 500, 978 \\ \hangindent=20pt \hangafter=1 \noindent Quataert, E. \& Gruzinov, A., 1998, ApJ submitted (astro-ph/9803112) \\ \hangindent=20pt \hangafter=1 \noindent Quataert, E. \& Narayan, R., 1998, ApJ submitted (astro-ph/9810117)\\ \hangindent=20pt \hangafter=1 \noindent Rees, M. J., 1982, in {\em The Galactic Center}, eds. G. R. Riegler \& R. D. Blandford (New York: AIP), p. 166 \\ \hangindent=20pt \hangafter=1 \noindent Rees, M. J., Begelman, M. C., Blandford, R. D., \& Phinney, E. S., 1982, Nature, 295, 17 \\ \hangindent=20pt \hangafter=1 \noindent Rogers, A. E. E., et al., 1994, ApJ, 434, L59 \\ \hangindent=20pt \hangafter=1 \noindent Serabyn, E., Carlstrom, J., Lay, O., Lis, D.C., Hunter, T.R., \& Lacy, J.H., 1997, ApJ, 490, L77 \\ \hangindent=20pt \hangafter=1 \noindent Shahbaz, T., Ringwald, F.A., Bunn, J.C., Naylor, T., Charles, P.A., \& Casares, J., 1994, MNRAS, 271, L10 \\ \hangindent=20pt \hangafter=1 \noindent Vargas, M. et al., 1996, in {\em The Galactic Center}, ed. R. Gredel, 431 \\ \hangindent=20pt \hangafter=1 \noindent Wheeler, J. C., 1996, in {\em Relativistic Astrophysics}, eds. B. Jones \& D.
Markovic (Cambridge: Cambridge University press) \\ } \newpage \begin{deluxetable}{lcccccccc} \tablecaption{Model Parameters for V404 Cyg: $r_{\rm out} = 10^4$, $m = 12$, $\dot M_{\rm in} \equiv \dot M_{\rm out} 10^{-4 p}$} \tablewidth{0pt} \tablehead{ \colhead{Fig.} & \colhead{$\alpha$} & \colhead{$\beta$} & \colhead{$\delta$} & \colhead{$p$} & \colhead{$\dot m_{\rm out} \times 10^2$} & \colhead{$T_{\rm e, max} \times 10^{-10}$} & \colhead{$L/{\dot M_{\rm out}} c^2$} & \colhead{$L/{\dot M_{\rm in}} c^2$}} \startdata 1a & 0.1 & 10 & 0.01 & 0 & 0.1 & 1.0 & $1.1 \times 10^{-3}$ & $1.1 \times 10^{-3}$ \nl 1a & 0.1 & 10 & 0.01 & 0.2 & 0.54 & 0.92 & $1.6 \times 10^{-4}$ & $1.0 \times 10^{-3}$ \nl 1a & 0.1 & 10 & 0.01 & 0.4 & 1.6 & 0.87 & $2.2 \times 10^{-5}$ & $8.8 \times 10^{-4}$ \nl 1a & 0.1 & 10 & 0.01 & 0.6 & 2.0 & 0.6 & $7.5 \times 10^{-6}$ & $2.0 \times 10^{-3}$ \nl \nl 1b & 0.1 & 1 & 0.01 & 0.4 & 1.2 & 0.57 & $6.0 \times 10^{-5}$ & $2.4 \times 10^{-3}$ \nl 1b & 0.1 & 10 & 0.01 & 0.4 & 1.6 & 0.87 & $2.2 \times 10^{-5}$ & $8.8 \times 10^{-4} $ \nl 1b & 0.1 & 100 & 0.01 & 0.4 & 1.7 & 1.0 & $1.2 \times 10^{-5}$ & $4.8 \times 10^{-4} $\nl \nl 2a & 0.3 & 10 & 0.01 & 0.4 & 2.6 & 0.89 & $1.9 \times 10^{-5}$ & $7.6 \times 10^{-4}$ \nl 2a & 0.1 & 10 & 0.01 & 0.4 & 1.6 & 0.87 & $2.2 \times 10^{-5}$ & $8.8 \times 10^{-4}$ \nl 2a & 0.03 & 10 & 0.01 & 0.4 & 0.59 & 0.77 & $5.5 \times 10^{-5}$ & $2.2 \times 10^{-3}$ \nl \nl 2b & 0.1 & 10 & $10^{-3}$ & 0.4 & 1.6 & 0.87 & $2.2 \times 10^{-5}$ & $8.8 \times 10^{-4}$ \nl 2b & 0.1 & 10 & 0.01 & 0.4 & 1.6 & 0.87 & $2.2 \times 10^{-5}$ & $ 8.8 \times 10^{-4}$ \nl 2b & 0.1 & 10 & 0.03 & 0.4 & 1.2 & 0.94 & $3.6 \times 10^{-5}$ & $ 1.4 \times 10^{-3}$ \nl 2b & 0.1 & 10 & 0.1 & 0.4 & 1.0 & 1.15 & $6.2 \times 10^{-5}$ & $ 2.5 \times 10^{-3}$ \nl 2b & 0.1 & 10 & 0.3 & 0.4 & 0.58 & 1.8 & $1.7 \times 10^{-4}$ & $6.8 \times 10^{-3}$ \nl \nl 3a & 0.3 & 1 & 0.01 & 0 & 0.1 & 0.74 & $1.8 \times 10^{-3}$ & $1.8 \times 10^{-3}$ \nl 3a & 0.1 & 10 & 
0.01 & 0 & 0.1 & 1.0 & $1.1 \times 10^{-3}$ & $1.1 \times 10^{-3}$\nl 3a & 0.03 & 30 & 0.01 & 0 & 0.068 & 1.1 & $1.1 \times 10^{-3}$ & $1.1 \times 10^{-3}$ \nl \nl 3b & 0.1 & 10 & 0.01 & 0 & 0.1 & 1.0 & $1.1 \times 10^{-3}$ & $1.1 \times 10^{-3}$ \nl 3b & 0.1 & 10 & 0.1 & 0 & 0.05 & 1.3 & $2.5 \times 10^{-3}$ & $2.5 \times 10^{-3}$ \nl 3b & 0.1 & 10 & 0.3 & 0 & 0.01 & 2.4 & $6.4 \times 10^{-3}$ & $6.4 \times 10^{-3}$ \nl \nl 8a & 0.1 & 10 & 0.01 & 0 & 0.1 & 1.0 & $1.1 \times 10^{-3}$ & $1.1 \times 10^{-3}$ \nl 8a & 0.1 & 10 & 0.3 & 0.4 & 0.58 & 1.8 & $1.7 \times 10^{-4}$ & $6.8 \times 10^{-3}$ \nl 8a & 0.1 & 10 & 0.75 & 0.8 & 0.64 & 4.8 & $6.3 \times 10^{-5}$ & $0.1$ \nl \enddata \label{tab-v} \end{deluxetable} \newpage \begin{deluxetable}{lcccccccc} \tablecaption{Model Parameters for Sgr A*: $r_{\rm out} = 10^4$, $m = 2.5 \times 10^6$, $\dot M_{\rm in} \equiv \dot M_{\rm out} 10^{-4 p}$} \tablewidth{0pt} \tablehead{ \colhead{Fig.} & \colhead{$\alpha$} & \colhead{$\beta$} & \colhead{$\delta$} & \colhead{$p$} & \colhead{$\dot m_{\rm out} \times 10^4$} & \colhead{$T_{\rm e, max} \times 10^{-10}$} & \colhead{$L/{\dot M_{\rm out}} c^2$} & \colhead{$L/{\dot M_{\rm in}} c^2$}} \startdata 4a & 0.1 & 10 & 0.01 & 0 & 0.68 & 2.0 & $2.6 \times 10^{-5}$ & $2.6 \times 10^{-5}$\nl 4a & 0.1 & 10 & 0.01 & 0.2 & 1.8 & 1.1 & $5.5 \times 10^{-7}$ & $3.5 \times 10^{-6}$\nl 4a & 0.1 & 10 & 0.01 & 0.4 & 2.4 & 0.63 & $1.2 \times 10^{-7}$ &$4.8 \times 10^{-6}$\nl 4a & 0.1 & 10 & 0.01 & 0.6 & 2.8 & 0.37 & $8.1 \times 10^{-8}$ & $2.0 \times 10^{-5}$ \nl \nl 4b & 0.1 & 1 & 0.01 & 0.4 & 1.9 & 0.47 & $1.2 \times 10^{-7}$ & $4.8 \times 10^{-6}$ \nl 4b & 0.1 & 10 & 0.01 & 0.4 & 2.4 & 0.63 & $1.2 \times 10^{-7}$ & $4.8 \times 10^{-6}$ \nl 4b & 0.1 & 100 & 0.01 & 0.4 & 2.4 & 0.65 & $1.2 \times 10^{-7}$ & $4.8 \times 10^{-6}$ \nl \nl 5a & 0.3 & 10 & 0.01 & 0.4 & 4.0 & 0.72 & $ 7.6 \times 10^{-8}$ & $3.1\times 10^{-6}$\nl 5a & 0.1 & 10 & 0.01 & 0.4 & 2.4 & 0.63 & $1.2 \times 10^{-7}$ &$4.8\times 
10^{-6}$\nl 5a & 0.03 & 10 & 0.01 & 0.4 & 1.0 & 0.54 & $2.8 \times 10^{-7}$ &$1.1 \times 10^{-5}$\nl \nl 5b & 0.1 & 10 & 0.01 & 0.4 & 2.4 & 0.63 & $1.2 \times 10^{-7}$ &$4.8\times 10^{-6}$\nl 5b & 0.1 & 10 & 0.1 & 0.4 & 1.7 & 1.6 & $ 3.1 \times 10^{-7}$ &$1.2\times 10^{-5}$\nl 5b & 0.1 & 10 & 0.3 & 0.4 & 1.7 & 3.5 & $3.1 \times 10^{-6}$ &$1.2\times 10^{-4}$\nl \nl 6a & 0.3 & 1 & 0.01 & 0 & 0.69 & 1.6 & $7.6 \times 10^{-5}$ &$7.6\times 10^{-5}$\nl 6a & 0.1 & 10 & 0.01 & 0 & 0.68 & 2.0 & $2.6 \times 10^{-5}$ &$2.6\times 10^{-5}$\nl 6a & 0.03 & 30 & 0.01 & 0 & 0.48 & 1.6 & $7.7 \times 10^{-6}$ &$7.7\times 10^{-6}$\nl \nl 6b & 0.1 & 10 & 0.01 & 0 & 0.68 & 2.0 & $2.6 \times 10^{-5}$ &$2.6\times 10^{-5}$\nl 6b & 0.1 & 10 & 0.1 & 0 & 0.54 & 2.3 & $2.6 \times 10^{-5}$ &$2.6\times 10^{-5}$\nl 6b & 0.1 & 10 & 0.3 & 0 & 0.11 & 5 & $1.0 \times 10^{-4}$ &$1.0\times 10^{-4}$\nl \nl 8b & 0.1 & 10 & 0.01 & 0 & 0.68 & 2.0 & $2.6 \times 10^{-5}$ &$2.6\times 10^{-5}$\nl 8b & 0.1 & 10 & 0.4 & 0.4 & 1.6 & 4.4 & $6.9 \times 10^{-6}$ &$2.7\times 10^{-4}$\nl 8b & 0.1 & 10 & 0.75 & 0.8 & 1.2 & 13.0 & $3.7 \times 10^{-6}$ &$5.9\times 10^{-3}$\nl \enddata \label{tab-s} \end{deluxetable} \newpage \begin{deluxetable}{lcccccccc} \tablecaption{Model Parameters for NGC 4649: $r_{\rm out} = 10^4$, $m = 8 \times 10^9$, $\dot M_{\rm in} \equiv \dot M_{\rm out} 10^{-4 p}$} \tablewidth{0pt} \tablehead{ \colhead{Fig.} & \colhead{$\alpha$} & \colhead{$\beta$} & \colhead{$\delta$} & \colhead{$p$} & \colhead{$\dot m_{\rm out}$} & \colhead{$T_{\rm e, max} \times 10^{-10}$} & \colhead{$L/{\dot M_{\rm out}} c^2$} & \colhead{$L/{\dot M_{\rm in}} c^2$}} \startdata 7a & 0.1 & 10 & 0.01 & 0 & $10^{-3}$ & 1.9 & $3.1 \times 10^{-4}$&$3.1\times 10^{-4}$ \nl 7a & 0.1 & 10 & 0.3 & 0.25 & $10^{-3}$ & 3.4 & $3.6 \times 10^{-5}$&$3.6\times 10^{-4}$ \nl 7a & 0.1 & 10 & 0.01 & 0.25 & $10^{-3}$ & 1.0 & $2.1 \times 10^{-6}$&$2.1\times 10^{-5}$ \nl 7a & 0.1 & 10 & 0.3 & 0.54 & $10^{-3}$ & 2.4 & $6.6 \times 
10^{-7}$&$9.5\times 10^{-5}$ \nl \nl 7b & 0.1 & 10 & 0.01 & 0 & $10^{-4.5}$ & 1.9 & $2.6 \times 10^{-6}$&$2.6\times 10^{-6}$ \nl 7b & 0.1 & 10 & 0.3 & 0.25 &$10^{-4.5}$ & 3.5 & $9.1 \times 10^{-7}$&$9.1\times 10^{-6}$ \nl 7b & 0.1 & 10 & 0.01 & 0.25 & $10^{-4.5}$ & 1.0 & $5.7 \times 10^{-8}$&$5.7\times 10^{-7}$ \nl \enddata \label{tab-ngc} \end{deluxetable} \newpage \vskip 5in \newpage \begin{figure} \plottwo{fig1a.ps}{fig1b.ps} \caption{(a) Spectral models of V404 Cyg for several values of $p$, taking $\alpha = 0.1$, $\beta = 10$, and $\delta = 0.01$. (b) Models for several $\beta$, taking $\alpha = 0.1$, $p = 0.4$, and $\delta = 0.01$.} \end{figure} \begin{figure} \plottwo{fig2a.ps}{fig2b.ps} \caption{(a) Spectral models of V404 Cyg for several values of $\alpha$, taking $p = 0.4$, $\beta = 10$, and $\delta = 0.01$. (b) Models for several $\delta$, taking $\alpha = 0.1$, $p = 0.4$, $\beta = 10$.} \end{figure} \newpage \begin{figure} \plottwo{fig3a.ps}{fig3b.ps} \caption{(a) No wind ($p = 0$) spectral models of V404 Cyg for several values of $\alpha$ and $\beta$, taking $\delta = 0.01$. (b) Models for several values of $\delta$, taking $p = 0$, $\alpha = 0.1$, and $\beta = 10$.} \end{figure} \begin{figure} \plottwo{fig4a.ps}{fig4b.ps} \caption{(a) Spectral models of Sgr A* for several values of $p$, taking $\alpha = 0.1$, $\beta = 10$, and $\delta = 0.01$. (b) Models for several $\beta$, taking $\alpha = 0.1$, $p = 0.4$, and $\delta = 0.01$.} \end{figure} \newpage \begin{figure} \plottwo{fig5a.ps}{fig5b.ps} \caption{(a) Spectral models of Sgr A* for several values of $\alpha$, taking $p = 0.4$, $\beta = 10$, and $\delta = 0.01$. (b) Models for several $\delta$, taking $\alpha = 0.1$, $p = 0.4$, $\beta = 10$.} \end{figure} \begin{figure} \plottwo{fig6a.ps}{fig6b.ps} \caption{(a) Spectral models of Sgr A* for several values of $\alpha$ and $\beta$, taking $p = 0$, $\delta = 0.01$. 
(b) Models for several $\delta$, taking $\alpha = 0.1$, $\beta = 10$, and $p = 0$.} \end{figure} \newpage \begin{figure} \plottwo{fig7a.ps}{fig7b.ps} \caption{Spectral models of NGC 4649 for several values of $p$ and $\delta$, taking $\alpha = 0.1$, $\beta = 10$, $r_{\rm out} = 10^4$, and $m = 8 \times 10^9$. Panel (a) assumes that the accretion rate at the outer edge of the flow is $\dot m_{\rm out} = 10^{-3}$, while panel (b) takes $\dot m_{\rm out} = 10^{-4.5}$.} \end{figure} \begin{figure} \plottwo{fig8a.ps}{fig8b.ps} \caption{Spectral models of V404 Cyg (Fig. 8a) and Sgr A* (Fig. 8b) for several values of $p$ and $\delta$, taking $\alpha = 0.1$, $\beta = 10$, and $r_{\rm out} = 10^4$.} \end{figure} \end{document} We feel that it is important to work with global flow dynamics rather than self-similar dynamics. Both give similar results at large radii (Narayan, Kato \& Honma 1997, Chen, Abramowicz \& Lasota 1997), but they differ significantly near the black hole; the global solutions make a sonic transition near the black hole and fall in supersonically, whereas the self-similar solutions remain subsonic throughout. Because of this, the two solutions differ in their predicted densities near the black hole. Since most of the radiation originates from close to the black hole, it is important to treat this region of the flow accurately. The difference between the global and self-similar models is especially important for small values of the viscosity parameter $\alpha$. It is also important when considering the effect of changing the value of $\alpha$. Two models with different $\alpha$ but the same value of $\dot M/\alpha$ have similar densities, temperatures and emission characteristics in the outer self-similar region of the flow. They would, however, differ markedly in the inner regions. It is worth explicitly reiterating the effects of winds on spectral models, and the resulting degeneracy with microphysics parameters. 
Bremsstrahlung emission at $\sim 1-10$ keV is dominated by large radii in the flow and is sensitive primarily to the mass accretion rate on the outside $\dot m_{\rm out}$; it is rather insensitive to the wind parameter $p$ (defined in eq. [\ref{cont}]) which describes how $\dot m$ varies with $r$. By contrast, synchrotron and Compton emission originate from the interior of the flow, and, for a given $\dot m_{\rm out}$, decrease significantly with increasing $p$ (increasing strength of the wind). The magnetic field strength and the optical depth in the interior of the flow decrease because of the decreased gas density and pressure at large $p$. In addition, electrons in low luminosity ADAF models are often nearly adiabatic. Since winds decrease the density contrast between the interior and exterior of the flow, the electron temperature $T_e$ is lower in wind models than in non-wind models (other parameters being equal). Self-absorbed synchrotron emission being highly temperature sensitive ($\propto T^7_e$; see eq. [\ref{lsynch}]), the decrease in $T_e$ with increasing $p$ causes a substantial decrease in the synchrotron emission. The extreme temperature sensitivity of self-absorbed synchrotron emission is the reason why strong wind models with strong turbulent heating of electrons (large values of $\delta$) are semi-quantitatively similar to weak wind, low $\delta$ models. The increased $T_e$ associated with the large $\delta$ readily compensates for the reduced synchrotron emission associated with the strong wind. By contrast, for low values of $\delta$, no combination of $\alpha$ and $\beta$ in strong wind models is able to ``mimic'' weak wind models. In particular, low values of $\alpha$ do not bring strong wind models into better agreement with weak wind models; they actually make the discrepancy worse (\S4, Figure 5a). 
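To see the leverage of the $T_e^7$ scaling concretely, consider a toy estimate (a sketch with illustrative numbers, not taken from the tabulated models): if a wind reduces the interior density by an assumed factor of 10, a total synchrotron suppression of three orders of magnitude requires only a modest drop in $T_e$.

```python
import math

# Toy arithmetic: self-absorbed synchrotron emission scales as T_e^7
# (eq. [lsynch] in the text).  Assume the wind supplies a factor-of-10
# density suppression; the rest must come from the temperature factor.
total_suppression = 1.0e3                      # ~3 orders of magnitude
density_factor = 10.0                          # assumed interior mdot reduction
temperature_factor = total_suppression / density_factor

# Temperature ratio needed to supply the remaining factor of 100:
te_ratio = temperature_factor ** (1.0 / 7.0)   # less than a factor of 2
```

A drop in $T_e$ of less than a factor of 2 thus accounts for two of the three orders of magnitude, which is why wind models are so sensitive to the electron energetics.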
D98 found that they needed $p \sim 1$ for $r_{\rm out} \sim 100$, which corresponds to $p \approx 0.5$ at $r_{\rm out} = 10^4$, a noticeably larger value. For our $p$, $\dot m$ in the interior is ``only'' a factor of $\sim 10$ smaller than $\dot m_{\rm out}$, whereas D98's $\dot m_{\rm in}$ is 100 times smaller than $\dot m_{\rm out}$. The reason that a factor of 10 decrease in $\dot m_{\rm in}$ is able to decrease the synchrotron luminosity by $\sim 3$ orders of magnitude is that the electron temperature also decreases as $p$ increases (see Table 3 and the discussion in \S2). By equation (\ref{lsynch}), the decrease in $T_e$ has a significant effect on the synchrotron luminosity. By contrast, D98 found that they needed to reduce $\dot m_{\rm out}$ to $\sim 10^{-6}$ to explain the radio flux with a $\beta \sim 1$, $p = 0$ model. This discrepancy is puzzling.
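The conversion between D98's wind parameter and ours follows from $\dot m \propto r^p$; assuming (for this sketch only) that the suppression operates from $r_{\rm out}$ all the way down to $r \sim 1$, requiring the same net reduction of $\dot m$ fixes the equivalent $p$:

```python
import math

# mdot(r) = mdot_out * (r / r_out)**p, so the net suppression between r_out
# and an inner radius r_in is (r_in / r_out)**p.  Taking r_in ~ 1 is an
# assumption of this sketch, not a statement from the text.
def suppression(p, r_out, r_in=1.0):
    return (r_in / r_out) ** p

d98 = suppression(1.0, 100.0)                    # p = 1 model at r_out = 100
# equivalent p giving the same suppression at r_out = 10^4:
p_equiv = math.log(d98) / math.log(1.0 / 1.0e4)
```

This reproduces the correspondence quoted above: $p = 1$ at $r_{\rm out} \sim 100$ maps to $p \approx 0.5$ at $r_{\rm out} = 10^4$.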
\section{Introduction} Graphene, an atomic layer of graphite that supports massless Dirac fermions, displays remarkable and promising electronic properties. Recently there is increasing interest in bilayers and few layers of graphene, where the physics and applications of graphene become richer, with, e.g., a tunable band gap~\cite{MF,OBSHR,Mc,CNMPL} for bilayer graphene. There are some key signatures of Dirac fermions that distinguish graphene from conventional electron systems. (i) In a magnetic field, graphene supports, as the lowest Landau level (LLL), a special set of four zero-energy levels differing in spin and valley, as observed via the half-integer quantum Hall effect. (ii) Graphene is an intrinsically many-body system equipped with the valence band acting as the Dirac sea. Quantum fluctuations of the filled valence band are fierce, even leading to ultraviolet divergences; and one encounters such many-body phenomena as velocity renormalization,~\cite{velrenorm} screening of charge,~\cite{KSbgr} and nontrivial Coulombic corrections to cyclotron resonance.~\cite{JHT,IWFB,BMgr,KCC,KScr} In multilayer graphene the zero-mode Landau levels acquire a new aspect. Bilayer graphene supports eight such levels, with an extra twofold degeneracy~\cite{MF} in Landau orbitals $n$=0 and 1. Trilayer graphene has 12 such levels with threefold $\lq\lq$orbital" degeneracy, and so on. This orbital degeneracy is a new feature peculiar to the LLL in multilayer graphene, and leads to intriguing quantum phenomena~\cite{BCNM,KSpzm,BCLM,CLBM,CLPBM,CFL} such as orbital mixing and orbital-pseudospin waves. In real samples these zero-energy levels evolve, due to general interactions, into a variety of pseudo-zero-mode (PZM) levels, or broken-symmetry states within the LLL, as discussed theoretically.~\cite{BCNM, NL} It has been unnoticed until recently that many-body effects work to lift orbital degeneracy. 
Each zero-mode level, subjected to quantum fluctuations of the valence band, gets shifted differently within the LLL, just like the Lamb shift~\cite{Lambshift} in the hydrogen atom. This orbital Lamb shift was first noted~\cite{KSLs} for bilayer graphene and is also realized in an analogous fashion~\cite{KSLsTL} in rhombohedral ($ABC$-stacked) trilayer graphene, which is a $\lq\lq$chiral" trilayer generalization of bilayer graphene. This orbital shift is considerably larger in scale than intrinsic spin or valley breaking, and one has to take it into account in clarifying the fine structure of the LLL in multilayers. Graphene trilayers attracted theorists' attention~\cite{GCP,KA,NCGP, KM79} even before experiments, and it has been verified experimentally~\cite{BZZL,KEPF,TWTJ,LLMC,BJVL,ZZCK,JCST} that the electronic properties of graphene trilayers strongly depend on the stacking order, with $ABC$-stacked trilayers exhibiting a tunable band gap and Bernal $(ABA)$-stacked trilayers, the most common type of trilayers, remaining metallic. Currently trilayers are under active study both experimentally~\cite{HNE,LVTZ} and theoretically.~\cite{KMc81,KMc83,ZSMM,YRK,ZTM} The purpose of this paper is to examine the orbital Lamb shift and its consequences in $ABA$-stacked trilayers, with focus on electron-hole and valley asymmetries due to general band parameters. It turns out that $ABA$ trilayers critically differ in zero-mode characteristics from $ABC$ trilayers. In particular, the way the Coulomb interaction acts within the LLL substantially differs between the two types of trilayers, leading to distinct basic filling-factor steps in which large level gaps appear in each of them. In addition, the LLL of $ABA$ trilayers, unlike that of $ABC$ trilayers, is greatly afflicted with interaction-enhanced electron-hole and valley asymmetries, which affect the sequence of broken-symmetry states within the LLL, observable via the quantum Hall effect. 
In Sec.~II we examine the one-body spectrum of the PZM levels in $ABA$-trilayer graphene, and in Sec.~III show how the orbital Lamb shift modifies their full spectrum. In Sec.~IV we discuss how the level spectra evolve, with filling, via the Coulomb interaction. Section~V is devoted to a summary and discussion on how $ABA$ trilayers differ in zero-mode characteristics from $ABC$ trilayers. \section{$ABA$-stacked trilayer graphene} $ABA$-stacked trilayer graphene consists of three graphene layers with vertically-arranged dimer bonds $(B_{1}, A_{2})$ and $(A_{2}, B_{3})$, where $(A_{i}, B_{i})$ denote inequivalent lattice sites in the $i$-th layer. The intralayer coupling $\gamma_{0}\equiv \gamma_{B_{i}A_{i}} \sim 3$~eV is related to the Fermi velocity $v = (\sqrt{3}/2) a_{L}\gamma_{0}/\hbar \sim 10^6$ m/s in monolayer graphene. Interlayer hopping via the nearest-neighbor dimer coupling $\gamma_{1}\equiv \gamma_{B_{1}A_{2}} =\gamma_{A_{2}B_{3}} \sim $ 0.4~eV leads to linear (monolayer-like) and quadratic (bilayer-like) spectra~\cite{GCP,KA} $\propto |{\bf p}|, {\bf p}^2$ in the low-energy branches $|\epsilon|< \gamma_{1}$. The effective Hamiltonian for $ABA$-stacked trilayer graphene with general intra- and interlayer couplings is written as~\cite{KMc83} \begin{eqnarray} H^{\rm tri} &=&\!\! \int\!
d^{2}{\bf x}\, \Big[ (\Psi^{K})^{\dag}\, {\cal H}_{K} \Psi^{K} + (\Psi^{K'})^{\dag}\, {\cal H}_{K'}\, \Psi^{K'}\Big], \nonumber\\ {\cal H}_{K} &=& \left( \begin{array}{lll} D & V & W \\ V^{\dag} & D & V^{\dag}\\ W & V & D \\ \end{array} \right) + U,\ D = \left( \begin{array}{cc} 0 & v\, p^{\dag} \\ v\, p & 0 \\ \end{array} \right), \nonumber\\ V &=& \left( \begin{array}{cc} -v_{4}\, p^{\dag}& v_{3}\, p \\ \gamma_{1} & -v_{4}\, p^{\dag} \\ \end{array} \right),\ W= \left( \begin{array}{cc} \gamma_{2}/2& 0 \\ 0 &\gamma_{5}/2\\ \end{array} \right),\ \ \ \nonumber\\ U&=& {\rm diag}(U_{1}, U_{1} + \Delta', U_{2}+ \Delta', U_{2} , U_{3}, U_{3} + \Delta'), \label{Htri} \end{eqnarray} with $p= p_{x} + i p_{y}$ and $p^{\dag}= p_{x} - i p_{y}$. Here $\Psi^{K}=(\psi_{A_{1}}, \psi_{B_{1}}, \psi_{A_{2}},\psi_{B_{2}}, \psi_{A_{3}}, \psi_{B_{3}})^{\rm t}$ stands for the electron field at the $K$ valley. $v_{3}$ and $v_{4}$ are related to the nonleading nearest-layer coupling $\gamma_{3} \equiv \gamma_{A_{1} B_{2}}$ and $\gamma_{4} \equiv \gamma_{A_{1} A_{2}}= \gamma_{B_{1} B_{2}}$, respectively. $\gamma_{2} \equiv \gamma_{A_{1}A_{3}}$ and $\gamma_{5} \equiv \gamma_{B_{1}B_{3}}$ describe coupling between the top and bottom layers. $(U_{1}, U_{2}, U_{3})$ denote the on-site energies of the three layers; we take $U_{2}=0$ without loss of generality and focus on the case of a symmetric bias~\cite{GCP} $U_{3}= -U_{1} \equiv u$. Such an interlayer bias leads to a tunable band gap for $ABC$-stacked trilayers, but not for $ABA$-stacked trilayers which involve monolayer-like subbands. $\Delta'$ stands for the energy difference between the dimer and non-dimer sites. ${\cal H}_{K}$ is diagonal in (suppressed) electron spin. The Hamiltonian ${\cal H}_{K'}$ at another valley is given by ${\cal H}_{K}$ with $p \rightarrow - p_{x} + i p_{y} = -p^{\dag}$ and $p^{\dag}\rightarrow - p$, and acts on a spinor of the same sublattice content as $\Psi^{K}$. 
${\cal H}_{K'}$ is not linked to ${\cal H}_{K}$ in a simple way and, in a magnetic field, their Landau-level spectra significantly differ,~\cite{KMc81} especially for zero-mode levels, although they precisely but nontrivially~\cite{KMc81} agree when only the leading parameters $(v, \gamma_{1})$ are kept. This is in sharp contrast to the case of bilayers and $ABC$-stacked trilayers, for which ${\cal H}_{K'}$ is linked to ${\cal H}_{K}$ via unitary equivalence,~\cite{KSLs,KSLsTL} such as ${\cal H}_{K'}^{ABC} \sim {\cal H}_{K}^{ABC}|_{-v_{3}, -\gamma_{2}; U_{1} \leftrightarrow U_{3} }$. For the trilayer hopping parameters one may use, as typical values, those for graphite,~\cite{KMc83} \begin{eqnarray} &&\gamma_{0}\approx 3.16\, {\rm eV\ or} \ v\approx 1.0 \times 10^6 {\rm m/s}, \nonumber\\ &&\gamma_{1}\approx 0.4\, {\rm eV}, \gamma_{3}\approx 0.3\, {\rm eV}, \gamma_{4}\approx 0.04\, {\rm eV}, \nonumber\\ &&\gamma_{2}\approx -0.02\, {\rm eV}, \gamma_{5}\approx 0.04\, {\rm eV}, \Delta' \approx 0.05\, {\rm eV}. \label{Triparameter} \end{eqnarray} In the present analysis we regard $(v, \gamma_{1})$ as the basic parameters and treat the nonleading ones $(\gamma_{2}, \gamma_{5}, \gamma_{4}, \cdots)$ and bias $u$ as perturbations. We ignore $v_{3}\propto \gamma_{3}$ from the start since its effect is negligible in high magnetic fields, as discussed later in this section. Let us place trilayer graphene in a strong uniform magnetic field $B_{z} = B>0$ normal to the sample plane; we set, in ${\cal H}_{K}$, $p\rightarrow \Pi = p + eA$ with $A= A_{x}+ iA_{y}= -B\, y$, and define $a \equiv \Pi^{\dag}/\sqrt{2eB}$ so that $[a,a^{\dag}]=1$.
It is easily seen that the eigenmodes of ${\cal H}_{K}$ have the structure \begin{eqnarray} \Psi_{n} &=& \Big( |n -2 \rangle\, b_{n}^{(1)} ,|n-1 \rangle\, d_{n}^{(1)} , |n -1 \rangle\, b_{n}^{(2)} , \nonumber\\ && |n \rangle\, d_{n}^{(2)} , |n -2 \rangle\, b_{n}^{(3)} , |n-1 \rangle\, d_{n}^{(3)} \Big)^{\rm t} \label{Psi_n} \end{eqnarray} with $n=0,1,2,\cdots$, where only the orbital eigenmodes are shown using the standard harmonic-oscillator basis $\{ |n\rangle \}$ (with the understanding that $|n\rangle =0$ for $n<0$). The coefficients ${\bf v}_{n}=(b_{n}^{(1)}, d_{n}^{(1)}, b_{n}^{(2)}, d_{n}^{(2)}, b_{n}^{(3)}, d_{n}^{(3)} )^{\rm t}$ are given by the eigenvectors (chosen to form an orthonormal basis) of the reduced Hamiltonian $\hat{\cal H}_{\rm red} \equiv \omega_{c} {\cal H}_{n}$ with \begin{equation} {\cal H}_{n} \!=\! \! \left( \!\!\! \begin{array}{cccccc} -\hat{u} & r_{n-1} &\! - \lambda r_{n-1}&0 &R_{2}&0 \\ r_{n-1} &\! \delta -\hat{u} &\hat{\gamma} & - \lambda r_{n} &0 &R_{5} \\ -\lambda r_{n-1} & \hat{\gamma} & \delta& r_{n} &\! - \lambda r_{n-1} & \hat{\gamma} \\ 0&-\lambda r_{n} & r_{n} & 0 & 0 & -\lambda r_{n} \\ R_{2}& 0 &\! - \lambda r_{n-1} & 0 & \hat{u} & r_{n-1} \\ 0& R_{5} & \hat{\gamma} & - \lambda r_{n} & r_{n-1} & \delta + \hat{u} \\ \end{array} \! \! \right), \nonumber\\ \label{Hn} \end{equation} where $r_{n}\equiv \sqrt{n}$ for short; $ \hat{u} \equiv u/\omega_{c}$, $\hat{\gamma} \equiv \gamma_{1}/\omega_{c}$. $\lambda \equiv \gamma_{4}/\gamma_{0}\, (\approx 0.013)$, $R_{2} \equiv (\gamma_{2}/2)/\omega_{c}$, $R_{5} \equiv (\gamma_{5}/2)/\omega_{c}$ and $\delta\equiv \Delta'/\omega_{c}$. Here \begin{equation} \omega_{c}\equiv \sqrt{2}\, v/\ell \approx 36.3 \times v[10^{6}{\rm m/s}]\, \sqrt{B[{\rm T}]}\ {\rm meV} \end{equation} stands for the characteristic cyclotron energy for monolayer graphene, with $v$ in units of $10^{6}$m/s and $B$ in teslas; $\ell \equiv 1/\sqrt{eB}$ denotes the magnetic length. 
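As a quick consistency check on Eq.~(\ref{Hn}) (a numerical sketch, not part of the original analysis), note that with only $(v,\gamma_{1})$ kept, the outer-layer antisymmetric vector $(1,1,0,0,-1,-1)^{\rm t}$ is an exact eigenvector of ${\cal H}_{n}$ with eigenvalue $\sqrt{n-1}$ in units of $\omega_{c}$, i.e., the monolayer-like branch; the value of $\omega_{c}$ at $B=20$~T also comes out near $162$~meV.

```python
import math

def omega_c(B, v=1.0):
    """Monolayer cyclotron energy in meV (v in 10^6 m/s, B in teslas)."""
    return 36.3 * v * math.sqrt(B)

def H_n(n, ghat, u=0.0, lam=0.0, R2=0.0, R5=0.0, delta=0.0):
    """Reduced 6x6 Hamiltonian of Eq. (Hn), in units of omega_c."""
    r = lambda m: math.sqrt(m) if m > 0 else 0.0
    rn, rm = r(n), r(n - 1)
    return [
        [-u,        rm,        -lam * rm, 0.0,       R2,        0.0      ],
        [rm,        delta - u, ghat,      -lam * rn, 0.0,       R5       ],
        [-lam * rm, ghat,      delta,     rn,        -lam * rm, ghat     ],
        [0.0,       -lam * rn, rn,        0.0,       0.0,       -lam * rn],
        [R2,        0.0,       -lam * rm, 0.0,       u,         rm       ],
        [0.0,       R5,        ghat,      -lam * rn, rm,        delta + u],
    ]

wc = omega_c(20.0)                 # ~162 meV at B = 20 T
n = 2
H = H_n(n, ghat=400.0 / wc)        # gamma_1 = 0.4 eV = 400 meV
v_mono = [1.0, 1.0, 0.0, 0.0, -1.0, -1.0]   # antisymmetric in the outer layers
Hv = [sum(H[i][j] * v_mono[j] for j in range(6)) for i in range(6)]
eps = math.sqrt(n - 1)             # monolayer-like eigenvalue, units of omega_c
```

The middle-layer components of $H v$ cancel pairwise between the two $\hat{\gamma}$ entries, so the eigenvalue is exact for any $n \ge 2$, not merely approximate.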
Note that eigenvectors ${\bf v}_{n}$ can be taken real since ${\cal H}_{n}$ is a real symmetric matrix. Solving the secular equation shows that there are 6 branches of Landau levels for each integer $n\ge 2$, with two branches of monolayer-like spectra $\epsilon \sim \pm \sqrt{n-1}\, \omega_{c}$ and four branches of bilayer-like spectra. We denote the eigenvalues as $\epsilon_{-n''} < \epsilon_{-n'} < \epsilon_{-n} < 0 < \epsilon_{n}<\epsilon_{n'} < \epsilon_{n''}$, so that the index $\pm n$ reflects the sign of $\epsilon_{n}$. The $|n|=2$ levels, e.g., consist of the $n=(\pm 2, \pm 2',\pm 2'')$ branches. As verified easily, with only $(v,\gamma_{1})$ and bias $u$ kept, the spectrum and eigenvectors of $\hat{\cal H}_{\rm red}$ have the property \begin{equation} \epsilon_{- n} = - \epsilon_{n}|_{-u}, b_{- n}^{(i)} = - b_{n}^{(i)}|_{-u}, d_{- n}^{(i)} = d_{n}^{(i)}|_{-u}, \label{vn} \end{equation} for $|n|\ge 2$ [and each branch $(n,n',n'')$], where $b_{n}^{(i)}|_{-u}$, e.g., stands for $b_{n}^{(i)}$ with $u\rightarrow -u$. This structure~\cite{fnone} is also seen from the fact that $-{\cal H}_{K}$ is unitarily equivalent to ${\cal H}_{K}$ with the signs of $(U_{i}, v_{4}, \gamma_{2},\gamma_{5}, \Delta')$ reversed, \begin{equation} \Sigma_{3}^{\dag}{\cal H}_{K}\Sigma_{3} = - {\cal H}_{K}|_{-U_{i}, -v_{4}, -\gamma_{2}, -\gamma_{5},-\Delta'} , \label{Hequi} \end{equation} where $\Sigma_{3} = {\rm diag} (\sigma_{3}, \sigma_{3}, \sigma_{3})$; thus Eq.~(\ref{vn}) is generalized to the full spectrum accordingly. There are three zero-energy solutions (per spin) within the $n\in (0,1)$ sector for $u\rightarrow 0$. As seen from Eq.~(\ref{Psi_n}), for $n=0$, $\hat{\cal H}_{\rm red}$ is reduced to a matrix of rank 1, with an obvious eigenvalue \begin{equation} \epsilon_{0}=U_{2}= 0 \label{zmK} \end{equation} and the eigenvector ${\bf v}_{0} = (0,0,0,1,0,0)^{\rm t}$ or \begin{equation} \Psi_{0} = ( 0,0,0, |0\rangle, 0, 0)^{\rm t}.
\label{psizero} \end{equation} For $n=1$, $\hat{\cal H}_{\rm red}$ has rank 4, and we specify the four eigenmodes as $n=1_{\pm}$ and $n=\pm 1'$, with energy spectra $\epsilon_{1_{\pm}} =\pm \sigma\, u$ and $\epsilon_{\pm 1'} = \pm (1/\sigma)\, \omega_{c} \sim \pm \sqrt{2}\, \gamma_{1}$ when only $(v, \gamma_{1}, u)$ are kept, where \begin{equation} \sigma \approx 1/\sqrt{2\, \hat{\gamma}^2 +1} \ <1; \end{equation} $\hat{\gamma} \approx 2.4$ and $\sigma \approx 0.28$ at $B=20\, $T with $\gamma_{1} \approx 0.4\, $eV. For $u\rightarrow +0$, in particular, the $n=1_{\pm}$ modes have zero energy with wave functions \begin{eqnarray} \Psi_{1_{+}}^{(0)} \!\! &\stackrel{u\rightarrow 0}{=}& \! \Big( 0, \alpha^{-} |0\rangle, 0, c_{1}\, |1\rangle, 0, - \alpha^{+} |0\rangle \Big)^{\rm t}, \nonumber\\ \Psi_{1_{-}}^{(0)} \!\! &\stackrel{u\rightarrow 0}{=}& \! \Big( 0, - \alpha^{+} |0\rangle, 0, c_{1}\, |1\rangle, 0, \alpha^{-} |0\rangle \Big)^{\rm t},\ \ \ \end{eqnarray} where $\alpha^{\pm} \equiv (1\pm \sigma)/2\sim 1/2$ and $c_{1} \equiv \sqrt{2\, \alpha^{+}\alpha^{-}} = \hat{\gamma}/\sqrt{2 \hat{\gamma}^2 +1} \sim 1/\sqrt{2}$. When bias $u$ and nonleading parameters $(\gamma_{2}, \gamma_{5}, \cdots)$ are turned on, the zero-modes $\Psi_{0}$ and $\Psi_{1_{\pm}}^{(0)}$ in general deviate from zero energy and become the pseudo-zero-modes. Their spectra, to first order in such perturbations, can also be determined using this $u\rightarrow0$ zero-mode basis $\Psi^{\rm pz} = (\Psi_{0}, \Psi_{1_{+}}^{(0)}, \Psi_{1_{-}}^{(0)})^{\rm t}$. 
Writing $H^{\rm tri}$ in the $3\times 3$ matrix form ${\cal H}^{\rm pz}_{ij} \sim (\Psi^{\rm pz})^{\dag}_{i} {\cal H}_{K} (\Psi^{\rm pz})_{j}$ yields the spectrum of the pseudo-zero-mode (PZM) sector, \begin{eqnarray} {\cal H}^{\rm pz} &=&\{0\} \oplus {\cal H}_{1}, \nonumber\\ {\cal H}_{1}&=& \sigma u\, \sigma_{3} + \beta_{0}\, 1 + \beta\, \sigma_{1}, \end{eqnarray} where \begin{eqnarray} \beta &=& \textstyle{1\over{2}} (1-c_{1}^2)\, \gamma_{5} - c_{1}^2\, \Delta' + 2\,\sigma\, c_{1} \lambda\,\omega_{c} , \nonumber\\ \beta_{0} &=& - \textstyle{1\over{2}} c_{1}^2\, \gamma_{5} + (1-c_{1}^2)\, \Delta' + 2\, \sigma\, c_{1} \lambda\,\omega_{c} . \end{eqnarray} This PZM spectrum ${\cal H}^{\rm pz}$, in the framework of degenerate perturbation theory, is correct to order linear in $(u,\gamma_{5}, \lambda, \Delta')$, which is sufficient for our present purpose. Diagonalizing ${\cal H}_{1}$ by a rotation within the $\{1_{\pm} \}$ sector, \begin{eqnarray} \Psi_{1_{+}} &=& \cos (\theta/2)\, \Psi_{1_{+}}^{(0)} - \sin (\theta/2)\, \Psi_{1_{-}}^{(0)}, \nonumber\\ \Psi_{1_{-}} &=& \sin (\theta/2)\, \Psi_{1_{+}}^{(0)} + \cos (\theta/2)\, \Psi_{1_{-}}^{(0)}, \end{eqnarray} yields the eigenspectrum \begin{equation} \epsilon_{1_{\pm}} =\beta_{0} \pm \sqrt{\beta^2 +\sigma^2 u^2} =\beta_{0} \pm |\beta|/\sin \theta, \label{Eonepm} \end{equation} with $\sin \theta =1/\sqrt{1+ \sigma^2u^2/\beta^2 }$ and $\cot \theta = - \sigma u/\beta$; note that $\beta \approx -11.4\, {\rm meV} < 0$ and $\beta_{0} \approx 18.6\, {\rm meV} >0$ for the set~(\ref{Triparameter}) of parameters and at $B=$20 T. In particular, for $u\rightarrow +0$ $(\theta\rightarrow\pi/2$) the spectrum reads \begin{eqnarray} \epsilon_{1_{+}} &\stackrel{u\rightarrow0}{=}& \beta_{0} + |\beta| =\Delta' - \textstyle{1\over{2}} \gamma_{5} \ (\sim 30\, {\rm meV}) , \nonumber\\ \epsilon_{1_{- }} &\stackrel{u\rightarrow0}{=}& (1-2c_{1}^2)\, ( \textstyle{1\over{2}} \gamma_{5} \! 
+ \Delta') + 4 \sigma c_{1} \lambda\,\omega_{c} , \end{eqnarray} which, for $\hat{\gamma} \rightarrow \infty$, recovers an earlier result,~\cite{KMc83} with $c_{1}^2 \rightarrow 1/2$, $\sigma \rightarrow 0$ and $\epsilon_{1_{- }}\rightarrow 0$. Here we wish to discuss possible effects of the interlayer coupling $\gamma_{3} \equiv \gamma_{A_{1} B_{2}} \propto v_{3}$. It induces transitions that go outside the PZM sector, as one can verify using the solutions $(\Psi_{0}, \Psi_{1_{\pm}})$. Accordingly, its contributions to the spectra $(\epsilon_{0},\epsilon_{1_{\pm}})$ are only of second order in $v_{3}/v$ and are negligible in high magnetic fields. The Hamiltonian ${\cal H}_{K'}$ at another valley is given by ${\cal H}_{K}$ with replacement $\Pi \leftrightarrow -\Pi^{\dag}$. As for its spectrum one readily finds the following: The associated eigenmodes $\Psi_{n}^{K'}$ take the form of $\Psi_{n}$ in Eq.~(\ref{Psi_n}), with replacement $|n \rangle \rightarrow |n-2 \rangle$ for $d_{n}^{(2)}$ and $|n-2 \rangle \rightarrow |n \rangle$ for $(b_{n}^{(1)}, b_{n}^{(3)} )$. The reduced Hamiltonian ${\cal H}_{n}|^{K'}$ is obtained from ${\cal H}_{n}$ in Eq.~(\ref{Hn}) by replacing each $r_{n-1}$ by $- r_{n}$ and each $r_{n}$ by $- r_{n-1}$. One, of course, has to calculate the eigenvectors ${\bf v}_{n}|^{K'}=(b_{n}^{(1)}, d_{n}^{(1)},...)^{\rm t}|^{K'}$ anew. Unlike ${\cal H}_{n}$, ${\cal H}_{n}^{K'}$ has rank 2 for $n=0$ and rank 5 for $n=1$. This already signals that the PZM spectra significantly differ between the two valleys. 
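The quoted values $\beta \approx -11.4$~meV and $\beta_{0} \approx 18.6$~meV can be reproduced, to within the rounding of the parameter set~(\ref{Triparameter}), directly from the first-order formulas above; a minimal numerical sketch at $B=20$~T, which also recovers $\epsilon_{1_{+}} = \Delta' - \gamma_{5}/2 = 30$~meV exactly in the $u\rightarrow 0$ limit:

```python
import math

# Band parameters of Eq. (Triparameter), in meV, at B = 20 T.
gamma1, gamma5, Delta_p, lam = 400.0, 40.0, 50.0, 0.013
wc = 36.3 * math.sqrt(20.0)                       # ~162 meV
ghat = gamma1 / wc
sigma = 1.0 / math.sqrt(2.0 * ghat**2 + 1.0)      # ~0.28
c1sq = ghat**2 / (2.0 * ghat**2 + 1.0)            # c_1^2, approaching 1/2
t = 2.0 * sigma * math.sqrt(c1sq) * lam * wc      # common 2*sigma*c1*lambda*wc term

beta = 0.5 * (1.0 - c1sq) * gamma5 - c1sq * Delta_p + t
beta0 = -0.5 * c1sq * gamma5 + (1.0 - c1sq) * Delta_p + t

# u -> 0 limit of Eq. (Eonepm):
eps_1p = beta0 + abs(beta)     # = Delta' - gamma5/2 = 30 meV exactly
eps_1m = beta0 - abs(beta)     # ~7 meV, close to the full-spectrum value 7.15
```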
For $n=0$ one considers the $2\times 2$ matrix Hamiltonian $\hat{\cal H}_{\rm red}|^{K'} \sim -u\, \sigma_{3} + {1\over{2}}\gamma_{2} \sigma_{1}$, with eigenmodes (denoted as $n=0_{\pm}$), \begin{eqnarray} \Psi_{0_{+}} &=& \big(-\sin (\phi/2)\, |0\rangle, 0, 0,0, \cos (\phi/2)\, |0\rangle, 0 \big)^{\rm t}, \nonumber\\ \Psi_{0_{-}} &=& \big(\cos (\phi/2)\, |0\rangle, 0, 0,0, \sin (\phi/2)\, |0\rangle, 0 \big)^{\rm t}, \end{eqnarray} and the associated spectra \begin{equation} \epsilon_{0_{\pm}} =\pm \sqrt{ (\gamma_{2}/2)^2 +u^2} =\pm \textstyle{1\over{2}} |\gamma_{2}|/\sin \phi , \end{equation} where $\sin \phi =1/\sqrt{1 + (2u/\gamma_{2})^2}$ and $\tan \phi = - {1\over{2}} \gamma_{2}/u$. For $n=1$, ${\cal H}_{n}^{K'}$ has rank 5. Of its five eigenvalues, one belongs to the PZM sector, two are monolayer-like with $\epsilon_{\pm1'} \sim \pm \omega_{c}$ and two are bilayer-like with $\epsilon_{\pm1''} \sim \pm \sqrt{2} \gamma_{1}$. In the $u\rightarrow 0$ basis, the zero-energy mode is given by \begin{equation} \Psi_{n= 1} \stackrel{u\rightarrow0}{=} c_{1}\, ( |1\rangle,0, \kappa\, |0\rangle, 0, |1\rangle, 0)^{\rm t}, \label{Psinone} \end{equation} where $\kappa \equiv 1/\hat{\gamma}$ and $c_{1}\! \equiv \hat{\gamma}/\sqrt{2 \hat{\gamma}^2\! +1}= 1/\sqrt{2 +\kappa^2}$. Evaluating the expectation value $\epsilon_{1} =\Psi_{1}^{\dag} (\omega_{c}\, {\cal H}_{n=1}|^{K'}) \Psi_{1}$ yields the spectrum of the $n=1$ mode, \begin{equation} \epsilon_{1} = c_{1}^{2}\, (\gamma_{2} +4 \kappa \lambda\, \omega_{c} +\kappa^2 \Delta' ), \end{equation} correct to order linear in $(u, \gamma_{2}, \gamma_{5}, v_{4}, \Delta')$ as well. The LLL, i.e., the PZM sector, consists of $n \in (0,1_{\pm})$ at valley $K$ and of $n\in (0_{\pm},1)$ at valley $K'$; there are thus twelve PZM levels differing in spin, valley and orbital. It is interesting to look into their structure.
For zero bias $u\rightarrow +0$ (i.e., $\theta=\phi \rightarrow \pi/2$), $\Psi_{0}$ and $\Psi_{1_{-}}$ at valley $K$ are predominantly composed of the orbital mode $|0\rangle$ and $|1\rangle$, respectively, residing on the $B$ sites of the middle layer; let us denote this feature as $\Psi_{0}|^{K} \sim |0\rangle$ on $B_{2}$ and $\Psi_{1_{-}}|^{K} \sim |1\rangle$ on $B_{2}$. One can further write $\Psi_{1_{+}}|^{K} \sim |0\rangle$ on $B_{1,3}$, $\Psi_{1}|^{K'} \sim |1\rangle$ on $A_{1,3}$, $\Psi_{0_{-}}|^{K'} \sim |0\rangle$ on $A_{1}$, and $\Psi_{0_{+}}|^{K'} \sim |0\rangle$ on $A_{3}$. This naturally explains why $\epsilon_{0}$ and $\epsilon_{1_{-}}$ are insensitive to the outer-layer coupling $(\gamma_{2}, \gamma_{5})$ and are less sizable. In this way, in $ABA$ trilayers the PZM levels show valley asymmetry in composition. \begin{figure}[htbp] \includegraphics[scale=.55]{fig1.eps} \caption{ (Color online) One-body level spectra $\epsilon^{\rm h}_{n}$ as a function of $B$ for (a)~$u=0$ and (b)~$u=20$ meV. Solid curves refer to valley $K$ and dashed ones to valley $K'$. } \end{figure} The valley asymmetry is manifest in the one-body spectra $\{\epsilon_{n}\}$, which, from now on, are denoted as $\{\epsilon_{n}^{\rm h}\}$ to indicate that they come from $H^{\rm tri}$. Numerically, for $B = 20$\,T and with the set of parameters in Eq.~(\ref{Triparameter}) taken, \begin{eqnarray} &&( \epsilon_{1_{+}}^{\rm h}, \epsilon_{1_{-}}^{\rm h}, \epsilon_{0}^{\rm h} ) \stackrel{u\rightarrow0}{\approx} (30, 7.15, 0)\, {\rm meV}, \nonumber\\ &&(\epsilon_{0_{+}}^{\rm h}, \epsilon_{1}^{\rm h}, \epsilon_{0_{-}}^{\rm h} ) \stackrel{u\rightarrow0}{\approx} (10, -3.64, -10)\, {\rm meV}. \end{eqnarray} Figure~1 shows the spectra $\{ \epsilon_{n}^{\rm h} \}$ for $u=(0, 20)$~meV as a function of magnetic field $B$, along with the $n=\pm 2, \pm 3$ bilayer-like spectra.
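The $K'$-valley entries of the list above follow from the closed-form expressions of this section; a numerical sketch at $B=20$~T and $u\rightarrow 0$ (the small offset of $\epsilon_{1}$ from the quoted $-3.64$~meV presumably reflects terms beyond first order kept in the full spectrum):

```python
import math

# Parameters of Eq. (Triparameter), in meV, at B = 20 T, u -> 0.
gamma2, Delta_p, lam = -20.0, 50.0, 0.013
wc = 36.3 * math.sqrt(20.0)
ghat = 400.0 / wc
kappa = 1.0 / ghat
c1sq = 1.0 / (2.0 + kappa**2)            # c_1^2 entering the K' zero mode

eps_0p = abs(gamma2) / 2.0               # +10 meV  (u -> 0, sin(phi) -> 1)
eps_0m = -abs(gamma2) / 2.0              # -10 meV
eps_1 = c1sq * (gamma2 + 4.0 * kappa * lam * wc + kappa**2 * Delta_p)
```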
The PZM sector is considerably spread in energy $\sim \Delta' - {1\over{2}}\, ( \gamma_{5}+\gamma_{2}) \sim 40$\,meV, but, for large $B >15\,$T, it is practically isolated from other levels. The PZM spectra prominently differ between the two valleys (solid curves {\it vs.} dashed ones). In addition, they are highly electron-hole ($eh$) asymmetric (i.e., not symmetric about zero energy) and this $eh$ asymmetry comes from $\beta_{0} \not =0$ and $\epsilon_{1} \not =0$, i.e., primarily from $(\gamma_{5}, \Delta')$ at valley $K$ and $\gamma_{2}$ at valley $K'$. Note that $\epsilon_{1_{\pm}}^{\rm h}$ and $\epsilon_{0_{\pm}}^{\rm h}$ vary with interlayer bias $u$; in practice, only $\epsilon_{0_{\pm}}^{\rm h}$ vary sensitively with $u$, while other levels are barely affected. Let us now make the Landau-level structure explicit by passing to the $|n,y_{0}\rangle$ basis $\ni \{ \Psi_{n} \}$ with $y_{0}\equiv \ell^{2}p_{x}$ via the expansion $(\Psi^{K} ({\bf x}), \Psi^{K'} ({\bf x}) ) = \sum_{n, y_{0}} \langle {\bf x}| n, y_{0}\rangle\, \{ \psi^{n;a}_{\alpha}(y_{0})\}$, where $n$ refers to the level index, $\alpha \in (\uparrow, \downarrow)$ to the spin, and $a \in (K,K')$ to the valley. The charge density $\rho_{-{\bf p}} =\int d^{2}{\bf x}\, e^{i {\bf p\cdot x}}\,\rho$ with $\rho = (\Psi^{K})^{\dag}\Psi^{K} + (\Psi^{K'})^{\dag}\Psi^{K'}$ is thereby written as~\cite{KSLs} \begin{eqnarray} \rho_{-{\bf p}} &=& \gamma_{\bf p}\sum_{k, n =-\infty}^{\infty} \sum_{a,\alpha} g^{k n;a}_{\bf p}\, R^{k n;aa}_{\alpha\alpha;\bf p}, \nonumber\\ R^{kn;ab}_{\alpha\beta;{\bf p}}&\equiv& \int dy_{0}\, {\psi^{k,a}_{\alpha}}^{\dag}(y_{0})\, e^{i{\bf p\cdot r}}\, \psi^{n,b}_{\beta} (y_{0}), \label{chargeoperator} \end{eqnarray} where $\gamma_{\bf p} \equiv e^{- \ell^{2} {\bf p}^{2}/4}$; ${\bf r} = (i\ell^{2}\partial/\partial y_{0}, y_{0})$ denotes the center coordinate with $[r_{x}, r_{y}] =i\ell^{2}$.
The coefficient matrix $g^{k n;a}_{\bf p} \equiv g^{kn}_{\bf p}|^{a}$ at valley $a\in (K,K')$ is constructed from the eigenvectors ${\bf v}_{n}|^{a}$, \begin{eqnarray} g^{kn}_{\bf p}|^{K} &=&\{ b_{k}^{(1)} b_{n}^{(1)} + b_{k}^{(3)} b_{n}^{(3)} \}\, f_{\bf p}^{|k|-2,|n|-2} \nonumber\\ &&+ \{ d_{k}^{(1)} d_{n}^{(1)} + b_{k}^{(2)} b_{n}^{(2)} + d_{k}^{(3)} d_{n}^{(3)} \}\, f_{\bf p}^{|k|-1,|n|-1} \nonumber\\ && + d_{k}^{(2)} d_{n}^{(2)}\, f_{\bf p}^{|k|,|n|}, {\rm etc.}, \label{fknp} \end{eqnarray} and has the property $(g^{mn;a}_{\bf p})^{\dag}= g^{n,m;a}_{\bf -p}$. Here \begin{equation} f^{k n}_{\bf p} = \sqrt{n!/k!}\, (-\bar{q}/\sqrt{2})^{k-n}\, L^{(k-n)}_{n} (|\bar{q}|^{2}/2) \end{equation} for $k \ge n \ge 0$, and $f^{n k}_{\bf p} = (f^{k n}_{\bf -p})^{\dag}$; $\bar{q} =\ell (p_{x}\! -i\, p_{y})$; it is understood that $f^{kn}_{\bf p}=0$ for $k<0$ or $n<0$. Within the PZM sector, \begin{eqnarray} &&g^{00}_{\bf p}=1,\ g^{01_{\pm}}_{\bf p} = (\cos \textstyle{\theta\over{2}} \mp \sin {\theta\over{2}} )\, c_{1} \ell\, p/\sqrt{2}, \nonumber\\ &&g^{1_{\pm}1_{\pm}}_{\bf p} = 1 - (1\mp \sin \theta ) \textstyle{1\over{2}} (c_{1})^2\, \ell^2{\bf p}^2, \nonumber\\ &&g^{1_{+}1_{-}}_{\bf p} = g^{1_{-}1_{+}}_{\bf p} = - (\cos \theta)\, \textstyle{1\over{2}} (c_{1})^2\, \ell^2 {\bf p}^2, \label{gpzmK} \end{eqnarray} at valley $K$, and \begin{eqnarray} &&g^{0_{+}0_{+}}_{\bf p}=g^{0_{-}0_{-}}_{\bf p} =1,\ \ g^{0_{+}0_{-}}_{\bf p}=0, \ \ \nonumber\\ && g^{0_{\pm}1}_{\bf p} = (\cos \textstyle{\phi\over{2}} \mp \sin {\phi\over{2}} )\, c_{1} \ell\, p/\sqrt{2}, \ \ \nonumber\\ && g^{11}_{\bf p} = 1 - (c_{1})^2\, \ell^2 {\bf p}^2, \ \ \ \ \label{gpzmKp} \end{eqnarray} at valley $K'$, with $c_{1}= 1/\sqrt{2+ \kappa^2}$ and $\kappa \equiv 1/\hat{\gamma}$. One can further show that, with only $(v,\gamma_{1}, u)$ kept, these $g^{kn}_{\bf p}$ are only corrected to $O(\hat{u}^2\kappa^2)$ or smaller. 
In view of Eqs.~(\ref{vn}) and (\ref{Hequi}), the form factors $g^{mn;a}_{\bf p}$ enjoy the property \begin{equation} g^{mn;a}_{\bf p} = g^{-m,-n;a}_{\bf p}|_{-U_{i}, -v_{4}, -\gamma_{2}, -\gamma_{5},-\Delta'} \label{gmnproperty} \end{equation} for general $(m,n)$, where it is understood that one sets $\pm m \rightarrow j$ for the PZM level $j$. This property plays a key role in our analysis later. Equations~(\ref{gpzmK}) and~(\ref{gpzmKp}) are expressions valid to zero$th$ order in perturbations $(u,\gamma_{2}, \gamma_{5},\cdots)$, but they actually know the nature of perturbations through the mixing angles $(\theta, \phi)$ that depend on the relative strengths $(u/\beta, u/\gamma_{2})$. They indeed satisfy Eq.~(\ref{gmnproperty}). The form factors $g^{kn}_{\bf p}$ generally differ between the two valleys. Interestingly, they happen to coincide for $u\rightarrow 0$: Indeed, for $u\rightarrow 0$, one finds \begin{eqnarray} &&g^{1_{+}1_{+}}_{\bf p}=g^{00}_{\bf p}=1, g^{1_{-}1_{-}}_{\bf p}= 1- c_{1}^2\ell^2 {\bf p}^2, \nonumber\\ &&g^{01_{+}}_{\bf p}=g^{1_{-}1_{+}}_{\bf p}=0, g^{01_{-}}_{\bf p}= c_{1}\, \ell\, p, \label{gknU0} \end{eqnarray} at valley $K$, and analogous $K'$-valley expressions with $(1_{+},1_{-},0)$ replaced by $(0_{+},1,0_{-})$ in the above. This fact tells us that the charge $\rho_{\bf p}$ takes a manifestly valley-symmetric form for zero bias $u=0$ while the one-body spectra $\{\epsilon_{n}^{\rm h} \}$ inevitably break valley symmetry. In addition, Eq.~(\ref{gknU0}) implies that, for $u\rightarrow 0$, $1_{+}|^{K}$ is isolated from $(0,1_{-})|^{K}$, and similarly, $0_{+}|^{K'}$ from $(1,0_{-})|^{K'}$. From now on we frequently suppress summations over levels $n$, spins $\alpha$ and valleys $a$, with the convention that the sum is taken over repeated indices. 
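The decoupling just noted can be read off directly from Eq.~(\ref{gpzmK}) in the zero-bias limit $\theta \rightarrow \pi/2$; a trivial numerical check (ours, as a sketch) of the coefficients, with the common factor $\ell\, p$ stripped off:

```python
import math

theta = math.pi / 2            # u -> 0 limit: cot(theta) = -sigma*u/beta -> 0
c, s = math.cos(theta / 2), math.sin(theta / 2)

# Coefficients of the PZM form factors, Eq. (gpzmK), in units of c1*ell*p
# (off-diagonal) and c1^2*ell^2*p^2 (diagonal mixing):
g01_plus_coef = (c - s) / math.sqrt(2.0)   # -> 0: 1_+ decouples from 0
g01_minus_coef = (c + s) / math.sqrt(2.0)  # -> 1: g^{01_-} -> c1 * ell * p
g1p1m_coef = -math.cos(theta) * 0.5        # -> 0: 1_+ decouples from 1_-
```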
The Hamiltonian $H^{\rm tri}$ projected to the PZM sector is thereby written as \begin{equation} H^{\rm h} = \epsilon^{\rm h}_{n}\, R^{nn}_{\alpha\alpha;{\bf p= 0}} - \mu_{\rm Z}\, (T_{3})_{\beta\alpha} R^{nn}_{\alpha\beta;{\bf p= 0}} \label{Hzero} \end{equation} with $n\in (0,1_{\pm}, 0_{\pm}, 1)$. Here the Zeeman term $\mu_{\rm Z} \equiv g^{*}\mu_{\rm B}B \approx 0.12\, B[{\rm T}]$ meV is introduced via the spin matrix $T_{3} = \sigma_{3}/2$. Actually, the Zeeman energy $\mu_{\rm Z}$ is only about 3\,meV even at $B=30$\,T and is generally smaller than energy splitting due to valley breaking. Accordingly, in what follows, we mostly suppose that the spin is practically unresolved and focus on energy gaps due to valley and orbital breaking. \section{vacuum fluctuations} In this section we examine the effect of Coulombic quantum fluctuations on the PZM multiplet. The Coulomb interaction is written as \begin{equation} V = \textstyle{1\over{2}} \sum_{\bf p} v_{\bf p}\, :\rho_{\bf -p}\, \rho_{\bf p}:, \label{Hcoul} \end{equation} where $v_{\bf p}= 2\pi \alpha/(\epsilon_{\rm b} |{\bf p}|)$ with $\alpha = e^{2}/(4 \pi \epsilon_{0}) \approx 1/137$ and the substrate dielectric constant $\epsilon_{\rm b}$. For simplicity we ignore the difference between the intralayer and interlayer Coulomb potentials. In this paper we generally study many-body ground states $|G\rangle$ with a homogeneous density, realized at integer filling factor $\nu \in [-6, 6]$. We set the expectation values $\langle G| R^{mn;ab}_{\alpha\beta; {\bf k}}|G \rangle = \delta_{\bf k,0}\, \rho_{0}\, \nu^{mn;ab}_{\alpha\beta}$ with $\rho_{0} = 1/(2\pi \ell^{2})$, so that the filling factor $\nu^{nn;aa}_{\alpha\alpha}=1$ for a filled $(n,a,\alpha)$ level. Let us define the Dirac sea $|{\rm DS}\rangle$ as the valence band with levels below the PZM sector (i.e., levels with $n\le -2$, $n'\le -1'$, ...) all filled. 
We construct the Hartree-Fock Hamiltonian $V^{\rm HF}= V_{\rm D}+ V_{\rm X}$ out of $V$ as the effective Hamiltonian that governs the electron states over $|{\rm DS}\rangle$. As usual, the direct interaction $V_{\rm D}$ is removed if one takes into account the neutralizing positive background. We thus focus on the exchange interaction \begin{equation} V_{\rm X} = - \sum_{\bf p}v_{\bf p}\gamma_{\bf p}^{2}\, g^{m n';b}_{\bf -p}\,g^{m'n;a}_{\bf p}\, \nu^{mn;ba}_{\beta \alpha}\, R^{m' n';ab}_{\alpha\beta;{\bf 0}}, \label{Vex} \end{equation} where we sum over filled levels $(m,n)$ and retain the PZM sector $m',n'\in (0,1_{\pm}, 0_{\pm}, 1)$. Let us first extract the contribution from the Dirac sea, \begin{equation} V^{\rm DS}_{\rm X} = - \sum_{\bf p}v_{\bf p}\gamma_{\bf p}^{2}\, \sum_{n\in {\rm DS}}|g^{m'n;a}_{\bf p}|^2\, R^{m' m';aa}_{\alpha\alpha;{\bf 0}}, \label{VxDS} \end{equation} where the sum is understood over spin $\alpha$ and over $m'\in (0,1_{\pm})$ for $a=K$ and $m'\in (0_{\pm},1)$ for $a=K'$. Actually, the sum over infinitely many filled levels with $n \in {\rm DS}$ gives rise to an ultraviolet divergence. Fortunately this infinite sum is evaluated exactly to zero$th$ order in perturbations $(u,\gamma_{2}, \gamma_{5},\cdots)$, as done earlier,~\cite{KSLs, KSLsTL} if one notes Eq.~(\ref{gmnproperty}) and the completeness relation~\cite{KSLs} \begin{equation} \sum_{n=-\infty}^{\infty} |g^{mn;a}_{\bf p}|^{2} = e^{\ell^{2}{\bf p }^{2}/2}\ \ {\rm for\ each}\ a \in (K,K'). \label{compRel} \end{equation} The result is \begin{equation} \sum_{n \in {\rm DS}} |g^{jn;K}_{\bf p}|^{2} = {1\over{2}}\, (e^{\ell^{2}{\bf p }^{2}/2} - |g^{j0}_{\bf p}|^{2} - |g^{j1_{+}}_{\bf p}|^{2}- |g^{j1_{-}}_{\bf p}|^{2}), \label{DSsum} \end{equation} for $j\in (0,1_{\pm})$; analogously for valley $K'$. The $e^{ \ell^{2}{\bf p }^{2}/2}$ term leads to a divergence upon integration over ${\bf p}$; it, however, shifts all levels $j$ uniformly and is safely omitted. 
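The completeness relation~(\ref{compRel}) ultimately rests on the unitarity of $e^{i{\bf p\cdot r}}$ in the Landau-orbital basis; the analogous identity for the elementary form factors, $\sum_{n} |f^{kn}_{\bf p}|^{2} = e^{\ell^{2}{\bf p}^{2}/2}$, can be checked numerically from the Laguerre-polynomial expression (a sketch; the sum is truncated at large $n$, where the terms decay factorially):

```python
import math

def genlaguerre(n, alpha, x):
    """Associated Laguerre polynomial L_n^{(alpha)}(x) via the standard recurrence."""
    if n == 0:
        return 1.0
    Lm, L = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        Lm, L = L, ((2 * k + 1 + alpha - x) * L - (k + alpha) * Lm) / (k + 1)
    return L

def abs_f_sq(k, n, x):
    """|f^{kn}_p|^2 with x = l^2 p^2 / 2; uses f^{nk}_p = (f^{kn}_{-p})^dagger for k < n."""
    if k < 0 or n < 0:
        return 0.0
    if k < n:
        k, n = n, k
    ratio = math.factorial(n) / math.factorial(k)
    return ratio * x ** (k - n) * genlaguerre(n, k - n, x) ** 2

x = 0.7   # l^2 p^2 / 2, an arbitrary test value
# truncated completeness sums, one per Landau index k
totals = {k: sum(abs_f_sq(k, n, x) for n in range(60)) for k in (0, 1, 3)}
```

For $k=0$ the identity reduces to the Taylor series of $e^{x}$, since $L_{0}^{(n)}=1$; the general case follows from $\sum_{n} |\langle k| e^{i{\bf p\cdot r}} |n\rangle|^{2} = 1$.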
The regularized Dirac-sea contribution then reads \begin{eqnarray} V^{\rm DS}_{\rm X}\!\! &\stackrel{0^{th}}{=}& \! \! \epsilon^{\rm v}_{0}\, R^{00}_{\alpha\alpha;{\bf 0}} + \epsilon^{\rm v}_{1_{+}}\, R^{1_{+}1_{+}}_{\alpha\alpha;{\bf 0}} + \epsilon^{\rm v}_{1_{-}}\, R^{1_{-}1_{-}}_{\alpha\alpha;{\bf 0}} \nonumber\\ &&\!\!\! + \epsilon^{\rm v}_{0_{+}}\, R^{0_{+}0_{+}}_{\alpha\alpha;{\bf 0}} + \epsilon^{\rm v}_{0_{-}}\, R^{0_{-}0_{-}}_{\alpha\alpha;{\bf 0}} + \epsilon^{\rm v}_{1}\, R^{11}_{\alpha\alpha;{\bf 0}}, \nonumber\\ \epsilon^{\rm v}_{j}\!\!\! &=&\!\!\!\! {1\over{2}} \sum_{\bf p}v_{\bf p}\gamma_{\bf p}^{2} \sum_{n \in (0, 1_{\pm})} |g^{jn}_{\bf p}|^{2} \ {\rm for}\ j \in (0, 1_{\pm});\ \ \ \label{VxDStwo} \end{eqnarray} analogously for $j \in (0_{\pm},1)$. Integration over ${\bf p}$, with the formula $ \sum_{\bf p}v_{\bf p}\gamma_{\bf p}^2 \,[1, (\ell |{\bf p}|)^2, (\ell |{\bf p}|)^4] = [1,1,3]\, \tilde{V}_{c} $, yields \begin{eqnarray} \epsilon^{\rm v}_{0} &=& \textstyle{1\over{2}}\, ( 1 + c_{1}^2)\, \tilde{V}_{c}, \nonumber\\ \epsilon^{\rm v}_{1_{\pm}}\!\!\! &=& \textstyle{1\over{2}} \big[1 - (1\mp \sin \theta)\, {1\over{2}} (c_{1}^2 - 3 c_{1}^4) \big]\, \tilde{V}_{c}, \nonumber\\ \epsilon^{\rm v}_{0_{\pm}}\!\!\! &=& \textstyle{1\over{2}}\, \big[ 1 + \textstyle{1\over{2}} c_{1}^2 (1 \mp \sin \phi) \big]\, \tilde{V}_{c}, \nonumber\\ \epsilon^{\rm v}_{1} &=& \textstyle{1\over{2}}\, (1 - c_{1}^2+ 3\, c_{1}^4 )\, \tilde{V}_{c}, \label{Vzeroreg} \end{eqnarray} where $c_{1}^2\equiv (c_{1})^2$, etc., $\tilde{V}_{c} \equiv \sqrt{\pi/2}\, V_{c}$ and \begin{equation} V_{c} \equiv \alpha/(\epsilon_{b}\ell) \approx (56.1/\epsilon_{b})\, \sqrt{B[{\rm T}]}\, {\rm meV}. \end{equation} Note that Eqs.~(\ref{VxDStwo})--(\ref{Vzeroreg}) are zeroth-order expressions, which depend on bias $u$ through the zeroth-order ratios $\sin \phi$ and $\sin \theta$. 
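The moment formula quoted above can be cross-checked numerically. The sketch below assumes the usual Landau-level form factor $\gamma_{\bf p}^{2}=e^{-\ell^{2}{\bf p}^{2}/2}$ and the continuum replacement $\sum_{\bf p}\rightarrow \int d^{2}p/(2\pi)^{2}$ (both assumptions of the sketch, not stated explicitly here); it also reproduces the coefficient $56.1$ in $V_{c}$ from standard constants.

```python
# Numerical cross-check of the moment formula
#   sum_p v_p gamma_p^2 [1, (l|p|)^2, (l|p|)^4] = [1, 1, 3] Vtilde_c,
# and of the coefficient 56.1 in V_c = alpha/(eps_b l).
# Assumptions of this sketch (not stated explicitly in the text):
# gamma_p^2 = exp(-l^2 p^2 / 2), the usual Landau-level form factor,
# and sum_p -> int d^2p / (2 pi)^2.  With v_p = 2 pi alpha/(eps_b |p|),
# the angular integral reduces each moment to
#   V_c * int_0^inf dx x^(2k) exp(-x^2/2),  x = l |p|.
import math

def moment(k, dx=1e-3, xmax=12.0):
    """Midpoint-rule estimate of int_0^inf x^(2k) exp(-x^2/2) dx."""
    s, x = 0.0, dx / 2.0
    while x < xmax:
        s += x ** (2 * k) * math.exp(-x * x / 2.0) * dx
        x += dx
    return s

vtilde = math.sqrt(math.pi / 2.0)                # Vtilde_c in units of V_c
ratios = [moment(k) / vtilde for k in range(3)]  # expect [1, 1, 3]

# V_c = alpha/(eps_b l) with e^2/(4 pi eps_0) = 1.440 eV nm and magnetic
# length l = 25.66 nm / sqrt(B[T]) gives the quoted 56.1 meV coefficient.
coeff = 1.440e3 / 25.66                          # meV, times sqrt(B[T])/eps_b
print(ratios, coeff)
```

The three moments come out as $[1,1,3]\,\tilde{V}_{c}$, confirming the formula used in the integration.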
Numerically, for $u=0$ and $B = 20$\,T, and with $\epsilon_{b}= 5$ taken as a typical value, \begin{eqnarray} \epsilon^{\rm v}_{0}&=& \epsilon^{\rm v}_{0_{-}} = \textstyle{1\over{2}}\, ( 1 + c_{1}^2)\, \tilde{V}_{c} \approx 0.73\, \tilde{V}_{c}, \nonumber\\ \epsilon^{\rm v}_{1_{-}} \!\!\! &=& \epsilon^{\rm v}_{1} = \textstyle{1\over{2}} ( 1 - c_{1}^2+ 3\, c_{1}^4 )\, \tilde{V}_{c} \approx 0.59\, \tilde{V}_{c}, \nonumber\\ \epsilon^{\rm v}_{1_{+}} \!\!\! &=& \epsilon^{\rm v}_{0_{+}} = \textstyle{1\over{2}} \, \tilde{V}_{c} = 0.5\, \tilde{V}_{c}. \label{Spectrum} \end{eqnarray} In this way, the PZM levels are ``Lamb-shifted'' due to vacuum fluctuations and the splitting among $\{\epsilon^{\rm v}_{j}\}$ reflects the difference in their spatial (or ${\bf p}$) distributions. \begin{figure}[thbp] \includegraphics[scale=.55]{fig2.eps} \caption{ (Color online) (a) Orbital Lamb-shift corrections $\{\epsilon^{\rm v}_{j}\}$ plotted in units of $\tilde{V}_{c}$ as a function of bias $u$ at $B=20\,$T. (b)~$\sin \theta$ and $\sin \phi$ as a function of $u$ for $B=20\,$T. } \end{figure} In Fig.~2 the Lamb shifts $\{\epsilon^{\rm v}_{j}\}$ are plotted, in units of $\tilde{V}_{c}$, as a function of bias $u$. They are ordered, e.g., as $\epsilon_{0}^{\rm v} > \epsilon_{1_{-}}^{\rm v} > \epsilon_{1_{+}}^{\rm v}$ at one valley, and are valley-symmetric for $u=0$ (this reflects the manifest valley symmetry of the charge $\rho_{\bf p}$ noted in Sec. II), with valley asymmetry developing gradually with increasing bias $u$. \begin{figure}[thbp] \includegraphics[scale=.52]{fig3.eps} \caption{ (Color online) Level spectra of the empty and filled PZM sector. (a)~$\epsilon_{b}=5$ and $u=(0, 20)$ meV. The upper and lower halves of each parabolic spectrum refer to the empty ($\nu =-6$) and filled ($\nu =6$) case, respectively. (b)~$\epsilon_{b}=10$ and $u=(0, 20)$ meV, with a weaker Coulomb potential. 
} \end{figure} The spectra~(\ref{Vzeroreg}) or~(\ref{Spectrum}) refer to those of empty levels. Actually the spectra vary (i.e., generally go down due to the exchange interaction) with filling of the PZM levels. In particular, Eq.~(\ref{VxDS}) tells us that, when the PZM sector is filled up, $\{\epsilon^{\rm v}_{j}\}$ change sign $\epsilon^{\rm v}_{j} \rightarrow - |\epsilon^{\rm v}_{j}|$. In this sense, the Lamb-shift corrections $\{\epsilon^{\rm v}_{j}\}$ preserve $eh$ symmetry. Thus, as the PZM sector is filled from $\nu=-6$ (empty) to $\nu=6$ (full), the spectrum of the $j$-th level varies from $\epsilon^{\rm h}_{j} + \epsilon^{\rm v}_{j}$ to $\epsilon^{\rm h}_{j} - \epsilon^{\rm v}_{j}$. See Fig.~3(a), which depicts the (empty/filled) spectra $\epsilon^{\rm h}_{j} \pm \epsilon^{\rm v}_{j}$ as a function of $B$, for $u=0$ and $\epsilon_{\rm b} =5$; the upper and lower halves of each spectrum refer to empty and filled levels, respectively. Note that bias $u$ works to enhance the splitting of the $0_{\pm}$ spectra. For reference, Fig.~3(b) shows the level spectra for a weaker potential with $\epsilon_{\rm b} =10$. At valley $K$, the one-body spectra $\{\epsilon_{n}^{\rm h} \}$ are ordered so that $\epsilon_{1_{+}}^{\rm h} > \epsilon_{1_{-}}^{\rm h} > \epsilon_{0}^{\rm h}$ while $\{\epsilon_{n}^{\rm v} \}$ are ordered so that $\epsilon_{0}^{\rm v} > \epsilon_{1_{-}}^{\rm v} > \epsilon_{1_{+}}^{\rm v}$. The Lamb-shift contributions $\{\epsilon_{n}^{\rm v} \}$ therefore enhance splitting among filled levels $(1_{+}, 1_{-}, 0)$ with spectra $\epsilon_{n}^{\rm h} - \epsilon_{n}^{\rm v}$. For empty levels (with $\epsilon_{n}^{\rm h} + \epsilon_{n}^{\rm v}$) they work oppositely and even reverse the ordering of the $1_{-}$ and $0$ spectra when the Coulomb interaction $\tilde{V}_{c}$ is strong enough (i.e., for smaller $\epsilon_{b}$ and higher $B$); compare Figs.~3(a) and 3(b). 
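For orientation, the ratios of Eq.~(\ref{Spectrum}) can be translated into meV at $B=20$\,T and $\epsilon_{b}=5$. The sketch below assumes $c_{1}^{2}\approx 0.46$, the value implied by the quoted ratio $0.73$; $c_{1}$ itself is fixed by the band parameters and is not given in this excerpt.

```python
# Converts the u = 0 Lamb-shift ratios of Eq. (Spectrum) into meV at
# B = 20 T and eps_b = 5.  Assumes c1^2 ~ 0.46, the value implied by the
# quoted ratio 0.73; c1 itself is fixed by the band parameters and is
# not given in this excerpt.
import math

c1sq = 0.46
ratios = {
    "0, 0-": 0.5 * (1.0 + c1sq),                    # ~0.73
    "1-, 1": 0.5 * (1.0 - c1sq + 3.0 * c1sq ** 2),  # ~0.59
    "1+, 0+": 0.5,
}

# Vtilde_c = sqrt(pi/2) V_c with V_c ~ (56.1/eps_b) sqrt(B[T]) meV.
eps_b, B = 5.0, 20.0
vtilde = math.sqrt(math.pi / 2.0) * (56.1 / eps_b) * math.sqrt(B)
shifts = {k: r * vtilde for k, r in ratios.items()}  # meV
print(vtilde, shifts)
```

With $\tilde{V}_{c}\approx 63$\,meV, the three shifts come out near $46$, $37$, and $31$\,meV, setting the scale of the Lamb-shift splittings discussed above.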
Replacing $(1_{+}, 1_{-}, 0) \rightarrow (0_{+}, 1, 0_{-})$ also allows one to find essentially the same features for valley $K'$. In this way the Lamb-shift contributions, though $eh$ symmetric by themselves, work to enhance $eh$ asymmetry in the full PZM spectra $\epsilon^{\rm h}_{j} \pm \epsilon^{\rm v}_{j}$. To see how each level evolves with filling, one has to examine the Coulomb interaction acting within the LLL; we study this in the next section. \section{Coulomb interactions} The Coulomb exchange interaction acting within the PZM sector is written as \begin{equation} V_{\rm X}^{\rm pz} = - \tilde{V}_{c}\, \Gamma^{n' m'}_{\beta\alpha}\, R^{m' n'}_{\alpha\beta;{\bf 0}}, \end{equation} with $\Gamma^{n' m'}_{\beta\alpha} \equiv \sum_{\bf p} v_{\bf p}\gamma_{\bf p}^{2}\, (g^{n' m}_{\bf p})^{*}\,g^{m'n}_{\bf p}\, \nu^{mn}_{\beta \alpha}/\tilde{V}_{c} = \Gamma^{m' n'}_{\alpha\beta}$, where $(m', n')$ and $(m,n)$ are summed over $(0,1_{\pm}, 0_{\pm},1)$. For definiteness we focus on the $u\rightarrow 0$ case, where $g^{mn}_{\bf p}$ and $\Gamma^{n' m'}_{\beta\alpha}$ considerably simplify. Indeed, as noted in Eq.~(\ref{gknU0}), for $u=0$, the charge $\rho_{\bf p}$ takes a valley-symmetric form and, in addition, one has $g^{01_{+}}_{\bf p} =g^{1_{-}1_{+}}_{\bf p} =0$ and $g^{0_{+}0_{-}}_{\bf p}=g^{0_{+}1}_{\bf p} =0$. This structure suggests that, at valley $K$, $1_{+}$ tends to be isolated from $(0, 1_{-})$ which may potentially get mixed; similarly, $0_{+}$ tends to be isolated from $(0_{-},1)$ at valley $K'$. 
Actually, for $u=0$ one finds that \begin{eqnarray} &&\Gamma^{00} = \nu^{00} + \nu^{1_{-}1_{-}}\, c_{1}^{2},\ \nonumber\\ &&\Gamma^{1_{-}1_{-}} = \nu^{00}\, c_{1}^{2} + \nu^{1_{-}1_{-}}\, (1- 2 c_{1}^{2} + 3 c_{1}^{4}), \nonumber\\ &&\Gamma^{01_{-}} = \nu^{01_{-}}\, (1- c_{1}^{2}),\ \Gamma^{1_{+}1_{-}} = \nu^{1_{+}1_{-}}\, (1- c_{1}^{2}), \nonumber\\ &&\Gamma^{1_{+}1_{+}} = \nu^{1_{+}1_{+}}, \ \Gamma^{01_{+}} = \nu^{01_{+}}, \label{GatK} \end{eqnarray} at valley $K$, with obvious spin indices $(\alpha,\beta)$ suppressed. Replacing $(0,1_{-},1_{+}) \rightarrow (0_{-},1,0_{+})$ in the above yields expressions $\Gamma^{K'K'}$ for valley $K'$ and, in an analogous way, mixed-valley components $\Gamma^{KK'}$ as well. One can further use new orbital labels $\hat{n}$ and rename $(0,1_{-},1_{+})$ as $(\hat{1},\hat{2}, \hat{3})$ with valley $a=K$, and $(0_{-}, 1, 0_{+})$ as $(\hat{1},\hat{2}, \hat{3})$ with valley $a=K'$, so that, e.g., $\nu^{00}=\nu^{\hat{1}\hat{1}; KK}$, $\nu^{00_{-}}=\nu^{\hat{1}\hat{1}; KK'}$, etc. Then the exchange interaction $V_{\rm X}^{\rm pz}$ itself is cast into a valley- (and spin-)symmetric form composed of terms like $\nu^{\hat{n} \hat{m};ba}_{\beta\alpha}\, R^{\hat{m}' \hat{n}';ab}_{\alpha\beta;{\bf 0}}$. The PZM levels are now governed by the effective Hamiltonian ${\cal V} \equiv H^{\rm h} +V^{\rm DS}_{\rm X} + V_{\rm X}^{\rm pz}$. Note first that the interaction $V^{\rm DS}_{\rm X} + V_{\rm X}^{\rm pz}$ is symmetric in spin and valley (for $u=0$). Thus ${\cal V}$ is made diagonal in valley and spin if one takes the valley basis $(K, K')$ and the spin basis $(\uparrow, \downarrow)$ of the one-body part $H^{\rm h} \sim \{\epsilon_{n}^{\rm h}\}$. Accordingly, one can treat each valley and spin separately, and diagonalize ${\cal V}$ with respect to the orbital modes $(0, 1_{-}, 1_{+})|^{K}$ and $(0_{-},1, 0_{+})|^{K'}$ for each spin. Let us now discuss how the PZM levels evolve with filling. 
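Before doing so, a quick consistency check connecting Eq.~(\ref{GatK}) with the $eh$-symmetry statement of Sec.~III: at full filling ($\nu^{nn}=1$ for every PZM level of a given spin and valley) the diagonal exchange shifts $-\tilde{V}_{c}\,\Gamma^{nn}$ should equal $-2\epsilon^{\rm v}_{n}$, so that each level moves from $\epsilon^{\rm h}_{n}+\epsilon^{\rm v}_{n}$ down to $\epsilon^{\rm h}_{n}-\epsilon^{\rm v}_{n}$. A sketch (the identity holds for any $c_{1}$; a representative value is used):

```python
# Consistency check: with every PZM level of a given spin and valley
# filled (nu^{nn} = 1), the diagonal exchange shift -Vtilde_c Gamma^{nn}
# from Eq. (GatK) should equal -2 eps^v_n, moving each level from
# eps^h_n + eps^v_n down to eps^h_n - eps^v_n as stated in Sec. III.
# The identity holds for any c1; a representative value is used here.
c1sq = 0.46

# Diagonal Gamma's of Eq. (GatK) at full filling (valley K):
gamma = {
    "0": 1.0 + c1sq,
    "1-": c1sq + (1.0 - 2.0 * c1sq + 3.0 * c1sq ** 2),
    "1+": 1.0,
}
# Twice the u = 0 Lamb-shift ratios of Eq. (Spectrum):
two_eps_v = {
    "0": 1.0 + c1sq,
    "1-": 1.0 - c1sq + 3.0 * c1sq ** 2,
    "1+": 1.0,
}
residuals = {n: gamma[n] - two_eps_v[n] for n in gamma}
print(residuals)  # all three should vanish
```

All three residuals vanish, as the $eh$ symmetry of the Lamb-shift corrections requires.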
For definiteness, we first suppose filling the empty PZM sector with electrons gradually under a fixed magnetic field $B=20$\,T and $u=0$. We consider 6 levels, $(0,1_{\pm},0_{\pm}, 1)$ per spin, and use $0 \le n_{\rm f} \le 6$ to denote the filling factor for this subsector; with electron spins supposed to be unresolved, the PZM sector thereby has the filling factor $\nu = 2 (n_{\rm f}-3)$. (We refer to the case of resolved spins later.) Our focus is on uniform ground states at integer filling. To follow their evolution, we choose to diagonalize the Hamiltonian ${\cal V}$ in the space of uniform states at intermediate filling factors; this serves to visualize how level mixing and crossing take place, as we shall see. One can read from Fig.~3(a) the level spectra of the empty/filled PZM sector, \begin{eqnarray} &&(\ \epsilon_{1}, \ \ \epsilon_{0_{-}},\ \ \epsilon_{0_{+}},\ \epsilon_{1_{-}},\ \ \epsilon_{0},\ \, \epsilon_{1_{+}} ) \nonumber\\ &\stackrel{\rm empty}{\approx}& (33.3, 35.9, 41.4, 44.1, 45.9, 61.4)\ {\rm meV}, \nonumber\\ &\stackrel{\rm filled}{\rightarrow}& \!\!\!\! - (40.6, 55.9, 21.4, 29.8, 45.9, 1.44)\ {\rm meV}, \label{emptypzmspec} \end{eqnarray} where $\epsilon_{n}|^{\rm empty/filled} = \epsilon^{\rm h}_{n} \pm \epsilon^{\rm v}_{n}$ and $\epsilon_{b}=5$. It is seen from these spectra that the $1|^{K'}$ and $0_{-}|^{K'}$ levels potentially cross, as do $0|^{K}$ and $1_{-}|^{K}$. This signals level mixing within each pair. It is the lowest-lying $1|^{K'}$ level that starts to be filled first. As it is filled, it cooperates with the $0_{-}|^{K'}$ level, to which it is paired via the exchange interaction. To clarify how they evolve, one can now try to diagonalize ${\cal V} = H^{\rm h} +V^{\rm DS}_{\rm X} + V_{\rm X}^{\rm pz}$ for the three low-lying levels $(1, 0_{-}, 0_{+})|^{K'}$, and subsequently for those at valley $K$. See the appendix for an analysis. 
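Since $\epsilon_{n}|^{\rm empty/filled} = \epsilon^{\rm h}_{n} \pm \epsilon^{\rm v}_{n}$, the quoted pairs can be split back into one-body and Lamb-shift parts by half-sum and half-difference; a quick sketch checking consistency with the ratios of Eq.~(\ref{Spectrum}):

```python
# Split the quoted empty/filled spectra (B = 20 T, eps_b = 5) back into
# one-body and Lamb-shift parts via eps^{empty/filled}_n = eps^h_n +/- eps^v_n.
levels = ["1", "0-", "0+", "1-", "0", "1+"]
empty  = [33.3, 35.9, 41.4, 44.1, 45.9, 61.4]        # meV
filled = [-40.6, -55.9, -21.4, -29.8, -45.9, -1.44]  # meV

eps_h = [(e + f) / 2.0 for e, f in zip(empty, filled)]
eps_v = [(e - f) / 2.0 for e, f in zip(empty, filled)]

# With Vtilde_c ~ 62.9 meV, eps_v/Vtilde_c should reproduce the ratios
# 0.59, 0.73, 0.50 of Eq. (Spectrum) in the pattern (1, 0-, 0+, 1-, 0, 1+).
ratios = [round(v / 62.9, 2) for v in eps_v]
print(eps_h, eps_v, ratios)
```

The recovered ratios indeed follow the pattern of Eq.~(\ref{Spectrum}), and the one-body part of the $0|^{K}$ level comes out at zero, consistent with its spectra being symmetric about the origin.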
\begin{figure}[thbp] \includegraphics[scale=.7]{fig4.eps} \caption{(Color online) (a) Evolution of level spectra at $B=20\,$T and $\epsilon_{b} =5$ as the PZM sector is filled from $n_{\rm f}=0$ (empty) to $n_{\rm f}=6$ (full); the filling factor $\nu=2(n_{\rm f} - 3) \in [-6,6]$, with the electron spin supposed to be unresolved. Orbital mixing takes place over the intervals $n_{\rm f} \in (0.145, 1.145)$ and $n_{\rm f} \in (3.101, 4.101)$, indicated by colored stars. Thin dotted curves represent the evolution of level spectra if no orbital rotation were allowed. (a')~Evolution of level spectra when the PZM sector is emptied from $n_{\rm f}=6$ to $n_{\rm f}=0$. (b) and (c)~Level spectra for $B=6\, {\rm T}$ and $\epsilon_{b} =5$. (d)~Level mixing is absent for a weaker Coulomb potential with $\epsilon_{b} =10$ and at $B=15\,$T. } \end{figure} Figure 4(a) summarizes the resulting evolution of level spectra. For $0 \le n_{\rm f} \le n_{\rm cr}$ with $n_{\rm cr} \approx 0.145$, only the $1|^{K'}$ level is filled and gets lower in energy along with the (paired) empty $0_{-}|^{K'}$ level. Beyond $n_{\rm cr}$, $1|^{K'}$ is mixed with $0_{-}|^{K'}$ and turns into $0_{-}|^{K'}$ at $n_{\rm f} \approx 1.145$; at the same time, $0_{-}|^{K'}$ turns into $1|^{K'}$. Both $0_{-}|^{K'}$ and $1|^{K'}$ levels are eventually filled up at $n_{\rm f}=2$. At integer filling $n_{\rm f}=1$, the $(1,0_{-})$-mixed levels consist of a filled level of energy $\approx -27.0$ meV and an empty level of energy $\approx 7.0$ meV. A close look into Fig.~4(a) reveals that an ``orbital'' rotation takes place so as to avoid level crossing. The remaining $0_{+}|^{K'}$ level evolves individually, and is filled over the interval $n_{\rm f} \in [2,3]$. For $n_{\rm f} >3$ an analogous process is repeated for $(1_{-},0,1_{+})|^{K}$ levels at the other valley. 
There, mixing of $1_{-}|^{K}$ and $0|^{K}$ takes place over the interval $n_{\rm f} \in (n'_{\rm cr}, 1+ n'_{\rm cr} )$ with $n'_{\rm cr} \approx 3.101$, and avoids level crossing. From Fig.~4(a) one can read off the spectra of the PZM sector at each integer filling factor $\nu \in [-6,6]$. The spectra are $eh$- and valley-asymmetric. Let us now note that, due to this $eh$ asymmetry, the level spectra may evolve in a different pattern when one empties the PZM sector rather than filling it. Indeed, such a difference is clearly seen from Fig.~4(a'), which shows the evolution of level spectra when the filled PZM sector is gradually emptied (i.e., $\nu= 6 \rightarrow -6$) under the same $B=20$\,T. Fig.~4(a') is the result of a direct calculation, but how to draw it becomes clear from a glance at Fig.~4(a). For comparison, see also Figs.~4(b) and 4(c), which show the evolution of level spectra under $B=6\,$T. There the pattern of evolution is uniquely fixed, independent of whether one fills or empties the PZM sector. Lastly, Fig.~4(d) illustrates the case of a weaker Coulomb potential with $\epsilon_{\rm b} =10$ and at $B=15\,$T, corresponding to the level spectra in Fig.~3(b). Here again the pattern of level spectra is uniquely fixed, but, unlike in the above cases, there is no level mixing. In general, the level spectra $\epsilon^{\rm h}_{j} \pm \epsilon^{\rm v}_{j}$ of the empty/filled PZM sector (in Fig.~3) are fixed in advance by specifying the value of magnetic field $B$ at $\nu=-6$ and $\nu=6$, respectively. How the spectra evolve at intermediate filling factors, as we have seen, depends on whether one fills or empties the PZM sector and how one does it, e.g., under fixed $B$ or fixed density $\rho \propto \nu B$. It will be clear from the model calculations above that the $1_{+}|^{K}$ and $0_{+}|^{K'}$ levels evolve individually without mixing with others, while $1_{-}|^{K}$ and $0|^{K}$ move in pairs, as do $(1, 0_{-})|^{K'}$. 
Actually, with this experience, a close look into the empty/filled spectra in Fig.~3 allows one to draw a general idea about how the level spectra evolve under fixed $B$ and $u=0$ (or even for small $u$ as well). For example, the presence or absence of level mixing is inferred from Fig.~3. Level mixing takes place so as to avoid crossing of paired levels $(1_{-}, 0)|^{K}$ or $(1,0_{-})|^{K'}$. As noted in Sec.~II, the ordering of these paired levels, i.e., $\epsilon_{1_{-}}^{\rm h} > \epsilon_{0}^{\rm h}$ and $\epsilon_{1}^{\rm h} > \epsilon_{0_{-}}^{\rm h}$ for $\epsilon_{n}^{\rm h}$, is reversed for the full spectra $\epsilon_{n}^{\rm h}+\epsilon_{n}^{\rm v}$ when the Coulomb potential $\tilde{V}_{c} \sim \alpha/(\epsilon_{b}\ell)$ is strong enough. It is thus this inversion of (empty) paired levels that drives level mixing. Accordingly, with $\epsilon_{\rm b}=5$, mixing of paired levels is necessarily present for almost all values of $B$ in Fig.~3(a), as indeed seen from Figs.~4(a) and 4(b). For Fig.~3(b), i.e., for a weaker potential with $\epsilon_{\rm b} =10$, mixing is present only at low $B \lesssim 7\, $T and is absent at higher $B$, as is the case with Fig.~4(d). When bias $u$ is turned on, mixing of $(1, 0_{-})|^{K'}$ disappears rapidly with increasing $u$, but mixing of $(1_{-}, 0)|^{K}$ tends to persist at low $B$, as verified easily. In the level spectra of Fig.~3, the $0_{+}|^{K'}$ level is relatively isolated upward from the paired levels $(1,0_{-})|^{K'}$, so is $1_{+}|^{K}$ from $(1_{-},0)|^{K}$. It is therefore the lowest-lying $(1, 0_{-})|^{K'}$ pair that is filled first as $n_{\rm f}=0 \rightarrow 2$ (or emptied last as $n_{\rm f}=2 \rightarrow 0$). This leads to a unique $\nu=-2$ ground state [consisting of filled $(1,0_{-})|^{K'}$ levels] with a relatively large $\nu=-2$ level gap, as is evident from Fig.~4. Likewise, an isolated $1_{+}|^{K}$ level leads to a unique $\nu=4$ state with a relatively large gap. 
The ground states at other filling factors, in contrast, vary in composition case by case. In particular, one notices an equally large $\nu=2$ gap and a relatively small $\nu=0$ gap in the spectra of Figs.~4(b) and 4(d), which show essentially the same low-$B$ characteristics of the PZM sector [at $B\lesssim 10\,{\rm T}$ in Fig.~3(a) or $\lesssim 20\,{\rm T}$ in Fig.~3(b)]. Interestingly, in those low-$B$ cases, the $\nu=0$ state (essentially) consists of filled $(0,1,0_{-})$ levels, which is the same in composition as the $\nu=0$ state one naively expects from the one-body spectra $\{\epsilon_{n}^{\rm h}\}$ in Fig.~1 alone. We have so far supposed unresolved electron spins. Note that the exchange interaction acts on pairs of the same spin and valley. Accordingly, if, e.g., in Fig.~4(b), there were two $(1, 0_{-})|^{K'}$ pairs of spin up and down resolved against possible disorder, each pair would repeat the $n_{\rm f}=0 \rightarrow 2$ evolution in the figure over the interval $\nu= - 6 \rightarrow -4 \rightarrow -2$, yielding a $\nu=-4$ gap comparable to the $\nu=-2$ gap. In this way, small spin gaps, if resolved, are equally well enhanced by the interaction, and will modify the evolution patterns in Fig.~4 accordingly. The transport properties of graphene trilayers have been studied in a number of experiments,~\cite{BZZL,KEPF,TWTJ,LLMC,BJVL,ZZCK,JCST,HNE,LVTZ} and some nontrivial features of the LLL of $ABA$-stacked trilayers have been observed. Evidence for the opening of the $\nu=0$ gap comes from early observations~\cite{BZZL,BJVL} of an insulating $\nu=0$ state in both $ABA$ and $ABC$ trilayers. Recent experiments on substrate-supported $ABA$ trilayer graphene by Henriksen {\it et al.}~\cite{HNE} observed a robust $\nu=-2$ Hall plateau and a possible incipient $\nu=2$ or $\nu=4$ plateau under zero bias ($u\sim 0$), and $\nu=\pm2, \pm 4$ plateaus in biased samples. 
Subsequent measurements on dual-gated suspended devices by Lee {\it et al.}~\cite{LVTZ} observed $\nu= \pm 2$ plateaus at low magnetic field $B < 4\, {\rm T}$ and also resolved, in high magnetic fields, additional plateaus at $\nu= \pm 1, \pm 3, -4$, and $- 5$, indicating almost complete lifting of the 12-fold degeneracy of the LLL. Common to these observations, in particular, is $eh$ asymmetry in the sequence of plateaus, with a prominent $\nu=-2$ plateau. The $B=6\, {\rm T}$ and $\epsilon_{b}=5$ case of Fig.~4(b) appears to capture these features seen in experiment at low $B$. For resolved spins, this case will lead to large gaps at $\nu= \pm 2, \pm 4, 0, 3$, and $5$, and relatively small gaps at $\nu=\pm 1, -3$, and $-5$. In our picture, appreciable $eh$ asymmetry is a result of Coulombic enhancement of the $eh$ asymmetry in $H^{\rm h}$, and a large $\nu=-2$ gap is triggered by the valley asymmetry of $H^{\rm h}$ such that ${\rm min}[\epsilon_{0_{-}}^{\rm h}, \epsilon_{1}^{\rm h}]^{K'} < {\rm min}[\epsilon_{0}^{\rm h}, \epsilon_{1_{-}}^{\rm h} ]^{K}$, i.e., the $K'$ valley is relatively lower in spectrum. In general, large level gaps are associated with evolution of orbital modes $(1_{-}, 0)|^{K}$ and $(1, 0_{-})|^{K'}$ of basic filling step $\Delta n_{\rm f} =2$ per spin and evolution of rather independent modes $1_{+}|^{K}$ and $0_{+}|^{K'}$ of step $\Delta n_{\rm f} =1$. This is in sharp contrast to the case of $ABC$ trilayers, where large gaps within the LLL are associated with evolution (actually, mixing) of orbital modes of basic step~\cite{KSLsTL} $\Delta n_{\rm f} =3$ per spin, leading to visible $\nu=0, \pm 3$ plateaus. \section{Summary and discussion} In a magnetic field graphene trilayers develop, as the LLL, a multiplet of twelve nearly-zero-energy levels with a three-fold orbital degeneracy. 
In this paper we have examined the quantum characteristics of this PZM multiplet in $ABA$ trilayers, with the Coulomb interaction and the orbital Lamb shift taken into account. It turned out that $ABA$ trilayers are distinct in zero-mode characteristics from $ABC$ trilayers examined earlier.~\cite{KSLsTL} We have, in particular, seen that both valley and $eh$ symmetries are markedly broken in the LLL of $ABA$ trilayers. These asymmetries appear in the one-body spectra $\{\epsilon^{\rm h}_{n}\}$ already to first order in nonleading hopping parameters (such as $\gamma_{2}$, $\gamma_{5}$ and $\Delta'$), and are enhanced via the Lamb-shift contributions $\{\epsilon^{\rm v}_{n}\}$ and the Coulomb interaction acting within the LLL. In contrast, for $ABC$ trilayers the one-body PZM spectra $\{\epsilon^{\rm h}_{n}\}$ involve, for zero bias $u=0$, only a tiny $eh$ asymmetry of $O(v_{4}) \sim O(v_{4} \omega_{c}/v \tilde{\gamma})$ and no valley breaking~\cite{fntwo} linear in nonleading parameters $(\gamma_{2}, \gamma_{3}, v_{4})$. The Lamb-shift corrections $\{\epsilon^{\rm v}_{n}\}$ add no further breaking (to the leading order). Accordingly, in $ABC$ trilayers the LLL is far less afflicted by $eh$ and valley asymmetries. The PZM levels differ in structure between the two types of trilayers. In $ABA$ trilayers they are composed of the $|0\rangle$ and $|1\rangle$ orbital modes distributed in a distinct way at each valley, as noted in Sec.~II, and in this sense the associated valley asymmetry is intrinsic. In contrast, in $ABC$ trilayers these levels are characterized by the $|0\rangle$, $|1\rangle$ and $|2\rangle$ orbital modes residing predominantly on one of the outer layers,~\cite{KSLsTL} with the two valleys related symmetrically [by layer and site interchange $(A_{1}, B_{1}) \leftrightarrow (B_{3}, A_{3})]$. The two types of trilayers substantially differ in the way the Coulomb interaction acts within the LLL. 
They thus differ in the way large level gaps or the associated conductance plateaus appear within the LLL, with $ABA$ trilayers having basic filling steps of $\Delta n_{\rm f} =(2,1)$ and $ABC$ trilayers having a step of $\Delta n_{\rm f} =3$. Interlayer bias $u$ also acts quite differently on the two types of trilayers. For $ABC$ trilayers, $u$ acts oppositely at the two valleys and enhances valley gaps. In contrast, for $ABA$ trilayers, it works to further split $\epsilon_{0_{\pm}}|^{K'}$ (and $\epsilon_{1_{\pm}}|^{K}$), i.e., enhance orbital breaking at each valley. The orbital Lamb shift is a many-body vacuum effect but is intimately correlated with the Coulomb interaction acting within the multiplet. This is clear if one notes that the filled PZM sector and the empty one, both subject to quantum fluctuations of the filled valence band, differ by the amount of this Coulomb interaction. It will be clear now why this vacuum effect, though it could easily be overlooked if one naively relies on the Coulomb interaction projected to the LLL alone, has to be properly taken into account in every attempt to explore the PZM sector in graphene few-layers.~\cite{Toke} The $eh$ and valley asymmetries inherent to $ABA$ trilayers substantially modify the electron and hole spectra within the LLL. The sequence of broken-symmetry states, observable via the quantum Hall effect, is thereby both $eh$- and valley-asymmetric and can change in pattern, depending on how one fills or empties the LLL. We have presented some model calculations in Sec.~IV, assuming a typical set~(\ref{Triparameter}) of parameters and $\epsilon_{b}$. They are intended to illustrate what would generally happen when the orbital Lamb shift and Coulomb interactions are properly taken into account. They will also be a good base point for a more elaborate analysis when more data on graphene trilayers become available via future experiments. 
\acknowledgments This work was supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture of Japan (Grant No. 24540270).
\section{Introduction} \setcounter{footnote}{1} Mirror symmetry of Calabi-Yau manifolds is one of the remarkable predictions of type II string theory. The $(2,2)$ superconformal field theory associated with a string propagating on a Ricci-flat K\"ahler manifold has a $U(1)_L\times U(1)_R$ R-symmetry group, and the Hodge numbers of the manifold correspond to the charges of $(R,R)$ ground states under the R-symmetry. There is a symmetry in the conformal field theory which is a flip of the sign of the $U(1)_L$ current, $J_L\rightarrow -J_L$. If physically realized, this symmetry implies the existence of pairs of manifolds $({\cal M},{\cal W})$, which have ``mirror'' Hodge diamonds, $H^{p,q}({\cal M})=H^{d-p,q}({\cal W}),$ and give rise to exactly the same superconformal field theory. While this observation predicts the existence of mirror pairs of Calabi-Yau $d$-folds, it is not constructive: one would like to know how to find ${\cal W}$ if one is given a Calabi-Yau manifold ${\cal M}$. The mirror construction has been proposed on purely mathematical grounds by Batyrev for Calabi-Yau manifolds which can be realized as complete intersections in toric varieties. In \cite{SYZ}, Strominger, Yau and Zaslow argued that, for mirror symmetry to extend to a symmetry of non-perturbative string theory, $({\cal M},{\cal W})$ must be $T^{d}$ fibrations, with fibers which are special Lagrangian, and furthermore, that mirror symmetry is $T^{d}$-duality of the fibers. The argument of SYZ is local, and is very well understood only for smooth fibrations. To be able to fully exploit the idea, one must understand degenerations of special Lagrangian tori, or more generally, limits in which the Calabi-Yau manifold itself becomes singular. Some progress on how this is supposed to work has been made in \cite{SYZ}, \cite{VafaLeung}, \cite{mina3}, and local mirror symmetry appears to be the simplest to understand in this context. 
In an a priori unrelated development initiated by \cite{mirror3d}, `mirror pairs' of gauge theories were found. In the field theory context, a duality in the sense of IR equivalence of two gauge theories is usually referred to as a `mirror symmetry' if \begin{itemize} \item the duality swaps the Coulomb and Higgs branches of the theories, trading FI terms for mass parameters, \item the R-symmetry of the gauge theory has the product form $G_L \times G_R$ and the duality swaps the two factors. \end{itemize} The example of \cite{mirror3d} studies ${\cal N}=4$ SUSY theories in 3 dimensions. Recently, it was shown \cite{KS} that this duality can even be generalized to an equivalence of two theories at all scales, a relation that will prove to be crucial for our applications. The purpose of this note is to obtain geometrical mirror pairs by a `worldsheet' construction. Since there seems to be little hope that one can do so directly in the non-linear sigma model (NL$\sigma$M), we study linear sigma models \cite{phases} which flow in the IR to non-linear sigma models with Calabi-Yau's as their target spaces. String theory offers a direct physical interpretation of these models as the world-volume theory of a D-string probe of the Calabi-Yau manifold. We find gauge theory mirror duality for the linear sigma model which reduces to geometric mirror symmetry for Calabi-Yau target spaces. Using brane constructions T-dual to the D1 brane probe on ${\cal M}$ one can easily construct gauge theories which flow to mirror manifolds. We will show that two different dualities of $d=2$, ${\cal N}=(2,2)$ supersymmetric gauge theories arise: one has a realization in string theory as `S-duality\footnote{ where by S-duality in type IIA setups we mean a flip of the 2 and the 10 direction}' of `interval' brane setups. This duality is a consequence of mirror symmetry of $d=3$, ${\cal N}=2$ field theories where both the Coulomb branch and the Higgs branch are described by non-compact Calabi-Yau manifolds. 
This duality in $d=3$ maps a Calabi-Yau manifold to an identical one, so while it is a non-trivial field theory statement, this does not provide a linear $\sigma$-model construction of the mirror Calabi-Yau manifold. There exists another $d=2$, ${\cal N}=(2,2)$ field theory duality obtained as S-duality of the diamond brane constructions of \cite{mina3}. This duality maps a theory whose Coulomb branch is dual to the Calabi-Yau manifold ${\cal M}$ in the ``boring'' way described above, to a theory whose Higgs branch is the mirror manifold ${\cal W}$. The composition of these two dualities, therefore, flows to Calabi-Yau mirror symmetry. While we consider in this note a very particular family of non-compact Calabi-Yau manifolds, the generalization to arbitrary affine toric varieties is possible. The organization of the note is as follows: In the next section we discuss different possible dualities in two dimensions as obtained via brane constructions for the case of the conifold. Section three generalizes this discussion to sigma models built from branes dual to more general non-compact CY manifolds and to the non-abelian case, describing $N$ D-brane probes on the singular CY. In the last section we present a detailed study of the moduli space and argue that the composition of the two dualities is Calabi-Yau mirror symmetry. \section{Mirror duality in 2d gauge theories} \subsection{From three to two dimensions} The original $D=3$, ${\cal N}=4$ mirror symmetry of \cite{mirror3d} also implies, upon compactification, a duality relation in ${\cal N}=(4,4)$ theories in 2 dimensions, as noted in \cite{sethim,brodiem}. The recent results of \cite{KS} are needed to make this precise. The nature of this duality with 8 supercharges will teach us how to understand the ${\cal N}=(2,2)$ examples. 
Since in 2d the concept of a moduli space is ill-defined, equivalence of the IR physics does not require the moduli spaces and metrics to match point by point, but only that the NL$\sigma$Ms on the moduli space\footnote{or in the non-compact CY examples we are considering the two disjoint CFTs of Coulomb and Higgs branch \cite{higgsbranch, comments}} are equivalent, as we will see in several examples. Start with the 3d theory compactified on a circle. This is the setup analyzed in \cite{diaconescu}. It is governed by two length scales, $g^{-2}_{YM}$, the 3d Yang-Mills coupling, and $R_2$, the compactification radius. To flow to the deep IR is equivalent to sending both length scales to zero. However, physics might still depend on the dimensionless ratio $$\gamma=g^2_{YM} \; R_2. $$ As shown in \cite{diaconescu}, while the Higgs branch metric is protected, the Coulomb branch indeed does depend on $\gamma$. For $\gamma\gg 1$ we first have to flow into the deep IR in 3d and then compactify, resulting in a 2d NL$\sigma$M on the 3d quantum-corrected Coulomb branch. The resulting target space is best described in terms of the dual photon in 3d, a scalar of radius $\gamma$. For `the mirror of the quiver' (U(1) with $N_f$ electrons) it turns out to be an ALF space with radius $\gamma$. For small $\gamma$ we should first compactify, express the theory in terms of the Wilson line, a scalar of radius $\frac{1}{\gamma}$, and obtain as a result a tube metric with torsion, corresponding to the metric of an NS5 brane on a transverse circle of radius $\frac{1}{\gamma}$ \cite{diaconescu,seibergsethi}. Indeed, these two NL$\sigma$Ms are believed to be equivalent \cite{NSdual} and exchanging the dual photon for the Wilson line amounts to the T-duality of NS5 branes and ALF space in terms of the IR NL$\sigma$M \footnote{This picture is obvious from the string theory perspective. 
Studying a D2 D6 system on a circle, going to the IR first lifts us to an M2 brane on an ALF space, which becomes a fundamental string on the ALF, while going to 2d first makes us T-dualize to D1 D5, leaving us with the $\sigma$ model of a string probing a 5-brane background.}. In order to obtain a linear $\sigma$-model description of this scenario, one has to use the all scale mirror symmetry of \cite{KS}. They show that $g^2_{YM}$ maps to a Fermi type coupling in the mirror theory, or more precisely: one couples the gauge field via a BF coupling to a twisted gauge field, the gauge coupling of the twisted gauge field being baptised the Fermi coupling. For the case of the quiver theory with Fermi coupling, one obtains the same ALF space, this time on the Higgs branch. In the same spirit we will present two different dualities for ${\cal N}=(2,2)$ theories. \subsection{Mirror symmetry from the interval} One way to `derive' a field theory duality is to embed the field theory into string theory; the field theory duality is then a consequence of string duality. A construction of this sort was implemented in \cite{HW} for the ${\cal N}=4$ theory in $d=3$ via brane configurations. One uses an interval construction with the 3 basic ingredients: NS5 along 012345, D5 along 012789 and D3 along 0126. The two R-symmetries are $SU(2)_{345}$ and $SU(2)_{789}$. D3 brane segments between NS5 branes give rise to vector multiplets, with the 3 scalars in the 3 of $SU(2)_{345}$. D3 brane segments between D5 branes are hypermultiplets, with the four scalars transforming as 2 doublets of $SU(2)_{789}$. Under S-duality the D5 branes turn into NS5 branes and vice versa, while D3 branes stay invariant. One obtains the same kind of setup but with D5 and NS5 branes interchanged. S-duality of type IIB string theory is mirror symmetry in the gauge theory\footnote{To be precise, the S-dual theory will really contain a gauge theory of twisted hypers coupled to twisted vectors.
If in addition one performs a rotation taking the 345 into the 789 space, the two R-symmetries are swapped and the theory is written in terms of vectors and hypers.}. Now let us move on to the 2d theories. The brane realization of this duality is via an interval theory in IIA with NS and NS' branes and D2 branes along 016 \cite{amihori}. The IIA analog of S-duality, the 2-10 flip, takes this into D4 and D4' branes. The following parameters define the interval brane setup and the gauge theory: \begin{itemize} \item The separation of the NS and NS' brane along 7 is the FI term. It receives a complex partner, the 10 separation, which maps to the 2d theta angle. \item The separation of the D4 branes along 2 and 3 gives twisted masses to the flavors. \end{itemize} Mirror symmetry maps the FI term to the twisted masses. A twisted mass sits in a background vector multiplet and has to be contrasted with the standard mass from the superpotential, which sits in a background chiral multiplet. Like the real mass in ${\cal N}=2$ theories in $d=3$, it arises from terms like $$ \int d^4 \theta \, Q^{\dagger} e^{V_B} Q, $$ where $V_B$ is a background vector multiplet. \noindent {\bf An Example:} As an example let us discuss the interval realization of the small resolution of the conifold. As shown in \cite{uranga,dandm}, by performing $T_6$ T-duality on a D-string probe of the conifold we get an interval realization of the conifold gauge theory in terms of an elliptic IIA setup with D2 branes stretched on a circle with one NS and one NS' brane. In this IIA setup the separation of the NS branes in 67 is the small resolution, while turning on the diamond mode would be the deformation of the conifold. The gauge group on the worldvolume of the D-string on the conifold is \cite{klebwit} a $U(1) \times U(1)$ gauge group with 2 bifundamental flavors $A_1$, $A_2$, $B_1$ and $B_2$.
We can factor out the decoupled center of mass motion, the diagonal $U(1)$, which does not have any charged matter and hence is free. We are left with an interacting $U(1)$ with 2 flavors. The scalar in the decoupled vector multiplet is the position of the D1 brane in the 23 space transverse to the conifold. While the Coulomb branch describes separation into fractional branes, the Higgs branch describes motion on the internal space and reproduces the conifold geometry. The complexified blowup mode for resolving the conifold is the FI term and the $\theta$ angle. After the 2-10 flip, the dual brane setup is again an elliptic model, this time with one D4 and one D4' brane. The gauge theory is a single ${\cal N}=(8,8)$ $U(1)$ from the D2 brane with 2 additional ${\cal N}=(2,2)$ matter flavors from the D4 and D4' brane. That is, we have \begin{itemize} \item 3 `adjoints', that is singlet fields $X$, $Y$ and $Z$, and \item matter fields $Q$, $\tilde{Q}$, $T$ and $\tilde{T}$ with charges $+1$, $-1$, $+1$, $-1$. \item They couple via a superpotential $$W=Q X \tilde{Q} + T Y \tilde{T} .$$ \item The singlet $Z$ is decoupled and corresponds to the center of mass motion. \end{itemize} Turning on the FI term and the $\theta$ angle in the original theory is a motion of the NS brane along the 7 and 10 direction respectively. It maps into a 23 motion for the D4 brane, giving a twisted mass to $Q$ and $\tilde{Q}$. This analysis can also be performed by going to the T-dual picture of D1 branes probing D5 branes intersecting in codimension 2, that is over 4 common directions. Aspects of this setup and its T-dual cousins in various dimensions have already been studied by numerous authors, e.g. for the D3 D7 D7' system in \cite{sengp} or for the D0 D4 D4' system in \cite{044}. The resulting gauge theory agrees with what we have found by applying the standard interval rules.
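As a quick cross-check of the statement that the Higgs branch reproduces the conifold geometry, one can verify that the gauge invariants built from the bifundamentals satisfy the conifold equation $xy=uv$. A minimal numerical sketch (the variable names are ours):

```python
# Sketch: Higgs-branch invariants of the interacting U(1) with bifundamentals
# A1, A2 (charge +1) and B1, B2 (charge -1) obey the conifold equation.
# Integer sample values stand in for arbitrary vevs.
A1, A2, B1, B2 = 3, 5, 7, 11

x = A1 * B1   # the four gauge-invariant "mesons"
y = A2 * B2
u = A1 * B2
v = A2 * B1

assert x * y == u * v   # the conifold relation x y = u v
```

The identity holds for any vevs, so the invariants sweep out exactly the conifold hypersurface.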
\subsection{Twisted mirror symmetry from diamonds} A second T-dual configuration for D1 brane probes of singular CY manifolds is D3 branes ending on a curve of NS branes, called diamonds in \cite{mina3}. These setups are the $T_{48}$-duals of D1 brane probes of the ${\cal C}_{kl}$ spaces. Indeed it was this relation that allowed us to derive the diamond matter content to begin with \cite{mina3}. In order to use the diamond construction to see mirror symmetry, we use S-duality of string theory, as in the original work of \cite{HW}. Let us first consider the parameters defining a diamond and how they map under S-duality: \begin{itemize} \item the complex parameter defining the NS brane diamond contains the FI term, which is paired up with the 2d $\theta$ angle, \item the S-dual D5-brane diamond is defined by a complex parameter which is derived from a {\it superpotential} mass term. \end{itemize} The FI term and theta angle are contained in a background twisted chiral multiplet. Under the duality this twisted chiral multiplet is mapped to a background chiral multiplet containing the mass term. Since ordinary mirror symmetry mapped FI terms to mass terms in a twisted chiral multiplet, the map of operators under the two versions of the duality will be different. \noindent {\bf An Example:} Let us start once more with the simplest example, the D1 string on the blowup of the conifold. That is, we consider a single diamond, one NS and one NS' brane, on a torus. After S-duality this elliptic model with NS5 and NS5' brane turns into an elliptic model with D5 and D5' brane. Since we have only D-branes in this dual picture, the matter content can be analyzed by perturbative string techniques. As a shortcut, we perform $T_{48}$ duality to the D1 D5 D5' system as in the interval setup. For the special example of the conifold the two possible mirrors do not differ in the gauge and matter content, only in the parameter map.
This will not be the case in the more general examples considered below. As analyzed above, the corresponding dual gauge theory is a $U(1)$ gauge group with 3 neutral fields $X$, $Y$ and $Z$ and two flavors $Q$, $\tilde{Q}$, $T$ and $\tilde{T}$ with charges $+1$, $-1$, $+1$, $-1$ respectively. The superpotential in the singular case is $ W=QX \tilde{Q} + TY \tilde{T}$. By S-duality, as in the NS NS' setup, turning on the D5-brane diamonds corresponds to turning on vevs for the $d=4$ hypermultiplets from the D5 D5' strings. Under ${\cal N}=(2,2)$ supersymmetry these hypermultiplets decompose into background chiral multiplets and hence appear as parameters in the superpotential. If we call those chiral multiplets $h$ and $\tilde{h}$, the corresponding superpotential contributions are \cite{044} $Q h \tilde{T} + \tilde{Q} \tilde{h} T $, so that all in all the full superpotential reads $$ W=QX \tilde{Q} + TY \tilde{T}+ Q h \tilde{T} + \tilde{Q} \tilde{h} T.$$ \section{More mirror pairs} \subsection{Other singular CY spaces} According to the analysis of \cite{uranga,mina3}, D1 brane probes on the blowup of spaces of the form $$G_{kl}: \; \; xy=u^kv^l $$ are $T_6$ dual to an interval setup with $k$ NS and $l$ NS' branes. The gauge group is a $U(1)^{k+l-1}$ with bifundamental matter. It is straightforward to construct interval mirrors via the 2-10 flip in terms of a $U(1)$ with 2 singlets and $k+l$ flavors. The $k+l-1$ complexified FI terms map into the $k+l-1$ independent twisted mass terms (one twisted mass can be absorbed by redefining the origin of the Coulomb branch). Similarly we can construct diamond mirrors for D1 brane probes of $C_{kl}$ spaces, $$ C_{kl}: \; \; xy=z^k, \; \; uv=z^l .$$ The gauge group for the D1 brane probe is $U(1)^{2kl-1}$. The mirror is once more a single $U(1)$ with 2 singlets and $k+l$ flavors. This time $(k+1)(l+1)-3$ complexified FI terms map to superpotential masses.
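The two countings appearing here can be checked for small $k$ and $l$; a trivial sketch (function names are ours):

```python
def fi_terms(k, l):
    # number of FI terms of the U(1)^{2kl-1} D1-brane probe gauge theory on C_{kl}
    return 2 * k * l - 1

def independent_deformations(k, l):
    # independent blowup parameters of C_{kl}, i.e. the rank of the
    # minimal linear sigma model gauge group U(1)^{(k+1)(l+1)-3}
    return (k + 1) * (l + 1) - 3

# conifold (k = l = 1): the two countings agree,
assert fi_terms(1, 1) == independent_deformations(1, 1) == 1
# but already for k = l = 2 the probe theory has redundant FI terms:
assert fi_terms(2, 2) == 7 and independent_deformations(2, 2) == 6
```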
Note that, while the D1-brane gauge theory has $2kl-1$ FI terms, only $(k+1)(l+1)-3$ lead to independent deformations of the moduli space. This is a consequence of the fact that the D1 brane gauge theory is not the minimal linear sigma model of $C_{kl}$, which is just a $U(1)^{(k+1)(l+1)-3}$ (the same phenomenon arises in the case of $\mathbb{C}^3/\Gamma$ orbifolds \cite{dougmor}). \subsection{Generalization to non-abelian gauge groups} Our realization in terms of brane setups gives us for free the non-abelian version of the story, the mirror dual of $N$ D1 branes sitting on top of the conifold. Let us spell out the dual pairs once more in the simple example of the conifold. The generalization to arbitrary $G_{kl}$ and $C_{kl}$ spaces is straightforward. The gauge group on $N$ D1 branes on the blowup of the conifold is \cite{klebwit} $$SU(N) \times SU(N) \times U(1), $$ where we have already omitted the decoupled center of mass VM. The matter content consists of 2 bifundamental flavors $A_{1,2}$, $B_{1,2}$. They couple via a superpotential $$W=A_1 B_1 A_2 B_2 - A_1 B_2 A_2 B_1.$$ The diamond mirror of this theory is a single $U(N)$ gauge group with 3 adjoints\footnote{And here by adjoint we really mean a $U(N)$ adjoint, that is an $SU(N)$ adjoint and a singlet. The singlet $Z$ once more corresponds to the overall center of mass motion and decouples.} $X$, $Y$ and $Z$ and 2 fundamental flavors $Q$, $\tilde{Q}$, $T$, $\tilde{T}$ coupling via the superpotential $$W=X[Y,Z] + Q X \tilde{Q}+ T Y \tilde{T} + h Q \tilde{T} + \tilde{h} \tilde{Q} T,$$ where $h$ and $\tilde{h}$ are the same background parameters determining the diamond as in the abelian case. \section{Geometric mirror symmetry from linear sigma models} The basic conjecture is that applying both dualities successively maps the L$\sigma$M for a given Calabi-Yau to the L$\sigma$M on the mirror.
The parameter map we have presented above implies that the dual theory is formulated in terms of twisted multiplets, realizing the required flip in the R-charge. In order to support our conjecture, let us do the calculation for the single D1 brane probe on a ${\cal C}_{kl}$ space. By construction, the Higgs branch of the gauge theory we start with is the blown-up ${\cal C}_{kl}$ space. The twisted mirror of this theory is a $U(1)$ gauge theory coupled to $k+l$ flavors $Q$, $\tilde{Q}$ and $T$, $\tilde{T}$ and two singlet fields $X$ and $Y$. The superpotential takes the form \begin{equation} \label{eq1} W=\sum_{i=1}^k Q_i (X-a_i) \tilde{Q}^i + \sum_{a=1}^l T_a(Y-b_a) \tilde{T}^a + \sum_{ia} Q_i h^i_a \tilde{T}^a + \tilde{Q}^i \tilde{h}_i^a T_a, \end{equation} where $h$ and $\tilde{h}$ are background hypermultiplets parametrizing the diamonds and the $a_i$ and $b_a$ are the relative positions of the D5 and D5' branes in the D1 D5 D5' picture along 45 and 89 respectively; $\sum a_i = \sum b_a =0$. According to the conjecture we now must find the ordinary mirror of this theory, whose Higgs branch, it is claimed, will be the mirror manifold. Ordinary mirror symmetry derives from 3d mirror symmetry. In three dimensions the Higgs branch of the mirror theory is the same as the quantum corrected Coulomb branch of the original one. For the purpose of computing the mirror of the ${\cal C}_{kl}$ space it therefore suffices to calculate the effective Coulomb branch of the 3d $U(1)$ gauge theory with $k+l$ flavors and the superpotential of eq.(\ref{eq1}). First let us study the classical moduli space.
The D-term equation requires $$\sum_{i=1}^k |Q_i|^2 - |\tilde{Q}^i|^2 + \sum_{a=1}^l |T_a|^2 - |\tilde{T}^a|^2 =0, $$ while the F-term conditions for the $Q$, $T$, $\tilde{Q}$ and $\tilde{T}$ fields are $$ N \left ( \begin{array}{c} Q\\T \end{array} \right ) =0, \;\;\;\;\;\;\; (\tilde{Q}, \tilde{T}) N^T = 0, $$ where $N$ is the $(k+l) \times (k+l)$ matrix \begin{equation*} N= \left ( \begin{array} {c|c} \mbox{diag} \{X-a_1, X-a_2, \ldots, X-a_k \} & h\\ \tilde{h} & \mbox{diag} \{Y-b_1, Y-b_2, \ldots, Y-b_l \} \end{array} \right ). \end{equation*} In addition, the scalar potential contains the standard piece $$2 \sigma^2 (\sum_{i=1}^k |Q_i|^2 + |\tilde{Q}^i|^2 + \sum_{a=1}^l |T_a|^2 + |\tilde{T}^a|^2) $$ from the coupling of the scalar $\sigma$ in the vector multiplet to the matter fields, and the F-terms for $X$ and $Y$. The classical Coulomb branch is three complex dimensional, parametrized by $X$, $Y$ and $\sigma + i \gamma$, where $\gamma$ is the dual photon. Along this branch, $Q$, $\tilde{Q}$, $T$ and $\tilde{T}$ are zero. The Coulomb branch meets the Higgs branch along the curve\footnote{Note that this is the defining equation of the curve the NS5-branes wrap, the diamond \cite{mina3}. It is also the defining equation of the complex structure of the local mirror manifold of the blown-up ${\cal C}_{kl}$, the deformed ${\cal G}_{kl}$, whose defining equation, $UV - \mbox{det} (N)=0$, is obtained by adding the `quadratic pieces', which do not change the complex structure.} $$\mbox{det} (N)=0. $$ Now consider the quantum Coulomb branch. As shown in \cite{many}, the quantum Coulomb branch of a $U(1)$ theory with $N_f=k+l$ flavors has an effective description in terms of chiral fields $V_+$ and $V_-$ and a superpotential $$W_{eff}= - N_f (V_+ V_- \mbox{ det}(M))^{1/N_f}.$$ Here $M$ is the $(k+l) \times (k+l)$ meson matrix $$M =\left ( \begin{array}{cc} Q_i \tilde{Q}^j & Q_i \tilde{T}^b \\ T_a \tilde{Q}^j & T_a \tilde{T}^b \end{array} \right ) .
$$ Far out on the Coulomb branch, $V_{\pm}$ are related to the classical variables via $V_{\pm} \sim e^{\pm 1/g^2 (\sigma + i \gamma)}$. Adding the tree level superpotential eq.(\ref{eq1}), written in the compact form $$\mbox{Tr } (N M),$$ to this effective superpotential, the $M$ F-term equations describing our quantum Coulomb branch read \begin{equation} \label{modul} N_{\beta \gamma} - (V_+ V_-)^{1/N_f} \frac{H_{\beta \gamma}}{\mbox{det}(M)^{ 1-1/N_f} }=0, \end{equation} where $$H_{\beta \gamma} = \frac{\partial \mbox{det}(M)}{ \partial M^{\beta \gamma}}.$$ Taking the determinant of eq.(\ref{modul}) we find that the quantum Coulomb branch is described by the hypersurface $$\mbox{det}(N)=V_+ V_-.$$ This is precisely the mirror manifold of ${\cal C}_{kl}$ \cite{mirror1}. Since the origin $V_+=V_-=X=Y=0$ is no longer part of this branch of moduli space, we arrive at a smooth solution even though we started from the effective superpotential of \cite{many}, which is singular at the origin. Here we have considered only mirror symmetry for ${\cal C}_{kl}$ spaces. Since any affine toric CY can be embedded in ${\cal C}_{kl}$ for sufficiently large $k$ and $l$, mirror symmetry for all such spaces follows by deformation. \section*{Acknowledgements} We would like to thank Jacques Distler, Ami Hanany, Sav Sethi, Matt Strassler and Andy Strominger for useful discussions. \newpage \bibliographystyle{utphys}
\section{Introduction} \label{s:intro} The existence of supermassive black holes (BHs) in the centers of galaxies is well-established \citep{Kormendy95, Richstone, Kormendy01}. Strong links exist between the supermassive BH and host galaxy properties, as evidenced by the BH mass-stellar velocity dispersion ($\Mbh-\sigma$) relation \citep{Ferrarese, Gebhardt}, amongst others. The $\Mbh-\sigma$ relation has been demonstrated not to be a selection effect by \citet{Gultekin11} and to be the strongest correlation between $\Mbh$ and galaxy properties by \citet{Beifiori11}; it has been estimated most recently by \citet{Gultekin} based on 49 $\Mbh$ measurements and 19 $\Mbh$ upper limits, by \citet{Graham} based on 64 $\Mbh$ measurements, and by \citet{Beifiori11} based on 49 $\Mbh$ measurements and 94 $\Mbh$ upper limits. Despite the wealth of observational data, the origin of this relation is not firmly established. There are several theoretical models that explain the origin of the observed correlations between BH mass and galaxy properties. Feedback from BH accretion onto the host galaxy is one proposal \citep{Silk98, Fabian, Burkert01}. Simulations often involve galaxy mergers with strong inflow of gas that feeds the BH, powers the quasar, and expels enough gas to quench both star formation and further fueling of the BH \citep{Kauffmann, Wyithe03, DiMatteo, Murray05, Robertson06, Johansson09, Hopkins09, Silk10}. This feedback from active galactic nuclei (AGN) regulates the BH-galaxy systems and leads to tight BH mass-galaxy property relations. This scenario predicts that $\Mbh$ and $\sigma$ {\it coevolve}. Other mechanisms have been proposed to explain the tight correlation between BH mass and galaxy properties. \citet{Peng07} and \citet{Hirschmann10} construct a model for the origin of the $\Mbh-M_{bulge}$ relation in which mergers do not lead to accretion-based growth of the BH. In this model a tight $\Mbh-M_{bulge}$ relation is established through the central limit theorem.
Recently, \citet{Jahnke11} build on this model by including BH accretion and star formation (these processes are uncorrelated in the model). They conclude that a causal link between galaxy growth and BH growth is not necessary for obtaining the observed BH mass-galaxy property relations. In this article we investigate how galaxies evolve in the $\Mbh-\sigma$ plane, and thereby place constraints on these and other models for the origin and evolution of the $\Mbh-\sigma$ relation. We focus on local galaxies with classical bulges, and investigate the scatter in the $\Mbh-\sigma$ relation as a function of galaxy nuclear luminosity. Our results indicate a scenario where BH mass and $\sigma$ evolve along the $\Mbh-\sigma$ relation, thereby favoring models where BH mass and host stellar spheroids coevolve. \section{Method} \label{s:data} First, we describe the criteria adopted to select our sample and the methods used to analyze the data. We consider the samples compiled by \citet{Gultekin} and \citet{Graham}, which include galaxies with dynamically measured BHs and stellar velocity dispersions. In addition, we include galaxies studied by \citet{Greene}, who measured the central BH mass via masers. We refer to the sum of these samples as our dynamically-based sample. We consider a second set of galaxies with BHs estimated via reverberation mapping \citep{Woo}. We select galaxies with classical bulges when possible. This is done because \citet{Kormendy} observe that $\Mbh$ does not correlate with the properties of galaxy disks or pseudobulges, and \citet{Sani} find smaller intrinsic scatter of BH mass-host galaxy property relations when excluding galaxies with pseudobulges.
According to the bulge classification of the galaxies in \citet{Greene}, which relies on galaxy morphology and stellar population properties, we select the only galaxy (N1194) that has a classical bulge, with nuclear luminosity at $10^{-2.17}~L_{Edd}$, and we also include UGC 3789, which has an unclassified bulge, with nuclear luminosity at $10^{-0.82}~L_{Edd}$. For the other dynamically-based galaxies, we select classical bulge galaxies according to \citet{Sani}, who classify galaxies with classical bulges by selecting galaxies with S\'{e}rsic indices higher than two. We include all the reverberation-based galaxies, as it is difficult to classify the morphology of galaxies with AGN. Next, we estimate the bolometric nuclear luminosity. First we consider the sample of \citet{Greene}, who estimate the nuclear bolometric luminosity for their sample using [OIII], which is strong and ubiquitous in obscured megamaser systems \citep{Kauffmann03, Zakamska03}. In this approach, the [OIII] luminosity is converted to $M_{2500}$, the magnitude at 2500{~\AA}, and then to bolometric luminosity following the $M_{2500}-L$[OIII] relation \citep{Reyes08} and the $M_{2500}$ bolometric correction \citep{Richards06} for unobscured quasars. The total uncertainty introduced is $\sim 0.5$ dex \citep{Liu09}. This is smaller than our smallest bin size (one dex) when we compare the scatter in the $\Mbh-\sigma$ relation with respect to nuclear luminosity. In order to obtain the nuclear luminosity for the \citet{Gultekin} and \citet{Graham} samples, we select galaxies with nuclear luminosity measured in the soft X-ray band by the \textit{Chandra} X-ray observatory \citep{Pellegrini10, Pellegrini05, Zhang, Gonz}. At the lowest luminosity in our sample ($\sim2\times10^{38}{\rm~erg~s^{-1}}$), the central X-ray source has luminosity comparable to that of the X-ray binary population ($\sim10^{38-40}{\rm~erg~s^{-1}}$) \citep{King01}.
The use of \textit{Chandra} data is therefore essential, because it has sufficient angular resolution to isolate the galactic nucleus from bright X-ray binaries. For the reverberation-based sample, we select those with known X-ray luminosity in the NASA/IPAC Extragalactic Database (NED)\footnote{\texttt{http://ned.ipac.caltech.edu/}}. Because these galaxies have nuclear luminosity ($\sim2\times10^{43}{\rm~erg~s^{-1}}$) much higher than that of the X-ray binaries, the X-ray luminosity of the galaxy is dominated by the AGN. Thus, we associate the X-ray luminosity of the galaxy with that of the AGN. Most of the X-ray data are obtained from XMM-Newton \citep{Bianchi09, Markowitz09, Nandra07}, except for Mrk 202 and IC 120, which are obtained from ASCA \citep{Ueda05} and BeppoSAX \citep{Verrecchia} respectively (see Table 2 for detailed properties and references). In order to obtain the X-ray luminosity from X-ray fluxes, we estimate distances assuming $H_0 = 73\rm~km~s^{-1}Mpc^{-1}$, $\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$ for $z>0.01$ galaxies, with redshift measurements obtained from NED, which compiles multiple consistent redshift measurements for each galaxy. For $z<0.01$ galaxies, we obtain distances from the Extragalactic Distance Database, which gives updated best distances for galaxies within $3000 \rm~km~s^{-1}$ \citep{Tully09}. To convert the X-ray luminosity to bolometric luminosity, we first convert the X-ray luminosity of the different bands in the literature to luminosity in the band $2-10\,\rm{keV}$, assuming an energy index of $-1$ ($\nu f_{\nu} = \rm{constant}$), with an uncertainty factor of $\sim2$ (Martin Elvis, private communication). Then, we convert the X-ray luminosity to bolometric luminosity using the bolometric correction for AGNs, $L_{bol}/L_X = 15.8$, with an uncertainty of $\sim0.3$ dex \citep{Ho09}.
Therefore, the nuclear bolometric luminosity is calculated as:\\ \noindent \begin{align} L_{bol}=15.8L_X\frac{\ln(10/2)}{\ln(E_2/E_1)}, \end{align} \noindent where $E_2$ and $E_1$ represent the upper and lower bounds of the observed X-ray band. We include the properties of the BH and host spheroids for our dynamically-based and reverberation-based samples in Table 1 and Table 2, respectively. \begin{table*} \caption{Properties of Dynamically-based sample} \begin{tabular}{|l||l|l|l|l|l|l|} \hline Name & $\sigma$ & $\epsilon_{\sigma}$ & $\log\Mbh$ & $\epsilon_{\log\Mbh}$ & log[$\frac{L_{bol}}{L_{Edd}}$] & ref \\ \ & $\rm{km~s^{-1}}$ & $\rm{km~s^{-1}}$ & $\Msun$ & $\Msun$ & \ & \ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline IC1459 (a) & 340 & 17 & 9.45 & 0.34 & -5.52 & (1) \\ M31 (a) & 160 & 8 & 8.18 & 0.09 & -8.67 & (5) \\ M32 (a) & 75 & 3 & 6.49 & 0.08 & -7.48 & (1) \\ M60 (a) & 385 & 19 & 9.32 & 0.37 & -8.17 & (1) \\ M81 (a) & 143 & 7 & 7.90 & 0.06 & -4.15 & (4) \\ M84 (a) & 296 & 14 & 9.18 & 0.35 & -6.62 & (1) \\ M87 (a) & 375 & 18 & 9.56 & 0.37 & -5.70 & (1) \\ N524 (b) & 253 & 25 & 8.92 & 0.10 & -7.27 & (1) \\ N1194 (c) & 148 & 24 & 7.82 & 0.05 & -2.17 & (6) \\ N2787 (a) & 189 & 9 & 7.63 & 0.04 & -6.35 & (3) \\ N2974 (b) & 227 & 23 & 8.23 & 0.08 & -4.85 & (1) \\ N3115 (a) & 230 & 11 & 8.98 & 0.13 & -7.19 & (1) \\ N3227 (a) & 133 & 12 & 7.18 & 0.32 & -5.12 & (1) \\ N3245 (a) & 205 & 10 & 8.34 & 0.10 & -4.52 & (2) \\ N3377 (a) & 145 & 7 & 8.04 & 0.04 & -6.74 & (1) \\ N3379 (a) & 206 & 10 & 8.08 & 0.33 & -6.90 & (1) \\ N3384 (a) & 143 & 7 & 7.26 & 0.02 & -6.09 & (1) \\ N3414 (b) & 237 & 24 & 8.40 & 0.07 & -7.19 & (1) \\ N3585 (a) & 213 & 10 & 8.53 & 0.08 & -6.54 & (1) \\ N3607 (a) & 229 & 11 & 8.08 & 0.14 & -6.22 & (1) \\ N3608 (a) & 182 & 9 & 8.32 & 0.14 & -7.05 & (1) \\ N4026 (a) & 180 & 9 & 8.32 & 0.08 & -6.85 & (1) \\ N4261 (a) & 315 & 15 & 8.74 & 0.09 & -4.58 & (1) \\ N4459 (a) & 167 & 8 & 7.87 & 0.39 & -6.41 & (1) \\ N4552 (b) & 252 & 25 & 8.68 & 0.07 &
-6.37 & (1) \\ N4596 (a) & 136 & 6 & 7.92 & 0.13 & -6.40 & (2) \\ N4621 (b) & 225 & 23 & 8.60 & 0.07 & -6.62 & (1) \\ N5128 (a) & 150 & 7 & 7.65 & 0.13 & -3.54 & (1) \\ N5813 (b) & 239 & 24 & 8.85 & 0.07 & -5.24 & (1) \\ N5846 (b) & 237 & 9 & 9.04 & 0.04 & -6.70 & (1) \\ N6251 (a) & 290 & 14 & 8.78 & 0.18 & -2.36 & (2) \\ N7457 (a) & 67 & 3 & 6.61 & 0.25 & -5.64 & (1) \\ UGC3789 (c) & 107 & 12 & 7.05 & 0.05 & -0.82 & (6) \\ \hline \end{tabular} \medskip \\ \begin{flushleft} \textbf{Notes.} Column 1: galaxy name and references on $\Mbh$, $\sigma$. Column 2: stellar velocity dispersion ($\sigma$). Column 3: measurement error on $\sigma$. Column 4: black hole mass ($\Mbh$). Column 5: measurement error on $\log \Mbh (\Msun)$. Column 6: nuclear bolometric luminosity in units of the Eddington luminosity. Column 7: X-ray luminosity references.\\ \textbf{References.} \\ $\Mbh$ and $\sigma$ measurements (Column 1.)\\ (a) \citet{Gultekin} (b) \citet{Graham} (c) \citet{Greene} \\ Nuclei X-ray luminosity measurements (Column 7.)\\ (1) \citet{Pellegrini10}; (2) \citet{Gonz}; (3) \citet{Pellegrini05}; (4) \citet{Zhang}; (5) \citet{Li11}; (6) \citet{Greene} \end{flushleft} \end{table*} \begin{table*} \caption{Properties of Reverberation-based sample} \begin{tabular}{|l||l|l|l|l|l|l|} \hline Name & $\sigma$ & $\epsilon_{\sigma}$ & log$\Mbh$ & $\epsilon_{log\Mbh}$ & log[$\frac{L_{bol}}{L_{Edd}}$] & ref \\ \ & $\rm{km~s^{-1}}$ & $\rm{km~s^{-1}}$ & $\Msun$ & $\Msun$ & \ & \ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 3C 120 & 162 & 20 & 7.72 & 0.23 & -0.18 & (5) \\ Ark 120 & 221 & 17 & 8.15 & 0.11 & -0.60 & (4) \\ Mrk 79 & 130 & 12 & 7.70 & 0.16 & -0.99 & (6) \\ Mrk 110 & 91 & 7 & 7.38 & 0.14 & 0.14 & (3) \\ Mrk 202 & 78 & 3 & 6.13 & 0.22 & -0.58 & (2) \\ Mrk 279 & 197 & 12 & 7.52 & 0.15 & -0.52 & (3) \\ Mrk 590 & 189 & 6 & 7.66 & 0.12 & -0.46 & (7) \\ Mrk 1310 & 84 & 5 & 6.33 & 0.17 & -0.30 & (1) \\ Mrk 1383 & 217 & 15 & 9.09 & 0.16 & -1.21 & (3) \\ N3227 & 136 & 4 & 7.60 & 0.24 & -2.06 & (5) \\ N3516 & 181 & 5 & 7.61 & 0.18 & -1.13 & (5) \\ N3783 &
95 & 10 & 7.45 & 0.13 & -0.10 & (3) \\ N4051 & 89 & 3 & 6.18 & 0.19 & -1.05 & (8) \\ N4151 & 97 & 3 & 7.64 & 0.11 & -1.46 & (4) \\ N4253 & 93 & 32 & 6.23 & 0.30 & 0.11 & (3) \\ N4593 & 135 & 6 & 6.97 & 0.14 & -1.07 & (4) \\ N5548 & 195 & 13 & 7.80 & 0.10 & -0.99 & (4) \\ N6814 & 95 & 3 & 7.25 & 0.12 & -1.52 & (4) \\ N7469 & 131 & 5 & 7.06 & 0.11 & -0.32 & (9) \\ \hline \end{tabular} \medskip\\ \begin{flushleft} \textbf{Notes.} Column 1: galaxy name. Column 2: stellar velocity dispersion ($\sigma$). Column 3: measurement error on $\sigma$. Column 4: black hole mass ($\Mbh$). Column 5: measurement error on $\log \Mbh (\Msun)$. Column 6: nuclear bolometric luminosity in units of the Eddington luminosity. Column 7: X-ray luminosity references. \\ \textbf{References.} \\ Nuclei X-ray luminosity measurements and instruments (Column 7.)\\ (1) \citet{Verrecchia}, BeppoSAX; (2a) \citet{Bianchi09}, XMM; (2b) \citet{Markowitz09}, XMM; (2c) \citet{Nandra07}, XMM; (3) \citet{Ueda05}, \emph{ASCA} \\ Local galaxy (z$<$0.01) distance measurements: N3227, N3516, N3783, N4051, N4151, N4593: \citet{Tully09}; \end{flushleft} \end{table*} In reality the correction factor, $L_{bol}/L_X$, depends on the nuclear luminosity: low luminosity AGNs tend to be ``X-ray-loud'' \citep{Ho99, Ho09}. In other words, a lower X-ray nuclear luminosity corresponds to an even lower bolometric nuclear luminosity, and vice versa. Note that this additional complexity does not change the ordering of galaxies with respect to their nuclear luminosity. Since we compare the properties of lower nuclear luminosity galaxies to those of higher nuclear luminosity galaxies without computing the exact bolometric luminosity, our conclusion is not affected by assuming a constant bolometric correction factor. Because the bolometric correction factor only introduces an uncertainty of $\sim0.3$ dex \citep{Ho09}, the total uncertainty of the bolometric luminosity is less than the smallest bin width, one dex.
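The band rescaling and bolometric correction described above can be implemented directly; a minimal sketch (the function name is ours):

```python
import math

def bolometric_luminosity(l_x, e1_keV, e2_keV):
    """Convert an X-ray luminosity observed in the band [e1_keV, e2_keV] to a
    bolometric luminosity, per Eq. (1): rescale to the 2-10 keV band assuming
    nu*f_nu = constant (energy index -1, factor ~2 uncertainty), then apply
    the AGN bolometric correction L_bol/L_X = 15.8 (~0.3 dex uncertainty)."""
    return 15.8 * l_x * math.log(10.0 / 2.0) / math.log(e2_keV / e1_keV)

# For a luminosity already quoted in the 2-10 keV band the rescaling is trivial:
# bolometric_luminosity(L_X, 2.0, 10.0) == 15.8 * L_X
```

Since both uncertainties are below the one-dex bin width used later, this level of approximation suffices for ordering galaxies by nuclear luminosity.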
Therefore, we expect our results to be largely unaffected by the uncertainties in nuclear bolometric luminosity. In total, we have 38 galaxies with dynamically measured BH mass and 17 galaxies with reverberation-mapping BH mass measurements in our sample. For the dynamically-based sample, the range of the nuclear luminosity is limited because the nuclear luminosity is difficult to measure for low nuclear luminosity galaxies and the BH mass is difficult to estimate dynamically for high nuclear luminosity galaxies. The reverberation mapping measurements are normalized by setting a constant virial coefficient so that the reverberation mapping-based and dynamically-based $\Mbh-\sigma$ relations agree. The assumption of a constant virial coefficient could potentially introduce a larger scatter for the reverberation-based sample. In addition, the virial coefficient may depend on the nuclear luminosity. If so, choosing a constant virial coefficient may introduce a BH mass uncertainty that depends on the nuclear luminosity. This could affect our result. In order to mitigate this potential bias, we do not rely heavily on comparing galaxies across our two samples. With $\Mbh$, $\sigma$ and nuclear luminosity in hand, we are now in a position to consider the scatter in the $\Mbh-\sigma$ relation with respect to nuclear luminosity. In order to obtain the scatter, we perform a linear fit to the $\Mbh-\sigma$ relation by minimizing $\chi^2$ for the dynamically-based sample, as done by \citet{Tremaine}. Our best-fit parameters are consistent with those in the literature: $\log(\Mbh / \Msun) = 8.27 + 4.05\log (\sigma/200 \,\rm{km~s^{-1}})$, with an intrinsic scatter of 0.27. 
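For reference, the best-fit relation quoted above can be evaluated directly; a sketch (the function name is ours):

```python
import math

def log_mbh_from_sigma(sigma_kms):
    # Best-fit M_BH-sigma relation for the dynamically-based sample:
    # log(M_BH / M_sun) = 8.27 + 4.05 * log(sigma / 200 km/s)
    return 8.27 + 4.05 * math.log10(sigma_kms / 200.0)

# At the pivot, sigma = 200 km/s, the relation returns the normalization 8.27.
```

A galaxy's offset from this prediction, normalized by its measurement uncertainties, gives the scatter quantity defined next.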
The scatter between the measured $\Mbh$ for an individual galaxy and the $\Mbh-\sigma$ relation, in units of the uncertainty, is calculated as: \noindent \begin{align} scatter=\frac{\Delta M_{BH}}{\sqrt{\epsilon_{M}^2+b^2\epsilon_{\sigma}^2}}, \end{align} \noindent where $\Delta M_{BH}$ is the difference between the measured BH mass and the BH mass corresponding to the fitted $\Mbh-\sigma$ relation, $\epsilon_{M}$ is the measurement uncertainty of the BH mass, $\epsilon_\sigma$ is the measurement uncertainty of the velocity dispersion, and $b$ is the slope of the $\Mbh-\sigma$ relation. For a fixed slope of the $\Mbh-\sigma$ relation, the scatter in BH mass determines the scatter in $\sigma$ ($scatter_{BH}=b \times scatter_{\sigma}$). Measuring the scatter in BH mass makes it more convenient to compare with the literature, where the intrinsic scatter in BH mass is discussed (e.g. \citealt{Tremaine}). \section{Results} \label{s:res} In Figure 1 we plot the $\Mbh-\sigma$ relation for our sample. In order to compare the scatter in the $\Mbh-\sigma$ relation with the nuclear luminosity, we color code the symbols by the nuclear luminosity level, $\log L_{bol}/L_{Edd}$. Because the reverberation mapping method has a higher uncertainty in BH mass (since a constant virial coefficient is adopted), the scatter in the reverberation-based sample is slightly larger than that in the dynamically-based sample. It is apparent already from this figure that there is no strong correlation between the $\Mbh-\sigma$ scatter and nuclear luminosity within each sample. \begin{figure} \begin{center} \resizebox{3.5in}{!}{\includegraphics{Figure1.eps}} \caption{$\Mbh-\sigma$ relation for galaxies with classical bulges and dynamically-based BH masses (stars), and reverberation-based masses (circles). The color bar indicates the nuclear luminosity levels, $\log L_{bol}/L_{Edd}$.
This figure demonstrates visually that there is no strong correlation between the scatter and the nuclear luminosity.} \label{fig:f1} \vspace{0.1cm} \end{center} \end{figure} In order to investigate the dependence of the scatter on nuclear luminosity quantitatively, we show in Figure 2 the scatter versus the nuclear luminosity (top panel) and the average scatter in bins of the nuclear luminosity level (bottom panel). To keep track of the standard deviation of the scatter in each nuclear luminosity bin, we plot the standard deviation in each bin as dashed vertical error bars. We choose bins so that the number of galaxies per bin is comparable, in order to minimize Poisson noise. Specifically, the numbers of galaxies in the bins are 13, 13, 12, 9, and 8. Notice that we do not mix the dynamically-based and reverberation-based samples. The scatter and the standard deviation remain approximately constant as the nuclear luminosity increases. This is the basic result of this article. \begin{figure} \begin{center} \mbox{\includegraphics[width=3.5in]{Figure2a.eps}}\\ \mbox{\includegraphics[width=3.5in]{Figure2b.eps}} \caption{Scatter (top panel) and binned scatter (bottom panel) in the $\Mbh-\sigma$ relation as a function of galaxy nuclear luminosity. The measurement error is taken to be the quadrature sum of the measurement uncertainties in $\Mbh$ and $\sigma$, where the measurement uncertainty in $\sigma$ is scaled to $\Mbh$ (see the scatter definition in equation (2)). Galaxies with dynamical BH measurements are represented by stars, and those with reverberation measurements are represented by circles. The solid vertical error bars indicate the error on the mean scatter in each bin; the dashed vertical error bars indicate the standard deviation of the scatters in each bin; the horizontal error bars indicate the standard deviation of the nuclear luminosity levels in each bin.
It is clear from this figure that the scatter is independent of nuclear luminosity.} \label{fig:f2} \vspace{0.1cm} \end{center} \end{figure} This result can be seen another way by considering the timescales that govern the growth of the BH mass and the dynamical time. We define the BH growth timescale as: \begin{align} t_{acc} \nonumber &\equiv \frac{\Mbh}{\dot M_{\rm BH}}\\ &= 4\times10^7 \bigg( \frac{\epsilon}{0.1} \bigg) \bigg(\frac{L_{bol}}{L_{Edd}}\bigg)^{-1} {\rm yr}, \end{align} where $\epsilon$ is the radiative efficiency, which we take to be 0.1, and $L_{Edd} = 3.5\times10^4 (\frac{\Mbh}{M_{\odot}}) L_{\odot}$ is the Eddington luminosity. The dynamical time is defined via: \begin{align} t_{dyn} &\equiv\frac{R_e}{\sigma}, \end{align} where $R_e$ is the effective radius of the galaxy. Because we are interested in the two timescales when the BH is accreting rapidly, we set an arbitrary threshold ($4\times10^{-3}L_{Edd}$) and select galaxies with nuclear bolometric luminosity higher than this value. Our result is independent of this threshold. This leaves us with 3 dynamically-based measurements, and all the reverberation-based measurements. Then, to calculate $t_{dyn}$, we obtain the effective radii of these galaxies from the literature \citep{Sani, Bentz, Lauer, Marconi}. For the galaxies with no measured effective radii in the literature (N1194, UGC3789, Mrk 202 and N4253), we use the galaxy radius calculated as follows. We estimate the radius of each galaxy by multiplying its angular radius by its angular-size distance. We obtain the angular radii from the 2MASS isophotal measurements, with the reference level of the radii set at $20~\rm K-band~magnitude~arcsec^{-2}$. The angular-size distance is calculated assuming $H_0 = 73 \rm~km~s^{-1}Mpc^{-1}$, $\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$, as the four galaxies are all $z>0.01$ galaxies.
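For concreteness (an illustrative sketch only; the example numbers are not taken from the sample), the two timescales defined above can be evaluated as:

```python
KM_PER_KPC = 3.086e16     # kilometres in one kiloparsec
SEC_PER_YR = 3.156e7      # seconds in one year

def t_acc_yr(l_over_ledd, eps=0.1):
    """BH growth timescale t_acc = M_BH / Mdot_BH in years, as defined above."""
    return 4e7 * (eps / 0.1) / l_over_ledd

def t_dyn_yr(r_e_kpc, sigma_kms):
    """Dynamical time t_dyn = R_e / sigma in years (R_e in kpc, sigma in km/s)."""
    return (r_e_kpc * KM_PER_KPC / sigma_kms) / SEC_PER_YR

# Near the Eddington limit the two timescales become comparable:
print(t_acc_yr(1.0))            # 4.0e7 yr
print(t_dyn_yr(2.0, 150.0))     # roughly 1.3e7 yr
```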
The redshifts of the galaxies are obtained from NED, which compiles multiple consistent redshift measurements for each galaxy. In Figure 3 we plot the ratio of $t_{acc}$ to $t_{dyn}$ as a function of the nuclear luminosity in units of the Eddington luminosity. We fit a straight line and find that the slope of the line is $-0.94$ and that the ratio approaches unity when the nuclear luminosity approaches the Eddington limit. As discussed in section 2, the bolometric correction factor depends on the nuclear luminosity. By fixing a constant correction factor, the nuclear bolometric luminosity is probably underestimated at high luminosity by roughly a factor of 2. This causes $t_{acc}$ to be overestimated at high luminosity by the same factor. Thus, when plotting the ``true'' $t_{acc}$ versus the ``true'' luminosity, which are overestimated and underestimated respectively by the same factor, the points in Figure 3 should shift to the left along our observed trend. Therefore, the trend in Figure 3 is robust against the uncertainties in nuclear luminosity. \begin{figure} \begin{center} \resizebox{3.5in}{!}{\includegraphics{Figure3.eps}} \caption{Ratio of the instantaneous accretion timescale to the dynamical timescale as a function of bolometric nuclear luminosity in units of the Eddington luminosity. The black line indicates the linear fit of the relation, which has a slope of $-0.94$. The fact that the ratio of timescales approaches unity as the luminosity approaches Eddington implies that BH mass and $\sigma$ coevolve.} \label{fig:f3} \vspace{0.1cm} \end{center} \end{figure} \section{Discussion} \label{s:dis} The relationship between the scatter in the $\Mbh-\sigma$ relation and the accretion rate of the BH puts interesting constraints on how galaxies evolve in the $\Mbh-\sigma$ plane. For example, if BH growth and host stellar spheroid growth were uncorrelated, then we would expect galaxies with rapidly accreting BHs to lie systematically off of the $\Mbh-\sigma$ relation.
In contrast, if BHs and spheroids coevolve, then we would expect no correlation between the scatter in $\Mbh-\sigma$ and BH activity, which is precisely what we observe. Our results therefore favor the scenario wherein spheroids and BHs coevolve along the $\Mbh-\sigma$ relation. In support of our conclusion, we also find that the dynamical time of the galaxy becomes comparable to the BH accretion timescale when the nuclear luminosity approaches the Eddington luminosity (Figure 3). We can understand how galaxies populate the space in Figure 3 in light of our results. Galaxies cannot lie in the upper right region of Figure 3, because otherwise the BH would grow much more slowly than $\sigma$, resulting in a correlation between scatter and nuclear luminosity, which is not observed. We can also understand why there are no galaxies in the lower left region of Figure 3: there is no limit to how long $t_{acc}$ can be (because the accretion rate can be arbitrarily close to zero), while there is a limit to how small $t_{dyn}$ can be. So $t_{acc}/t_{dyn}$ can increase at lower $L/L_{Edd}$. For systems experiencing rapid growth in the BHs, the growth timescale of the BH is comparable to the dynamical time, supporting the idea that galaxies evolve along the $\Mbh-\sigma$ relation. As a further extension, we also consider the intrinsic scatter of galaxies with $\Mbh$ measured by the virial method \citep{Xiao, Shen} for high redshift AGN. We find that the intrinsic scatter of these galaxies is similar to the intrinsic scatter of the galaxies with BH masses measured by the reverberation-mapping method. The virial method uses the radius-luminosity relation to estimate the radius of the broad-line region, and then uses the reverberation mapping formalism to measure BH masses. Thus, the mass of the BH has an even larger uncertainty. Barring these additional uncertainties, this suggests that high-z galaxies may also evolve along the $\Mbh-\sigma$ relation.
Our result is inconsistent with models that predict a non-causal origin of the BH mass-galaxy property relations. As discussed in \citet{Jahnke11}, in a non-causal origin model, the BH mass growth rate is uncorrelated with the growth of $\sigma$, and the BH mass-galaxy property relations converge only through the central limit theorem. Thus, when a given BH is growing its mass at high nuclear luminosity, $\sigma$ does not ``catch up'' until after several merger events. Such a model would therefore predict the scatter in $\Mbh-\sigma$ to be larger at higher nuclear luminosity, in contradiction to our results. On the other hand, our conclusion is consistent with the scenario emerging from simulations that episodes of major spheroid growth and BH growth occur on similar timescales via mergers. Simulations find that the epoch of rapid BH accretion is limited by AGN feedback to $\sim 100$ Myr, which is similar to the dynamical time (e.g., \citealt{Kauffmann, DiMatteo, Sijacki07, Hopkins09, Hopkins11, Blecha}). This suggests that the BH mass and $\sigma$ change concurrently. Models that appeal to fueling of the BH by recycled gas \citep[e.g.,][]{Ciotti07} must satisfy our constraint that galaxies evolve along $\Mbh-\sigma$ even during high accretion rate phases. In addition, our result reinforces the assumptions made in various studies. For instance, it shows that the BH grows simultaneously with the formation of the potential well in mergers, which is assumed by \citet{Shankar} to investigate the cosmological evolution of the $\Mbh-\sigma$ relation. The self-regulated growth of supermassive BHs is assumed when investigating the link between quasars and the red galaxy population by \citet{Hopkins06}, and when constraining the accretion history of massive BHs by \citet{Volonteri06}. Moreover, it constrains the total timescale of the episodic random accretion model proposed by \citet{Wang} to be similar to $t_{dyn}$.
In our dynamically-based sample we only have one galaxy with nuclear bolometric luminosity higher than $0.1L_{Edd}$, and three galaxies with nuclear bolometric luminosity higher than $0.001L_{Edd}$. We can expect more stringent constraints on the coevolution of galaxies and BHs as additional maser-based BH masses are obtained for higher nuclear luminosity galaxies. \section*{Acknowledgments} We thank Martin Elvis, Pepi Fabbiano, Silvia Pellegrini, Junfeng Wang, Yue Shen, Laura Blecha and Phil Hopkins for helpful conversations. This work was supported in part by NSF grant AST-0907890 and NASA grants NNX08AL43G and NNA09DB30A.
\section{INTRODUCTION} The Underactuated Lightweight Tensegrity Robotic Assistive Spine (ULTRA Spine) is an ongoing project to develop a flexible, actuated spine for quadruped robots. This involves creating a control system for the spine's cables, so that it can perform the necessary bending motions. The spine is a tensegrity (``tensile-integrity'') system, in which cables in tension hold the spine's vertebrae apart, and the lengths of these cables are adjusted as inputs. This work considers two similar models of the ULTRA Spine, and applies trajectory tracking controllers to each of them (Fig.~\ref{fig:ultra_spine_mid-bend_mpc}). The first controller is presented in more depth in recent work by the authors \cite{Sabelhaus2017a}, which includes more background on the motivations and choices made for this controller; the second controller, with input constraints, is ongoing work and is presented here for the first time. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{img/ultra_spine_mpc_overview_2016-09-05} \caption{Trajectory-tracking control for the flexible backbone robot (ULTRA Spine), mid-simulation, for a uniaxial bending trajectory, using the method presented in \cite{Sabelhaus2017a}. The rigid bodies (vertebrae) of the spine are in gray, cables in red, and the target trajectory for the top spine vertebra is in blue. This work uses a point-mass dynamics model, so the rigid vertebra bodies are for visualization only.} \label{fig:ultra_spine_mid-bend_mpc} \end{figure} \section{CONTROLLER FORMULATIONS}\label{sec:controller} Two different model-predictive controllers were used to track a reference trajectory $\xi^{ref}$, which corresponded to counterclockwise bending of the spine. This work uses a Model-Predictive Control (MPC) law for multiple reasons.
First, there are inherent constraints on the dynamics of this system: the rest lengths of the springs cannot be negative (the springs can only compress down to a certain length), and the vertebrae of the system should not contact each other. In addition to these two constraints, there is a further requirement on the cable tensions: the cables cannot ``push'', i.e., tensions must be non-negative; however, this requirement is now included in the dynamics derivation, so it is not treated as a constraint here. Most importantly, however, constraints were used here to improve the quality of the linearization. Prior controllers became unstable due to poor linearizations and the rapid movements that were created. By constraining and penalizing the amount of movement of the spine, the controllers create feasible movements. \subsection{Model-Predictive Control Overview} At each timestep $t$, model-predictive control solves a constrained finite-time optimal control problem (CFTOC), generating the sequence of control inputs $U_{t\rightarrow t+N|t} = \{u_{t|t}, ..., u_{t+N|t}\}$ over a horizon of $N$. The notation $t+k|t$ represents a value at the timestep $t+k$, as given or predicted at timestep $t$ (\cite{Borrelli2003}, Ch.~4). Then, the first input $u_{t|t}$ is applied to the system, the simulation advances to timestep $t+1$, and the problem repeats. Note that no terminal costs or constraints are included here, and thus stability can only be concluded experimentally, not proven. The two CFTOCs for the controllers are listed below. The formulation of this optimization problem is the only difference between the controllers, aside from the fact that they were applied to slightly different models of the ULTRA Spine. \subsection{Controller with smoothing, without a reference input} The first controller used a four-vertebra, three-dimensional model of the ULTRA Spine, and solved the optimization problem below for each step of MPC.
Note that for this controller, no input trajectory corresponding to the state trajectory was available, so smoothing constraints had to be applied. These are explained in more depth in \cite{Sabelhaus2017a}. \begin{align} \displaystyle\min_{U_{t\rightarrow t+N|t}} & \sum_{k = 0}^{N} J( \xi_{t+k|t}, u_{t+k|t}, \xi^{ref}_{t+k}) \label{eq:opt_main}\\ & \text{subj. to:} \notag \\ & \xi_{t+k+1} = A_{t} \xi_{t+k} + B_{t} u_{t+k} + c_{t} \label{eq:sys_dynamics}\\ & u_{min} \leq u_{t+k} \leq u_{max} \label{eq:u_lim}\\ & \|u_{t} - u_{(t-1)}\|_\infty \leq w_1 \label{eq:smoothing_u1}\\ & \|u_{t+k} - u_{t}\|_\infty \leq w_2, \; k=1..(N-1) \label{eq:smoothing_uk}\\ & \|u_{t+N} - u_{t}\|_\infty \leq w_3 \label{eq:smoothing_uN}\\ & \|\xi(1:6)_{t+k} - \xi(1:6)_{t+k-1}\|_\infty \leq w_4 \label{eq:smoothing_x1}\\ & \|\xi(13:18)_{t+k} - \xi(13:18)_{t+k-1}\|_\infty \leq w_5 \label{eq:smoothing_x2}\\ & \|\xi(25:30)_{t+k} - \xi(25:30)_{t+k-1}\|_\infty \leq w_6 \label{eq:smoothing_x3}\\ & \xi(3)_{t+k} + w_7 \leq \xi(15)_{t+k} \label{eq:collision_2}\\ & \xi(15)_{t+k} + w_7 \leq \xi(27)_{t+k} \label{eq:collision_3} \end{align} Here, $N=10$ is the horizon length (a scalar), $w_1...w_7$ are constant scalar weights, and $\xi(i)_{t+k}$ denotes the $i$-th element of the state vector at time $t+k$ as predicted at time $t$. The dynamics constraint, (\ref{eq:sys_dynamics}), consists of the linearized and discretized system at time $t$, calculated as \[ A_t = \frac{\partial g(\xi,u)}{\partial \xi} \Bigr|_{ \substack{ \xi = \xi_{t} \\ u = u_{t-1} }} \quad \quad B_t = \frac{\partial g(\xi,u)}{\partial u} \Bigr|_{ \substack{ \xi = \xi_{t} \\ u = u_{t-1} }} \] \vspace{-0.3cm} \[ c_t = g(\xi_t, u_{(t-1)}) - A_t \xi_t - B_t u_{(t-1)} \] The discretization occurs as $A_t$, $B_t$, and $c_t$ are calculated, via a simple finite-difference Euler discretization, with a step size of $0.001$, the same as the timestep for $t$.
At each timestep of the simulation, these matrices are calculated by numerically differentiating the equations of motion in MATLAB: the dynamics are forward simulated in each direction, and a finite-difference approximation is taken. This approach was chosen due to computational issues with calculating additional analytical derivatives of the dynamics. This linearization was calculated at each timestep $t$ and used for the optimization over the entire horizon, thus the notation $A_t, B_t, c_t$. Since no trajectory of inputs was available, the linearizations used the prior state's input $u_{t-1}$ instead. For the start of the simulation, $u_{0}=\mathbf{0}$ was used. Note that since these linearizations are not at equilibrium points, the linear system is affine, with $c_t$ being a constant vector offset. The remaining constraints have the following interpretations. Constraint (\ref{eq:u_lim}) is a bound on the inputs, limiting the cable rest lengths, with $u_{min},u_{max} \in \mathbb{R}^{24}$ but having the same value for all inputs. This is the constraint that helps prevent the system from operating in the slack-cable regime, thus keeping it in one set of continuous dynamics instead of behaving as a hybrid system. Constraints (\ref{eq:smoothing_u1}), (\ref{eq:smoothing_uk}), and (\ref{eq:smoothing_uN}) are smoothing terms on the inputs, which help with the lack of an input reference trajectory. Here $u_{(t-1)}$ is the most recent input at the start of the CFTOC problem. Constraints (\ref{eq:smoothing_x1}), (\ref{eq:smoothing_x2}), and (\ref{eq:smoothing_x3}) are smoothing terms on the states, limiting the deviation between successive states in the trajectory. These are needed to reduce linearization error, and are split so that the positions and angles of each vertebra could be weighted differently. Note from (\ref{eq:smoothing_x1}-\ref{eq:smoothing_x3}) that no velocity terms are constrained.
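As a sketch of this numerical linearization (illustrative only: the toy dynamics function, dimensions, and perturbation size below are placeholders, not the spine's actual equations of motion):

```python
import numpy as np

def linearize(g, xi, u, delta=1e-6):
    """Finite-difference linearization of xi_next = g(xi, u) about (xi, u),
    returning A, B, c such that g(x, v) ~ A x + B v + c near that point.
    Since (xi, u) is generally not an equilibrium, the offset c is nonzero."""
    n, m = xi.size, u.size
    g0 = g(xi, u)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        A[:, i] = (g(xi + e, u) - g0) / delta   # forward difference in state i
    for j in range(m):
        e = np.zeros(m)
        e[j] = delta
        B[:, j] = (g(xi, u + e) - g0) / delta   # forward difference in input j
    c = g0 - A @ xi - B @ u
    return A, B, c

# Toy check: an affine map is recovered exactly (up to finite-difference error).
A_true = np.array([[1.0, 0.01], [0.0, 1.0]])
B_true = np.array([[0.0], [0.01]])
c_true = np.array([0.0, -0.0098])
g = lambda x, v: A_true @ x + B_true @ v + c_true
A, B, c = linearize(g, np.array([0.1, 0.0]), np.array([0.2]))
print(np.allclose(A, A_true, atol=1e-6))   # True
```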
Finally, noting that states $\xi(3)$, $\xi(15)$, and $\xi(27)$ are the $z$-positions of each vertebra, constraints (\ref{eq:collision_2}) and (\ref{eq:collision_3}) prevent collisions between adjacent vertebrae. The cost function $J$, written with arbitrary time index $j$, is \vspace{-0.2cm} \begin{align} \begin{split} \displaystyle & J( \xi_{j}, u_{j}, \xi^{ref}_{j}) = \\ & \quad \quad (\xi_{j} - \xi^{ref}_{j})^\top Q^j (\xi_{j} - \xi^{ref}_{j}) \; +\\ & \quad \quad (\xi_{j} - \xi_{(j-1)})^\top S^j (\xi_{j} - \xi_{(j-1)}) \; + \\ & \quad \quad w_8 \lVert (u_{j} - u_{(j-1)}) \rVert_{\infty} \label{eq:obj_fun}\\ \end{split} \end{align} As before, $w_8$ is a scalar, while $Q$ and $S$ are constant diagonal weighting matrices. Here, $Q$ penalizes the tracking error in the states, $S$ penalizes the deviation in the states from one timestep to the next, and $w_8$ penalizes the deviations in the inputs from one timestep to the next. These matrices are diagonal, with blocks corresponding to the Cartesian and Euler angle dimensions, with zeros for all velocity states, according to vertebra. Nonzeros are at states $\xi_1...\xi_6$, $\xi_{13}...\xi_{18}$, and $\xi_{25}...\xi_{30}$, recalling that $\xi \in \mathbb{R}^{36}$. Raising each diagonal element to the power $j$ puts a heavier penalty on terms farther away on the horizon. These are defined as: \vspace{-0.2cm} \begin{align} & Q_{j} = diag( w_9, \: w_9, \: w_9 \: | \: w_{10}, \: w_{10}, \: w_{10} \: | \: 0 ... 0) \in \mathbb{R}^{12 \times 12} \notag\\ & S_j = diag( w_{11}, \: w_{11}, \: w_{11} \: | \: w_{11}, \: w_{11}, \: w_{11} \: | 0...0) \in \mathbb{R}^{12 \times 12} \notag\\ & Q = diag(Q_1, \: Q_2, \: Q_3), \quad S = diag( S_1, \: S_2, \: S_3) \notag \end{align} The paper \cite{Sabelhaus2017a} provides more details about the simulation for this first controller. \subsection{Controller with a reference input, without smoothing} The above controller required significant tuning in order to get convergence.
So, a controller was developed that could be run without as much tuning. One way to do so is to include a reference input trajectory, so that the optimization problem has a solution which may theoretically lead to zero error. For this second controller, a reduced-order model of the system was used. Specifically, only one moving vertebra was simulated, and only two-dimensional dynamics were considered. This reduced the number of confounding variables as the controller was developed. The optimization problem for the newer controller is: \begin{align} \displaystyle\min_{U_{t \rightarrow t+N|t}} \quad & p(\xi_{t+N|t}) + \sum_{k = 0}^{N-1} q(\xi_{t+k|t}, u_{t+k|t}) \label{eq:opt_main2}\\ \text{s.t.} \quad & \xi_{t+k+1|t} = A_t \xi_{t+k|t} + B_t u_{t+k|t} + c_t \label{eq:sys_dynamics2}\\ & u_{min} \leq u_{t+k|t} \leq u_{max} \label{eq:u_lim2}\\ & \xi_{t+k|t}^{(2)} \geq \frac{h}{2} \label{eq:collision}\\ & \xi_{t|t} = \xi(t) \label{eq:initialcondition} \end{align} The objective function components are quadratic penalties on the tracking errors of both state and input: \vspace{-0.2cm} \begin{align} & p(\xi_{t+N|t}) = (\xi_{t+N|t} - \xi_{t+N}^{ref})^\top P (\xi_{t+N|t} - \xi_{t+N}^{ref}) \\ & q(\xi_{t+k|t}, u_{t+k|t}) = \notag \\ & \quad \quad (\xi_{t+k|t} - \xi_{t+k}^{ref})^\top Q (\xi_{t+k|t} - \xi_{t+k}^{ref}) \; + \\ & \quad \quad (u_{t+k|t} - u_{t+k}^{ref})^\top R (u_{t+k|t} - u_{t+k}^{ref}) \notag \end{align} The reference input trajectory, $u^{ref}$, was calculated using the inverse kinematics for the positions (and assuming zero velocity) of a specific reference state $\xi^{ref}$. Those inverse kinematics follow the algorithm used in \cite{friesen2014,Sabelhaus2015a,Schek1974}. The constraints have the following interpretation. Constraint (\ref{eq:u_lim2}) is a box constraint on the inputs so that the springs cannot have negative length (violating the dynamics assumptions), and also cannot become too large (where the dynamics model also becomes unrealistic).
Constraint (\ref{eq:collision}) denotes a minimum bound on the second element in the state, the $z$-position, which prevents collision between the moving vertebra and the static vertebra, where the vertebrae each have height $h$. Finally, constraint (\ref{eq:initialcondition}) assigns the initial condition at the starting time of the CFTOC. The constants used in this optimization are \begin{equation} u_{min}=0, \quad u_{max}=0.3, \quad N=4, \quad h=0.15, \end{equation} and the objective function weights are $Q=P=I$ and $R=2I$. \section{RESULTS} \begin{figure}[thpb] \centering \includegraphics[width=1\columnwidth]{img/allvert_Feb2017} \caption{Positions in the $X$-$Z$ plane of all three vertebrae, including the reference and the two simulations (with/without disturbances), as the robot performs a counterclockwise bend. Blue trajectories are the same as those in Fig. 1.} \label{fig:allvert} \end{figure} \begin{figure}[thpb] \centering \includegraphics[width=0.9\columnwidth]{img/topvert_Feb2017} \caption{Positions in the $X$-$Z$ plane of the top vertebra, including the reference and the two simulations (with/without disturbances), as the robot performs a counterclockwise bend. The vertebra tracks the trajectory closely.} \label{fig:topvert} \vspace{-0.3cm} \end{figure} \subsection{Controller without reference input} The first controller tracked the positions of the vertebrae with extremely low error, after an initial transient response. Fig.~\ref{fig:allvert} shows the paths of the vertebrae in the $X$-$Z$ plane as they sweep through their counterclockwise bending motion, including the reference trajectory (blue), the resulting trajectory with the MPC controller and no disturbances (green), and the result of the controller with disturbances (magenta). Fig.~\ref{fig:topvert} shows a zoomed-in view of the top vertebra, which had the largest tracking errors of the three vertebrae.
\subsection{Controller with reference input} The controller for the two-dimensional, single-vertebra spine with reference input tracking does not currently perform as well as the controller with hand-tuned smoothing constraints. However, it does not go unstable and fail, as does a controller without either smoothing or input tracking. Figure \ref{fig:ref_states} shows the tracking of the single vertebra for each of its three kinematic states ($X$, $Z$, and angle $\theta$), with good tracking performance. Figure \ref{fig:ref_inputs} shows the input reference for the four cables in this system for the same simulation. These results show promise for future work, once possible complications relating to discretization errors and tracking speed are resolved. \section*{ACKNOWLEDGEMENTS} This research was supported by NASA Space Technology Research Fellowship no. NNX15AQ55H. \bibliographystyle{IEEEtran}
\section{Introduction} \subsection*{Superstring scattering amplitudes} In (super)string perturbation theory one of the main questions is finding an explicit expression for the string measure to an arbitrary loop (genus) order. For the bosonic string, the infinite-dimensional integral over the space of possible worldsheets becomes, after taking a quotient by the appropriate group of symmetries, an integral over the moduli space of curves $\M_g$. For superstrings, computing the measure is much more difficult, as the integration over the odd variables needs to be performed. However, for $g=1$ this difficulty is not present, and an expression for the genus 1 superstring measure was obtained by Green and Schwarz \cite{grsc}. In general the superstring measure is a measure on the moduli space of super-Riemann surfaces, and dealing with the supermoduli and finding an appropriate gauge-fixing procedure is extremely difficult. In a series of papers \cite{DHP1,DHPa,DHPb,DHPc,DHPd,DHPe} D'Hoker and Phong overcame these conceptual difficulties, derived an expression for the chiral superstring measure for $g=2$ from first principles, and verified its further physical properties. In \cite{159,182} D'Hoker and Phong establish the modern program of finding higher genus (i.e. multiloop) superstring amplitudes by using factorization constraints: they investigated the behavior of the amplitude as the curve degenerates, and showed that in the limit the constant term is expected to factorize as the product of lower-dimensional amplitudes. D'Hoker and Phong then proposed and studied an expression for the genus 3 superstring amplitude assuming holomorphicity of certain square roots. Cacciatori, Dalla Piazza, and van Geemen in \cite{CDPvG} advanced the program begun by D'Hoker and Phong by constructing a holomorphic modular form in genus 3 satisfying the factorization constraints, while in \cite{DPvG} the uniqueness of this ansatz is shown. 
\smallskip The problem of finding the superstring measure for higher genera presents additional difficulties. While for $g\le 3$ the moduli space of curves $\M_g$ is dense in the moduli space $\A_g$ of ppavs, this is no longer the case for genus $g\ge 4$. Thus while for $g\le 3$ an ansatz for a superstring measure is a Siegel modular form on $\A_g$, for $g\ge 4$ such an ansatz for the amplitude may only be defined on $\M_g$, and not on all of $\A_g$. In \cite{GR} the first-named author presented a general framework for the ans\"atze of Green-Schwarz, D'Hoker-Phong, and Cacciatori-Dalla Piazza-van Geemen, proposed an ansatz for genus 4, and a possible ansatz, satisfying the factorization constraints, for the superstring measure in any genus, subject to the condition that certain holomorphic roots of modular forms are single-valued on $\M_g$ (the genus 4 ansatz was then also obtained independently in \cite{CDPvG2}). In \cite{RSM} the second-named author proved that in genus 5 these holomorphic square roots are indeed well-defined modular forms on a suitable covering of $\M_5$, and thus that an ansatz is well-defined for $g=5$. For a review and further developments, see Morozov \cite{M1,M2}. See also \cite{MV1} for a different approach. In \cite{GSM} we showed that in genus 3 the 2-point function vanishes as expected. Recently in \cite{MV} Matone and Volpato showed that certain quantities connected with this ansatz and the 3-point function no longer vanish for genus 3. However, they also discuss the possibility of a non-trivial correction which would still result in the vanishing of the 3-point function, and further investigation of the question is required. Whatever correction may be required will not influence, however, the first constraint on the ansatz --- the vanishing of the cosmological constant along $\M_g$.
This has been verified in \cite{GR} and \cite{RSM} for all $g\le 4$ by using an expression for the ansatz in terms of theta series associated to quadratic forms. \subsection*{The Schottky problem} In complex algebraic geometry, the Riemann-Schottky problem --- the question of determining which ppavs are Jacobians of curves --- is one of the classical problems in algebraic geometry. The solution to the problem for the first non-trivial case, that of $g=4$, was given by Schottky in \cite{sc}, who constructed an explicit modular form vanishing on the locus of Jacobians of curves of genus 4. Igusa in \cite{Ch} proved that the Schottky form is irreducible and thus defines $\M_4\subset\A_4$. The original expression for the Schottky form in \cite{sc} used the combinatorics of theta characteristics, but it was shown by Igusa that it is equal to the weight 8 modular form $F_g:=(2^g-1)\sum\theta_m^{16}-2(\sum\theta_m^8)^2$, for genus $g=4$ (where the sum is taken over all even theta characteristics $m$). It is known that for $g\le 3$ the modular form $F_g$ is identically zero on $\A_g$ --- this follows from Riemann's bilinear relations for $g=1,2$, and is the only equation relating theta constants of the second order for $g=3$, cf. \cite{vgvdg}. The properties of the modular form $F_g$ for higher genus are of obvious interest, and some identities for it were obtained by Igusa in \cite{Ch}. In the 1980s Belavin, Knizhnik, and Morozov \cite{BK,BK1,M86}, and later D'Hoker and Phong \cite{159} for physics reasons conjectured that the form $F_g$ vanishes on $\M_g$ for any $g$ (it was shown by Igusa that $F_g$ is proportional to $f_4^2-f_8$, where $f_4$ and $f_8$ are the theta series associated to the even lattices $E_8\times E_8$ and $D_{16}^+$ --- see below for the definitions --- and physically the vanishing of $F_g$, i.e. the equality of $f_4^2$ and $f_8$, would be interpreted as the equality of the measures for the $SO(32)$ and $E_8\times E_8$ theories). 
This question was investigated by Poor, who in \cite{Po} showed that for any $g$ the form $F_g$ vanishes on the locus of hyperelliptic curves; the conjectural vanishing of $F_g$ on $\M_g$ remained open. \subsection*{Summary of results} In this paper we study the cosmological constant for the chiral superstring scattering measure proposed in \cite{GR} for $g\ge 5$. By extending the results of Igusa \cite{Ch} on certain modular forms constructed using theta constants associated to syzygetic subspaces of characteristics, we obtain an alternative expression for one of the summands in the ansatz. This allows us to show that in genus 5 the cosmological constant for the proposed ansatz is in fact equal to a non-zero multiple of the Schottky form $F_5$ described above. Thus the vanishing of the cosmological constant for the proposed genus 5 ansatz is equivalent to the conjecture of Belavin, Knizhnik, Morozov, D'Hoker, and Phong that $F_5$ vanishes identically on $\M_5$. By studying the boundary of $\M_5$ and computing the degenerations of the proposed ansatz, using theta functional techniques, we show that this conjecture is {\it false}: that $F_5$ in fact does {\it not} vanish identically on $\M_5$. By using the results on the slope of effective divisors on $\M_g$ we identify the zero locus of $F_5$ in $\M_5$ as the divisor of trigonal curves in $\M_5$. Thus it follows that the cosmological constant for the ansatz for the chiral superstring measure originally proposed in \cite{GR} does not vanish identically in genus 5. We thus give a simple explicit formula for a modification of the genus 5 ansatz that satisfies factorization constraints and results in an identically vanishing cosmological constant. It follows from our results that $F_g$ does not vanish identically on $\M_g$ for any $g\ge 5$.
Subject to the validity of certain generalized Schottky-Jung identities involving roots of degree $2^{g-4}$ of polynomials in theta constants (see the appendix), we further show that the cosmological constant for the originally proposed ansatz is equal to a non-zero multiple of $F_g$ in any genus, and thus also does not vanish identically on $\M_g$ for any $g\ge 5$ --- so an adjustment of the ansatz and a further investigation of the question would be of interest. \section*{Acknowledgements} We would like to thank Sergio L. Cacciatori, Francesco Dalla Piazza, Bert van Geemen, Marco Matone, Duong Phong, and Roberto Volpato for useful discussions and suggestions regarding chiral superstring scattering amplitudes. \section{Notations} We denote by $\HH_g$ the Siegel upper half-space of symmetric complex matrices with positive-definite imaginary part, called period matrices. The action of the symplectic group $\Sp$ on $\HH_g$ is given by $$ \begin{pmatrix} A&B\\ C&D\end{pmatrix}\circ\tau:= (A\tau+B)(C\tau+D)^{-1} $$ where we think of elements of $\Sp$ as consisting of four $g\times g$ blocks, and they preserve the symplectic form given in the block form as $\begin{pmatrix} 0& 1\\ -1& 0\end{pmatrix}$. For a period matrix $\tau\in\HH_g$, $z\in \C^g$ and $\e,\de\in \F^g$ (where $\F$ denotes the abelian group $\Z/2\Z=\lbrace 0,1\rbrace$ for which we use the additive notation) the associated theta function with characteristic $m=[\e, \de]$ is $$ \theta_m(\tau, z)=\thetat\e\de(\tau,z)=\sum\limits_{n\in\Z^g}\exp(\pi i ((n+\e/2)'\tau (n+\e/2)+ 2(n+\e/2)'( z+\de/2))) $$ (where we denote by $X'$ the transpose of $X$). As a function of $z$, $\theta_m(\tau, z)$ is odd or even depending on whether the scalar product $\e\cdot\de\in\F$ is equal to 1 or 0, respectively. Theta constants are restrictions of theta functions to $z=0$.
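This parity is easy to confirm from the truncated series; a small genus 1 numerical sketch (our own illustration), checking $\theta_m(\tau,-z)=(-1)^{\e\cdot\de}\theta_m(\tau,z)$ for all four characteristics:

```python
import cmath

def theta(eps, delta, tau, z, N=30):
    """Genus-1 theta function with characteristic m = [eps, delta]."""
    return sum(cmath.exp(cmath.pi * 1j * ((n + eps / 2) ** 2 * tau
                                          + 2 * (n + eps / 2) * (z + delta / 2)))
               for n in range(-N, N + 1))

tau, z = 0.2 + 1.1j, 0.37 + 0.21j
for eps in (0, 1):
    for delta in (0, 1):
        sign = (-1) ** (eps * delta)   # even characteristic: +1, odd: -1
        err = abs(theta(eps, delta, tau, -z) - sign * theta(eps, delta, tau, z))
        print((eps, delta), err)        # all errors are numerically zero
```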
We shall write $\theta_m$ for theta constants.\smallskip For a set of characteristics $M=(m_1, m_2,\dots, m_k)$ we set $$ P(M):=\prod_{i=1}^k\theta_{m_i}. $$ A holomorphic function $f:\HH_g\to\C$ is a modular form of weight $k/2$ with respect to a subgroup $\Gamma\subset\Sp$ of finite index if $$ f(\gamma\circ\tau)=\det(C\tau+D)^{k/2}f(\tau)\quad \forall\gamma\in\Gamma,\forall\tau\in\HH_g, $$ and if additionally $f$ is holomorphic at all cusps when $g=1$. We denote by $[\Gamma, k/2]$ the vector space of such functions. Theta constants are modular forms of weight $1/2$ with respect to a certain subgroup $\Gamma_g(4,8)$. For further use, we denote $\Gamma_g:=\Sp$ the integral symplectic group and $\Gamma_g (1,2)$ its subgroup defined by $${\rm diag}(AB')\equiv {\rm diag}(CD')\equiv 0\pmod 2.$$ \section{The ansatz for the chiral superstring measure} We recall from \cite{GR} or \cite{RSM} \begin{lm} If $16$ divides $2^i s$, and $g\geq i$, then $$ P_{i, s}^g (\tau):= \sum_V P(V)^s(\tau), $$ where the sum is over all $i$-dimensional subspaces of $\F^{2g}$, belongs to $[\Gamma_g(1,2),\, 2^{i-1}s]$. \end{lm} For any even characteristic $m$, we can define $P_{i, s}^g[m] (\tau)$ by taking the sum above over all affine subspaces $V$ of $\F^{2g}$ of dimension $i$ (i.e. translates of $i$-dimensional linear subspaces) containing the characteristic $m$. The function $P_{i, s}^g[m] (\tau)$ is then a modular form with respect to a subgroup of $\Gamma_g$ conjugate to $\Gamma_g (1,2)$ (note that $\Gamma_g (1,2)\subset\Gamma_g$ is not normal). \begin{cor} If $16$ divides $2^i s$, and $g\geq i$, the form $$ S_{i,s}^g:=\sum_{m\in\F^{2g}}\sum_V P(V+m)^s(\tau)=\sum_m P_{i,s}^g[m](\tau), $$ where the sum is taken over all $i$-dimensional linear subspaces $V$, belongs to $[\Gamma_g,\, 2^{i-1} s]$. \end{cor} We recall the main results obtained in \cite{GR} and \cite{RSM}.
\begin{prop}[\cite{GR}] The modular forms $P_{i, s}^g$ restrict to the locus of block diagonal period matrices $\HH_k\times\HH_{g-k}$ as follows: $$ P_{i, s}^g\left(\begin{matrix}\tau_1&0\\ 0&\tau_2\end{matrix}\right) = \sum_{0\leq n,m\leq i\leq n+m}N_{n,m;i} P_{n, 2^{i-n}s}^k(\tau_1) P_{m, 2^{i-m}s}^{g-k}(\tau_2), $$ where $$N_{n,m;i}:=\prod_{j=0}^{n+m-i-1}\frac{(2^n -2^j)(2^m-2^j)}{2^{n+m-i }-2^j},$$ for any $\tau_1\in\HH_k$ and $\tau_2\in\HH_{g-k}$. \end{prop} \begin{thm}[\cite{GR}] For $g \leq 4$ the function $$ \Xi^{(g)}[0]:=\frac{1}{2^g}\sum\limits_{i=0}^g (-1)^i2^{\frac{i(i-1)}{2}}P_{i, 2^{4-i}}^{g} $$ is a modular form in $[\Gamma_g(1,2),8]$, and its restriction to $\HH_k\times\HH_{g-k}$ is $$ \Xi^{(g)}[0]\left(\begin{matrix}\tau_1&0\\ 0&\tau_2\end{matrix}\right)=\Xi^{(k)}[0](\tau_1)\cdot\Xi^{(g-k)}[0](\tau_2), $$ for any $\tau_1\in\HH_k,\tau_2\in\HH_{g-k}$. \end{thm} We also define for any characteristic $m$ $$ \Xi^{(g)}[m]:=\frac{1}{2^g}\sum\limits_{i=0}^g (-1)^i 2^{\frac{i(i-1)}{2}}P_{i, 2^{4-i}}^{g}[m]. $$ These $\Xi^{(g)}[m]$ satisfy similar factorization constraints, and thus are natural candidates for the chiral superstring measure. In \cite{GR} it is also shown that the above statement holds for $g > 4$ as well, up to a possible inconsistency in the modularity. In fact, since $\HH_g$ is simply connected, the individual degree $2^{i-4}$ roots needed to define the forms $P_{i, 2^{4-i}}^{g}$ with $i>4$ above are well-defined globally, but they are not necessarily modular forms due to possible sign inconsistency. \smallskip We denote by $\A_g:=\HH_g/\Gamma_g$ the moduli space of principally polarized abelian varieties. The Torelli map gives an immersion of the moduli space of curves $\M_g\hookrightarrow\A_g$. We define $\TT_g\subset\HH_g$ (the Torelli space) as the preimage of $\M_g\subset\A_g$ under the projection $\HH_g\to\A_g$.
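The coefficients $N_{n,m;i}$ in the factorization above are positive integers; in the small cases we have checked they count the $i$-dimensional subspaces of $\F^{n}\oplus\F^{m}$ that project surjectively onto both summands. The following brute-force sketch compares this combinatorial reading (our own, verified here only for small $n,m,i$) with the product formula:

```python
from itertools import product

def span(gens):
    """XOR-span of integer bit-vectors over F_2."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def N_formula(n, m, i):
    """The product formula for N_{n,m;i}."""
    num = den = 1
    for j in range(n + m - i):
        num *= (2 ** n - 2 ** j) * (2 ** m - 2 ** j)
        den *= 2 ** (n + m - i) - 2 ** j
    return num // den

def N_count(n, m, i):
    """i-dim subspaces of F_2^n (+) F_2^m projecting onto both summands (conjectural reading)."""
    mask1, mask2 = (1 << n) - 1, ((1 << m) - 1) << n
    seen, count = set(), 0
    for gens in product(range(1, 1 << (n + m)), repeat=i):
        U = span(gens)
        if len(U) != 2 ** i or U in seen:
            continue
        seen.add(U)
        if (len({u & mask1 for u in U}) == 2 ** n
                and len({u & mask2 for u in U}) == 2 ** m):
            count += 1
    return count

for n, m, i in [(1, 1, 1), (2, 1, 2), (2, 2, 2), (2, 2, 3)]:
    print((n, m, i), N_formula(n, m, i), N_count(n, m, i))
```

For instance $N_{2,2;3}=9$: of the fifteen $3$-dimensional subspaces of $\F^2\oplus\F^2$, three fail each of the two surjectivity conditions.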
\begin{thm}[\cite{RSM}] The restriction to $\TT_5$ of $P_{5, 1/2}^{5}$ (which is a sum of square roots of polynomials in theta constants) satisfies the modularity condition with respect to $\Gamma_5(1, 2)$. Hence $\Xi^{(5)}[0]|_{\TT_5}$ is a section of the restriction of the bundle of modular forms of weight 8 to $\TT_5$. \end{thm} Summing up we get the following expression for the cosmological constant \begin{equation}\label{Xi} \Xi^{(g)}:=\sum_m \Xi^{(g)}[m] =\frac{1}{2^g}\sum\limits_{i=0}^g (-1)^i 2^{\frac{i(i+1)}{2}}S_{i, 2^{4-i}}^g \end{equation} (note that when summing over characteristics $m$, each polynomial $P_i$ appears in $2^i$ terms; thus we have the $2^{\frac{i(i+1)}{2}}$ in the formula for $\Xi^{(g)}$ here instead of the $2^{\frac{i(i-1)}{2}}$ in the formula for $\Xi^{(g)}[m]$). As a consequence of the previous discussion we have \begin{prop} When $g\leq 4$, $\Xi^{(g)}$ is a modular form on $ \HH_g$ of weight 8, while $\Xi^{(5)}|_{\TT_5}$ is a section of the restriction of the line bundle of modular forms of weight 8. \end{prop} For any even positive definite unimodular matrix $S$ of degree $2k$ (i.e.\ $\det S=1$ and $x'Sx\equiv 0\pmod 2$ for all $x \in \Z^{2k}$), we define the theta series for $\tau\in\HH_g$ by $$ f_S^{(g)}(\tau):=\sum\limits_{u\in\Z^{2k,g}}\exp(\pi i {\rm tr}(u'Su\tau)). $$ These are modular forms in $[\Gamma_g,k]$, cf.~\cite{Fr}. For any $g$, we denote by $f_4^{(g)}$ and $f_8^{(g)}$ the theta series, of weights 4 and 8, respectively, associated to the even unimodular matrices related to the lattices $E_8$ and $D_{16}^+$, cf.~\cite{CS}. It is shown in \cite{Ch} that the following identity holds for any $g$: \begin{equation}\label{fS} ((f_4^{(g)})^2-f_8^{(g)})={\frac{1}{2^{2g}}}((1-2^g )S_{0,16}^g+2S_{1,8}^g). \end{equation} \smallskip In this note we consider the case $g=5$ of the ansatz for chiral superstring measures, which involves square roots of theta constants.
In this case we prove $$ \Xi^{(5)}= \frac{-51}{217}((f_4^{(5)})^2-f_8^{(5)}) $$ on $\TT_5$, and then resolve a long-standing open question posed in \cite{BK,BK1,M86,159}, by showing that this expression does {\it not} vanish identically. In an appendix we then discuss a possible generalization of the Schottky-Jung identities, and conjectural results for arbitrary genus. \section{ The cosmological constant } The vanishing of the cosmological constant means that $\Xi^{(g)}(\tau)$ vanishes identically on the moduli space of curves $\M_g$ (we will work on $\TT_g$). This has been verified for the proposed ansatz in genus 2 in \cite{DHP1}, for genus 3 in \cite{CDPvG}, and for genus 4 in \cite{GR} and \cite{RSM}, so that we know that $\Xi^{(g)}(\tau)$ is identically zero for $\tau\in\TT_g$ for $g\leq 4$. The proof given in \cite{RSM} was an immediate consequence of remarkable formulas deduced from the Riemann relations by Igusa around thirty years ago. \begin{lm}[\cite{Ch}]\label{iglm} We have $$(2^{2g}-1)S_{0,16}^g= 6S_{1,8}^g+24S_{2,4}^g$$ $$(2^{2g-2}-1)S_{1,8}^g= 18S_{2,4}^g+168S_{3,2}^g$$ $$(2^{2g-4}-1)S_{2,4}^g= 42S_{3,2}^g+840 S_{4,1}^g$$ for $g\geq 2,3,4$ respectively. \end{lm} In this paper we study the cosmological constant when $g\ge 5$, and to this end recall the form of the Riemann relations used to prove the above result. For any $ a\in \F^{2g}$ we denote by $a'$ (resp. $a''$) the vector of the first $g$ (resp. last $g$) entries.
For any $a, b, c\in \F^{2g}$ we set $$ (a,b,c)=\exp(\pi i\sum_{j=1}^g (a'_j b''_j c''_j +a''_j b'_j c''_j+ a''_j b''_j c'_j )) $$ and observe that this is a symmetric tricharacter.\smallskip We also set $e(a,b):=(a,a,b)(a,b,b)$; with these notations, Riemann's theta formula can be stated as follows \begin{lm}[\cite{Ch}] For any $m, a, b$ in $ \F^{2g}$ we have \begin{equation}\label{riemannrelation} (m,a,a)(m,b,b)(m,a,b)\theta_m\theta_{m+a}\theta_{m+b}\theta_{m+a+b} \end{equation} $$ =2^{-g}\sum_{n\in\F^{2g}}e(m,n)(n,a,a)(n,b,b)(n,a,b)\theta_n\theta_{n+a}\theta_{n+b}\theta_{n+a+b}. $$ \end{lm} We remark that our proof of the modularity of $\Xi^{(5)}$ also uses the so called Schottky-Jung relation for theta constants of Jacobians of curves, cf.~\cite{RF,vG, Ts}. It is well-known that Riemann relations in genus $g$ induce Schottky relations for periods of Jacobians in genus $g+1$ of similar structure: Riemann relations involve homogeneous monomials of degree 4 in the $\theta_m$, while the Schottky-Jung relations involve monomials of degree 8 in the square roots of the $\theta_m$. To write them down explicitly, we set $c:=\tch{0&\ldots&0&0}{0&\ldots&0&1}\in\F^{2g+2}$, and for any $m=\tch{m'}{m''}\in\F^{2g}$ denote $\ov{m}:=\tch{m'&0}{m''&0}\in\F^{2g+2}$.
Then as a consequence of the classical Schottky-Jung relations we have \begin{lm} For any $\pi\in\TT_{g+1}$, the following identity holds for theta constants evaluated at $\pi$ $$ (\ov{m},\ov{a},\ov{a})(\ov{m},\ov{b},\ov{b})(\ov{m},\ov{a},\ov{b})\sqrt{\theta_{\ov{m}}\theta_{\ov{m+a}}\theta_{\ov{m+b}}\theta_{\ov{m+a+b}} \theta_{\ov{m}+c}\theta_{\ov{m+a}+c}\theta_{\ov{m+b}+c}\theta_{\ov{m+a+b}+c}} $$ $$ =2^{-g}\sum_{n\in \F^{2g}}e(\ov m, \ov n)(\ov n,\ov a,\ov a)(\ov n,\ov b,\ov b)(\ov n,\ov a,\ov b)$$ $$\sqrt{\theta_{\ov{n}}\theta_{\ov{n+a}}\theta_{\ov{n+b}}\theta_{\ov{n+a+b}} \theta_{\ov{n}+c}\theta_{\ov{n+a}+c}\theta_{\ov{n+b}+c}\theta_{\ov{n+a+b}+c}} $$ \end{lm} We observe that the structure of Riemann and Schottky-Jung relations above is the same, and that Riemann relations in genus $g$ and Schottky relations in genus $g+1$ have the same coefficients. Thus the $g=4$ Riemann relation (valid on $\HH_4$) of the form $$ r_1\pm r_2\pm r_3\pm r_4=0, $$ with each $r_i$ a monomial of degree 4 in theta constants, induces a Schottky relation in genus $5$, valid on $\TT_5$, of the form $$ \sqrt R_1\pm \sqrt R_2\pm \sqrt R_3\pm \sqrt R_4=0, $$ where each $R_i$ is a monomial of degree 8 in theta constants of a Jacobian in $\TT_5$, cf.~\cite{AC}. Note that if we replace $\ov n$ with $\ov n +c$ in the right-hand side of the lemma above, nothing changes, since $e(\ov m, \ov n +c)=e(\ov m, \ov n)$ and $(\ov n,\ov a,\ov b)=(\ov n +c,\ov a,\ov b)$. Moreover, if we let $d:=\tch{0&\ldots&0&1}{0&\ldots&0&0}\in\F^{2g+2}$, then for any $n\in\F^{2g}$ one of the characteristics $\ov{n}+d$ and $\ov{n}+c+d$ is odd, and thus $\theta_{\ov{n}+d}\theta_{\ov{n}+c+d}=0$.
Thus in the lemma above we can extend the summation over all of $\F^{2g+2}$ to get \begin{lm} For any $\pi\in\TT_{g+1}$, the following identity holds for theta constants evaluated at $\pi$ $$ (\ov{m},\ov{a},\ov{a})(\ov{m},\ov{b},\ov{b})(\ov{m},\ov{a},\ov{b})\sqrt{\theta_{\ov{m}}\theta_{\ov{m+a}}\theta_{\ov{m+b}}\theta_{\ov{m+a+b}} \theta_{\ov{m}+c}\theta_{\ov{m+a}+c}\theta_{\ov{m+b}+c}\theta_{\ov{m+a+b}+c}} $$ $$ =2^{-g-1}\sum_{n\in \F^{2g+2}}e(\ov m, n)( n,\ov a,\ov a)( n,\ov b,\ov b)( n,\ov a,\ov b)$$ $$\sqrt{\theta_{n}\theta_{n+\ov a}\theta_{n+\ov{b}}\theta_{n+\ov{a+b}} \theta_{n+c}\theta_{n+\ov{a}+c}\theta_{n+\ov{b}+c}\theta_{n+\ov{a+b}+c}} $$ \end{lm} All the terms appearing in the above relation are of the form $\sqrt{P(N+n)}$, with $N=\langle \ov a, \ov b , c\rangle$ (we denote by $\langle\ \rangle$ the linear span). Such a polynomial $P(N+n)$ is not identically zero if and only if $N+n$ is an even coset of a 3-dimensional space $N$ that is totally isotropic with respect to the form $e(m,n)$. The symplectic group acts transitively on such cosets, and $\sigma(N)P(N+n)(\tau)$ maps to $\sigma(N_1)P(N_1+n_1)(\tau)$, where the sign $\sigma(N) =\pm 1$ depends only on the subspace $N$ and not on the coset, cf.~\cite{Ch}. As an immediate consequence we get a more general result than the above lemma that can be stated for any totally isotropic space $N$. Since square roots appear in the formula, and there is a choice of a sign for each of them, there will be signs $\sigma(n)$ depending on the cosets. Applying the same argument as in \cite{RSM}, we get the following constraint on the signs: $$ \sigma(n_1)\sigma(n_2)\sigma(n_3)\sigma(n_4)=1 $$ if $n_1+n_2+ n_3+n_4 =0$. \begin{lm} Let $N=\langle a, b, c\rangle$ be a totally isotropic 3-dimensional subspace of $\F^{2g+2}$.
Then for any $\pi\in\TT_{g+1}$, the following identity holds for theta constants evaluated at $\pi$ $$ (m,a,a)(m,b,b)(m,a,b)\sigma(m)\sqrt{P(N+m)}= $$ $$2^{-g-1}\sum_{n\in \F^{2g+2}}e( m, n)\sigma(n)( n,a,a)( n,b,b)( n,a,b)\sqrt{P(N+n)} $$ \end{lm} To obtain special relations for theta constants of Jacobians using the above Schottky-Jung identity, we proceed as in \cite{Ch}, where identities for theta constants of arbitrary abelian varieties are obtained by using Riemann relations. We repeat the argument given there, since it is elementary, but quite involved. We take the fourth powers of both sides of the above formula and sum over $m\in\F^{2g+2}$ to obtain $$ \sum_mP(N+m)^2=2^{-2g-2}\left(\sum_nP(N+n)^2+3!\sum_{n,m}P(N+n)P(N+m)\right. $$ $$ \left.+4!\sum_{n_1,n_2,n_3,n_4}\sqrt{P(N+n_1)P(N+n_2)P(N+n_3)P(N+n_4)}\right) $$ Because of the orthogonality of the characters, the only non-zero terms here would be the ones with $n_1+n_2+ n_3+n_4 =0$. The crucial observation is that the signs $\sigma$ disappear in this formula, so that it now looks exactly like the one for theta constants of arbitrary abelian varieties, given in \cite{Ch}. Indeed, in the first two terms on the right we have $\sigma^2=1$, while in the last term we have $\sigma(n_1)\sigma(n_2)\sigma(n_3)\sigma(n_4)$, which is equal to 1 since $n_1+n_2+n_3+n_4=0$. \smallskip We now sum over all 3-dimensional isotropic subspaces $N$, and note that $$\sum_{N}\sum_{m} \prod _{n\in N+m}\theta_n^2= \sum_{N}\sum_{m}P(N+m)^2=8 S_{3,2}^{g+1}$$ where the coefficient $8$ is due to the fact that each $N$ has 8 elements.
We further compute $$ \sum_{N}\sum_{n,m }P (N+m) P(N+n) = 28 S_{3,2}^{g+1} + 64\cdot 15 S_{4,1}^{g+1}, $$ where $28$ appears as the number of pairs of distinct elements of a coset of $N$, $64=8\cdot 8$ is the number of elements in $(N+n)\times (N+m)$, and $15$ is the number of 3-dimensional isotropic spaces contained in a fixed 4-dimensional isotropic space (the union $(N+n)\sqcup(N+m)$ must be a coset of a 4-dimensional isotropic space, otherwise the corresponding product is zero). Finally for the last term we get $$ \sum_{N} \sum_{ n_1, n_2, n_3, n_4 }\quad \sqrt{P(N+n_1)P(N+n_2)P(N+n_3)P(N+n_4)}= $$ $$ 14 S_{3,2}^{g+1} + 112\cdot 15 S_{4,1}^{g+1} +2^9\cdot 155 S_{5, 1/2}^{g+1}. $$ Here $14$ is the number of quadruples $ n_1, n_2, n_3, n_4$ such that $n_1+n_2+ n_3+n_4 =0$, with all $n_i$ lying in one coset of $N$, $112= 28\cdot 4$ is the number of quadruples $ n_1, n_2, n_3, n_4$ such that $n_1+n_2+ n_3+n_4 =0$ with $n_1, n_2$ lying in one coset of $N$ and $n_3, n_4$ in another, $2^9$ is the number of quadruples $ n_1, n_2, n_3, n_4$ such that $n_1+n_2+ n_3+n_4 =0$ with all the cosets $N+n_i$ pairwise distinct, and $155$ is the number of 3-dimensional isotropic spaces contained in a 5-dimensional isotropic space (notice that in this case the union $\sqcup_i(N+n_i)$ is a coset of a 5-dimensional isotropic space). \smallskip Applying these results, we finally get the following: $$ 8 S_{3,2}^{g+1}={2^{-2g-2}}\left(8 S_{3,2}^{g+1}+ 6( 28 S_{3,2}^{g+1} + 64\cdot 15 S_{4,1}^{g+1})+\right.$$ $$\left.
24(14 S_{3,2}^{g+1} + 112\cdot 15 S_{4,1}^{g+1} +2^9\cdot 155 S_{5, 1/2}^{g+1})\right)=$$ $${2^{-2g+7}}(S_{3,2}^{g+1}+90S_{4,1}^{g+1}+3720 S_{5,1/2}^{g+1}).$$ Rescaling from $g+1$ to $g$, and gathering all $S_{3,2}^g$ on one side, we get \begin{prop} For any $g\geq 5$ and for any $\pi\in\TT_g$ we have \begin{equation}\label{Sexpress} (2^{2g-6}-1)S_{3,2}^g(\pi)= 90S_{4,1}^g(\pi)+3720 S_{5,1/2}^g(\pi) \end{equation} (notice that the last term $S_{5,1/2}^g$ is only known by \cite{RSM} to be a modular form on $\TT_g$, so the above identity does not make sense over all of $\HH_g$). \end{prop} Substituting this result in formula (\ref{Xi}) for $\Xi^{(5)}(\pi)$ for $\pi\in\TT_5$ to express $S_{5,1/2}^5$ in terms of $S_{4,1}^5$ and $S_{3,2}^5$, and then using lemma \ref{iglm} to express those, we eventually express $\Xi^{(5)}$ as a linear combination of $S_{0,16}^5$ and $S_{1,8}^5$, and furthermore as a linear combination of $(f_4^{(5)})^2$ and $f_8^{(5)}$, cf.~\cite{Ch}. We thus obtain \begin{thm}\label{XiF} For any $\pi\in\TT_5$ we have $$ \Xi^{(5)}(\pi)=\frac{-51}{217}((f_4^{(5)})^2-f_8^{(5)})(\pi) $$ \end{thm} \begin{cor} For $g= 5$ the cosmological constant $\Xi^{(5)}$ vanishes identically on $\TT_5$ if and only if $(f_4^{(5)})^2-f_8^{(5)}$ vanishes identically on $\TT_5$. \end{cor} It was conjectured in \cite{BK,BK1,M86,159} that $(f_4^{(g)})^2-f_8^{(g)}$ vanishes identically on $\TT_g$ in any genus (physically $f_4$ is interpreted as the appropriate measure for the $E_8$ theory, $f_4^2$ --- for $E_8\times E_8$, and $f_8$ --- for $SO(32)$). For $g\le 3$ the identical vanishing of this modular form on $\TT_g=\HH_g$ is a consequence of Riemann's bilinear addition theorem. For $g=4$ this form is equal to the Schottky equation defining $\TT_4\subset\HH_4$, cf.~\cite{Ch}. In \cite{Po} it was shown that this form vanishes along the hyperelliptic locus for any $g$.
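The subspace counts and the coefficient bookkeeping entering the derivation of (\ref{Sexpress}) above can be cross-checked by brute force (inside an isotropic space every subspace is isotropic, so the counts $15$ and $155$ are plain Gaussian-binomial counts of $3$-dimensional subspaces); a small sketch of our own:

```python
from itertools import product

def span(gens):
    """XOR-span of integer bit-vectors over F_2."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def subspaces(ambient_dim, k):
    """All k-dimensional F_2-subspaces of F_2^ambient_dim, by brute force."""
    found = set()
    for gens in product(range(1, 1 << ambient_dim), repeat=k):
        U = span(gens)
        if len(U) == 2 ** k:   # generators were linearly independent
            found.add(U)
    return found

# 3-dim subspaces of a 4-dim and of a 5-dim F_2-space
print(len(subspaces(4, 3)), len(subspaces(5, 3)))   # 15 155

# coefficient bookkeeping for the S_{3,2} relation above
c3 = 8 + 6 * 28 + 24 * 14        # S_{3,2} contributions: 512 = 2^9
c4 = 6 * 64 * 15 + 24 * 112 * 15  # S_{4,1} contributions
c5 = 24 * 2 ** 9 * 155            # S_{5,1/2} contribution
print(c3, c4 // c3, c5 // c3)     # 512 90 3720
```

Dividing through by $c_3=2^9$ reproduces exactly the coefficients $90$ and $3720$ appearing in (\ref{Sexpress}).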
We will now show that in fact this form does {\it not} vanish identically on $\TT_5$, from which it follows that it does {\it not} vanish identically on $\TT_g$ for any $g\ge 5$. \section{The generic non-vanishing of $(f_4^{(5)})^2-f_8^{(5)}$ along $\TT_5$} Writing out the formulas for $S_{0,16}^g$ and $S_{1,8}^g$ explicitly, we have from (\ref{fS}) $$ F_g(\tau):=-2^{2g}\left((f_4^{(g)})^2(\tau)-f_8^{(g)}(\tau)\right)=2^g\sum\limits_{m\in\F^{2g}} \theta_m^{16}(\tau)-\left(\sum\limits_{m\in\F^{2g}} \theta_m^8(\tau)\right)^2 $$ We now prove that the cosmological constant $\Xi^{(5)}(\tau)$ for the ansatz above does not vanish identically for $\tau\in\TT_5$. By theorem \ref{XiF} it is equivalent to proving the following for $g=5$. \begin{thm}\label{nonvanish} The modular form $F_g$ does not vanish identically on $\TT_g$. \end{thm} \begin{proof} We will prove this by showing that $F_5$ does not generically vanish in a neighborhood of the boundary divisor $\delta_0\subset\overline{\M_5}$ (or does not vanish along $\delta_0$ to second order), and then deducing that $F_g$ does not vanish identically for any $g\ge 5$. We recall that in general the open part of $\delta_0\subset\overline{\M_g}$ is parameterized by $\M_{g-1,2}$, and first prove the following lemma that seems to be widely known. \begin{lm} In an appropriate basis for homology, the period matrix for a point $(C;p,q)\in\M_{g-1,2}\subset\partial\overline{\M_g}$ is given by $$ \left(\begin{matrix} i\infty & A(p)-A(q)\\ (A(p)-A(q))'& \tau\end{matrix} \right), $$ where $A:C\to {\rm Jac}(C)$ is the Abel-Jacobi map of the curve to its Jacobian, and $\tau$ is the period matrix of ${\rm Jac}(C)$. \end{lm} \begin{proof} Choose a basis for $H_1$ of the nodal curve $C/p\sim q$ to consist of a basis of $H_1(C)$ together with taking the residue at $p=q$, and a path going from $p$ to $q$. 
The dual basis for the sections of the dualizing sheaf on $C/p\sim q$ then consists of $g-1$ holomorphic differentials on $C$ and a differential with simple poles (and opposite residues) at $p$ and $q$ with zero integrals over all cycles on $C$. Integrating the holomorphic differentials over loops on $C$ gives $\tau$, integrating them from $p$ to $q$ gives by definition $A(p)-A(q)$, and we get the $i\infty$ for integrating the dipole differential from $p$ to $q$. \end{proof} We now use the Fourier-Jacobi expansion of the theta functions near the boundary, see \cite{vG}: $$ \thetat{0\ \e}{\de_1\ \de}\left(\begin{matrix}\tau_{11}&z^t\\ z& \tau\end{matrix}\right)= \thetat\e\de(\tau,0)+2e^{\pi i\de_1}q^4\thetat\e\de(\tau,z)+O(q^{16}) $$ and $$ \thetat{1\ \e}{\de_1\ \de}\left(\begin{matrix}\tau_{11}& z^t\\ z& \tau\end{matrix}\right)=2e^{\pi i\de_1/2}q\thetat\e\de(\tau,z/2)+O(q^9) $$ where as usual we let $q:=\exp(\pi i \tau_{11}/4)$. Let us now compute the first terms of the $q$-expansion of $F_g$ as $q\to 0$ (i.e. near the boundary, as $\tau_{11}\to i\infty$). By inspection we see that the two lowest order terms are $O(1)$ and $O(q^8)$ respectively, so we compute them using $$ \sum\limits_{\e,\de\in\F^{g-1}}\sum\limits_{\de_1\in\F} \theta^N\tch{0\ \e}{\de_1\ \de}\left(\begin{matrix}\tau_{11}&z^t\\ z& \tau\end{matrix}\right) $$ $$=2\sum\limits_{\e,\de\in\F^{g-1}} \theta^N\tch\e\de(\tau,0)+2\binom{N}{2}(2q^4)^2\theta^{N-2}\tch\e\de(\tau,0)\theta^2 \tch\e\de(\tau,z)+o(q^8) $$ and (recalling that odd theta constants vanish identically, so that in the following sum only the value of $\de_1$ making the characteristic even contributes, while $e^{N\pi i\de_1/2}=1$ since $4$ divides $N$) $$ \sum\limits_{\e,\de\in\F^{g-1}}\sum\limits_{\de_1\in\F} \theta^N\tch{1\ \e}{\de_1\ \de}\left(\begin{matrix}\tau_{11}& z^t\\ z& \tau\end{matrix}\right)=2^Nq^N\sum\limits_{\e,\de\in\F^{g-1}}\theta^N\tch\e\de(\tau,z/2)+o(q^N).
$$ For the terms up to $O(q^8)$ in $F_g$ we then get from the above $$ \sum\limits_{\e,\de\in\F^g}\theta^{16}\tch\e\de\left(\begin{matrix}\tau_{11}&z^t\\ z& \tau\end{matrix}\right) $$ $$ =2\sum\limits_{\e,\de\in\F^{g-1}{\rm\ even}} \theta^{16}\tch\e\de(\tau,0)+2\binom{16}{2}(2q^4)^2\theta^{14}\tch\e\de(\tau,0)\theta^2 \tch\e\de(\tau,z)+o(q^8) $$ with no contribution from the case of $\e_1=1$, while $$ \sum\limits_{\e,\de\in\F^g}\theta^{8}\tch\e\de\left(\begin{matrix}\tau_{11}&z^t\\ z& \tau\end{matrix}\right)=2\sum\limits_{\e,\de\in\F^{g-1}} \theta^{8}\tch\e\de(\tau,0)+2\binom{8}{2}(2q^4)^2\theta^{6}\tch\e\de(\tau,0)\theta^2 \tch\e\de(\tau,z) $$ $$+2^8q^8\sum\limits_{\e,\de\in\F^{g-1}}\theta^8\tch\e\de(\tau,z/2)+o(q^8). $$ Combining these, we get for the lowest order terms of $F_g$ the expression $$ F_5\left(\begin{matrix}\tau_{11}&z^t\\ z& \tau\end{matrix}\right)= 4F_{4}(\tau)+2^5\cdot960q^8\sum\limits_{\e,\de\in\F^{g-1}} \theta^{14}\tch\e\de(\tau,0)\theta^2\tch\e\de(\tau,z) $$ $$ -2q^8\!\!\!\sum\limits_{\alpha,\beta\in\F^{g-1}}\!\!\! \theta^8\tch\alpha\beta(\tau,0) \left( 448\!\!\!\sum\limits_{\e,\de\in\F^{g-1}}\!\!\! \theta^6\tch\e\de(\tau,0)\theta^2\tch\e\de(\tau,z) +512\!\!\!\sum\limits_{\e,\de\in\F^{g-1}}\!\!\! \theta^8\tch\e\de(\tau,z/2)\right) $$ In particular, since $F_4$, being the Schottky polynomial, vanishes identically on $\M_4$, and by the above lemma for a boundary point of $\delta_0\subset\overline{\M_5}$ we have $\tau\in\TT_4$ above, this means that $F_5|_{\delta_0}=0$. However, if $F_5$ vanished identically on $\M_5$, then it would vanish along $\delta_0$ to any order in the expansion. \smallskip For the $O(q^8)$ term of the expansion of $F_g$, note that as a function of $z$ it is a linear combination of $\theta^8\tch\e\de(\tau,z/2)$ and of $\theta^2\tch\e\de(\tau,z)$.
Thus as a function of $z$ it is a section of $2\Theta$, and thus by the lemma above the $O(q^8)$ term of the expansion of $F_g$ near $\delta_0\subset\partial\overline{\M_g}$ vanishes if and only if this term is a section of the linear system $\Gamma_{00}$ defined in \cite{vgvdg} (and which turned out to be relevant for the study of the 2- and 3-point functions in genus 3, cf.~\cite{GSM,MV}). We will now use Riemann relations to express $\theta^8\tch\e\de(\tau,z/2)$ as a linear combination of the $\theta^2\tch\e\de(\tau,z)$. \begin{lm} We have the following identity: $$ \sum\limits_{\e,\de\in\F^{g-1}}\theta^8\tch\e\de(\tau,z/2) =\sum\limits_{\e,\de\in\F^{g-1}}\theta^6\tch\e\de(\tau,0)\theta^2\tch\e\de(\tau,z). $$ \end{lm} \begin{proof} A special case of Riemann relations (\ref{riemannrelation}) is $$ \theta^4\tch\e\de(\tau,z/2)=2^{1-g}\sum\limits_{\alpha,\beta \in\F^{g-1}} (-1)^{\alpha\cdot\de+\beta\cdot\e}\theta^3\tch\alpha\beta(\tau,0)\thetat\alpha\beta(\tau,z). $$ We will now use this identity twice to get an expression for $\theta^8\tch\e\de(\tau,z/2)$ as a double sum, and then sum over all $\e,\de\in\F^{g-1}$ (note that we include the odd ones), to get $$ \sum\limits_{\e,\de\in\F^{g-1}}\theta^8\tch\e\de(\tau,z/2) =2^{2-2g}\sum\limits_{\epsilon,\delta,\alpha,\beta,\sigma,\mu\in\F^{g-1}} $$ $$ (-1)^{(\alpha+\sigma)\cdot\de+(\beta+\mu)\cdot\e}\theta^3\tch\alpha\beta(\tau,0)\thetat\alpha\beta(\tau,z) \theta^3\tch\sigma\mu(\tau,0)\thetat\sigma\mu(\tau,z). $$ Notice that the only dependence on $\e,\de$ in the sum on the right is in the sign. Recalling that in general for any $B\in\F^g$ $$ \sum\limits_{A\in\F^g}(-1)^{A\cdot B}=2^g\de_{B,0} $$ (where $\de_{B,0}$ is the Kronecker symbol), we see that for fixed $\alpha,\beta,\sigma,\mu$ the sum over $\e,\de$ on the right-hand side of the formula above is non-zero if and only if $\alpha+\sigma=\beta+\mu=0$ (i.e. iff $\alpha=\sigma$ and $\beta=\mu$).
The factor of $2^{2g-2}$ for summing over $\e,\de$ in this case cancels out the $2^{2-2g}$ from Riemann relations, so that we finally obtain $$ \sum\limits_{\e,\de\in\F^{g-1}}\theta^8\tch\e\de(\tau,z/2) =\sum\limits_{\alpha,\beta\in\F^{g-1}}\theta^6\tch\alpha\beta(\tau,0)\theta^2\tch\alpha\beta(\tau,z) $$ as claimed. \end{proof} Using the lemma, the $q^8$ term of the $q$-expansion of $F_g$ becomes (note that $448+512=960$) $$ 1920q^8\left(2^{g-1}\sum\limits_{\e,\de\in\F^{g-1}} \theta^{14}\tch\e\de(\tau,0)\theta^2\tch\e\de(\tau,z)\right. $$ $$\left.- \sum\limits_{\alpha,\beta\in\F^{g-1}} \theta^8\tch\alpha\beta(\tau,0) \sum\limits_{\e,\de\in\F^{g-1}} \theta^6\tch\e\de(\tau,0)\theta^2\tch\e\de(\tau,z)\right) $$ If we now denote, in the spirit of \cite{vgvdg}, the variables $$ X_{\e,\de}:=\theta^2\tch\e\de(\tau,0) $$ and express $F_{g-1}$ as a function of $X$'s, note that the above expression for the $q^8$ is equal to $$ v_{F_{g-1}}:=\sum\limits_{\e,\de\in\F^{g-1}}240 \frac{\partial F_{g-1}}{\partial X_{\e,\de}}\ \theta^2\tch\e\de(z) $$ (note that when differentiating $X_{\e,\de}^8=\theta^{16}\tch\e\de(\tau,0)$, one picks up a factor of 8, and while differentiating $(\sum X_{\e,\de}^4)^2$, one also picks up the factor of $2\cdot 4=8$ for each term). For $g=5$ the above discussion implies that for the vanishing of the cosmological constant $v_{F_4}$ must lie in $\Gamma_{00}$ for every $\tau \in \TT_4$. Hence, for every $\tau \in \TT_4$, we must have $$ \frac{\p^2}{\p z_i\p z_j}v_{F_4}(\tau, 0)=0 $$ for any pair of indices $1\le i\le j\le 4$. By the heat equation for the theta function, this is equivalent to having $$ \frac{\partial F_4 }{\partial \tau_{ij}}(\tau)=0 $$ for any $\tau\in\TT_4$, but this contradicts the irreducibility of $F_4$, the defining equation of $\TT_4\subset\HH_4$. 
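The theta identity proved in the lemma above is also easy to confirm numerically; a genus 1 sketch of our own (i.e. $g-1=1$, summing over all four characteristics, with the series truncated):

```python
import cmath

def theta(eps, delta, tau, z, N=30):
    """Genus-1 theta function with characteristic [eps, delta]."""
    return sum(cmath.exp(cmath.pi * 1j * ((n + eps / 2) ** 2 * tau
                                          + 2 * (n + eps / 2) * (z + delta / 2)))
               for n in range(-N, N + 1))

chars = [(e, d) for e in (0, 1) for d in (0, 1)]  # odd ones included
tau, z = 0.2 + 1.1j, 0.31 + 0.17j
lhs = sum(theta(e, d, tau, z / 2) ** 8 for e, d in chars)
rhs = sum(theta(e, d, tau, 0) ** 6 * theta(e, d, tau, z) ** 2 for e, d in chars)
print(abs(lhs - rhs))   # numerically zero
```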
To show now that $F_g$ does not vanish identically on $\TT_g$ for any $g>5$, note that in the above expansion of $F_g$ along $\delta_0$ the constant term is $4F_{g-1}$, so if $F_{g-1}$ does not vanish identically on $\TT_{g-1}$, then $F_g$ cannot vanish identically on $\TT_g$. \end{proof} Note that for $g=5$ the slope of the Brill-Noether divisor is $6+\frac{12}{g+1}=8$. Since the slope conjecture is valid for $\overline{\M_5}$ (see \cite{FP} for more discussion), this is the unique effective divisor on $\overline{\M_5}$ of slope 8. On the other hand, the slope of the divisor of $F_5$ is equal to 8 as well. \begin{cor} The zero locus of the modular form $F_5$ on $\M_5$ is the Brill-Noether divisor or, equivalently, the locus of trigonal curves. \end{cor} It is interesting to ask whether $F_g$ might vanish on the locus of trigonal curves for any genus (note that it is known that $F_g$ vanishes on the locus of hyperelliptic curves). \begin{rem} We recall that there exists the Prym map $p:{\mathcal R}_5\to\A_4$ from the moduli space of curves of genus 5 with a choice of a point of order two. Let us also denote by $q:{\mathcal R}_5\to\M_5$ the forgetful finite map. From the Recillas construction it then follows, cf.~\cite{BL}, that the trigonal locus in $\M_5$ is in fact equal to $q\circ p^{-1}(\M_4)$, and it would be interesting to understand the vanishing of $F_5$ in these terms, by exploring the Schottky-Jung identities more explicitly. \end{rem} \begin{rem} In a forthcoming paper \cite{OPSMY} the space of cusp forms $[\Gamma_4(1,2),8]_0$, in which $\Xi^{(4)}[0]$ lies, is studied, and it is shown that the dimension of this space is equal to 2. By looking at the basis for this space it follows that there exists a unique cusp form with prescribed factorization properties, and thus $\Xi^{(4)}[0]$ is the unique form in $[\Gamma_4(1,2),8]_0$ satisfying the factorization constraints.
\end{rem} \begin{rem} In \cite{FD} 7 linearly independent modular forms in $[\Gamma_4(1,2),8]$ are constructed, which are polynomials in squares of theta constants, and $\Xi^{(4)}[0]$ is expressed as their linear combination. If this expression for $\Xi^{(4)}[0]$ is in fact divisible by $\theta_0^2(\tau,0)$, then it would follow that in the expression for the genus 4 two-point function $$ \sum_m \frac{ \Xi^{(4)}[m](\tau)} {\theta_m^2(\tau,0)}\theta_m^2(\tau,z) $$ all ratios are polynomials in squares of theta constants. It is tempting then to conjecture that this two-point function is a constant multiple of $v_{F_4}$, and thus the vanishing of the genus 4 two-point function would be equivalent (see \cite{GSM}) to the function $v_{F_4}(\tau)$ lying in $\Gamma_{00}$ for any $\tau\in\TT_4$, which by the above discussion is {\it not} the case. This question merits further investigation. \end{rem} In contrast to genera up to 4, it turns out that in genus 5 the modular forms $\Xi^{(5)}[m]$ are not the unique forms on $\TT_5$ satisfying the factorization constraints. \begin{prop}\label{new} For any constant $c$ (independent of $m$) the expressions $$\Xi'^{(5)}[m]:=\Xi^{(5)}[m]+c(f_4^2-f_8)$$ are modular forms of weight 8 on $\TT_5$ with respect to the subgroup of $\operatorname{Sp}(5,\Z)$ fixing $m$, are permuted among themselves under the action of $\operatorname{Sp}(5,\Z)$, and satisfy the factorization constraints. Moreover, for $c=\frac{17}{38192}$ the cosmological constant $\sum_m\Xi'^{(5)}[m]$ vanishes identically on $\TT_5$. \end{prop} \begin{proof} Note first that $f_4^2-f_8$ is a modular form of weight 8 with respect to all of $\operatorname{Sp}(5,\Z)$, and thus $\Xi'^{(5)}[m]$ are modular and permuted by the group action as claimed.
To determine the factorization of $\Xi'^{(5)}[m]$ on $\TT_i\times\TT_{5-i}$, we compute in general for $\tau_1\in\TT_i,\tau_2\in\TT_{g-i}$ $$ f_4^2\left(\begin{matrix}\tau_1&0\\ 0&\tau_2\end{matrix}\right) -f_8\left(\begin{matrix}\tau_1&0\\ 0&\tau_2\end{matrix}\right) =f_4^2(\tau_1)f_4^2(\tau_2)-f_8(\tau_1)f_8(\tau_2). $$ If $i\le 4$ (which is always the case for $g=5$), so that $F_i=0$, the above expression becomes equal to $$ f_8(\tau_1)(f_4^2(\tau_2)-f_8(\tau_2)); $$ if now $g-i\le 4$ (which is also always the case for $g=5$), this vanishes, and thus the factorization of $\Xi'$ is the same as the factorization of $\Xi$. Finally we note that by definition the cosmological constant is $$ \sum\limits_m\Xi'^{(5)}[m]= \sum\limits_m\Xi^{(5)}[m]+2^4(2^5+1)\,c\,(f_4^2-f_8) =\left(-\frac{51}{217}+528\,c \right)(f_4^2-f_8) $$ where we used the computation of the cosmological constant for $\Xi^{(5)}$ from theorem \ref{XiF} and the fact that the number of even characteristics is $2^{g-1}(2^g+1)$. Thus for $c=\frac{17}{38192}$ the cosmological constant for $\Xi'^{(5)}$ vanishes identically. \end{proof} \section{Appendix: possible generalizations to higher genus} The results of the previous section lead us to observe that a way to have modularity in the ansatz for the chiral superstring measure proposed in \cite{GR} for all $g$ is to prove a generalized version of Schottky-Jung relations, involving roots of degree $2^{k-4}$ for all $5\le k\le g$. These generalized Schottky-Jung relations should be relations induced by the Riemann relations in genus $g-k+4$.
For example, cf.~\cite{AC}, a Riemann relation in genus 4 $$r_1\pm r_2\pm r_3\pm r_4=0,$$ where each $r_i$ is of the form $$ r_i=\theta_{m_1}\theta_{m_2}\theta_{m_3}\theta_{m_4}, $$ induces a Schottky relation in genus $6$ of the form $$ S_1\pm S_2\pm S_3\pm S_4=0 $$ where each $S_i$ is a fourth root of a monomial of degree 16 in the theta constants of a Jacobian of a genus 6 curve, with the set of characteristics satisfying some obvious conditions. If such generalized Schottky-Jung relations hold, then as an immediate consequence of Riemann's formula we have the following relations. \begin{prop}\label{Srelation} If the generalized Schottky-Jung relations hold, then for any $g\geq k+2$, for any $\tau\in\TT_g$, we have $$ (2^{2g-2k}-1)S_k^g=6(2^{k+1}-1) S_{k+1}^g+ 8(2^{k+2}-1)(2^{k+1}-1) S_{k+2}^g $$ (this is a generalization of (\ref{Sexpress})). \end{prop} If this is the case, then by eliminating $S_k^g$ starting from the highest one, $S_g^g$, the cosmological constant $\Xi^{(g)}$ given by (\ref{Xi}) on $\TT_g$ can be written as a linear combination of $S_{0,16}^g$ and $S_{1,8}^g$, or as a linear combination of $(f_4^{(g)})^2$ and $f_8^{(g)}$. Thus it makes sense to ask if $\Xi^{(g)}$ is proportional to the restriction of $(f_4^{(g)})^2-f_8^{(g)}$ also when $g>5$. Eberhard Freitag confirmed this, using a computer, for $g<1000$. We now give a (geometric, rather than combinatorial) proof of this. \begin{prop} If proposition \ref{Srelation} holds, then we have for any $\tau\in\TT_g$ $$\Xi^{(g)}(\tau)={\rm const}(f_4^2(\tau)-f_8(\tau)).$$ \end{prop} \begin{proof} From the above discussion we must have $$\Xi^{(g)}= a_g(f_4^{(g)})^2- b_g f_8^{(g)}$$ for some constants $a_g$ and $b_g$. Let us introduce the Siegel $\Phi$-operator: for any $f:\HH_g\to \C$ we let $$ \Phi(f)(\tau_1):=\lim_{ \lambda \longrightarrow + \infty}f \begin{pmatrix} \tau_1 &0\\ 0&i\lambda\end{pmatrix} $$ for all $\tau_1\in\HH_{g-1}$.
This operator has relevance in the theory of modular forms, cf.~\cite{I,Fr} for details. Applying the Siegel $\Phi$ operator to forms defined on $\TT_g$ we get forms defined on $\TT_{g-1}$. It is well known and easy to show using the expression in terms of $S_0$ and $S_1$ that $$ \Phi(f_4^{(g)})=f_4^{(g-1)}\quad{\rm and } \quad \Phi(f_8^{(g)})=f_8^{(g-1)}. $$ An easy computation, cf.~\cite{RSM}, then gives $\Phi(\Xi^{(g)})=0$, and moreover $$ \Phi^{g-4}(a_g(f_4^{(g)})^2- b_g f_8^{(g)})=a_g(f_4^{(4)})^2- b_g f_8^{(4)}. $$ Thus we have $$ 0= \Phi^{g-4}(\Xi^{(g)})=a_g(f_4^{(4)})^2- b_g f_8^{(4)}. $$ This implies that $a_g=b_g$, since $(f_4^{(4)})^2-f_8^{(4)}$ is the defining equation for $\TT_4\subset\HH_4$. Hence setting $c_g:=a_g=b_g$, we have the desired equality $$ \Xi^{(g)}= c_g((f_4^{(g)})^2- f_8^{(g)}) $$ for all $g$. \end{proof} \begin{cor} If the generalized Schottky-Jung identities hold (i.e.\ if proposition \ref{Srelation} holds), then $\Xi^{(g)}$ vanishes identically on $\TT_g$ if and only if $g\le 4$. \end{cor} \begin{rem} It is tempting to try to construct, similarly to proposition \ref{new}, a corrected ansatz for arbitrary genus that would satisfy (assuming the above holds) the factorization constraints and give a vanishing cosmological constant. However, already in genus 6 it is not clear how to proceed. It is natural to try to add a multiple of $f_4^2-f_8$ to $\Xi^{(6)}$, but similarly to the proof of proposition \ref{new} we see that in this case on $\TT_1\times\TT_5$ the term $f_8^{(1)}F_5$ is added to the factorization, which is proportional to $f_8^{(1)}(\Xi'^{(5)}[m]-\Xi^{(5)}[m])$, but not proportional to the necessary $\Xi^{(1)}[m](\Xi'^{(5)}[m]-\Xi^{(5)}[m])$. \end{rem}
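For reference, the standard computation behind the action of the Siegel $\Phi$-operator on individual theta constants, used above, can be sketched as follows (a hedged sketch; the exact compatibility with $f_4$ and $f_8$ relies on the normalizations implicit in their expressions through $S_0$ and $S_1$):

```latex
% Write a genus-g characteristic as m = [eps, e; delta, d], with (e, d) the
% entries corresponding to the last coordinate. Splitting the theta series
% over the last lattice variable gives
\theta_m\begin{pmatrix}\tau_1&0\\ 0&i\lambda\end{pmatrix}
  =\theta_{\left[\begin{smallmatrix}\epsilon\\ \delta\end{smallmatrix}\right]}(\tau_1)\,
   \theta_{\left[\begin{smallmatrix}e\\ d\end{smallmatrix}\right]}(i\lambda),
\qquad
\theta_{\left[\begin{smallmatrix}e\\ d\end{smallmatrix}\right]}(i\lambda)
  =\sum_{n\in\Z}e^{-\pi\lambda\left(n+\frac{e}{2}\right)^2}\,
    e^{\pi i\left(n+\frac{e}{2}\right)d}
  \ \xrightarrow{\ \lambda\to\infty\ }\
  \begin{cases}1,& e=0,\\ 0,& e=1.\end{cases}
% Hence Phi annihilates the characteristics whose last top entry equals 1 and
% restricts the remaining ones to genus g-1; after matching normalizations
% this yields the compatibilities Phi(f_4^{(g)}) = f_4^{(g-1)} and
% Phi(f_8^{(g)}) = f_8^{(g-1)} quoted above.
```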
\section{Introduction} A question of interest in a wide range of problems in economics and operations research is whether the solution to an optimization problem is monotone with respect to its parameters. The analysis of this question is called \emph{comparative statics}.\protect\footnote{ See \cite{topkis2011supermodularity} for a comprehensive treatment of comparative statics methods. } Following Topkis' seminal work \citep{topkis1978minimizing}, comparative statics methods have received significant attention in the economics and operations research literature.\protect\footnote{ See for example \cite{licalzi1992subextremal}, \cite{milgrom1994monotone}, \cite{athey2002monotone}, \cite{echenique2002comparative}, \cite{antoniadou2007comparative}, \cite{quah2007comparative}, \cite{quah2009comparative}, \cite{shirai2013welfare}, \cite{nocetti2015robust}, \cite{wang2015precautionary}, \cite{barthel2018directional}, and \cite{koch2019index}. } While comparative statics methods are usually applied to static optimization problems, they can also be applied to dynamic optimization problems. In particular, these methods can be used to study how the policy function\footnote{\cite{muller1997does} and \cite{smith2002structural} study how the optimal value function changes with respect to the parameters of the dynamic optimization problem, such as the single-period payoff function and the transition probability function. 
In contrast, in this paper, we analyze the optimal policy function.} changes with respect to the current state of the system or with respect to other parameters of the dynamic optimization problem.\protect\footnote{ For comparative statics results in dynamic optimization models see \cite{serfozo1976monotone}, \cite{lovejoy1987ordered}, \cite{amir1991one}, \cite{hopenhayn1992stochastic}, \cite{mirman2008qualitative}, \cite{topkis2011supermodularity}, \cite{krishnamurthy2016partially}, \cite{smith2017risk}, \cite{lehrer2018effect}, and \cite{l2017supermodular}.} That is, for multi-period optimization models, comparative statics methods can be used to determine how the current period's optimal decision changes with respect to the parameters of the optimization problem. For example, in a Markov decision process, under suitable conditions on the payoff function and on the transition function, comparative statics methods can be applied to show that the optimal decision is increasing in the discount factor when the state of the system is fixed. But since the model is dynamic and includes uncertainty, the states' evolution is different under different discount factors, and thus, it is not clear whether the future optimal decision is increasing in the discount factor even when the current optimal decision is increasing in the discount factor for a fixed state. The state of the system in period $t >1$ is a random variable from the point of view of period $1$, and thus, the optimal decision in period $t$, which depends on the state of the system in period $t$, is a random variable given the information available in period $1$. In this paper, we analyze how the expected value of the optimal decision in period $t$ changes as a function of the optimization problem parameters in the context of Markov decision processes (MDP). 
We call this analysis \emph{stochastic comparative statics}. More precisely, let $(E,\succeq )$ be a partially ordered set that contains some parameters of the MDP. For example, $E$ can be the set of all transition probability functions, the set of all discount factors, and/or a set of parameters that influence the payoff function. Suppose that under the parameters $e \in E$ a stationary policy function is given by $g(s,e)$ where $s$ is the state of the system. Given the policy function $g$ and the system's initial state, the system's states follow a stochastic process. Suppose that the states' distribution in period $t$ is described by the probability measure $\mu ^{t}(ds,e)$. We are interested in finding conditions that ensure that the expected decision in period $t$, $\mathbb{E}^{t}(g(e)) =\int g(s ,e)\mu ^{t}(ds ,e)$, is increasing in the parameters $e$ on $E$. The expected value $\mathbb{E}^{t}(g(e))$ is interpreted in two different ways. From a probabilistic point of view, $\mathbb{E}^{t}(g(e))$ is the expected optimal decision in period $t$ as a function of the parameters $e$. For example, in investment theory, this expected value usually represents the expected capital accumulation in the system in period $t$ \citep{stokey1989}. In inventory management, it represents the expected inventory in period $t$ \citep{krishnan2010inventory}, and in income fluctuation problems it represents the expected wealth accumulation (see \cite{huggett2004precautionary} and \cite{bommier2018risk}) in period $t$. From a deterministic point of view, if we consider a population of ex-ante identical agents whose states evolve independently according to the stochastic process that governs the states' dynamics, then $\mu ^{t}$ represents the empirical distribution of states in period $t$. In this case, $\mathbb{E}^{t}(g(e))$ corresponds to the average decision in period $t$ of this population given the parameters $e$.
This latter interpretation is common in the growing literature on stationary equilibrium models and mean field equilibrium models. In this literature, while the focus is on the analysis of equilibrium, some stochastic comparative statics results have been obtained (see \cite{adlakha2013mean} and \cite{acemoglu2015robust}). These stochastic comparative statics results are useful in analyzing the equilibrium of these models, in particular for proving comparative statics results and for establishing the uniqueness of an equilibrium (see \cite{hopenhayn1992entry}, \cite{light2017uniqueness}, \cite{acemoglu2018equilibrium}, and \cite{light2018mean}). The goal of this paper is to provide general stochastic comparative statics results in the context of an MDP. In particular, we provide various sufficient conditions on the primitives of MDPs that guarantee stochastic comparative statics results with respect to important parameters of MDPs, such as the discount factor, the single-period payoff function, and the transition probability function. We also provide novel comparative statics results with respect to these parameters. For example, we show that under a standard set of conditions that implies that the policy function is increasing in the state, the policy function is also increasing in the discount factor (see Section \ref{Section: discount}). We apply our results in capital accumulation models with adjustment costs \citep{hopenhayn1992stochastic}, in dynamic pricing models with reference effects \citep{popescu2007dynamic}, and in controlled random walks. As an example, consider the following controlled random walk $s_{t+1} =s_{t} +a_{t} +\epsilon _{t+1}$ where $s_{t}$ is the state of the system in period $t$, $a_{t}$ is the action chosen in period $t$, and $\{\epsilon _{t}\}_{t =1}^{\infty}$ are random variables that are independent and identically distributed across time.
In each period, a decision maker receives a reward that depends on the current state of the system and incurs a cost that depends on the action that the decision maker chooses in that period. The reward function is increasing in the state of the system and the cost function is increasing in the decision maker's action. The decision maker's goal is to maximize the expected sum of rewards. We provide sufficient conditions on the reward function and on the cost function that guarantee that the decision maker's current action and the expected future actions increase when the distribution of the random noise $\epsilon$ is higher in the sense of stochastic dominance. Since our results are intuitive and the sufficient conditions that we provide in order to derive stochastic comparative statics results are satisfied in some dynamic programs of interest, we believe that our results apply in other applications as well. The rest of the paper is organized as follows. Section \ref{Section MODEL} presents the dynamic optimization model. Section \ref{Section notations} presents definitions and notations that are used throughout the paper. In Section \ref{SCS} we present our main stochastic comparative statics results. In Section \ref{Section: discount} we study changes in the discount factor and in the single-period payoff function. In Section \ref{Section: transition} we study changes in the transition probability function. In Section \ref{Sectopn applications} we apply our results to various models. In Section \ref{Section final} we provide a summary, followed by an Appendix containing proofs. \section{\label{Section MODEL}The model} In this section we present the main components and assumptions of the model.
For concreteness, we focus on a standard discounted dynamic programming model, sometimes called a Markov decision process.\protect\footnote{ All our results can be applied to other dynamic programming models, such as positive dynamic programming and negative dynamic programming. } For a comprehensive treatment of dynamic programming models, see \cite{feinberg2012handbook} and \cite{puterman2014markov}. We define a discounted dynamic programming model in terms of a tuple of elements $(S ,A ,\Gamma ,p ,r ,\beta )$. $S \subseteq \mathbb{R}^{n}$ is a Borel set called the state space. $\mathcal{B}(S)$ is the Borel $\sigma $-algebra on $S$. $A \subseteq \mathbb{R}$ is the action space. $\Gamma$ is a measurable subset of $S \times A$. For all $s \in S$, the non-empty and measurable $s$-section $\Gamma(s)$ of $\Gamma$ is the set of feasible actions in state $s \in S$. $p :S \times A \times \mathcal{B}(S) \rightarrow [0 ,1]$ is a transition probability function. That is, $p(s ,a , \cdot)$ is a probability measure on $S$ for each $(s ,a) \in S \times A$ and $p(\cdot, \cdot,B)$ is a measurable function for each $B \in \mathcal{B}(S)$. $r :S \times A \rightarrow \mathbb{R}$ is a measurable single-period payoff function. $0 <\beta <1$ is the discount factor. There is an infinite number of periods $t \in \mathbb{N} :=\{1 ,2 ,\ldots\}$. The process starts at some state $s(1) \in S$. Suppose that at time $t$ the state is $s(t)$. Based on $s(t)$, the decision maker (DM) chooses an action $a(t) \in \Gamma (s(t))$ and receives a payoff $r(s(t) ,a(t))$. The probability that the next period's state $s(t +1)$ will lie in $B \in \mathcal{B}(S)$ is given by $p(s(t) ,a(t) ,B)$. Let $H =S \times A$ and $H^{t}:=\underbrace{H \times \cdots \times H}_{t-1 \text{ times}} \times S$.
A policy $\sigma$ is a sequence $(\sigma _{1} ,\sigma _{2} , \ldots )$ of Borel measurable functions $\sigma _{t} :H^{t} \rightarrow A$ such that $\sigma _{t}(s(1) ,a(1) ,\ldots ,s(t)) \in \Gamma (s(t))$ for all $t \in \mathbb{N}$ and all $(s(1) ,a(1) ,\ldots ,s(t))\in H^{t}$. For each initial state $s(1)$, a policy $\sigma $ and a transition probability function $p$ induce a probability measure over the space of all infinite histories $H^{\infty }$.\protect\footnote{ The probability measure on the space of all infinite histories $H^{\infty }$ is uniquely defined by the Ionescu Tulcea theorem (for more details, see \cite{bertsekas1978stochastic} and \cite{feinberg1996measurability}). } We denote the expectation with respect to that probability measure by $\mathbb{E}_{\sigma }$, and the associated stochastic process by $\{s(t) ,a(t)\}_{t =1}^{\infty }$. The DM's goal is to find a policy that maximizes his expected discounted payoff. When the DM follows a policy $\sigma$ and the initial state is $s \in S$, his expected discounted payoff is given by \begin{equation*}V_{\sigma}(s) =\mathbb{E}_{\sigma }\sum \limits_{t =1}^{\infty }\beta^{t -1}r(s(t),a(t)). \end{equation*}Define \begin{equation*}V(s) =\sup _{\sigma }V_{\sigma }(s). \end{equation*} We call $V :S \rightarrow \mathbb{R}$ the value function. Define the operator $T:B(S)\rightarrow B(S)$ where $B(S)$ is the space of all functions $f :S \rightarrow \mathbb{R}$ by \begin{equation*}Tf(s) =\max _{a \in \Gamma (s)}h(s ,a ,f), \end{equation*} where \begin{equation}h(s ,a ,f) =r(s ,a) +\beta \int _{S}f(s^{ \prime })p(s ,a ,ds^{ \prime }). \label{eq:h} \end{equation} Under standard assumptions on the primitives of the MDP,\footnote{The state and action spaces can be continuous or discrete. When we discuss convex functions on $S$ we assume that $S$ is a convex set.} standard dynamic programming arguments show that the value function $V$ is the unique function that satisfies $TV =V$.
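For a finite state and action space, the operator $T$ and its fixed-point computation can be sketched directly in code; the payoff and kernel numbers below are illustrative stand-ins, not part of the model in the text:

```python
# (Tf)(s) = max_{a in Gamma(s)} [ r(s, a) + beta * sum_{s'} f(s') p(s, a, s') ]
# on a finite grid: r[s][a] is the payoff, p[s][a][t] the transition kernel.
def bellman(f, r, p, beta):
    n = len(f)
    return [max(r[s][a] + beta * sum(p[s][a][t] * f[t] for t in range(n))
                for a in range(len(r[s])))
            for s in range(n)]

def value_iteration(r, p, beta, tol=1e-10):
    f = [0.0] * len(r)
    while True:
        f_new = bellman(f, r, p, beta)
        if max(abs(x - y) for x, y in zip(f_new, f)) < tol:
            return f_new
        f = f_new

# Two states, two actions: action 1 costs 1 today but moves the chain up.
r = [[0.0, -1.0], [2.0, 1.0]]
p = [[[0.8, 0.2], [0.1, 0.9]],
     [[0.8, 0.2], [0.1, 0.9]]]
V = value_iteration(r, p, beta=0.9)
```

Since $T$ is a $\beta$-contraction in the sup norm, the iterates converge geometrically, which is the standard justification for the uniform convergence $T^{n}f \rightarrow V$ assumed in the sequel.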
In addition, there exists an optimal stationary policy and the optimal policies correspondence \begin{equation*}G(s) =\{a \in \Gamma (s) :V(s) =h(s ,a ,V)\} \end{equation*} is nonempty, compact-valued and upper hemicontinuous. Define $g(s) =\max G(s)$. We call $g(s)$ the policy function. For the rest of the paper, we assume that the value function is the unique and continuous function that satisfies $TV =V$, $T^{n}f$ converges uniformly to $V$ for every $f \in B(S)$, and that the policy function exists.\footnote{These conditions are usually satisfied in applications. Conditions that ensure the existence and continuity of the value function and the existence of a stationary policy function are widely studied in the literature. See \cite{hinderer2016dynamic} for a textbook treatment. For recent results, see \cite{feinberg2016partially} and references therein.} \subsection{\label{Section notations} Notations and definitions } In this paper we consider a parameterized dynamic program. Let $(E , \succeq )$ be a partially ordered set that influences the DM's decisions. We denote a generic element in $E$ by $e$. Throughout the paper, we slightly abuse notation and allow an additional argument in the functions defined above. For instance, the value function of the parameterized dynamic program $V$ is denoted by \begin{equation*}V (s ,e) =\max_{a \in \Gamma (s ,e)}h (s ,a ,e ,V). \end{equation*} Likewise, the policy function is denoted by $g(s ,e)$; $r(s ,a ,e)$ is the single-period payoff function; and $h (s ,a ,e ,V)$ is the $h$ function associated with the dynamic programming problem with parameters $e$, as defined above in Equation (\ref{eq:h}). For the rest of the paper, we let $E_{p}$ be the set of all transition functions $p:S \times A \times \mathcal{B}(S) \rightarrow [0 ,1]$. When the DM follows the policy function $g(s)$ and the initial state is $s(1)$, the stochastic process $(s(t))$ is a Markov process.
The transition function of $(s(t))$ can be described by the policy function $g$ and by the transition function $p$ as follows: For all $B \in \mathcal{B}(S)$, define $\mu ^{1}(B) =1$ if $s(1) \in B$ and $0$ otherwise, and $\mu ^{2}(B) =p(s(1) ,g(s(1)) ,B)$. $\mu ^{2}(B)$ is the probability that the second period's state $s(2)$ will lie in $B$. For $t \geq 3$, define $\mu ^{t}(B) =\int _{S}p(s ,g(s) ,B)\mu ^{t -1}(ds)$ for all $B \in \mathcal{B}(S)$. Then $\mu ^{t}(B)$ is the probability that $s(t)$ will lie in $B \in \mathcal{B}(S)$ in period $t$ when the initial state is $s(1) \in S$ and the DM follows the policy function $g$. For notational convenience, we omit the reference to the initial state. All the results in this paper hold for every initial state $s(1)\in S$. We write $\mu _{i}^{t}(B)$ to denote the probability that $s$ will lie in $B \in \mathcal{B}(S)$ in period $t$, when $e_{i} \in E$ are the parameters that influence the DM's decisions and the DM follows the policy function $g(s ,e_{i})$, $i =1 ,2$. For $e_{i} \in E$, define \begin{equation*}\mathbb{E}_{i}^{t}(g(e_{i})) =\int _{S}g(s ,e_{i})\mu _{i}^{t}(ds) . \end{equation*}As we discussed in the introduction, $\mathbb{E}_{i}^{t}(g(e_{i}))$ can be interpreted in two ways. According to the first interpretation, the DM's optimal decision in period $t$ is a random variable from the point of view of period $1$. The expected value $\mathbb{E}_{i}^{t}(g(e_{i}))$ is the DM's expected decision in period $t$, given that the parameters that influence the DM's decisions are $e_{i} \in E$. Alternatively, the expected value $\mathbb{E}_{i}^{t}(g(e_{i}))$ can be interpreted as the aggregate of the decisions of a continuum of DMs facing idiosyncratic shocks. In the latter interpretation, each DM has an individual state and $\mu ^{t}$ is the distribution of the DMs over the states in period $t$.
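For a finite chain, the recursion for $\mu^{t}$ and the expected decision $\mathbb{E}^{t}(g)$ above can be sketched as follows (toy kernel and policy, not from the text):

```python
# P[s][s2] = p(s, g(s), s2) is the kernel induced by the stationary policy g.
def expected_decision_path(P, g, s1, horizon):
    """Return E^t(g) = sum_s g(s) mu^t(s) for t = 1, ..., horizon."""
    n = len(P)
    mu = [1.0 if s == s1 else 0.0 for s in range(n)]   # mu^1: point mass at s(1)
    path = []
    for _ in range(horizon):
        path.append(sum(g[s] * mu[s] for s in range(n)))
        # mu^{t+1}(B) = int_S p(s, g(s), B) mu^t(ds)
        mu = [sum(mu[s] * P[s][s2] for s in range(n)) for s2 in range(n)]
    return path

P = [[0.7, 0.3], [0.2, 0.8]]   # induced kernel (illustrative numbers)
g = [0.0, 1.0]                 # policy: act only in the high state
path = expected_decision_path(P, g, s1=0, horizon=5)
```

Here the chain drifts toward the high state, so the expected decision rises monotonically toward its stationary value.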
This interpretation is often used in stationary equilibrium models and in mean field equilibrium models (see more details in Section \ref{Section 3.3}). We are interested in the following stochastic comparative statics question: is it true that $e_{2} \succeq e_{1}$ implies $\mathbb{E}_{2}^{t}(g(e_{2})) \geq \mathbb{E}_{1}^{t}(g(e_{1}))$ for all $t \in \mathbb{N}$ (and for each initial state)? We note that for $t =1$, the stochastic comparative statics question reduces to a comparative statics question: is it true that $e_{2} \succeq e_{1}$ implies $g(s ,e_{2}) \geq g(s ,e_{1})$? We now introduce some notations and definitions that will be used in the next sections. For two elements $x ,y \in \mathbb{R}^{n}$ we write $x \geq y$ if $x_{i} \geq y_{i}$ for each $i =1 , . . . ,n$. We say that $f :\mathbb{R}^{n} \rightarrow \mathbb{R}$ is increasing if $x \geq y$ implies $f(x) \geq f(y)$. Let $D \subseteq \mathbb{R}^{S}$ where $\mathbb{R}^{S}$ is the set of all functions from $S$ to $\mathbb{R}$. When $\mu _{1}$ and $\mu _{2}$ are probability measures on $(S ,\mathcal{B}(S))$, we write $\mu _{2} \succeq _{D}\mu _{1}$ if \begin{equation*}\int _{S}f(s)\mu _{2}(ds) \geq \int _{S}f(s)\mu _{1}(ds) \end{equation*}for all Borel measurable functions $f \in D$ such that the integrals exist. In this paper we will focus on two important stochastic orders: the first order stochastic dominance and the convex stochastic order. When $D$ is the set of all increasing functions on $S$, we write $\mu _{2} \succeq _{st}\mu _{1}$ and say that $\mu _{2}$ first order stochastically dominates $\mu _{1}$. If $D$ is the set of all convex functions on $S$, we write $\mu _{2} \succeq _{CX}\mu _{1}$ and say that $\mu _{2}$ dominates $\mu _{1}$ in the convex stochastic order. If $D$ is the set of all increasing and convex functions on $S$, we write $\mu _{2} \succeq _{ICX}\mu _{1}$. 
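For distributions on a finite ordered grid, these orders can be checked mechanically; for instance, first order stochastic dominance reduces to a pointwise CDF comparison. A small checker with made-up distributions:

```python
# mu2 >=_st mu1 on an ordered grid iff F2(x) <= F1(x) for every grid point x.
def cdf(mu):
    out, acc = [], 0.0
    for w in mu:
        acc += w
        out.append(acc)
    return out

def fosd(mu2, mu1, tol=1e-12):
    """True if mu2 first order stochastically dominates mu1."""
    return all(a <= b + tol for a, b in zip(cdf(mu2), cdf(mu1)))

mu1 = [0.5, 0.3, 0.2]
mu2 = [0.2, 0.3, 0.5]            # mass shifted upward
f = [0.0, 1.0, 5.0]              # an increasing test function on the grid

assert fosd(mu2, mu1) and not fosd(mu1, mu2)
# integrals of increasing functions are ordered accordingly:
assert sum(w * x for w, x in zip(mu2, f)) >= sum(w * x for w, x in zip(mu1, f))
```
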
Similarly, for $p_{1} ,p_{2} \in E_{p}$, we write $p_{2} \succeq _{D}p_{1}$ if \begin{equation*}\int _{S}f(s^{ \prime })p_{2}(s,a ,ds^{ \prime }) \geq \int _{S}f(s^{ \prime })p_{1}(s ,a ,ds^{ \prime }) \end{equation*}for all Borel measurable functions $f \in D \subseteq \mathbb{R}^{S}$ and all $(s ,a) \in S \times A$ such that the integrals exist.\protect\footnote{ In the rest of the paper, all functions are assumed to be integrable. } If $D$ is the set of all increasing functions, convex functions, and convex and increasing functions, we write $p_{2} \succeq _{st}p_{1}$, $p_{2} \succeq _{CX}p_{1}$, and $p_{2} \succeq _{ICX}p_{1}$, respectively. For comprehensive coverage of stochastic orders and their applications, see \cite{muller2002comparison} and \cite{shaked2007stochastic}. \begin{definition} (i) We say that $p \in E_{p}$ is monotone if for every increasing function $f$ the function $\int _{S}f(s^{ \prime })p(s ,a ,ds^{ \prime })$ is increasing in $(s ,a)$. (ii) We say that $p \in E_{p}$ is convexity-preserving if for every convex function $f$ the function $\int _{S}f(s^{ \prime })p(s ,a ,ds^{ \prime })$ is convex in $(s ,a)$. (iii) Define $P_{i}(s ,B) :=p_{i}(s ,g(s ,e_{i}) ,B)$. Let $D \subseteq \mathbb{R}^{S}$. We say that $P_{i}$ is $D$-preserving if $f \in D$ implies that $\int _{S}f(s^{ \prime })P_{i}(s ,ds^{ \prime }) \in D$. If $D$ is the set of all increasing functions, convex functions, and convex and increasing functions, we say that $P_{i}$ is $I$-preserving, $CX$-preserving, and $ICX$-preserving, respectively. \end{definition} \section{\label{Section dynamics}Main results} In this section we derive our main results. In Section 3.1 we provide stochastic comparative statics results. In Sections 3.2 and 3.3 we provide conditions on the primitives of the MDP that guarantee comparative statics and stochastic comparative statics results.
\subsection{\label{SCS}Stochastic comparative statics} In this section we provide conditions that ensure stochastic comparative statics. Our approach is to find conditions that imply that the states' dynamics generated under $e_{2}$ stochastically dominate the states' dynamics generated under $e_{1}$ whenever $e_{2} \succeq e_{1}$. Theorem \ref{Theorem 1} shows that if $P_{2}$ is $D$-preserving and $P_{2}(s , \cdot ) \succeq _{D}P_{1}(s , \cdot )$ for all $s \in S$, then $\mu _{2}^{t} \succeq _{D}\mu _{1}^{t}$ for all $t \in \mathbb{N}$. A proof of Theorem \ref{Theorem 1} can be found in Chapter 5 in \cite{muller2002comparison} where the authors study stochastic comparisons of general Markov chains. Because our setting is slightly different, we provide the proof of Theorem \ref{Theorem 1} in the Appendix for completeness.\footnote{A similar result to Theorem \ref{Theorem 1} for the case of $ \succeq _{st}$ and $ \succeq _{ICX}$ can be found in \cite{huggett2004precautionary}, \cite{adlakha2013mean}, \cite{balbus2014constructive}, and \cite{acemoglu2015robust}.} The focus of the rest of the paper is on finding sufficient conditions on the primitives of the MDP in order to apply Theorem \ref{Theorem 1}. Corollary \ref{Parameter} and Theorem \ref{TRANSITION} provide sufficient conditions for $P_{2}$ to be $D$-preserving and $P_{2}(s , \cdot ) \succeq _{D}P_{1}(s , \cdot )$ when $D$ is the set of increasing functions or the set of increasing and convex functions. The results in this section require conditions on the policy function and on the primitives of the MDP. In Sections \ref{Section: discount} and \ref{Section: transition}, we provide comparative statics and stochastic comparative statics results that depend only on the primitives of the model (e.g., the transition probabilities and the single-period payoff function). \begin{theorem} \label{Theorem 1}Let $(E , \succeq )$ be a partially ordered set and let $D \subseteq \mathbb{R}^{S}$.
Let $e_{1} ,e_{2}\in E$ and suppose that $e_{2} \succeq e_{1}$. Assume that $P_{2}$ is $D$-preserving and that $P_{2}(s , \cdot ) \succeq _{D}P_{1}(s , \cdot )$ for all $s \in S$. Then $\mu _{2}^{t} \succeq _{D}\mu _{1}^{t}$ for all $t \in \mathbb{N}$. \end{theorem} In the case that $p_{2} =p_{1} =p$ and $(E , \succeq )$ is a partially ordered set that influences the agent's decisions, Theorem \ref{Theorem 1} yields a simple stochastic comparative statics result. Corollary \ref{Parameter} shows that if $g(s,e)$ is increasing in $e$, $g(s ,e_{2})$ is increasing in $s$, and $p$ is monotone, then $\mathbb{E}_{2}^{t}(g(e_{2})) \geq \mathbb{E}_{1}^{t}(g(e_{1}))$ whenever $e_{2} \succeq e_{1}$. This result is useful when $E$ is the set of all possible discount factors between $0$ and $1$, or is a set that includes parameters that influence the single-period payoff function (see Section \ref{Section: discount}). \begin{corollary} \label{Parameter}Let $e_{1} ,e_{2} \in E$ and suppose that $e_{2} \succeq e_{1}$. Assume that $g(s ,e)$ is increasing in $e$ for all $s \in S$, $g(s ,e_{2})$ is increasing in $s$, $p_{1} =p_{2} =p$, and $p$ is monotone. Then \begin{equation*}\mathbb{E}_{2}^{t}(g(e_{2})) \geq \mathbb{E}_{1}^{t}(g(e_{1})) \end{equation*}for all $t \in \mathbb{N}$ and for each initial state $s(1) \in S$. \end{corollary} In some dynamic programs we are interested in knowing how a change in the initial state will influence the DM's decisions in future periods. Corollary \ref{Initial state} shows that a higher initial state leads to higher expected decisions if the policy function is increasing in the state of the system and the transition probability function is monotone. The proof follows from the same arguments as those in the proof of Corollary \ref{Parameter}. Recall that we denote the initial state by $s(1)$. \begin{corollary} \label{Initial state} Consider two MDPs that are equivalent except for the initial states $s_{i}(1)$, $i =1 ,2$. 
Assume that $s_{2}(1) \geq s_{1}(1)$, $g(s)$ is increasing in $s$, and $p$ is monotone. Then $\mathbb{E}_{2}^{t}(g(s_{2}(1))) \geq \mathbb{E}_{1}^{t}(g(s_{1}(1)))$ for all $t \in \mathbb{N}$. \end{corollary} We now derive stochastic comparative statics results with respect to the transition probability function that governs the states' dynamics. Part (i) of Theorem \ref{TRANSITION} provides conditions that ensure that $p_{2} \succeq _{st}p_{1}$ implies $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. Part (ii) provides conditions that ensure that $p_{2} \succeq _{CX}p_{1}$ implies $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. In Section 4 we apply these results to various commonly studied dynamic optimization models. \begin{theorem} \label{TRANSITION}Let $p_{1} ,p_{2} \in E_{p}$. (i) Assume that $p_{2}$ is monotone, $g(s ,p_{2})$ is increasing in $s$, and $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$. Then $p_{2} \succeq _{st}p_{1}$ implies that $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. (ii) Assume that $p_{2}$ is monotone and convexity-preserving, $g(s ,p_{2})$ is increasing and convex in $s$, and $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$. Then $p_{2} \succeq _{CX}p_{1}$ implies that $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \end{theorem} \subsection{\label{Section: discount}A change in the discount factor or in the payoff function} In this section we provide sufficient conditions for the monotonicity of the policy function in the state variable, and for the monotonicity of the policy function in other parameters of the MDP, including the discount factor and the parameters that influence the single-period payoff function. Our stochastic comparative statics results in Section \ref{SCS} rely on these monotonicity properties. 
Thus, we provide conditions on the model's primitives that ensure stochastic comparative statics results. The monotonicity of the policy function in the state variable follows from the conditions on the model's primitives provided in \cite{topkis2011supermodularity}. We note that these conditions are not necessary for deriving monotonicity results regarding the policy function, and in some specific applications one can still derive these monotonicity results using different techniques or under different assumptions.\protect\footnote{ For example, see \cite{lovejoy1987ordered} and \cite{hopenhayn1992stochastic}. See also \cite{smith2002structural} for conditions that guarantee that the value function is monotone and has increasing differences.} Recall that a function $f :S \times E \rightarrow \mathbb{R}$ is said to have increasing differences in $(s ,e)$ on $S \times E$ if for all $e_{2} ,e_{1} \in E$ and $s_{2} ,s_{1} \in S$ such that $e_{2} \succeq e_{1}$ and $s_{2} \geq s_{1}$, we have \begin{equation*}f(s_{2} ,e_{2}) -f(s_{2} ,e_{1}) \geq f(s_{1} ,e_{2}) -f(s_{1} ,e_{1}) . \end{equation*} A function $f$ has decreasing differences if $-f$ has increasing differences. A set $B \in \mathcal{B}(S)$ is called an upper set if $s_{1} \in B$ and $s_{2} \geq s_{1}$ imply $s_{2} \in B$. The transition probability $p \in E_{p}$ has stochastically increasing differences if $p(s ,a ,B)$ has increasing differences for every upper set $B$. See \cite{topkis2011supermodularity} for examples of transition probabilities that have stochastically increasing differences. The optimal policy correspondence $G$ is said to be ascending if $s_{2} \geq s_{1}$, $b \in G(s_{1})$, and $b^{ \prime } \in G(s_{2})$ imply $\max \{b ,b^{ \prime }\} \in G(s_{2})$ and $\min \{b ,b^{ \prime }\} \in G(s_{1})$. In particular, if $G$ is ascending, then $\min G(s)$ and $\max G(s)$ are increasing functions. 
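As a toy numerical illustration of the discount-factor monotonicity discussed in this section (all payoffs and kernels below are made up): a costly action that raises the chance of the high state should be taken more often by a more patient DM.

```python
r = [[0.0, -1.0], [2.0, 1.0]]      # r(s, a) = 2s - a
p = [[[0.8, 0.2], [0.1, 0.9]],     # Pr(next state = 1) = 0.2 + 0.7a,
     [[0.8, 0.2], [0.1, 0.9]]]     # independent of s (a monotone kernel)

def policy(beta, n_iter=500):
    """Value iteration, then the largest maximizer g(s, beta) = max G(s)."""
    V = [0.0, 0.0]
    h = lambda s, a: r[s][a] + beta * sum(p[s][a][t] * V[t] for t in (0, 1))
    for _ in range(n_iter):
        V = [max(h(s, 0), h(s, 1)) for s in (0, 1)]
    return [1 if h(s, 1) >= h(s, 0) else 0 for s in (0, 1)]

g_low, g_high = policy(0.2), policy(0.9)
assert all(g_high[s] >= g_low[s] for s in (0, 1))       # increasing in beta
assert g_low[0] <= g_low[1] and g_high[0] <= g_high[1]  # increasing in s
```

With these numbers the costly action pays off only when $\beta \cdot 0.7 \cdot (V(1)-V(0)) > 1$, so the impatient DM never acts while the patient DM always does, consistent with the monotonicity in $\beta$ established in this section.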
\cite{topkis2011supermodularity} provides conditions under which the optimal policy correspondence $G$ is ascending. These conditions are summarized in the following assumption: \begin{assumption} \label{Ass Topkis}(i) $r(s ,a)$ is increasing in $s$ and has increasing differences. (ii) $p$ is monotone and has stochastically increasing differences. (iii) For all $s_{1} ,s_{2} \in S$, $s_{1} \leq s_{2}$ implies $\Gamma (s_{1}) \subseteq \Gamma (s_{2})$. \end{assumption} Theorem \ref{Thorem DISCOUNT} shows that under Assumption \ref{Ass Topkis}, the policy function $g(s,\beta )$ is increasing in the discount factor. Furthermore, if the single period payoff function $r(s ,a ,c)$ depends on some parameter $c$ and has increasing differences, then the policy function is increasing in the parameter $c$. \begin{theorem} \label{Thorem DISCOUNT}Suppose that Assumption \ref{Ass Topkis} holds and that $\Gamma(s)$ is ascending. (i) Let $0 <\beta _{1} \leq \beta _{2} <1$. Then $g(s ,\beta _{2}) \geq g(s ,\beta _{1})$ for all $s \in S$ and $\mathbb{E}_{2}^{t}(g(\beta _{2})) \geq \mathbb{E}_{1}^{t}(g(\beta _{1}))$ for all $t \in \mathbb{N}$. (ii) Let $c \in E$ be a parameter that influences the payoff function. If the payoff function $r(s ,a ,c)$ has increasing differences in $(a ,c)$ and in $(s ,c)$, then $g(s ,c_{2}) \geq g(s ,c_{1})$ for all $s \in S$, and $\mathbb{E}_{2}^{t}(g(c_{2})) \geq \mathbb{E}_{1}^{t}(g(c_{1}))$ for all $t \in \mathbb{N}$ whenever $c_{2} \succeq c_{1}$. \end{theorem} \subsection{\label{Section: transition}A change in the transition probability function} In this section we study stochastic comparative statics results related to a change in the transition function. We provide conditions on the transition function and on the payoff function that ensure that $p_{2} \succeq _{st}p_{1}$ implies comparative statics results and stochastic comparative statics results. 
We assume that the transition function $p_{i}$ is given by $p_{i}(s ,a ,B) =\Pr (m(s ,a ,\epsilon ) \in B)$ for all $B \in \mathcal{B}(S)$, where $\epsilon $ is a random variable with law $v$ and support $\mathcal{V} \subseteq \mathbb{R}^{k}$. Theorem \ref{Theorem Transition} provides conditions on the function $m$ that imply that the policy function is higher when $v$ is higher in the sense of stochastic dominance. In Section \ref{Sec: cont ran walk}, we provide an example of a controlled random walk where the conditions on $m$ are satisfied. \begin{theorem} \label{Theorem Transition}Suppose that $p_{i}(s ,a ,B) =\Pr (m(s ,a ,\epsilon _{i}) \in B)$, where $m$ is convex, increasing, continuous, and has increasing differences in $(s ,a)$, $(s ,\epsilon )$, and $(a ,\epsilon )$, and $\epsilon _{i}$ has the law $v_{i}$, $i =1 ,2$. Suppose further that $r(s ,a)$ is convex and increasing in $s$ and has increasing differences, and that $\Gamma (s_{1}) =\Gamma (s_{2})$ for all $s_{1} ,s_{2} \in S$. If $v_{2} \succeq _{st}v_{1}$ then (i) $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$ and $g(s ,p_{2})$ is increasing in $s$. (ii) $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \end{theorem} \section{\label{Sectopn applications} Applications} In this section we apply our results to several dynamic optimization models from the economics and operations research literature. \subsection{Capital accumulation with adjustment costs} Capital accumulation models are widely studied in the investment theory literature \citep{stokey1989}. We consider a standard capital accumulation model with adjustment costs \citep{hopenhayn1992stochastic}. In this model, a firm maximizes its expected discounted profit over an infinite horizon. The single-period revenues depend on the demand and on the firm's capital. The demand evolves exogenously in a Markovian fashion.
In each period, the firm decides on the next period's capital level and incurs an adjustment cost that depends on the current capital level and on the next period's capital level. Using the stochastic comparative statics results developed in the previous section, we find conditions that ensure that higher future demand, in the sense of first order stochastic dominance, increases the expected long run capital accumulated. We provide the details below. Consider a firm that maximizes its expected discounted profit. The firm's single-period payoff function $r$ is given by \begin{equation*}r(s ,a) =R(s_{1} ,s_{2}) -c(s_{1} ,a) \end{equation*} where $s=(s_{1},s_{2})$. The revenue function $R$ depends on an exogenous demand shock $s_{2} \in S_{2} \subseteq \mathbb{R}^{n -1}$, and on the firm's current capital stock $s_{1} \in S_{1} \subseteq \mathbb{R}_{ +}$. The state space is given by $S = S_{1} \times S_{2}$. The demand shocks follow a Markov process with a transition function $Q$. The firm chooses the next period's capital stock $a \in \Gamma (s_{1})$ and incurs an adjustment cost of $c(s_{1} ,a)$. The transition probability function $p$ is given by \begin{equation*}p(s ,a ,B) =1_{D}(a)Q(s_{2} ,C) , \end{equation*}where $D \times C =B$, $D$ is a measurable set in $\mathbb{R}$, $C$ is a measurable set in $\mathbb{R}^{n -1}$, and $Q$ is a Markov kernel on $S_{2} \subseteq \mathbb{R}^{n -1}$. It is easy to see that if $Q$ is monotone then $p(s ,a ,B) =1_{D}(a)Q(s_{2} ,C)$ is monotone and that $Q_{2} \succeq _{st}Q_{1}$ implies $p_{2} \succeq _{st}p_{1}$. Assume that the revenue function $R$ is continuous and has increasing differences, that $c$ is continuous and has decreasing differences, and that $\Gamma (s)$ is ascending. Under these conditions, \cite{hopenhayn1992stochastic} show that the policy function $g(s ,p)$ is increasing in $s$ if $Q$ is monotone.
If, in addition, $Q_{2} \succeq _{st}Q_{1}$, then $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s$ (see Corollary 7 in \cite{hopenhayn1992stochastic}). Thus, part (i) in Theorem \ref{TRANSITION} implies that $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \begin{proposition} Let $Q_{1}$ and $Q_{2}$ be two Markov kernels on $S_{2}$. Assume that $R$ is continuous and has increasing differences, $c$ is continuous and has decreasing differences, $\Gamma (s)$ is ascending, and $\Gamma (s_{1}) \supseteq \Gamma (s_{1}^{\prime})$ whenever $s_{1} \geq s_{1}^{\prime}$. Assume that $Q_{2}$ is monotone and that $Q_{2} \succeq _{st}Q_{1}$. Then under $Q_{2}$ the expected capital accumulation is higher than under $Q_{1}$, i.e., $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \end{proposition} \subsection{\label{Section Popescu} Dynamic pricing with a reference effect and an uncertain memory factor} In this section we consider a dynamic pricing model with a reference effect as in \cite{popescu2007dynamic}. In this model the demand is sensitive to the firm's pricing history. In particular, consumers form a reference price that influences their demand. As in \cite{popescu2007dynamic}, we consider a profit-maximizing monopolist who faces a homogeneous stream of repeated customers over an infinite time horizon. In each period, the monopolist decides on a price $a \in A : =[0 ,\overline{a}]$ to charge the consumers. Assume for simplicity that the marginal cost is $0$. The resulting single-period payoff function is given by \begin{equation*}r(s ,a) =aD(s ,a) \end{equation*} where $s \in S \subseteq \mathbb{R}$ is the current reference price and $D(s ,a)$ is the demand function, which depends on the reference price $s$ and on the price $a$ that the monopolist charges.
We assume that the function $D(s ,a)$ is continuous, non-negative, decreasing in $a$, increasing in $s$, has increasing differences, and is convex in $s$. If the current reference price is $s$ and the firm sets a price of $a$ then the next period's reference price is given by $\gamma s +(1 -\gamma )a$ (see \cite{popescu2007dynamic} for details on the micro foundations of this structure). The parameter $\gamma $ is called the memory factor. In contrast to the model of \cite{popescu2007dynamic}, we assume that the memory factor $\gamma $ is not deterministic. More precisely, we assume that the memory factor $\gamma $ is a random variable on $[0 ,1]$ with law $v$. So the transition probability function $p$ is given by \begin{equation*}p(s ,a ,B) =v\{\gamma \in [0 ,1] :(\gamma s +(1 -\gamma )a) \in B\} \end{equation*}for all $B \in \mathcal{B}(S)$. We show that even when the memory factor $\gamma $ is a random variable, the result of \cite{popescu2007dynamic} holds in expectation, i.e., the long run expected prices are increasing in the current reference price. We also show that an increase in the discount factor increases the current optimal price and the long run expected prices. \begin{proposition}\label{POPESCU} Suppose that the function $D(s ,a)$ is continuous, non-negative, decreasing in $a$, increasing and convex in $s$, and has increasing differences. (i) The optimal pricing policy $g(s)$ is increasing in the reference price $s$. (ii) The expected optimal prices in each period are higher when the initial reference price is higher. (iii) $0 <\beta _{1} \leq \beta _{2} <1$ implies that $g(s ,\beta _{2}) \geq g(s ,\beta _{1})$ for all $s \in S$ and $\mathbb{E}_{2}^{t}(g(\beta _{2})) \geq \mathbb{E}_{1}^{t}(g(\beta _{1}))$ for all $t \in \mathbb{N}$.
\end{proposition} \subsection{Controlled random walks} \label{Sec: cont ran walk} Controlled random walks are used to study controlled queueing systems and other phenomena in applied probability (for example, see \cite{serfozo1981optimal}). In this section we consider a simple controlled random walk on $\mathbb{R}$. At any period, the state of the system $s \in \mathbb{R}$ determines the current period's reward $c_{1}(s)$. The next period's state is given by $m(s ,a ,\epsilon ) =a +s +\epsilon $ where $\epsilon $ is a random variable with law $v$ and support $\mathcal{V} \subseteq \mathbb{R}$, and $a \in A$ is the action that the DM chooses. Thus, the process evolves as a random walk $s +\epsilon$ plus the DM's action $a$. When the DM chooses an action $a \in A$, a cost of $c_{2}(a)$ is incurred. We assume that $A \subseteq \mathbb{R}$ is a compact set, $c_{1}(s)$ is an increasing and convex function, and $c_{2}$ is an increasing function. That is, the reward and the marginal reward are increasing in the state of the system and the costs are increasing in the action that the DM chooses. The single-period payoff function is given by $r(s ,a) =c_{1}(s) -c_{2}(a)$ and the transition probability function is given by \begin{equation*}p(s ,a ,B) =v\{\epsilon \in \mathcal{V} :a +s +\epsilon \in B\} \end{equation*}for all $B \in \mathcal{B}(\mathbb{R})$. In this setting, when choosing an action $a$, the DM faces the following trade-off between the current payoff and future payoffs: while choosing a higher action $a$ has higher current costs, it increases the probability that the state of the system will be higher in the next period, and thus, a higher action increases the probability of higher future rewards. We study how a change in the random variable $\epsilon$ affects the DM's current and future optimal decisions. 
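The trade-off above can be made concrete with a small value-iteration sketch. This is a minimal numerical illustration, not the paper's method: the grid, the truncation of the walk to a finite state space, and all parameter values are our own assumptions, and the clipping at the boundary departs slightly from the unbounded state space described above.

```python
import numpy as np

# Discretized controlled random walk (illustrative choices, not from the text):
# states S = {0, ..., N}, actions A = {0, 1, 2}, next state clip(s + a + eps),
# single-period payoff r(s, a) = c1(s) - c2(a).
N, beta = 20, 0.9
S = np.arange(N + 1)
A = np.arange(3)
eps_vals = np.array([-1, 0, 1])

def c1(s):
    # convex, increasing reward from the state
    return 0.05 * s ** 2

def c2(a):
    # increasing cost of the action
    return 0.6 * a

def solve(probs, tol=1e-10):
    """Value iteration; returns the value function V and a greedy policy g."""
    V = np.zeros(N + 1)
    while True:
        # Q[s, a] = r(s, a) + beta * E[V(clip(s + a + eps))]
        Q = np.empty((N + 1, len(A)))
        for ai, a in enumerate(A):
            nxt = np.clip(S[:, None] + a + eps_vals[None, :], 0, N)
            Q[:, ai] = c1(S) - c2(a) + beta * (V[nxt] @ probs)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, A[Q.argmax(axis=1)]
        V = V_new

v1 = np.array([0.5, 0.3, 0.2])   # law of eps under p1
v2 = np.array([0.2, 0.3, 0.5])   # first-order stochastically dominates v1
V1, g1 = solve(v1)
V2, g2 = solve(v2)
```

In this discretized instance the computed value functions are increasing in the state, and the value function under the stochastically larger noise law dominates the other pointwise; the policy comparison of the proposition below may not hold exactly near the truncation boundary, since clipping breaks the convexity assumptions of Theorem \ref{Theorem Transition}.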
When $c_{1}(s)$ is convex and increasing in $s$, it is easy to see that the transition function $m(s ,a ,\epsilon ) =a +s +\epsilon $ and the single-period function $r(s ,a) =c_{1}(s) -c_{2}(a)$ satisfy the conditions of Theorem \ref{Theorem Transition}. Thus, the proof of the following proposition follows immediately from Theorem \ref{Theorem Transition}. \begin{proposition} \label{prop: Dist} Suppose that $p_{i}(s ,a ,B) =\Pr (a +s +\epsilon _{i} \in B)$ where $\epsilon _{i}$ has the law $v_{i}$, $i =1 ,2$. Suppose that $c_{1}(s)$ is convex and increasing in $s$. Assume that $v_{2} \succeq _{st}v_{1}$. Then $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$, $g(s ,p_{2})$ is increasing in $s$, and $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \end{proposition} \subsection{\label{Section 3.3}Comparisons of stationary distributions} Stationary equilibrium is the preferred solution concept for many models that describe large dynamic economies (see \cite{acemoglu2015robust} for examples of such models). In these models, there is a continuum of agents. Each agent has an individual state and solves a discounted dynamic programming problem given some parameters $e$ (usually prices). The parameters are determined by the aggregate decisions of all agents. Informally, a stationary equilibrium of these models consists of a set of parameters $e$, a policy function $g$, and a probability measure $\lambda$ on $S$ such that (i) $g$ is an optimal stationary policy given the parameters $e$, (ii) $\lambda$ is a stationary distribution of the states' dynamics $P(s ,B)$ given the parameters $e$, and (iii) the parameters $e$ are determined as a function of $\lambda $ and $g$.\protect\footnote{ Stationary equilibrium models are used to study a wide range of economic phenomena. 
Examples include models of industry equilibrium \citep{hopenhayn1992entry}, heterogeneous agent macro models \citep{huggett1993risk} and \citep{aiyagari1994uninsured}, and many more.} The existence and uniqueness of a stationary probability measure $\lambda $ on $S$ in the sense that \begin{equation*}\lambda (B) =\int _{S}p(s ,g(s) ,B)\lambda (ds) \end{equation*} for all $B \in \mathcal{B}(S)$ have been widely studied.\protect\footnote{ For example, see \cite{hopenhayn1992stochastic}, \cite{kamihigashi2014stochastic}, and \cite{foss2018stochastic}. } We now derive comparative statics results relating to how the stationary distribution $\lambda $ changes when the transition function $p$ changes. We denote the least stationary distribution by $\underline{\lambda}$ and the greatest stationary distribution by $\overline{\lambda}$. \begin{proposition} \label{Prop Comp Stationary}Suppose that $S$ is a compact set in $\mathbb{R}$. (i) Let $E_{p ,i}$ be the set of all monotone transition probability functions $p$. Assume that $g(s ,p)$ is increasing in $(s,p)$ on $S \times E_{p ,i}$ where $E_{p ,i}$ is endowed with the order $ \succeq _{st}$. Then the greatest stationary distribution $\overline{\lambda}$ and the least stationary distribution $\underline{\lambda}$ are increasing in $p$ on $E_{p ,i}$ with respect to $ \succeq _{st}$.\protect\footnote{ The existence of the greatest fixed point is guaranteed by the Tarski fixed-point theorem. For more details, see the Appendix and \cite{topkis2011supermodularity}.} (ii) Let $E_{p ,ic}$ be the set of all monotone and convexity-preserving transition probability functions $p$. Assume that $g(s,p)$ is convex in $s$ and is increasing in $(s,p)$ on $S \times E_{p ,ic}$ where $E_{p ,ic}$ is endowed with the order $ \succeq _{CX}$. Then the greatest stationary distribution $\overline{\lambda}$ and the least stationary distribution $\underline{\lambda}$ are increasing in $p$ on $E_{p ,ic}$ with respect to $ \succeq _{ICX}$.
\end{proposition} We apply Proposition \ref{Prop Comp Stationary} to a standard stationary equilibrium model \citep{huggett1993risk}. There is a continuum of ex-ante identical agents with mass $1$. The agents solve a consumption-savings problem when their income is fluctuating. Each agent's payoff function is given by $r(s ,a) =u(s-a)$ where $s$ denotes the agent's current wealth, $a$ denotes the agent's savings, $s -a$ is the agent's current consumption, and $u$ is the agent's utility function. Thus, when an agent consumes $s-a$, his single-period payoff is given by $u(s-a)$.\footnote{For simplicity we assume that all the agents are ex-ante identical, i.e., the agents have the same utility function and transition function. The model can be extended to the case of ex-ante heterogeneity.} Recall that a utility function is in the class of hyperbolic absolute risk aversion (HARA) utility functions if its absolute risk aversion $A \left (c\right )$ is hyperbolic. That is, if $A (c):= -\frac{u^{ \prime \prime } \left (c\right )}{u^{ \prime } \left (c\right )} =\frac{1}{ac+b}$ for $c >\frac{ -b}{a}$. We assume that $u$ is in the HARA class and that the utility function's derivative $u^{\prime}$ is convex. Savings are limited to a single risk-free bond. When the agents save an amount $a$, their next period's wealth is given by $Ra +y$ where $R$ is the risk-free bond's rate of return and $y \in Y =[\underline{y} ,\overline{y}] \subset \mathbb{R}_{ +}$ is the agents' labor income in the next period. The agents' labor income is a random variable with law $\nu$. Thus, the transition function is given by \begin{equation*}p(s ,a ,B) =\nu \{y \in Y :Ra +y \in B \}. \end{equation*} The set from which the agents can choose their savings level is given by $\Gamma (s) =[\underline{s} ,\min \{s ,\overline{s}\}]$ where $\underline{s} <0$ is a borrowing limit and $\overline{s} >0$ is an upper bound on savings.
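As a concrete instance of these assumptions (an illustrative choice on our part; the text does not commit to a particular utility function), the CRRA utility belongs to the HARA class and has a convex derivative:

```latex
% CRRA utility u(c) = c^{1-\gamma}/(1-\gamma) with \gamma > 0, \gamma \neq 1:
u^{\prime }(c) =c^{ -\gamma } ,
\qquad
A(c) = -\frac{u^{\prime \prime }(c)}{u^{\prime }(c)} =\frac{\gamma }{c}
     =\frac{1}{(1/\gamma )\,c} ,
\qquad
u^{\prime \prime \prime }(c) =\gamma (\gamma +1)\,c^{ -\gamma -2} >0 .
% Hence u is HARA with a = 1/\gamma and b = 0, and u' is convex on (0,\infty).
```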
A stationary equilibrium is given by a probability measure $\lambda $ on $S =[\underline{s} ,R\overline{s} +\overline{y}]$, a rate of return $R$, and a stationary savings policy function $g$ such that (i) $g$ is optimal given $R$, (ii) $\lambda $ is a stationary distribution given $R$, i.e., $\lambda (B) =\int _{S} p(s,g(s),B) \lambda (ds)$, and (iii) markets clear in the sense that the total supply of savings equals the total demand for savings, i.e., $\int g(s)\lambda (ds) =0$. If the agents' utility function is in the HARA class then the savings policy function $g(s)$ is convex and increasing (see \cite{jensen2017distributional}). It is easy to see that $p$ is convexity-preserving and monotone. Furthermore, when $u^{\prime }$ is convex, the policy function $g(s,p)$ is increasing in $p$ with respect to the convex order, i.e., $g(s ,p_{2}) \geq g(s ,p_{1})$ whenever $p_{2} \succeq _{CX}p_{1}$ (see \cite{light2018precautionary}). Thus, part (ii) of Proposition \ref{Prop Comp Stationary} implies that when the labor income uncertainty increases (i.e., $p_{2} \succeq _{CX}p_{1}$), both the greatest and the least partial equilibrium (i.e., with $R$ held fixed) stationary wealth distributions increase with respect to the increasing convex order (i.e., $\lambda _{2} \succeq _{ICX}\lambda _{1}$), so wealth inequality rises. \section{\label{Section final}Summary} This paper studies how the current and future optimal decisions change as a function of the optimization problem's parameters in the context of Markov decision processes. We provide simple sufficient conditions on the primitives of Markov decision processes that ensure comparative statics results and stochastic comparative statics results. We show that various models from different areas of operations research and economics satisfy our sufficient conditions.
\section{\label{Section Appendix} Appendix} \subsection{Proofs of the results in Section \ref{SCS}} \begin{proof} [Proof of Theorem~\ref{Theorem 1}] For $t =1$ the result is trivial since $\mu _{2}^{1} =\mu _{1}^{1}$. Assume that $\mu _{2}^{t} \succeq _{D}\mu _{1}^{t}$ for some $t \in \mathbb{N}$. First note that for every measurable function $f :S \rightarrow \mathbb{R}$ and $i =1 ,2$ we have \begin{equation}\int _{S}f(s^{ \prime })\mu _{i}^{t +1}(ds^{ \prime }) =\int _{S}\int _{S}f(s^{ \prime })P_{i}(s ,ds^{ \prime })\mu _{i}^{t}(ds) . \label{(1)} \end{equation}To see this, assume first that $f =1_{B}$ where $B \in \mathcal{B}(S)$ and $1_{B}$ is the indicator function of the set $B$. We have \begin{align*}\int _{S}f(s^{ \prime })\mu _{i}^{t +1}(ds^{ \prime }) & =\mu _{i}^{t +1}(B) \\ & =\int _{S}p_{i}(s ,g(s ,e_{i}) ,B)\mu _{i}^{t}(ds) \\ & =\int _{S}\int _{S}1_{B}(s^{ \prime })p_{i}(s ,g(s ,e_{i}) ,ds^{ \prime })\mu _{i}^{t}(ds) \\ & =\int _{S}\int _{S}f(s^{ \prime })P_{i}(s ,ds^{ \prime })\mu _{i}^{t}(ds) .\end{align*} A standard argument shows that equality (\ref{(1)}) holds for every measurable function $f$. Now assume that $f \in D$. We have \begin{align*}\int _{S}f(s^{ \prime })\mu _{2}^{t +1}(ds^{ \prime }) & =\int _{S}\int _{S}f(s^{ \prime })P_{2}(s ,ds^{ \prime })\mu _{2}^{t}(ds) \\ & \geq \int _{S}\int _{S}f(s^{ \prime })P_{2}(s ,ds^{ \prime })\mu _{1}^{t}(ds) \\ & \geq \int _{S}\int _{S}f(s^{ \prime })P_{1}(s ,ds^{ \prime })\mu _{1}^{t}(ds) \\ & = \int _{S}f(s^{ \prime })\mu _{1}^{t +1}(ds^{ \prime }) .\end{align*}The first inequality follows since $f \in D$, $P_{2}$ is $D$-preserving, and $\mu _{2}^{t} \succeq _{D}\mu _{1}^{t}$. The second inequality follows since $P_{2}(s , \cdot ) \succeq _{D}P_{1}(s , \cdot )$. Thus, $\mu _{2}^{t +1} \succeq _{D}\mu _{1}^{t +1}$. We conclude that $\mu _{2}^{t} \succeq _{D}\mu _{1}^{t}$ for all $t \in \mathbb{N}$.
\end{proof} \begin{proof} [Proof of Corollary \ref{Parameter}] We show that $P_{2}$ is $I$-preserving and that $P_{2}(s , \cdot ) \succeq _{st}P_{1}(s , \cdot )$ for all $s \in S$. Let $f :S \rightarrow \mathbb{R}$ be an increasing function and let $e_{2} \succeq e_{1}$. Since $p$ is monotone and $g(s ,e_{2})$ is increasing in $s$, if $s_{2} \geq s_{1}$ then \begin{equation*}\int _{S}f(s^{ \prime })p(s_{2} ,g(s_{2} ,e_{2}) ,ds^{ \prime }) \geq \int _{S}f(s^{ \prime })p(s_{1} ,g(s_{1} ,e_{2}) ,ds^{ \prime }) . \end{equation*}Thus, $P_{2}$ is $I$-preserving. Let $s \in S$. Since $g(s ,e_{2}) \geq g(s ,e_{1})$ and $p$ is monotone, we have \begin{equation*}\int _{S}f(s^{ \prime })p(s ,g(s ,e_{2}) ,ds^{ \prime }) \geq \int _{S}f(s^{ \prime })p(s ,g(s ,e_{1}) ,ds^{ \prime }) . \end{equation*}Thus, $P_{2}(s , \cdot ) \succeq _{st}P_{1}(s , \cdot )$. From Theorem \ref{Theorem 1} we conclude that $\mu _{2}^{t} \succeq _{st}\mu _{1}^{t}$ for all $t \in \mathbb{N}$. We have \begin{equation*}\int _{S}g(s ,e_{2})\mu _{2}^{t}(ds) \geq \int _{S}g(s ,e_{2})\mu _{1}^{t}(ds) \geq \int_{S}g(s ,e_{1})\mu _{1}^{t}(ds) , \end{equation*} which proves the Corollary. \end{proof} \begin{proof} [Proof of Theorem \ref{TRANSITION}] (i) Assume that $p_{2} \succeq _{st}p_{1}$. We show that $P_{2}$ is $I$-preserving and that $P_{2}(s , \cdot ) \succeq _{st}P_{1}(s , \cdot )$ for all $s \in S$. Let $f :S \rightarrow \mathbb{R}$ be an increasing function. Assume that $s_{2} \geq s_{1}$. Since $g(s_{2} ,p_{2}) \geq g(s_{1} ,p_{2})$ and $p_{2}$ is monotone we have \begin{equation*}\int _{S}f(s^{ \prime })p_{2}(s_{2} ,g(s_{2} ,p_{2}) ,ds^{ \prime }) \geq \int _{S}f(s^{ \prime })p_{2}(s_{1} ,g(s_{1} ,p_{2}) ,ds^{ \prime }), \end{equation*} which proves that $P_{2}$ is $I$-preserving. Let $s \in S$. 
Since $p_{2}$ is monotone, $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$, and $p_{2} \succeq _{st}p_{1}$, we have \begin{align*}\int _{S}f(s^{ \prime })p_{2}(s ,g(s ,p_{2}) ,ds^{ \prime }) & \geq \int _{S}f(s^{ \prime })p_{2}(s ,g(s ,p_{1}) ,ds^{ \prime }) \\ & \geq \int _{S}f(s^{ \prime })p_{1}(s ,g(s ,p_{1}) ,ds^{ \prime }),\end{align*} which proves that $P_{2}(s , \cdot ) \succeq _{st}P_{1}(s , \cdot )$ for all $s \in S$. From Theorem \ref{Theorem 1} we conclude that $\mu _{2}^{t} \succeq _{st}\mu _{1}^{t}$ for all $t \in \mathbb{N}$. Since $g(s ,p_{2})$ is increasing, we have \begin{equation*}\int _{S}g(s ,p_{2})\mu _{2}^{t}(ds) \geq \int _{S}g(s ,p_{2})\mu _{1}^{t}(ds) \geq \int _{S}g(s ,p_{1})\mu _{1}^{t}(ds) , \end{equation*}which proves part (i). (ii) Assume that $p_{2} \succeq _{CX}p_{1}$. We show that $P_{2}$ is $ICX$-preserving and that $P_{2}(s , \cdot ) \succeq _{ICX}P_{1}(s , \cdot )$ for all $s \in S$. Let $f :S \rightarrow \mathbb{R}$ be an increasing and convex function. Let $s_{1} ,s_{2} \in S$ and $s_{\lambda } =\lambda s_{1} +(1 -\lambda )s_{2}$ for $0 \leq \lambda \leq 1$. We have \begin{align*}\lambda \int _{S}f(s^{ \prime })p_{2}(s_{1},g(s_{1} ,p_{2}) ,ds^{ \prime }) & +(1 -\lambda )\int _{S}f(s^{ \prime })p_{2}(s_{2} ,g(s_{2} ,p_{2}) ,ds^{ \prime }) \\ & \geq \int _{S}f(s^{ \prime })p_{2}(s_{\lambda } ,\lambda g(s_{1} ,p_{2}) +(1 -\lambda )g(s_{2} ,p_{2}) ,ds^{ \prime }) \\ & \geq \int _{S}f(s^{ \prime })p_{2}(s_{\lambda } ,g(s_{\lambda } ,p_{2}) ,ds^{ \prime }) .\end{align*}The first inequality follows since $p_{2}$ is convexity-preserving. The second inequality follows since $g(s ,p_{2})$ is convex and $p_{2}$ is monotone. Thus, $\int _{S}f(s^{ \prime })P_{2}(s ,ds^{ \prime })$ is convex. Part (i) shows that $\int _{S}f(s^{ \prime })P_{2}(s ,ds^{ \prime })$ is increasing. We conclude that $P_{2}$ is $ICX$-preserving. Fix $s \in S$.
We have \begin{align*}\int _{S}f(s^{ \prime })p_{2}(s ,g(s ,p_{2}) ,ds^{ \prime }) & \geq \int _{S}f(s^{ \prime })p_{2}(s ,g(s ,p_{1}) ,ds^{ \prime }) \\ & \geq \int _{S}f(s^{ \prime })p_{1}(s ,g(s ,p_{1}) ,ds^{ \prime }) .\end{align*} The first inequality follows since $g(s ,p_{2}) \geq g(s ,p_{1})$ and $p_{2}$ is monotone. The second inequality follows since $p_{2} \succeq _{CX}p_{1}$. We conclude that $P_{2}(s , \cdot ) \succeq _{ICX}P_{1}(s , \cdot )$. From Theorem \ref{Theorem 1} we conclude that $\mu _{2}^{t} \succeq _{ICX}\mu _{1}^{t}$ for all $t \in \mathbb{N}$. Since $g(s ,p_{2})$ is increasing and convex, we have \begin{equation*}\int _{S}g(s ,p_{2})\mu _{2}^{t}(ds) \geq \int _{S}g(s ,p_{2})\mu _{1}^{t}(ds) \geq \int _{S}g(s ,p_{1})\mu _{1}^{t}(ds) , \end{equation*}which proves part (ii). \end{proof} \subsection{Proofs of the results in Section \ref{Section: discount}} In order to prove Theorem \ref{Thorem DISCOUNT} we need the following two results: \begin{proposition} \label{TOPKIS} Suppose that Assumption \ref{Ass Topkis} holds. Then (i) $h(s ,a ,f)$ has increasing differences whenever $f$ is an increasing function. (ii) $G(s)$ is ascending. In particular, $g(s) =\max G(s)$ is an increasing function. (iii) $Tf(s) =\max _{a \in \Gamma (s)}h(s ,a ,f)$ is an increasing function whenever $f$ is an increasing function. $V(s)$ is an increasing function. \end{proposition} \begin{proof} See Theorem 3.9.2 in \cite{topkis2011supermodularity}. \end{proof} \begin{proposition} \label{LOVEJOY}Let $(E , \succeq )$ be a partially ordered set. Assume that $\Gamma(s)$ is ascending. If $h(s ,a ,e ,f)$ has increasing differences in $(s ,a)$, $(s ,e)$, and $(a ,e)$, then \begin{equation*}Tf(s ,e) =\max _{a \in \Gamma (s)}h(s ,a ,e ,f) \end{equation*} has increasing differences in $(s ,e)$. \end{proposition} \begin{proof} See Lemma 1 in \cite{hopenhayn1992stochastic} or Lemma 2 in \cite{lovejoy1987ordered}.
\end{proof} \begin{proof} [Proof of Theorem \ref{Thorem DISCOUNT}] (i) Let $E =(0 ,1)$ be the set of all possible discount factors, endowed with the standard order on $\mathbb{R}$. Assume that $\beta _{1} \leq \beta _{2}$. Let $f \in B(S \times E)$ and assume that $f$ has increasing differences in $(s ,\beta )$ and is increasing in $s$. Let $a_{2} \geq a_{1}$. Since $f$ has increasing differences, the function $f(s ,\beta _{2}) -f(s ,\beta _{1})$ is increasing in $s$. Since $p$ is monotone, we have\begin{equation*}\int _{S}(f(s^{ \prime } ,\beta _{2}) -f(s^{ \prime } ,\beta _{1}))p(s ,a_{2} ,ds^{ \prime }) \geq \int _{S}(f(s^{ \prime } ,\beta _{2}) -f(s^{ \prime } ,\beta _{1}))p(s ,a_{1} ,ds^{ \prime }) . \end{equation*}Rearranging the last inequality yields \begin{equation*}\int _{S}f(s^{ \prime } ,\beta _{2})p(s ,a_{2} ,ds^{ \prime }) -\int _{S}f(s^{ \prime } ,\beta _{2})p(s ,a_{1} ,ds^{ \prime }) \geq \int _{S}f(s^{ \prime } ,\beta _{1})p(s ,a_{2} ,ds^{ \prime }) -\int _{S}f(s^{ \prime } ,\beta _{1})p(s ,a_{1} ,ds^{ \prime }) . \end{equation*}Since $f$ is increasing in $s$ and $p$ is monotone, the right-hand side and the left-hand side of the last inequality are nonnegative. Thus, multiplying the left-hand side of the last inequality by $\beta _{2}$ and the right-hand side of the last inequality by $\beta _{1}$ preserves the inequality. Adding $r(s ,a_{2}) -r(s ,a_{1})$ to each side of the last inequality yields \begin{equation*}h(s ,a_{2} ,\beta _{2} ,f) -h(s ,a_{1} ,\beta _{2} ,f) \geq h(s ,a_{2} ,\beta _{1} ,f) -h(s ,a_{1} ,\beta _{1} ,f) . \end{equation*}That is, $h$ has increasing differences in $(a ,\beta )$. An analogous argument shows that $h$ has increasing differences in $(s ,\beta )$. Proposition \ref{TOPKIS} guarantees that $h$ has increasing differences in $(s ,a)$ and that $Tf$ is increasing in $s$. Proposition \ref{LOVEJOY} implies that $Tf$ has increasing differences.
We conclude that for all $n =1 ,2 ,3 ,\ldots$, $T^{n}f$ has increasing differences and is increasing in $s$. From standard dynamic programming arguments, $T^{n}f$ converges uniformly to $V$. Since the set of functions that have increasing differences and are increasing in $s$ is closed under uniform convergence, $V$ has increasing differences and is increasing in $s$. From the same argument as above, $h(s ,a ,\beta ,V)$ has increasing differences in $(a ,\beta )$. Theorem 6.1 in \cite{topkis1978minimizing} implies that $g(s ,\beta )$ is increasing in $\beta $ for all $s \in S$. Proposition \ref{TOPKIS} implies that $g(s ,\beta )$ is increasing in $s$ for all $\beta \in E$. We now apply Corollary \ref{Parameter} to conclude that $\mathbb{E}_{2}^{t}(g(\beta _{2})) \geq \mathbb{E}_{1}^{t}(g(\beta _{1}))$ for all $t \in \mathbb{N}$. (ii) The proof is similar to the proof of part (i) and is therefore omitted. \end{proof} \subsection{Proofs of the results in Section \ref{Section: transition}} \begin{proof} [Proof of Theorem \ref{Theorem Transition}] Suppose that the function $f \in B(S \times E_{p})$ is convex and increasing in $s$, and has increasing differences, where $E_{p}$ is endowed with the stochastic dominance order $ \succeq _{st}$. Let $v_{2} \succeq _{st}v_{1}$. Note that $m$ has increasing differences in $(s ,a)$, $(s ,\epsilon )$ and $(a ,\epsilon )$ if and only if $m$ is supermodular (see Theorem 3.2 in \cite{topkis1978minimizing}). From the fact that the composition of a convex and increasing function with a convex, increasing and supermodular function is convex and supermodular (see \cite{topkis2011supermodularity}), the function $f(m(s ,a ,\epsilon ) ,p_{2})$ is convex and supermodular in $(s ,a)$ for all $\epsilon \in \mathcal{V}$. Since convexity and supermodularity are preserved under integration, the function $\int f(m(s ,a ,\epsilon ) ,p_{2})v_{2}(d\epsilon )$ is convex and supermodular in $(s ,a)$.
Thus, \begin{equation}h(s ,a ,p_{2} ,f) =r(s ,a) +\beta \int _{\mathcal{V}}f(m(s ,a ,\epsilon ) ,p_{2})v_{2}(d\epsilon ) \end{equation}is convex and supermodular in $(s ,a)$ as the sum of convex and supermodular functions. This implies that $Tf(s ,p_{2}) =\max _{a \in \Gamma (s)}h(s ,a ,p_{2} ,f)$ is convex. Since $h$ is increasing in $s$, it follows that $Tf(s ,p_{2})$ is increasing in $s$. Note that for any increasing function $\overline{f} :S \rightarrow \mathbb{R}$ we have \begin{equation*}\int _{S}\overline{f}(s^{ \prime })p_{2}(s ,a ,ds^{ \prime }) =\int _{\mathcal{V}}\overline{f}(m(s ,a ,\epsilon ))v_{2}(d\epsilon ) \geq \int _{\mathcal{V}}\overline{f}(m(s ,a ,\epsilon ))v_{1}(d\epsilon ) =\int _{S}\overline{f}(s^{ \prime })p_{1}(s ,a ,ds^{ \prime }), \end{equation*} so $p_{2} \succeq _{st}p_{1}$. Fix $a \in A$, and let $s_{2} \geq s_{1}$. Since $f(m(s ,a ,\epsilon ) ,p_{2})$ is supermodular in $(s ,\epsilon )$, the function $f(m(s_{2} ,a ,\epsilon ) ,p_{2}) -f(m(s_{1} ,a ,\epsilon ) ,p_{2})$ is increasing in $\epsilon $. We have \begin{align*}\int _{\mathcal{V}}(f(m(s_{2} ,a ,\epsilon ) ,p_{2}) -f(m(s_{1} ,a ,\epsilon ) ,p_{2}))v_{2}(d\epsilon ) & \geq \int _{\mathcal{V}}(f(m(s_{2} ,a ,\epsilon ) ,p_{2}) -f(m(s_{1} ,a ,\epsilon ) ,p_{2}))v_{1}(d\epsilon ) \\ & \geq \int _{\mathcal{V}}(f(m(s_{2} ,a ,\epsilon ) ,p_{1}) -f(m(s_{1} ,a ,\epsilon ) ,p_{1}))v_{1}(d\epsilon ).\end{align*}The first inequality follows since $v_{2} \succeq _{st}v_{1}$. The second inequality follows from the facts that $m$ is increasing in $s$ and $f$ has increasing differences. Adding $r(s_{2} ,a) -r(s_{1} ,a)$ to each side of the last inequality implies that $h$ has increasing differences in $(s ,p)$. Similarly, we can show that $h$ has increasing differences in $(a ,p)$. Proposition \ref{LOVEJOY} implies that $Tf$ has increasing differences. We conclude that for all $n =1 ,2 ,3 ,\ldots$, $T^{n}f$ is convex and increasing in $s$ and has increasing differences.
From standard dynamic programming arguments, $T^{n}f$ converges uniformly to $V$. Since the set of functions that have increasing differences and are convex and increasing in $s$ is closed under uniform convergence, $V$ has increasing differences and is convex and increasing in $s$. From the same argument as above, $h(s ,a ,p ,V)$ has increasing differences in $(a ,p)$ and $(s ,a)$. An application of Theorem 6.1 in \cite{topkis1978minimizing} implies that $g(s ,p_{2}) \geq g(s ,p_{1})$ for all $s \in S$ and $g(s,p_{2})$ is increasing in $s$. The fact that $m$ is increasing implies that $p$ is monotone. We now apply Corollary \ref{Parameter} to conclude that $\mathbb{E}_{2}^{t}(g(p_{2})) \geq \mathbb{E}_{1}^{t}(g(p_{1}))$ for all $t \in \mathbb{N}$. \end{proof} \subsection{Proofs of the results in Sections \ref{Section Popescu} and \ref{Section 3.3}} \begin{proof} [Proof of Proposition \ref{POPESCU}] (i) Let $f \in B(S)$ be a convex function. The facts that $D(s ,a)$ is convex in $s$ and that convexity is preserved under integration imply that the function $aD(s ,a) +\beta \int f(\gamma s +(1 -\gamma )a)v(d\gamma )$ is convex in $s$. Thus, the function $Tf(s)$ given by \begin{equation*}Tf(s) =\max _{a \in A}aD(s ,a) +\beta \int f(\gamma s +(1 -\gamma )a)v(d\gamma ) \end{equation*} is convex in $s$. A standard dynamic programming argument (see the proof of Proposition \ref{prop: Dist}) shows that the value function $V$ is convex. The convexity of $V$ implies that for all $\gamma $, the function $V(\gamma s +(1 -\gamma )a)$ has increasing differences in $(s ,a)$. Since increasing differences are preserved under integration, $\int _{0}^{1}V(\gamma s +(1 -\gamma )a)v(d\gamma )$ has increasing differences in $(s ,a)$. Since $D(s ,a)$ is nonnegative and has increasing differences, the function $aD(s ,a)$ has increasing differences. 
Thus, the function \begin{equation*}aD(s ,a) +\beta \int _{0}^{1}V(\gamma s +(1 -\gamma )a)v(d\gamma ) \end{equation*}has increasing differences as the sum of functions with increasing differences. Now apply Theorem 6.1 in \cite{topkis1978minimizing} to conclude that $g(s)$ is increasing. (ii) Follows from Corollary \ref{Parameter}. (iii) Follows from an argument similar to the one in the proof of Theorem \ref{Thorem DISCOUNT}. \end{proof} We now introduce some notation and a result needed to prove Proposition \ref{Prop Comp Stationary}. Recall that a partially ordered set $(Z , \geq )$ is said to be a lattice if for all $x ,y \in Z$, $\sup \{x ,y\}$ and $\inf \{x ,y\}$ exist in $Z$. $(Z , \geq )$ is a complete lattice if for all non-empty subsets $Z^{ \prime } \subseteq Z$ the elements $\sup Z^{ \prime }$ and $\inf Z^{ \prime }$ exist in $Z$. We need the following proposition regarding the comparison of fixed points. For a proof, see Corollary 2.5.2 in \cite{topkis2011supermodularity}. \begin{proposition} \label{Topkis Fixed point}Suppose that $Z$ is a nonempty complete lattice, $E$ is a partially ordered set, and $f (z ,e)$ is an increasing function from $Z \times E$ into $Z$. Then the greatest and least fixed points of $f (z ,e)$ exist and are increasing in $e$ on $E$. \end{proposition} \begin{proof} [Proof of Proposition \ref{Prop Comp Stationary}] Let $\mathcal{P}(S)$ be the set of all probability measures on $S$. The partially ordered set $(\mathcal{P} (S) , \succeq _{st})$ and the partially ordered set $(\mathcal{P} (S) , \succeq _{ICX})$ are complete lattices when $S \subseteq \mathbb{R}$ is compact (see \cite{muller2006stochastic}). (i) Define the operator $\Phi :\mathcal{P}(S) \times E_{p ,i} \rightarrow \mathcal{P}(S)$ by \begin{equation*}\Phi (\lambda ,p)( \cdot ) =\int _{S}p(s ,g(s ,p) , \cdot )\lambda (ds) . 
\end{equation*}The proof of Theorem \ref{TRANSITION} implies that $\Phi $ is an increasing function on $\mathcal{P}(S) \times E_{p ,i}$ with respect to $ \succeq _{st}$. That is, for $p_{1} ,p_{2} \in E_{p ,i}$ and $\lambda _{1} ,\lambda _{2} \in \mathcal{P}(S)$ we have $\Phi (\lambda _{2} ,p_{2}) \succeq _{st}\Phi (\lambda _{1} ,p_{1})$ whenever $p_{2} \succeq _{st}p_{1}$ and $\lambda _{2} \succeq _{st}\lambda _{1}$. Proposition \ref{Topkis Fixed point} implies the result. (ii) The proof is analogous to the proof of part (i) and is therefore omitted. \end{proof} \bibliographystyle{ecta}
\section{Introduction} Various mechanisms may trigger star formation on the borders of H\,{\sc{ii}}\ regions (see the review by Elmegreen~\cite{elm98}). All rely on the high pressure exerted by the warm ionized gas on the surrounding cold neutral material. These mechanisms differ in their assumptions concerning the nature of the surrounding medium (homogeneous or not) and the part played by turbulence. One of these mechanisms, the collect and collapse process, first proposed by Elmegreen \& Lada~(\cite{elm77}), is particularly interesting as it allows the formation of massive fragments (hence subsequently of massive objects, stars or clusters), out of an initially uniform medium. In this process a layer of neutral material is collected between the ionization front (IF) and the associated shock front (SF) during the supersonic expansion of an H\,{\sc{ii}}\ region. With time this layer may become massive and gravitationally unstable, leading to the formation of dense massive cores (Whitworth et al.~\cite{whi94}, Hosokawa \& Inutsuka~\cite{hos06}). We have previously proposed seventeen Galactic H\,{\sc{ii}}\ regions as candidates for the collect and collapse process of massive-star formation (Deharveng et al.~\cite{deh05}). Among these is Sh2-212, the subject of the present paper, described in Sect.~2. Our main criterion for the choice of Sh2-212 was the presence of a bright MSX point source (Price et al.~\cite{pri01}) at its periphery, beyond the ionization front, coincident with a red object in the 2MASS survey (Skrutskie et al.~\cite{skr06}) and with a small optical reflection nebula. However, nothing was yet known about Sh2-212's molecular environment. We now present new high-resolution molecular observations to investigate the distribution of molecular material. Do we observe a layer of dense neutral material surrounding the ionized gas -- a signature of the collect and collapse process of star formation? These observations are described in Sect.~4. 
We also present new $JHK$ observations to determine the stellar content of this region. Are young stellar objects (YSOs) present on the border of Sh2-212? What is the nature of the MSX point source observed near the IF? This is discussed in Sect.~3. We also present new radio observations aimed at detecting possible UC H\,{\sc{ii}}\ regions at the periphery of Sh2-212, and at detecting maser emission indicative of recent star formation. These observations are described in Sect.~5. The results are discussed in Sect.~6, where we present our view of the morphology of the whole complex, and argue in favour of the collect and collapse process of massive-star formation. \\ \section{Description of the region} Sh2-212 (Sharpless~\cite{sha59}) is a bright optically-visible H\,{\sc{ii}}\ region in the outer Galaxy ({\it l}=155\fdg36, {\it b}=2\fdg61). It lies high above the Galactic plane ($\sim300$~pc assuming a distance of 6.5~kpc, Sect.~2.1 ), and far from the Galactic centre (14.7~kpc). Its diameter is $\sim$5\hbox{$^\prime$}\ (9.5~pc). It is a high-excitation H\,{\sc{ii}}\ region, ionized by a cluster containing an O5.5neb (Moffat et al.~\cite{mof79}) or an O6I star (Chini \& Wink~\cite{chi84}). Fig.~1 presents a colour image of Sh2-212 in the optical, a composite of two frames obtained at the 120-cm telescope of the Observatoire de Haute-Provence. Pink corresponds to the H$\alpha$\ emission at 6563~\AA\ (exposure time 1~hour) and turquoise to the [S\,{\sc{ii}}]\ emission at 6717~\AA\ and 6731~\AA\ (exposure time 2$\times$1~hour). [S\,{\sc{ii}}]\ is enhanced near the ionization front, and thus is a good tracer of the limits of the ionized region and of the morphology of the ionization front. Sh2-212 appears as a circular H\,{\sc{ii}}\ region around its exciting cluster. Numerous substructures are present, indicating that Sh2-212 is presently evolving in a non-homogeneous medium. A bright rim is conspicuous at the north-western border of Sh2-212. 
A small reflection nebula (indicated by an arrow in Fig.~1) is present beyond this ionization front. Because Sh2-212 is both optically bright and situated far (14.7~kpc) from the Galactic centre, it has been included in numerous studies of abundance determinations in the Galaxy. For this purpose absolute integrated line fluxes in a number of nebular emission lines were measured through a circular diaphragm by Caplan et al.~(\cite{cap00}). These measurements confirm the high-excitation of Sh2-212; they indicate an electron temperature of 9700~K and an electron density of 130~cm$^{-3}$ (Deharveng et al.~\cite{deh00}). The coordinates of the objects discussed in the text are given in Table~1. \begin{figure}[htp] \includegraphics[width=90mm]{9233f1.eps} \caption{Composite colour image of Sh2-212 in the optical. North is up and east is left. The size of the field is $7\farcm0$ (E-W) $\times$ $6\farcm6$ (N-S). Pink corresponds to the H$\alpha$ 6563\AA\ emission, and turquoise to the [S\,{\sc{ii}}]\ 6717\AA\ + 6731\AA\ emission, enhanced near the ionization front. The arrow points to the reflection nebulosity, associated with star no.~228, discussed in the text.} \end{figure} \begin{table}[htp] \caption{Coordinates of the objects discussed in the text} \begin{tabular}{lllllll} \hline\hline Object & \multicolumn{3}{c}{RA~(2000)} & \multicolumn{3}{c}{Dec~(2000)}\\ & h & m & s & $\hbox{$^\circ$}$ & $\hbox{$^\prime$}$ & $\hbox{$^{\prime\prime}$}$ \\ \hline Main exciting star M2 & 4 & 40 & 37.44 & +50 & 27 & 40.5 \\ MSX G155.3319+02.5989 & 4 & 40 & 27.2 & +50 & 28 & 29 \\ IRAS 04366+5022 & 4 & 40 & 26.1 & +50 & 28 & 24 \\ UC H\,{\sc{ii}}\ region & 4 & 40 & 27.2 & +50 & 28 & 29 \\ Massive YSO no.~228 & 4 & 40 & 27.24 & +50 & 28 & 29.5 \\ \hline \\ \end{tabular} \end{table} Sh2-212 is a thermal radio-continuum source, with a flux density of 1.58~Jy at 1.46~GHz (Fich~\cite{fic93}, and references therein). 
The angular resolution of Fich's observations, 40\hbox{$^{\prime\prime}$}, was insufficient for the detection of a possible UC H\,{\sc{ii}}\ region on the border of Sh2-212. Higher angular resolution radio observations will be presented and discussed in Sect.~5. Table~2 lists the velocities, obtained by various authors, of the ionized gas and the associated molecular material. As a whole, the ionized gas flows away from the molecular cloud, with a radial velocity of the order of 5~km~s$^{-1}$. This may be indicative of a ``champagne flow'' (Tenorio-Tagle~\cite{ten79}). This point will be developed in Sect.~6. \begin{table}[htp] \caption{Velocity measurements} \begin{tabular}{lllllll} \hline\hline Line & V$_{\rm LSR}$ (km s$^{-1}$) & Reference \\ \hline H$\alpha$ & $-$39.5 & Pi{\c{s}}mi{\c{s}} et al.~(\cite{pis91}) \\ H$\alpha$ & $-$43.9$\pm$0.1 & Fich et al.~(\cite{fic90}) \\ H109$\alpha$ & $-$40.1$\pm$1.0 & Lockman~(\cite{loc89}) \\ CO & $-$35.3$\pm$0.3 & Blitz et al.~(\cite{bli82}) \\ CO & $-$34.3 & Shepherd \& Churchwell~(\cite{she96})\\ \hline \\ \end{tabular} \end{table} Sh2-212 was proposed by Deharveng et al.~(\cite{deh05}) as a candidate for the collect and collapse process of massive-star formation, on the basis of i)~the presence of a ring of MSX emission at 8.3~$\mu$m (mainly PAH emission) surrounding the brightest part of the ionized region; this indicates the presence of dense neutral material and dust around the ionized gas; and ii) the presence of a luminous MSX point source in the direction of this dust ring (indicated by an arrow in Fig.~2). This MSX point source lies in the direction of the reflection nebulosity. Fig.~2 shows that the bright ring of MSX emission at 8.3~$\mu$m surrounds only the bright northern part of the ionized region, and not the whole region. However, fainter brightness emission is observed around the southern part of the H\,{\sc{ii}}\ region. This point will be discussed in Sect.~6. 
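As a cross-check of the velocities in Table~2, the sketch below converts $V_{\rm LSR}\simeq-35$~km~s$^{-1}$ into a rough kinematic distance. It assumes a flat rotation curve with the IAU constants $R_0=8.5$~kpc and $\Theta_0=220$~km~s$^{-1}$; these assumptions are ours (Sect.~2.1 uses the Brand \& Blitz curve), so the result is only indicative.

```python
import math

# Kinematic distance sketch for Sh2-212 (l = 155.36 deg, b = 2.61 deg).
# Flat rotation curve and IAU constants are OUR simplifying assumptions,
# not the rotation curve actually used in the paper.
R0, THETA0 = 8.5, 220.0                         # kpc, km/s
l, b = math.radians(155.36), math.radians(2.61)
v_lsr = -35.0                                   # km/s (Table 2)

# v_lsr = Theta0 * (R0/R - 1) * sin(l) * cos(b)  ->  Galactocentric radius R
R = R0 / (1.0 + v_lsr / (THETA0 * math.sin(l) * math.cos(b)))

# R^2 = R0^2 + (d cos b)^2 - 2 R0 (d cos b) cos(l)  ->  heliocentric distance d
# (outer Galaxy: the positive root is unique)
cb, cl = math.cos(b), math.cos(l)
d = (R0 * cl + math.sqrt(R0**2 * cl**2 - R0**2 + R**2)) / cb

print(f"R = {R:.1f} kpc, d = {d:.1f} kpc")
```

With these simplified inputs the distance comes out slightly below, but of the same order as, the 6.1~kpc obtained with the Brand \& Blitz curve.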
\begin{figure} \includegraphics[width=90mm]{9233f2.eps} \caption{Composite colour image of Sh2-212 in the optical and the mid-IR. Red corresponds to the MSX emission at 8.3~$\mu$m, and turquoise to the optical emission of the ionized gas. The arrow points to the MSX point source associated with star no.~228 and the UC H\,{\sc{ii}}\ region discussed in the text.} \end{figure} \subsection{The distance of Sh2-212} A kinematical distance $D$ can be estimated from the velocity of the molecular gas, $V_{\rm LSR}(\rm CO)=-35 \pm 1$~km~s$^{-1}$ (Table~2), and from the Galactic rotation curve of Brand \& Blitz~(\cite{bra93}); we obtain $D=6.1 \pm0.4$~kpc. A photometric distance can be estimated for M2, the main exciting star of Sh2-212. We have used the spectral type and the $UBV$ magnitudes of Moffat et al.~(\cite{mof79}; O5.5V, $U$=11.90, $B$=12.34, $V$=11.77), our $JHK$ magnitudes (Sect.~4; $J$=10.38, $H$=10.17, $K$=10.07), the synthetic photometry of O stars by Martins \& Plez (\cite{mar06}), and the interstellar extinction law of Rieke \& Lebofsky~(\cite{rie85}). The best fit is obtained for a distance of 6.5~kpc and a visual extinction $A_V$ of 2.85~mag. In the following we adopt this distance of 6.5~kpc, which is consistent with the kinematic one. \section{Near IR observations} \subsection{Observations} Sh2-212 was observed with the CFHT-IR camera on the night of 2002 October 20. Frames were obtained in the $JHK$ broad-band filters. The detector was a Rockwell array of 1024$\times$1024 pixels, with a pixel size of $0\farcs211$. For each band a mosaic of nine positions was obtained, each position being observed ten times with a short exposure time. This results in a total field of view of $4\farcm4$ (E-W) $\times$ $5\farcm2$ (N-S), and total integration times of 270s, 270s, and 450s in the $J$, $H$, and $K$ bands respectively. The $J$, $H$, and $K$ images were reduced using the DAOPHOT stellar photometry package (Stetson~\cite{ste87}), with PSF fitting. 
The results were calibrated using the 2MASS Point Source Catalog (Skrutskie et al. \cite{skr06}), with 65 common stars. After a best-fit transformation, an rms dispersion of 0.10~mag is present between our photometry and 2MASS, in each band and each colour. A total of 891 sources were measured in the three $JHK$ bands, and 36 more were measured in only one or two bands. The detection limit is $\sim$17.5~mag in $J$ and $K$, and $\sim$18~mag in $H$. The seeing was $0\farcs9$. Fig.~3 presents a composite $JHK$ colour image of Sh2-212. Our observations do not cover the whole H\,{\sc{ii}}\ region (our field is centred on the massive YSO, star no.~228, discussed below). We have supplemented the coverage of the Sh2-212 field, when necessary, using the 2MASS survey. Table~3, giving the coordinates and the magnitudes of the stars in the $JHK$ bands (CFHT observations), is available in electronic form at the CDS. \begin{figure} \includegraphics[width=90mm]{9233f3.eps} \caption{Sh2-212. Composite colour image of Sh2-212 in the near-IR ($J$ is blue, $H$ is green, $K$ is red). The colours of the stars are mainly determined by extinction. North is up and east is left. The size of the field is $4\farcm4$ (E-W) $\times$ $5\farcm2$ (N-S). The red nebulosity at the centre of the field corresponds to the optical reflection nebula associated with the MSX point source and star no.~228.} \end{figure} \begin{table*}[htp] \caption{ Coordinates and $JHK$ photometry of all the stars in a $4\farcm4 \times 5\farcm2$ field centred on star no.~228. This table is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/.} \end{table*} \subsection{The stellar population associated with Sh2-212} Fig.~4 presents the $K$ versus $J-K$ magnitude-colour diagram of the sources detected in the three bands. 
The main sequence is drawn for a distance of 6.5~kpc, using the absolute calibration and colours of Martins et al.~(\cite{mar05}) and Martins \& Plez~(\cite{mar06}) for O3 to O9.5 stars; for later spectral types the absolute calibration is that of Schmidt-Kaler~(\cite{sch82}) and the colours are from Tokunaga~(\cite{tok00}). Note that there is some overlap between the magnitudes of O9.5 and B0 stars. The reddening lines correspond to a visual extinction of 30~mag. The interstellar extinction law is from Indebetouw et al.~(\cite{ind05}). The $J-H$ versus $H-K$ colour-colour diagram is presented in Fig.~5. The reddening lines are drawn for a visual extinction of 20~mag. They bracket the region occupied by reddened main-sequence stars. Stars near or above the upper reddening line may be evolved stars (giants). Stars below the bottom reddening line have a near-IR excess; they are probably young stellar objects associated with large amounts of dust (in an envelope or a disk), such as T~Tauri stars, Herbig Ae/Be stars and more massive YSOs (Lada \& Adams~\cite{lad92}). We have labelled a few objects in Fig.~6: $\bullet$ the stars observed in $UBV$ by Moffat et al.~(\cite{mof79}), each marked with the letter M followed by the number given by these authors. $\bullet$ the stellar object observed in the direction of the MSX and IRAS point sources. The large symbols in Figs 4 and 5 correspond to the magnitudes and colours of the whole object (star no.~228 plus associated nebulosity) integrated in a diaphragm of radius 6$\hbox{$.\!\!^{\prime\prime}$}$3. Note that this object presents a near-IR colour excess. $\bullet$ a few other stars, saturated on our frames, are identified by asterisks; their magnitudes are from the 2MASS catalogue. Fig.~4 shows that the whole region is affected by a visual extinction of about 3~mag, most probably of interstellar origin. 
Moffat et al.'s stars are, at optical wavelengths, the brightest components in the observed field; most of these stars follow the main sequence reddened by $\sim$3~mag. In particular, this is the case for M2, an O5.5 star according to Moffat et al., and the main exciting star of Sh2-212. Our $JHK$ photometry confirms this conclusion. However, Figs~4 and 5 show that a few of Moffat et al.'s stars are evolved; this is the case for stars M6, M10, and possibly M8 and M9. Star A, very bright in the near IR but not in the optical (not measured by Moffat et al.), is also an evolved star. (Fig.~4 shows that it is too luminous to be a main-sequence star associated with Sh2-212, and Fig.~5 shows it to be a giant star.) It is not possible to know whether these stars belong to the exciting cluster or are unrelated foreground stars. Our near-IR images also show that stars M1 and M3 are double stars. Red stars are observed all over the field of Fig.~3. Their density is especially high in the direction or in the vicinity of the molecular condensation C2 (Sect. 4.2). Many of these objects present a near-IR excess, indicating that they are YSOs. The extinction affecting the central cluster of Sh2-212 (about 3 mag) is very low for the distance of the region and is thus probably mainly of interstellar origin. Very little {\it local} dust is therefore present in front of the optical nebula and its exciting cluster. \begin{figure}[htp] \includegraphics[width=90mm]{9233f4.eps} \caption{$K$ versus $J-K$ diagram. The main sequence is drawn for a visual extinction of zero (full line) and 3~mag (dotted line). The reddening lines, corresponding to a visual extinction of 30~mag, start from the positions of O3V and B2V stars. A few stars are identified, according to Fig.~6. Moffat et al.'s (\cite{mof79}) stars are identified by their number according to these authors. The asterisks are for 2MASS measurements. 
The connected full circles are for the star no.~228 (the YSO) and its associated nebulosity.} \end{figure} \begin{figure}[htp] \includegraphics[width=90mm]{9233f5.eps} \caption{$J-H$ versus $H-K$ diagram for stars with $K\leq17$~mag. The main sequence (full line) is drawn for a visual extinction of zero. The reddening lines, starting at the positions of B0V and M0V stars, have lengths corresponding to a visual extinction of 20~mag. The symbols are the same as in Fig.~4.} \end{figure} \begin{figure}[htp] \includegraphics[width=90mm]{9233f6.eps} \caption{Identification of a few objects discussed in the text. The underlying image is a colour composite of the [S\,{\sc{ii}}]\ frame (blue) and of the $K$ frame (orange). The lines indicate the limits of our $JHK$ frames.} \end{figure} \section{Molecular observations} \subsection{Observations} In March 2006 we observed the emission of the molecular gas associated with Sh2-212, in the \hbox{${}^{12}$CO}\ and \hbox{${}^{13}$CO}\ \hbox{$J=2\rightarrow 1$}\ lines using the IRAM 30-m telescope (Pico Veleta, Spain). We mapped an area of $16\hbox{$^\prime$} \times 16\hbox{$^\prime$}$ with the HERA heterodyne array (Schuster et al. \cite{sch04}). HERA has nine dual polarization pixels. The elements of the array are arranged in a $3\times 3$ matrix, with a separation of $24\hbox{$^{\prime\prime}$}$ on the sky between adjacent elements. The beam size of the telescope is $\sim$11\hbox{$^{\prime\prime}$} at these frequencies (Table~4). The data were acquired by drifting the telescope in right ascension (``on-the-fly'') in the frequency switching mode. The beam pattern on the sky was rotated by 18.5 degrees with respect to the right ascension axis by means of a K-mirror mounted between the Nasmyth focal plane and the cryostat of the heterodyne array. 
When drifting the telescope in right ascension, two adjacent rows are separated by $8\hbox{$^{\prime\prime}$}$, which results in a map slightly under-sampled in declination (the Nyquist sampling step is $5\farcs5$). Details about the HERA array and the K-mirror can be found at http://www.iram.fr/IRAMES/index.htm. We used the WILMA digital autocorrelator, with a spectral resolution of 78 kHz, as a spectrometer; the resolution was later degraded to obtain a velocity resolution of 0.2~km~s$^{-1}$. The observing conditions were typical for the time of the year, with system temperatures of 400~K to 500~K. Pointing, which was checked every 90 min by scanning across nearby quasars, was found to be stable to better than $2\hbox{$^{\prime\prime}$}$. Supplementary observations in the \hbox{C${}^{18}$O}\ \hbox{$J=2\rightarrow 1$}\ and the CS \hbox{$J=2\rightarrow 1$}\, \hbox{$J=3\rightarrow 2$}\ and \hbox{$J=5\rightarrow 4$}\ transitions were carried out towards the condensations identified in the HERA maps, using the ``standard'' heterodyne receivers at the IRAM 30-m telescope. The weather conditions were very good, with system temperatures of 145 K and 260 K at 3~mm and 1.3~mm respectively. The $^{12}$CO and $^{13}$CO emission extends over scales of several arcmin (Fig.~7), comparable to or larger than the first and second error beam of the IRAM telescope (see Table~1 in Greve, Kramer, \& Wild, \cite{gre98}). Hence the main-beam temperature scale is not a good approximation to the intrinsic $^{12}$CO and $^{13}$CO line brightnesses. The antenna temperature scale $T_a^{*}$ is a better approximation and we express the CO and $^{13}$CO fluxes in this unit. On the other hand, the \hbox{C${}^{18}$O}\ emission is much more compact, and the main-beam brightness temperature scale is a reasonable approximation to the intrinsic line brightness. We adopt a value of 0.53 for the main-beam efficiency at the frequency of the \hbox{C${}^{18}$O}\ line. 
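The temperature scales discussed above are related by the usual single-dish convention $T_{\rm mb}=(F_{\rm eff}/B_{\rm eff})\,T_a^{*}$. A minimal sketch with the efficiencies of Table~4 (the conversion itself is the standard convention, not something derived in this paper):

```python
# Antenna- to main-beam-temperature conversion, T_mb = (F_eff / B_eff) * T_a*.
EFFS = {                                  # line: (B_eff, F_eff), from Table 4
    "CS(2-1)":   (0.77, 0.95),
    "CS(3-2)":   (0.69, 0.93),
    "C18O(2-1)": (0.53, 0.90),
    "13CO(2-1)": (0.52, 0.90),
    "12CO(2-1)": (0.52, 0.90),
    "CS(5-4)":   (0.54, 0.91),
}

def t_mb(t_a_star, line):
    """Convert an antenna temperature T_a* (K) to the main-beam scale (K)."""
    b_eff, f_eff = EFFS[line]
    return t_a_star * f_eff / b_eff

# For the compact C18O emission the main-beam scale is the relevant one;
# T_mb exceeds T_a* by the factor F_eff / B_eff = 0.90 / 0.53 ~ 1.7.
scale = t_mb(1.0, "C18O(2-1)")
```

For the extended $^{12}$CO and $^{13}$CO emission this conversion is deliberately not applied, for the error-beam reasons given above.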
Table~4 gives a summary of the millimetre observations and the efficiencies used. \begin{table}[htp] \begin{flushleft} \caption[]{Summary of millimetre line observations. } \begin{tabular}{|l|c|c|c|c|} \hline Line & Frequency & Beam width & $B_{\rm eff}$ & $F_{\rm eff}$ \\ & (GHz) & (arcsec) & & \\ \hline CS $\hbox{$J=2\rightarrow 1$}$ & 97.98097 & 24 & 0.77 & 0.95 \\ CS $\hbox{$J=3\rightarrow 2$}$ & 146.96905 & 16 & 0.69 & 0.93 \\ $\hbox{C${}^{18}$O}~\hbox{$J=2\rightarrow 1$}$ & 219.56038 & 11 & 0.53 & 0.90 \\ $\hbox{${}^{13}$CO}~\hbox{$J=2\rightarrow 1$}$ & 220.39872 & 11 & 0.52 & 0.90\\ $\hbox{${}^{12}$CO}~\hbox{$J=2\rightarrow 1$}$ & 230.53800 & 11 & 0.52 & 0.90 \\ CS $\hbox{$J=5\rightarrow 4$}$ & 244.93563 & 10 & 0.54 & 0.91 \\ \hline \end{tabular} \end{flushleft} \end{table} \begin{figure*}[htp] \includegraphics[width=180mm]{9233f7.eps} \caption{Channel maps of the \hbox{${}^{12}$CO}\ \hbox{$J=2\rightarrow 1$}\ emission as observed with HERA and integrated over velocity intervals of $1\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$; the central velocity of each bin is marked in the upper left corner of each panel. The first contour is at 1~K~km~s$^{-1}$ and the contour interval is 2~K~km~s$^{-1}$. The coloured background is the MSX emission at 8.3~$\mu$m. The 0,0 position is that of the MSX point source (Table~1).} \end{figure*} \subsection{Distribution and kinematics} In all the maps presented hereafter, the coordinates are expressed in arcsecond offsets with respect to the MSX point source. The distribution of the integrated \hbox{${}^{12}$CO}\ emission as a function of velocity is shown in Fig.~7. The CO line traces a diffuse filamentary cloud which extends southeast--northwest. The velocity of this filament is between $-36$ and $-37\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$, hence a few kilometres per second more positive than that of the ionized gas. 
A bright and thin half-ring structure of molecular gas is very clearly associated with the photo-dissociation region. This half-ring follows the ring of MSX emission at 8.3~$\mu$m which surrounds the brightest part of the ionized region. It lies at the back of the H\,{\sc{ii}}\ region, as no corresponding extinction of the optical nebular emission is observed in its direction. This is consistent with the observed velocity field, which shows that the molecular gas ring is redshifted with respect to the molecular filament and the ionized gas. The ring is fragmented into at least five condensations (Fig.~8). From west to east we have condensation 1, observed in the direction of the MSX point source and the UC H\,{\sc{ii}}\ region (see Sect.~5), with a mean velocity of $-$35.5~km~s$^{-1}$; condensations 2 and 3 with a velocity of $-$33.5~km~s$^{-1}$; condensation 4 with a velocity of $-$32.5~km~s$^{-1}$. Further to the east lies condensation 5 with a velocity of $-$37.5~km~s$^{-1}$. The velocity varies along the half-ring structure: condensations 1 and 5, situated on opposite ends of this structure, have velocities not very different from that of the filament; the half-ring, at the rear of the Sh2-212 H\,{\sc{ii}}\ region, is expanding with a velocity of a few kilometres per second with respect to the filament. The ionized gas flows away from the filament in the opposite direction (see Fig.~13). \begin{figure*}[htp] \includegraphics[width=180mm]{9233f8.eps} \caption{{\bf Top Left:} Condensation C1, showing the $^{13}$CO(2-1) emission integrated between $-36.1\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$ and $-35.1\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$. The first contour and the step correspond respectively to 20\% and 15\% of the peak brightness ($9.6\hbox{\kern 0.20em K}\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$). 
{\bf Bottom Left:} Condensations C2, C3, and C4, showing the $^{13}$CO(2-1) emission integrated between $-34.0\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$ and $-32.7\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$. The first contour and the step are respectively 20\% and 15\% of the peak brightness ($15.7\hbox{\kern 0.20em K}\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$). {\bf Top Right:} Condensation C5, showing the $^{13}$CO(2-1) emission integrated between $-38.4\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$ and $-36.6\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$. The first contour and the step are respectively 35\% and 15\% of the peak brightness ($3.8\hbox{\kern 0.20em K}\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$). {\bf Bottom Right:} Filament, with the $^{13}$CO(2-1) emission integrated between $-36.8\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$ and $-35.9\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$. The peak is at 6.6~K~km~s$^{-1}$. The first contour and the step are respectively 20\% and 20\% of the peak brightness.} \end{figure} Fig.~8 compares the \hbox{${}^{13}$CO}\ emission, especially the locations of the condensations, with the [S\,{\sc{ii}}]\ emission of the ionized gas. It shows that the bright rim present at the northwest border of Sh2-212, which harbours the UC H\,{\sc{ii}}\ region and star no.~228, is the ionized border of condensation 1. Thus condensation 1 appears as the remains of the parental core in which the massive YSO no.~228 formed and subsequently ionized a UC H\,{\sc{ii}}\ region. Furthermore, several molecular substructures have counterparts in the [S\,{\sc{ii}}]\ image, as substructures of the ionization front. For example, condensation 5 is situated behind an IF traced by [S\,{\sc{ii}}]\ emission. 
\begin{figure}[htp] \includegraphics[width=90mm]{9233f9.eps} \caption{Molecular lines observed towards the molecular fragments C1 to C4.} \end{figure} \subsection{Physical conditions} \begin{table*} \begin{flushleft} \caption[]{Properties of the molecular gas condensations} \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline Condensation & Offset Position & $\Delta v^1$ & Core Dimensions & $N(\hbox{H${}_2$})^{\rm peak,1}$ & $n(\hbox{H${}_2$})^1$ & $N$(CS) & $n(\hbox{H${}_2$})^{\rm peak,2}$ & Fragment Mass \\ & (\hbox{$^{\prime\prime}$}) & ($\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$) & (\hbox{$^{\prime\prime}$}) & ($\hbox{\kern 0.20em cm$^{-2}$}$) & ($\hbox{\kern 0.20em cm$^{-3}$}$) & ($\hbox{\kern 0.20em cm$^{-2}$}$) & ($\hbox{\kern 0.20em cm$^{-3}$}$) & ($\hbox{\kern 0.20em $M_\odot$}$) \\ \hline 1 & ($-6,-5$) & 1.1 & $25 \times 8$ & 4.3(21) & 1.0(4) & 6.5(12) & 4(5) & 220 \\ 2 &($+108,-90$) & 1.2 & $25 \times 12$ & 6.7(21) & 1.0(4) & 3.8(12) & 4(5) & 63 \\ 3 & ($+68,-90$) & 1.2 & $25 \times 10$ & 4.3(21) & 8.2(3) & 2.7(12) & 1.8(5) & 45 \\ 4 & ($+136,-30$) &1.0& $15 \times 10$ & 7.2(21) & 2.5(4) & 1.2(13) & 1.0(6) & 61 \\ 5 & ($+210,-5$)& $2.0^3$& $16 \times 16$ & 2.7(21) & \_ & \_ & \_ & 38 \\ \hline \end{tabular} \end{flushleft} $^1$~determined from \hbox{C${}^{18}$O}\ data. \\ $^2$~determined from CS data.\\ $^3$~from \hbox{${}^{13}$CO}\ data. \\ \end{table*} \subsubsection{The kinetic temperature} $^{12}$CO brightness temperatures of 20--30~K are observed along the molecular gas ring and the cores. Towards C1, the maximum of \hbox{${}^{12}$CO}\ brightness is observed at the offset position $(-3\hbox{$^{\prime\prime}$},-1\hbox{$^{\prime\prime}$})$, where $T_{\rm a}^{*}= 24\hbox{\kern 0.20em K}$. This implies a kinetic temperature $T_{\rm k}\approx 27\hbox{\kern 0.20em K}$. 
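The step from the observed peak brightness to $T_{\rm k}$ follows the usual optically thick relation $T_B = J(T_{\rm ex}) - J(T_{\rm bg})$, with $J(T)=T_0/(e^{T_0/T}-1)$ and $T_0=h\nu/k$. The sketch below uses only standard constants; since the efficiency corrections actually applied are not spelled out here, it reproduces the quoted estimate only approximately.

```python
import math

# Optically thick excitation-temperature estimate for 12CO(2-1).
H, K_B = 6.62607e-34, 1.38065e-23   # SI Planck and Boltzmann constants
NU_12CO21 = 230.538e9               # Hz, from Table 4
T_BG = 2.73                         # K, cosmic microwave background

def planck_J(T, nu):
    """Radiation temperature J(T) = T0 / (exp(T0/T) - 1), T0 = h nu / k."""
    T0 = H * nu / K_B
    return T0 / (math.exp(T0 / T) - 1.0)

def t_ex_thick(t_b, nu):
    """T_ex of an optically thick line with peak brightness t_b (K)."""
    T0 = H * nu / K_B
    return T0 / math.log(1.0 + T0 / (t_b + planck_J(T_BG, nu)))

t_ex = t_ex_thick(24.0, NU_12CO21)  # close to the T_k ~ 27 K quoted above
```

The result lands within a few kelvins of the $T_{\rm k}\approx27$~K quoted in the text; the small difference presumably reflects the exact efficiency corrections applied by the authors.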
The peak brightness temperatures in the \hbox{${}^{13}$CO}\ and \hbox{C${}^{18}$O}\ \hbox{$J=2\rightarrow 1$}\ transitions are $11\hbox{\kern 0.20em K}$ and $1.5\hbox{\kern 0.20em K}$, compatible with opacities $\tau_{13}= 0.58$ and $\tau_{18}= 0.08$, respectively, adopting standard relative abundances $\rm [^{12}CO]/[^{13}CO]= 60$ and $\rm [^{13}CO]/[C^{18}O]= 8$. Hence the $\hbox{${}^{13}$CO}$ \hbox{$J=2\rightarrow 1$}\ transition is moderately optically thin. The \hbox{C${}^{18}$O}\ line traces a gas column density $N(\hbox{H${}_2$})=4.3\times 10^{21}\hbox{\kern 0.20em cm$^{-2}$}$ at the intensity peak of C1. Calculations of the \hbox{C${}^{18}$O}\ line excitation in the large velocity gradient (LVG) approximation show that the opacity of the line is consistent with a gas kinetic temperature of $30\hbox{\kern 0.20em K}$, as estimated above. A similar conclusion was derived for C3. Hence we adopt a kinetic temperature of $30\hbox{\kern 0.20em K}$ for the condensations, in what follows. Inside the filamentary cloud, away from the H\,{\sc{ii}}\ region, the \hbox{${}^{12}$CO}\ and \hbox{${}^{13}$CO}\ \hbox{$J=2\rightarrow 1$}\ ~brightness temperatures are much lower, typically a few kelvins for both lines. At the offset position $(-120\hbox{$^{\prime\prime}$},+80\hbox{$^{\prime\prime}$})$, we measure $T_B(\hbox{${}^{12}$CO})= 8.2\hbox{\kern 0.20em K}$ and $T_B(\hbox{${}^{13}$CO})= 2.1\hbox{\kern 0.20em K}$, which implies an opacity $\tau_{13}= 0.32$, adopting the same standard abundance ratio. Both line intensities and opacities are accounted for by a gas layer of column density $N(\hbox{H${}_2$})= 9\times 10^{20}\hbox{\kern 0.20em cm$^{-2}$}$ at about $14\hbox{\kern 0.20em K}$. \subsubsection{Masses and Densities} The masses of the molecular gas fragments were derived from the \hbox{${}^{13}$CO}\ data, in the optically thin limit. 
For condensation C1 the contour at half power of the \hbox{${}^{13}$CO}\ \hbox{$J=2\rightarrow 1$}\ emission delineates a condensation of $56 \hbox{$^{\prime\prime}$} \times 28\hbox{$^{\prime\prime}$}$, oriented north-south, centred at offset position $(-6\hbox{$^{\prime\prime}$},-1\hbox{$^{\prime\prime}$})$. The total mass of the condensation is obtained by integrating over the 20\% peak contour, from which we determine $M= 220\hbox{\kern 0.20em $M_\odot$}$. A similar procedure was applied to the five condensations. The results are summarised in Table~5. The half shell of collected molecular material has a mass $\leq$720\,\hbox{\kern 0.20em $M_\odot$}, as estimated by integrating the whole $\hbox{${}^{13}$CO}\ \hbox{$J=2\rightarrow 1$}$\ emission in the ring. The mean density in a fragment, $n(\hbox{H${}_2$})$, is derived as $N(\hbox{H${}_2$})/\sqrt{ab}$ where $a$ and $b$ are the major and minor axes of the condensation. We have taken into account the dilution in the main beam, by applying a correction factor $(ab)/(ab+\theta_{\rm beam}^2)$, where $\theta_{\rm beam}$ is the beam width. These mean densities are given in Table~5. Emission of the high-density gas was traced by the millimetre lines of CS (Fig.~9). The emission of the lower transition \hbox{$J=2\rightarrow 1$}\ is detected along the filament, whereas the \hbox{$J=3\rightarrow 2$}\ and \hbox{$J=5\rightarrow 4$}\ emissions are more compact. The \hbox{$J=5\rightarrow 4$}\ transition was detected only in C1 and C4, arising from a small unresolved region. The \hbox{$J=2\rightarrow 1$}\ and \hbox{$J=3\rightarrow 2$}\ line intensities are typically 1 to $2\hbox{\kern 0.20em K}$ at the brightness peak of the condensations. The lines are typically $1\hbox{\kern 0.20em km\kern 0.20em s$^{-1}$}$ wide. Estimates of the \hbox{H${}_2$}\ density in the fragments were obtained by modelling the millimetre CS line emission, in the LVG approximation. 
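The mean-density recipe above, $n(\hbox{H${}_2$})=N(\hbox{H${}_2$})/\sqrt{ab}$ with the beam-dilution correction, amounts to the following order-of-magnitude computation. The exact deconvolution applied by the authors is not fully specified, so this sketch recovers the tabulated values only to within a factor of $\sim$2.

```python
import math

# Mean density from a beam-dilution-corrected peak column and size sqrt(a*b).
# Distance (6.5 kpc) and theta_beam (11") are taken from the text; treat the
# result as order of magnitude only.
ARCSEC_CM = 6.5e3 * 3.0857e18 * math.radians(1.0 / 3600.0)  # cm per arcsec at 6.5 kpc

def mean_density(n_col_peak, a, b, theta_beam=11.0):
    """Mean H2 density (cm^-3) for an a x b (arcsec) condensation."""
    dilution = (a * b) / (a * b + theta_beam ** 2)  # main-beam coupling factor
    n_col = n_col_peak / dilution                   # dilution-corrected column
    depth = math.sqrt(a * b) * ARCSEC_CM            # assumed line-of-sight depth
    return n_col / depth

n_c1 = mean_density(4.3e21, 25.0, 8.0)  # Table 5 quotes 1.0e4 cm^-3 for C1
```

For C1 this gives $n(\hbox{H${}_2$})$ of order $5\times10^{3}$~cm$^{-3}$, within a factor of two of the Table~5 value.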
Analysis of the \hbox{$J=2\rightarrow 1$}\ and the \hbox{$J=3\rightarrow 2$}\ transitions at the brightness peak indicates typical densities $n(\hbox{H${}_2$})\simeq$ 2--4 $\times 10^5\hbox{\kern 0.20em cm$^{-3}$}$ in the cores, and up to $1.0\times 10^6\hbox{\kern 0.20em cm$^{-3}$}$ towards C4. Note that these densities are much higher than the mean densities estimated from the \hbox{C${}^{18}$O}\ emission; thus the CS material has a small filling factor. \section{Radio observations} We observed Sh2-212, in the direction of the MSX point source, with the Very Large Array (VLA) on 2005 November 16. We searched for emission from methanol at 44~GHz (7~mm), water at 22~GHz (1.3~cm), ammonia in the (1,1) and (2,2) lines at 23~GHz, and neutral hydrogen at 21~cm. The array was in its most compact configuration, providing spatial resolutions of $1\farcs5$, $3\farcs3$, and $45\hbox{$^{\prime\prime}$}$ in the 7~mm, 1.3~cm, and 21~cm bands, respectively. Each molecular transition was observed with a distinct combination of bandwidth and number of channels, resulting in spectral resolutions of 0.3~km~s$^{-1}$ for methanol and water, and 0.6~km~s$^{-1}$ for ammonia. For all three molecules the total bandwidth corresponds to about 40~km~s$^{-1}$, and is centred at $-35$~km~s$^{-1}$. Further observational details will be provided in a forthcoming paper. We did not detect any of the molecular transitions down to the 5~$\sigma$ levels of 90, 33, and 10 mJy~beam$^{-1}$ for the methanol, water, and ammonia lines, respectively. The neutral hydrogen line at 21~cm was detected, and is discussed below.\\ The absence of 44~GHz methanol masers is not necessarily surprising. Methanol masers are known to be associated with high-mass star formation (e.g., Ellingsen \cite{ell06}); however, type II methanol masers (such as those producing the 6.7 and 12.2~GHz lines) are thought to be pumped by the radiation field of YSOs, and hence are closely linked with the star formation process.
Type I masers (such as those producing the 44~GHz line) are thought to be collisionally pumped, and may not be directly associated with massive YSOs. More surprising is the absence of water masers, which are nearly ubiquitous in star-forming regions. Water masers are known to be variable, and a possible explanation for our non-detection is that such masers are present but currently quiescent. Other explanations for the non-detection may have implications for the star formation process. For example, water masers are typically thought to arise in outflow and/or accretion processes, which may be absent in YSO no.~228. Our 5~$\sigma$ detection level of 10 mJy~beam$^{-1}$ for the (1,1) and (2,2) ammonia lines allows us to estimate an upper limit for the gas column density. Assuming optically thin emission, with a 30~K excitation temperature, and a beam filling factor of one, our observations could detect an ammonia column density of about $10^{14}$~cm$^{-2}$. Adopting an ammonia abundance of $10^{-8}$ indicates a detection limit on the H$_2$ column density of about $10^{22}$~cm$^{-2}$. This is marginally higher than the several times $10^{21}$~cm$^{-2}$ column densities reported in Table 5; thus our non-detection of ammonia is consistent with the column densities inferred from the CO observations. Hot molecular cores, a common tracer of young, high-mass star formation, with column densities of $10^{23}$--$10^{24}$~cm$^{-2}$, are clearly absent. \\ Centimetre continuum data of Sh2-212 were obtained from the VLA data archive of programmes AF346 and AR390, at 1.46~GHz and 8.69~GHz, respectively. These data were calibrated and imaged using standard procedures for continuum data. The AF346 observations were made in 1998 in the C and B configurations. The lower resolution (16\hbox{$^{\prime\prime}$}$\times$13\hbox{$^{\prime\prime}$}) C-array data imaged both the Sh2-212 region and a compact source to the northwest.
The resolution was too low for a reliable determination of the compact source's parameters; the higher resolution (5\hbox{$^{\prime\prime}$}$\times$4\hbox{$^{\prime\prime}$}) B-array data were used for this purpose. The AR390 observations were made in 1997 in the D configuration, and provided an angular resolution of 10\hbox{$^{\prime\prime}$}$\times$7\hbox{$^{\prime\prime}$}. Sh2-212 was too large to be imaged by these data, but the compact source parameters were reliably determined to be $11\pm1$~mJy at 1.46~GHz and $7.6\pm1$~mJy at 8.69~GHz. Gaussian fits to the source size and position yield deconvolved major and minor axes of 3\hbox{$^{\prime\prime}$}, and a J2000 position 04$^{\rm h}$40$^{\rm m}$27$^{\rm s}$.2, +50\degr28\arcmin29\hbox{$^{\prime\prime}$}, i.e. coincident with the MSX source G155.3319+02.5989 (see Table 1). Although no ammonia was detected in our 2005 observations, we used the central 75\% of the 3.1~MHz bandwidth to form a 1.3~cm continuum image, presented in Fig.~10. The compact source was detected in this image, with a flux density of $8.9\pm1.2$~mJy. For the UC H\,{\sc{ii}}\ region, the flux densities at 1.46, 8.69, and 23.7~GHz show a flat spectrum, indicative of an optically thin H\,{\sc{ii}}\ region. All three observations indicate an ionizing photon flux of log $N_{\rm Lyc}>46.5$, or the equivalent of a B1 or earlier star (Smith et al.~\cite{smi02}). The 3\hbox{$^{\prime\prime}$}\ size corresponds to 0.095~pc at a distance of 6.5~kpc. Assuming a spherical H\,{\sc{ii}}\ region of this diameter, the flux densities indicate an rms electron density of $3.2 \times 10^3$~cm$^{-3}$, with a total mass of ionized gas of 0.05~$M_\odot$.\\ \begin{figure}[htp] \includegraphics[width=90mm]{9233f10.eps} \caption{ Radio continuum emission (contours) superimposed on the $H$ image (grey scale). Star no. 228 is indicated by an arrow. 
The contours show 1.3 cm continuum emission from the UC H\,{\sc{ii}}\ region coinciding in direction with the MSX point source and the star. The angular resolution of the radio image is $3 \hbox{$.\!\!^{\prime\prime}$} 3$ (indicated in the lower left corner) and the contour levels are $-$15, 15, 20, 25, 30, 40, 45 mJy~beam$^{-1}$.} \end{figure} Sh2-212 was also observed with the VLA in the 21~cm line of neutral hydrogen. The observations were made with a 1.56~MHz (330 km~s$^{-1}$) bandwidth and 255 channels of 6.1~kHz (1.3 km~s$^{-1}$) each. Data reduction followed standard VLA spectral line procedures. After the external flux and phase calibration, continuum emission was subtracted from the $uv$ data and an image cube was formed. Very extended H\,{\sc{i}}\ emission was present in the field while the H\,{\sc{i}}\ emission on size scales similar to those of Sh2-212 was relatively weak. To optimise the imaging toward the H\,{\sc{i}}\ associated with Sh2-212, the data were re-imaged, removing the shortest 0.3~k$\lambda$ baselines (to suppress the extended emission) and averaging adjacent channels (to improve the signal-to-noise ratio). The resulting image cube has an angular resolution of $50'' \times 41''$ and a spectral resolution of 2.6~km~s$^{-1}$. H\,{\sc{i}}\ emission was found in three adjacent channels, from $-40$~km~s$^{-1}$ to $-48$~km~s$^{-1}$. Three distinct H\,{\sc{i}}\ condensations were found, all lying (in projection) at the edge of the H\,{\sc{ii}}\ region. A contour plot of this emission is shown in Fig.~11. Using the peak brightness temperature (in a 45$''$ beam) we can calculate the column density over the central 1.4 pc of each clump. Assuming optically thin emission, so that the observed line temperature is approximately equal to the spin temperature times the optical depth, we calculate the column density as $N_{\rm HI}$(cm$^{-2})=1.82 \times 10^{18}~(T_L$/K)($\Delta V$/km~s$^{-1}$). 
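The column-density relation just quoted is straightforward to evaluate; a short sketch using mid-range values for the three clumps (the 1.4~pc path length is the central scale quoted in the text):

```python
PC_CM = 3.086e18   # cm per parsec

def hi_column(t_line_K, delta_v_kms):
    """Optically thin 21 cm column density:
    N(HI) [cm^-2] = 1.82e18 * (T_L / K) * (Delta V / km s^-1)."""
    return 1.82e18 * t_line_K * delta_v_kms

n_col = hi_column(22.0, 4.5)          # mid-range clump values: 20-25 K, 4-5 km/s
n_density = n_col / (1.4 * PC_CM)     # spread along the central 1.4 pc
print(f"N(HI) ~ {n_col:.1e} cm^-2, n(HI) ~ {n_density:.0f} cm^-3")
```

This reproduces the column densities of $\sim$1.5--2$\times 10^{20}$~cm$^{-2}$ and densities of a few tens of cm$^{-3}$ derived below.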
All three clumps have peak line temperatures of 20--25~K, and linewidths of 4--5~km~s$^{-1}$. Hence, for all three we find column densities of about $1.5\times 10^{20}$~cm$^{-2}$ (within a factor of two). Assuming a spherical geometry for the central region of each clump implies hydrogen densities of about 35~cm$^{-3}$. \begin{figure}[htp] \includegraphics[width=90mm]{9233f11.eps} \caption{H\,{\sc{i}}\ emission associated with Sh2-212, integrated between $-40$~km~s$^{-1}$ and $-48$~km~s$^{-1}$. The radio angular resolution ($50'' \times 41''$) is indicated in the upper left corner. Contour levels are $-15$, 15, 20, 25, ... 50 mJy~beam$^{-1}$~km~s$^{-1}$. The H\,{\sc{i}}\ contours are superimposed on the [S\,{\sc{ii}}]\ image.} \end{figure} \section{Discussion} \subsection{A massive young stellar object exciting a UC H\,{\scriptsize\it II} region} Star no.~228, associated with a reflection nebulosity, is a massive young stellar object: $\bullet$ It is a luminous near-IR source with a near-IR excess indicative of the presence of nearby dust, probably associated with a disk (Lada \& Adams~\cite{lad92}). $\bullet$ Its spectral energy distribution (SED) rises strongly in the IR. Table~6 gives the flux measurements between 1.25~$\mu$m and 21.3~$\mu$m. We have used the Web-based SED fitting tool of Robitaille et al.~(\cite{rob07}; http://caravan.astro.wisc.edu/protostars/) to interpret the SED of YSO no.~228 and its associated nebulosity. Several parameters of the model are not well constrained, but some are. A strong conclusion is that the central stellar object is hot ($T_*=30000 \pm 1000$~K) and massive ($M_*\sim14$\hbox{\kern 0.20em $M_\odot$}), and that the YSO is luminous (total luminosity $14000 - 20000$\hbox{\kern 0.20em $L_\odot$}); the disk and the envelope have similar luminosities (more uncertain). Also, the YSO is seen edge-on (inclination angle $\sim$87\hbox{$^\circ$}). Fig.~12 shows the SED of YSO no.~228 and the five best-fitting models.
Fig.~10 shows the radio-continuum emission of the UC H\,{\sc{ii}}\ region at 1.3~cm , as contours superimposed on an $H$ image. Star no.~228 lies at the centre of the UC H\,{\sc{ii}}\ region (see also Table~1). Thus star no.~228 is very probably the exciting star of the UC H\,{\sc{ii}}\ region. The SED of object no.~228 indicates that it is probably in an evolutionary stage between Class~I and Class~II, with both an envelope and a disk. This object is consistent with the evolutionary models of massive stars, formed by accretion, as described by Beech \& Mitalas (\cite{bee94}) and by Bernasconi \& Maeder (\cite{ber96}). This object of $\sim$14\,\hbox{\kern 0.20em $M_\odot$}\ has reached the main sequence, and hence is burning hydrogen in its centre. But it is still accreting material and increasing its mass. Presently its accretion rate is not high enough to prevent the formation of an ionized region (Walmsley, \cite{wal95}). This massive YSO, which is observed at its place of birth inside the parental condensation, does not seem to belong to a populous cluster. Only three low-brightness stars are observed nearby, at less than 0.11~pc (with $K$ magnitudes $\geq16.1$). {\it It is therefore a good candidate for being a massive star born either in isolation or in a very small cluster}. As such, it deserves further high-resolution, high-sensitivity imaging and spectroscopic observations to ascertain its nature and to detect any nearby deeply-embedded objects. 
\begin{table}[htp] \caption{Spectral energy distribution of star no.~228, a massive YSO} \begin{tabular}{ccl} \hline\hline Wavelength & $F_{\nu}$ & origin \\ ($\mu$m) & (Jy) & \\ \hline 1.215 & $0.629\times10^{-2}$ & $J$ star alone \\ 1.215 & $1.268\times10^{-2}$ & $J$ star + nebulosity \\ 1.65 & $1.049\times10^{-2}$ & $H$ star alone \\ 1.65 & $2.006\times10^{-2}$ & $H$ star + nebulosity \\ 2.18 & $1.268\times10^{-2}$ & $K$ star alone \\ 2.18 & $3.112\times10^{-2}$ & $K$ star + nebulosity \\ 8.28 & 2.27 & MSX star + nebulosity \\ 12.13 & 2.91 & MSX star + nebulosity \\ 14.65 & 4.94 & MSX star + nebulosity \\ 21.3 & 35.34 & MSX star + nebulosity \\ \hline \\ \end{tabular} \end{table} \begin{figure}[htp] \includegraphics[width=90mm]{9233f12.eps} \caption{Spectral energy distribution of star no.~228, a massive YSO. Filled circles are the fluxes listed in Table~6. The five best-fitting models obtained using the web-based tool of Robitaille et al.~(\cite{rob07}) are presented.} \end{figure} \subsection{The age of the UC H\,{\scriptsize\it II} region} The UC H\,{\sc{ii}}\ region formed and evolved in condensation 1. Let us assume that it formed in a uniform medium of density $10^5$~cm$^{-3}$. The exciting star, emitting $10^{46.5}$ ionizing photons per second, very quickly formed an ionized region of radius 0.0046~pc, which later expanded. According to Dyson \& Williams~(\cite{dys97}, sect.~7.1.8) the present radius of the UC H\,{\sc{ii}}\ region, 0.0475~pc, corresponds to an age of 13\,500~yr, and a density in the ionized gas of 3050~cm$^{-3}$, which is very close to the observed value. Also, the pressure equilibrium with the surrounding medium has not yet been reached (the expansion velocity being $\sim$2~km~s$^{-1}$). Thus this UC H\,{\sc{ii}}\ region is very young -- much younger than Sh2-212 (see Sect.~6.4). Our non-detection of water maser emission is a little surprising, and higher-sensitivity observations are needed.
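The numbers quoted above can be reproduced, to within rounding, from the initial Str\"omgren radius and the expansion law $R(t)=R_{\rm s}\,[1+7c_{\rm s}t/(4R_{\rm s})]^{4/7}$ of Dyson \& Williams. The sketch below assumes a case-B recombination coefficient $\alpha_{\rm B}=2.6\times10^{-13}$~cm$^3$~s$^{-1}$ and an ionized-gas sound speed of 10~km~s$^{-1}$ (our assumptions, not stated in the text):

```python
import math

PC_CM = 3.086e18
PC_KM = 3.086e13
ALPHA_B = 2.6e-13      # case-B recombination coefficient, cm^3 s^-1 (assumed)
YR_S = 3.156e7         # seconds per year

def stroemgren_radius_pc(n_lyc, n0):
    """Initial Stroemgren radius in a uniform medium of density n0 [cm^-3]."""
    r_cm = (3.0 * n_lyc / (4.0 * math.pi * n0**2 * ALPHA_B)) ** (1.0 / 3.0)
    return r_cm / PC_CM

def expansion_age_yr(r_pc, rs_pc, c_s_kms=10.0):
    """Invert R(t) = R_s (1 + 7 c_s t / (4 R_s))^(4/7)."""
    t_s = (4.0 * rs_pc * PC_KM / (7.0 * c_s_kms)) * ((r_pc / rs_pc) ** 1.75 - 1.0)
    return t_s / YR_S

rs = stroemgren_radius_pc(10 ** 46.5, 1e5)   # ~0.0046 pc, as quoted
age = expansion_age_yr(0.0475, rs)           # ~1.5e4 yr, the order of the quoted 13 500 yr
n_now = 1e5 * (rs / 0.0475) ** 1.5           # density falls as (R_s/R)^(3/2): ~3e3 cm^-3
```

The exact age depends on the adopted sound speed; with these inputs the sketch gives $\sim$15\,000~yr and an ionized-gas density of $\sim$3000~cm$^{-3}$, in line with the values quoted from Dyson \& Williams.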
Note, however, that the age of the H\,{\sc{ii}}\ region is not that of the massive YSO no.~228, which is much older, having evolved for a long time before becoming able to ionize the surrounding gas. \subsection{The morphology of the Sh2-212 complex} The present distribution of the molecular material indicates that Sh2-212 probably formed in a filament. Was this medium turbulent? One morphological aspect of this region, the perfectly spherical shape of the ionized region, at both optical and radio wavelengths, seems to indicate that the level of turbulence, if any, is low. We suggest (see Fig.~13) that the exciting star of Sh2-212 formed inside the molecular filament and that the H\,{\sc{ii}}\ region first expanded inside this filament. When the ionization front reached the border of the filament, the H\,{\sc{ii}}\ region opened onto the outside (a low-density inter-filament medium), thus creating a champagne flow. This explains both the velocity field (the ionized gas flowing away from the molecular cloud, more or less in the direction of the observer, at a few kilometres per second), and the shape of the H\,{\sc{ii}}\ region and its photodissociation region containing the PAHs (a bright ionized region surrounded by the PAH emission ring observed by MSX, and a more diffuse extension of the ionized gas). \begin{figure}[htp] \includegraphics[width=90mm]{9233f13.eps} \caption{Morphology of the Sh2-212 complex.} \end{figure} During its supersonic expansion inside the molecular filament, the ionization front was preceded by a shock front on the neutral side, and neutral material accumulated between the two fronts. This collected material forms the thin molecular half-ring which surrounds the brightest part of the H\,{\sc{ii}}\ region. This collected layer forms only half a shell, adjacent to the molecular filament, and is expanding with a velocity of 4--5 km~s$^{-1}$ with respect to the molecular filament.
On the other side the H\,{\sc{ii}}\ region is surrounded by low-density atomic material; this is the origin of the H\,{\sc{i}}\ emission observed at the periphery of \mbox{Sh2-212} (Fig.~11). This atomic material is receding from the molecular filament with a velocity similar to that of the ionized gas. The low density of this material is confirmed by the very low, if any, local extinction in front of the ionized gas. Such atomic hydrogen rings have been observed in other complexes, for example around the spherical H\,{\sc{ii}}\ region Sh2-219 (Roger \& Leahy~\cite{rog93}, Deharveng et al.~\cite{deh06}, Fig.~4), and around Sh2-217 (Roger \& Leahy~\cite{rog93}, Brand et al.~in preparation). Dale et al.~(\cite{dal05}) have simulated the photoionizing feedback of a massive star on a turbulent molecular cloud. Fig.~14 shows the striking similarity between the morphologies of the observed Sh2-212 molecular cloud and of the simulated cloud. The right panel -- simulation -- shows the column density of the neutral gas; the ionized region lies in the central hole. The left panel -- observations -- shows the $^{12}$CO emission integrated over velocity (the intensity is proportional to the column density, except for regions optically thick along the line of sight); here again the ionized region lies inside the central hole. The simulation concerns a turbulent cloud of relatively low density; a more chaotic morphology is obtained in the case of a denser turbulent cloud (Dale et al.~\cite{dal05}). \begin{figure*}[htp] \includegraphics[width=180mm]{9233f14.eps} \caption{Comparison of observations and simulations. {\bf Left:} $^{12}$CO emission, integrated over the velocity. {\bf Right:} simulation of a turbulent cloud illuminated by the UV radiation of a massive star (Dale et al.~\cite{dal05}, their Fig.~16).} \end{figure*} \subsection{Star formation history} It is almost impossible to obtain direct evidence of sequential star formation.
The age of the evolved Sh2-212 is very uncertain due to our lack of knowledge about the density structure of the original medium in which this H\,{\sc{ii}}\ region formed and evolved. Only indirect evidence is available. It is difficult to explain the origin of the thin circular half-ring of molecular material which surrounds the brightest part of Sh2-212 other than by material collected during the expansion of this H\,{\sc{ii}}\ region. It is presently fragmented; at least five condensations are present along the ring. This is a good illustration of the collect and collapse process. The most massive condensation contains a massive young stellar object exciting a UC H\,{\sc{ii}}\ region. This UC H\,{\sc{ii}}\ region is very young -- much younger than Sh2-212. Thus massive star-formation has been triggered by the expansion of the Sh2-212 H\,{\sc{ii}}\ region, via the collect and collapse process. Here again this process seems able to form massive objects, as observed on the borders of Sh2-104 (Deharveng et al.~\cite{deh03}) and RCW~79 (Zavagno et al.~\cite{zav06}). Also, a few lower mass YSOs have formed in the collected layer, for example in the direction of C2. In order to compare with the predictions of the collect and collapse model of Whitworth et al.~(\cite{whi94}), we need to know three parameters: 1) the Lyman continuum photon flux: we adopt 10$^{49}$ ionizing photons per second, corresponding to the O5.5V--\,O6V exciting star (Martins et al.~\cite{mar05}); 2) the velocity dispersion in the collected layer: we measure a FWHM $\sim$1~km~s$^{-1}$ at the condensation peaks (Table~5), corresponding to a velocity dispersion $\sim$0.4~km~s$^{-1}$. 
The condensations are possibly collapsing, and thus a lower value, in the range 0.2--0.3~km~s$^{-1}$, seems reasonable for the collected layer before collapse; 3) the density of the neutral material into which the H\,{\sc{ii}}\ region evolved: a density $\sim$500~atom~cm$^{-3}$ allows formation of a collected layer (half a shell, of internal radius 2~pc) with a mass of about 750~$M_{\odot}$, as observed. With these figures the model predicts the fragmentation of the collected layer after 2.2--2.8~Myr. It predicts the formation of fragments with masses in the range 30--140~$M_{\odot}$ and separations of some 1.1--2.2~pc, both in agreement with the observations. But the radius of the H\,{\sc{ii}}\ region at the time of the fragmentation should be in the range 8.5--10.0~pc, much larger than the present radius of 2~pc. The fact that the H\,{\sc{ii}}\ region shows a champagne flow (and thus did not evolve in a homogeneous medium) probably explains why this model does not fully account for the observations. The possible presence of a magnetic field is an additional difficulty because, as demonstrated by Krumholz et al.~(\cite{kru06}), it results in a non-spherical expansion. The star formation efficiency can be estimated. Condensation 1 has a mass $\sim$220\,\hbox{\kern 0.20em $M_\odot$}\ and has formed a (possibly isolated) B1V star; assuming for this star a mass of 14~\hbox{\kern 0.20em $M_\odot$}, we obtain a star formation efficiency of 5\%. \section{Conclusions} The optical Galactic H\,{\sc{ii}}\ region Sh2-212 appears in the visible as a spherical H\,{\sc{ii}}\ region around its O5.5V exciting star. The near-IR observations show that a rich cluster lies at its centre. A bright stellar object, no.~228, presenting a near-IR excess, and associated with a reflection nebulosity, lies on the border of Sh2-212, behind a bright rim.
The MSX image at 8.3~$\mu$m shows a bright point source in the direction of object no.~228, and radio continuum observations show the presence of a UC H\,{\sc{ii}}\ region in this exact direction. Sh2-212 lies in the middle of a molecular filament. Millimetre observations show that a thin molecular half-ring structure surrounds the brightest part of Sh2-212 at its back, and is expanding. This molecular layer is fragmented. The most massive fragment ($\sim$200\,\hbox{\kern 0.20em $M_\odot$}) is associated with object no.~228. The SED of object no.~228 shows that it is a massive YSO of about 14\hbox{\kern 0.20em $M_\odot$}, hence able to ionize the UC H\,{\sc{ii}}\ region. Low-density atomic hydrogen is detected at the periphery of the low-density ionized region.\\ We have tried to understand the star formation history in this region. Sh2-212 first formed and evolved inside a molecular filament. During its expansion neutral material was collected between the ionization front (IF) and the shock front (SF), as predicted by the collect and collapse process. This layer was then fragmented, and a second-generation massive star (no.~228) formed inside a massive fragment, ultimately ionizing a second-generation H\,{\sc{ii}}\ region. At a later stage, the IF bounding the Sh2-212 H\,{\sc{ii}}\ region reached the limits of the molecular filament, and the ionized region opened towards the low-density inter-condensation gas, creating a champagne flow. Presently, the Sh2-212 H\,{\sc{ii}}\ region is surrounded on one side by the dense collected molecular material, and on the other side by low-density atomic material.\\ The Sh2-212 H\,{\sc{ii}}\ region is, after Sh2-104 and RCW~79, one more example of massive-star formation by the collect and collapse process. It is a very special region for the following reasons: - The massive YSO, no.~228, seems to have formed in isolation, or in a very small group. If this is confirmed, no.~228 is a very uncommon object.
- The layer of collected material is very well defined (circular, bright, thin, expanding); the fact that it survives in an inhomogeneous medium is especially interesting. This demonstrates that the collect and collapse process can work in a non-homogeneous medium, possibly in a turbulent one. \begin{acknowledgements} We gratefully thank D.~Gravallon and S.~Ilovaisky for the H$\alpha$\ and [S\,{\sc{ii}}]\ frames they obtained for us at the 120-cm telescope of the Observatoire de Haute-Provence, and M.~Walmsley for constructive comments and questions. This work has made use of Aladin and of the Simbad astronomical database operated at CDS, Strasbourg, France. We have used data products from the Midcourse Space EXperiment and from the Two Micron All Sky Survey, obtained through the NASA/IPAC Infrared Science Archives. S. K. thanks the Laboratoire d'Astrophysique de Marseille and the Universit\'e de Provence for hosting him while some of this work was done. \end{acknowledgements}
\section{Introduction} Technological advances in quantum computing and quantum communication have accelerated in recent years, putting a number of high-stakes applications in the realm of the potential near future. One such application is quantum key distribution, a protocol in which a secret key is distributed to two distant parties through the measurement of entangled particles. The security of quantum key distribution is guaranteed by the laws of quantum mechanics when entanglement is present in the particles measured \cite{Umesh_Vidick_SecurityQKD}. Moreover, this entanglement can be verified by considering the probability distributions generated by the measurement devices used in the key generation process. These probability distributions are called \textbf{quantum correlations}. However, many open questions remain regarding precisely which probability distributions can be certified as quantum correlations (e.g. see \cite{Fu_Miller_Slofstra_2021_preprint}). The best known method for distinguishing quantum correlations from other kinds of probability distributions is the NPA hierarchy, developed in \cite{NPA2008}. Roughly, the NPA hierarchy is an infinite sequence of semidefinite programs which yield positive semidefinite matrices certifying that a given probability distribution may be a quantum correlation. If the given probability distribution $p$ yields a complete infinite sequence of certificates, then that distribution is certified as a \textbf{quantum commuting} correlation, meaning that $p$ was potentially generated by a valid quantum measurement scenario according to the Haag-Kastler axioms of relativistic quantum mechanics \cite{HaagKastler}, though the Hilbert space required may have infinite dimension. In practice, one cannot generate an infinite sequence of certificates directly.
However, if two consecutive certificates $T_m$ and $T_{m+1}$ have the same rank, then the hierarchy can be terminated and the correlation can be certified as a quantum correlation arising from a finite dimensional Hilbert space. The NPA Hierarchy can also be developed using the theory of universal C*-algebras (see Section 3 of \cite{PaulsenEtAlSynchronous}), and it was recently generalized to the setting of prepare-and-measure scenarios (see \cite{npjPrepareMeasureCorrelations}). The distinction between quantum commuting correlations and quantum correlations would be of less practical importance if it were possible to approximate an arbitrary quantum commuting correlation with a quantum correlation. The question of whether or not this was possible remained open for many years and generated tremendous research interest, eventually becoming tied to a long-standing problem in mathematics known as Connes' embedding problem. These questions were finally settled recently in the paper \cite{mipStarEqualRe}, which showed that some quantum commuting correlations cannot be approximated by quantum correlations. Their methods required only synchronous quantum correlations, which are the subject of this paper. In this paper, we present an adaptation of the NPA hierarchy for certifying synchronous quantum and quantum commuting correlations. While a synchronous correlation can be verified using the original NPA hierarchy as well, our adaptation has some advantages. The certificates produced by the hierarchy are smaller than those produced in the original NPA hierarchy. Moreover, there are fewer linear constraints imposed on the certificate, as one only needs to check that the certificates satisfy a kind of cyclic symmetry. Our adapted hierarchy yields new characterizations for the sets of synchronous quantum and quantum commuting correlations. 
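The rank-based stopping rule mentioned above can be illustrated numerically. The following sketch (an illustration of ours, not an implementation referenced in the text) compares the numerical rank of a certificate with that of the principal sub-block corresponding to the previous level:

```python
import numpy as np

def numerical_rank(M, tol=1e-9):
    """Number of singular values above a tolerance."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

def rank_loop_detected(T_next, level_size):
    """Compare the rank of a certificate with that of its top-left block
    (the previous level's certificate embedded in the next one)."""
    T_prev = T_next[:level_size, :level_size]
    return numerical_rank(T_prev) == numerical_rank(T_next)

# toy example: the larger certificate adds no new directions
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
T = X @ X.T        # PSD Gram matrix of rank 2; its top-left 2x2 block is also rank 2
assert rank_loop_detected(T, 2)
```

In an actual run of the hierarchy the matrices would come from a semidefinite-programming solver, and numerical rank decisions would require a tolerance matched to the solver's accuracy.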
To further motivate these tools, we demonstrate how two major open problems in quantum information theory, namely the existence of symmetric informationally-complete positive operator-valued measures (SIC-POVMs) and of maximal sets of mutually unbiased bases (MUBs) in each dimension, could be settled using only two certificates of our adapted NPA hierarchy. We conclude this introduction with an overview of the notation and mathematical prerequisites for the paper. We let $\mathbb{N}, \mathbb{R}$, and $\mathbb{C}$ denote the sets of positive integers, real numbers, and complex numbers, respectively. Given $\lambda \in \mathbb{C}$, we let $\lambda^*$ denote its complex conjugate. For each $n \in \mathbb{N}$, we let $\mathbb{M}_n$ denote the set of $n \times n$ matrices with entries in $\mathbb{C}$, and we let $[n] = \{1,2,\dots,n\}$ for indexing purposes. We assume basic familiarity with the theory of Hilbert spaces over $\mathbb{C}$. Given a Hilbert space $H$, we sometimes use the notation $\langle h, k \rangle$ to denote the inner product of vectors $h,k \in H$. We also employ bra-ket notation whenever convenient. We use the notation $\vec{v}$ whenever regarding vectors as column matrices in the finite-dimensional Hilbert space $\mathbb{C}^n$. We let $B(H)$ denote the set of operator norm bounded operators on a Hilbert space $H$, and we let $T^{\dagger}$ denote the adjoint of an operator $T \in B(H)$. By a \textbf{C*-algebra}, we mean a norm-closed $\dagger$-closed subalgebra of $B(H)$. A \textbf{state} on a unital C*-algebra $\mathfrak{A}$ is a linear functional $\phi: \mathfrak{A} \to \mathbb{C}$ mapping the identity to 1 and mapping positive elements of $\mathfrak{A}$ to nonnegative real numbers.
We freely use well-known results about C*-algebras and Hilbert space operators throughout the paper, and we refer the reader to \cite{ConwayOperatorTheoryBook} for an in-depth introduction to these topics. \section{Preliminaries} For each $n,k \in \mathbb{N}$ we say that a tuple $p(a,b|x,y)_{a,b \in [k], x,y \in [n]}$ is a \textbf{correlation} if it satisfies the relation \[ \sum_{a,b \in [k]} p(a,b|x,y) = 1 \] for all $x,y \in [n]$. A correlation $p(a,b|x,y)$ is called \textbf{nonsignalling} if the quantities \[ p_A(a|x) = \sum_{b \in [k]} p(a,b|x,y) \quad \text{ and } \quad p_B(b|y) = \sum_{a \in [k]} p(a,b|x,y) \] are well-defined, meaning that the sum expressing $p_A(a|x)$ is independent of the choice of $y \in [n]$ and the sum expressing $p_B(b|y)$ is independent of the choice of $x \in [n]$. Nonsignalling correlations model a scenario where two parties, traditionally named Alice and Bob, are provided questions $x$ and $y$, respectively, from a referee. Without communicating with each other, Alice produces an answer $a$ and Bob produces an answer $b$ with probability $p(a,b|x,y)$. The lack of communication between Alice and Bob can be verified after many trials by checking that the quantities $p_A(a|x)$ and $p_B(b|y)$ are well-defined; i.e. by checking that $p(a,b|x,y)$ is nonsignalling. Nonsignalling correlations arise in quantum communication protocols, such as quantum key distribution \cite{Umesh_Vidick_SecurityQKD}. In these scenarios, Alice and Bob produce their answers by performing measurements on particles emitted from a common source. These particles may be entangled, yielding observable differences from correlations which arise in classical scenarios \cite{CHSH}.
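The nonsignalling condition can be checked mechanically from a table of conditional probabilities. A minimal sketch (our own illustration, using a deterministic example distribution):

```python
from itertools import product

def is_nonsignalling(p, n, k, tol=1e-9):
    """p[(a, b, x, y)] = p(a,b|x,y). Check that Alice's marginals are
    independent of y and that Bob's are independent of x."""
    for a, x in product(range(k), range(n)):
        marg = [sum(p[(a, b, x, y)] for b in range(k)) for y in range(n)]
        if max(marg) - min(marg) > tol:
            return False
    for b, y in product(range(k), range(n)):
        marg = [sum(p[(a, b, x, y)] for a in range(k)) for x in range(n)]
        if max(marg) - min(marg) > tol:
            return False
    return True

# a trivially classical example: each party answers by a fixed rule f
n, k = 2, 2
f = lambda x: x % k
p = {(a, b, x, y): 1.0 if (a == f(x) and b == f(y)) else 0.0
     for a, b, x, y in product(range(k), range(k), range(n), range(n))}
assert is_nonsignalling(p, n, k)
```

Deterministic local strategies such as this one are always nonsignalling; a distribution whose answer to one party depends on the other party's question would fail the check.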
Mathematically, a correlation $p(a,b|x,y)$ is called a \textbf{quantum correlation} if there exist a finite dimensional Hilbert space $H$, projection valued measures $\{E_{x,a}\}_{a=1}^k, \{F_{y,b}\}_{b=1}^k \subset B(H)$, and a unit vector $\ket{\phi} \in H \otimes H$ such that \begin{equation}\label{eqn: Q Correlation defn} p(a,b|x,y) = \bra{\phi} E_{x,a} \otimes F_{y,b} \ket{\phi}. \end{equation} In this formulation, Alice and Bob apply measurements corresponding to the projection-valued measures $\{E_{x,a}\}_{a=1}^k$ and $\{F_{y,b}\}_{b=1}^k$ to their respective copies of the Hilbert space $H$ upon receiving questions $x$ and $y$, respectively, from the referee. The laws of quantum mechanics dictate that they obtain answers $a$ and $b$, respectively, with probability $p(a,b|x,y)$ as described in Equation (\ref{eqn: Q Correlation defn}). Quantum correlations can be equivalently defined in terms of finite-dimensional C*-algebras. A correlation $p(a,b|x,y)$ is a quantum correlation if and only if there exist a finite dimensional C*-algebra $\mathfrak{A}$, projection-valued measures $\{E_{x,a}\}_{a=1}^k, \{F_{y,b}\}_{b=1}^k \subseteq \mathfrak{A}$ for which each $E_{x,a}$ commutes with each $F_{y,b}$, and a state $\phi: \mathfrak{A} \to \mathbb{C}$ such that $p(a,b|x,y) = \phi(E_{x,a} F_{y,b})$. If we eliminate the restriction that the C*-algebra be finite dimensional, we obtain a \textbf{quantum commuting correlation}. It was an open question for many years whether or not an arbitrary quantum commuting correlation can be approximated by quantum correlations \cite{Tsirelson1987}. Indeed, this question was shown to be equivalent to the Connes' embedding problem \cite{ConnesConjecture} of operator algebras (see \cite{JMPPSW2011}, \cite{FritzKirchberg}, and \cite{OzawaConnes}).
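Equation (\ref{eqn: Q Correlation defn}) can be evaluated directly for a small example. The following sketch (illustrative only; the measurement bases and state are our choices, not taken from the text) builds rank-one projection-valued measures on $\mathbb{C}^2$ and computes $p(a,b|x,y)$ for the maximally entangled state:

```python
import numpy as np

def pvm_from_basis(U):
    """Rank-one projection-valued measure {|u_a><u_a|} from the columns of a unitary U."""
    return [np.outer(U[:, a], U[:, a].conj()) for a in range(U.shape[1])]

# two measurement bases per party on C^2: computational and Hadamard
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
E = [pvm_from_basis(np.eye(2)), pvm_from_basis(H)]   # Alice's PVMs, x = 0, 1
F = [pvm_from_basis(np.eye(2)), pvm_from_basis(H)]   # Bob's PVMs,   y = 0, 1

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # maximally entangled state

def p(a, b, x, y):
    """p(a,b|x,y) = <phi| E_{x,a} (x) F_{y,b} |phi>."""
    return float(np.real(phi.conj() @ np.kron(E[x][a], F[y][b]) @ phi))

# each conditional distribution sums to 1, as required of a correlation
for x in range(2):
    for y in range(2):
        assert abs(sum(p(a, b, x, y) for a in range(2) for b in range(2)) - 1.0) < 1e-12
```

For the maximally entangled state and matching bases, $p(0,0|0,0)$ evaluates to approximately $1/2$, as expected.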
The recent results of \cite{mipStarEqualRe} imply that some quantum commuting correlations cannot be approximated by quantum correlations, thus solving the Connes embedding problem. The solution presented in \cite{mipStarEqualRe} relied only on synchronous correlations, which we describe now. A correlation $p(a,b|x,y)$ is called \textbf{synchronous} if $p(a,b|x,x) = 0$ whenever $a \neq b$. The following characterization of synchronous quantum and quantum commuting correlations comes from \cite{PaulsenEtAlSynchronous}. \begin{theorem}[Corollary 5.6 of \cite{PaulsenEtAlSynchronous}] Let $p(a,b|x,y)$ be a synchronous correlation. Then $p(a,b|x,y)$ is a quantum commuting (resp. quantum) correlation if and only if there exists a (resp. finite-dimensional) C*-algebra $\mathfrak{A}$, projection valued measures $\{E_{x,a}\}_{a=1}^k \subseteq \mathfrak{A}$, and a tracial state $\tau: \mathfrak{A} \to \mathbb{C}$ satisfying \[ p(a,b|x,y) = \tau(E_{x,a} E_{y,b}). \] \end{theorem} We will make use of a family of matrices which are closely related to the set of synchronous correlations. The following definitions were introduced in \cite{RordamMusatNonClosure}. \begin{definition} Let $n \in \mathbb{N}$. We say that a matrix $p(x,y) \in D_{qc}(n)$ if there exists a C*-algebra $\mathfrak{A}$, projections $P_1, P_2, \dots, P_n \in \mathfrak{A}$, and a tracial state $\tau: \mathfrak{A} \to \mathbb{C}$ such that $p(x,y) = \tau(P_x P_y)$ for each $x,y \in [n]$. We say that $p(x,y) \in D_q(n)$ if the same conditions are met, but with the restriction that $\mathfrak{A}$ is finite-dimensional. \end{definition} It was shown in \cite{RordamMusatNonClosure} that the set $D_q(n)$ (resp. $D_{qc}(n)$) is affinely isomorphic to the set of synchronous quantum correlations (resp. quantum commuting correlations) with $n$ questions and $k=2$ answers. For $k > 2$, it was shown in \cite{RussellTwoOutcome20} and \cite{HarrisThesis} that a particular affine slice of the set $D_q(nk)$ (resp.
$D_{qc}(nk)$) is affinely isomorphic to the set of synchronous quantum correlations (resp. quantum commuting correlations) with parameters $n$ and $k$. Consequently, characterizing the structure of the set $D_q(n)$ (resp. $D_{qc}(n)$) is equivalent to characterizing the structure of the set of synchronous quantum (resp. quantum commuting) correlations. Therefore, we will focus our attention for the rest of the paper on the sets $D_q(n)$ and $D_{qc}(n)$. \section{A synchronous NPA Hierarchy} In this section, we will characterize the set of correlations $D_{qc}(N)$. Let $[N] := \{1,2,\dots,N\}$. We let $[N]^*$ denote the set of words in $[N]$ (including the empty word, denoted by $0$). Given a word $w \in [N]^*$, we let $|w|$ denote the length of the word, with the convention that $|0| = 0$. We let $[N]^k$ denote the set of words of length at most $k$, so that $[N]^* = \cup_{k=0}^\infty [N]^k$. \begin{lemma} \label{lem: infinite gram} Let $(M_{\alpha, \beta})$ be a matrix indexed by words $\alpha, \beta \in [N]^*$. Assume that for each $k$, the finite matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$ is positive-semidefinite. Then there exists a sequence of finite dimensional Hilbert spaces $H_0, H_1, H_2, \dots$, vectors $\ket{\alpha,k} \in H_k$ for each $\alpha \in [N]^k$ satisfying $M_{\alpha,\beta} = \braket{\alpha,k}{\beta,k}$, and an isometry $W_k: H_k \to H_{k+1}$ with $W_k \ket{\alpha,k} = \ket{\alpha,k+1}$ for each $\alpha \in [N]^k$. \end{lemma} \begin{proof} By the Gram decomposition, we can find a finite dimensional Hilbert space $\hat{H}_k$ and vectors $\ket{\alpha,k} \in \hat{H}_k$ for each $\alpha \in [N]^k$ such that $\braket{\alpha,k}{\beta,k} = M_{\alpha, \beta}$. For each $k$, let $H_k := \operatorname{span} \{ \ket{\alpha,k} \}$ inside $\hat{H}_k$. Define a function $W_k$ from $\{ \ket{\alpha,k}\}$ to $\{ \ket{\alpha,k+1}\}$ via $W_k \ket{\alpha,k} = \ket{\alpha,k+1}$ for each $\alpha \in [N]^k$.
We first show that $W_k$ extends to a linear map from $H_k$ to $H_{k+1}$. To see this, observe that \begin{eqnarray} \langle (\sum t_{\alpha} \ket{\alpha,k+1}), (\sum r_{\beta} \ket{\beta,k+1}) \rangle & = & \sum t_{\alpha}^* r_{\beta} \braket{\alpha, k+1}{\beta, k+1} \nonumber \\ & = & \sum t_{\alpha}^* r_{\beta} M_{\alpha,\beta} \nonumber \\ & = & \sum t_{\alpha}^* r_{\beta} \braket{\alpha,k}{\beta,k} \nonumber \\ & = & \langle (\sum t_{\alpha} \ket{\alpha,k}), (\sum r_{\beta} \ket{\beta,k}) \rangle. \nonumber \end{eqnarray} Now, if $\sum t_{\alpha} \ket{\alpha,k} = 0$, then \[ 0 = \langle (\sum t_{\alpha} \ket{\alpha,k}), (\sum t_{\alpha} \ket{\alpha,k}) \rangle = \langle (\sum t_{\alpha} \ket{\alpha,k+1}), (\sum t_{\alpha} \ket{\alpha,k+1}) \rangle. \] It follows that setting $W_k(\sum t_{\alpha} \ket{\alpha,k}) = \sum t_{\alpha} \ket{\alpha,k+1}$ yields a well-defined linear extension of $W_k$. To see that $W_k$ is an isometry from $H_k$ to $H_{k+1}$, it suffices to check that $W_k^{\dagger} W_k$ is the identity on $H_k$. This follows from the calculation \begin{eqnarray} \langle (\sum t_{\alpha} \ket{\alpha,k}), W_k^{\dagger} W_k (\sum r_{\beta} \ket{\beta,k}) \rangle & = & \langle W_k (\sum t_{\alpha} \ket{\alpha,k}), W_k (\sum r_{\beta} \ket{\beta,k}) \rangle \nonumber \\ & = & \langle (\sum t_{\alpha} \ket{\alpha,k+1}), (\sum r_{\beta} \ket{\beta,k+1}) \rangle \nonumber \\ & = & \langle (\sum t_{\alpha} \ket{\alpha,k}), (\sum r_{\beta} \ket{\beta,k}) \rangle \nonumber \end{eqnarray} proving the statement. \end{proof} We briefly describe the construction for an inductive limit of a sequence of finite dimensional Hilbert spaces. Let $(H_k,W_k)_{k=0}^\infty$ be a sequence of pairs, each pair consisting of a finite dimensional Hilbert space $H_k$ and an isometry $W_k: H_k \to H_{k+1}$. Whenever $k < l$ we let $W_{k,l} := W_{l-1} W_{l-2} \dots W_{k+1} W_k$.
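The Gram decomposition invoked at the start of the proof is easy to realize numerically. The following sketch (Python with numpy; the helper name is ours, not from the text) recovers, from a positive semidefinite matrix, vectors whose pairwise inner products reproduce its entries.

```python
import numpy as np

def gram_vectors(M, tol=1e-10):
    """Return a matrix V whose rows v_i satisfy <v_i, v_j> = M[i, j],
    assuming M is (numerically) positive semidefinite."""
    M = np.asarray(M, dtype=float)
    w, U = np.linalg.eigh(M)                  # M = U diag(w) U^T
    if w.min() < -tol:
        raise ValueError("matrix is not positive semidefinite")
    V = U * np.sqrt(np.clip(w, 0.0, None))    # scale column j by sqrt(w_j)
    return V                                  # rows are the Gram vectors

# Any matrix of the form A A^T is positive semidefinite.
rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))
M = A @ A.T                                   # a rank-3 Gram matrix
V = gram_vectors(M)
```

Iterating this over the nested blocks $(M_{\alpha,\beta})_{\alpha,\beta\in[N]^k}$ produces exactly the spaces $H_k$ of the lemma.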
Let $\hat{H}$ denote the disjoint union $\cup_k H_k$. Then we can define a pre-inner product on $\hat{H}$ via $\langle x_l, x_k \rangle = \langle x_l, W_{k,l} x_k \rangle_l$ when $k \leq l$ and $\langle x_l, x_k \rangle = \langle W_{l,k} x_l, x_k \rangle_k$ when $l < k$, for $x_k \in H_k$ and $x_l \in H_l$. Let $\mathcal{N} = \{ x \in \hat{H} : \langle x, x \rangle = 0\}$. Then we define $\lim_k H_k$ to be the completion of $\hat{H}/\mathcal{N}$ with respect to this inner product. Then $\lim_k H_k$ is a Hilbert space with dimension $\lim_k \dim(H_k)$. Moreover, for each $k$, there exists a natural isometry $V_k: H_k \to \lim_k H_k$ such that $V_{l} W_{k,l} = V_k$ whenever $k < l$. Informally, we can use the $W_k$'s to identify $H_k$ as a subspace of $H_{k+1}$ and $V_k$ to identify $H_k$ as a subspace of $\lim_k H_k$, so that we have $H_0 \subseteq H_1 \subseteq H_2 \subseteq \dots \subseteq \lim_k H_k$. From the above construction and Lemma \ref{lem: infinite gram} we get the following corollary. \begin{corollary} \label{cor: infinite gram} Let $(M_{\alpha, \beta})$ be a matrix indexed by words $\alpha, \beta \in [N]^*$. Assume that for each $k$, the finite matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$ is positive-semidefinite. Then there exists a Hilbert space $H$ and vectors $\ket{\alpha} \in H$ for each $\alpha \in [N]^*$ such that $M_{\alpha,\beta} = \braket{\alpha}{\beta}$ for each $\alpha, \beta \in [N]^*$. \end{corollary} As in the original NPA hierarchy, we will be interested in positive semidefinite matrices $(M_{\alpha, \beta})$ whose entries satisfy certain relations. We will keep track of these relations by introducing an equivalence relation $\sim$ on $[N]^* \times [N]^*$. In the following, for each $\gamma \in [N]^k$ with $\gamma = g_1 g_2 \dots g_k$ and each permutation $\sigma$ of the set $[k]$, we let $\sigma(\gamma)$ denote the word $g_{\sigma(1)} g_{\sigma(2)} \dots g_{\sigma(k)}$. We define $\gamma^{\dagger} := g_k g_{k-1} \dots g_2 g_1$; i.e.
$\gamma^{\dagger} = \rho(\gamma)$ where $\rho$ is the permutation which maps $i$ to $k-i+1$ for each $i = 1,2, \dots, k$. \begin{definition} Let $\alpha \in [N]^k$ and assume $\alpha = a_1^{r_1} a_2^{r_2} \dots a_n^{r_n}$ where $r_1, r_2, \dots, r_n \in \mathbb{N}$ with $\sum_i r_i = k$ and $a_i \neq a_{i+1}$ for each $i=1,2, \dots, n-1$. Then we define $\eta(\alpha) := a_1 a_2 \dots a_n \in [N]^n$ when $a_1 \neq a_n$ or $n = 1$, and $\eta(\alpha) := a_1 a_2 \dots a_{n-1} \in [N]^{n-1}$ otherwise. Given pairs $(\alpha, \beta), (\gamma, \delta) \in [N]^* \times [N]^*$, we say that $(\alpha, \beta) \sim (\gamma, \delta)$ if and only if $\eta(\alpha^{\dagger} \beta) = \sigma (\eta(\gamma^{\dagger} \delta))$ for some cyclic permutation $\sigma$. \end{definition} We are now prepared to prove the main result. \begin{theorem} \label{thm: Synchronous NPA heirarchy} A matrix $p(x,y)$ is an element of $D_{qc}(N)$ if and only if there exists a matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^*}$ with $M_{0, 0} = 1$ satisfying the following properties. \begin{enumerate} \item The restriction $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$ is positive semidefinite for each $k \in \mathbb{N}$. \item Whenever $(\alpha, \beta) \sim (\delta, \gamma)$ we have $M_{\alpha, \beta} = M_{\delta, \gamma}$. \item For each $x,y \in [N]$ we have $p(x,y) = M_{x,y}$. \end{enumerate} \end{theorem} \begin{proof} First assume that $p(x,y) \in D_{qc}(N)$. Then there exists a C*-algebra $\mathfrak{A}$, projections \[ P_1, P_2, \dots, P_N \in \mathfrak{A}, \] and a tracial state $\tau: \mathfrak{A} \to \mathbb{C}$ such that $p(x,y) = \tau(P_x P_y)$ for each $x,y \in [N]$. For each $\alpha \in [N]^*$ with $\alpha = a_1 a_2 \dots a_k$, let $P_{\alpha} = P_{a_1} P_{a_2} \dots P_{a_k}$, and let $P_{0} := I$. Set $M_{\alpha, \beta} = \tau(P_{\alpha}^{\dagger} P_{\beta})$ for each $\alpha, \beta \in [N]^*$. Then $M_{0, 0} = \tau(I) = 1$.
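The reduction $\eta$ and the relation $\sim$ are purely combinatorial, so they can be sanity-checked directly. The sketch below (Python; the names eta, dagger and equivalent are ours) collapses consecutive repeated letters and then drops the last letter when it matches the first, exactly as in the definition; words are modeled as tuples and the empty word as the empty tuple.

```python
def eta(word):
    """Collapse consecutive repeats; then, if the first and last letters
    agree and more than one letter remains, drop the last letter."""
    out = []
    for a in word:
        if not out or out[-1] != a:
            out.append(a)
    if len(out) > 1 and out[0] == out[-1]:
        out.pop()
    return tuple(out)

def dagger(word):
    """Reverse a word, i.e. apply the order-reversing permutation."""
    return tuple(reversed(word))

def equivalent(pair1, pair2):
    """(alpha, beta) ~ (gamma, delta) iff eta(dagger(alpha) + beta) is a
    cyclic rotation of eta(dagger(gamma) + delta)."""
    (a, b), (g, d) = pair1, pair2
    u, v = eta(dagger(a) + b), eta(dagger(g) + d)
    if len(u) != len(v):
        return False
    return len(u) == 0 or any(v[i:] + v[:i] == u for i in range(len(v)))
```

For instance, the relation $(x\beta, \alpha) \sim (x\beta, x\alpha)$ used in the proof below reflects the projection relation $P_x^2 = P_x$ together with cyclicity of the trace.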
To prove (1), it suffices to check that the matrix of products $(P_{\alpha}^{\dagger} P_{\beta})_{\alpha, \beta}$ is positive in $M_n(\mathfrak{A})$, where $n = |[N]^k|$, since $\tau$ is completely positive. However this follows from the observation that \[ (P_{\alpha}^{\dagger} P_{\beta})_{\alpha, \beta} = R^{\dagger}R \] where $R \in M_{1,n}(\mathfrak{A})$ is the row operator given by $R = [ P_{\alpha_1} P_{\alpha_2} \dots P_{\alpha_n}]$ where $\{\alpha_1, \dots, \alpha_n\}$ is an enumeration of $[N]^k$. To prove (2), we observe that whenever $(\alpha, \beta) \sim (\gamma, \delta)$ we have $\tau(P_{\alpha}^{\dagger} P_{\beta}) = \tau(P_{\gamma}^{\dagger} P_{\delta})$ since $\tau$ is tracial and each $P_i$ is a projection, so that $P_i^2 = P_i = P_i^{\dagger}$. It is clear that (3) is satisfied. Therefore a matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^*}$ with the desired properties exists whenever $(p(x,y)) \in D_{qc}(N)$. Now assume that we are given a matrix $(M_{\alpha, \beta})_{\alpha, \beta}$ with $M_{0, 0} = 1$ satisfying properties (1) and (2), and set $p(x,y) = M_{x,y}$ for each $x,y \in [N]$. We will show that $(p(x,y)) \in D_{qc}(N)$. By Corollary \ref{cor: infinite gram}, there exists a Hilbert space $H$ and vectors $\ket{\alpha} \in H$ for each $\alpha \in [N]^*$ satisfying $M_{\alpha, \beta} = \braket{\alpha}{\beta}$. For each $x \in [N]$, let $P_x$ denote the orthogonal projection onto the subspace of $H$ densely spanned by the vectors $\{ \ket{x \alpha}: \alpha \in [N]^*\}$. Clearly $P_x \ket{x \alpha} = \ket{x \alpha}$ for each $\alpha \in [N]^*$. Moreover, if $\alpha, \beta \in [N]^*$ then \begin{eqnarray} \braket{x \beta}{\alpha} & = & M_{x \beta, \alpha} \nonumber \\ & = & M_{x \beta, x \alpha} \nonumber \\ & = & \braket{x \beta}{x \alpha} \nonumber \end{eqnarray} since $(x \beta, \alpha) \sim (x \beta, x \alpha)$. We conclude that $P_x \ket{\alpha} = P_x \ket{x \alpha} = \ket{x \alpha}$ for each $\alpha \in [N]^*$.
As above, let $P_{\alpha}$ denote the product $P_{a_1} P_{a_2} \dots P_{a_k}$ for each $\alpha = a_1 a_2 \dots a_k \in [N]^k$. Then it is evident that $P_{\alpha} \ket{0} = \ket{\alpha}$ for each $\alpha \in [N]^*$. Hence $M_{\alpha, \beta} = \bra{0} P_{\alpha}^{\dagger} P_{\beta} \ket{0}$ for each $\alpha, \beta \in [N]^*$. Let $\mathfrak{A}$ denote the C*-algebra generated by the projections $P_1, \dots, P_N$ in $B(H)$ and define $\tau: \mathfrak{A} \to \mathbb{C}$ by $\tau(T) = \bra{0} T \ket{0}$ for each $T \in \mathfrak{A}$. Since $\braket{0}{0} = M_{0, 0} = 1$, $\tau$ defines a state on $\mathfrak{A}$. Furthermore, notice that for each $\alpha \in [N]^*$ and each cyclic permutation $\sigma$ \begin{eqnarray} \tau(P_{\alpha}) & = & \bra{0} P_{\alpha} \ket{0} \nonumber \\ & = & \braket{0}{\alpha} \nonumber \\ & = & M_{0, \alpha} \nonumber \\ & = & M_{0, \sigma(\alpha)} \nonumber \\ & = & \braket{0}{\sigma(\alpha)} \nonumber \\ & = & \bra{0} P_{\sigma(\alpha)} \ket{0} \nonumber \\ & = & \tau(P_{\sigma(\alpha)}) \nonumber \end{eqnarray} where we have used $(0, \alpha) \sim (0, \sigma(\alpha))$. It follows that $\tau$ is tracial on the $*$-algebra generated by the $P_x$'s and hence $\tau$ is a tracial state on $\mathfrak{A}$. Consequently, the identification $M_{x,y} = p(x,y)$ defines a correlation $(p(x,y)) \in D_{qc}(N)$ since $p(x,y) = \tau(P_x P_y)$ for each $x,y \in [N]$. \end{proof} Assume that $(p(x,y)) \in D_{qc}(N)$ and let $(M_{\alpha,\beta})$ be a positive semidefinite matrix as described in Theorem \ref{thm: Synchronous NPA heirarchy}. We let $T_k$ denote the submatrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$. We call the matrix $T_k$ the \textbf{$k$-th certificate} for $(p(x,y))$. Then Theorem \ref{thm: Synchronous NPA heirarchy} can be rephrased as follows: a matrix $(p(x,y))$ is an element of $D_{qc}(N)$ if and only if there exists a sequence $T_1, T_2, \dots$ of certificates extending $(p(x,y))$.
We note that the first certificate $T_1$ is uniquely determined by the correlation $(p(x,y))$. Indeed, given the matrix $(p(x,y))$, we form the corresponding certificate $T_1$ by appending a single row and column corresponding to the unit vector $\ket{0}$. The entries $M_{0,x}$ and $M_{x,0}$ for $x \in [N]$ are uniquely determined since $(x,0) \sim (x,x) \sim (0,x)$ implies $M_{x,0}=M_{x,x}=M_{0,x}$. The entry $M_{0,0}$ is determined by the requirement $M_{0,0}=1$. \section{The rank loop} In the original NPA hierarchy, quantum correlations are distinguished from quantum commuting correlations by the existence of a rank loop in the sequence of certificates $\{T_k\}_{k=1}^\infty$. The same property holds for our synchronous NPA hierarchy as well. \begin{theorem} \label{thm: rank loop} A matrix $p(x,y)$ is an element of $D_q(N)$ if and only if there exists an integer $m \in \mathbb{N}$ and a matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^{m+1}}$ with $M_{0, 0} = 1$ satisfying the following properties: \begin{enumerate} \item The restriction $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$ is positive semidefinite for each $k \leq m+1$. \item Whenever $(\alpha, \beta) \sim (\delta, \gamma)$ we have $M_{\alpha, \beta} = M_{\delta, \gamma}$. \item For each $x,y \in [N]$ we have $p(x,y) = M_{x,y}$. \item We have $\operatorname{rank}(T_m) = \operatorname{rank}(T_{m+1})$, where $T_k = (M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$. \end{enumerate} \end{theorem} \begin{proof} First assume that $(p(x,y)) \in D_{q}(N)$. Then there exists a finite dimensional C*-algebra $\mathfrak{A}$, projections $P_1, P_2, \dots, P_N \in \mathfrak{A}$ and a tracial state $\tau: \mathfrak{A} \to \mathbb{C}$ satisfying $p(x,y) = \tau(P_x P_y)$. We may assume without loss of generality that $\mathfrak{A}$ is generated by the projections $P_1, P_2, \dots, P_N$ as a C*-algebra. 
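This bordering step is mechanical, so it can be automated. The sketch below (Python with numpy; the function name is ours) forms $T_1$ from a given $(p(x,y))$ and, as an illustration, checks that the matrix with diagonal entries $1/2$ and off-diagonal entries $1/6$ (which reappears in Section \ref{subsec: SIC-POVM} as the $d=2$ SIC correlation) yields a positive semidefinite $T_1$ of rank $4$.

```python
import numpy as np

def first_certificate(p):
    """Border an N x N matrix p(x, y) by the empty-word index 0, using
    M[0, 0] = 1 and M[0, x] = M[x, 0] = M[x, x] = p(x, x)."""
    p = np.asarray(p, dtype=float)
    N = p.shape[0]
    T1 = np.empty((N + 1, N + 1))
    T1[0, 0] = 1.0
    T1[0, 1:] = np.diag(p)
    T1[1:, 0] = np.diag(p)
    T1[1:, 1:] = p
    return T1

# Example: the d = 2 SIC correlation (diagonal 1/2, off-diagonal 1/6).
p = np.full((4, 4), 1.0 / 6.0)
np.fill_diagonal(p, 0.5)
T1 = first_certificate(p)
```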
For each $\alpha \in [N]^*$ with $\alpha = a_1 a_2 \dots a_k$, set $P_{\alpha} = P_{a_1} P_{a_2} \dots P_{a_k}$, and let $M_{\alpha, \beta} = \tau(P_{\alpha}^{\dagger} P_{\beta})$ for each $\alpha, \beta \in [N]^*$. By the GNS construction, there exists a Hilbert space $H$, a unit vector $\ket{\phi} \in H$ and a $*$-homomorphism $\pi: \mathfrak{A} \to B(H)$ such that $\tau(P_{\alpha}^{\dagger} P_{\beta}) = \bra{\phi} \pi(P_{\alpha}^{\dagger} P_{\beta}) \ket{\phi}$ for each $\alpha, \beta \in [N]^*$. Since $\dim(\mathfrak{A}) < \infty$, there exists $m$ such that $\mathfrak{A}$ is spanned by $\{P_{\alpha} : \alpha \in [N]^m\}$. Let $\alpha_1, \alpha_2, \dots, \alpha_M$ be an enumeration of $[N]^m$, and let $\alpha_{M+1}, \alpha_{M+2}, \dots, \alpha_{M'}$ be an enumeration of $[N]^{m+1} \setminus [N]^{m}$. Then since \[ \dim( \operatorname{span} \{\pi(P_{\alpha}) \ket{\phi} : \alpha \in [N]^m \}) = \dim( \operatorname{span} \{\pi(P_{\alpha}) \ket{\phi} : \alpha \in [N]^{m+1} \}) \] we must conclude that $\operatorname{rank}(T_m) = \operatorname{rank}(T_{m+1})$, since \[ T_m = \begin{bmatrix} \bra{\phi} \pi(P_{\alpha_1})^{\dagger} \\ \vdots \\ \bra{\phi} \pi(P_{\alpha_M})^{\dagger} \end{bmatrix} \begin{bmatrix} \pi(P_{\alpha_1}) \ket{\phi} & \dots & \pi(P_{\alpha_M}) \ket{\phi} \end{bmatrix} \quad \text{and} \quad T_{m+1} = \begin{bmatrix} \bra{\phi} \pi(P_{\alpha_1})^{\dagger} \\ \vdots \\ \bra{\phi} \pi(P_{\alpha_{M'}})^{\dagger} \end{bmatrix} \begin{bmatrix} \pi(P_{\alpha_1}) \ket{\phi} & \dots & \pi(P_{\alpha_{M'}}) \ket{\phi} \end{bmatrix}. \] On the other hand, assume we have $(M_{\alpha, \beta})$ satisfying the given conditions. By Lemma \ref{lem: infinite gram}, there exists a Hilbert space $H$ and vectors $\ket{\alpha} \in H$ for each $\alpha \in [N]^{m+1}$ such that $M_{\alpha, \beta} = \braket{\alpha}{\beta}$ for each $\alpha, \beta \in [N]^{m+1}$.
Since $\operatorname{rank}(T_m) = \operatorname{rank}(T_{m+1})$ we see that \[ \dim( \operatorname{span} \{ \ket{\alpha} : \alpha \in [N]^m\}) = \dim( \operatorname{span} \{ \ket{\alpha} : \alpha \in [N]^{m+1}\}). \] Therefore every vector $\ket{\alpha}$ with $\alpha \in [N]^{m+1}$ can be written as a linear combination of vectors of the form $\ket{\beta}$ where $\beta \in [N]^m$. Hence, we may identify the Hilbert spaces $H_{m+1}$ and $H_m$. For each $x \in [N]$, let $P_x: H_m \to H_m$ denote the orthogonal projection onto the subspace spanned by the vectors $\ket{x \alpha}$ for $\alpha \in [N]^m$. As shown in the proof of Theorem \ref{thm: Synchronous NPA heirarchy}, we have $P_x \ket{\alpha} = \ket{x \alpha}$ for each $\alpha \in [N]^m$. Let $\mathfrak{A}$ denote the finite-dimensional C*-algebra generated by the operators $P_x$ in $B(H_m)$. The proof that $\tau(T) = \bra{0} T \ket{0}$ defines a trace on $\mathfrak{A}$ is identical to the argument presented in the proof of Theorem \ref{thm: Synchronous NPA heirarchy}. We conclude that $(p(x,y)) \in D_q(N)$ in this case. \end{proof} \section{Applications} In this section, we consider two applications, each involving $N$ projections which span the vector space $\mathbb{M}_d$. We begin by characterizing the correlations which arise from such projections. We say that a correlation $(p(x,y)) \in D_q(N)$ is \textbf{matricially spanning} if there exists an integer $d \in \mathbb{N}$, called the \textbf{dimension} of $(p(x,y))$, and projections $P_1, P_2, \dots, P_N \in \mathbb{M}_d$ such that $\operatorname{span} \{P_i\} = \mathbb{M}_d$ and such that $p(x,y) = \frac{1}{d} \operatorname{Tr}(P_x P_y)$ for each $x,y \in [N]$. We would like to characterize such correlations. To this end, suppose that $\operatorname{span} \{P_i\} = \mathbb{M}_d$. Then $\dim(\operatorname{span} \{P_i\}) = d^2 = \dim(\operatorname{span} \{P_i P_j\})$.
Therefore a matricially spanning correlation $(p(x,y))$ can be certified by computing only two certificates, $T_1$ and $T_2$, and verifying that $\operatorname{rank}(T_1) = \operatorname{rank}(T_2) = d^2$. While $\dim(\operatorname{span} \{P_i\}) = d^2 = \dim(\operatorname{span} \{P_i P_j\})$ guarantees that $C^*(\{P_i\})$ has dimension $d^2$, it does not guarantee that $C^*(\{P_i\}) = \mathbb{M}_d$. To verify this, we can use the certificate $T_2$ to show that the center of $C^*(\{P_i\})$ is one-dimensional, and hence scalar. To see how we can achieve this, suppose that $T = \sum t_i P_i$ is an element of the center of $C^*(\{P_i\})$. Then for each $x,y \in [N]^1$ and $k \in [N]$ we must have \begin{eqnarray} \bra{x} [T,P_k] \ket{y} & = & \sum_i t_i ( \bra{x} P_i P_k \ket{y} - \bra{x} P_k P_i \ket{y}) \nonumber \\ & = & \sum_i t_i (\braket{ix}{ky} - \braket{kx}{iy}) \nonumber \\ & = & \sum_i t_i (M_{ix,ky} - M_{kx,iy}) \nonumber \\ & = & 0 \nonumber \end{eqnarray} since $[T,P_k] = 0$ for each $k \in [N]$. More succinctly, the vector $\vec{t}=(t_i) \in \mathbb{C}^N$ lies in the kernel of the matrix $S_M(x,y)$ whose $(k,i)$ entry is $M_{ix, ky} - M_{kx, iy}$ for each $x,y \in [N]^1$. Let \[ V = \operatorname{span} \{ T = \sum_i t_i P_i : \vec{t} \in \ker(S_M(x,y)) \text{ for all } x,y \in [N]^1\}. \] Then we must have $\dim(V) = 1$. Equivalently, the nullity of the matrix \begin{equation} \label{S_M Matrx} S_M = \begin{bmatrix} S_M(0,0) \\ \vdots \\ S_M(N,N) \end{bmatrix} \end{equation} is one (the ordering of the column matrix above is arbitrary, but includes $S_M(x,y)$ for each $x,y \in [N]^1$). We summarize these observations in the following theorem.
\begin{theorem} \label{thm: matricially spanning} A matrix $(p(x,y))$ is a matricially spanning quantum correlation of dimension $d$ if and only if there exists a positive semidefinite matrix $(M_{\alpha, \beta})_{\alpha, \beta \in [N]^2}$ with $M_{0,0} = 1$ satisfying the following properties: \begin{enumerate} \item For each $x,y \in [N]$ we have $p(x,y) = M_{x,y}$. \item Whenever $(\alpha, \beta) \sim (\delta, \gamma)$ we have $M_{\alpha, \beta} = M_{\delta, \gamma}$. \item We have $\operatorname{rank}(T_1) = \operatorname{rank}(T_{2}) = d^2$, where $T_k = (M_{\alpha, \beta})_{\alpha, \beta \in [N]^k}$. \item The nullity of the matrix $S_M$ in equation (\ref{S_M Matrx}) is one. \end{enumerate} \end{theorem} We remark that $\ker(S_M) = \ker(S_M^{\dagger}S_M)$, since $\ker(S_M)$ is clearly a subset of $\ker(S_M^{\dagger} S_M)$ and since $S_M^{\dagger}S_M \vec{x} = \vec{0}$ implies that $0 = \vec{x}^{\dagger} S_M^{\dagger} S_M \vec{x} = \langle S_M \vec{x}, S_M \vec{x} \rangle = \| S_M \vec{x} \|^2$. Thus calculating the nullity of the matrix $S_M$ is equivalent to calculating the nullity of the smaller $N \times N$ matrix $S_M^{\dagger} S_M$. We now consider the problem of applying the conditions of Theorem \ref{thm: matricially spanning} to some problems of interest in quantum information theory. Suppose that we have already verified that the certificate $T_1$ is positive semidefinite. Since $T_1$ is a real positive semidefinite matrix, we may decompose $T_1$ as $T_1 = L^{\dagger} L$, where \[ L = \begin{bmatrix} \vec{v}_0 & \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_N \end{bmatrix} \] for some set of vectors $\vec{v}_0, \vec{v}_1, \vec{v}_2, \dots, \vec{v}_N \in \mathbb{R}^{d^2}$ satisfying $\operatorname{span} \{\vec{v}_0, \vec{v}_1, \vec{v}_2, \dots, \vec{v}_N\} = \mathbb{R}^{d^2}$.
To satisfy conditions 2 and 3 of Theorem \ref{thm: matricially spanning}, suppose we have vectors $\{ \vec{v}_{ab} : a,b \in [N] \} \subset \mathbb{C}^{d^2}$ satisfying \[ \langle \vec{v}_{\alpha}, \vec{v}_{\beta} \rangle = \langle \vec{v}_{\delta}, \vec{v}_{\gamma} \rangle \] whenever $(\alpha, \beta) \sim (\delta, \gamma)$ for $\alpha, \beta, \gamma, \delta \in [N]^2$. Then the matrix $T_2 = (L')^{\dagger} (L')$, where \[ L' = \begin{bmatrix} L & L_1 & L_2 & \dots & L_N \end{bmatrix} \] with \[ L_a = \begin{bmatrix} \vec{v}_{1a} & \vec{v}_{2a} & \dots & \vec{v}_{Na} \end{bmatrix} \] for each $a \in [N]$, satisfies conditions 2 and 3 of Theorem \ref{thm: matricially spanning}. Taking $M = T_2$, the final condition of Theorem \ref{thm: matricially spanning} can be checked by calculating the nullity of the matrix $S_M$. We conclude by setting up the matrix $L$ described above for two important classes of examples. \subsection{SIC-POVMs} \label{subsec: SIC-POVM} Let $d \in \mathbb{N}$. Then a set $\{P_1, P_2, \dots, P_{d^2}\}$ of rank one projections in $\mathbb{M}_d$ is called a SIC-POVM if $\operatorname{span} \{P_1, P_2, \dots, P_{d^2}\} = \mathbb{M}_d$, $\sum_{i=1}^{d^2} P_i = d I_d$, and $\operatorname{Tr}(P_i P_j)=c$ for all $i \neq j$, where $c$ is a fixed positive constant. Under these conditions, it can be checked that \[ \operatorname{Tr}(P_i P_j) = \begin{cases} \frac{1}{d+1} & i \neq j \\ 1 & i = j \end{cases}. \] It has been verified that SIC-POVMs exist in most dimensions $d \leq 50$, and numerical evidence suggests that they also exist in most dimensions $d \leq 150$. It is currently an open question whether or not SIC-POVMs exist in every dimension $d$, or if there is an upper bound on the dimension $d$ in which SIC-POVMs exist. See \cite{SICQuestion} for an overview of the history and open problems related to SIC-POVMs. Let $d \in \mathbb{N}$. Define \[ p_{sic,d}(x,y) = \begin{cases} \frac{1}{d(d+1)} & x \neq y \\ \frac{1}{d} & x = y \end{cases}.
\] Then $p_{sic,d}(x,y)$ is a matricially spanning synchronous quantum correlation of dimension $d$ if and only if there exists a SIC-POVM $\{P_1, P_2, \dots, P_{d^2}\}$ in $\mathbb{M}_d$. Thus the existence of a SIC-POVM in $\mathbb{M}_d$ is equivalent to verifying that the matrix $p_{sic,d}(x,y)$ satisfies the conditions of Theorem \ref{thm: matricially spanning}. We now verify that $p_{sic,d}(x,y)$ extends to a positive semidefinite certificate $T_1$ satisfying $\operatorname{rank}(T_1) = d^2$. The certificate $T_1$ is uniquely defined and equals \begin{equation} T_1 = \begin{bmatrix} 1 & \frac{1}{d} & \frac{1}{d} & \dots & \frac{1}{d} \\ \frac{1}{d} & \frac{1}{d} & \frac{1}{d(d+1)} & \dots & \frac{1}{d(d+1)} \\ \frac{1}{d} & \frac{1}{d(d+1)} & \frac{1}{d} & & \frac{1}{d(d+1)} \\ \vdots & \vdots & & \ddots & \vdots \\ \frac{1}{d} & \frac{1}{d(d+1)} & \dots & & \frac{1}{d} \end{bmatrix}. \nonumber \end{equation} This matrix factors as $L^{\dagger}L$, where \[ L = \begin{bmatrix} 1 & \frac{1}{d} & \frac{1}{d} & \frac{1}{d} & \dots & \frac{1}{d} & \frac{1}{d} \\ 0 & x_1 & \frac{-x_1}{d^2-1} & \frac{-x_1}{d^2-1} & \dots & \frac{-x_1}{d^2-1} & \frac{-x_1}{d^2-1} \\ 0 & 0 & x_2 & \frac{-x_2}{d^2-2} & \dots & \frac{-x_2}{d^2-2} & \frac{-x_2}{d^2-2} \\ 0 & 0 & 0 & x_3 & \dots & \frac{-x_3}{d^2-3} & \frac{-x_3}{d^2-3} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \dots & x_{d^2-1} & -x_{d^2-1} \end{bmatrix} \] and \[ x_k = \frac{\sqrt{d^2-k}}{\sqrt{(d+1)(d^2-k+1)}} \] for each $k \geq 1$; the entries of row $k$ to the right of the diagonal are all equal to $-x_k/(d^2-k)$. Hence $T_1$ is positive semidefinite. Since the columns of $L$ are elements of $\mathbb{R}^{d^2}$ which span $\mathbb{R}^{d^2}$, we see that $\operatorname{rank}(T_1) = d^2$.
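The factorization above can be verified numerically. In the sketch below (Python with numpy; the function name is ours) we assemble $T_1$ for $p_{sic,d}$ and the triangular factor $L$, taking the entries of row $k$ to the right of the diagonal to be $-x_k/(d^2-k)$ (so that the last row ends in $-x_{d^2-1}$), and confirm that $T_1 = L^{\dagger}L$ with $\operatorname{rank}(T_1) = d^2$.

```python
import numpy as np

def sic_certificate_and_factor(d):
    """T1 for p_{sic,d}, plus the triangular factor L whose row k has
    x_k on the diagonal and -x_k / (d^2 - k) in every later column."""
    n = d * d
    T1 = np.full((n + 1, n + 1), 1.0 / (d * (d + 1)))
    T1[0, :] = 1.0 / d
    T1[:, 0] = 1.0 / d
    np.fill_diagonal(T1, 1.0 / d)
    T1[0, 0] = 1.0
    L = np.zeros((n, n + 1))
    L[0, 0] = 1.0
    L[0, 1:] = 1.0 / d
    for k in range(1, n):
        x_k = np.sqrt(n - k) / np.sqrt((d + 1) * (n - k + 1))
        L[k, k] = x_k
        L[k, k + 1:] = -x_k / (n - k)
    return T1, L

T1, L = sic_certificate_and_factor(3)   # d = 3, so T1 is 10 x 10
```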
For more justification, see Appendix \ref{appen: SICs}. Therefore, to decide if there exists a SIC-POVM in dimension $d$, it remains to answer the following question: \begin{question} Do there exist $d^2 \times d^2$ matrices $L_1, L_2, \dots, L_{d^2}$ such that the matrix $T_2 = (L')^{\dagger} (L')$ satisfies the conditions of Theorem \ref{thm: matricially spanning}, where $L' = \begin{bmatrix} L & L_1 & L_2 & \dots & L_{d^2} \end{bmatrix}$? \end{question} \subsection{MUBs} \label{subsec: MUBs} Let $H$ be a Hilbert space of dimension $d$. Two sets $\{ \ket{x_1}, \ket{x_2}, \dots, \ket{x_d} \}$ and $\{ \ket{y_1}, \ket{y_2}, \dots, \ket{y_d} \}$ in $H$ are \textbf{unbiased bases} if they are each orthonormal bases for $H$ and $|\braket{x_i}{y_j}| = \frac{1}{\sqrt{d}}$ for all $i,j \in [d]$. Letting $P_i = \ket{x_i}\bra{x_i}$ and $Q_j = \ket{y_j}\bra{y_j}$ for each $i,j \in [d]$ we obtain projection-valued measures $\{P_i\}_{i=1}^d$ and $\{Q_j\}_{j=1}^d$ which satisfy $\operatorname{Tr}(P_i Q_j) = \frac{1}{d}$ for all $i,j \in [d]$. It is known that a Hilbert space of dimension $d$ can have at most $d+1$ mutually unbiased bases, or MUBs. When $d = p^n$ for some prime $p$ and some positive integer $n$, then it is also known that $d+1$ mutually unbiased bases exist. When $d$ is not a prime power, it is not known if $d+1$ mutually unbiased bases exist. In particular, it is unknown whether or not there exist seven mutually unbiased bases for the Hilbert space of dimension 6, though numerical evidence suggests that such MUBs do not exist \cite{MUBSix}. Let $d \in \mathbb{N}$. Define \[ p_{mub,d}((x,i),(y,j)) = \begin{cases} \frac{1}{d} & (x,i) = (y,j) \\ 0 & x=y \text{ and } i \neq j \\ \frac{1}{d^2} & x \neq y \end{cases} \] for all $(x,i),(y,j) \in [d+1] \times [d]$.
Then $p_{mub,d}((x,i),(y,j))$ is a matricially spanning synchronous quantum correlation of dimension $d$ if and only if there exist $d+1$ projection valued measures $\{P_{1,i}\}_{i=1}^d, \{P_{2,i}\}_{i=1}^d, \dots, \{P_{d+1,i}\}_{i=1}^d$ in $\mathbb{M}_d$ whose corresponding bases are mutually unbiased. Thus the existence of $d+1$ MUBs in $\mathbb{M}_d$ is equivalent to verifying that the matrix $p_{mub,d}(x,y)$ satisfies the conditions of Theorem \ref{thm: matricially spanning}. We now verify that $p_{mub,d}(x,y)$ extends to a positive semidefinite certificate $T_1$ satisfying $\operatorname{rank}(T_1) = d^2$. The certificate $T_1$ is uniquely defined and equals \begin{equation} T_1 = \begin{bmatrix} 1 & \vec{v}^{\dagger} & \dots & \vec{v}^{\dagger} \\ \vec{v} & A & & B \\ \vdots & & \ddots & \\ \vec{v} & B & & A \end{bmatrix} \nonumber \end{equation} where \begin{equation} \vec{v} = \begin{bmatrix} \frac{1}{d} \\ \vdots \\ \frac{1}{d} \end{bmatrix} \in \mathbb{M}_{d,1}, \quad A = \begin{bmatrix} \frac{1}{d} & & 0 \\ & \ddots & \\ 0 & & \frac{1}{d} \end{bmatrix} \in \mathbb{M}_d, \text{ and } B = \begin{bmatrix} \frac{1}{d^2} & \dots & \frac{1}{d^2} \\ \vdots & \ddots & \vdots \\ \frac{1}{d^2} & \dots & \frac{1}{d^2} \end{bmatrix} \in \mathbb{M}_d. \nonumber \end{equation} Let $\vec{0}_{d-1} \in \mathbb{C}^{d-1}$ denote the zero vector and $0_{d-1}$ denote the $d \times (d-1)$ zero matrix.
Then $T_1$ factors as $L^{\dagger}L$, where $L^{\dagger}$ has the block matrix form \[ L^{\dagger} = \begin{bmatrix} 1 & \vec{0}_{d-1}^{\dagger} & \vec{0}_{d-1}^{\dagger} & \dots & \vec{0}_{d-1}^{\dagger} \\ \vec{v} & C & 0_{d-1} & \dots & 0_{d-1} \\ \vec{v} & 0_{d-1} & C & & \vdots \\ \vdots & \vdots & & \ddots & \\ \vec{v} & 0_{d-1} & \dots & & C \end{bmatrix} \] with \[ C = \begin{bmatrix} \vec{w}_1 & \vec{w}_2 & \dots & \vec{w}_{d-1} \end{bmatrix} \quad \text{ and } \quad \vec{w}_k = \frac{\sqrt{d-k}}{\sqrt{d(d-k+1)}} \begin{bmatrix} \vec{0}_{k-1} \\ 1 \\ \frac{-1}{d-k} \\ \vdots \\ \frac{-1}{d-k} \end{bmatrix} \] for each $k \in \{1,2,\dots,d-1\}$. Hence, $T_1$ is positive semidefinite. Since $L^{\dagger}$ has $d^2$ columns, it is easy to check that the columns of $L$ span $\mathbb{R}^{d^2}$, and hence $\operatorname{rank}(T_1) = d^2$. For more justification, see Appendix \ref{appen: MUBs}. Therefore, to decide if there exist $d+1$ MUBs in dimension $d$, it remains to answer the following question: \begin{question} Do there exist $d^2 \times (d^2+d)$ matrices $L_1, L_2, \dots, L_{d^2+d}$ such that the matrix $T_2 = (L')^{\dagger} (L')$ satisfies the conditions of Theorem \ref{thm: matricially spanning}, where $L' = \begin{bmatrix} L & L_1 & L_2 & \dots & L_{d^2+d} \end{bmatrix}$? \end{question} \bibliographystyle{plain}
\section{Hamiltonians for three band models having non-trivial Euler class}\label{sec:models} Here we briefly comment on the explicit models of the main text that were derived using the techniques of Ref. \cite{newpaperfragile}. These include systems having Euler class $\xi=2,4$ and different spectral orderings of the bands. Finally, we also illustrate the derived Chern models induced by the non-degenerate subspace. \parag{Model Hamiltonians}-- Chern insulator models find their simplest incarnation in two-band systems of the form \begin{equation}\label{appeq::2band} H_{\mathcal{C}}=\bs{d}(\bs{k}) \cdot \boldsymbol{\sigma}+d_0(\bs{k})\sigma_0, \end{equation} in the basis of Pauli matrices ${\bs \sigma}$. Hamiltonians exhibiting Euler class have a similarly tractable form in terms of real symmetric three-band models that have two degenerate bands featuring $2\xi$ isolated band nodes, and a third separated band. A priori, one may expect that separating the system into a two-band $\{\ket{u_1(\bs k)},\ket{u_2(\bs k)}\}$ and a one-band subspace $\{\ket{u_3(\bs k)}\}$ can give nontrivial behavior, as the stable homotopy classes of Wilson flows, characterized by the first homotopy group $\pi_1$, cover $S^2$ \cite{bouhon2019wilson,tomas}. The two subblocks then accordingly relate to $\pi_1(SO(2))=\mathbf{Z}$, conveying the gapless charges within the gap between these two bands, whereas $\pi_1(SO(1))=0$ relates to the third band. However, these guiding intuitions should be taken with caution. First of all, the orthonormal frame spanned by $E=\{\ket{u_1(\bs k)},\ket{u_2(\bs k)},\ket{u_3(\bs k)}\}$ is invariant under reversing the sign of each of the vectors. Hence, the space of Hamiltonians is actually given by the projective plane $S^2/\bs Z_2=\bs R \bs P^2$, coinciding with bi-axial nematic descriptions \cite{Kamienrmp,Prx2016}.
Fortunately, as for the Euler classes, we are only interested in the orientable case of the vector bundle, which {\it can} actually be related to the sphere \cite{bouhon2019nonabelian}. Secondly, the facts that all bands must sum up to a trivial charge and that $\pi_1(SO(3))=\mathbf{Z}_2$ also suggest that the integer winding of the two-band subspace has to be even, which is consistent with earlier findings that an odd Euler class would require a four-band model \cite{Ahn2018b}. Finally, $\ket{u_3(\bs k)}$ {\it is} related to the other two eigenstates as $\ket{u_3(\bs k)}=\ket{u_1(\bs k)}\times\ket{u_2(\bs k)}\equiv\bs n(\bs k)$. Although these hints are merely a motivation, the spectrally flattened form for such real three-band systems can indeed be shown to be \cite{bouhon2019nonabelian, newpaperfragile,tomas}, \begin{equation} \label{eq::specflat;app} H(\bs{k}) = 2\, \bs{n}(\bs{k})\cdot \bs{n}(\bs{k})^\top -\mathbb{I}_3. \end{equation} With these insights it is possible to construct Euler Hamiltonians systematically using a geometric construction on arbitrary lattice geometries \cite{newpaperfragile}. In a nutshell, the idea is that the winding can be encoded via a so-called Pl\"ucker embedding. That is, via a pullback map to coordinates parametrizing the sphere, the winding can be formulated in terms of a rotation matrix $R({\bs k})$. Concretely, this means that we can obtain Hamiltonians with Euler class $\xi$ as \begin{equation}\label{eq::construct} H(\boldsymbol{k}) = R(\boldsymbol{k}) [-\mathbb{1}\oplus 1] R(\boldsymbol{k})^T, \end{equation} where $R({\bs k})$ implements a $\xi$-times winding of the sphere and we take the flattened energies of the degenerate/third band subspace to be $-/+1$. From $H(\bs{k})$ we can then get an explicit tight-binding model upon sampling over a grid $\Lambda^*$ set by the lattice, e.g.\ a square lattice.
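To make the flattened form concrete, the sketch below (our own illustration, not part of the original construction) builds $H(\bs k)=2\,\bs n(\bs k)\bs n(\bs k)^\top-\mathbb{I}_3$ from a hypothetical winding unit vector $\bs n(\bs k)$ and verifies the exact $(-1,-1,+1)$ spectrum; the actual $\bs n(\bs k)$ of the paper follows from the Pl\"ucker embedding of Ref. \cite{newpaperfragile}.

```python
import numpy as np

def n_vec(kx, ky, xi=2):
    # A hypothetical unit vector n(k) winding around the sphere; the actual
    # n(k) of the construction comes from the Pluecker embedding.
    theta = np.pi * (1 + np.cos(kx) * np.cos(ky)) / 2   # maps the BZ into [0, pi]
    phi = xi * np.arctan2(np.sin(ky), np.sin(kx))       # xi-fold azimuthal winding
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def H_flat(kx, ky, xi=2):
    n = n_vec(kx, ky, xi)
    return 2.0 * np.outer(n, n) - np.eye(3)             # flattened Euler Hamiltonian

H = H_flat(0.3, 0.7)
# real symmetric (C2T) with spectrum exactly (-1, -1, +1):
# two degenerate bands plus one split-off band
assert np.allclose(H, H.T)
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [-1, -1, 1])
```

Since $\bs n$ is normalized, $H$ squares to the identity at every $\bs k$, which is what the spectral check above confirms.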
Applying an inverse discrete Fourier transform then results in the hopping parameters $t_{\alpha\beta}(\boldsymbol{R}_j-\boldsymbol{0}) = \mathcal{F}^{-1}[ \{H_{\alpha\beta}(\boldsymbol{k}_m)\}_{m\in \Lambda^*}](\boldsymbol{R}_j-\boldsymbol{0}) $ around a specific site. Formally this has to be executed over the whole Bravais lattice $\boldsymbol{R}_j\in \{l_1 \boldsymbol{a}_1 + l_2 \boldsymbol{a}_2\}_{l_1,l_2\in\mathbf{Z}}$ (here spanned by the primitive lattice vectors of the square lattice $\boldsymbol{a}_1 = a \hat{x}$, $\boldsymbol{a}_2 = a \hat{y}$), but due to the fast decay of the Fourier envelope we can get simple models by truncating this process, meaning that we consider a specific number of neighbors $\boldsymbol{R}_j\in \{ l_1 \boldsymbol{a}_1 + l_2 \boldsymbol{a}_2 \}_{ 0\leq \vert l_1\vert,\vert l_2\vert \leq N}$ for tunneling. For the $\xi=2$ case, we restrict tunneling to $N=2$ neighbors. The three-band Hamiltonian then takes the form, \begin{equation} \label{eq::App:mainham} H({\bs k})=h_j({\bs k})\lambda_j, \end{equation} in terms of the five real Gell-Mann matrices $\lambda_j$, where the sum over $j$ is implied. The specific form of the $h_j({\bs k})$ then reads \begin{equation}\label{eq::modelparameters} h_j({\bs k})=\sum_{l_1,l_2}t_j(l_1,l_2)e^{i (l_1 k_x+l_2 k_y)}, \end{equation} where $l_1, l_2$ are integers running from $-N$ to $N$ and $t_j(l_1,l_2)$ defines a matrix element of hopping parameters obtained by the truncation procedure defined above. We have specified these eight $5\times5$-matrices for completeness in Appendix \ref{App:matrices Hamiltonian}. Similarly, it is straightforward to obtain models having Euler class $\xi=4$, where we truncate to $N=3$ neighbors to accommodate the higher winding. The hopping matrices of this higher winding are also specified in Appendix \ref{App:matrices Hamiltonian}.
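The Fourier-truncation step can be illustrated with a toy scalar matrix element (an assumed stand-in chosen for illustration; the real $H_{\alpha\beta}(\bs k)$ come from the construction above). Because the toy element is real and inversion symmetric, the FFT sign convention is immaterial here.

```python
import numpy as np

N_grid = 32                                  # momentum grid Lambda^*
ks = 2 * np.pi * np.arange(N_grid) / N_grid

# sample a toy matrix element over the BZ grid:
# h(k) = cos(kx) + cos(2*ky), standing in for H_alpha_beta(k)
Hk = np.array([[np.cos(kx) + np.cos(2 * ky) for ky in ks] for kx in ks])

# inverse discrete Fourier transform -> real-space hoppings t(l1, l2)
t = np.fft.ifft2(Hk)

def hop(l1, l2):
    # hopping amplitude along l1*a1 + l2*a2 (indices wrap on the grid)
    return t[l1 % N_grid, l2 % N_grid]

# hoppings decay fast, so truncating to |l1|, |l2| <= N keeps the model simple
assert np.isclose(hop(1, 0), 0.5)            # cos(kx)  -> t(+1, 0) = 1/2
assert np.isclose(hop(0, 2), 0.5)            # cos(2ky) -> t(0, +2) = 1/2
assert abs(hop(3, 3)) < 1e-12                # beyond the truncation range: negligible
```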
Finally, the construction is also flexible enough to shift the third band to the bottom of the spectrum, corresponding to the flattened form, \begin{equation} H(\bs{k}) =\mathbb{I}_3- 2\, \bs{n}(\bs{k})\cdot \bs{n}(\bs{k})^\top, \end{equation} by acting with $R$ as $H(\boldsymbol{k}) = R(\boldsymbol{k}) [-1\oplus\mathbb{1}] R(\boldsymbol{k})^T$. We refer to these systems as ``inverted''. \parag{Derived Chern model}-- As stated in the main text, the class of Euler Hamiltonians that are studied here directly induce Chern insulator models upon promoting the normalized ${\bs n(\bs k)}=\ket{u_3(\bs k)}$-eigenstate to $d({\bs k})$ in Eq. \eqref{appeq::2band}. We stress however once more that there are no Chern numbers in the Euler system and this merely induces a new Chern Hamiltonian. Since $n(\bs{k})$ is always normalized and real (due to the symmetries), it can be treated as a Bloch vector that corresponds to a flattened Hamiltonian as $H'(\bs{k})=n(\bs{k})\cdot \bs{\sigma}$. We consider the Euler Hamiltonian with $\xi=2$ and display the complete coverage of the two-sphere by $n(\bs{k})$ in Fig.~\ref{fig:App1_n(k)}a. Accordingly, when we quench a trivial initial state ($\Psi_0(\bs{k})=\frac{1}{\sqrt{10}}(1,2)^\top$) to suddenly evolve with $H'(\bs{k})$, we obtain linking number (Hopf invariant) 1 in Fig.~\ref{fig:App1_n(k)}b. \begin{figure} \centering\includegraphics[width=.85\linewidth]{figs/FigApp1_n_linking.png} \caption{ Topology of $n(\bs{k})$ for $\xi=2$. a) The vector $n(\bs{k})$ covers $S^2$ once. b) Inverse images of the poles link once when a trivial state is quenched to evolve under $n(\bs{k})$. } \label{fig:App1_n(k)} \end{figure} \section{Alternative simple model in square lattice} In view of specific experimental protocols, the aspects of the implementation are important and form the key to resolving the main hurdle on the route to observing the rich physics presented by the interplay of crystalline symmetries, fragile topology and non-Abelian band nodes.
Indeed, although implementing the non-trivial winding of three-band Euler class might be more involved than a Chern insulator, optical lattices offer a versatile toolbox to overcome these difficulties, where different orbitals (pseudospin structure) can be encoded as sublattice degrees of freedom (and e.g. optimized in Kagome geometry), or most easily as different internal states of an atom where the tunneling can be engineered via Raman couplings. In addition, optical flux lattices offer a promising platform where a topologically non-trivial Euler model can be engineered in momentum space~\cite{Cooper11_PRL_fluxlatt,Cooper13_PRL_fluxlatt}. Finally, the general nature of the algorithm for generating Euler class models does allow for flexibility in lattice geometry and can be tailored on a case-by-case basis. To illustrate this, by relaxing the condition of flatness of the bands and optimizing for the fewest number of tunneling processes, we arrive at an equally valid Hamiltonian on the square lattice with Euler class $\xi=2$ \begin{equation} \begin{aligned} h_3(\boldsymbol{k}) &= -t_a (\cos k_x - \cos k_y) \;,\\ h_8 (\boldsymbol{k}) &= t_a \sqrt{3}/2 [2(\cos k_x + \cos k_y) - 3 (\cos 2k_x + \cos 2k_y)]\;,\\ h_1(\boldsymbol{k}) &= t_b \sin k_x \sin k_y\;, \\ h_4 (\boldsymbol{k}) &= t_c \sin 2k_x\;,\\ h_6 (\boldsymbol{k}) &= t_c \sin 2k_y\;, \end{aligned}\label{eq::alternativemodel} \end{equation} where we listed the non-zero contributions to Eq. \eqref{eq::App:mainham} for $t_a = 0.35$, $t_b = 0.46$, $t_c = 0.69$. 
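As a consistency check, the model \eqref{eq::alternativemodel} can be assembled numerically from the real (symmetric) Gell-Mann matrices in their standard convention, with the quoted parameter values; the sketch below verifies that the resulting Bloch Hamiltonian is real symmetric (as required by $C_2\mathcal{T}$) and traceless.

```python
import numpy as np

ta, tb, tc = 0.35, 0.46, 0.69

# real Gell-Mann matrices (the complex ones l2, l5, l7 drop out for C2T-symmetric models)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
l3 = np.diag([1.0, -1.0, 0.0])
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
l8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)

def H(kx, ky):
    # the h_j(k) of the alternative xi = 2 square-lattice model
    h3 = -ta * (np.cos(kx) - np.cos(ky))
    h8 = ta * np.sqrt(3) / 2 * (2 * (np.cos(kx) + np.cos(ky))
                                - 3 * (np.cos(2 * kx) + np.cos(2 * ky)))
    h1 = tb * np.sin(kx) * np.sin(ky)
    h4 = tc * np.sin(2 * kx)
    h6 = tc * np.sin(2 * ky)
    return h1 * l1 + h3 * l3 + h4 * l4 + h6 * l6 + h8 * l8

Hk = H(0.4, 1.1)
assert np.allclose(Hk, Hk.T) and np.isrealobj(Hk)   # real symmetric: C2T symmetry
assert np.isclose(np.trace(Hk), 0.0)                # traceless Gell-Mann basis
```

The diagonal and off-diagonal entries of this matrix reproduce the $(A,B,C)$-orbital matrix elements listed below.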
In the $(A,B,C)$-orbital basis, the Hamiltonian thus reads \begin{equation} \begin{array}{rclrcl} h_{AA}&=& h_3 + 1/\sqrt{3} h_8 \,,& h_{AB} &=& h_1\,, \\ h_{BB}&=& -h_3 + 1/\sqrt{3} h_8 \,, & h_{AC} &=& h_4\,, \\ h_{CC}&=& -2/\sqrt{3} h_8 \,, & h_{BC} &=& h_6 \,, \end{array} \end{equation} which amounts to the tight-binding parameters \begin{equation} \begin{aligned} t^{(0,\pm1)}_{AA} &= t^{(\pm1,0)}_{BB} = -t^{(\pm1,0)}_{CC} =- t^{(0,\pm1)}_{CC} =2t_a \,, \\ t^{(\pm2,0)}_{AA} &= t^{(0,\pm2)}_{AA} = t^{(\pm2,0)}_{BB} = t^{(0,\pm2)}_{BB} = - t^{(\pm2,0)}_{CC}/2 = -t^{(0,\pm2)}_{CC}/2 \\ &= - t_a 3/2 \,, \end{aligned} \end{equation} \begin{equation} \begin{aligned} t^{(\pm1,\pm1)}_{AB} &= -t^{(\pm1,\mp1)}_{AB} = -t_b/2\,,\\ t^{(\pm2,0)}_{AC} &= \pm t_c \,,\\ t^{(0,\pm2)}_{BC} &= \pm t_c \,. \end{aligned} \end{equation} In the above, the superscripts convey the vectors connecting the neighbors of each site and the subscripts indicate the orbitals. These parameters are illustrated in Fig.~\ref{fig4_TB}. We however stress once more that the models can be further fine-tuned to individual setups, as the Euler class supersedes any specific crystal structure that accommodates the $C_2\mathcal{T}$ symmetry. \begin{figure} \centering\includegraphics[width=.7\linewidth]{figs/Fig4.pdf} \caption{Real space hopping parameters defining the model~\eqref{eq::alternativemodel}. Each site has three orbitals $A,B,C$ and hopping terms $t^{(l_1,l_2)}_{\alpha\beta}$, i.e.~from orbital $\alpha$ to $\beta$ along the vector $(l_1,l_2)=l_1 \hat{x} + l_2\hat{y}$, are indicated as $\alpha\beta$, and $\overline{\alpha\beta}$ for $-t^{(l_1,l_2)}_{\alpha\beta}$.
} \label{fig4_TB} \end{figure} \section{Quaternions, Rotations and Hopf Fibration}\label{sec:quaternions} We find that the Hopf map description of quench dynamics can be most conveniently described in terms of quaternions, which directly relate to the usual characterizations of rotations in terms of $SU(2)$ and $SO(3)$ matrices, thereby also exposing the intimate relation between the two. Note that here we deal with the first Hopf map; higher Hopf maps can also be represented by similar constructions, for example using octonions. \parag{Quaternions}-- Recall that quaternions in essence constitute a generalization of complex numbers, in a manner analogous to the way the latter extend the real numbers. Specifically, a quaternion $q$ is written as \begin{equation} q=x_0+x_1{ i}+x_2{ j}+x_3{ k}, \end{equation} where the quaternion units satisfy ${ij}=k, {jk}=i, {ki}=j$ and ${i}^2={j}^2={k}^2=-1$. We hence see that the real coefficients identify $q$ with $\mathbf{R}^4$, similarly to how $\mathbf{C}$ relates to $\mathbf{R}^2$. Accordingly, we refer to $x_0$ and $\{x_1,x_2,x_3\}$ as the real part and pure quaternion part of $q$, respectively. Using the relations between the units and defining the conjugate of $q$ as reversing the sign of the units, that is $q^{*}=-\frac{1}{2}(q+iqi+jqj+kqk)$, the norm of $q$ is readily found to be $|q|=\sqrt{qq^{*}}=\sqrt{x_0^2+x_1^2+x_2^2+x_3^2}$. \parag{Rotations and versors}-- In what follows, we will be particularly interested in quaternions $v$ having unit norm, or so-called versors. Due to the constraint $|v|=1$, it is evident that the set of versors spans the hypersphere $S^3\subset\mathbf{R}^4$. Versors are of particular use in describing rotations in three spatial dimensions. Note that we can represent a vector $\mathbf{t}\in\mathbf{R}^3$ as a pure quaternion, having a zero real part and with the quaternion units corresponding to the unit vectors parametrizing $\mathbf{R}^3$.
It is easy to see that conjugating a pure quaternion by an arbitrary nonzero quaternion results in another pure quaternion. Rotations on vectors can then be implemented using versors, which, due to their unit norm, indeed generate an isometry. In particular, it is well known that acting with a versor $v=x_0+\mathbf{v}$ on a vector $\bf{t}$ as \begin{equation}\label{eq::versorrot} R_v: {\bf t}\mapsto v{\bf{t}} v^{-1}, \end{equation} where $v^{-1}$ refers to the inverse of $v$, implements a rotation around the vector $\mathbf{v}$ by an angle $\theta=2\arccos{(x_0)}=2\arcsin{(|\mathbf{v}|)}$. \parag{Hopf map}-- The formulation of rotations in terms of versors provides for a direct parametrization of the Hopf map. Indeed, we can reinterpret Eq. \eqref{eq::versorrot} as a map relating the versor $v$ with a vector ${\bf t'}=v{\bf{t}} v^{-1}$. Moreover, as this entails an isometry, we can already expect that when ${\bf t}$ is a unit vector, and thus an element corresponding to the two-sphere $S^2$, the map returns another unit vector ${\bf t}'\in S^2$. In other words, the rotation isometry maps $S^2$ to itself. As the set of versors spans $S^3$, this directly induces a map of $S^3$ to $S^2$, the famous Hopf fibration. As a specific example one may consider the versor $v=x_0+x_1{ i}+x_2{ j}+x_3{ k}$ and the unit vector in the $\hat{x}$-direction, which we thus represent as $i$ in the quaternion language, ${\bf t}=i$. Simple algebra then shows that applying the rotation isometry, Eq. \eqref{eq::versorrot}, results in \begin{equation} {\bf t'}= \begin{pmatrix} x_0^2+x_1^2-x_2^2-x_3^2\\ 2x_1x_2+2x_0x_3\\ -2x_0x_2+2x_1x_3 \end{pmatrix}, \end{equation} which evidently has unit norm, thereby inducing the Hopf map $\mathcal{H}: S^3 \rightarrow S^2$. Indeed, it is easy to verify that the inverse image is a circle, which constitutes the familiar fibre under this map.
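This versor construction can be verified numerically; the short sketch below (our illustration, using the standard Hamilton product) conjugates ${\bf t}=i$ by a random versor and checks that the result is pure, of unit norm, and equal to the expression for ${\bf t'}$ above.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions (x0, x1, x2, x3)
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                     p0*q1 + p1*q0 + p2*q3 - p3*q2,
                     p0*q2 - p1*q3 + p2*q0 + p3*q1,
                     p0*q3 + p1*q2 - p2*q1 + p3*q0])

def conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

rng = np.random.default_rng(0)
v = rng.normal(size=4)
v /= np.linalg.norm(v)                       # a random versor, |v| = 1
x0, x1, x2, x3 = v

t = np.array([0.0, 1.0, 0.0, 0.0])           # pure quaternion i = x-hat
tp = qmul(qmul(v, t), conj(v))               # v t v^{-1} (inverse = conjugate for |v| = 1)

assert np.isclose(tp[0], 0.0)                # the result is again pure
assert np.allclose(tp[1:], [x0**2 + x1**2 - x2**2 - x3**2,
                            2*x1*x2 + 2*x0*x3,
                            -2*x0*x2 + 2*x1*x3])     # matches t' in the text
assert np.isclose(np.linalg.norm(tp[1:]), 1.0)       # the Hopf map lands on S^2
```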
We can make the above even more insightful upon representing the action of the versor $v$ in terms of a rotation matrix acting on a vector. The columns represent the action on the unit vectors $i,j,k$ and a similar calculation to the one above then gives us the general rotation matrix \begin{widetext} \begin{equation}\label{eq::rotationversorrep} R_v= \begin{pmatrix} x_0^2+x_1^2-x_2^2-x_3^2 & 2x_1x_2-2x_0x_3 & 2x_0x_2+2x_1x_3 \\ 2x_1x_2+2x_0x_3 & x_0^2-x_1^2+x_2^2-x_3^2& -2x_0x_1+2x_2x_3\\ -2x_0x_2+2x_1x_3 & 2x_0x_1+2x_2x_3 & x_0^2-x_1^2-x_2^2+x_3^2 \end{pmatrix}. \end{equation} \end{widetext} It is routinely verified that the columns and rows of $R_v$ are orthogonal and that $R_v$ indeed represents the familiar rotation matrix in 3D. From the viewpoint of the Hopf map, we note that Eq.~\eqref{eq::rotationversorrep} is nothing but an explicit parameterization. Indeed, any row, column or linear combination generated by acting on $R$ with a unit norm vector from the left or right implements the first Hopf map. \parag{Quenches in two-band systems}-- We can directly put the above notions to use to reproduce the dynamics of two-band models quenched between trivial and non-trivial Chern numbers \cite{Wangchern_2017, Tarnowski19_NatCom, ChernHopf_Yu, YiPan19_arx_hopfTori}. We consider the model given in Eq.~\eqref{appeq::2band} for $d_0=0$, where the relation of $\bf{ d}$ to the Chern number is given in the main text Eq.~\textcolor{blue}{(2)}. An essential role is then played by the time evolution operator $U_{\mathcal{C}}$, which takes the particularly simple form \vspace{-.5mm} \begin{equation} U_{\mathcal{C}}=e^{-itH_{\mathcal{C}}}=\cos{(t)}-i\sin{(t)}\boldsymbol{\sigma}\cdot \bf{ d}(\bf{k}).
\end{equation} Writing this in matrix form one obtains \begin{equation} U_{\mathcal{C}}= \begin{pmatrix} x_0+i x_3 & ix_1 + x_2\\ i x_1 - x_2 & x_0 - i x_3\\ \end{pmatrix}, \end{equation} where we already suggestively identified the elements $\{x_0,x_1,x_2,x_3\}=\{\cos{t}, -\sin(t)d_1, -\sin(t)d_2, -\sin(t)d_3\}$ in terms of the components $d_i(\bf{k})$ of ${\bf d}(\bf{k})$. Indeed, assuming that upon spectral flattening ${\bf d}(\bf{k})$ is a unit vector, we can directly see that, by relating the above to a versor with components $x_\alpha$, the action of the time evolution operator on the initial state $\Psi_0(\bf{k})$ is simply to induce a rotation around the vector ${\bf d}(\bf{k})$ with a period that is set by $|\bs{d}|$. In fact, the above identification is the standard one relating a quaternion of unit norm to an $SU(2)$ matrix representation. Starting from a trivial $\Psi_0(\bf{k})$, we quench the system suddenly with a non-trivial Hamiltonian so that the state evolves as $\Psi({\bs k},t)=U_\mathcal{C}\Psi_0(\bf{k})$. One can then map the evolving state back to the Bloch sphere upon considering $\hat{p}=(\Psi^{\dagger}({\bs k},t)\sigma_x\Psi({\bs k},t),\Psi^{\dagger}({\bs k},t)\sigma_y\Psi({\bs k},t),\Psi^{\dagger}({\bs k},t)\sigma_z\Psi({\bs k},t))^{\top}$, thereby establishing a Hopf map from $\{k_x,k_y,t\}$, which is identified with $S^3$, to the Bloch sphere constituting the $S^2$ \cite{Wangchern_2017}. This can be directly checked from the above formulae by taking, e.g.~$\Psi_0=(1,0)^{\top}$, which gives \begin{equation} \hat{p}= \begin{pmatrix} 2x_1x_3-2x_0x_2\\ 2x_0x_1+2x_2x_3\\ x_0^2+x_3^2-x_1^2-x_2^2 \end{pmatrix}, \end{equation} thus parameterizing the Hopf map with the last row (or column upon taking a minus sign in the definition of $x_\alpha$) of Eq.~\eqref{eq::rotationversorrep}. Similarly, one can verify that taking $\Psi_0({\bs k})=(0,1)^{\top}$ results in a similar expression.
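This two-band identification is readily verified numerically; the sketch below (an illustration with an arbitrarily chosen unit vector $\bs d$) evolves $\Psi_0=(1,0)^\top$ with $U_{\mathcal C}$ and compares $\hat p$ with the last row of Eq.~\eqref{eq::rotationversorrep}.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# a normalized d(k) at some fixed k (flattened two-band Hamiltonian, |d| = 1)
d = np.array([0.36, 0.48, 0.80])
H = d[0]*sx + d[1]*sy + d[2]*sz

t = 0.8
U = np.cos(t)*np.eye(2) - 1j*np.sin(t)*H     # U = e^{-itH}, valid since H^2 = 1
psi = U @ np.array([1.0, 0.0])               # trivial initial state (1, 0)^T

p = np.real([psi.conj() @ s @ psi for s in (sx, sy, sz)])

x0, x1, x2, x3 = np.cos(t), -np.sin(t)*d[0], -np.sin(t)*d[1], -np.sin(t)*d[2]
# p equals the last row of the versor rotation matrix R_v, as claimed
assert np.allclose(p, [2*x1*x3 - 2*x0*x2,
                       2*x0*x1 + 2*x2*x3,
                       x0**2 + x3**2 - x1**2 - x2**2])
assert np.isclose(np.linalg.norm(p), 1.0)    # p lies on the Bloch sphere
```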
This, therefore, shows that starting from an arbitrary normalized initial state, the first Hopf map is realized with the construction of Eq.~\eqref{eq::rotationversorrep}. Most interestingly, the Hopf map directly exposes the non-trivial Chern number of the quench Hamiltonian as it implies a non-trivial Hopf invariant $\cal{H}$~\cite{Wangchern_2017,ChernHopf_Yu}, which is also manifested in non-trivial linking of the trajectories under the inverse image of $\cal{H}$ that can be measured in experiments \cite{WeiPan18_PRL_hopfExp,Tarnowski19_NatCom, YiPan19_arx_hopfTori}. We note that the quaternion description also naturally captures quenches from topologically non-trivial to trivial Hamiltonians. \section{Quench dynamics in three-band models having non-trivial Euler class}\label{sec:quench} We now turn our attention to further detailing the main subject, topological aspects and dynamics of quenches involving non-trivial Euler class Hamiltonians. We thus assume $C_2{\cal T}$-symmetry in all instances, which is in fact a rather mild requirement and hence does not impose a serious limitation with regard to experimental implementation. \parag{Quenching with non-trivial Euler Hamiltonian}-- We first consider quenching an initial state with a Hamiltonian having a non-trivial Euler class in which the third band $|n({\bf k})\rangle$ has the highest energy, topping the other two degenerate bands by an energy gap. We then return to the inverted situation, with $| n({\bf k})\rangle$ being the bottom band, at the end of this section. As shown in the main text, upon spectral flattening, the Hamiltonian takes the form of Eq.~\eqref{eq::specflat;app}. A non-trivial Euler class, in analogy to the Chern number, is then manifested by the geometrical interpretation that $n(\bs k)$ traces the unit sphere, in fact twice as compared to the Chern number case.
Moreover, we see that Hamiltonian \eqref{eq::specflat;app} physically represents a rotation by angle $\pi$ around the vector $n(\bs k)$. Indeed, the quaternion description in this case reads $v=\cos(\pi/2)+\sin(\pi/2)\{n_1i+n_2j+n_3k\}$ in terms of the components of $n(\bs k)=(n_1,n_2,n_3)^\top$. As a result, the time evolution operator $U$ still assumes the simple form, \begin{equation} U=e^{-itH}=\cos{(t)}-i\sin{(t)}H(\bs{k}), \end{equation} reminiscent of the two-band case. Starting with a trivial normalized state $\Psi_0(\bs{k})$ we then want to appeal to the above Hopf construction. In this regard we first focus on the case $\Psi_0({\bs k})=(1,0,0)^{\top}$. Applying $U$ on $\Psi_0({\bs k})$ we obtain \begin{equation} \Psi({\bs k},t)= \begin{pmatrix} \cos(t)-i\sin(t)(2n_1^2-1)\\ -i\sin(t)2n_1n_2\\ -i\sin(t)2n_1n_3\\ \end{pmatrix}. \end{equation} A priori, it seems hard to relate this to the aforementioned Hopf construction. However, recall that we are free to reparametrize the Hopf map upon applying rotations to the initial vector ${\bf t}$ as in Eq. \eqref{eq::rotationversorrep}. Additionally, $H$ is nothing but a rotation of $\Psi_0({\bs k})$ around $\bs n(\bs k)$. We therefore relabel $(2n_1^2-1, 2n_1n_2, 2n_1n_3)$ as $(a_1,a_2,a_3)$, which evidently still amounts to a unit vector. Consequently, we thus find the following form of the time-evolved state \begin{equation}\label{eq:::app::evolvpsi} \Psi({\bs k},t)= \begin{pmatrix} \cos(t)-i\sin(t)a_1\\ -i\sin(t)a_2\\ -i\sin(t)a_3\\ \end{pmatrix}. \end{equation} With Eq.~\eqref{eq:::app::evolvpsi} in hand, we subsequently define \begin{widetext} \begin{equation} \mu_x= \begin{pmatrix} 0 & i & 1\\ -i & 0 & 0\\ 1 & 0 & 0 \end{pmatrix}, \qquad \mu_y= \begin{pmatrix} 0 & 1 & -i\\ 1 & 0 & 0\\ i & 0 & 0 \end{pmatrix}, \qquad \mu_z= \begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{pmatrix}.
\end{equation} \end{widetext} These matrices then project $\Psi(t)$ back to a `Bloch vector' $\hat{p}\in S^2$ in the desired manner, upon contracting \begin{equation}\label{eq::Bloch vector} \hat{p}=(\Psi^{\dagger}(t)\mu_x\Psi(t), \Psi^{\dagger}(t)\mu_y\Psi(t), \Psi^{\dagger}(t)\mu_z\Psi(t))^{\top}. \end{equation} Indeed, identifying $\{x_0,x_1,x_2,x_3\}=\{\cos{t}, -\sin(t)a_1, -\sin(t)a_2, -\sin(t)a_3\}$, we observe that this parametrizes the first Hopf map by the first column of Eq. \eqref{eq::rotationversorrep}. A few remarks on the above are in order. First, we note that multiplying the above matrices with rotations (by an angle $\pi/2$) that interchange the bottom two components merely interchanges the components of the Hopf parameterization, thus preserving the map. Secondly, we observe a close analogy to the two-band case. A quaternion can alternatively be rewritten as two complex numbers. That is, we can define $q=z_1+z_2j$ in terms of $z_1=x_0+ix_1$ and $z_2=-x_2+ix_3$. Interpreting these numbers as a vector $\zeta=(z_1,z_2)^{\top}$, we obtain the same expression for the Hopf map upon replacing the ${\bs \mu}$-matrices in Eq. \eqref{eq::Bloch vector} with the respective standard Pauli matrices ${\bs \sigma}$ and the three-vector $\Psi({\bs k},t)$ with the two-vector $\zeta({\bs k},t)$. Thirdly, on a related note, we see that these identifications can similarly be established for the other basis vectors taken as the initial state, $\Psi_0({\bs k})=(0,1,0)^{\top}$ and $\Psi_0({\bs k})=(0,0,1)^{\top}$. For these choices we find respectively, \begin{equation} \Psi({\bs k},t)= \begin{pmatrix} -i\sin(t)b_1\\ \cos(t)-i\sin(t)b_2\\ -i\sin(t)b_3\\ \end{pmatrix}, \end{equation} and \begin{equation} \Psi({\bs k},t)= \begin{pmatrix} -i\sin(t)c_1\\ -i\sin(t)c_2\\ \cos(t)-i\sin(t)c_3\\ \end{pmatrix}, \end{equation} where $(b_1,b_2,b_3)=(2n_2n_1, 2n_2^2-1, 2n_2n_3)$ and $(c_1,c_2,c_3)=(2n_3n_1, 2n_3n_2, 2n_3^2-1)$.
Evidently, the Hopf map can then be parametrized in an analogous manner. We nonetheless need to take into account that the two purely imaginary components, constituting the second complex number when written as the two-vector $\zeta({\bs k},t)$, are shuffled. In other words, for $\Psi_0({\bs k})=(0,1,0)^{\top}$ or $\Psi_0({\bs k})=(0,0,1)^{\top}$, we need to respectively replace $\mu_i\mapsto u\mu_i u$ or $\mu_i\mapsto v\mu_i v$, where \begin{equation} u= \begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}, \text{ and } \: v= \begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{pmatrix}. \end{equation} The general parametrization for any normalized initial state can then similarly be achieved, in terms of the general $\pi$-rotated vector ${\bs a}({\bs k})$. Upon the relabelling in terms of the $\bs{a,b,c}$-vectors, the properties and consequences of the outlined Hopf map are thus determined by the inner topological structure, irrespective of the initial state (cf.~Fig.~\ref{fig:App2_tomography}). That is, it amounts to a $\pi$-rotated vector that wraps and unwraps the sphere when ${\bs n}({\bs k})$ traces the sphere once, as detailed in the main text. Finally, let us close this section by commenting on the inverted models, in which the third non-degenerate band is at the bottom of the spectrum. In this case the flattened Hamiltonian is of the form, \begin{equation}\label{eq::specflatINV;app} H(\bs{k}) = -2\, \bs{n}(\bs{k})\cdot \bs{n}(\bs{k})^\top+\mathbb{I}_3. \end{equation} Repeating the procedure above, we notice that the effect of this change in Hamiltonian is to induce an extra minus sign in the $x_1,x_2$ and $x_3$ components. Hence, we observe that this merely changes the parametrization of the Hopf map by interchanging the respective row with a column in Eq. \eqref{eq::rotationversorrep}.
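The $\mu$-matrix contraction can likewise be checked numerically. The sketch below (our illustration, with a sample unit vector $\bs n$) evolves $\Psi_0=(1,0,0)^\top$ under the flattened Euler Hamiltonian and confirms that $\hat p$ is a unit vector built from the entries of the first column of Eq.~\eqref{eq::rotationversorrep}, up to an interchange of components of the kind discussed above.

```python
import numpy as np

mu_x = np.array([[0, 1j, 1], [-1j, 0, 0], [1, 0, 0]])
mu_y = np.array([[0, 1, -1j], [1, 0, 0], [1j, 0, 0]])
mu_z = np.diag([1.0 + 0j, -1.0, -1.0])

n = np.array([0.6, 0.0, 0.8])                # a sample unit vector n(k)
H = 2 * np.outer(n, n) - np.eye(3)           # flattened Euler Hamiltonian

t = 1.1
U = np.cos(t)*np.eye(3) - 1j*np.sin(t)*H     # U = e^{-itH}, since H^2 = 1
psi = U @ np.array([1.0, 0.0, 0.0])          # initial state (1, 0, 0)^T

p = np.real([psi.conj() @ m @ psi for m in (mu_x, mu_y, mu_z)])

a = np.array([2*n[0]**2 - 1, 2*n[0]*n[1], 2*n[0]*n[2]])   # the pi-rotated vector
x0, x1, x2, x3 = np.cos(t), -np.sin(t)*a[0], -np.sin(t)*a[1], -np.sin(t)*a[2]
assert np.allclose(p, [-2*x0*x2 + 2*x1*x3,
                       2*x1*x2 + 2*x0*x3,
                       x0**2 + x1**2 - x2**2 - x3**2])
assert np.isclose(np.linalg.norm(p), 1.0)    # p lands on the Bloch sphere
```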
\section{Tomography in 3-band Euler model}\label{App:Tomography} As explained in the main text, a state tomography can be employed to measure the linking in $(k_x,k_y,t)$-space. We here present the linking structure acquired through the proposed tomography scheme, for another initial state $\Psi_0=(0,0,1)^\top$, to simultaneously illustrate the effect of different initial states. Our tomography scheme involves first applying a $\pi$-pulse with respect to sublattice $B$ to access the two-vector [depending on the initial state, here $z_1$ is identified with the last component of $\Psi(\bs{k},t)$ as it involves the real $x_0$-term, $\zeta=(\Psi_3,\Psi_1+i\Psi_2)^\top$]. Secondly, we quench with the tomography Hamiltonian having flat bands with respect to sublattice $C$, since we start with an initial state completely localized in $C$. Namely, $\mu'_z=v\mu_zv=\mathrm{diag}(-1,-1,1)$ and $H_{tom}=(\omega/2)\mu'_z$. After evolving with $H_{tom}$ for a time $t_{tom}$, we obtain the momentum distribution (which can be measured in time-of-flight (TOF) imaging) as $m(\bs{k},t_{tom})\propto(1+\sin\theta_{\bs{k}}\cos(\phi_{\bs{k}}+\omega t_{tom}))$. The azimuthal angle $\phi_{\bs{k}}$ and the amplitude $\sin\theta_{\bs{k}}$ are calculated by fitting $m(\bs{k},t_{tom})$ with a cosine at each $\bs{k}$ and quench time $t$, for which the results are given in Fig.~\ref{fig:App2_tomography}. \begin{figure} \centering\includegraphics[width=.75\linewidth]{figs/FigApp2_tom.png} \caption{ Linking of the inverse images of $\pm\hat{x}$ for initial state $\Psi_0=(0,0,1)^\top$, where the Bloch vector $\bs{\hat{p}}$ is reconstructed from the momentum distribution via tomography.
} \label{fig:App2_tomography} \end{figure} Note that, for the initial state $(0,0,1)$ the monopole--antimonopole pair now resides in patches aligned at the center and outer circle of the BZ, as can be seen in Fig.~\ref{fig:App2_tomography}, as opposed to the left/right division visible in Fig.~\textcolor{blue}{1} and Fig.~\textcolor{blue}{2} given for $(1,0,0)$. This can also be seen from the stereographic projection of the components of $H\Psi_0({\bs k})$. We emphasize that despite the different alignments of the BZ patches, the clear separation can be easily seen in the azimuthal angle profile given in Fig.~\ref{fig:App2_tomography}b, and the monopole--antimonopole pair is topologically stable. \section{String and monopole charges on the real projective plane}\label{App:TopAsp} We recall some aspects of the real projective plane. Turning first to the Euler Hamiltonian, we note that the models have a gauge symmetry, relating each eigenstate spanning the dreibein $E=\{\ket{u_1(\bs k)},\ket{u_2(\bs k)},\ket{u_3(\bs k)}\}$ with its negative partner. This therefore relates to the real projective plane ${\bs R \bs P^2}$, as in a biaxial nematic \cite{Nissinen2016, Beekman20171, volovik2018investigation, Kamienrmp, Prx2016}. The string charges corresponding to the first homotopy group $\pi_1$ are thus characterized by $\pi_1({\bs R \bs P}^2)={\bs Z}_2$, whereas the monopole charges are given by the second homotopy group $\pi_2({\bs R \bs P}^2)={\bs Z}$. The non-triviality of the string charges dictates caution in considering maps from the torus by using intuition from the sphere. Their presence ensures the existence of weak invariants. In each of the two directions the weak index can take either a trivial or non-trivial value, giving four possibilities. This affects the monopole charge possibilities.
Namely, when there are nonzero weak invariants in either direction or both, an even multiple of monopole charges can adiabatically be split into two equal pieces and transferred around the non-trivial direction. The action of $\pi_1$ then ensures that one half acquires the opposite charge and can thus annihilate the other half, rendering a ${\bs Z}_2$ classification. We therefore restrict attention to systems having trivial string charges. In fact, the models presented are designed to meet this criterion. However, even in this case, the direction is not defined and hence skyrmions and anti-skyrmions cannot be discriminated in {\it an absolute sense}, showing that these charges are characterized by the absolute value of the winding number. This also relates to Alice dynamics, as in certain scenarios the monopole can be adiabatically deformed into an Alice string, which changes the sign of the charge upon passing through \cite{AlicestringVolovik,Schwarz_alice}. Details of these features are beyond the scope of this paper and will be reported elsewhere. To construct the Euler class, an orientation must be fixed, which physically amounts to specifying a handedness. Once this has been taken into account, the sphere analogy becomes appropriate. \section{Specific matrix parametrization of the models}\label{App:matrices Hamiltonian} We here give the explicit forms of the tunneling matrix elements given in Eq. \eqref{eq::modelparameters}. The Euler Hamiltonian with $\xi=2$ can be constructed by restricting to $N=2$-neighbor tunneling~\cite{newpaperfragile}.
The eight $t_j(\alpha, \beta)$ multiplying the Gell-Mann matrices are $5\times 5$ matrices given as: \begin{widetext} \begin{equation} t_1= \begin{pmatrix} 0.0089 - 0.0151i & -0.0761 + 0.0309i & -0.0025 - 0.0076i & 0.0811 - 0.0158i & -0.0139 + 0.0000i\\ -0.0761 + 0.0309i & -0.1205 - 0.0467i & 0.0025 + 0.0233i & 0.1155 + 0.0000i & 0.0811 + 0.0158i\\ -0.0025 - 0.0076i & 0.0025 + 0.0233i & -0.0025 + 0.0000i & 0.0025 - 0.0233i & -0.0025 + 0.0076i\\ 0.0811 - 0.0158i & 0.1155 + 0.0000i & 0.0025 - 0.0233i & -0.1205 + 0.0467i & -0.0761 - 0.0309i\\ -0.0139 + 0.0000i & 0.0811 + 0.0158i & -0.0025 + 0.0076i & -0.0761 - 0.0309i & 0.0089 + 0.0151i \end{pmatrix} \end{equation} \begin{equation*} t_3= \begin{pmatrix} -0.0025 & -0.0883& -0.1727& -0.0883& -0.0025\\ 0.0833 & -0.0025 & 0.0375 & -0.0025 & 0.0833\\ 0.1677 &-0.0425 & -0.0025 & -0.0425 & 0.1677\\ 0.0833 & -0.0025 & 0.0375 &-0.0025 & 0.0833\\ -0.0025 &-0.0883 &-0.1727 &-0.0883 &-0.0025 \end{pmatrix}, \quad t_4= \begin{pmatrix} 0.0278i& 0.2275i& 0.4917i& 0.2275i& 0.0278i\\ 0.0636i & 0.1142i & - 0.2810i& 0.1142i& 0.0636i\\ 0 &0& 0& 0& 0\\ -0.0636i & - 0.1142i& 0.2810i & - 0.1142i & - 0.0636i\\ -0.0278i & - 0.2275i & - 0.4917i & - 0.2275i & - 0.0278i\\ \end{pmatrix} \end{equation*} \begin{equation*} t_6= \begin{pmatrix} 0.0278i& 0.0636i& 0& - 0.0636i& - 0.0278i\\ 0.2275i & 0.1142i & 0 & - 0.1142i & - 0.2275i\\ 0.4917i & - 0.2810i& 0 & 0.2810i & - 0.4917i\\ 0.2275i & 0.1142i & 0 &- 0.1142i &- 0.2275i\\ 0.0278i &0.0636i &0 &- 0.0636i &- 0.0278i \end{pmatrix}, \quad t_8= \begin{pmatrix} 0& -0.1879& -0.4330& -0.1879& 0\\ -0.1879& 0& 0.3083 & 0 & -0.1879\\ -0.4330 & 0.3083 & 0 &0.3083 & -0.4330\\ -0.1879 & 0 & 0.3083 & 0 &-0.1879\\ 0 & -0.1879 & -0.4330 & -0.1879 & 0 \end{pmatrix}, \end{equation*} whereas $t_2=t_5=t_7=0$ as they correspond to the complex Gell-Mann matrices. Similarly, the Euler Hamiltonian with $\xi=4$ can be constructed by restricting $N=3$-neighbor tunneling. 
Accordingly, $t_j(\alpha, \beta)$ are $7\times 7$ matrices given by: \begin{multline*} t_1=\\ \!\!\!\!\!\!\!\!\!\!\!\! \begin{pmatrix} 0 & 0.0626 - 0.0037i& 0.1137 - 0.0240i& - 0.0001i& -0.1137 + 0.0242i& -0.0626 + 0.0036i& 0.0001i\\ -0.0626 + 0.0037i& 0 & 0.0421 + 0.0278i& - 0.0037i& -0.0421 - 0.0204i & - 0.0073i& 0.0626 + 0.0036i\\ -0.1137 + 0.0240i & -0.0421 - 0.0278i & 0 & - 0.0241i & 0.0482i & 0.0421 - 0.0204i& 0.1137 + 0.0242i\\ 0.0001i & 0.0037i &0.0241i & 0 & - 0.0241i & - 0.0037i& - 0.0001i\\ 0.1137 - 0.0242i &0.0421 + 0.0204i &- 0.0482i & 0.0241i & 0 &-0.0421 + 0.0278i& -0.1137 - 0.0240i\\ 0.0626 - 0.0036i &0.0073i & -0.0421 + 0.0204i &0.0037i & 0.0421 - 0.0278i& 0& -0.0626 - 0.0037i\\ 0.0001i &-0.0626 - 0.0036i & -0.1137 - 0.0242i & 0 &0.1137 + 0.0240i & 0.0626 + 0.0037i& 0 \end{pmatrix} \end{multline*} \begin{equation*} t_3= \begin{pmatrix} -0.0298& -0.0125& 0.0432& 0.0432& 0.0432& -0.0125& -0.0298\\ -0.0125 & -0.1652 & -0.0550 & 0.0753 & -0.0550 & -0.1652 & -0.0125\\ 0.0432 & -0.0550 & 0.1086 & -0.0601 & 0.1086 & -0.0550 & 0.0432\\ 0.0432 & 0.0753 &-0.0601 &-0.0073 &-0.0601 & 0.0753 & 0.0432\\ 0.0432& -0.0550& 0.1086& -0.0601& 0.1086& -0.0550& 0.0432\\ -0.0125 & -0.1652 & -0.0550 & 0.0753 & -0.0550 & -0.1652 & -0.0125\\ -0.0298 & -0.0125 & 0.0432 & 0.0432 & 0.0432 & -0.0125 & -0.0298 \end{pmatrix} \end{equation*} \begin{equation*} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! t_4\!\!=\!\!\! \begin{pmatrix}\! 0& -0.0531& -0.1480& -0.1995& -0.1480& -0.0531& 0\\ 0.0531& 0& -0.0655& -0.0840& -0.0655& 0& 0.0531\\ 0.1480 & 0.0655 & 0 & 0.3123 & 0 & 0.0655 & 0.1480\\ 0.1995 & 0.0840 & -0.3123 & 0 & -0.3123 & 0.0840 & 0.1995\\ 0.1480 & 0.0655 & 0 & 0.3123 & 0 & 0.0655 & 0.1480\\ 0.0531 & 0 & -0.0655 & -0.0840 & -0.0655 & 0 &0.0531\\ 0 &-0.0531 &-0.1480 &-0.1995 &-0.1480 &-0.0531 & 0 \end{pmatrix} \! t_6\!\!=\!\!\!\! 
\begin{pmatrix} -0.0508& -0.0735& 0.0490& 0& -0.0490& 0.0735& 0.0508\\ -0.0735& -0.1579& -0.2956& 0& 0.2956& 0.1579& 0.0735\\ 0.0490 & -0.2956 & 0.2493 & 0 & -0.2493 & 0.2956 & -0.0490\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ -0.0490 & 0.2956 &-0.2493 & 0 & 0.2493 &-0.2956 & 0.0490\\ 0.0735 &0.1579 &0.2956 & 0 &-0.2956 &-0.1579 &-0.0735\\ 0.0508 &0.0735 & -0.0490 & 0 &0.0490 &-0.0735 &-0.0508 \end{pmatrix} \end{equation*} \begin{equation} t_8= \begin{pmatrix} 0& -0.0461& 0& 0.1148& 0& -0.0461& 0\\ -0.0461& 0& -0.1879& -0.4330& -0.1879& 0& -0.0461\\ 0 & -0.1879 & 0 & 0.3083 & 0 & -0.1879 & 0\\ 0.1148 & -0.4330 & 0.3083 & 0 & 0.3083 & -0.4330 & 0.1148\\ 0 &-0.1879 & 0 & 0.3083 & 0 &-0.1879 & 0\\ -0.0461 & 0 &-0.1879 &-0.4330 &-0.1879 & 0 &-0.0461\\ 0 &-0.0461 & 0 &0.1148 & 0 &-0.0461 & 0 \end{pmatrix}. \end{equation} \end{widetext} \end{document}
\section{Introduction} The transition from quantum to classical physics and the classicality of quantum systems continue to be among the most interesting problems in many fields of physics, for both conceptual and experimental reasons \cite{gi96,pa01,zu03}. Two conditions are essential for the classicality of a quantum system \cite{mo90}: a) quantum decoherence (QD), that is, the irreversible, uncontrollable and persistent formation of a quantum correlation (entanglement) of the system with its environment \cite{ali}, expressed by the damping of the coherences present in the quantum state of the system; the off-diagonal elements of the density matrix of the system decay below a certain level, so that this density matrix becomes approximately diagonal; and b) classical correlations (CC), expressed by the fact that the Wigner function of the quantum system has a peak which follows the classical equations of motion in phase space with a good degree of approximation, that is, the quantum state becomes peaked along a classical trajectory. The necessity and sufficiency of both QD and CC as conditions of classicality are still a subject of debate. Neither condition has a universal character, so they are not necessary for all physical models. The temperature of the environment plays an important role in this discussion, and it is therefore worthwhile to take into account the differences between the low and high temperature regimes. For example, purely classical systems at very high temperatures are described by a classical Fokker-Planck equation which does not follow any trajectory in phase space (for very small kinetic energy compared to the thermal energy, the probability distribution becomes essentially independent of momentum), so in this case CC are not necessary.
Likewise, one can have classical behaviour if the coherences are negligible, without having strong CC (for example, in the case of a classical gas at finite temperature), and the lack of strong correlations between the coordinate and its canonical momentum does not necessarily mean that the system is quantum. On the other hand, the condition of CC is not sufficient for a system to become classical -- although the Wigner function can show a sharp correlation in phase space, the quantum coherence never vanishes for a closed system with unitary evolution. Likewise, in the low temperature quantum regime one can observe strong CC. For example, in the case of a purely damped quantum harmonic oscillator (at zero temperature), initial coherent states remain coherent and perfectly follow the classical trajectories of a damped oscillator, yet CC alone are not sufficient for classicality. In the last two decades it has become increasingly clear that classicality is an emergent property of open quantum systems, since both main features of this process -- QD and CC -- strongly depend on the interaction between the system and its external environment \cite{zu03,pa93,zu91}. The main purpose of this work is to study QD and CC for a harmonic oscillator interacting with an environment in the framework of the Lindblad theory for open quantum systems. We determine the degree of QD and CC and the possibility of their simultaneous realization for a system consisting of a harmonic oscillator in a thermal bath. It is found that the system manifests a QD which increases with time and temperature, whereas CC become less and less strong with increasing time and temperature.
\section{Lindblad master equation for the harmonic oscillator in coordinate and Wigner representation} The irreversible time evolution of an open system is described by the following general quantum Markovian master equation for the density operator $\rho(t)$ \cite{l1}: \bea{d \rho(t)\over dt}=-{i\over\hbar}[ H, \rho(t)]+{1\over 2\hbar} \sum_{j}([ V_{j} \rho(t), V_{j}^\dagger ]+[ V_{j}, \rho(t) V_{j}^\dagger ]).\label{lineq}\eea $H$ is the Hamiltonian of the system and $V_{j},$ $ V_{j}^\dagger $ are operators on the Hilbert space of $H$, which model the environment. In order to obtain, for the damped quantum harmonic oscillator, equations of motion as close as possible to the classical ones, the two possible operators $V_{1}$ and $ V_{2}$ are taken as linear polynomials in coordinate $q$ and momentum $p$ \cite{ss,rev} and the harmonic oscillator Hamiltonian $H$ is chosen of the general quadratic form \bea H=H_{0}+{\mu\over 2}(qp+pq), ~~~ H_{0}={1\over 2m}p^2+{m\omega^2\over 2} q^2. \label{ham} \eea With these choices the master equation (\ref{lineq}) takes the following form \cite{ss,rev}: \bea {d \rho \over dt}=-{i\over \hbar}[ H_{0}, \rho]- {i\over 2\hbar}(\lambda +\mu) [ q, \rho p+ p \rho]+{i\over 2\hbar}(\lambda -\mu)[ p, \rho q+ q \rho] \nonumber\\ -{D_{pp}\over {\hbar}^2}[ q,[ q, \rho]]-{D_{qq}\over {\hbar}^2} [ p,[ p, \rho]]+{D_{pq}\over {\hbar}^2}([ q,[ p, \rho]]+ [ p,[ q, \rho]]). ~~~~\label{mast} \eea In the particular case when the asymptotic state is a Gibbs state $\rho_G(\infty)=e^{-{ H_0\over kT}}/{\rm Tr}e^{-{ H_0\over kT}},$ the quantum diffusion coefficients $D_{pp},D_{qq},$ $D_{pq}$ and the dissipation constant $\lambda$ satisfy the relations \cite{ss,rev} \bea D_{pp}={\lambda+\mu\over 2}\hbar m\omega\coth{\hbar\omega\over 2kT}, ~~D_{qq}={\lambda-\mu\over 2}{\hbar\over m\omega}\coth{\hbar\omega\over 2kT}, ~~D_{pq}=0, \label{coegib} \eea where $T$ is the temperature of the thermal bath. 
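In the high temperature limit the thermal coefficients (\ref{coegib}) reproduce the classical diffusion value $D_{pp}\to 2m\lambda kT$ for $\lambda=\mu$; this can be checked with a minimal symbolic sketch (variable names are our own choice):

```python
import sympy as sp

hbar, m, omega, k, T, lam, mu = sp.symbols('hbar m omega k T lambda mu', positive=True)

# thermal diffusion coefficient, Eq. (coegib)
Dpp = (lam + mu)/2*hbar*m*omega*sp.coth(hbar*omega/(2*k*T))

# high-temperature behaviour: coth(x) ~ 1/x, so Dpp ~ (lambda+mu) m k T
slope = sp.limit(Dpp/T, T, sp.oo)
assert sp.simplify(slope - (lam + mu)*m*k) == 0
# for lambda = mu this is the classical value D_pp -> 2 m lambda k T
assert sp.simplify(slope.subs(mu, lam) - 2*m*lam*k) == 0
```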
In the Markovian regime the harmonic oscillator master equation which satisfies the complete positivity condition cannot simultaneously satisfy translational invariance and detailed balance (which assures an asymptotic approach to the canonical thermal equilibrium state). The necessary and sufficient condition for translational invariance is $\lambda=\mu$ \cite{ss,rev}. In this case the equations of motion for the expectation values of coordinate and momentum are exactly the same as the classical ones. If $\lambda\neq \mu,$ then translational invariance is violated, but the canonical equilibrium state is preserved. The asymptotic values $\sigma_{qq}(\infty), \sigma_{pp}(\infty),\sigma_{pq}(\infty)$ do not depend on the initial values $\sigma_{qq}(0),\sigma_{pp}(0),\sigma_{pq}(0)$ and in the case of a thermal bath with coefficients (\ref{coegib}), they reduce to \cite{ss,rev} \bea \sigma_{qq}(\infty)={\hbar\over 2m\omega}\coth{\hbar\omega\over 2kT}, ~~\sigma_{pp}(\infty)={\hbar m\omega\over 2}\coth{\hbar\omega\over 2kT}, ~~\sigma_{pq}(\infty)=0. \label{varinf}\eea In the following we consider a general temperature $T,$ but we should stress that the Lindblad theory is obtained in the Markov approximation, which holds for high temperatures of the environment. At the same time, the semigroup dynamics of the density operator, which must hold for a quantum Markovian process, is valid only in the weak-coupling regime, with the damping $\lambda$ obeying the inequality $\lambda\ll\omega.$ We consider a harmonic oscillator with an initial Gaussian wave function \bea \Psi(q)=({1\over 2\pi\sigma_{qq}(0)})^{1\over 4}\exp[-{1\over 4\sigma_{qq}(0)} (1-{2i\over\hbar}\sigma_{pq}(0))(q-\sigma_q(0))^2+{i\over \hbar}\sigma_p(0)q], \label{ccs}\eea where $\sigma_{qq}(0)$ is the initial spread, $\sigma_{pq}(0)$ the initial covariance, and $\sigma_q(0)$ and $\sigma_p(0)$ are the initial averaged position and momentum of the wave packet.
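As a consistency check, the asymptotic variances (\ref{varinf}) give for the product $\sigma_{qq}(\infty)\sigma_{pp}(\infty)-\sigma_{pq}^2(\infty)$ the equilibrium value $({\hbar^2/4})\coth^2({\hbar\omega/2kT})$, which is used below; a minimal symbolic sketch (variable names are our own choice):

```python
import sympy as sp

hbar, m, omega, k, T = sp.symbols('hbar m omega k T', positive=True)
eps = hbar*omega/(2*k*T)

# asymptotic variances for the thermal bath, Eq. (varinf)
sqq_inf = hbar/(2*m*omega)*sp.coth(eps)
spp_inf = hbar*m*omega/2*sp.coth(eps)
spq_inf = 0

# generalized uncertainty function at t -> infinity
sigma_inf = sqq_inf*spp_inf - spq_inf**2
assert sp.simplify(sigma_inf - hbar**2/4*sp.coth(eps)**2) == 0
```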
The initial state (\ref{ccs}) represents a correlated coherent state (squeezed coherent state) \cite{dodkur} with the variances and covariance of coordinate and momentum \bea \sigma_{qq}(0)={\hbar\delta\over 2m\omega},~~ \sigma_{pp}(0)={\hbar m\omega\over 2\delta(1-r^2)},~~ \sigma_{pq}(0)={\hbar r\over 2\sqrt{1-r^2}}. \label{inw}\eea Here, $\delta$ is the squeezing parameter which measures the spread in the initial Gaussian packet and $r,$ with $|r|<1,$ is the correlation coefficient at time $t=0.$ The initial values (\ref{inw}) correspond to a minimum uncertainty state, since they fulfil the generalized uncertainty relation \bea \sigma_{qq}(0)\sigma_{pp}(0)-\sigma_{pq}^2(0) ={\hbar^2\over 4}.\label{gen0}\eea For $\delta=1$ and $r=0$ the correlated coherent state becomes a Glauber coherent state. From Eq. (\ref{mast}) we derive the evolution equation in coordinate representation: \bea {\partial\rho\over\partial t}={i\hbar\over 2m}({\partial^2\over\partial q^2}- {\partial^2\over\partial q'^2})\rho-{im\omega^2\over 2\hbar}(q^2-q'^2)\rho\nonumber\\ -{1\over 2}(\lambda+\mu)(q-q')({\partial\over\partial q}-{\partial\over\partial q'})\rho+{1\over 2}(\lambda-\mu)[(q+q')({\partial\over\partial q}+{\partial\over\partial q'})+2]\rho \nonumber\\ -{D_{pp}\over\hbar^2}(q-q')^2\rho+D_{qq}({\partial\over\partial q}+{\partial\over \partial q'})^2\rho -{2iD_{pq}\over\hbar}(q-q')( {\partial\over\partial q}+{\partial\over\partial q'})\rho.\label{cooreq}\eea For the case of a thermal bath with coefficients (\ref{coegib}) the Wigner distribution function $W(q,p,t)$ satisfies the following Fokker-Planck-type equation: \bea {\partial W\over\partial t}= -{p\over m}{\partial W\over\partial q} +m\omega^2 q{\partial W\over\partial p} +(\lambda+\mu){\partial\over\partial p}(pW) +(\lambda-\mu){\partial\over\partial q}(qW) \nonumber \\ +{\hbar\over 2}\coth{\hbar\omega\over 2kT}[(\lambda+\mu)m\omega{\partial^2 W\over\partial p^2} +{\lambda-\mu\over m\omega}{\partial^2 W\over\partial
q^2}].~~~~~~~~~~~~~~~~~~~ \label{wigeq}\eea The first two terms on the right-hand side of both these equations generate a purely unitary evolution; they give the usual Liouvillian evolution. The third and fourth terms are the dissipative terms and have a damping effect (exchange of energy with the environment). The last two are noise (diffusive) terms and produce fluctuation effects in the evolution of the system. They promote diffusion in momentum $p$ and coordinate $q$ and generate decoherence in coordinate and momentum, respectively. In the high temperature limit, the quantum Fokker-Planck equation (\ref{wigeq}) with coefficients (\ref{coegib}) becomes the classical Kramers equation ($D_{pp}\to 2m\lambda kT$ for $\lambda=\mu$). The density matrix solution of Eq. (\ref{cooreq}) has the general form of Gaussian density matrices \bea <q|\rho(t)|q'>=({1\over 2\pi\sigma_{qq}(t)})^{1\over 2} \exp[-{1\over 2\sigma_{qq}(t)}({q+q'\over 2}-\sigma_q(t))^2\nonumber\\ -{\sigma(t)\over 2\hbar^2\sigma_{qq}(t)}(q-q')^2 +{i\sigma_{pq}(t)\over \hbar\sigma_{qq}(t)}({q+q'\over 2}-\sigma_q(t))(q-q')+{i\over \hbar}\sigma_p(t)(q-q')],\label{densol} \eea where $\sigma(t)\equiv\sigma_{qq}(t)\sigma_{pp}(t)-\sigma_{pq}^2(t)$ is the Schr\"odinger generalized uncertainty function \cite{unc} ($\sigma_{qq}$ and $\sigma_{pp}$ denote the dispersion (variance) of the coordinate and momentum, respectively, and $\sigma_{pq}$ denotes the correlation (covariance) of the coordinate and momentum). For an initial Gaussian Wigner function (corresponding to a correlated coherent state (\ref{ccs})) the solution of Eq.
(\ref{wigeq}) is \bea W(q,p,t)={1\over 2\pi\sqrt{\sigma(t)}} \exp\{-{1\over 2\sigma(t)}[\sigma_{pp}(t)(q-\sigma_q(t))^2+ \sigma_{qq}(t)(p-\sigma_p(t))^2\nonumber\\ -2\sigma_{pq}(t)(q-\sigma_q(t))(p-\sigma_p(t))] \}.\label{wig} \eea In the case of a thermal bath we obtain the following steady state solution for $t\to\infty$ (we denote $\epsilon\equiv{\hbar\omega\over 2kT}$): \bea <q|\rho(\infty)|q'>=({m\omega\over \pi\hbar\coth\epsilon})^{1\over 2}\exp\{-{m\omega\over 4\hbar}[{(q+q')^2\over\coth\epsilon}+ (q-q')^2\coth\epsilon]\}.\label{dinf}\eea In the long time limit we also have \bea W_{\infty}(q,p)={1\over \pi\hbar\coth \epsilon}\exp\{-{1\over \hbar\coth\epsilon}[m\omega q^2+{p^2\over m\omega}] \}.\label{wiginf} \eea Stationary solutions to the evolution equations obtained in the long time limit are possible as a result of a balance between the wave packet spreading induced by the Hamiltonian and the localizing effect of the Lindblad operators. \section{Quantum decoherence and classical correlations} As already stated, two conditions have to be satisfied in order for a system to be considered classical. The {\it first} condition requires that the system should be in one of a set of relatively permanent states -- states that are least affected by the interaction of the system with the environment -- and that the interference between different states should be negligible. This implies the destruction of the off-diagonal elements representing coherences between quantum states in the density matrix, which is the QD phenomenon. The loss of coherence can be achieved by introducing an interaction between the system and the environment: an initial pure state with a density matrix which contains nonzero off-diagonal terms can non-unitarily evolve into a final mixed state with a diagonal density matrix during the interaction with the environment, as in classical statistical mechanics.
The {\it second} condition requires that the system should have, to a good approximation, an evolution according to classical laws. This implies that the Wigner distribution function has a peak along a classical trajectory, which means that there exist CC between the canonical variables of coordinate and momentum. Of course, the correlation between the canonical variables, necessary to obtain a classical limit, should not violate the Heisenberg uncertainty principle, i.e. the position and momentum should take reasonably sharp values, to a degree consistent with the uncertainty principle. This is possible because the density matrix is not exactly diagonal in position, but has a non-zero width. Using new variables $\Sigma=(q+q')/2$ and $\Delta=q-q',$ the density matrix (\ref{densol}) can be rewritten as \bea \rho(\Sigma,\Delta,t)=\sqrt{\alpha\over \pi}\exp[-\alpha\Sigma^2 -\gamma\Delta^2+i\beta\Sigma\Delta+2\alpha\sigma_q(t)\Sigma +i({\sigma_p(t)\over\hbar}- \beta\sigma_q(t))\Delta-\alpha\sigma_q^2(t)],\label{ccd3}\eea with the abbreviations \bea \alpha={1\over 2\sigma_{qq}(t)},~~\gamma={\sigma(t)\over 2\hbar^2 \sigma_{qq}(t)},~~ \beta={\sigma_{pq}(t)\over\hbar\sigma_{qq}(t)}\label{ccd4}\eea and the Wigner transform of the density matrix (\ref{ccd3}) is \bea W(q,p,t)={1\over 2\pi\hbar}\sqrt{\alpha\over\gamma}\exp\{-{[\hbar\beta (q-\sigma_q(t))-(p-\sigma_p(t))]^2\over 4\hbar^2\gamma}-\alpha (q-\sigma_q(t))^2\}.\label{wigc} \eea a) {\it Degree of quantum decoherence (QD)} The representation-independent measure of the degree of QD \cite{mo90} is given by the ratio of the dispersion $1/\sqrt{2\gamma}$ of the off-diagonal element $\rho(0,\Delta,t)$ to the dispersion $\sqrt{2/\alpha}$ of the diagonal element $\rho(\Sigma,0,t):$ \bea \delta_{QD}={1\over 2}\sqrt{\alpha\over \gamma},\label{qdec}\eea which in our case gives \bea \delta_{QD}(t)={\hbar\over 2\sqrt{\sigma(t)}}.\eea The finite temperature Schr\"odinger generalized uncertainty function, calculated in Ref.
\cite{unc}, has the expression \bea\sigma(t)={\hbar^2\over 4}\{e^{-4\lambda t}[1-(\delta+{1\over\delta(1-r^2)})\coth\epsilon+\coth^2\epsilon] \nonumber\\ +e^{-2\lambda t}\coth\epsilon[(\delta+{1\over\delta(1-r^2)} -2\coth\epsilon){\omega^2-\mu^2\cos(2\Omega t)\over\Omega^2}\nonumber \\ +(\delta-{1\over\delta(1-r^2)}){\mu \sin(2\Omega t)\over\Omega}+{2r\mu\omega(1-\cos(2\Omega t))\over\Omega^2\sqrt{1-r^2}}]+\coth^2\epsilon\}.\label{sunc}\eea In the limit of long times Eq. (\ref{sunc}) yields \bea \sigma(\infty)={\hbar^2\over 4}\coth^2\epsilon,\eea so that we obtain \bea \delta_{QD}(\infty)=\tanh{\hbar\omega\over 2kT},\eea which for high $T$ becomes \bea\delta_{QD}(\infty)={\hbar\omega\over 2kT}.\eea We see that $\delta_{QD}$ decreases, and therefore QD increases, with temperature, i.e. the density matrix becomes more and more diagonal at higher $T$ and the contributions of the off-diagonal elements get smaller and smaller. At the same time the degree of purity decreases and the degree of mixedness increases with $T.$ We have $\delta_{QD}<1$ for $T\neq 0,$ while for $T=0$ the asymptotic (final) state is pure and $\delta_{QD}$ reaches its initial maximum value 1. $\delta_{QD}= 0$ when the quantum coherence is completely lost. Thus, when $\delta_{QD}= 1$ there is no QD, and only if $\delta_{QD}<1$ is there a significant degree of QD, in which case the magnitudes of the elements of the density matrix in the position basis are peaked preferentially along the diagonal $q=q'.$ When $\delta_{QD}\ll 1,$ we have strong QD.
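The temperature dependence of $\delta_{QD}(\infty)$ can be illustrated numerically; the sketch below (with $\hbar=\omega=k=1$, our choice of units) confirms the monotonic decrease with $T$ and the high-temperature limit:

```python
import numpy as np

hbar = omega = k = 1.0          # units chosen for illustration
T = np.linspace(0.1, 10.0, 100)
eps = hbar*omega/(2*k*T)

# sigma(infinity) = (hbar^2/4) coth^2(eps), hence delta_QD(infinity) = tanh(eps)
sigma_inf = (hbar**2/4)/np.tanh(eps)**2
delta_QD = hbar/(2*np.sqrt(sigma_inf))

assert np.allclose(delta_QD, np.tanh(eps))
assert np.all(np.diff(delta_QD) < 0)   # delta_QD decreases, i.e. QD increases, with T
# high-temperature limit: delta_QD ~ hbar*omega/(2 k T)
assert np.isclose(delta_QD[-1], hbar*omega/(2*k*T[-1]), rtol=2e-3)
```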
b) {\it Degree of classical correlations (CC)} In defining the degree of CC, the form of the Wigner function is essential, but not its position around $\sigma_q(t)$ and $\sigma_p(t).$ Consequently, for simplicity we consider zero values for the initial expectation values of the coordinate and momentum, and the expression (\ref{wigc}) of the Wigner function becomes \bea W(q,p,t)={1\over 2\pi\hbar}\sqrt{\alpha\over\gamma}\exp[-{(\hbar\beta q-p)^2\over 4\hbar^2\gamma}-\alpha q^2].\label{wigcs} \eea As a measure of the degree of CC we take the relative sharpness of this peak in the phase space, determined from the dispersion $\hbar\sqrt{2\gamma}$ in $p$ in Eq. (\ref{wigcs}) and the magnitude of the average of $p$ ($p_0=\hbar\beta q $) \cite{mo90}: \bea \delta_{CC}={2\sqrt{\alpha\gamma}\over|\beta|},\label{cor}\eea where we identified $q$ with the dispersion $1/\sqrt{2\alpha}$ of $q.$ $\delta_{CC}$ is a good measure of the "squeezing" of the Wigner function in phase space \cite{mo90}: in the state (\ref{wigcs}), the more "squeezed" the Wigner function is, the more strongly established the CC are. For our case, we obtain \bea \delta_{CC}(t)={\sqrt{\sigma(t)}\over |\sigma_{pq}(t)|},\eea where $\sigma(t)$ is given by Eq. (\ref{sunc}) and $\sigma_{pq}(t)$ can be calculated using formulas given in Refs. \cite{ss,rev}: \bea\sigma_{pq}(t)={\hbar\over 4\Omega^2}e^{-2\lambda t}\{[\mu\omega(2\coth\epsilon-\delta-{1\over\delta(1-r^2)}) -{2\omega^2r\over\sqrt{1-r^2}}]\cos(2\Omega t)\nonumber \\ +\omega\Omega(\delta-{1\over\delta(1-r^2)})\sin(2\Omega t)+\mu\omega(\delta+{1\over\delta(1-r^2)}-2\coth\epsilon)+{2\mu^2r \over\sqrt{1-r^2}}\}.\label{pqvar}\eea When $\delta_{CC}$ is of the order of unity, we have a significant degree of classical correlations. The condition of strong CC is $\delta_{CC}\ll 1,$ which assures a very sharp peak in phase space.
Since $\sigma_{pq}(\infty)=0$ in the case of an asymptotic Gibbs state, we get $\delta_{CC}(\infty)\to\infty,$ so that our expression shows no CC at $t\to\infty.$ c) {\it Discussion with Gaussian density matrix and Wigner function} If the initial wave function is Gaussian, then the density matrix (\ref{densol}) and the Wigner function (\ref{wig}) remain Gaussian for all times (with time-dependent parameters which determine their amplitude and spread) and centered along the trajectory given by the solutions of the dissipative equations of motion. This trajectory is exactly classical for $\lambda=\mu$ and only approximately classical when $\lambda-\mu$ is not large. The degree of QD has an evolution which shows that in general QD increases with time and temperature. The degree of CC has a more complicated evolution, but the general tendency is that CC become less and less strong with increasing time and temperature. Since $\delta_{QD}<1$ and $\delta_{CC}$ is of the order of unity for a long enough interval of time, we can say that the considered system interacting with the thermal bath manifests both QD and CC, and a true quantum to classical transition takes place. Dissipation promotes quantum coherences, whereas fluctuation (diffusion) reduces coherences and promotes QD. The balance of dissipation and fluctuation determines the final equilibrium value of $\delta_{QD}.$ The quantum system starts as a pure state, with a Wigner function well localized in phase space (Gaussian form). This state evolves approximately following the classical trajectory (Liouville flow) in phase space and becomes a quantum mixed state during the irreversible process of QD. d) {\it Decoherence time} In order to obtain the decoherence time, we consider the coefficient $\gamma$ (\ref{ccd4}), which measures the contribution of the non-diagonal terms in the density matrix (\ref{ccd3}).
For short times ($\lambda t\ll 1, \Omega t\ll 1$), we have: \bea \gamma(t)={m\omega\over 4\hbar\delta}\{1+2[\lambda(\delta+{r^2\over\delta(1-r^2)})\coth\epsilon +\mu(\delta-{r^2\over\delta(1-r^2)})\coth\epsilon-\lambda-\mu-{\omega r\over\delta\sqrt{1-r^2}}]t\}\label{td}\eea (the overall sign is fixed by the requirement $\gamma>0$ following from (\ref{ccd4}), which at $t=0$ gives $\gamma(0)=m\omega/4\hbar\delta$). From here we obtain that the quantum coherences in the density matrix decay exponentially at a rate given by \bea 2[\lambda(\delta+{r^2\over\delta(1-r^2)})\coth\epsilon +\mu(\delta-{r^2\over\delta(1-r^2)})\coth\epsilon-\lambda-\mu-{\omega r\over\delta\sqrt{1-r^2}}]\eea and the decoherence time scale is therefore \bea t_{deco}={1\over 2[\lambda(\delta+{r^2\over\delta(1-r^2)})\coth\epsilon +\mu(\delta-{r^2\over\delta(1-r^2)})\coth\epsilon-\lambda-\mu-{\omega r\over\delta\sqrt{1-r^2}}]}.\label{tdeco1}\eea The decoherence time depends on the temperature $T$ and the coupling $\lambda$ (dissipation coefficient) between the system and the environment (through the diffusion coefficient $D_{pp}$), on the squeezing parameter $\delta$ that measures the spread in the initial Gaussian packet, and on the initial correlation coefficient $r.$ We notice that the decoherence time decreases with increasing dissipation, temperature and squeezing. For $r=0$ we obtain: \bea t_{deco}={1\over 2(\lambda+\mu)(\delta\coth\epsilon-1)}\label{tdeco2}\eea and at temperature $T=0$ (where we have to take $\mu=0$), this becomes \bea t_{deco}={1\over 2\lambda(\delta-1)}.\eea We see that when the initial state is the usual coherent state $(\delta=1),$ the decoherence time tends to infinity. This corresponds to the fact that for $T=0$ and $\delta=1$ the coefficient $\gamma$ is constant in time, so that the decoherence process does not occur in this case.
At high temperature, expression (\ref{tdeco1}) becomes (we denote $\tau\equiv {1\over\epsilon}$) \bea t_{deco}={1\over 2[\lambda(\delta+{r^2\over\delta(1-r^2)}) +\mu(\delta-{r^2\over\delta(1-r^2)})]\tau}.\eea If, in addition, $r=0,$ then we obtain \bea t_{deco}={\hbar\omega\over 4(\lambda+\mu)\delta kT}.\eea In Ref. \cite{unc} we determined the time $t_d$ when thermal fluctuations become comparable with quantum fluctuations: \bea t_d={1\over 2[\lambda (\delta+{1\over\delta(1-r^2)})\coth\epsilon+\mu(\delta-{1\over\delta(1-r^2)}) \coth\epsilon-2\lambda]}.\label{t2}\eea At high temperature, expression (\ref{t2}) becomes \bea t_d={1\over 2\tau[\lambda (\delta+{1\over\delta(1-r^2)})+\mu(\delta-{1\over\delta(1-r^2)})]}.\eea As expected, the decoherence time $t_{deco}$ is of the same order as the time $t_d$ after which thermal fluctuations become comparable with quantum fluctuations. We can assert that in the considered case classicality is a temporary phenomenon, which takes place only at some stages of the dynamical evolution, during a definite interval of time \cite{deco}. Due to the dissipative nature of the evolution, the approximately deterministic evolution is no longer valid for very large times, when the localization of the system is affected by the spreading of the wave packet and of the Wigner distribution function.
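The reduction of the general decoherence time (\ref{tdeco1}) to the uncorrelated case (\ref{tdeco2}) can be verified symbolically; a minimal sketch (symbol names are our own choice):

```python
import sympy as sp

lam, mu, delta, omega, eps = sp.symbols('lambda mu delta omega epsilon', positive=True)
r = sp.Symbol('r', real=True)
coth = sp.coth(eps)

# decoherence rate from Eq. (tdeco1)
rate = 2*(lam*(delta + r**2/(delta*(1 - r**2)))*coth
          + mu*(delta - r**2/(delta*(1 - r**2)))*coth
          - lam - mu - omega*r/(delta*sp.sqrt(1 - r**2)))
t_deco = 1/rate

# for r = 0 it must reduce to Eq. (tdeco2)
assert sp.simplify(t_deco.subs(r, 0) - 1/(2*(lam + mu)*(delta*coth - 1))) == 0
```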
\section{Introduction} \label{sec:intro} The first \emph{quantum affine $W$-algebra}, the so-called Zamolodchikov $W_3$-algebra \cite{Zam85}, appeared in the physics literature in the study of 2-dimensional Conformal Field Theory. Further generalizations of this algebra were provided soon after \cite{FL88,LF89}. Physicists thought of these algebras as ``non-linear'' infinite dimensional Lie algebras extending the Virasoro Lie algebra. In \cite{FF90} the affine $W$-algebras $W_\kappa(\mf g,f)$ ($\kappa$ is called the level), for a principal nilpotent element $f\in\mf g$, were described as vertex algebras obtained via a quantization of the Drinfeld-Sokolov Hamiltonian reduction, which was used in \cite{DS85} to construct \emph{classical affine $W$-algebras}. In particular, for $\mf{sl}_2$ one gets the Virasoro vertex algebra, and for $\mf{sl}_3$ Zamolodchikov's $W_3$ algebra. The construction was finally generalized to an arbitrary nilpotent element $f$ in \cite{KRW03,KW04,KW05}. In these papers, affine $W$-algebras were applied to the representation theory of superconformal algebras. Quantum affine $W$-algebras may also be considered as an affinization of \emph{quantum finite $W$-algebras} \cite{Pre02}, which are a natural quantization of Slodowy slices \cite{GG02}. $W$-algebras are at the crossroads of representation theory and mathematical physics and play important roles, to cite just a few applications, in integrable systems \cite{DS85,DSKV13}, Gromov-Witten theory and singularity theory \cite{BM13,ML16}, the geometric Langlands program \cite{FF11,Fre95,FA18,FH16}, and four-dimensional gauge theories \cite{AGT10,SV13,BFN16}. In this note we survey the recent approach to (quantum finite and classical affine) $W$-algebras based on the notion of Lax type operators \cite{DSKV16b,DSKV17,DSKVcl,DSKVfin}. For a review of the approach to (classical) $W$-algebras via generators and relations we refer to \cite{DS14}.
Throughout the paper the base field $\mb F$ is a field of characteristic zero. \smallskip \noindent{\bf Acknowledgments.} This review is based on the talk I gave at the XIth International Symposium Quantum Theory and Symmetry in Montr\'eal. I wish to thank the organizers for the invitation and the hospitality. \section{What is a $W$-algebra?} \label{sec:what} $W$-algebras are a rich family of algebraic structures associated to a pair $(\mf g,f)$ consisting of a finite dimensional reductive Lie algebra $\mf g$ and a nilpotent element $f\in\mf g$. They are obtained via Hamiltonian reduction in different categories: Poisson algebras, associative algebras and (Poisson) vertex algebras. We should think of them as algebraic structures underlying some physical theories with ``extended symmetries''. \subsection{Fundamental physical theories and corresponding fundamental algebraic structures} \label{sec:1.1} In Classical Mechanics the phase space, describing the possible configurations of a physical system, is a Poisson manifold. The physical observables are the smooth functions on the manifold, and they thus form a \emph{Poisson algebra} (PA). By quantizing this theory we go to Quantum Mechanics. The observables become non commutative objects, elements of an \emph{associative algebra} (AA). Hence, the Poisson bracket is replaced by the usual commutator and the phase space is described as a representation of this associative algebra. Going from a finite to an infinite number of degrees of freedom, we pass from Classical and Quantum Mechanics to Classical and Quantum Field Theory respectively. The algebraic structure corresponding to an arbitrary Quantum Field Theory is still to be understood, but in the special case of chiral quantum fields of a 2-dimensional Conformal Field Theory (CFT) the adequate algebraic structure is a \emph{vertex algebra} (VA) \cite{Bor86}, and its quasi-classical limit is known as \emph{Poisson vertex algebra} (PVA) \cite{DSK06}. 
Hence, the algebraic counterparts of the four fundamental frameworks of physical theories can be put in the following diagram: \begin{equation}\label{maxi} \UseTips \xymatrix{ PVA\,\,\,\,\,\, \ar[d]^{\text{Zhu}} \ar@/^1pc/@{.>}[r]^{\text{quantization}} & \ar[l]^{\text{cl.limit}} \,\,\,\,\,\,VA \ar[d]_{\text{Zhu}} \\ PA\,\,\,\,\,\, \ar@/^1pc/@{.>}[u]^{\text{affiniz.}} \ar@/_1pc/@{.>}[r]_{\text{quantization}} & \ar[l]_{\text{cl.limit}} \,\,\,\,\,\, AA \ar@/_1pc/@{.>}[u]_{\text{affiniz.}} } \end{equation} The straight arrows in the above diagram correspond to canonical functors and have the following meaning. Given a filtered AA (respectively VA), its associated graded algebra is a PA (respectively PVA) called its \emph{classical limit}. Moreover, starting from a positive energy VA (respectively PVA) we can construct an AA (resp. PA) governing its representation theory, known as its \emph{Zhu algebra} \cite{Zhu96}. The processes of going from a classical theory to a quantum theory (``quantization''), or from finitely many to infinitely many degrees of freedom (``affinization'') are not functorial and they are thus represented with dotted arrows. \subsubsection{(Poisson) vertex algebras.} PVAs provide a convenient framework to study Hamiltonian partial differential equations. Recall from \cite{BDSK09} that a PVA is a differential algebra, i.e. a unital commutative associative algebra with a derivation $\partial$, endowed with a $\lambda$-\emph{bracket}, i.e. 
a bilinear (over $\mb F$) map $\{\cdot\,_\lambda\,\cdot\}:\,\mc V\times\mc V\to\mc V[\lambda]$, satisfying the following axioms ($a,b,c\in\mc V$): \begin{enumerate}[(i)] \item sesquilinearity: $\{\partial a_\lambda b\}=-\lambda\{a_\lambda b\}$, $\{a_\lambda\partial b\}=(\lambda+\partial)\{a_\lambda b\}$; \item skewsymmetry: $\{b_\lambda a\}=-\{a_{-\lambda-\partial} b\}$; \item Jacobi identity: $\{a_\lambda \{b_\mu c\}\}-\{b_\mu\{a_\lambda c\}\} =\{\{a_\lambda b\}_{\lambda+\mu}c\}$; \item (left) Leibniz rule: $\{a_\lambda bc\}=\{a_\lambda b\}c+\{a_\lambda c\}b$. \end{enumerate} Applying skewsymmetry to the left Leibniz rule we get \begin{enumerate}[(i)] \setcounter{enumi}{4} \item right Leibniz rule: $\{ab_\lambda c\}=\{a_{\lambda+\partial} c\}_\to b+\{b_{\lambda+\partial} c\}_\to a$. \end{enumerate} In (ii) and (v) we use the following notation: if $\{a_\lambda b\}=\sum_{n\in\mb Z_+}\lambda^n \alpha_n\in\mc V[\lambda]$, then $\{a_{\lambda+\partial}b\}_{\rightarrow}c =\sum_{n\in\mb Z_+}\alpha_n(\lambda+\partial)^nc\in\mc V[\lambda]$ and $\{a_{-\lambda-\partial}b\}=\sum_{n\in\mb Z_+}(-\lambda-\partial)^n\alpha_n\in\mc V[\lambda]$ (if there is no arrow we move $\partial$ to the left). We denote by $\tint:\,\mc V\to\mc V/\partial\mc V$ the canonical quotient map of vector spaces. Recall that, if $\mc V$ is a PVA, then $\mc V/\partial\mc V$ carries a well defined Lie algebra structure given by $\{\tint f,\tint g\}=\tint\{f_\lambda g\}|_{\lambda=0}$, and we have a representation of the Lie algebra $\mc V/\partial\mc V$ on $\mc V$ given by $\{\tint f,g\}=\{f_\lambda g\}|_{\lambda=0}$. A \emph{Hamiltonian equation} on $\mc V$ associated to a \emph{Hamiltonian functional} $\tint h\in\mc V/\partial\mc V$ is the evolution equation \begin{equation}\label{ham-eq} \frac{du}{dt}=\{\tint h,u\}\,\,, \,\,\,\, u\in\mc V\,.
\end{equation} The minimal requirement for \emph{integrability} is to have an infinite collection of linearly independent integrals of motion in involution: $$ \tint h_0=\tint h,\,\tint h_1,\,\tint h_2,\,\dots\, \,\text{ s.t. }\,\, \{\tint h_m,\tint h_n\}=0\,\,\text{ for all }\,\, m,n\in\mb Z_{\geq0} \,. $$ In this case, we have the \emph{integrable hierarchy} of Hamiltonian equations $ \frac{du}{dt_n}=\{\tint h_n,u\}\,\,, \,\,\,\, n\in\mb Z_{\geq0} \,. $ \begin{example}\label{exa:vir} The Virasoro-Magri PVA on the algebra of differential polynomials $\mc V=\mb C[u,u',u'',\dots]$ is defined by letting $$ \{u_\lambda u\}=(2\lambda+\partial)u+\lambda^3\,, $$ and extending it to a $\lambda$-bracket on the whole of $\mc V$ using sesquilinearity and the Leibniz rules. Let $\tint h=\tint\frac{u^2}{2}$. Then the corresponding Hamiltonian equation \eqref{ham-eq} is the famous \emph{KdV equation}: $$ \frac{du}{dt}=u'''+3uu' \,. $$ Using the Lenard-Magri scheme of integrability \cite{Mag78} it can be shown that it belongs to an integrable hierarchy. \end{example} \subsubsection*{Vertex algebras.} VAs were introduced in \cite{Bor86}. Following \cite{DSK06}, we provide here a ``Poisson-like'' definition using $\lambda$-brackets. A VA is a (not necessarily commutative or associative) unital algebra $V$ with a derivation $\partial$ endowed with a $\lambda$-bracket $[\cdot_{\lambda}\cdot]:V\times V\longrightarrow V [\lambda]$ satisfying sesquilinearity, skewsymmetry, the Jacobi identity, and, moreover ($a,b,c\in V$): \begin{enumerate}[(i)] \item quasicommutativity: $ab-ba=\int_{-\partial}^0[a_\lambda b]d\lambda$; \item quasiassociativity: $(ab)c-a(bc)=(|_{\lambda=\partial}a)\int_0^\lambda[b_\mu c]d\mu +(|_{\lambda=\partial}b)\int_0^\lambda[a_\mu c]d\mu$; \item noncommutative Wick formula: $[a_\lambda bc]=[a_\lambda b]c+b[a_\lambda c]+\int_0^\lambda[[a_\lambda b]_{\mu}c]d\mu$. \end{enumerate} We refer to \cite{DSK06} for explanations about the notation.
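The KdV computation of Example \ref{exa:vir} can be verified mechanically: in the standard PVA formalism the flow is $u_t=H(\partial)\,\delta h/\delta u$, where $H(\partial)=u'+2u\partial+\partial^3$ is read off from the $\lambda$-bracket and $\delta h/\delta u=u$. A short symbolic sketch (names are our own choice):

```python
import sympy as sp

x = sp.Symbol('x')
u = sp.Function('u')(x)

# Hamiltonian operator H = u' + 2u*d + d^3, read off from {u_lambda u} = (2*lambda + d)u + lambda^3
def H(f):
    return u.diff(x)*f + 2*u*f.diff(x) + f.diff(x, 3)

# the variational derivative of int u^2/2 is u, so the flow is u_t = H(u)
u_t = H(u)
assert sp.simplify(u_t - (u.diff(x, 3) + 3*u*u.diff(x))) == 0   # the KdV equation
```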
As before, we denote by $\tint:\,V\to V/\partial V$ the canonical quotient map of vector spaces. If $V$ is a VA, then $V/\partial V$ carries a well-defined Lie algebra structure given by $[\tint f,\tint g]=\tint[f_\lambda g]|_{\lambda=0}$, and we have a representation of the Lie algebra $V/\partial V$ on $V$ given by $[\tint f,g]=[f_\lambda g]|_{\lambda=0}$. A quantum integrable system consists of a collection of infinitely many linearly independent elements $\tint h_m\in V/\partial V$, $m\in\mb Z_{\geq0}$, in involution. \begin{example} A VA is commutative if $[a_\lambda b]=0$, for every $a,b\in V$. It follows immediately from the definition that the category of commutative VAs is the same as the category of differential algebras. \end{example} \begin{remark} The (not necessarily commutative nor associative) product in a VA corresponds to the \emph{normally ordered product} of quantum fields in a CFT, while the $\lambda$-bracket encodes the singular part of their \emph{operator product expansion} (OPE). We give a naive explanation of the latter sentence in a particular case. Consider the VA $\lambda$-bracket of a Virasoro element $u$ (recall Example \ref{exa:vir} for its PVA analogue) $$ [u_\lambda u]=(2\lambda +\partial)u+\frac{c}{12}\lambda^3 \,, $$ where $c\in\mb C$ is called the central charge. Replace, in the above relation, $u$ by a quantum field, say $T(w)$, $\partial$ by $\partial_w$ and $\lambda$ by $\partial_w$ acting on the rational function $\frac{1}{z-w}$. Then we get $$ [T(w)_{\partial_w} T(w)]_\to\frac{1}{z-w} =\frac{\partial_wT(w)}{z-w}+\frac{2T(w)}{(z-w)^2}+\frac{c/2}{(z-w)^4} \,, $$ which is the singular part of the OPE of the stress-energy tensor in CFT.
\end{remark} \subsection{A toy model} \label{sec:1.2} The simplest example when all four objects in diagram \eqref{maxi} can be constructed, is obtained starting with a finite-dimensional Lie algebra $\mf g$, with Lie bracket $[\cdot\,,\,\cdot]$, and with a non-degenerate invariant symmetric bilinear form $(\cdot\,|\,\cdot)$. The \emph{universal enveloping algebra} of $\mf g$, usually denoted by $U(\mf g)$, is an associative algebra, and its classical limit is the symmetric algebra $S(\mf g)$, with the Kirillov-Kostant Poisson bracket. Furthermore, we have also a Lie conformal algebra $\text{Cur}\,\mf g=(\mb F[\partial]\otimes\mf g)\oplus\mb FK$, with the following $\lambda$-bracket: \begin{equation}\label{20140401:eq2} [a_\lambda b] = [a,b]+(a|b)K\lambda \,\,,\,\,\,\, [a_\lambda K]= 0 \,\,,\,\,\,\, \text{ for }\,\, a,b\in\mf g \,. \end{equation} The universal enveloping vertex algebra of $\text{Cur}\,\mf g$ is the so-called \emph{universal affine vertex algebra} $V(\mf g)$, and its classical limit is the algebra of differential polynomials $\mc V(\mf g)=S(\mb F[\partial]\mf g)$, with the PVA $\lambda$-bracket defined by \eqref{20140401:eq2}. We refer to \cite{DSK06} for the definition of the latter structures and the construction of the corresponding Zhu maps. Thus, we get the following basic example of diagram \eqref{maxi}: \begin{equation}\label{maxi2} \UseTips \xymatrix{ \mc V(\mf g)\,\,\,\,\,\, \ar[d]^{\text{Zhu}} \ar@/^1pc/@{.>}[r]^{\text{quantization}} & \ar[l]^{\text{cl.limit}} \,\,\,\,\,\,V(\mf g) \ar[d]_{\text{Zhu}} \\ S(\mf g)\,\,\,\,\,\, \ar@/^1pc/@{.>}[u]^{\text{affiniz.}} \ar@/_1pc/@{.>}[r]_{\text{quantization}} & \ar[l]_{\text{cl.limit}} \,\,\,\,\,\, U(\mf g) \ar@/_1pc/@{.>}[u]_{\text{affiniz.}} } \end{equation} \subsection{Hamiltonian reduction} \label{sec:1.3} All the four algebraic structures in diagram \eqref{maxi} admit a Hamiltonian reduction. We review here only the case for associative algebras. 
Recall that the Hamiltonian reduction of a unital associative algebra $A$ by a pair $(B,I)$, where $B\subset A$ is a unital associative subalgebra and $I\subset B$ is a two-sided ideal, is the following unital associative algebra: \begin{equation}\label{20140401:eq4} W = W(A,B,I) = \big(A\big/A I\big)^{\ad B} \,, \end{equation} where $\ad B$ denotes the usual adjoint action given by the commutator in an associative algebra (note that $B$ acts on $A/AI$ both by left and right multiplication). It is not hard to show that the obvious associative product on $W$ is well-defined. Now, let $\{e,2x,f\}\subset\mf g$ be an $\mf{sl}_2$-triple, and let \begin{equation}\label{eq:dec} \mf g=\bigoplus_{\substack{j=-d\\j\in\frac12\mb Z}}^d\mf g_j \,, \end{equation} be the $\ad x$-eigenspace decomposition. We can perform the Hamiltonian reduction of $A=U(\mf g)$ as follows. Let $B=U(\mf g_{>0})$ and $I\subset B$ be the two-sided ideal generated by the set \begin{equation}\label{set} \big\{m-(f|m)\,\big|\,m\in\mf g_{\geq1}\big\} \,. \end{equation} Applying the Hamiltonian reduction \eqref{20140401:eq4} with the above data we get the so-called \emph{quantum finite $W$-algebra} (it first appeared in \cite{Pre02}) $$ W^{\text{fin}}(\mf g,f)=\big(U(\mf g)\big/U(\mf g)\{m-(f|m)\,\big|\,m\in\mf g_{\geq1}\}\big)^{\ad \mf g_{>0}} \,. $$ The Hamiltonian reduction \eqref{20140401:eq4} still makes sense if we replace associative algebras with PVAs (respectively, PAs), and we can perform it with $A=\mc V(\mf g)$, $B=\mc V(\mf g_{>0})$ and $I\subset B$ the differential algebra ideal generated by the set \eqref{set} (respectively, $A=S(\mf g)$, $B=S(\mf g_{>0})$ and $I\subset B$ the ideal generated by the set \eqref{set}). As a result we get the so-called \emph{classical affine} $W$-algebra $\mc W^{\text{aff}}(\mf g,f)$ (respectively, \emph{classical finite} $W$-algebra $\mc W^{\text{fin}}(\mf g,f)$), see \cite{DSKV16a} for further details.
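To make the construction concrete, consider the simplest (principal) case $\mf g=\mf{sl}_2$ with the $\mf{sl}_2$-triple $\{e,h=2x,f\}$ itself (the normalization $(f|e)=1$ below is an assumption made for concreteness). The decomposition \eqref{eq:dec} reads $\mf g=\mf g_{-1}\oplus\mf g_0\oplus\mf g_1$ with $\mf g_1=\mb Fe$, so $B=U(\mf g_{>0})=\mb F[e]$ and the ideal $I$ is generated by $e-1$. Hence
$$
W^{\text{fin}}(\mf{sl}_2,f)=\big(U(\mf{sl}_2)\big/U(\mf{sl}_2)(e-1)\big)^{\ad\mf g_{>0}}\,,
$$
and one can check that this is the polynomial algebra on the image of the Casimir element $ef+fe+\tfrac{h^2}{2}$, in accordance with Kostant's theorem identifying the principal finite $W$-algebra with the center of $U(\mf g)$.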
Unfortunately, a similar construction of a Hamiltonian reduction for vertex algebras is not known, and the \emph{quantum affine $W$-algebra} $W^{\text{aff}}(\mf g,f)$ is constructed using a cohomological approach \cite{FF90,KW04}. \subsection{From the toy model to $W$-algebras} Let $\mf g$ be a finite dimensional reductive Lie algebra, and let $f\in\mf g$ be a nilpotent element. By the Jacobson-Morozov Theorem it can be embedded in an $\mf{sl}_2$-triple $\{e,2x,f\}\subset\mf g$. Applying the machinery described in Section \ref{sec:1.3} we thus obtain a Hamiltonian reduction of the whole diagram \eqref{maxi2}: \begin{equation}\label{maxi3} \UseTips \xymatrix { \mc V(\mf g)\ar[dd]_{\text{Zhu}}\ar[dr]^{\HR_f} & & \ar[ll]_{\text{cl.limit}} V(\mf g) \ar[dd]|!{[d];[d]}\hol \ar[dr]^{\HR_f} & \\ & \mc W^{\text{aff}}(\mf g,f) \ar[dd]_(.35){\text{Zhu}} & & W^{\text{aff}}(\mf g,f) \ar[ll]_(.62){\text{cl.limit}} \ar[dd]_(.35){\text{Zhu}} \\ S(\mf g) \ar[dr]^{\HR_f} & & U(\mf g) \ar[ll]|!{[l];[l]}\hol \ar[dr]^{\HR_f} & \\ & \mc W^{\text{fin}}(\mf g,f) & & W^{\text{fin}}(\mf g,f) \ar[ll]_{\text{cl.limit}} } \end{equation} It is a convention to use the calligraphic $\mc W$ to denote objects appearing in the ``classical'' column of diagram \eqref{maxi3} and the block letter $W$ to denote objects appearing in the ``quantum'' column of the same diagram. Also the upper label ``fin'' (resp. ``aff'') is used to denote objects appearing in the ``finite'' (resp. ``affine'') row of diagram \eqref{maxi3}, corresponding to physical theories with a finite (resp. infinite) number of degrees of freedom. Hence, as we can see from diagram \eqref{maxi3}, $W$-\emph{algebras} provide a very rich family of examples which appear in all the four fundamental aspects in diagram \eqref{maxi}. Each of these classes of algebras was introduced and studied separately, with different applications in mind. The relations between them became fully clear later, see \cite{GG02,DSK06,DSKV16a} for further details. 
\subsubsection{Classical finite $W$-algebras.} The classical finite $W$-algebra $\mc W^{\text{fin}}(\mf g,f)$ is a PA, which can be viewed as the algebra of functions on the so-called \emph{Slodowy slice} $\mc S(\mf g,f)$, introduced by Slodowy while studying the singularities associated to the coadjoint nilpotent orbits of $\mf g$ \cite{Slo80}. \subsubsection{Finite $W$-algebras.} The first appearance of the finite $W$-algebras $W^{\text{fin}}(\mf g,f)$ was in a paper of Kostant \cite{Kos78}. He constructed the finite $W$-algebra for a principal nilpotent $f\in\mf g$ (in which case it is commutative), and proved that it is isomorphic to the center of the universal enveloping algebra $U(\mf g)$. The construction was then extended in \cite{Lyn79} to even nilpotent elements $f\in\mf g$. The general definition of finite $W$-algebras $W^{\text{fin}}(\mf g,f)$, for an arbitrary nilpotent element $f\in\mf g$, appeared later in a paper by Premet \cite{Pre02}. Finite $W$-algebras have deep connections with the geometry and representation theory of simple finite-dimensional Lie algebras, with the theory of primitive ideals, and with Yangians, see \cite{Mat90,Pre02,Pre05,BrK06}. \subsubsection{Classical affine $W$-algebras.} The classical affine $W$-algebras $\mc W^{\text{aff}}(\mf g,f)$ were introduced, for a principal nilpotent element $f$, in the seminal paper of Drinfeld and Sokolov \cite{DS85}. They were introduced as Poisson algebras of functions on an infinite-dimensional Poisson manifold, and they were used to study KdV-type integrable bi-Hamiltonian hierarchies of PDEs, nowadays known as Drinfeld-Sokolov hierarchies. Later, there have been several papers aimed at the construction of generalized Drinfeld-Sokolov hierarchies, \cite{dGHM92,FHM93,BdGHM93,DF95,FGMS95,FGMS96}.
In \cite{DSKV13}, the classical $W$-algebras $\mc W^{\text{aff}}(\mf g,f)$ were described as PVAs, and the theory of generalized Drinfeld-Sokolov hierarchies was formalized in a more rigorous and complete way \cite{DSKV14a,DSKV16b,DSKVcl}. \subsubsection{Quantum affine $W$-algebras.} They have been extensively discussed in the Introduction. A review of the subject up to the early 1990s may be found in the collection of a large number of reprints on $W$-algebras \cite{BS95}. Recently, it has been shown that they are at the basis of an unexpected connection of vertex algebras with the geometric invariants called Higgs branches in four dimensional $N=2$ superconformal field theories \cite{BR17,Ara17}. \section{Linear algebra intermezzo} \label{sec:linear} \subsection{Set up}\label{sec:setup} Let $\mf g$ be a finite dimensional reductive Lie algebra, let $\{f,2x,e\}\subset\mf g$ be an $\mf{sl}_2$-triple and let \eqref{eq:dec} be the corresponding $\ad x$-eigenspace decomposition. In Sections \ref{sec:finite} and \ref{sec:affine} we will use the projection map $\pi_{\leq\frac12}:\mf g\to \mf g_{\leq \frac12}=\oplus_{k\leq\frac12}\mf g_k$ with kernel $\mf g_{>\frac12}=\oplus_{k>\frac12}\mf g_k$. Let $\varphi:\,\mf g\to\End V$ be a faithful representation of $\mf g$ on an $N$-dimensional vector space $V$. Throughout the paper we shall often use the following convention: we denote by lowercase Latin letters elements of the Lie algebra $\mf g$, and by the same uppercase letters the corresponding (via $\varphi$) elements of $\End V$. For example, $F=\varphi(f)$ is a nilpotent endomorphism of $V$. Moreover, $X=\varphi(x)$ is a semisimple endomorphism of $V$ with half-integer eigenvalues. The corresponding $X$-eigenspace decomposition of $V$ is \begin{equation}\label{eq:grading_V} V=\bigoplus_{k\in\frac12\mb Z}V[k] \,. \end{equation} Note that $\frac d2$ is the largest $X$-eigenvalue in $V$.
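A standard example to keep in mind: if $\mf g=\mf{gl}_N$, $V=\mb F^N$ is the standard representation and $f$ is a principal (regular) nilpotent element, then $V$ is an irreducible module over the $\mf{sl}_2$-triple, $X$ has simple spectrum
$$
\tfrac{N-1}2,\ \tfrac{N-1}2-1,\ \dots,\ -\tfrac{N-1}2\,,
$$
each eigenspace $V[k]$ is one-dimensional, and $d=N-1$ (the largest $\ad x$-eigenvalue on $\mf g$), so that the largest $X$-eigenvalue on $V$ is indeed $\frac d2$.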
Recall that the trace form on $\mf g$ associated to the representation $V$ is, by definition, \begin{equation}\label{20170317:eq1} (a|b)=\tr_V(AB)\,, \qquad a,b\in\mf g \,, \end{equation} and we assume that it is non-degenerate. Let $\{u_i\}_{i\in I}$ be a basis of $\mf g$ compatible with the $\ad x$-eigenspace decomposition \eqref{eq:dec}, i.e. $I=\sqcup_k I_k$ where $\{u_i\}_{i\in I_k}$ is a basis of $\mf g_k$. We also denote $I_{\leq\frac12}=\sqcup_{k\leq\frac12}I_k$. Moreover, we shall also need, in Section \ref{sec:affine}, that $\{u_i\}_{i\in I}$ contains a basis $\{u_i\}_{i\in I_f}$ of $\mf g^f=\{a\in\mf g\mid [a,f]=0\}$, the centraliser of $f$ in $\mf g$. Let $\{u^i\}_{i\in I}$ be the basis of $\mf g$ dual to $\{u_i\}_{i\in I}$ with respect to the form \eqref{20170317:eq1}, i.e. $(u_i|u^j)=\delta_{i,j}$. According to our convention, we denote by $U_i=\varphi(u_i)$ and $U^i=\varphi(u^i)$, $i\in I$, the corresponding endomorphisms of $V$. In Sections \ref{sec:finite} and \ref{sec:affine} we will consider the following important element \begin{equation}\label{20170623:eq4} U=\sum_{i\in I}u_i U^i\in\mf g\otimes\End V \,. \end{equation} Here and further we are omitting the tensor product sign. Furthermore, the following endomorphism of $V$, which we will call the \emph{shift matrix}, will play an important role in Section \ref{sec:finite} \begin{equation}\label{eq:D} D = -\sum_{i\in I_{\geq1}}U^iU_i \,\in\End V \,. \end{equation} Finally, we denote by $\Omega_V\in\End V\otimes\End V$ the permutation map: \begin{equation}\label{Omega} \Omega_V(v_1\otimes v_2)=v_2\otimes v_1 \,\,\text{ for all }\,\,v_1,v_2\in V \,. \end{equation} Using Sweedler's notation we write $\Omega_V=\Omega_V^\prime\otimes\Omega_V^{\prime\prime}$ to denote, as usual, a sum of monomials in $\End V\otimes\End V$. 
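In coordinates, the permutation map takes a familiar form: if $\{v_a\}_{a=1}^N$ is a basis of $V$ and $E_{ab}\in\End V$ are the corresponding elementary matrices (i.e. $E_{ab}v_c=\delta_{bc}v_a$), then
$$
\Omega_V=\sum_{a,b=1}^N E_{ab}\otimes E_{ba}\,,
$$
since $\sum_{a,b}E_{ab}v_c\otimes E_{ba}v_d=v_d\otimes v_c$. In Sweedler's notation one may thus take $\Omega_V'=E_{ab}$ and $\Omega_V''=E_{ba}$, summed over $a,b$.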
Suppose that $V$ has a non-degenerate bilinear form $\langle\cdot\,|\,\cdot\rangle:\,V\times V\to\mb F$, which is symmetric or skewsymmetric: \begin{equation}\label{eq:epsilon} \langle v_1|v_2\rangle=\epsilon\langle v_2|v_1\rangle \,,\,\,v_1,v_2\in V \,,\,\,\text{ where } \epsilon\in\{\pm1\} \,. \end{equation} Then, we define \begin{equation}\label{Omega-dagger} \Omega_V^\dagger=(\Omega_V')^\dagger\otimes\Omega_V'' \,, \end{equation} where $A^\dagger$ is the adjoint of $A\in\End V$ with respect to \eqref{eq:epsilon}. \subsection{ The ``identity'' notation } \label{sec:8a.2.5} Let $U\subset V$ be a subspace of $V$, and assume that there is a ``natural'' splitting $V=U\oplus U^\prime$. We shall denote, with an abuse of notation, by $\id_U$ the identity map $U\stackrel{\sim}{\longrightarrow}U$, the inclusion map $U\hookrightarrow V$, and the projection map $V\twoheadrightarrow U$ with kernel $U^\prime$. The correct meaning of $\id_U$ should be clear from the context. \subsection{ Generalized quasi-determinants } \label{sec:8a.3} Let $R$ be a unital associative algebra and let $V$ be a finite dimensional vector space with direct sum decompositions $V=U\oplus U^\prime=W\oplus W^\prime$. Assume that $A\in R\otimes \End(V)$ and $\id_WA^{-1}\id_U\in R\otimes\Hom(U,W)$ are invertible. The (generalized) \emph{quasideterminant} of $A$ with respect to $U$ and $W$, cf. \cite{GGRW05,DSKV16b}, is defined as \begin{equation}\label{eq:quasidet} |A|_{U,W} := (\id_{W} A^{-1}\id_{U})^{-1}\in R\otimes\Hom(W,U) \,. \end{equation} \begin{remark} Provided that both $A$ and $\id_{U'}A\id_{W'}$ are invertible, it is possible to write the generalized quasideterminant \eqref{eq:quasidet} in the more explicit form $|A|_{U,W} = \id_U A\id_W - \id_U A\id_{W^\prime} (\id_{U^\prime} A\id_{W^\prime})^{-1} \id_{U^\prime} A\id_W$.
\end{remark} \section{Quantum finite $W$-algebras and (twisted) Yangians} \label{sec:finite} \subsection{Lax type operators for quantum finite $W$-algebras} We introduce some important $\End V$-valued polynomials in $z$, and Laurent series in $z^{-1}$, with coefficients in $U(\mf g)$. The first one is (cf. \eqref{20170623:eq4}) \begin{equation}\label{eq:A} A(z)=z\id_V+U=z\id_V+\sum_{i\in I}u_i U^i \,\, \in U(\mf g)[z]\otimes\End(V) \,. \end{equation} (As in Section \ref{sec:linear}, we are dropping the tensor product sign.) Another important operator is (keeping the same notation as in \cite{DSKV17}) \begin{equation}\label{eq:Arho} A^{\rho}(z) = z\id_V+F+\pi_{\leq\frac12}U = z\id_V+F+\sum_{i\in I_{\leq\frac12}}u_i U^i \,\,\in U(\mf g)[z]\otimes\End V \,. \end{equation} Now we introduce the Lax operator $L(z)$. Consider the generalized quasideterminant (cf. \eqref{eq:quasidet}) \begin{equation}\label{eq:tildeL} \widetilde L(z) = |A^\rho(z)+D|_{V[\frac d2],V[-\frac d2]} = \Big(\id_{V[-\frac d2]}\big(z\id_V+F+\pi_{\leq\frac12}U+D\big)^{-1}\id_{V[\frac d2]}\Big)^{-1} \,, \end{equation} where $\id_{V[-\frac d2]}$ and $\id_{V[\frac d2]}$ are defined in Section \ref{sec:8a.2.5} (using the obvious splittings of $V$ given by the grading \eqref{eq:grading_V}), $A^{\rho}(z)$ is defined in equation \eqref{eq:Arho} and $D$ is the ``shift matrix'' \eqref{eq:D}. Let us denote by $\bar1$ the image of $1\in U(\mf g)$ in the quotient $U(\mf g)\big/U(\mf g)\{m-(f|m)\,\big|\,m\in\mf g_{\geq1}\}$. The Lax operator $L(z)$ is defined as the image of $\widetilde L(z)$ in this quotient: \begin{equation}\label{eq:L} L(z) = L_{\mf g,f,V}(z) := \widetilde L(z)\bar 1 \,. \end{equation} The first main result in \cite{DSKVfin} can be summarized as follows. 
\begin{theorem}\label{thm:L1} \begin{enumerate}[(a)] \item The operator $A^\rho(z)+D$ is invertible in $U(\mf g)((z^{-1}))\otimes\End V$, and the operator $\id_{V[-\frac d2]}(A^\rho(z)+D)^{-1}\id_{V[\frac d2]}$ is invertible in $U(\mf g)((z^{-1}))\otimes\Hom\big(V\big[-\frac d2\big],V\big[\frac d2\big]\big)$. Hence, the quasideterminant defining $\widetilde L(z)$ (cf. \eqref{eq:tildeL}) exists and lies in $U(\mf g)((z^{-1}))\otimes\Hom\big(V\big[-\frac d2\big],V\big[\frac d2\big]\big)$. \item The entries of the coefficients of the operator $L(z)$ defined in \eqref{eq:L} lie in the $W$-algebra $W(\mf g,f)$: $$ L(z):=|z\id_V+F+\pi_{\leq\frac12}U+D|_{V[\frac d2],V[-\frac d2]}\bar 1 \,\in W(\mf g,f)((z^{-1}))\otimes\Hom\big(V\big[-\frac d2\big],V\big[\frac d2\big]\big) \,. $$ \end{enumerate} \end{theorem} \begin{remark} For $\mf g=\mf{gl}_N$ and $V=\mb F^N$ the standard representation, equation \eqref{eq:tildeL} may be used to find a generating set (in the sense of PBW Theorem) for the quantum finite $W$-algebra, see \cite{laura} for more details. \end{remark} \subsection{The generalized Yangian identity} Let $\alpha,\beta,\gamma\in\mb F$. Let $R$ be a unital associative algebra, and let $V$ be an $N$-dimensional vector space. For $\beta\neq0$, we also assume, as in Section \ref{sec:setup}, that $V$ is endowed with a non-degenerate bilinear form $\langle\cdot\,|\,\cdot\rangle:\,V\times V\to\mb F$ which we assume to be symmetric or skewsymmetric, and we let $\epsilon=+1$ and $-1$ respectively. Again, when denoting an element of $R\otimes\End(V)$ or of $R\otimes\End(V)\otimes\End(V)$, we omit the tensor product sign on the first factor, i.e. we treat elements of $R$ as scalars. 
The \emph{generalized} $(\alpha,\beta,\gamma)$-\emph{Yangian identity} for $A(z)\in R((z^{-1}))\otimes\End(V)$ is the following identity, holding in $R[[z^{-1},w^{-1}]][z,w]\otimes\End(V)\otimes\End(V)$: \begin{equation}\label{eq:gener-yangV} \begin{array}{l} \displaystyle{ \vphantom{\Big(} (z-w+\alpha\Omega_V) (A(z)\otimes\id_V) (z+w+\gamma-\beta\Omega_V^\dagger) (\id_V\otimes A(w)) } \\ \displaystyle{ \vphantom{\Big(} = (\id_V\otimes A(w)) (z+w+\gamma-\beta\Omega_V^\dagger) (A(z)\otimes\id_V) (z-w+\alpha\Omega_V) \,.} \end{array} \end{equation} Recall that $\Omega_V$ and $\Omega_V^\dagger$ are defined by equations \eqref{Omega} and \eqref{Omega-dagger} respectively. \begin{remark}\label{20170704:rem2} In the special case $\alpha=1$, $\beta=\gamma=0$, equation \eqref{eq:gener-yangV} coincides with the so-called RTT presentation of the Yangian of $\mf{gl}(V)$, cf. \cite{Mol07,DSKV17}. Moreover, in the special case $\alpha=\beta=\frac12$, $\gamma=0$, equation \eqref{eq:gener-yangV} coincides with the so-called RSRS presentation of the extended twisted Yangian of $\mf g=\mf{so}(V)$ or $\mf{sp}(V)$, depending on whether $\epsilon=+1$ or $-1$, cf. \cite{Mol07}. Hence, if $A(z)\in R((z^{-1}))\otimes\End V$ satisfies the generalized $(\frac12,\frac12,0)$-Yangian identity we automatically have an algebra homomorphism from the extended twisted Yangian $X(\mf g)$ to the algebra $R$. If, moreover, $A(z)$ satisfies the symmetry condition (required in the definition of twisted Yangian in \cite{Mol07}) $$ A^\dagger(-z)-\epsilon A(z) = -\frac{A(z)-A(-z)}{4z} \,, $$ then we have an algebra homomorphism from the twisted Yangian $Y(\mf g)$ to the algebra $R$.
\end{remark} \subsection{Quantum finite $W$-algebras and (extended) twisted Yangians} Let $\mf g$ be one of the classical Lie algebras $\mf{gl}_N$, $\mf{sl}_N$, $\mf{so}_N$ or $\mf{sp}_N$, and let $V=\mb F^N$ be its standard representation (endowed, in the cases of $\mf{so}_N$ and $\mf{sp}_N$, with a non-degenerate symmetric or skewsymmetric bilinear form, respectively). Then, the operator $A(z)$ defined in equation \eqref{eq:A} satisfies the generalized Yangian identity \eqref{eq:gener-yangV}, where $\alpha,\beta,\gamma$ are given by the following table: $ \begin{tabular}{c|lll} \vphantom{\Big(} $\mf g$ & \,\,$\alpha$\,\, & \,\,$\beta$\,\, & \,\,$\gamma$\,\, \\ \hline \vphantom{\Big(} $\mf{gl}_N$ or $\mf{sl}_N$ & $1$ & $0$ & $0$ \\ \vphantom{\Big(} $\mf{so}_N$ or $\mf{sp}_N$ & $\frac12$ & $\frac12$ & $\frac\epsilon2$ \\ \end{tabular} $ Note that $V[\frac{d}{2}]\cong V[-\frac{d}{2}]$. Fix an isomorphism $\chi:V[\frac{d}{2}]\stackrel{\cong}{\longrightarrow}V[-\frac{d}{2}]$. Then, $\chi\circ L(z)\in W(\mf g,f)((z^{-1}))\otimes \End(V[-\frac d2])$. By an abuse of notation, we still denote this operator by $L(z)$. We also let $n=\dim V[-\frac d2]$. The second main result in \cite{DSKVfin} states that, for classical Lie algebras, the Lax operator defined in \eqref{eq:L} also satisfies a generalized Yangian identity. \begin{theorem}\label{thm:main2} The operator $L(z)\in W(\mf g,f)((z^{-1}))\otimes\End(V[-\frac d2])$ defined by \eqref{eq:tildeL}-\eqref{eq:L} (cf.
Theorem \ref{thm:L1}) satisfies the generalized Yangian identity \eqref{eq:gener-yangV} with the values of $\alpha,\beta,\gamma$ as in the following table: $ \begin{tabular}{c|lll} \vphantom{\Big(} $\mf g$ & \,\,$\alpha$\,\, & \,\,$\beta$\,\, & \,\,$\gamma$\,\, \\ \hline \vphantom{\Big(} $\mf{gl}_N$ or $\mf{sl}_N$ & $1$ & $0$ & $0$ \\ \vphantom{\Big(} $\mf{so}_N$ or $\mf{sp}_N$ & $\frac12$ & $\frac12$ & $\frac{\epsilon-N+n}2$ \\ \end{tabular} $ \end{theorem} By Theorem \ref{thm:main2} and Remark \ref{20170704:rem2} we have an algebra homomorphism from the extended twisted Yangian $X(\bar{\mf g})$ ($\bar{\mf g}$ depends on the pair $(\mf g,f)$) to the quantum finite $W$-algebra $W(\mf g,f)$. A stronger result has been obtained for $\mf g=\mf{gl}_N$ by Brundan and Kleshchev in \cite{BrK06} where quantum finite $W$-algebras were constructed as truncated shifted Yangians (which are subquotients of the Yangian for $\mf{gl}_N$). \section{Classical affine $W$-algebras and integrable hierarchies of Lax type equations} \label{sec:affine} \subsection{Lax type operators for classical affine $W$-algebras} For classical affine $W$-algebras the discussion is similar to the one in Section \ref{sec:finite} but in a different setting: we need to substitute polynomials and Laurent series with differential operators and pseudodifferential operators respectively (see \cite{DSKVcl} for a review of their basic properties). Consider the differential operators $ A(\partial)=\partial\id_V+U=\partial\id_V+\sum_{i\in I}u_i U^i \,\, \in \mc V(\mf g)[\partial]\otimes\End(V) $ and $ A^{\rho}(\partial) = \partial\id_V+F+\pi_{\leq\frac12}U = \partial\id_V+F+\sum_{i\in I_{\leq\frac12}}u_i U^i \,\,\in \mc V(\mf g_{\leq\frac12})[\partial]\otimes\End V \,. $ Recall from \cite{DSKV13} that in the classical affine case we have $\mc W(\mf g,f)\subset\mc V(\mf g_{\leq\frac12})$ and that there exists a differential algebra isomorphism $w:\mc V(\mf g^f)\stackrel{\sim}{\longrightarrow}\mc W(\mf g,f)$. 
Consider the generalized quasideterminant (cf. \eqref{eq:quasidet}) \begin{equation}\label{eq:Laff} L(\partial) = |A^\rho(\partial)|_{V[\frac d2],V[-\frac d2]} = \Big(\id_{V[-\frac d2]}\big(\partial\id_V+F+\pi_{\leq\frac12}U\big)^{-1}\id_{V[\frac d2]}\Big)^{-1} \,. \end{equation} The following result has been proved in \cite{DSKVcl}. \begin{theorem}\label{thm:main} $L(\partial)\in\mc W(\mf g,f)((\partial^{-1}))\otimes\Hom\big(V\big[-\frac d2\big],V\big[\frac d2\big]\big) $ and \begin{equation}\label{eq:Lw} L(\partial) = \Big(\id_{V[-\frac d2]}\big(\partial\id_V+F+\sum_{i\in I_{f}}w(u_i) U^i\big)^{-1}\id_{V[\frac d2]}\Big)^{-1} \,. \end{equation} \end{theorem} The above theorem consists of two statements. First, it claims that $L(\partial)$ is well defined, i.e. both inverses in formula \eqref{eq:Laff} can be carried out in the algebra of pseudodifferential operators with coefficients in $\mc V(\mf g_{\leq\frac12})$, and that the coefficients of $L(\partial)$ lie in the $\mc W$-algebra $\mc W(\mf g,f)$. Then, it gives a formula, equation \eqref{eq:Lw}, for $L(\partial)$ in terms of the generators $w(u_i),\,i\in I_f$, of the $\mc W$-algebra $\mc W(\mf g,f)$. \subsection{Integrable hierarchies of Lax type equations} Let $\mf g$ be one of the classical Lie algebras $\mf{gl}_N$, $\mf{sl}_N$, $\mf{so}_N$ or $\mf{sp}_N$, and let $V=\mb F^N$ be its standard representation (endowed, in the cases of $\mf{so}_N$ and $\mf{sp}_N$, with a non-degenerate symmetric or skewsymmetric bilinear form, respectively). Then, we can use the operator $L(\partial)$ in \eqref{eq:Lw} to get explicit formulas for the $\lambda$-brackets among the generators of $\mc W(\mf g,f)$ and construct integrable hierarchies of Hamiltonian equations, see \cite{DSKVcl}.
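Before stating the general result, it may be instructive to work out formula \eqref{eq:Laff} in the simplest case; the following is a direct check from the definitions above, with our normalizations. Let $\mf g=\mf{sl}_2$, $V=\mb F^2$ with standard basis $v_1,v_2$, $f$ principal with $F=E_{21}$, and $X=\frac12\diag(1,-1)$. Then $\mf g_{\leq\frac12}=\mb Fx\oplus\mb Ff$, the dual (with respect to the trace form) of $x$ is $2x$ and that of $f$ is $e$, so, writing $x,f$ for the corresponding generators of $\mc V(\mf g_{\leq\frac12})$,
$$
A^\rho(\partial)=\partial\id_V+F+x\,2X+f\,E
=\begin{pmatrix}\partial+x & f\\ 1 & \partial-x\end{pmatrix}\,.
$$
Since $V[\frac12]=\mb Fv_1$ and $V[-\frac12]=\mb Fv_2$, the explicit formula for the quasideterminant from Section \ref{sec:8a.3} gives
$$
L(\partial)=f-(\partial+x)(\partial-x)=-\partial^2+f+x^2+x'\,,
$$
so that, up to an overall sign, $L(\partial)$ is the Sturm--Liouville operator $\partial^2-u$ with $u=f+x^2+x'$ expressed in Miura form; the resulting hierarchy is, up to normalization, the KdV hierarchy.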
\begin{theorem} \begin{enumerate}[1)] \item $L(\partial)$ satisfies the generalized Adler type identity \begin{equation}\label{eq:adler-general} \begin{array}{l} \displaystyle{ \vphantom{\Big(} \{L(z)_\lambda L(w)\} = \alpha (\id_V\otimes L(w+\lambda+\partial))(z-w-\lambda-\partial)^{-1} (L^*(\lambda-z)\otimes\id_V)\Omega_V } \\ \displaystyle{ \vphantom{\Big(} -\alpha \Omega_V\,\big(L(z)\otimes(z-w-\lambda-\partial)^{-1}L(w)\big) } \\ \displaystyle{ \vphantom{\Big(} -\beta (\id_V\otimes L(w+\lambda+\partial)) \Omega_V^\dagger(z+w+\partial)^{-1}(L(z)\otimes\id_V) } \\ \displaystyle{ \vphantom{\Big(} +\beta (L^*(\lambda-z)\otimes\id_V)\Omega_V^\dagger(z+w+\partial)^{-1}(\id_V\otimes L(w)) } \\ \displaystyle{ \vphantom{\Big(} +\gamma\big(\id_V\otimes \big(L(w+\lambda+\partial)-L(w)\big)\big) (\lambda+\partial)^{-1} \big(\big(L^*(\lambda-z)-L(z)\big)\otimes\id_V\big) \,,} \end{array} \end{equation} for the following values of $\alpha,\beta,\gamma\in\mb F$: $ \begin{tabular}{l|lll} \vphantom{\Big(} $\mf g$ & \,\,$\alpha$\,\, & \,\,$\beta$\,\, & \,\,$\gamma$\,\, \\ \hline \vphantom{\Big(} $\mf{gl}_N$ & 1 & 0 & 0 \\ \vphantom{\Big(} $\mf{sl}_N$ & 1 & 0 & $\frac1N$ \\ \vphantom{\Big(} $\mf{so}_N$ or $\mf{sp}_N$ & $\frac12$ & $\frac12$ & 0 \\ \end{tabular} $ In equation \eqref{eq:adler-general} $L^*$ denotes the formal adjoint of pseudodifferential operators, and $\Omega_V$ and $\Omega_V^\dagger$ are defined by equations \eqref{Omega} and \eqref{Omega-dagger} respectively. \item For $B(\partial)$ a $K$-th root of $L(\partial)$ (i.e. $L(\partial)=B(\partial)^K$ for $K\geq1$) define the elements $h_{n,B}\in\mc W(\mf g,f)$, $n\in\mb Z_{\geq0}$, by ($\tr=1\otimes\tr$) $ h_{n,B}= \frac{-K}{n} \Res_z\tr(B^n(z)) \text{ for } n>0 \,,\,\, h_0=0\,.
$ Then, all the elements $\tint h_{n,B}$ are Hamiltonian functionals in involution and we have the corresponding integrable hierarchy of Lax type Hamiltonian equations \begin{equation}\label{eq:hierarchy} \frac{dL(w)}{dt_{n,B}} = \{\tint h_{n,B},L(w)\} = [\alpha(B^n)_+-\beta((B^{n})^{*\dagger})_+,L](w) \,,\,\,n\in\mb Z_{\geq0}\,. \end{equation} (In the RHS of \eqref{eq:hierarchy} we are taking the symbol of the commutator of matrix pseudodifferential operators.) \end{enumerate} \end{theorem} \begin{remark} For $\beta=0$ solutions to the integrable hierarchy \eqref{eq:hierarchy} can be obtained by reductions of solutions to the multicomponent KP hierarchy, see \cite{last}. \end{remark}
\section{Introduction} Without question one of the most unexpected developments in general relativity was the realisation that there exists a close relationship between certain laws of classical black hole (BH) physics and the laws of thermodynamics (\cite{Bard1}). This fundamental connection was first described by Bekenstein (\cite{Be1},\cite{Be2}) and shortly afterwards supported by the fundamental analysis of Hawking (\cite{Haw1}). A nice overview is given for example in \cite{Wald1}. That there might exist a closer and even more fundamental connection between classical relativity and thermodynamics was, as far as we know, first emphasized by Jacobson (\cite{Jacob1}) and further developed in e.g. \cite{Jacob2} and many papers of Padmanabhan (see, for example, \cite{Pad1},\cite{Pad2} or \cite{Pad3}). It was shown that one can infer from the physics near horizons, in particular the event horizon of a black hole, that the Einstein equation (EEQ) must hold. One important tool in this context was the use of approximate Rindler-Unruh horizons and the corresponding results of Fulling and Unruh (\cite{Full1},\cite{Un1}). One should remark that, as far as we can see, all the many papers which exist in this context emphasize the importance of horizons and the role played by the \tit{entanglement entropy} of systems behind these horizons. That is, from this viewpoint, entropy and more generally thermodynamics appear to be essentially located near these horizons and not e.g. in the bulk. These properties seem to be suggested, for example, by the famous \tit{entropy area law} of black holes.
Whereas horizons thus seem to play such a fundamental role for the thermal behavior of gravitation, we want to go a step further and will argue in this paper that in our view the situation is a different one: both the Einstein equation and gravitation in general are of a thermal nature, independent of the existence of horizons, and this can be inferred by analyzing the ordinary bulk situation in general relativity (GR). The special behavior near horizons is then rather a particular consequence of this more general scenario. In a first step we will provide arguments for the understanding of \tit{quantum space-time} (QST), that is, the microscopic structure which underlies our ordinary classical smooth space-time manifold ST, as a thermal system at each macroscopic point $x\in ST$. This will be done in the next section (some remarks in this direction have already been made in \cite{Requ1}). Early ideas can already be found in the work of Sakharov, but without a thermal connotation (see e.g. \cite{Sak} and \cite{Visser}). We will exploit more recent findings in the foundations of modern quantum statistical mechanics (\cite{Popescu1},\cite{Lebo1},\cite{Lebo2}). Furthermore we will introduce the notion of the classical space-time metric $g_{ij}(x)$ as an \tit{order parameter field} and of classical smooth space-time as an \tit{order parameter manifold}. By these notions we mean the following. As briefly described in \cite{Requ1}, we assume that (Q)ST emerged as the consequence of a primordial phase transition PT. In this context a non-vanishing field of observables $g_{ij}(x)$ came into existence which was vanishing before the phase transition happened in the primordial soup. That is, $g_{ij}(x)$ and ST represent parameters which indicate that a certain phase transition took place. Such a quantity is typically called an order parameter.
We start from the following assumption or observation: classical space-time ST remains smooth on all ordinary scales of the quantum matter regime. We conjecture that the typical scale where quantum gravity effects emerge is the \tit{Planck scale}, that is \beq l_p=(\hbar G/c^3)^{1/2}\quad ,\quad t_p=(\hbar G/c^5)^{1/2}=l_p/c \eeq We hence assume that the typical quantum gravity degrees of freedom (DoF) live on the same scale. This implies that, as e.g. in hydrodynamics, there exist many quantum gravity DoF in the infinitesimal neighborhood of the macroscopic point $x\in ST$ in the classical space-time manifold ST. That is, we can assume that at each macroscopic point $x$ there exists a microscopic subsystem in QST, consisting of many DoF. That these subsystems behave thermally will be proved in the next section. In a second step we will interpret the EEQ (sign convention as in \cite{Wheeler1}) \beq G_{ab}:= R_{ab}-1/2\cdot R\cdot g_{ab}=\kappa \cdot T_{ab}\quad ,\quad \kappa=8\pi G/c^4 \eeq in this thermodynamic context. That is, even if we assume that the global state of the microscopic QST is pure, the local state around some macroscopic point $x$ is a thermal ensemble (density matrix) due to its intricate entanglement with its environment. From the findings in the papers mentioned above we may then conclude that with overwhelming probability the local states are thermal. As the local states around the points of the space-time manifold ST are thermal, it seems reasonable to give also the EEQ a thermal interpretation. One should note that already Lorentz and Levi-Civita in 1916 or 1917 tried to interpret the EEQ as saying that the total energy of the universe is vanishing (cf.
\cite{Pauli1}) \beq (T_{ab}-\kappa^{-1}G_{ab})\equiv 0 \quad \text{hence}\quad \partial_a (T^a_b- \kappa^{-1}G^a_b)\equiv 0 \quad ,\quad \partial_a:=\partial/\partial x^a \label{EEQ}\eeq Einstein provided an argument against such an interpretation (see \cite{Pauli1}) which seems sound at first glance, but one should take into account that at that time quantum theory was not yet fully developed (cf. e.g. our arguments given in \cite{Requ1}). In this paper we will comment on the vast field of gravitational energy and its localization in general relativity (GR) only in so far as it concerns the problems we want to analyze in the following. An interesting point of view is developed for example in \cite{Coop1}, where it is claimed that gravitational energy should be located at places where $T_{ab}\neq 0$. However, one generally assumes that non-vanishing curvature has something to do with gravitational energy, and the latter can of course be non-zero where $T_{ab}$ vanishes. If the microscopic substructure of the classical smooth space-time ST, i.e. QST, is locally a thermal system, it should possess what one calls \tit{internal energy} in thermodynamics. As in \cite{Requ1} we assume the existence of an interaction between matter and space-time on the quantum level, that is, between QST and QM (quantum matter). The macroscopic ponderable matter and fields making up the right side of the EEQ consist of many microscopic DoF on the quantum scale. The same holds for the left side, which describes a curvature property of ST. All this leads us to the following conjecture: \begin{conjecture} We assume that \beq T_{ab}=\kappa^{-1}G_{ab} \eeq is the heat influx at point $x$ into the local thermal state of QST coming from the external QM system. \end{conjecture} Note that this concerns only the gravitational effect of matter. Thus $\kappa^{-1}G_{ab}$ represents a form of \tit{heat tensor} which was discussed for material systems e.g. in \cite{Moeller1} or \cite{Eckart1}.
\begin{bem} We will later discuss the \tit{work terms} in this context. \end{bem} To complete our analysis of the thermal substructure of gravitation or QST, we first have to deal with the \tit{non-tensorial} character of gravitational energy. Usually one has \beq \partial_a(\sqrt{-g}(T^a_b+t^a_b))\equiv 0 \eeq with $t^a_b$ the \tit{pseudo tensor} of gravitational energy-momentum (brief discussions can be found in practically every book about GR; a particularly nice discussion is e.g. given in \cite{Xulu1}). In the canonical view this pseudo tensor property is frequently considered to be a deficit of the theory. In section (3) we reconsider the situation from a perhaps new vantage point. It is a widespread belief that all concepts and notions which carry a physical meaning should be of tensorial character. It is however reasonable to analyze why tensorial behavior is considered to be of such great relevance. If we have a manifold with tangent and cotangent spaces at each point, tensor fields can be built over tensor products of these spaces. Their importance is hence that they carry a geometric meaning, that is, tensors may be said to represent the same \tit{invariant} object independent of the choice of coordinate system. In this context they have the important property that they cannot be transformed away (transformed to zero) by an appropriate choice of coordinate system. In particular, their transformation properties are relatively transparent. On the other hand this philosophy cannot be exactly true, as the well-known \tit{connection coefficients} or \tit{Christoffel symbols} $\Gamma^i_{jk}$ are of great geometric significance and do not have a particularly simple transformation behavior. In our view, Weinberg rightly remarks that there is nothing sacred about the tensor transformation law (\cite{Wein1}).
This holds all the more as physical quantities are not automatically elements of the multilinear algebra over tangent and/or cotangent spaces (which have essentially the meaning of velocity or momentum vectors). In older textbooks physical tensorial quantities are hence simply introduced as tuples having a certain transformation behavior. We should therefore be prepared that there may exist physical quantities which do not transform as tensors. In the valuable and highly informative essay \cite{Rowe1} there is an interesting remark by Einstein, when asked by the Swiss student Humm whether gravitational energy should be tensorial or not. He thought \tit{not}, referring for example to the different behavior of kinetic energy $T$ and potential energy $U$ under the Galilei group in \beq \partial_t (T+U)=0 \eeq Another argument, in our view, would be a pendulum clock with friction, which allows us to extract energy from the gravitational field in a non-inertial reference system, which is not possible in a freely falling inertial system. This latter example suggests the following conclusion: \begin{ob} The equivalence principle shows that gravitational energy cannot be tensorial. \end{ob} We will discuss the deeper reasons for the possibility of non-tensorial behavior of various physical quantities in section (3) in a more systematic way, the main point being the effects of quantum theory on space-time and gravitational behavior. These quantum effects will lead to a subtle distinction between surface properties living over the smooth classical space-time manifold ST and the behavior in the underlying microscopic substructure denoted by QST. The former are expected to be tensorial, the latter typically not. The spontaneous symmetry breaking of \tit{diffeomorphism invariance} will play an important role in this context.
In the last section we discuss in more detail the relevance of diffeomorphism invariance and the corresponding conserved \tit{Noether current} for our investigation of the thermal character of the EEQ and GR in general. Wald studied the Noether current of diffeomorphism invariance in the framework of differential forms, laying the main emphasis on integral representations (\cite{Wald2},\cite{Wald3},\cite{Wald4}). We will, on the other hand, rely more on classical tensor analysis and the variational approach, using mainly the non-integrated local point of view, as is e.g. also done in \cite{Pad1}. But before we do this, we will add some remarks about the quite detailed early work of Hilbert, Noether and Klein in this context (which, written in German, is perhaps not so widely known and which is carefully discussed in \cite{Rowe1}; for other aspects see also \cite{Sauer1}). These papers dealt mainly with a proper understanding of Hilbert's famous so-called \tit{energy vector}, which was derived from a quite intricate variational analysis of an invariant action functional and which seemed to puzzle the scientific community of that time quite a lot (\cite{Hilbert1},\cite{Klein1},\cite{Klein2},\cite{Noether1}). We find it remarkable that these long gone investigations anticipate in some sense more recent ones. In Hilbert's analysis an arbitrary vector field $\xi^i$ was introduced in order to get a conserved current from diffeomorphism invariance. Its appearance was later understood (\cite{Noether1}) to be a necessary consequence of E. Noether's famous second theorem. We will analyze the remarkable behavior of this conserved current, give a physical interpretation of the vector field $\xi^i$, which is technically the generator of the \tit{Lie derivative}, and interpret the role of various terms occurring in the current as to their thermodynamic meaning and implications.
This leads to our observation that what Hilbert used to call the energy vector is rather an expression of the \tit{internal energy-momentum tensor} of the microscopic quantum gravitational substratum QST, underlying our smooth space-time manifold ST. \section{Quantum Space Time, QST, as a Thermal System} As we have already remarked in the introduction, thermality usually enters the field of (quantum) gravity or general relativity (GR) via the existence of horizons, most notably in the paradigmatic case of black holes (BH). We want to argue in this and the following sections that, in our view, classical space-time ST is the smooth, coarse grained hull overlying QST, which is assumed to be a system having quantum micro structure, that is, consisting of a complex network of a huge number of microscopic gravitational degrees of freedom (DoF). \begin{assumption} We make the simplifying assumption that we may restrict ourselves to an intermediate energy scale where all DoF can be treated as quantum. Furthermore, we assume the gravitational DoF to live on a scale which is given by the Planck units. \end{assumption} From this we may infer the following. Compared to the Planck scale, the ordinary quantum scale on which our quantum matter is living is many orders of magnitude larger. This is the reason why on the level of ordinary quantum theory space-time can be treated as smooth and relatively invariable. On the other hand, we have briefly described in e.g. \cite{Requ1} how the microscopic gravitational DoF living in QST cooperate so that, as a consequence of a primordial phase transition PT, the classical space-time manifold ST with its points $x$ and the classical metric tensor field $g_{ab}(x)$ emerges, with $g_{ab}(x)$ playing a double role: on the one hand it allows to measure distances and time intervals, on the other hand it is the carrier of the gravitational field.
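The scale separation assumed above can be made quantitative with a small numerical sanity check (an illustrative sketch, not part of the argument; the constants are CODATA values, and note that the Planck units carry a square root, $l_p=\sqrt{\hbar G/c^3}$):

```python
import math

# Physical constants (SI units, CODATA values, assumed here for illustration)
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m / s

# Planck length and time (note the square root in the standard definition)
l_p = math.sqrt(hbar * G / c**3)
t_p = math.sqrt(hbar * G / c**5)

print(f"l_p = {l_p:.3e} m")   # ~1.6e-35 m
print(f"t_p = {t_p:.3e} s")   # ~5.4e-44 s

# Scale separation relative to a typical atomic length (Bohr radius)
a_0 = 5.29177e-11  # m
print(f"a_0 / l_p = {a_0 / l_p:.1e}")  # many orders of magnitude
```

The last ratio, of order $10^{24}$, is what justifies treating space-time as smooth on the scales of ordinary quantum matter.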
As in \cite{Requ1} we introduce a Hilbert space $\mcal{H}^g$ in which the global quantum states $\psi^g$ of QST live. The microscopic elementary gravitational DoF are represented by their corresponding local Hilbert spaces $\mcal{H}^g_i,\, i\in \N$. The local bases in $\mcal{H}^g_i$ are denoted by $d_i^{\nu_i}$. Basis vectors in $\mcal{H}^g$ are then given by tensor product states \beq \psi^g_I=\bigotimes_i\, d_i^{\nu_i}\, ,\,I:=\{\nu_1,\nu_2,\cdots\} \eeq and a general pure quantum state in $\mcal{H}^g$ is \beq \psi^g=\sum_I\, c_I\psi_I^g \eeq \begin{bem} We note that these assumptions are not really necessary prerequisites for the following conclusions but represent a convenient model which makes the derivations more concrete and precise. \end{bem} We now have to discuss what consequences can be inferred from the existence of the non-vanishing classical metric tensor field $g_{ab}(x)$ which lives over the classical space-time manifold ST and, by the same token, over the underlying microscopic substructure QST. Usually it is assumed that a quantum observable $\hat{g}_{ab}(x)$ exists on QST with \beq g_{ab}(x)=<\hat{g}_{ab}(x)>:=(\psi^g\mid \hat{g}_{ab}(x)\psi^g) \eeq for some global state $\psi^g$ which corresponds as a quantum correlate to the classical ST. This at first glance natural assumption leads, however, to a number of highly non-trivial consequences and has now to be discussed in more detail. We assumed for example in \cite{Requ1} that the existence, respectively non-vanishing, of the metric tensor field $g_{ab}(x)$ is the result of a primordial phase transition PT. In other words, the existence of a stable space-time distance concept is a non-trivial consequence of a particular physical process. Hence, following the tradition of e.g.
quantum statistical mechanics as a paradigm, dealing with a great number of quantum DoF, we define: \begin{defi} We call $g_{ab}(x)$ an order parameter field and the space-time manifold ST an order parameter manifold, whose non-vanishing signals the existence of a transition to a greater structural order in the underlying microscopic substratum (see e.g. \cite{Requ1} or \cite{Requ2},\cite{Requ3} and earlier references given there). \end{defi} In our context, where we have a double structure of an overlying classical smooth space-time manifold ST and an underlying microscopic quantum structure QST, both being correlated in a subtle way, we have to introduce some particular structural elements, for example the concept of \tit{macro observables}. As far as we know, this notion was introduced for the first time by v.Neumann in an ingenious but perhaps little known paper (see \cite{Neum1}, the second part of \cite{Neum2} or \cite{Requ4}). Due to the vastly different scales of, on the one hand, ordinary quantum matter QM and, on the other hand, quantum space-time QST, we can assume that to a \tit{macroscopic point} $x$ there belongs a huge number of DoF in QST, or \tit{local Hilbert spaces} $\mcal{H}_i$, which live in the infinitesimal neighborhood of $x$. We denote this situation by \beq \mcal{H}_x:= \bigotimes_x\, \mcal{H}_i \sim [x] \eeq where $[x]$ denotes the gravitational DoF in the infinitesimal neighborhood of $x$.
Furthermore, to a macroscopic value $g_{ab}(x)$ we can choose a corresponding subspace $[g_{ab}(x)]$ in $\bigotimes_x\,\mcal{H}_i$ containing the local vectors $\psi^g(x)$ with \beq <\psi^g\mid \hat{g}_{ab}(x)\psi^g>=g_{ab}(x) \eeq \begin{defi} Following the tradition in quantum statistical mechanics, we call $[g_{ab}(x)]$ a phase cell, which is spanned by certain basis vectors $\psi_i^g(x)$ so that \beq \psi^g(x)=\sum\, c_i\psi^g_i(x) \eeq \end{defi} Furthermore, in the class of global states $\psi^g$, we may select the subclass $[g_{ab}]$ so that \beq <\psi^g\mid \hat{g}_{ab}(x)\psi^g>=g_{ab}(x)\quad\text{for all}\; x\in ST \eeq \begin{bem} One should note that the various sets $[x]$ or $[g_{ab}(x)]$ are not necessarily disjoint in QST. There may be a certain overlap. \end{bem} We now come to a central point in our analysis. Given a global state $\psi^g$ in $\mcal{H}^g$, we can test it by sufficiently localized observables, that is, observables localized in an infinitesimal neighborhood of some macroscopic point $x$. That is, we may concentrate on state vectors from $\mcal{H}_x$ or $[g_{ab}(x)]$. More specifically, if we have a pure global state $\psi^g$ and test it with observables taken from $\mcal{B}(\mcal{H}_x)$, the bounded operators acting on $\mcal{H}_x$, we can represent the global state $\psi^g$ by a density matrix $\rho_{\psi}$ living over $\mcal{H}_x$ or $[g_{ab}(x)]$. \begin{ob} There exists a density matrix $\rho_{\psi}$ over $\mcal{H}_x$ or $[g_{ab}(x)]$ with \beq <\psi^g\mid A(x)\psi^g>=Tr(A(x)\cdot\rho_{\psi}) \eeq for observables $A(x)\in\mcal{B}(\mcal{H}_x)$. \end{ob} It is a relatively recent observation that much stronger results can in fact be derived, which lead to the thermalization results we mentioned in the introduction. To our knowledge, early results in this direction were already derived by v.Neumann in \cite{Neum1}. More recently, these phenomena were analyzed in e.g. \cite{Popescu1},\cite{Lebo1}, or \cite{Lebo2}.
In the context of \tit{decoherence by the environment} see also \cite{Requ4}. Important tools in this connection are, on the one hand, the so-called \tit{concentration of measure phenomenon} and the Levy estimates (see for example \cite{Popescu1}; a more systematic discussion can be found in \cite{Ledoux}) and, on the other hand, the use of \tit{random (unit) vectors} (that is, quantum mechanical states lying on high-dimensional unit spheres) and so-called \tit{typical states}. We will not go into the complex technical details but restrict ourselves to an application of the results to our particular situation. That is, we are given the classical order parameter field $g_{ab}(x)$, defining the class of microscopic quantum states $\psi^g\in [g_{ab}]$ with \beq <\psi^g\mid \hat{g}_{ab}(x)\psi^g>=g_{ab}(x)\quad\text{for all}\; x\in ST \eeq We observed above that locally $\psi^g$ can be replaced by a trace class operator or density matrix $\rho_{\psi}(x)$. The results derived in the above cited papers, in particular \cite{Popescu1}, now yield the following in our scenario. \begin{conclusion} For a global pure state $\psi^g\in[g_{ab}]$ it holds with very high probability that, restricted to local observables, it is equivalent to the \tit{micro canonical ensemble}, i.e. to the state \beq \Omega(A):=d^{-1}\cdot Tr(\mbf{1}\cdot A) \eeq with $d$ the dimension of $[g_{ab}]$, $\mbf{1}$ the identity operator and $A$ a local observable. The local state $\rho_{\psi}(x)$ at some arbitrary macroscopic point is equivalent to this $\Omega$ and is called the generalized canonical ensemble. In other words, this holds for a typical or randomly selected state. One should however note that $\Omega$ is a global state, while $\rho_{\psi}(x)$ is a local one. \end{conclusion} \begin{bem} In case the generalized canonical ensemble can be associated with some form of energy, the local states have the form of the true canonical ensemble known from thermodynamics. This is the reason why the corresponding local state can rightly be called a \tit{generalized canonical ensemble}.
That is, the global micro canonical ensemble behaves locally as a canonical ensemble. \end{bem} \section{The Thermal Role of the Einstein Equation} In section 2 we have argued that in an infinitesimal neighborhood of a macroscopic point $x\in ST$ quantum space-time (Q)ST has the characteristics of a local thermal system. We now want to introduce and discuss the various physical parameters which define and characterize such a thermal system. We begin with a new conceptual understanding of the EEQ. Originally the EEQ were conceived as dynamical or evolution equations of the space-time continuum under the influence of (ponderable) matter and classical fields. As mentioned in the introduction, much later Jacobson, Padmanabhan (and perhaps some other workers in the field) argued that the EEQ must hold due to the observed or conjectured thermal behavior near horizons, most notably near the event horizon of a BH (see e.g. \cite{Jacob1},\cite{Jacob2},\cite{Pad1},\cite{Pad2},\cite{Pad3}). In the following we want to argue that the EEQ carry a direct thermal meaning in the bulk of ST, i.e. not necessarily near horizons and independent of the existence of horizons or singularities. This holds also for the concept of entropy, which in that context is primarily understood as a form of \tit{entanglement entropy}, typically arising near horizons. In our context it will have the ordinary thermodynamical meaning of the number of configurational alternatives. As we said in the introduction, already in the early days (1916, 1917) e.g. Lorentz and Levi-Civita tried to give the EEQ a slightly different interpretation, which was however rejected by Einstein by an argument which was perhaps convincing at that time, with quantum (field) theory still in its infancy (cf. \cite{Pauli1}).
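The typicality results invoked in section 2 can be illustrated by a small numerical experiment (an illustrative sketch, not part of the formal argument; the dimensions and the NumPy setup are our own choice): a randomly drawn pure state on a bipartite Hilbert space has, with high probability, a reduced density matrix on the small factor close to the maximally mixed, i.e. micro canonical, state, with nearly maximal v.Neumann entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

d_loc, d_env = 4, 256   # small local system, large "environment"
d = d_loc * d_env

# Random global pure state (uniform on the unit sphere in C^d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# Reduced density matrix of the local factor: rho = Tr_env |psi><psi|
M = psi.reshape(d_loc, d_env)
rho = M @ M.conj().T

# v.Neumann entropy S = -Tr(rho ln rho), compared to the maximum ln(d_loc)
evals = np.linalg.eigvalsh(rho)
S = -sum(p * np.log(p) for p in evals if p > 1e-15)
print(S, np.log(d_loc))  # S is close to the maximal value ln 4

# Distance to the micro canonical ensemble on the local factor
dist = np.linalg.norm(rho - np.eye(d_loc) / d_loc)
print(dist)  # small for d_env >> d_loc
```

Increasing `d_env` drives the local state ever closer to the micro canonical one, in line with the concentration of measure estimates of \cite{Popescu1}.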
Lorentz and Levi-Civita observed that by bringing the energy-momentum tensor, occurring on the rhs of the EEQ, to the lhs, we get an expression which vanishes identically (see equation \ref{EEQ} in the introduction). They conjectured that this expression can be identified with the total (vanishing) energy of the universe, consisting of gravitational and matter energy. Einstein argued that the vanishing of the total energy of the universe would not prevent the material systems and the gravitational energy from annihilating each other. However, meanwhile we know that exactly the opposite process might have happened in various models of the \tit{inflationary scenario} (creation from nothing, huge quantum fluctuations etc.). We discussed this zero energy universe idea in more detail in section 2 of \cite{Requ1}. In \cite{Requ1} we introduced and described a similar but slightly different scenario. We argued that our universe, i.e. (quantum) matter, (Q)M, and (quantum) space-time, (Q)ST, emerged as a result of a primordial phase transition PT from a more primordial phase, QX: \beq QX\underset{PT}\rightarrow (Q)ST+(Q)M \eeq While the internal energy of QST is lowered compared to the original phase QX as the result of an \tit{order transition} by an amount $\Delta E$, this same amount is transferred to QM. That is, \beq E(QX)=E(QST)+E(QM) \eeq In the language of Hilbert spaces we describe the structure of our universe as a pure state in the tensor product \beq \psi\in\mcal{H}=\mcal{H}^g\otimes\mcal{H}^M \eeq with \beq \psi=\psi^g\otimes \psi^M \eeq being a state where the gravitational and the matter state exist independently of each other. This may be an approximation of a situation where e.g. BHs are absent. The BH situation was discussed in \cite{Requ1}. A thermal system should have an \tit{internal energy}.
In the introduction we formulated the following conjecture: \begin{conjecture} We assume that \beq T_{ab}(x)=\kappa^{-1}G_{ab}(x) \eeq is the heat influx at point $x$ into the local thermal state of QST coming from the external QM system. Note that this concerns only the gravitational effect of matter. Thus $\kappa^{-1}G_{ab}$ represents a form of \tit{heat tensor} which was discussed for material systems e.g. in \cite{Moeller1} or \cite{Eckart1}. \end{conjecture} That is, the EEQ is essentially a statement about the total heat influx at a point $x\in ST$ into the infinitesimal neighborhood of $x$ in QST. We now want to provide arguments why we think this is indeed the case. In \cite{Requ5} we studied thermodynamics in the regime of special relativity. While in this field there exist a variety of different approaches, in the approach we favored, i.e. with temperature being the zero component of a contravariant four-vector, \beq T=\gamma\cdot T_0\quad ,\quad \gamma=(1-u^2/c^2)^{-1/2} \eeq with $\gamma$ the Lorentz factor, $T_0$ the rest temperature in a comoving coordinate system, $u$ the 3-velocity and $T$ the temperature in the laboratory frame, we get the following interesting transformation property for the heat influx into the system: \beq \delta Q=\gamma\cdot \delta Q_0 \eeq that is, in contrast to internal energy and work, the heat influx transforms as the zero component of a contravariant 4-vector. That is, it transforms covariantly (cf. in particular section 4.4 of \cite{Requ5}). Another paradigm is BH physics and the first law of BH thermodynamics. In geometric units ($c=G=1$) it reads \beq \delta M=(8\pi)^{-1}\cdot \kappa\delta A+\Omega\delta J \eeq with $\kappa$ the \tit{surface gravity} (not to be confused with the coupling constant $\kappa$ above), $\delta A$ the change of horizon area, $\Omega$ the angular velocity of the horizon, $\delta J$ the change of angular momentum and $\delta M$ the change of the central mass.
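For the Schwarzschild case ($J=0$) the first law can be checked directly (an illustrative numerical sketch in geometric units; the Schwarzschild expressions $A=16\pi M^2$ and $\kappa=1/4M$ are standard): one finds $\delta M=(8\pi)^{-1}\kappa\,\delta A=T\,\delta S_{BH}$ to first order.

```python
import math

# Geometric units (c = G = 1). Schwarzschild black hole of mass M.
def area(M):
    # Horizon area A = 4*pi*r_s^2 with r_s = 2M
    return 16.0 * math.pi * M**2

def surface_gravity(M):
    return 1.0 / (4.0 * M)

M, dM = 1.0, 1e-6
dA = area(M + dM) - area(M)

# First law with J = 0: dM = kappa * dA / (8*pi)
lhs = dM
rhs = surface_gravity(M) * dA / (8.0 * math.pi)
print(lhs, rhs)  # agree to first order in dM

# Bekenstein-Hawking identifications: S = A/4, T = kappa/(2*pi)
S = area(M) / 4.0                          # = 4*pi*M^2
T = surface_gravity(M) / (2.0 * math.pi)   # = 1/(8*pi*M)
print(T * (area(M + dM) / 4.0 - S), dM)    # T dS matches dM: heat balance
```

This is exactly the heat-balance reading of the first law used in the text: the mass change enters entirely as $T\,\delta S$.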
The identification reads \beq \delta S_{BH}=\delta A/4\, , \, T=\kappa/2\pi\, ,\, \delta M\;\text{the change of internal energy} \eeq It is perhaps surprising that heat and entropy occupy such a prominent place in this field while the, at first glance, more natural work contributions seem to be missing. We now try to explain this on a perhaps more fundamental level. We already briefly mentioned in the introduction that, in our view, space-time consists of at least two levels: a smooth surface structure, ST, with its tangential and cotangential structure, and an underlying microscopic quantum structure, QST. This compound system is assumed to have emerged via a primordial phase transition, PT, together with a (quantum) matter component, QM. This phase transition was a transition to a greater order, that is, (Q)ST carries an extra space-time structure compared to the more primitive phase, QX, given, for example, by the existence (or non-vanishing) of the metric tensor field $g_{ab}(x)$, having the character of an order parameter field, and, in the underlying QST, the existence of certain phase cells $[g_{ab}(x)]$, containing the quantum states of the universe which lead to the same macroscopic metric tensor field $g_{ab}(x)$ in ST. Therefore we can conclude: \begin{conclusion} The primordial phase transition, PT, is accompanied by a spontaneous symmetry breaking, SSB, of diffeomorphism invariance (or covariance). That is, the more symmetric but less ordered phase, QX, goes over into the phase (Q)ST, which has lesser symmetry but higher order. \end{conclusion} \begin{bem} These points were already discussed in \cite{Requ2} and \cite{Requ3}. \end{bem} What are the consequences of these observations for our question of the non-covariance of e.g. gravitational energy and related questions? As we said above, we have a two storey structure of space-time. We argue that the smooth surface structure ST (i.e.
the macroscopic part) supports observables which are diffeomorphism invariant. The SSB, on the other hand, is located in the micro structure of the quantum regime QST. That is, we assume that the existence of quantum excitations, vacuum fluctuations and the emergence of the space-time structure $[g_{ab}(x)]$ is responsible for the breaking of diffeomorphism invariance. We conjecture the following: \begin{conjecture} We assume that quantities which live over the smooth macroscopic manifold ST are diffeomorphism covariant, i.e. behave tensorially. On the other hand, quantities which rather express properties of the underlying microscopic quantum structure QST we assume to behave non-tensorially in general. \end{conjecture} \begin{koro} As we already explained in the introduction, since the scale on which QST is living is many orders of magnitude finer than even the regime of ordinary quantum matter, the latter should also behave covariantly in general. \end{koro} \begin{conclusion} As the energy-momentum tensor $T_{ab}(x)$ lives essentially over the classical manifold ST, the heat influx $\kappa^{-1}\cdot G_{ab}(x)$ is by the same token also tensorial or covariant. On the other hand, the part of the relativistic energy-momentum (pseudo) tensor $t_{ab}(x)$ describing the gravitational energy contained in QST is non-tensorial, because it describes the contributions to gravitational energy which are contained in the underlying network of gravitational DoF making up the microscopic structure of QST. \end{conclusion} \section{The Concept of Gravitational Temperature} On general grounds we know that our local systems must have a temperature. From what we have said above, the local gravitational system around the point $x\in ST$ has an internal energy $U(x)$ and an entropy $S(x)$. In section 2 we showed that the system at $x\in ST$ can be considered as a \tit{generalized canonical ensemble}.
This entails that we can define its v.Neumann entropy \beq S(x):= -Tr\,(\rho_x\cdot \ln\, \rho_x)=-\sum_i\, p_i\ln\,p_i \eeq with $p_i$ the probabilities of the individual states $\psi_i(x)$. Therefore we can (in principle) calculate \beq T(x)=\partial U(x)/\partial S(x)\;\text{with}\; V(x)\;\text{held fixed} \eeq ($V(x)$ being some infinitesimal volume element around $x$). We will complement this somewhat abstract definition of gravitational temperature by two different, more concrete approaches. Firstly, we discuss the Tolman-Ehrenfest approach (cf. e.g. \cite{Tolman1},\cite{Tolman2}), which was originally introduced for thermal material systems living in ST. For the convenience of the reader we will briefly recapitulate our own treatment, which we gave in \cite{Requ6}. \begin{bem} In order to apply the Tolman-Ehrenfest results to purely gravitational systems, that is, systems not consisting of ordinary material constituents, we must first motivate that these systems have a thermal character and consist of microscopic constituents. This is what we have done in this paper up to now. \end{bem} The Tolman-Ehrenfest result is the following: \begin{ob} In thermal equilibrium in a static gravitational field we have for an isolated system \beq T(x)\cdot\sqrt{-g_{00}(x)}= const \eeq I.e., in contrast to the non-relativistic regime (cf. sect.2), there exists in general a temperature gradient in a system being in thermal equilibrium in the relativistic regime. \end{ob} To derive this result we use the \tit{entropy maximum principle}. We shall use, for reasons of simplicity, the weak field expansion of the gravitational field. With $\phi(x)$ the Newtonian gravitational potential we have \beq \sqrt{-g_{00}}=(1+2\phi/c^2)^{1/2}\approx 1+\phi/c^2 \eeq \begin{bem} Note that the gravitational potential is negative and is usually assumed to vanish at infinity.
\end{bem} We assume an isolated macroscopic system to be in thermal equilibrium in such a static weak gravitational field. Its total entropy and internal energy depend on the gravitational field $\phi(x)$. We now decompose the large system into sufficiently small subsystems so that the respective thermodynamic variables can be assumed to be essentially constant within each subsystem. As the entropy is an \tit{extensive} quantity, we can write \beq S(\phi)=\sum_i S_i(E^0_i,V_i,N_i) \eeq where $E^0_i$ is the thermodynamical \tit{internal energy}, not including the respective \tit{potential energy}. \begin{ob} It is important that in the subsystems the explicit dependence on the gravitational potential has vanished. The entropy of a subsystem depends only on its thermodynamical variables, the values of which are of course functions of the position of the respective subsystem in the field $\phi(x)$. \end{ob} At its maximum the total entropy is stationary under infinitesimal redistributions of the internal energies $E^0_i$, with the total energy and the remaining thermodynamic variables kept constant. We now envisage two neighboring subsystems, denoted by (1) and (2). To be definite, we take $\phi_2\geq \phi_1$. We now transfer an infinitesimal amount of internal energy $dE_2^0$ from (2) to (1) (note that it consists of pure heat, as for example the particle numbers remain unchanged by assumption). As heat relativistically has weight, it gains on its way an extra amount of potential energy.
\begin{ob} By transferring $dE_2^0$ from (2) to (1) we gain an additional amount of gravitational energy \beq dE_2^0\cdot\Delta\phi/c^2 \quad ,\quad \Delta\phi=\phi_2-\phi_1 \eeq It is important to realize that this gravitational energy has to be transformed from mechanical energy into heat energy or, rather, internal energy and reinjected in this form into system (1) (for example by a stirring mechanism acting on system (1) and being propelled by the quasistatic fall of the energy $dE_2^0$). \end{ob} The energy balance equation now reads \beq dE_1^0=dE_2^0+dE_2^0\cdot\Delta\phi/c^2=dE_2^0(1+\Delta\phi/c^2) \eeq We then have (with $dS_1=-dS_2$ in equilibrium and $dS_2=-dE_2^0/T_2$) \beq T_1^{-1}=dS_1/dE_1^0=-dS_2/\left(dE_2^0(1+\Delta\phi/c^2)\right)=T_2^{-1}\cdot (1+\Delta\phi/c^2)^{-1} \eeq that is: \begin{conclusion} It holds \beq T_1=T_2(1+\Delta\phi/c^2)\quad ,\quad \Delta\phi:=\phi_2-\phi_1\geq 0 \eeq \end{conclusion} \begin{ob} The subsystem (1), having a lower potential energy than (2), has the higher temperature. \end{ob} We can give the above relation another, more covariant form. In the approximation we are using it holds, to first order in $\phi/c^2$: \beq \frac{\sqrt{1+2\phi_2/c^2}}{\sqrt{1+2\phi_1/c^2}}=\frac{1+\phi_2/c^2}{1+\phi_1/c^2}=\frac{1+(\phi_1+\Delta\phi)/c^2}{1+\phi_1/c^2}=1+\Delta\phi/c^2 \eeq \begin{conclusion}[Covariant form] It holds \beq T_1\cdot\sqrt{-g_{00}(1)}=T_2\cdot\sqrt{-g_{00}(2)}=const \eeq \end{conclusion} The non-infinitesimal result follows by using a sequence of infinitesimal steps. It is perhaps useful to derive the above result in yet another, slightly different way. In case $\phi(x)$ vanishes at infinity we can apply the entropy-maximum principle as follows. We extract the energy (bringing it to infinity) \beq dE_2=dE_2^0+dE_2^0\cdot \phi_2/c^2 \eeq from subsystem (2) and reinject the energy \beq dE_2=dE_1=dE_1^0+dE_1^0\cdot \phi_1/c^2 \eeq from infinity into (1).
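The equivalence of the first-order relation $T_1=T_2(1+\Delta\phi/c^2)$ with the covariant Tolman form $T\sqrt{-g_{00}}=const$ can be checked numerically (an illustrative sketch; the chosen heights and temperature are arbitrary):

```python
import math

c = 2.99792458e8  # speed of light in m/s

# Two heights in a uniform field g; potentials phi = g*h, with phi_2 > phi_1
g, h1, h2 = 9.81, 0.0, 1000.0
phi1, phi2 = g * h1, g * h2
T2 = 300.0  # temperature of the upper subsystem in K (arbitrary choice)

# First-order result of the entropy-maximum argument: T1 = T2*(1 + dphi/c^2)
T1 = T2 * (1.0 + (phi2 - phi1) / c**2)
print(T1 - T2)  # the lower subsystem is very slightly hotter

# Covariant Tolman form: T*sqrt(-g_00) with -g_00 = 1 + 2*phi/c^2
lhs = T1 * math.sqrt(1.0 + 2.0 * phi1 / c**2)
rhs = T2 * math.sqrt(1.0 + 2.0 * phi2 / c**2)
print(lhs, rhs)  # equal to first order in phi/c^2
```

The temperature difference over a kilometer in the terrestrial field is of order $10^{-11}\,$K, which is why the effect plays no role in ordinary laboratory thermodynamics.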
We get \beq dE_1^0=dE_2^0\cdot\frac{1+\phi_2/c^2}{1+\phi_1/c^2} \eeq and \beq T_1^{-1}=dS_1/dE_1^0=T_2^{-1}\cdot\left(\frac{1+\phi_2/c^2}{1+\phi_1/c^2}\right)^{-1} \eeq hence \beq T_1\cdot (1+\phi_1/c^2)=T_2\cdot (1+\phi_2/c^2) \eeq As an example one may mention the Rindler/Unruh space-time. In Rindler coordinates we have \beq g_{00}=-\xi^2\; ,\; a=\xi^{-1}\; ,\; T=a/2\pi \eeq i.e. \beq T\sqrt{-g_{00}}=(2\pi)^{-1}=const \eeq \begin{bem} One should mention that this method of going to $\infty$ only makes sense if the thermal system extends to $\infty$ as well. \end{bem} In a second approach we will employ the Fulling-Unruh observation of thermalisation in accelerated frames of reference (local Rindler frames were introduced, for example, by Padmanabhan in e.g. \cite{Pad2}). We will however use it in a slightly different way compared to Jacobson or Padmanabhan. For reasons of convenience we will choose a static space-time ST and restrict ourselves to a sufficiently small neighborhood of some arbitrary point $x\in ST$. We choose a local inertial frame (LIF) at $x$ with Lorentz-orthonormal coordinates $(X,T)$ so that $x$ has the coordinates $(0,0)$. We assume that the LIF moves along a certain geodesic through $x$. We place a thermometer system at the point $x$ which is gauged so that it yields the result zero in a LIF. However, we assume that the thermometer is in principle sensitive to the thermal excitations of the underlying microscopic gravitational system. The thermometer now experiences the gravitational field $g_{ab}(x)$ at the macroscopic point $x\in ST$. Relative to the LIF (moving on a geodesic through $x$) and employing the inertial coordinates $(X,T)$, an observer at the point $x$ (and the thermometer) experiences an acceleration $a(X,T)$ (the detailed calculations, which are a bit intricate, can be found in \cite{Moeller1}, the exercise in section 9.6). One should however note that everything we have said holds only locally.
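The Rindler/Unruh example mentioned above can be verified in the same spirit (an illustrative sketch in geometric units with $\hbar=c=k_B=1$): the Tolman product $T\sqrt{-g_{00}}$ is independent of the Rindler coordinate $\xi$.

```python
import math

def unruh_T(xi):
    # The Rindler observer at coordinate xi has proper acceleration
    # a = 1/xi; its local (Unruh) temperature is T = a/(2*pi)
    # (geometric units, hbar = c = k_B = 1).
    return (1.0 / xi) / (2.0 * math.pi)

# Tolman product T * sqrt(-g_00) with g_00 = -xi^2, i.e. sqrt(-g_00) = xi:
products = [unruh_T(xi) * xi for xi in (0.5, 1.0, 2.0, 10.0)]
print(products)  # each entry equals 1/(2*pi)
```

The hotter observers sit at small $\xi$, i.e. deeper in the effective potential, exactly as in the weak-field argument above.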
But locally, near the origin, we can switch from inertial coordinates $(X,T)$ to so-called Rindler coordinates $(x_R,t_R)$. \beq T=x_R\cdot \sinh\,(\kappa t_R)\; ,\; X=x_R\cdot\cosh(\kappa t_R) \eeq with $\kappa$ the corresponding proper acceleration. In full Rindler space, i.e. the right wedge $W_R$, we know from the work of Fulling (\cite{Full1}) and Unruh (\cite{Un1}) that the thermometer will detect thermal Rindler modes. In our case the situation is Rindler-like only locally, i.e. in a neighborhood of $x$ or $(X,T)=(0,0)$. Therefore we cannot expect to have such fully developed Rindler modes. However we know already that ST at $x$ or $(X,T)=(0,0)$ behaves microscopically as a thermal system, that is, as a so-called generalized canonical ensemble, hence carrying a distribution of local thermal excitations. Therefore we make the conjecture: \begin{conjecture} The observer at $x$ or $(X,T)=(0,0)$ will detect a certain thermal excitation spectrum, consisting of certain quasiparticles (which may be approximations of corresponding Rindler modes), leading to a corresponding local temperature. \end{conjecture} \section{Diffeomorphism Invariance and its Conserved Noether Current} In section 3 we discussed the consequences of the possible SSB of diffeomorphism invariance due to the emergence of quantum DoF in the underlying micro structure of ST, i.e. QST, as a result of the primordial phase transition PT. We argued that we may have two classes of observables: one class of covariant observables which live above the macroscopic space-time manifold ST, and another class, which we assume to consist of quantities which rather describe properties of the underlying micro structure within QST. We argued that the elements of this latter class need not behave covariantly under geometric transformations of the surface structure ST. A prominent example in this context is the concept of gravitational energy.
Another important role will be played by the \tit{work terms} which occur in the complete conserved Noether current we will derive below. Their covariance comes about through the vector field $\xi^i(x)$, which enters via the Lie derivative, a covariant operation. But before we enter into the technical details, we want to recapitulate what we said in the introduction concerning the work of Hilbert, Noether, Klein etc. (\cite{Hilbert1},\cite{Klein1},\cite{Klein2},\cite{Noether1}). We were surprised to see that Hilbert, after a long and intricate calculation, essentially obtained a conservation law for what he considered to be the gravitational \tit{energy vector}. This energy vector contained an arbitrary vector field $\xi^i$, the deeper role of which was not really understood and appreciated at that time, while technically it is a consequence of E.Noether's \tit{second theorem}. In our view Hilbert already performed calculations which were much later repeated in a similar context by authors apparently unaware of the earlier results. Below we will explain the physical role of the arbitrary vector field $\xi^i$, while its mathematical role is clear: it is simply the vector field which generates the diffeomorphism group and occurs in the Lie derivative. In traditional physicist's notation: \beq x^i\to \overline{x}^i=x^i+\epsilon\xi^i(x) \eeq with $\epsilon$ infinitesimal. These intensive investigations and discussions performed in the Hilbert group were carefully studied in two beautiful essays by Rowe and Sauer (\cite{Rowe1},\cite{Sauer1}). We now give a brief description of the sequence of steps which leads to the form of the conserved Noether current deriving from the assumption of diffeomorphism invariance. We will perform most of these steps in an appendix.
Our main motivation for this is the observation that in many presentations of this material important and \tit{nonvanishing} boundary terms are frequently dropped, which then yields only partial results. \begin{bem}If for example certain boundary terms are dropped one gets only the \tit{contracted Bianchi identity} instead of the full conserved current. This is in our view dangerous because, in contrast to ordinary variations, diffeomorphisms typically need not vanish at infinity. Furthermore, we will show that variations having a local support lead to results which differ in some important respects. \end{bem} In the following we restrict ourselves to a variation of the Hilbert-Einstein action \beq S[g_{ab}]=\int\,R(g_{ab})\cdot \sqrt{-g}\,d^4x \eeq with $R$ the scalar curvature \beq R:=R_a^a\;\text{with}\;R_{ab}\;\text{the Ricci tensor}\; R_{ac}:=R_{abc}^b \eeq the rhs being the Riemann curvature tensor. $g$ is the determinant of the metric tensor. $R$ is a scalar and $\sqrt{-g}\,d^4x$, the \tit{canonical volume element}, is an invariant under coordinate transformations or diffeomorphisms. Hence $S$ is invariant under diffeomorphisms \beq \phi_{\lambda}:\,x\to\,x(\lambda)\;\text{or infinitesimal:}\;x\to\bar{x}=x+d\lambda\,\xi(x) \eeq where the vector field $\xi(x)$ induces the flow of the diffeomorphism group $\phi_{\lambda}$. If $T(x)$ is some arbitrary tensor field, we construct a $\lambda$-dependent tensor field in the following way. We shift the tensor at $x(\lambda)$ back to the point $x$ \beq T_{\lambda}(x):=\phi^*_{-\lambda}(T(x_{\lambda})) \eeq with $\phi^*_{-\lambda}=(\phi^*_{\lambda})^{-1}$ the map induced by $\phi_{\lambda}:x\to x(\lambda)$. Now all these $T_{\lambda}(x)$ are defined at the same point $x$ and we can take the derivative with respect to $\lambda$ at $\lambda=0$.
We get \beq 0=d\,S[g_{ab}]/d\,\lambda=\int\,d/d\lambda\,\mcal{L}(g_{ab}(x;\lambda))\,d^4x \eeq where in the following the derivative is taken always at $\lambda=0$ and $\mcal{L}$ is the scalar density $R(g_{ab})\cdot \sqrt{-g}$. \begin{bem}Technical details can be found, for example, in \cite{Wald5}, Appendix C. \end{bem} \begin{ob}The derivative with respect to $\lambda$ at $\lambda=0$ is nothing but the Lie derivative, $\mcal{L}_{\xi}$, of the tensor $T$ or more general geometric objects. As with ordinary derivatives, the Lie derivative obeys the Leibniz rule. \end{ob} \begin{defi}In the following we abbreviate \beq \mcal{L}_{\xi}(T)(x):=\delta T(x) \eeq that is, in particular \beq d/d\lambda\,g_{ab}(x;\lambda)=\mcal{L}_{\xi}\, g_{ab}(x)=\delta g_{ab}(x) \eeq \end{defi} It is an important observation that, as the Lie derivative is defined for general differentiable manifolds, it is independent of the concept of covariant derivative. It can hence be expressed, on the one hand, with the help of partial derivatives, on the other hand, by means of an arbitrary covariant derivative operator. For example with the help of the covariant derivative induced by $g_{ab}(x)$: \beq \mcal{L}_{\xi}\,g_{ab}=\nabla_a\xi_b+\nabla_b\xi_a=\xi^c\partial_c g_{ab}+g_{cb}\partial_a\xi^c+g_{ac}\partial_b\xi^c \eeq While the Lie derivative for tensor fields is canonically given via $\phi^*_{-\lambda}(T(x_{\lambda}))$ as \beq \mcal{L}_{\xi}(T)(x):=\lim_{\lambda\to 0}\;(\phi^*_{-\lambda}(T(x_{\lambda}))-T(x))/\lambda \eeq one has to say some words in the case of e.g. densities. \begin{bem}A scalar density like $\mcal{L}(x)$ becomes an invariant if multiplied by the volume element $d^4x$. If we have a scalar density at point $x(\lambda)$, its translate back to $x$ has to be again a scalar density so that e.g. $\mcal{L}\,d^4(x_{\lambda})$ remains invariant.
\end{bem} What we have said in the remark allows us to fix the necessary transformation properties and define the Lie derivative of a scalar density. \begin{lemma}We have \beq \mcal{L}_{\xi}(\sqrt{-g}\cdot R)=-\sqrt{-g}\nabla_a(R\cdot\xi^a)=-\partial_a(R\cdot \xi^a\cdot\sqrt{-g}) \label{55} \eeq \end{lemma} We are now going to describe the strategy which will lead to the derivation of a conserved current following from diffeomorphism invariance. We shall however not follow the perhaps more obvious strategy of exploiting the above integral expression for $S[g_{ab}]$ but will use a local approach, as is done for example in \cite{Bjorken} in the case of the ordinary energy-momentum tensor conservation law following from 4-translation invariance in quantum field theory, and as is also used in \cite{Pad1}. Our approach will consist of essentially two steps. i) We take the Lagrange density $\mcal{L}[g_{ab}(x)]$, which is a lengthy expression in $g_{ab}(x)$ and its first and second partial derivatives, and, using the Leibniz rule and the fact that partial derivatives and the Lie derivative commute, represent its Lie derivative as a long expression ultimately consisting of terms containing the Lie derivative of the basic building blocks $g_{ab}(x)$, which we derived above. Note that we use the abbreviation \beq d/d\lambda\, g_{ab}(x;\lambda)=\mcal{L}_{\xi}(g_{ab}(x))=:\delta g_{ab}(x) \eeq Diffeomorphism invariance enters in the way that $\mcal{L}[g_{ab}]$ does not explicitly depend on the coordinates but only via the field $g_{ab}$ and its derivatives. That is, this approach exploits the structural form of the Lagrange density. ii) In a second step we simply directly calculate the Lie derivative of the density $R\sqrt{-g}$ as described in formula (\ref{55}). We then equate the numerically identical expressions and bring them to the same side, thus getting a vanishing expression which can be written as a conserved current.
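The coordinate expression for the Lie derivative used in step i), $\mcal{L}_{\xi}g_{ab}=\nabla_a\xi_b+\nabla_b\xi_a=\xi^c\partial_c g_{ab}+g_{cb}\partial_a\xi^c+g_{ac}\partial_b\xi^c$, can be verified symbolically. The following sketch checks the identity componentwise for a two-dimensional example; the metric and vector field are arbitrary polynomial test data, not taken from the text:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
X = [x0, x1]
n = 2

# arbitrary symmetric 2d metric g_ab and vector field xi^a (test data only)
g = sp.Matrix([[1 + x0**2, x0*x1], [x0*x1, 2 + x1**2]])
xi_up = [x1**2, x0*x1]

ginv = g.inv()

# Christoffel symbols of the Levi-Civita connection, Gamma^c_{ab}
Gamma = [[[sp.Rational(1, 2)*sum(ginv[c, d]*(sp.diff(g[d, a], X[b])
          + sp.diff(g[d, b], X[a]) - sp.diff(g[a, b], X[d])) for d in range(n))
          for b in range(n)] for a in range(n)] for c in range(n)]

# lower the index: xi_a = g_ab xi^b
xi_dn = [sum(g[a, b]*xi_up[b] for b in range(n)) for a in range(n)]

def cov(a, b):
    """Covariant derivative nabla_a xi_b."""
    return sp.diff(xi_dn[b], X[a]) - sum(Gamma[c][a][b]*xi_dn[c] for c in range(n))

for a in range(n):
    for b in range(n):
        lhs = cov(a, b) + cov(b, a)  # nabla_a xi_b + nabla_b xi_a
        rhs = (sum(xi_up[c]*sp.diff(g[a, b], X[c]) for c in range(n))
               + sum(g[c, b]*sp.diff(xi_up[c], X[a]) for c in range(n))
               + sum(g[a, c]*sp.diff(xi_up[c], X[b]) for c in range(n)))
        assert sp.simplify(lhs - rhs) == 0
```

The same computation goes through in any dimension and for any invertible symmetric metric; only the range $n$ and the input data change.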
Now, using formula (\ref{62}) in the appendix and the contracted Bianchi identity, we get for the variation of $\mcal{L}[g_{ab}]$: \beq \delta\mcal{L}[g_{ab}]=\sqrt{-g}\cdot\nabla^a(v_a+2G_{ab}\xi^b) \eeq with \beq v^a=\nabla^b(\nabla^a\xi_b+\nabla_b\xi^a)-g^{cd}\nabla^a(\nabla_c\xi_d+\nabla_d\xi_c) \eeq (some technical remarks: \beq \nabla^a:=g^{ab}\nabla_b\quad\text{and}\quad \nabla^bg^{ac}=0=\nabla_bg_{ac} \eeq for the Levi-Civita connection). Now using in the second step the direct Lie derivative of the Lagrange density $R\cdot\sqrt{-g}$, which we already wrote down above (see formula (\ref{55})), we arrive at the conserved Noether current. \begin{satz}The Lie derivative of $R\cdot\sqrt{-g}$, derived in the two ways described above, yields the conserved Noether current \beq 0=\nabla_a(R\xi^a+2G^{ab}\xi_b+v^a) \eeq with \beq v^a=\nabla^b(\nabla^a\xi_b+\nabla_b\xi^a)-g^{cd}\nabla^a(\nabla_c\xi_d+\nabla_d\xi_c) \eeq \end{satz} We can rewrite this formula to get a slightly different result. With \beq 2G^{ab}\xi_b=2R^{ab}\xi_b-Rg^{ab}\xi_b\quad,\quad g^{ab}\xi_b=\xi^a \eeq we get \begin{koro}A variant of the above result is \beq 0=\nabla_a j^a\quad ,\quad j^a=(2R^{ab}\xi_b+v^a) \eeq \end{koro} \section{The Thermal Meaning of the Conserved Noether Current} We now want to come back to the interpretation of the conserved Noether current we have derived above. We mentioned in the introduction that Hilbert and his colleagues had great difficulties understanding the role the arbitrary vector field $\xi^i(x)$ plays in this expression. As we have an energy-momentum tensor $T_{ab}$, playing a fundamental role in the theory, the occurrence of a conserved energy vector (as Hilbert liked to call it) like our $j^i(x)$ was puzzling.
In this context we remind the reader of what we said in section 3 concerning the role of tensorial or covariant quantities compared to non-tensorial or non-covariant quantities, the former referring to properties of the smooth classical surface structure ST, the latter ones referring to the underlying quantum mechanical micro structure QST. If we assume the vector field $\xi^i(x)$ to be chosen timelike, we can associate it with the orbits of observers or measuring devices. Then expressions like \beq T_{ab}(x)\xi^b(x)\quad \text{or}\quad G_{ab}(x)\xi^b(x) \eeq represent (energy) flows as observed or measured by the respective moving observers. That is, they have an objective (geometric) quality. In this sense they should have a covariant tensorial character and can therefore combine into a covariant conserved current. \begin{ob}The (timelike) vector fields $\xi^i(x)$ can be assumed to be the orbits of observers, floating through space-time. \end{ob} In section 3 we argued that \beq G_{ab}(x)=\kappa\cdot T_{ab}(x) \eeq is a statement about gravitational heat energy influx at the macroscopic space-time point $x$. Correspondingly the first part of the conserved Noether current describes the flow of heat energy contributing to the internal energy of the gravitational system. We now analyze the physical meaning of the second part $v^i(x)$. \beq v^a=\nabla^b(\nabla^a\xi_b+\nabla_b\xi^a)-g^{cd}\nabla^a(\nabla_c\xi_d+\nabla_d\xi_c) \eeq If $\xi^i(x)$ is a Killing vector field, that is, if it induces a symmetry of the metric tensor, \beq \mcal{L}_{\xi}g_{ab}(x)=0\quad\text{or}\quad \nabla_a\xi_b+\nabla_b\xi_a=0 \eeq and using \beq \nabla_b\xi^a=\nabla_b(g^{ac}\xi_c)\; ,\; \nabla^a\xi_b=g^{ac}\nabla_c\xi_b\; ,\; \nabla_bg^{ac}=0 \eeq we get: \begin{satz}If $\xi^i(x)$ is a Killing vector field we have $v^i(x)=0$.
\end{satz} In the preceding sections we argued that QST is a thermal system, possessing at each macroscopic space-time point $x$ the local state functions \tit{internal energy} and \tit{entropy} as well as a notion of local heat influx given by $T_{ab}(x)$ or $G_{ab}(x)$. It therefore suggests itself to regard the vector field $v^i(x)$, which consists mainly of contributions built from the metric $g_{ab}(x)$ and the vector field $\xi^i(x)$, as comprising work terms. \begin{conjecture}We assume that the vector field $v^i(x)$ contains the work terms of our gravitational system QST. It contains essentially the effects of 4-volume changes (compressions and decompressions). \end{conjecture} \begin{ob}This interpretation is supported by the above result that $v^i(x)=0$ for Killing vector fields inducing geometric symmetries, $\mcal{L}_{\xi}g_{ab}=0$, i.e., essentially no 4-volume changes. \end{ob} In the introduction we mentioned the work of Sakharov. \tit{Induced Gravity} is assumed to result from the deformation of the structure of vacuum fluctuations by curvature. Our working philosophy above is a related one. Locally compressing or decompressing QST affects the local level structure of the thermal system at point $x$ and is thus a kind of work done on the system. We would like to make some remarks concerning a point which has irritated many researchers. It is frequently argued that mere coordinate transformations can completely alter the numerical values of quantities like energy or work, even make them vanish in case we are dealing with non-tensor quantities. We already dealt with such problems in \cite{Requ4} in the context of special relativity (SR), in particular the question whether volume changes due to Lorentz contraction have to be included in thermodynamic work terms. Some researchers have the attitude to consider Lorentz contraction as not being real, whatever that actually means. We think Pauli in \cite{Pauli1} sect.5 made this point particularly clear.
He argues that the atomic physics underlying the contraction of a measuring rod is complicated but has to obey Lorentz covariance. Therefore Lorentz contraction, in his view, is, on the one hand, an objective process, but, as it is at the same time a result of Lorentz symmetry, it can as well be explained with the help of the general Lorentz invariance of SR. The same is true, in our view, in the case of curvature effects on the micro structure of QST. Furthermore, we can choose at each space-time point $x$ a local geodesic coordinate system in which SR does hold, thus establishing the close relatedness of SR and GR. As far as pure coordinate transformations are concerned, there are basically two possibilities. On the one hand, they may be related to concrete changes of reference systems, to which our above remarks apply. On the other hand, they may not be implementable by concrete reference systems. In that case the transformation behavior should be regarded as following from consistency requirements. \section{Appendix: The Conserved Noether Current} We begin with the calculation of the variation of $R[g_{ab}]\cdot\sqrt{-g}$, that is, reducing $\delta(R[g_{ab}]\cdot\sqrt{-g})$ to an expression which contains only terms like $\delta g_{ab}$, remembering that $\delta$ denotes the Lie derivative or $d/d\lambda$ at $\lambda=0$. Furthermore we use the above expression for the Lie derivative \beq \mcal{L}_{\xi}\,g_{ab}=\nabla_a\xi_b+\nabla_b\xi_a=\xi^c\partial_c g_{ab}+g_{cb}\partial_a\xi^c+g_{ac}\partial_b\xi^c \eeq As $R$ is a relatively complex expression in the variables $g_{ab}$ and its first and second partial derivatives, the calculations are, as is often the case in this context, lengthy and a little bit intricate. We have (see \cite{Wald5} p.453) \beq \delta(R\cdot \sqrt{-g})=\sqrt{-g}(\delta R_{ab})g^{ab}+\sqrt{-g}R_{ab}\delta g^{ab}+R\delta(\sqrt{-g}) \eeq with $R_{ab}$ the Ricci tensor.
Furthermore (\cite{Wald5} p.185) \beq g^{ab}\delta R_{ab}=\nabla^av_a\quad ,\quad v_a=\nabla^b(\delta g_{ab})-g^{cd}\nabla_a(\delta g_{cd}) \eeq and (see \cite{Wald5} p.453, \cite{Pauli1} section 23) \beq \delta(\sqrt{-g})=+1/2\cdot\sqrt{-g}g^{ab}\delta g_{ab}=-1/2\cdot \sqrt{-g}g_{ab}\delta g^{ab} \eeq \beq (0=\delta(g^{ab}g_{ab})=g^{ab}\delta g_{ab}+\delta g^{ab}g_{ab}) \eeq \begin{conclusion}The above formulas yield \beq \label{62} \delta\mcal{L}(g_{ab})=(R_{ab}-1/2Rg_{ab})\delta g^{ab}\sqrt{-g}+\nabla^av_a\sqrt{-g} \eeq \end{conclusion} \begin{ob}$G_{ab}=(R_{ab}-1/2Rg_{ab})$ is called the Einstein tensor, with $G_{ab}=8\pi\cdot T_{ab}$. It fulfills the contracted Bianchi identity $\nabla^aG_{ab}=0$, which can also be derived from diffeomorphism invariance if one neglects boundary terms (see \cite{Feynman1} p.138). \end{ob}
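The determinant variation used above is an instance of Jacobi's formula, $\delta\sqrt{-g}=\tfrac{1}{2}\sqrt{-g}\,g^{ab}\delta g_{ab}$. A minimal numerical sketch (the metric and perturbation below are arbitrary test data of Lorentzian signature) compares the exact change of $\sqrt{-g}$ under a small symmetric perturbation with this first-order prediction:

```python
import math

# 2x2 symmetric "metric" of Lorentzian signature and a small symmetric
# perturbation delta g_ab (both arbitrary test data)
g  = [[-2.0, 0.3], [0.3, 1.5]]
dg = [[0.01, -0.004], [-0.004, 0.02]]

def det(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

d = det(g)                                  # d < 0, so sqrt(-g) is real
ginv = [[ g[1][1]/d, -g[0][1]/d],
        [-g[1][0]/d,  g[0][0]/d]]           # inverse metric g^{ab}

gp = [[g[a][b] + dg[a][b] for b in range(2)] for a in range(2)]
delta_sqrt = math.sqrt(-det(gp)) - math.sqrt(-d)   # exact change of sqrt(-g)

# first-order prediction: (1/2) sqrt(-g) g^{ab} delta g_ab
trace = sum(ginv[a][b]*dg[a][b] for a in range(2) for b in range(2))
predicted = 0.5*math.sqrt(-d)*trace

assert abs(delta_sqrt - predicted) < 1e-3
```

The agreement is first order in the perturbation; halving $\delta g_{ab}$ roughly quarters the residual.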
\section{Introduction} Let $\theta$ be a transcendental complex number. The function $\Phi:\N\times \R_{>0}\longrightarrow \R_{>0}$ is a transcendence measure for $\theta$ if, for any sufficiently large positive integer $m$, any sufficiently large positive real number $H$ and any nonzero polynomial $P(z)\in \Z[z]$ with ${\rm{deg}}(P)\le m$ and ${\rm{H}}(P)\le H$, we have $${\rm{exp}}(-\Phi(m,H)) \le |P(\theta)|.$$ Let $\alpha$ be an algebraic number different from $0$ and $1$. Then the complex number ${\rm{log}}(\alpha)$ is transcendental. A great deal of work has already been done on finding transcendence measures for the values of logarithms; see, for example, Mahler \cite{M1},\cite{M2},\cite{M3}, Gel'fond \cite{G}, Feldman \cite{F}, Cijsouw \cite{C}, Reyssat \cite{R} and Waldschmidt \cite{W1}. In \cite{N-W}, Nesterenko-Waldschmidt gave the following transcendence measure for values of the logarithm. \begin{theorem} $($\rm{\cite[Theorem $6.$ $1)$]{N-W}}$)$ \label{N-W} Let $\alpha$ be an algebraic number, $\alpha\neq 0,1$. Then there exists an effectively computable positive number $C=C(\alpha)$, depending only on $\alpha$ and the determination of the logarithm of $\alpha$, such that if $P(z)\in \Z[z] \setminus\{0\}$, ${\rm{deg}}P\le m, {\rm{L}}(P)\le L$, then \begin{align} \label{N-W lower bound} |P({\rm{log}}(\alpha))|\ge {\rm{exp}}\left(-Cm^2({\rm{log}}(L)+m{\rm{log}}(m))(1+{\rm{log}}(m))^{-1}\right) \end{align} where ${\rm{L}}(P)=\sum_{i=0}^m|a_i|$ if $P(z)=\sum_{i=0}^m a_iz^{i}$. \end{theorem} The purpose of the present article is to give an improvement of Theorem $\ref{N-W}$ for algebraic numbers $\alpha$ which are sufficiently close to $1$ and to give a $p$-adic version of the result. The main ingredient in the proof of these results is Hermite-Pad\'{e} approximation of the exponential and logarithm functions. \section{Notations and main results} We collect some notations which we use throughout this article.
For a prime number $p$, we denote the $p$-adic number field by $\Q_p$, the $p$-adic completion of a fixed algebraic closure of $\Q_p$ by $\C_p$ and the normalized $p$-adic valuation on $\C_p$ by $$|\cdot|_p:\C_p\longrightarrow \R_{\ge0}, \ |p|_p=p^{-1}.$$ We fix an algebraic closure of $\Q$ and denote it by $\overline{\Q}$. We define the denominator function by $${\rm{den}}:\overline{\Q}\longrightarrow \N, \ \alpha\mapsto \min\{n\in\N\mid n\alpha \ \text{is an algebraic integer}\}.$$ We fix embeddings $\sigma:\overline{\Q}\hookrightarrow \C$ and $\sigma_{p}:\overline{\Q}\hookrightarrow \C_p$. For an algebraic number field $K$, we consider $K$ as a subfield of $\overline{\Q}$ and denote the ring of integers of $K$ by $\mathcal{O}_K$. For $\alpha\in K$, we write $\sigma(\alpha)=\alpha$, $\sigma_{p}(\alpha)=\alpha$ and denote the set of conjugates of $\alpha$ by $\{\alpha^{(k)}\}_{1\le k \le [K:\Q]}$, with $\alpha^{(1)}=\alpha$ and, if $\sigma(K)\not\subset \R$, $\alpha^{(2)}$ the complex conjugate of $\alpha$. We denote the set of places of $K$ (resp. infinite places, finite places) by $M_{K}$ (resp. $M^{\infty}_{K}$, $M^{f}_{K}$). For $v\in M_{K}$, we denote the completion of $K$ with respect to $v$ by $K_v$. For $v\in M_{K}$, we define the normalized absolute value $| \cdot |_v$ as follows: \begin{align*} &|p|_v:=p^{-\tfrac{[K_v:\Q_p]}{[K:\Q]}} \ \text{if} \ v\in M^{f}_{K} \ \text{and} \ v|p,\\ &|x|_v:=|\sigma_v x|^{\tfrac{[K_v:\R]}{[K:\Q]}} \ \text{if} \ v\in M^{\infty}_{K}, \end{align*} where $\sigma_v$ is the embedding $K\hookrightarrow \C$ corresponding to $v$. Then we have the product formula \begin{align*} \prod_{v\in M_{K}} |\xi|_v=1 \ \text{for} \ \xi \in K\setminus\{0\}. \end{align*} \ Let $m$ be a natural number and $\boldsymbol{\beta}:=(\beta_0,\ldots,\beta_m) \in K^{m+1} \setminus\{\bold{0}\}$. We define the absolute height of $\boldsymbol{\beta}$ by \begin{align*} &\mathrm{H}(\boldsymbol{\beta}):=\prod_{v\in M_{K}} \max\{1, |\beta_0|_v,\ldots,|\beta_m|_v\}.
\end{align*} Note that, for $\boldsymbol{\beta}=(\beta_0,\ldots,\beta_{m})\in \mathcal{O}^{m+1}_K \setminus\{\bold{0}\}$, we have $\mathrm{H}(\boldsymbol{\beta})=\prod_{v\in M^{\infty}_{K}} \max\{1, |\beta_0|_v,\ldots, |\beta_m|_v\}$ and \begin{align} \label{important equal} \prod_{k=1}^{[K:\Q]} \max\{1, |\beta^{(k)}_0|,\ldots, |\beta^{(k)}_m|\}=\mathrm{H}(\boldsymbol{\beta})^{[K:\Q]}. \end{align} \ Let ${\rm{log}}:\C\setminus\R_{\le 0}\longrightarrow \C$ be the principal value logarithm function and ${\rm{log}}_p:\C_p\setminus\{0\} \longrightarrow \C_p$ the $p$-adic logarithm function, i.e. ${\rm{log}}_p$ is a $p$-adic locally analytic function satisfying the following conditions: \begin{align*} &({\rm{i}}) \ {\rm{log}}_p(1+z)=\sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}z^k}{k} \ \text{if} \ |z|_p<1,\\ &({\rm{ii}}) \ {\rm{log}}_p(xy)={\rm{log}}_p(x)+{\rm{log}}_p(y) \ \text{for} \ x,y\in \C_p\setminus\{0\},\\ &({\rm{iii}}) \ {\rm{log}}_p(p)=0. \end{align*} Under the above notations, we shall prove the following results. \begin{theorem}\label{power of log indep} Let $m\in \Z_{\ge2}$, $K$ be an algebraic number field and $\alpha\in K\setminus\{0,-1\}$. 
We define the real numbers \begin{align*} &T(\alpha)={\rm{exp}}\left({\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}}}\right)(m-1)!,\\ &T^{(k)}(\alpha)=\dfrac{2^m(1+|\alpha^{(k)}|)(m-1)!}{|\alpha^{(k)}|} \ \text{for} \ 1\le k \le [K:\Q], \\ &\mathcal{A}^{(k)}(\alpha)=m(1+{\rm{log}}(2))+{\rm{log}}({\rm{den}}(\alpha))+{\rm{log}}(1+|\alpha^{(k)}|) \ \text{for} \ 1\le k \le [K:\Q],\\ &A(\alpha)=m{\rm{log}}\left(\dfrac{m}{|{\rm{log}}(1+\alpha)|}\right)-\left(\dfrac{m(1+\sqrt{1+4|{\rm{log}}(1+\alpha)|})}{2}+\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}} \right)-{\rm{log}}({\rm{den}}(\alpha)), \\ &\nu(\alpha):=A(\alpha)+\mathcal{A}^{(1)}(\alpha),\\ &\delta(\alpha):=A(\alpha)+\mathcal{A}^{(1)}(\alpha)-\dfrac{(m-1)\sum_{k=1}^{[K:\Q]}\mathcal{A}^{(k)}(\alpha) }{[K_{\infty}:\R]}, \end{align*} where $K_{\infty}$ is the completion of $K$ with respect to the fixed embedding $\sigma:K\hookrightarrow \C$. We assume $\dfrac{m}{|{\rm{log}}(1+\alpha)|}\ge 4$ and $\delta(\alpha)>0$. Then the numbers $1, {\rm{log}}(1+\alpha), \ldots, {\rm{log}}^{m-1}(1+\alpha)$ are linearly independent over $K$. For any $\epsilon>0$, we take a natural number $n$ satisfying \begin{align*} &m\left[\sqrt{{\rm{log}}n}\cdot {\rm{exp}}\left(\dfrac{(\sqrt{322}-\sqrt{546})\sqrt{{\rm{log}}n}}{\sqrt{515}}\right)\right]<\dfrac{[K_{\infty}:\R]\delta^2(\alpha)\epsilon}{2(m-1)[K:\Q](2\nu(\alpha)+\epsilon\delta(\alpha))},\\ &{\rm{log}}\left(T(\alpha)n^{\tfrac{m}{2}} \left[\prod_{k=1}^{[K:\Q]}m!(T^{(k)}(\alpha))^{m-1}n^{2m(m-1)}\right]^{\tfrac{1}{[K_{\infty}:\R]}}\right)\le \dfrac{\epsilon \delta^2(\alpha)n}{4(2\nu(\alpha)+\epsilon \delta(\alpha))}.
\end{align*} Then $H_0=\left(\dfrac{1}{2}{\rm{exp}}[{\delta(\alpha) n}]\right)^{\tfrac{[K_{\infty}:\R]}{[K:\Q]}}$ satisfies the following property$:$ For any $\boldsymbol{\beta}:=(\beta_0,\ldots,\beta_{m-1}) \in \mathcal{O}^{m}_K \setminus \{ \bold{0} \}$ satisfying $H_0<\mathrm{H}(\boldsymbol{\beta})\le H$, we have \begin{align*} {\rm{log}}\left(\left|\sum_{i=0}^{m-1}\beta_i{\rm{log}}^i(1+\alpha)\right|\right) >-\left(\dfrac{[K:\Q]\nu(\alpha)}{[K_{\infty}:\R]\delta(\alpha)}+\dfrac{\epsilon[K:\Q]}{2[K_{\infty}:\R]}\right){\rm{log}}(H). \end{align*} \end{theorem} We will prove Theorem $\ref{power of log indep}$ in Section $6.1$. In the case where $\alpha$ is a rational number, we obtain the following corollary. \begin{corollary} \label{corollary main theorem} Let $m\in \mathbb{Z}_{\ge 2}$, $\epsilon>0$ and $\alpha=c/d\in {\mathbb{Q}}\setminus\{0,-1\}$ with $(c,d)=1$ and $d>0$. Put \begin{align*} \nu(\alpha):=&m{\rm{log}}\left(\dfrac{m}{|{\rm{log}}(1+c/d)|}\right)- \left(\dfrac{m(1+\sqrt{1+4|{\rm{log}}(1+c/d)|})^2+4|{\rm{log}}(1+c/d)|}{2(1+\sqrt{1+4|{\rm{log}}(1+c/d)|)}}\right)\\ &-{\rm{log}}(d)+m(1+{\rm{log}}(2))+{\rm{log}}(d)+{\rm{log}}(1+\left|{c}/{d}\right|).\\ \delta(\alpha):=&{m{\rm{log}}\left(\dfrac{m}{|{\rm{log}}(1+c/d)|}\right)}- \left(\dfrac{m(1+\sqrt{1+4|{\rm{log}}(1+c/d)|})^2+4|{\rm{log}}(1+c/d)|}{2(1+\sqrt{1+4|{\rm{log}}(1+c/d)|)}}\right)\\ &{-{\rm{log}}(d)}-{(m-2)}\left(m(1+{\rm{log}}(2))+{{\rm{log}}(d)}+{\rm{log}}(1+\left|{c}/{d}\right|)\right). \end{align*} Suppose $\delta(\alpha)>0$. Then we have $({\rm{i}})$ The complex numbers $1,{\rm{log}}(1+\alpha),\ldots,{\rm{log}}^{m-1}(1+\alpha)$ are linearly independent over $\Q$.
$({\rm{ii}})$ Let $n=n(\epsilon)$ be a natural number satisfying \begin{align*} &m\left[\sqrt{{\rm{log}}n}\cdot {\rm{exp}}\left(\dfrac{(\sqrt{322}-\sqrt{546})\sqrt{{\rm{log}}n}}{\sqrt{515}}\right)\right]<\dfrac{\delta^2(\alpha)\epsilon}{2(m-1)(2\nu(\alpha)+\epsilon\delta(\alpha))},\\ &{\rm{log}}\left(2^{(2m+1)(m-1)}(m!)^{m+1}(m-1)!^{m} \left(\dfrac{(d+|c|)}{|c|}\right)^{2(m-1)}\right)+ \left(\dfrac{2|{\rm{log}}(1+\tfrac{c}{d})|}{1+\sqrt{1+4|{\rm{log}}(1+\tfrac{c}{d})|}}\right)\\ &+\left(\dfrac{m}{2}+2m(m-1)\right){\rm{log}}(n)<\dfrac{\epsilon\delta(\alpha)^2n}{4(2\nu(\alpha)+\epsilon\delta(\alpha))}. \end{align*} Then for $H_0:=\tfrac{1}{2}{\rm{exp}}\left(\delta(\alpha)n\right)$ and $\bold{b}=(b_0,b_1,\ldots,b_{m-1}) \in \mathbb{Z}^{m}\setminus\{\bold{0}\}$ with $H_0<\mathrm{H}(\bold{b})\leq H$, we have $${\rm{log}}\left(\left|\sum_{i=0}^{m-1}b_i{\rm{log}}^i(1+\alpha)\right|\right)>-\left( \dfrac{\nu(\alpha)}{\delta(\alpha)}+\dfrac{\epsilon}{2}\right) \cdot {\rm{log}} H.$$ \end{corollary} \begin{remark} We show that Corollary $\ref{corollary main theorem}$ gives an improvement of Theorem $\ref{N-W}$ for $1+\alpha\in \Q\setminus\{1,0\}$ which is sufficiently close to $1$ and $m\ge 3$. We compare the result of Theorem $\ref{N-W}$ with that of Corollary $\ref{corollary main theorem}$. Let $m\in \mathbb{Z}_{\ge 2}$, $\epsilon>0$ and $\alpha=c/d\in {\mathbb{Q}}\setminus\{0,-1\}$ with $(c,d)=1$ and $d>0$. For $\bold{b}=(b_0,b_1,\ldots,b_{m-1}) \in \mathbb{Z}^{m}\setminus\{\bold{0}\}$ with $\mathrm{H}(\bold{b})\le H$, by Theorem $\ref{N-W}$ we obtain \begin{align*} {\rm{log}}\left(\left|\sum_{i=0}^{m-1}b_i{\rm{log}}^i(1+\alpha)\right|\right)\ge -C(\alpha)(m-1)^2(1+{\rm{log}}(m-1))^{-1}((m-1){\rm{log}}(m-1)+{\rm{log}}(L)), \end{align*} where $C(\alpha)$ is a positive number depending on $\alpha$ with $C(\alpha)>105500\cdot e^{{\rm{H}}(\alpha)}$.
Since we have $$-C(\alpha)(m-1)^2{\rm{log}}(H)\ge -C(\alpha)(m-1)^2(1+{\rm{log}}(m-1))^{-1}((m-1){\rm{log}}(m-1)+{\rm{log}}(L)),$$ we compare $C(\alpha)(m-1)^2$ and $\nu(\alpha)/\delta(\alpha)$. Since we have \begin{align*} \dfrac{\nu(\alpha)}{\delta(\alpha)}&\approx \dfrac{m\left({\rm{log}}(m)-{\rm{log}}(|c|)+{\rm{log}}(d)+{\rm{log}}(2)\right)}{{\rm{log}}(d)+m({\rm{log}}(m)-{\rm{log}}(|c|)-1-(m-2){\rm{log}}(2))} \approx m, \end{align*} if $|\alpha|=|c|/d$ is sufficiently close to $0$, Corollary $\ref{corollary main theorem}$ improves Theorem $\ref{N-W}$ for $1+\alpha\in \Q\setminus\{1,0\}$ which is sufficiently close to $1$ and $m\ge 3$. \end{remark} \bigskip Next, we introduce a $p$-adic version of Theorem $\ref{power of log indep}$. \begin{theorem}\label{p power of log indep} Let $m\in\Z_{\ge2}$, $p$ be a prime number, $K$ an algebraic number field and $\alpha\in K\setminus\{0,-1\}$ with $|\alpha|_p<1$. We use the same notations as in Theorem $\ref{power of log indep}$. We also define the real numbers \begin{align*} &T_p(\alpha)=\dfrac{(2m)^{m-1}}{|\alpha|_p},\\ &A_p(\alpha)=-m{\rm{log}}(|\alpha|_p),\\ &\nu_p(\alpha)=A_p(\alpha),\\ &\delta_p(\alpha)=A_p(\alpha)-\dfrac{(m-1)\sum_{k=1}^{[K:\Q]}\mathcal{A}^{(k)}(\alpha)}{[K_{\infty}:\R]}. \end{align*} We assume $\delta_p(\alpha)>0$. Then the numbers $1, {\rm{log}}_p(1+\alpha), \ldots, {\rm{log}}^{m-1}_p(1+\alpha)$ are linearly independent over $K$.
For any $\epsilon>0$, we take a natural number $n$ satisfying \begin{align*} &\dfrac{1}{{\rm{log}}|\alpha|^{-1}_p}+\dfrac{1}{m}\le n,\\ &mn\left[\sqrt{{\rm{log}}n}\cdot {\rm{exp}}\left(\dfrac{(\sqrt{322}-\sqrt{546})\sqrt{{\rm{log}}n}}{\sqrt{515}}\right)\right] \le \dfrac{\epsilon \delta^2_p(\alpha)[K_{p}:\Q_p]n}{2(m-1)(2\nu_p(\alpha)+\epsilon \delta_p(\alpha))[K:\Q]},\\ &{\rm{log}}\left(T_{p}(\alpha)n^{m-1} \left[\prod_{k=1}^{[K:\Q]}m!(T^{(k)}(\alpha))^{m-1} n^{2m(m-1)}\right]^{\tfrac{1}{[K_{p}:\Q_p]}}\right) \le \dfrac{\epsilon \delta^2_p(\alpha)n}{4(2\nu_p(\alpha)+\epsilon \delta_p(\alpha))}, \end{align*} where $K_{p}$ is the completion of $K$ with respect to the fixed embedding $\sigma_p:K\hookrightarrow \C_p$. Then $H_0=\left(\dfrac{1}{2}{\rm{exp}}[{\delta_p(\alpha) n}]\right)^{\tfrac{[K_{p}:\Q_p]}{[K:\Q]}}$ satisfies the following property$:$ For any $\boldsymbol{\beta}:=(\beta_0,\ldots,\beta_{m-1}) \in \mathcal{O}^{m}_K \setminus \{ \bold{0} \}$ satisfying $H_0< \mathrm{H}(\boldsymbol{\beta})\le H$, we have \begin{align*} {\rm{log}}\left(\left|\sum_{i=0}^{m-1}\beta_i{\rm{log}}^i_p(1+\alpha)\right|_p\right)> -\left(\dfrac{[K:\Q]\nu_p(\alpha)}{[K_{p}:\Q_p]\delta_p(\alpha)}+\dfrac{\epsilon[K:\Q]}{2[K_p:\Q_p]}\right) {\rm{log}}(H). \end{align*} \end{theorem} We will prove Theorem $\ref{p power of log indep}$ in Section $6.2$. \section{Pad\'{e} approximations of formal power series} In this section, we recall the definition and basic properties of Pad\'{e} approximation of formal power series. Throughout this section, $K$ denotes a field of characteristic $0$. \begin{lemma} \label{Pade} Let $m\in\N$ and $\bold{f}=(f_1(z),\ldots,f_m(z))\in K[[z]]^m$.
For $\bold{n}:=(n_1,\ldots,n_m)\in \Z^m_{\ge0}$, there exists a family of polynomials $(A_1(z),\ldots,A_m(z))\in K[z]^m$ satisfying the following properties$:$ \begin{align*} &({\rm{i}}) \ (A_1(z),\ldots,A_m(z))\neq(0,\ldots,0),\\ &({\rm{ii}}) \ {\rm{deg}}A_j(z)\le n_j \ \text{for} \ 1\le j\le m,\\ &({\rm{iii}}) \ {\rm{ord}}\sum_{j=1}^{m}A_j(z)f_j(z)\ge \sum_{j=1}^m(n_j+1)-1. \end{align*} \end{lemma} In this article, we call polynomials $(A_1(z),\ldots,A_m(z))\in K[z]^m$ satisfying the conditions $({\rm{i}}),({\rm{ii}}),({\rm{iii}})$ in Lemma $\ref{Pade}$ weight $\bold{n}$ Pad\'{e} approximants of $\bold{f}$. For weight $\bold{n}$ Pad\'{e} approximants $(A_1(z),\ldots,A_m(z))$ of $\bold{f}$, we call the formal power series $\sum_{j=1}^{m}A_j(z)f_j(z)$ a weight $\bold{n}$ Pad\'{e} approximation of $\bold{f}$. \begin{definition} Let $m\in\Z_{\ge1}$ and $\bold{f}:=(f_1(z),\ldots,f_m(z))\in K[[z]]^m$. $({\rm{i}})$ Let $\bold{n}=(n_1,\ldots,n_m)\in \Z^m_{\ge0}$. We say $\bold{n}$ is normal with respect to $\bold{f}$ if any weight $\bold{n}$ Pad\'{e} approximation $R(z)$ of $\bold{f}$ satisfies the equality $${\rm{ord}}R(z)=\sum_{j=1}^m(n_j+1)-1.$$ $({\rm{ii}})$ We say $\bold{f}$ is perfect if every index $\bold{n}\in \Z^m_{\ge0}$ is normal with respect to $\bold{f}$. \end{definition} \begin{remark} \label{remark bij} Let $m\in\N$, $\bold{n}=(n_1,\ldots,n_m)\in \Z^m_{\ge0}$ and $\bold{f}=(f_j(z):=\sum_{k=0}^{\infty}f_{j,k}z^k)_{1\le j \le m} \in K[[z]]^m$. We put $N=\sum_{j=1}^m(n_j+1)$.
For $r\in \Z_{\ge0}$, we define an $(r+1)\times N$ matrix $A_{\bold{n},r}(\bold{f})$ by \begin{equation*} A_{\bold{n},r}(\bold{f}):={\begin{pmatrix} f_{1,0}& 0 & \dots & 0 & \ldots & f_{m,0}& 0 & \dots & 0\\ f_{1,1}& f_{1,0} & \dots & 0 & \ldots & f_{m,1}& f_{m,0} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \ddots &\vdots \\ f_{1,r}& f_{1,r-1} & \dots & f_{1,r-n_1} & \ldots & f_{m,r} & f_{m,r-1} & \dots & f_{m,r-n_m}\\ \end{pmatrix}}, \end{equation*} where $f_{j,k}=0$ if $k<0$ for $1\le j \le m$. Then we have the following bijection: \begin{align*} &\phi^{(\bold{n})}_{\bold{f}}: {\rm{ker}}(A_{\bold{n},N-2}(\bold{f}))\setminus \{\bold{0}\}\longrightarrow \left\{(A_j(z))\in K[z]^m \middle| \sum_{j} A_j(z)f_j(z) \ \text{is a weight} \ \bold{n}\text{ Pad\'{e} approximation of} \ \bold{f}\right\}\\ &{}^{t}(a_{1,0},\ldots,a_{1,n_1},\ldots,a_{m,0}, \ldots, a_{m,n_m})\mapsto \left(A_j(z):=\sum_{k=0}^{n_j}a_{j,k}z^k\right)_{1\le j \le m}. \nonumber \end{align*} Note that the index $\bold{n}$ being normal with respect to $\bold{f}$ is equivalent to $A_{\bold{n},N-1}(\bold{f})\in {\rm{GL}}_N(K)$. \end{remark} \begin{lemma} \label{cor fund Pade} Let $m\in \N$, $\bold{f}:=(f_1,\ldots,f_m)\in K[[z]]^m$ and $\bold{n}:=(n_1,\ldots,n_m)\in \N^m$. Put $\bold{n}_i:=(n_1,\ldots,n_{i-1},n_i+1,n_{i+1},\ldots,n_m)\in \N^{m}$ for $1 \le i \le m$. Suppose $\bold{n}$ is normal with respect to $\bold{f}$. Then we have $${\rm{deg}}A_i(z)=n_i+1,$$ for any weight $\bold{n}_i$ Pad\'{e} approximants $(A_1(z),\ldots,A_m(z))$ of $\bold{f}$. \end{lemma} \begin{proof} Put $N:=\sum_{j=1}^m (n_j+1)$. Since $\bold{n}$ is normal with respect to $\bold{f}$, we have ${\rm{dim}}_K{\rm{ker}}(A_{\bold{n},N-1}(\bold{f}))=0$. Suppose there exist $1\le i \le m$ and weight $\bold{n}_i$ Pad\'{e} approximants $(A_1(z),\ldots,A_m(z))$ of $\bold{f}$ satisfying ${\rm{deg}}A_i(z)<n_i+1$.
Put $${}^{t}(a_{1,0},\ldots,a_{1,n_1},\ldots, a_{i,0},\ldots,a_{i,n_i},0,\ldots,a_{m,0},\ldots,a_{m,n_m}):=(\phi^{(\bold{n}_i)}_{\bold{f}})^{-1}(A_1(z),\ldots,A_m(z)).$$ Then we have $${}^{t}(a_{1,0},\ldots,a_{1,n_1},\ldots, a_{i,0},\ldots,a_{i,n_i},\ldots,a_{m,0},\ldots,a_{m,n_m})\in {\rm{ker}}(A_{\bold{n},N-1}(\bold{f}))\setminus\{\bold{0}\}.$$ This is a contradiction. This completes the proof of Lemma $\ref{cor fund Pade}$. \end{proof} \section{Pad\'{e} approximation of exponential functions} In this section, we recall some properties of Pad\'{e} approximation of exponential functions. We quote some propositions for the Pad\'{e} approximation of exponential functions in \cite{J}. \begin{proposition} \label{perfect e} $($cf. \rm{\cite[Theorem $1.2.1$]{J}}$)$ Let $n$ be a natural number and $\omega_1,\ldots,\omega_n$ pairwise distinct complex numbers. Then the functions $e^{\omega_1z},\ldots,e^{\omega_n z}$ are perfect. In particular, for $l\in \N$, the functions $1,e^z,\ldots,e^{lz}$ are perfect. \end{proposition} Let $\omega_1,\ldots,\omega_n$ be pairwise distinct complex numbers. An explicit construction of Pad\'{e} approximations of $e^{\omega_1 z},\ldots,e^{\omega_n z}$ was given by Hermite as follows. \begin{proposition} \label{Pade e} $($cf. {\rm{\cite[p. $242$]{J}}}$)$ Let $\bold{m}:=(m_1,\ldots,m_n)\in \Z^{n}_{\ge0}$ and $\{a_{h,j}(\bold{m}, \boldsymbol{\omega})\}_{1\le h \le n, 1\le j \le m_h+1}$ be the family of complex numbers satisfying the following equality$:$ \begin{align} \label{coefficients} \dfrac{1}{\prod_{h=1}^n(x-\omega_h)^{m_h+1}}=\sum_{h=1}^n \sum_{j=1}^{m_h+1}\dfrac{a_{h,j}(\bold{m},\boldsymbol{\omega})}{(x-\omega_h)^j}. \end{align} Then the formal power series $$S(z):=\sum_{h=1}^n\left(\sum_{j=0}^{m_h}a_{h,j+1}(\bold{m},\boldsymbol{\omega})\dfrac{z^j}{j!}\right)e^{\omega_hz},$$ is a weight $\bold{m}$ Pad\'{e} approximation of $e^{\omega_1z},\ldots,e^{\omega_nz}$.
\end{proposition} \section{Pad\'{e} approximations of power of logarithm functions} In this section, we construct Pad\'{e} approximations of $1,{\rm{log}}(1+z),\ldots, {\rm{log}}^{m-1}(1+z)$ for $m\in \Z_{\ge2}$ by using those of exponential functions obtained in Proposition $\ref{Pade e}$. \begin{lemma} \label{normality} Let $f(z)\in K[[z]]$. Suppose there exists $g(z)\in K[[z]]$ satisfying $f(g(z))=g(f(z))=z$. Put $\tilde{g}(z):=g(z)+1$ and assume $1,\tilde{g}(z),\ldots,\tilde{g}^l(z)$ are perfect for any $l\in \N$. Then every index $\bold{n}\in \{(n_0,n_1,\ldots,n_{m-1})\in \Z^m_{\ge0}| \ n_0\ge n_1\ge \ldots \ge n_{m-1}\}$ is normal with respect to $(1,f(z),\ldots,f^{m-1}(z))$ for any $m\in \Z_{\ge 2}$. \end{lemma} \begin{proof} Denote the set $\{(n_0,n_1,\ldots,n_{m-1})\in \Z^m_{\ge0}| \ n_0\ge n_1\ge \ldots \ge n_{m-1}\}$ by $\mathcal{X}_m$. Let $$\bold{n}:=((n_0)_{r_0},(n_1)_{r_1},\ldots, (n_s)_{r_s}) \in \mathcal{X}_m \ \text{for} \ n_0>n_1>\ldots >n_s,$$ where $(n_i)_{r_i}=(n_i,\ldots,n_i)\in \Z^{r_i}_{\ge 0}$ for $0\le i \le s$. We put \begin{align*} &\bold{m}:=((m-1)_{n_s+1},(\sum_{i=0}^{s-1}r_i-1)_{n_{s-1}-n_s},(\sum_{i=0}^{s-2}r_i-1)_{n_{s-2}-n_{s-1}},\ldots,(r_0-1)_{n_0-n_1})\in \mathcal{X}_{n_0+1},\\ &\bold{f}:=(1,f(z),\ldots,f^{m-1}(z)),\\ &\tilde{\bold{g}}:=(1,\tilde{g}(z),\ldots,\tilde{g}^{n_0}(z)),\\ &V_{\bold{n}}:=\left\{R(z)=\sum_{j=0}^{m-1}A_j(z)f^j(z) \middle| R(z) \ \text{is a weight} \ \bold{n} \ \text{Pad\'{e} approximation of} \ \bold{f}\right\},\\ &W_{\bold{m}}:=\left\{\mathcal{R}(z)=\sum_{j=0}^{n_0}\mathcal{A}_j(z)\tilde{g}^j(z) \middle| \mathcal{R}(z) \ \text{is a weight} \ \bold{m} \ \text{Pad\'{e} approximation of} \ \tilde{\bold{g}}\right\}. \end{align*} We define the $K$-isomorphism $\Psi$ by $$\Psi:K[[z]]\longrightarrow K[[z]], \ \sum_{k=0}^{\infty}a_k z^k\mapsto \sum_{k=0}^{\infty}a_k \tilde{g}^k(z).$$ Note that $\Psi$ is an order-preserving map; namely, we have ${\rm{ord}}F(z)={\rm{ord}}\Psi(F(z))$ for $F(z)\in K[[z]]$.
We prove that $\Psi$ induces the bijection $\Psi:V_{\bold{n}}\longrightarrow W_{\bold{m}}$. Let $R(z)=\sum_{j=0}^{m-1}A_j(z)f^j(z)\in V_{\bold{n}}$ and write $A_j(z)=\sum_{h=0}^{n_{0}}a_{h,j}(1+z)^h$ for $0\le j \le m-1$, with the convention that $a_{h,j}=0$ whenever $h$ exceeds the degree bound of $A_j(z)$. Then we obtain \begin{align} \label{R 1} &R(z)=\\ &\sum_{j=0}^{r_0-1}(\sum_{h=0}^{n_0}a_{h,j}(1+z)^h)f^{j}(z)+\sum_{j=r_0}^{r_0+r_1-1}(\sum_{h=0}^{n_1}a_{h,j}(1+z)^h)f^{j}(z)+\ldots+\sum_{j=r_0+\ldots+r_{s-1}}^{m-1}(\sum_{h=0}^{n_s}a_{h,j}(1+z)^h)f^{j}(z).\nonumber \end{align} Using $(\ref{R 1})$, we have \begin{align}\label{Psi R} &\Psi(R)(z)=\nonumber\\ &\sum_{j=0}^{r_0-1}(\sum_{h=0}^{n_0}a_{h,j}\tilde{g}^h(z))z^j+\sum_{j=r_0}^{r_0+r_1-1}(\sum_{h=0}^{n_1}a_{h,j}\tilde{g}^h(z))z^j+\ldots+\sum_{j=r_0+\ldots+r_{s-1}}^{m-1}(\sum_{h=0}^{n_s}a_{h,j}\tilde{g}^h(z))z^j=\nonumber\\ &\sum_{h=0}^{n_s}\left(\sum_{j=0}^{m-1}a_{h,j}z^j\right)\tilde{g}^h(z)+\sum_{h=n_s+1}^{n_{s-1}}\left(\sum_{j=0}^{r_0+\ldots+r_{s-1}-1}a_{h,j}z^j\right)\tilde{g}^h(z)+\ldots+\sum_{h=n_1+1}^{n_0}\left(\sum_{j=0}^{r_0-1}a_{h,j}z^j\right)\tilde{g}^h(z). \end{align} Since $\Psi$ is an order-preserving map, the equality $(\ref{Psi R})$ shows that $\Psi(R)$ is a weight $\bold{m}$ Pad\'{e} approximation of $\tilde{\bold{g}}$. Then we have $\Psi(V_{\bold{n}})\subseteq W_{\bold{m}}$. In the same way, we also obtain $W_{\bold{m}}\subseteq \Psi(V_{\bold{n}})$. Then the map $\Psi:V_{\bold{n}}\longrightarrow W_{\bold{m}}$ is a bijection. Since $\bold{m}$ is normal with respect to $\tilde{\bold{g}}$, we have ${\rm{ord}}S(z)=\sum_{i=0}^s (n_i+1)r_i-1$ for all $S(z)\in W_{\bold{m}}$. Since the bijection $\Psi:V_{\bold{n}}\longrightarrow W_{\bold{m}}$ is order-preserving, we also obtain ${\rm{ord}}R(z)=\sum_{i=0}^s (n_i+1)r_i-1$ for all $R(z)\in V_{\bold{n}}$. This shows that the index $\bold{n}$ is normal with respect to $\bold{f}$. This completes the proof of Lemma $\ref{normality}$. \end{proof} \begin{proposition} $($cf.
{\rm{\cite[Theorem $1.2.3$]{J}}}$)$ \label{diagonal normality log} Let $m\in \Z_{\ge 2}$. Denote the set $\{(n_0,\ldots,n_{m-1})\in \Z^m_{\ge0}| \ n_0\ge n_1\ge \ldots \ge n_{m-1}\}$ by $\mathcal{X}_m$. Then every index $\bold{n}\in \mathcal{X}_m$ is normal with respect to $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$. \end{proposition} \begin{proof} By Proposition $\ref{perfect e}$, the functions $1,e^z,\ldots,e^{lz}$ are perfect for $l\in \N$. Applying Lemma $\ref{normality}$ with $f(z):={\rm{log}}(1+z)$ and $g(z):=e^z-1$, we conclude that every index $\bold{n}\in \mathcal{X}_m$ is normal with respect to $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$. This completes the proof of Proposition $\ref{diagonal normality log}$. \end{proof} Let $m\in \Z_{\ge2}$ and $\bold{n}\in \mathcal{X}_m$. We obtain a weight $\bold{n}$ Pad\'{e} approximation of $1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z)$ as follows: \begin{proposition} \label{pade log} Let $\{r_i\}_{0\le i \le s}\subset \N$ and $\{n_i\}_{0\le i \le s}\subset \Z_{\ge0}$ satisfying $r_0+\ldots+r_s=m$ and $n_0>\ldots>n_s$. Put \begin{align*} &\bold{n}:=((n_0)_{r_0},\ldots,(n_s)_{r_s})\in \mathcal{X}_m,\\ &\bold{m}:=((m-1)_{n_s+1},(\sum_{i=0}^{s-1}r_i-1)_{n_{s-1}-n_s},(\sum_{i=0}^{s-2}r_i-1)_{n_{s-2}-n_{s-1}},\ldots,(r_0-1)_{n_0-n_1}),\\ &\boldsymbol{\omega}:=(0,1,\ldots,n_0).
\end{align*} We define the family of rational numbers $\{a_{h,j}(\bold{m}, \boldsymbol{\omega})\}_{0\le h \le n_0, 1\le j \le m}$ as follows$:$ $$\dfrac{1}{\prod_{h=0}^{n_s}(x-h)^{m}}\times \dfrac{1}{\prod_{h=n_s+1}^{n_{s-1}}(x-h)^{\sum_{i=0}^{s-1}r_i}} \times \ldots \times \dfrac{1}{\prod_{h=n_1+1}^{n_{0}}(x-h)^{r_0}} =\sum_{h=0}^{n_0} \sum_{j=1}^{m}\dfrac{a_{h,j}(\bold{m},\boldsymbol{\omega})}{(x-h)^j}.$$ Then the formal power series \begin{align}\label{Pade log power} R(z):=\sum_{j=0}^{m-1} \left( \dfrac{\sum_{h=0}^{n_0}a_{h,j+1}(\bold{m},\boldsymbol{\omega})(1+z)^h}{j!}\right) {\rm{log}}^{j}(1+z), \end{align} is a weight $\bold{n}$ Pad\'{e} approximation of $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$. \end{proposition} \begin{proof} We define a $\overline{\Q}$-isomorphism $\Psi$ by $$\Psi:\overline{\Q}[[z]]\longrightarrow \overline{\Q}[[z]], \ z\mapsto e^{z}-1.$$ By Proposition $\ref{Pade e}$, the formal power series \begin{align} \label{explicit Pade e} \mathcal{R}(z):=\sum_{h=0}^{n_0}\left(\sum_{j=0}^{m-1}a_{h,j+1}(\bold{m},\boldsymbol{\omega})\dfrac{z^j}{j!}\right)e^{hz}, \end{align} is a weight $\bold{m}$ Pad\'{e} approximation of $1,e^z,\ldots,e^{n_0z}$. By the proof of Lemma $\ref{normality}$, the formal power series \begin{align} \label{R} \Psi^{-1}(\mathcal{R}(z))=\sum_{j=0}^{m-1} \left( \dfrac{\sum_{h=0}^{n_0}a_{h,j+1}(\bold{m},\boldsymbol{\omega})(1+z)^h}{j!}\right) {\rm{log}}^{j}(1+z), \end{align} is a weight $\bold{n}$ Pad\'{e} approximation of $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$. Since the right-hand side of the equality $(\ref{R})$ is the formal power series $R(z)$ defined in $(\ref{Pade log power})$, this completes the proof of Proposition $\ref{pade log}$. \end{proof} \begin{remark} Let $(n_0,\ldots,n_{m-1})\in \mathcal{X}_m$ and $R(z)$ and $\mathcal{R}(z)$ be the formal power series defined in $(\ref{Pade log power})$ and $(\ref{explicit Pade e})$ respectively. Put $N:=\sum_{j=0}^{m-1}(n_j+1)$.
We have $\mathcal{R}(z)=\dfrac{z^{N-1}}{(N-1)!}+\text{(higher order terms)}$ (see p.~$242$ of \cite{J}). Then by the definition of $R(z)$, we have \begin{align} \label{first term R} R(z)=\dfrac{z^{N-1}}{(N-1)!}+\text{(higher order terms)}. \end{align} On the other hand, on p.~$245$ of \cite{J}, Jager proved that the function $$r(z):=\dfrac{1}{2\pi\sqrt{-1}}\int_{C}\dfrac{(1+z)^x}{\prod_{j=0}^{m-1}\prod_{h=0}^{n_j}(x-h)}dx,$$ where $C$ is a contour with positive orientation enclosing the set $\{0,1,\ldots,n_0\}$, is a weight $\bold{n}$ Pad\'{e} approximation of $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$ and $r(z)$ satisfies \begin{align} \label{first term r} r(z)=\dfrac{z^{N-1}}{(N-1)!}+\text{(higher order terms)} \ \text{for} \ z\in\{z\in\C \mid |z|<1\}. \end{align} Since $\bold{n}$ is normal with respect to $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$, a weight $\bold{n}$ Pad\'{e} approximation of $(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$ is uniquely determined up to a nonzero constant multiple. Thus, by $(\ref{first term R})$ and $(\ref{first term r})$, we obtain \begin{align} \label{integral rep R} R(z)=\dfrac{1}{2\pi\sqrt{-1}}\int_{C}\dfrac{(1+z)^x}{\prod_{j=0}^{m-1}\prod_{h=0}^{n_j}(x-h)}dx. \end{align} \end{remark} \section{Estimations} From this section to the last section, we use the following notations for $m\in \Z_{\ge2}$, $n\in \Z_{\ge0}$ and $1\le i \le m$: \begin{align*} &d_{n+1}:={\rm{l.c.m.}}(1,2,\ldots,n+1),\\ &\bold{n}_i:=(\overbrace{n+1,\ldots,n+1}^{i},n,\ldots,n)\in \Z^{m}_{\ge0},\\ &\bold{m}_i:=(m-1,\ldots,m-1,i-1)\in \Z^{n+2}_{\ge0},\\ &\boldsymbol{\omega}:=(0,\ldots,n,n+1),\\ &Q_{m,i,n+1}(x):=\left[\prod_{h=0}^n(x-h)^m\right]\times (x-n-1)^i.
\end{align*} We define the set of rational numbers $\{a_{h,j}(\bold{m}_i,\boldsymbol{\omega})\}_{1\le i \le m, 0\le h \le n+1,1\le j \le m}$ satisfying the equality \begin{align*} \dfrac{1}{Q_{m,i,n+1}(x)}=\sum_{h=0}^{n+1}\sum_{j=1}^{m}\dfrac{a_{h,j}(\bold{m}_i,\boldsymbol{\omega})}{(x-h)^j} \ \text{for} \ 1\le i \le m. \end{align*} By Proposition $\ref{Pade e}$ and Proposition $\ref{pade log}$, the formal power series \begin{align} &\mathcal{R}_{i,n+1}(z):=\sum_{h=0}^{n+1}\left(\sum_{j=0}^{m-1}a_{h,j+1}(\bold{m}_i,\boldsymbol{\omega})\dfrac{z^j}{j!}\right)e^{hz}, \label{exp pade S}\\ &R_{i,n+1}(z):=\sum_{j=0}^{m-1}\left(\dfrac{\sum_{h=0}^{n+1}a_{h,j+1}(\bold{m}_i,\boldsymbol{\omega})(1+z)^h}{j!}\right){\rm{log}}^{j}(1+z), \label{log pade R} \end{align} are a weight $\bold{m}_i$ Pad\'{e} approximation of $1,e^z,\ldots,e^{(n+1)z}$ and a weight $\bold{n}_i$ Pad\'{e} approximation of $1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z)$, respectively. We define \begin{align} \label{coeff polynomial} A_{i,j,n+1}(z):=\dfrac{\sum_{h=0}^{n+1}a_{h,j+1}(\bold{m}_i,\boldsymbol{\omega})(1+z)^h}{j!} \ \text{for} \ 1\le i \le m, 0\le j \le m-1. \end{align} \begin{lemma} \label{denominator} $($cf. {\rm{\cite[Theorem $1 (a)$]{M2}}}$)$ We use the notations as above. For any $1\le i \le m$, $1\le j \le m$ and $0\le h \le n+1$, we have \begin{align} \label{tisai denominator} d^m_{n+1}(n+1)!^m a_{h,j}(\bold{m}_i,\boldsymbol{\omega})\in \Z. \end{align} In particular, for an algebraic number field $K$ and an element $\alpha\in K$, we have \begin{align} \label{denomi2} d^m_{n+1}(n+1)!^m(m-1)!{\rm{den}}^{n+1}(\alpha)A_{i,j,n+1}(\alpha)\in \mathcal{O}_K, \end{align} for $1\le i \le m$, $0\le j \le m-1$. \end{lemma} \begin{proof} Recall that, by the definition of $a_{h,j}(\bold{m}_i,\boldsymbol{\omega})$, we have \begin{align} \label{fukusyu} \dfrac{1}{Q_{m,i,n+1}(x)}=\sum_{j=1}^{m}\sum_{h=0}^{n+1} \dfrac{a_{h,j}(\bold{m}_i,\boldsymbol{\omega})}{(x-h)^j}.
\end{align} Fix an integer $\lambda$ satisfying $0\le \lambda \le n$. By the definition of $Q_{m,i,n+1}(x)$, we have the following equalities: \begin{align} &\dfrac{1}{Q_{m,i,n+1}(x)}\nonumber \\ &=\dfrac{1}{(x-\lambda)^m}\prod_{\delta=1}^{\lambda}\dfrac{1}{(x-\lambda+\delta)^m}\prod_{\nu=1}^{n-\lambda}\dfrac{1}{(x-\lambda-\nu)^m} \dfrac{1}{(x-\lambda-(n+1-\lambda))^i} \nonumber\\ &=\dfrac{(-1)^{(n-\lambda)m+i}}{\lambda !^m(n-\lambda)!^m(n+1-\lambda)^i} \dfrac{1}{(x-\lambda)^m}\prod_{\delta=1}^{\lambda}\left(1+\dfrac{x-\lambda}{\delta}\right)^{-m} \prod_{\nu=1}^{n-\lambda}\left(1-\dfrac{x-\lambda}{\nu}\right)^{-m} \left(1-\dfrac{x-\lambda}{n+1-\lambda}\right)^{-i}. \label{lambda exp} \end{align} Since we have \begin{align*} \dfrac{d_{n+1}}{\delta}, \dfrac{d_{n+1}}{\nu}\in \Z \ \text{for} \ \delta=1,\ldots,\lambda \ \text{and} \ \nu=1,\ldots,n-\lambda,n+1-\lambda, \end{align*} there exists a set of integers $\{c_{i,k}\}_{k\in\Z_{\ge0}}$ satisfying \begin{align} \label{bekikyuusuu} \prod_{\delta=1}^{\lambda}\left(1+\dfrac{d_{n+1}}{\delta}t\right)^{-m} \prod_{\nu=1}^{n-\lambda}\left(1-\dfrac{d_{n+1}}{\nu}t\right)^{-m} \left(1-\dfrac{d_{n+1}}{n+1-\lambda}t\right)^{-i}=\sum_{k=0}^{\infty}c_{i,k}t^k, \end{align} where $t$ is an indeterminate. Substituting $t=\dfrac{x-\lambda}{d_{n+1}}$ in the equality $(\ref{bekikyuusuu})$, we obtain \begin{align}\label{bekiyuusuu2} \prod_{\delta=1}^{\lambda}\left(1+\dfrac{x-\lambda}{\delta}\right)^{-m} \prod_{\nu=1}^{n-\lambda}\left(1-\dfrac{x-\lambda}{\nu}\right)^{-m} \left(1-\dfrac{x-\lambda}{n+1-\lambda}\right)^{-i}=\sum_{k=0}^{\infty}c_{i,k}d^{-k}_{n+1}(x-\lambda)^k. \end{align} Substituting $(\ref{bekiyuusuu2})$ into the equality $(\ref{lambda exp})$ and comparing the equalities $(\ref{fukusyu})$ and $(\ref{lambda exp})$, we obtain \begin{align} \label{key kill denomi} a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})=\dfrac{(-1)^{(n-\lambda)m+i}}{\lambda !^m(n-\lambda)!^m(n+1-\lambda)^i}d^{-m+j}_{n+1} c_{i,m-j}.
\end{align} By the relation $(n+1)!^m\dfrac{1}{\lambda !^m(n-\lambda)!^m(n+1-\lambda)^i}\in \Z$ and the equality $(\ref{key kill denomi})$, we obtain $$d^m_{n+1}(n+1)!^m a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})\in \Z \ \text{for} \ 0\le \lambda \le n.$$ In the case of $\lambda=n+1$, by using the same method as above, we also obtain $$d^m_{n+1}(n+1)!^m a_{n+1,j}(\bold{m}_i,\boldsymbol{\omega})\in \Z.$$ This completes the proof of $(\ref{tisai denominator})$. The latter assertion follows from $(\ref{tisai denominator})$ and the definition of $A_{i,j,n+1}(z)$. This completes the proof of Lemma $\ref{denominator}$. \end{proof} \begin{lemma} \label{keisu ookisa} $($cf. {\rm{\cite[Theorem $1 (b)$]{M2}}}$)$ Let $\alpha$ be a nonzero complex number. Then we have \begin{align} |A_{i,j,n+1}(\alpha)|\le \dfrac{2^m(1+|\alpha|)}{|\alpha|}(n+1)^{m} ((1+|\alpha|)2^{m})^{n+1}n!^{-m} \end{align} for any $1\le i \le m$, $0\le j \le m-1$ and $n\in \Z_{\ge 0}$. \end{lemma} \begin{proof} In our proof of Lemma $\ref{keisu ookisa}$, we borrow some of the arguments of {\cite[Theorem $1 (b)$]{M2}}. First we remark that $a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})$ can be represented as follows: \begin{align} \label{residue} a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})=\dfrac{1}{2\pi\sqrt{-1}}\int_{|z-\lambda|=\tfrac{1}{2}}(z-\lambda)^{j-1}\dfrac{1}{Q_{m,i,n+1}(z)}dz. \end{align} Let $\lambda$ be an integer satisfying $0\le \lambda \le n+1$. Then by the equality $(\ref{residue})$, we have the following inequality: \begin{align} \label{a1} |a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})|\le\dfrac{1}{2\pi}2^{1-j}\pi\cdot {\rm{sup}}_{|z-\lambda|=1/2}\left|\dfrac{1}{Q_{m,i,n+1}(z)}\right|. \end{align} Next, we estimate $\left|{Q_{m,i,n+1}(z)}\right|$ from below on the circle $|z-\lambda|=\tfrac{1}{2}$.
Since we have the inequality $$|z-h|=|z-\lambda+\lambda-h|\ge |\lambda-h|-\tfrac{1}{2},$$ for any integer $h$ satisfying $0\le h \le n+1$, $h\neq \lambda$, and any $z\in\{z\in \C| \ |z-\lambda|=\tfrac{1}{2}\}$, we obtain the following inequalities: \begin{align} \label{ineq Q} |Q_{m,i,n+1}(z)|&\ge\begin{cases} \left(\prod_{\delta=1}^{\lambda}(\delta-\tfrac{1}{2})\right)^m\left(\tfrac{1}{2}\right)^m\left(\prod_{\nu=1}^{n-\lambda}(\nu-\tfrac{1}{2})\right)^{m}(n+1-\lambda-\tfrac{1}{2})^i & \ \text{if} \ 0\le \lambda \le n, \\ \left(\prod_{\delta=1}^{n+1}(\delta-\tfrac{1}{2})\right)^m \left(\tfrac{1}{2}\right)^i & \ \text{if} \ \lambda=n+1. \end{cases} \end{align} In the case of $0\le \lambda \le n$, using the inequality $(\ref{ineq Q})$, we obtain \begin{align} \label{zero en} |Q_{m,i,n+1}(z)|&\ge \left(\prod_{\delta=1}^{\lambda}(\delta-\tfrac{1}{2})\right)^m\left(\prod_{\nu=1}^{n-\lambda}(\nu-\tfrac{1}{2})\right)^{m} \left(\tfrac{1}{2}\right)^{2m} \nonumber\\ &=\left(\dfrac{(2\lambda)!}{\lambda!2^{2\lambda}}\right)^m\left(\dfrac{(2n-2\lambda)!}{(n-\lambda)!2^{2(n-\lambda)}}\right)^m \left(\tfrac{1}{2}\right)^{2m}\nonumber\\ &=\left({\binom{2n}{2\lambda}}^{-1}\binom{n}{\lambda}\binom{2n}{n}n!2^{-2n-2}\right)^{m}. \end{align} Combining the inequality (see Proof of {\cite[Theorem $1$, p.~$376$]{M2}}) \begin{align} \label{Mahler} {\binom{2n}{2\lambda}}^{-1}\binom{n}{\lambda}\binom{2n}{n}\ge \dfrac{2^n}{n+1} \ \text{for} \ 0 \le \lambda \le n, \end{align} and $(\ref{zero en})$, we obtain \begin{align} \label{lambda conclusion} |Q_{m,i,n+1}(z)|\ge (n+1)^{-m} 2^{-(n+2)m}n!^m \ \text{for} \ z\in \{z\in\C| \ |z-\lambda|=\tfrac{1}{2}\}. \end{align} In the case of $\lambda=n+1$, using the inequality $(\ref{ineq Q})$, we obtain \begin{align} \label{n+1} |Q_{m,i,n+1}(z)|\ge \left(\binom{2n+2}{n+1}(n+1)!2^{-2n-3}\right)^{m} \ \text{for} \ z\in \{z\in\C| \ |z-n-1|=\tfrac{1}{2}\}.
\end{align} By the same arguments as above, from $(\ref{n+1})$, we obtain \begin{align} \label{n+1 2} |Q_{m,i,n+1}(z)|\ge (n+2)^{-m} 2^{-(n+2)m}(n+1)!^m\ge (n+1)^{-m} 2^{-(n+2)m}n!^m \ \text{for} \ z\in \{z\in\C| \ |z-n-1|=\tfrac{1}{2}\}. \end{align} Using the inequalities $(\ref{a1})$, $(\ref{lambda conclusion})$ and $(\ref{n+1 2})$, we obtain \begin{align} \label{a conclusion} |a_{\lambda,j}(\bold{m}_i,\boldsymbol{\omega})|\le (n+1)^{m} 2^{(n+2)m}n!^{-m}, \end{align} for $0\le \lambda \le n+1$, $1\le j \le m$ and $1\le i \le m$. By the definition of $A_{i,j,n+1}(z)$ and the inequalities $(\ref{a conclusion})$, we obtain \begin{align*} |A_{i,j,n+1}(\alpha)|&\le (n+1)^{m} 2^{(n+2)m}n!^{-m}\sum_{h=0}^{n+1}(1+|\alpha|)^h \le \dfrac{(1+|\alpha|)^{n+2}}{|\alpha|}(n+1)^{m} 2^{(n+2)m}n!^{-m}, \end{align*} for nonzero $\alpha\in \C$. This completes the proof of Lemma $\ref{keisu ookisa}$. \end{proof} \begin{lemma} \label{uekara jyouyokou} Let $m\ge 2$ be a natural number. Let $\alpha\in \C\setminus \{0,-1\}$ satisfy $2\le m/|{\rm{log}}(1+\alpha)|$. Then we have \begin{align*} &|R_{i,n+1}(\alpha)|\le {\rm{exp}}\left({\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}}}\right)\times \\ &\left[ {\rm{exp}} \left(\dfrac{m(1+\sqrt{1+4|{\rm{log}}(1+\alpha)|})}{2}+\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}} \right) \left(\dfrac{|{\rm{log}}(1+\alpha)|}{m}\right)^m \right]^{n+1}(n+1)^{-m(n+1)}. \end{align*} \end{lemma} \begin{proof} This proof is based on that of {\cite[Theorem $1$]{M2}}. By $(\ref{integral rep R})$, we have \begin{align} \label{integ rep R i n+1} R_{i,n+1}(z)=\dfrac{1}{2\pi\sqrt{-1}}\int_{C_{\rho}}\dfrac{(1+z)^x}{(\prod_{h=0}^{n}(x-h))^m(x-n-1)^i}dx \ \text{for} \ 1 \le i \le m, \end{align} where $C_{\rho}$ is a circle in the $x$-plane of center $x=0$ and radius $\rho>n+1.$ In the following, we take a positive real number $\rho$ satisfying $\rho\ge 2(n+1)$.
For $x\in C_{\rho}$, we have \begin{align} \left|(\prod_{h=0}^{n}(x-h))^m(x-n-1)^i\right|&=\left|x^{m(n+1)+i}\left[\left(1-\dfrac{1}{x}\right)\ldots \left(1-\dfrac{n}{x}\right)\right]^{m}\left(1-\dfrac{n+1}{x}\right)^i\right| \nonumber\\ &\ge \rho^{m(n+1)+i}\left[\left(1-\dfrac{1}{\rho}\right)\ldots \left(1-\dfrac{n}{\rho}\right)\right]^{m}\left(1-\dfrac{n+1}{\rho}\right)^i \nonumber \\ &\ge \rho^{m(n+1)+1}\left[\left(1-\dfrac{1}{\rho}\right)\ldots \left(1-\dfrac{n+1}{\rho}\right)\right]^{m}. \label{ineq 1} \end{align} Since $$\left(1-\dfrac{1}{\rho}\right)\ldots \left(1-\dfrac{n+1}{\rho}\right)=\left[\left(1+\dfrac{1}{\rho-1}\right)\ldots \left(1+\dfrac{n+1}{\rho-n-1}\right)\right]^{-1},$$ and \begin{align*} \left(1+\dfrac{1}{\rho-1}\right)\ldots \left(1+\dfrac{n+1}{\rho-n-1}\right)&\le {\rm{exp}}\left(\sum_{\lambda=1}^{n+1}\dfrac{\lambda}{\rho-\lambda}\right) \le {\rm{exp}}\left(\sum_{\lambda=1}^{n+1}\dfrac{\lambda}{\rho-n-1}\right) \\ &= {\rm{exp}}\left(\dfrac{(n+1)(n+2)}{2(\rho-n-1)}\right) \le {\rm{exp}}\left(\dfrac{(n+1)(n+2)}{\rho}\right), \end{align*} we have $$\left|(\prod_{h=0}^{n}(x-h))^m(x-n-1)^i\right|\ge \rho^{m(n+1)+1} {\rm{exp}}\left(-\dfrac{m(n+1)(n+2)}{\rho}\right).$$ By $(\ref{integ rep R i n+1})$ and the above inequality, we obtain \begin{align} \label{upper jyouyo 1} |R_{i,n+1}(\alpha)|\le {\rm{exp}}\left(\rho|{\rm{log}}(1+\alpha)|+\dfrac{m(n+1)(n+2)}{\rho}\right) \rho^{-m(n+1)} \ \ \text{for} \ 1 \le i \le m. \end{align} Put $f(x)=x|{\rm{log}}(1+\alpha)|+\dfrac{m(n+1)(n+2)}{x}-m(n+1){\rm{log}}(x)$ for $x>0$. Then $f(x)$ attains its minimum at $$x=\dfrac{ m(n+1)+\sqrt{m^2(n+1)^2+4m(n+1)(n+2)|{\rm{log}}(1+\alpha)|}}{2|{\rm{log}}(1+\alpha)|}.$$ Since $n+2\le m(n+1)$, we take $\rho=\dfrac{ m(n+1)(1+\sqrt{1+4|{\rm{log}}(1+\alpha)|})}{2|{\rm{log}}(1+\alpha)|}$. Note that, by the assumption $2\le m/|{\rm{log}}(1+\alpha)|$, we have $2(n+1)\le \rho$. By $(\ref{upper jyouyo 1})$, we obtain the desired inequality.
This completes the proof of Lemma $\ref{uekara jyouyokou}$. \end{proof} Next, we give a $p$-adic version of Lemma $\ref{uekara jyouyokou}$. \begin{lemma} \label{p upper bound jyouyo} Let $\alpha\in \C_p$ satisfy $|\alpha|_p<1$. Then we have \begin{align} \label{upper bound p} \max_{0\le i \le m-1}|d^m_{n+1}(n+1)!^m(m-1)!R_{i,n+1,p}(\alpha)|_p\le (m(n+1)+m-2)^{m-1}|\alpha|^{m(n+1)-1}_p, \end{align} for any natural number $n$ satisfying $1/{\rm{log}}|\alpha|^{-1}_p+1/m\le n.$ \end{lemma} \begin{proof} Since $R_{i,n+1}(z)$ is a weight $\bold{n}_i$ Pad\'{e} approximation of $(1,{\rm{log}}(1+z), \ldots, {\rm{log}}^{m-1}(1+z))$, we have ${\rm{ord}}R_{i,n+1}(z)=m(n+1)+i-1$. Put $E_{n+1}:=d^m_{n+1}(n+1)!^m(m-1)!$ and $$E_{n+1} R_{i,n+1}(z)=\sum_{k=m(n+1)+i-1}^{\infty}r_{i,k,n+1}z^k\in \Q[[z]].$$ First we prove the inequalities \begin{align} \label{abs val coeff R p} |r_{i,k,n+1}|_p\le k^{m-1} \ \text{for} \ 1\le i \le m, m(n+1)+i-1\le k . \end{align} By Lemma $\ref{denominator}$, we have $E_{n+1}A_{i,j,n+1}(z)\in \Z[z]$ for $1\le i \le m$, $0\le j \le m-1$. Using the equality $$E_{n+1}R_{i,n+1}(z)=\sum_{j=0}^{m-1}E_{n+1}A_{i,j,n+1}(z){\rm{log}}^j(1+z)$$ and the definition of ${\rm{log}}(1+z)$, we have ${\rm{den}}(r_{i,k,n+1})\le k^{m-1}$ for $m(n+1)+i-1\le k$. Then we obtain the inequalities $(\ref{abs val coeff R p})$. Using the inequalities $(\ref{abs val coeff R p})$, we have \begin{align*} |E_{n+1}R_{i,n+1,p}(\alpha)|_p\le \max_{m(n+1)+i-1\le k}|r_{i,k,n+1}\alpha^{k}|_p\le \max_{m(n+1)+i-1\le k}k^{m-1} |\alpha^{k}|_p. \end{align*} Since we have $\max_{m(n+1)+i-1\le k}|r_{i,k,n+1}\alpha^{k}|_p\le (m(n+1)+i-1)^{m-1}|\alpha|^{m(n+1)+i-1}_p$ for any natural number $n$ satisfying $1/{\rm{log}}|\alpha|^{-1}_p+1/m\le n$, we obtain the desired inequalities. This completes the proof of Lemma $\ref{p upper bound jyouyo}.$ \end{proof} \section{Proof of main theorems} In this section, we give the proofs of Theorem $\ref{power of log indep}$ and Theorem $\ref{p power of log indep}$.
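Before entering the proofs, the Pad\'{e} constructions of the preceding sections admit a quick numerical sanity check. The following standalone Python sketch (not part of the paper; all names are ad hoc, and only exact rational arithmetic from the standard library is used) builds the matrix $A_{\bold{n},N-2}(\bold{f})$ of Remark $\ref{remark bij}$ for $\bold{f}=(1,{\rm{log}}(1+z))$ and the diagonal index $\bold{n}=(2,2)$, extracts a kernel vector, and checks that the resulting Pad\'{e} approximation has order exactly $N-1$, as predicted by Proposition $\ref{diagonal normality log}$.

```python
from fractions import Fraction

def log1p_series(K):
    # Taylor coefficients of log(1+z) up to z^{K-1}: 0, 1, -1/2, 1/3, ...
    return [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, K)]

def mul_trunc(poly, series, K):
    # coefficients of poly(z) * series(z), truncated at z^{K-1}
    out = [Fraction(0)] * K
    for i, p in enumerate(poly):
        for j, s in enumerate(series):
            if i + j < K:
                out[i + j] += p * s
    return out

def kernel_vector(rows, ncols):
    # one nonzero rational kernel vector, via reduced row echelon form
    A = [list(r) for r in rows]
    pivots, r = {}, 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        pv = A[r][c]
        A[r] = [x / pv for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [Fraction(0)] * ncols
    v[free] = Fraction(1)
    for c, row in pivots.items():
        v[c] = -A[row][free]
    return v

m, n = 2, 2                        # functions (1, log(1+z)), index n = (2, 2)
N = m * (n + 1)                    # N = 6
log_s = log1p_series(N)
one_s = [Fraction(1)] + [Fraction(0)] * (N - 1)

# matrix A_{n,N-2}: coefficients of z^0..z^{N-2} of A_1(z)*1 + A_2(z)*log(1+z)
rows = []
for r_idx in range(N - 1):
    row = []
    for s in (one_s, log_s):
        for k in range(n + 1):     # monomial z^k of A_j contributes s[r_idx - k]
            row.append(s[r_idx - k] if r_idx - k >= 0 else Fraction(0))
    rows.append(row)

v = kernel_vector(rows, N)
A1, A2 = v[:n + 1], v[n + 1:]
R = [x + y for x, y in zip(mul_trunc(A1, one_s, N), mul_trunc(A2, log_s, N))]
order = next(k for k, c in enumerate(R) if c != 0)
print(order)                       # normality predicts exactly N - 1 = 5
```

Any nonzero kernel vector works here: normality of the index $(2,2)$ forces the order of the approximation to equal $N-1$, never more.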
\subsection{Non-vanishing of certain determinants} \begin{lemma} $($cf. {\rm{\cite[Theorem $1.2.3$]{J}}} $)$ \label{non vanishing of det} Let $K$ be a field with characteristic $0$ and $\bold{f}=(1,f_1,\ldots,f_{m-1})\in K[[z]]^m$. Let $\bold{n}=(n_1,\ldots,n_m)\in \Z^{m}_{\ge 0}$. Put $\bold{n}_i=(n_1+1,n_2+1,\ldots,n_i+1,n_{i+1},\ldots,n_m) \ \text{for} \ 1\le i \le m.$ Let $(A_{i,1}(z),\ldots,A_{i,m}(z))\in K[z]^m$ be weight $\bold{n}_i$ Pad\'{e} approximants of $\bold{f}$. We define a polynomial $\Delta(z)$ by \begin{equation} \label{det} \Delta(z)=\mathrm{det}{\begin{pmatrix} A_{1,1}(z)& A_{1,2}(z) & \dots &A_{1,m}(z)\\ A_{2,1}(z)& A_{2,2}(z) & \dots &A_{2,m}(z)\\ \vdots & \vdots & \ddots &\vdots \\ A_{m,1}(z)& A_{m,2}(z) & \dots &A_{m,m}(z)\\ \end{pmatrix}}. \end{equation} Then there exists $\gamma\in K$ satisfying \begin{align} \label{calculation of det} \Delta(z)=\gamma z^N, \end{align} where $N=\sum_{j=1}^m (n_j+1)$. Moreover, if the indices in $\{\bold{n}\}\cup \{\bold{n}_i\}_{1\le i \le m-1}$ are normal with respect to $\bold{f}$, we have $\Delta(z)\neq 0$, i.e. $\gamma\neq 0$. \end{lemma} \begin{proof} Denote the formal power series $A_{i,1}(z)+ A_{i,2}(z)f_1(z)+\dots+A_{i,m}(z)f_{m-1}(z)$ by $R_i(z)$ for $1\le i \le m$. Note that we have \begin{align} \label{order lower bound} {\rm{ord}}R_i(z)\ge N+i-1 \ \text{for} \ 1\le i \le m. \end{align} By adding the $i$-th column of the matrix in $(\ref{det})$ multiplied by $f_{i-1}(z)$ to the first column for all $2\le i \le m$, we obtain the following equality: \begin{align} \label{equal determ 1} \Delta(z)= \mathrm{det}{\begin{pmatrix} R_1(z)& A_{1,2}(z) &\dots & A_{1,m}(z)\\ R_2(z)& A_{2,2}(z) & \dots & A_{2,m}(z)\\ \vdots & \vdots & \ddots & \vdots\\ R_m(z)& A_{m,2}(z) & \dots & A_{m,m}(z)\\ \end{pmatrix}}. \end{align} For $1\le t,u\le m$, we denote the $(t,u)$-th cofactor of the matrix in $(\ref{equal determ 1})$ by $\Delta_{t,u}(z)$.
Then, we obtain \begin{align} \label{decomp det} \Delta(z)=\displaystyle\sum_{t=1}^{m}R_{t}(z)\Delta_{t,1}(z). \end{align} Using the inequalities $(\ref{order lower bound})$ and the equality $(\ref{decomp det})$, we have \begin{align} \label{order lower bound delta} {\rm{ord}}\Delta(z)\ge N. \end{align} On the other hand, by using the equality $(\ref{det})$, we obtain \begin{align} \label {upper bound det} {\rm{ord}}\Delta(z)\le N. \end{align} Combining the inequalities $(\ref{order lower bound delta})$ and $(\ref{upper bound det})$, we obtain the equality $(\ref{calculation of det})$. If the indices in $\{\bold{n}\}\cup \{\bold{n}_i\}_{1\le i \le m-1}$ are normal with respect to $\bold{f}$, then, using Lemma $\ref{cor fund Pade}$, we have ${\rm{deg}}A_{i,i}(z)=n_i+1$ for $1\le i \le m$. Then we have \begin{align} \label {upper bound det 2} {\rm{ord}}\Delta(z)=N. \end{align} This completes the proof of Lemma $\ref{non vanishing of det}$. \end{proof} Using Lemma $\ref{non vanishing of det}$ for $\bold{f}:=(1,{\rm{log}}(1+z),\ldots,{\rm{log}}^{m-1}(1+z))$ and $\bold{n}=(n,\ldots,n)\in \N^m$, we obtain the following corollary. \begin{corollary} \label{key corollary} Let $\{A_{i,j,n+1}(z)\}_{1\le i \le m,0\le j \le m-1}\subset \Q[z]$ be the set of polynomials defined in $(\ref{coeff polynomial})$. Put $\Delta^{(n+1)}(z):={\rm{det}}(A_{i,j,n+1}(z))_{1\le i \le m,0\le j \le m-1}.$ Then there exists some $\gamma\in \Q\setminus\{0\}$ satisfying \begin{align} \label{non vanish} \Delta^{(n+1)}(z)=\gamma z^{(n+1)m}. \end{align} In particular, we have \begin{align} \label{non vanish value} \Delta^{(n+1)}(\alpha)\neq 0 \ \text{for all} \ \alpha\in \overline{\Q} \setminus\{0\}. \end{align} \end{corollary} \subsection{Proof of Theorem $\ref{power of log indep}$} Before starting to prove Theorem $\ref{power of log indep}$, we introduce a sufficient condition for obtaining a lower bound for linear forms in complex numbers with algebraic integer coefficients.
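The non-vanishing statement of Corollary $\ref{key corollary}$ can be checked concretely in the smallest nontrivial case. The following standalone Python sketch (not part of the proofs; exact rational arithmetic only, names ad hoc) computes the partial-fraction coefficients $a_{h,j}(\bold{m}_i,\boldsymbol{\omega})$ for $m=2$ and $n+1=2$, forms the polynomials $A_{i,j,n+1}(z)$ of $(\ref{coeff polynomial})$, and verifies that their determinant is a nonzero multiple of $z^{(n+1)m}=z^4$.

```python
from fractions import Fraction
from math import comb, factorial

def inv_power_series(c, mu, K):
    # Taylor coefficients of (t + c)^(-mu) around t = 0, for a nonzero integer c
    return [Fraction((-1) ** k * comb(mu + k - 1, k)) / Fraction(c) ** (mu + k)
            for k in range(K)]

def mul_trunc(a, b, K):
    out = [Fraction(0)] * K
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < K:
                out[i + j] += x * y
    return out

def partial_fractions(poles):
    # poles: list of (location h, multiplicity mu); returns {(h, j): a_{h,j}}
    coeffs = {}
    for h, mu in poles:
        series = [Fraction(1)] + [Fraction(0)] * (mu - 1)
        for h2, mu2 in poles:
            if h2 != h:
                series = mul_trunc(series, inv_power_series(h - h2, mu2, mu), mu)
        for j in range(1, mu + 1):
            coeffs[(h, j)] = series[mu - j]
    return coeffs

def poly_A(coeffs, j, n_top):
    # coefficient list of A_{i,j}(z) = sum_h a_{h,j+1} (1+z)^h / j!
    out = [Fraction(0)] * (n_top + 1)
    for h in range(n_top + 1):
        a = coeffs.get((h, j + 1), Fraction(0))
        for k in range(h + 1):
            out[k] += a * comb(h, k)
    return [c / factorial(j) for c in out]

m, n = 2, 1                        # two functions (1, log(1+z)), n + 1 = 2
A = {}
for i in (1, 2):                   # Q_{2,i,2}(x) = x^2 (x-1)^2 (x-2)^i
    c = partial_fractions([(0, 2), (1, 2), (2, i)])
    for j in (0, 1):
        A[(i, j)] = poly_A(c, j, n + 1)

# 2x2 determinant of the coefficient polynomials A_{i,j,2}(z)
conv = lambda p, q: mul_trunc(p, q, len(p) + len(q) - 1)
delta = [x - y for x, y in zip(conv(A[(1, 0)], A[(2, 1)]),
                               conv(A[(1, 1)], A[(2, 0)]))]
print(delta)                       # expect zeros in degrees 0..3, nonzero at z^4
```

The nonzero top coefficient produced here plays the role of the constant $\gamma$ of $(\ref{non vanish})$.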
For $\boldsymbol{\beta}:=(\beta_0,\ldots,\beta_{m})\in K^{m+1}\setminus\{\bold{0}\}$ and $\theta_0,\theta_1,\ldots,\theta_m\in \C$, we denote $\sum_{i=0}^{m}\beta_i\theta_i$ by $\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})$. \begin{proposition} \label{critere} Let $K$ be an algebraic number field and fix an embedding $\sigma:K\hookrightarrow \C$. We denote by $K_{\infty}$ the completion of $K$ with respect to the fixed embedding $\sigma$. Let $m\in\N$ and $\theta_0:=1,\theta_1,\ldots,\theta_{m}\in \C^{*}$. Suppose there exist a family of matrices $$\{(a_{i,j,n})_{0\le i,j \le m}\}_{n\in \N} \subset {\rm{GL}}_{m+1}(K)\cap {\rm{M}}_{m+1}(\mathcal{O}_K),$$ positive real numbers \begin{align*} &\{\mathcal{A}^{(k)}\}_{1\le k \le [K:\Q]}, \{c^{(k)}_{i}\}_{\substack{0\le i \le m \\ 1\le k \le [K:\Q]}}, \{T^{(k)}\}_{1\le k \le [K:\Q]},A,c,T,N \end{align*} and a function $f:\N\longrightarrow \R_{\ge0}$ satisfying \begin{align} &c^{(k)}_{0}\le \ldots \le c^{(k)}_{m} \ \text{for} \ 1\le k \le [K:\Q], \nonumber\\ &f(n)=o(n) \ \ (n\to \infty), \label{fn} \end{align} and \begin{align*} &\max_{0\le j \le m}|a^{(k)}_{i,j,n}|\le T^{(k)}n^{c^{(k)}_{i}}e^{\mathcal{A}^{(k)}n+f(n)} \ \text{for} \ 0\le i \le m \ \text{and} \ 1\le k \le [K:\Q],\\ &\max_{0\le j \le m} |\sum_{i=0}^ma_{i,j,n}\theta_i|\le Tn^{c}e^{-A n+f(n)}, \end{align*} for $n\ge N$. Put \begin{align*} &\delta:=A+\mathcal{A}^{(1)}-\dfrac{m\sum_{k=1}^{[K:\Q]}\mathcal{A}^{(k)} }{[K_{\infty}:\R]},\\ &\nu:=A+\mathcal{A}^{(1)}. \end{align*} Suppose $\delta>0$. Then the numbers $\theta_0,\ldots,\theta_{m}$ are linearly independent over $K$ and, for any $\epsilon>0$, there exists a constant $H_0$ depending on $\epsilon$ and the given data such that the following property holds.
For any $\boldsymbol{\beta}:=(\beta_0,\ldots,\beta_m) \in \mathcal{O}^{m+1}_K \setminus \{ \bold{0} \}$ satisfying $\mathrm{H}(\boldsymbol{\beta})\ge H_0$, we have \begin{align} |\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|>{\mathrm{H}(\boldsymbol{\beta})}^{-\tfrac{[K:\Q]\nu}{[K_{\infty}:\R]\delta}-\epsilon}. \end{align} \end{proposition} \begin{proof} Since ${\rm{det}}(a_{i,j,n})_{0\le i,j \le m}\neq 0$ for all $n\in \N$, there exists $1\le I_n \le m+1$ satisfying \begin{equation} \label{Theta def} \Theta_{\boldsymbol{\beta},n}:={\rm{det}} {\begin{pmatrix} a_{0,0,n}& a_{0,1,n} & \dots &a_{0,m,n}\\ \vdots & \vdots & \ddots &\vdots \\ \beta_0& \beta_1 & \dots & \beta_m\\ \vdots & \vdots & \ddots &\vdots \\ a_{m,0,n} & a_{m,1,n} & \dots & a_{m,m,n}\\ \end{pmatrix}}\neq 0, \end{equation} where the vector $(\beta_0,\ldots,\beta_m)$ is placed in the $I_n$-th row of the matrix in the definition of $\Theta_{\boldsymbol{\beta},n}$. Then by the product formula, we have \begin{align} \label{upper infty} 1\le |\Theta^{(1)}_{\boldsymbol{\beta},n}|^{[K_{\infty}:\R]} \times {\prod_{k}}^{\prime} |\Theta^{(k)}_{\boldsymbol{\beta},n}|, \end{align} where ``$ \ {}^{\prime} \ $'' in ${\prod_{k}}^{\prime}$ means that $k$ runs over $2\le k \le [K:\Q]$ if $K_{\infty}=\R$ and over $3\le k \le [K:\Q]$ if $K_{\infty}=\C$. In the following, we denote the $(s,t)$-th cofactor of the matrix in the definition of $\Theta_{\boldsymbol{\beta},n}$ by $\Theta_{\boldsymbol{\beta},n,s,t}$. First we give an upper bound of $|\Theta^{(1)}_{\boldsymbol{\beta},n}|$.
\begin{align} &|\Theta^{(1)}_{\boldsymbol{\beta},n}|=\left|{\rm{det}} {\begin{pmatrix} \sum_{i=0}^m a_{i,0,n}\theta_i & a_{0,1,n} & \dots &a_{0,m,n}\\ \vdots & \vdots & \ddots &\vdots \\ \Lambda(\boldsymbol{\beta},\boldsymbol{\theta}) & \beta_1 & \dots & \beta_m\\ \vdots & \vdots & \ddots &\vdots \\ \sum_{i=0}^ma_{i,m,n}\theta_i & a_{m,1,n} & \dots & a_{m,m,n}\\ \end{pmatrix}}\right| \label{upper iota}\\ &=\Big|\sum_{\substack{1\le s \le m+1 \\ s\neq I_n}} \left(\sum_{i=0}^ma^{(1)}_{i,s-1,n}\theta_i \right)\Theta^{(1)}_{\boldsymbol{\beta},n,s,1}+\Lambda(\boldsymbol{\beta}, \boldsymbol{\theta})\Theta^{(1)}_{\boldsymbol{\beta},n,I_n,1}\Big| \nonumber \\ &\le mTn^{c}e^{-An+f(n)} m!\max_{0\le i \le m}\{1,|\beta^{(1)}_i| \} \prod_{i=2}^{m} (T^{(1)}n^{c^{(1)}_{i}}e^{\mathcal{A}^{(1)}n+f(n)})+|\Lambda(\boldsymbol{\beta}, \boldsymbol{\theta})|m!\prod_{i=1}^{m}(T^{(1)}n^{c^{(1)}_{i}}e^{\mathcal{A}^{(1)}n+f(n)}) \nonumber\\ &=m!\prod_{i=2}^{m} (T^{(1)}n^{c^{(1)}_{i}}e^{\mathcal{A}^{(1)}n+f(n)})\left(mTn^{c}e^{-An+f(n)}\max_{0\le i \le m}\{1, |\beta^{(1)}_i|\}+|\Lambda(\boldsymbol{\beta}, \boldsymbol{\theta})|T^{(1)}n^{c^{(1)}_{1}}e^{\mathcal{A}^{(1)}n+f(n)}\right). \nonumber \end{align} Secondly, we give an upper bound for $|\Theta^{(k)}_{\boldsymbol{\beta},n}|$ for $2\le k \le [K:\Q]$. \begin{align} |\Theta^{(k)}_{\boldsymbol{\beta},n}|&=\left|{\rm{det}} {\begin{pmatrix} a^{(k)}_{0,0,n} & a^{(k)}_{0,1,n} & \dots & a^{(k)}_{0,m,n}\\ \vdots & \vdots & \ddots &\vdots \\ \beta^{(k)}_0 & \beta^{(k)}_1 & \dots & \beta^{(k)}_m\\ \vdots & \vdots & \ddots &\vdots \\ a^{(k)}_{m,0,n}& a^{(k)}_{m,1,n} & \dots & a^{(k)}_{m,m,n}\\ \end{pmatrix}}\right| \label{tau part infty} \\ &\le (m+1)!\max_{0\le i \le m}\{1,|\beta^{(k)}_i|\} \prod_{1\le i \le m}\left(T^{(k)} n^{c^{(k)}_{i}}e^{\mathcal{A}^{(k)}n+f(n)}\right) \nonumber\\ &= (m+1)!\max_{0\le i \le m}\{1,|\beta^{(k)}_i|\} (T^{(k)})^m n^{\sum_{1\le i \le m}c^{(k)}_{i}}e^{m(\mathcal{A}^{(k)}n+f(n))}.
\nonumber \end{align} Substituting the inequalities $(\ref{upper iota})$ and $(\ref{tau part infty})$ into the inequality $(\ref{upper infty})$ and taking the $\tfrac{1}{[K_{\infty}:\R]}$-th power, we obtain \begin{align} \label{conclusion 2} 1\le C_1\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}} e^{-\delta n}+C_2 \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}|\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|e^{(\nu-\delta)n}, \end{align} where \begin{align*} &C_1:= m!mTn^{c} e^{f(n)}\prod_{i=2}^{m} (T^{(1)}n^{c^{(1)}_{i}}e^{f(n)}) {\prod_k}^{\prime} \left[(m+1)! (T^{(k)})^m n^{\sum_{1\le i \le m}c^{(k)}_{i}}e^{mf(n)}\right]^{\tfrac{1}{[K_{\infty}:\R]}},\\ &C_2:=m!\prod_{i=1}^{m} (T^{(1)}n^{c^{(1)}_{i}}e^{f(n)}){\prod_{k}}^{\prime} \left[(m+1)! (T^{(k)})^m n^{\sum_{1\le i \le m}c^{(k)}_{i}}e^{mf(n)}\right]^{\tfrac{1}{[K_{\infty}:\R]}}. \end{align*} Let $\epsilon>0$ and take $0<\tilde{\epsilon}<\delta$ satisfying \begin{align}\label{condition tilde mu} \dfrac{\nu}{\delta}+\dfrac{\epsilon}{2}\ge \dfrac{\nu}{\delta-\tilde{\epsilon}}. \end{align} Write $\tilde{\delta}:=\delta-\tilde{\epsilon}$. By the assumptions $\delta>0$ and $(\ref{fn})$, there exists a natural number ${n^{*}}$ satisfying \begin{align} &C_1 e^{-\delta n}\le e^{-\tilde{\delta}n}, \label{cond 1}\\ &C_2 e^{(\nu-\delta)n}\le e^{(\nu-\tilde{\delta})n}, \label{cond 2} \end{align} for all $n\ge n^{*}$. Consequently, using $(\ref{conclusion 2})$, we obtain \begin{align} \label{conclusion 3} |\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|\ge \dfrac{1-e^{-\tilde{\delta}n}\cdot \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}}{e^{(\nu-\tilde{\delta})n}\cdot \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}} \ \ \text{for any} \ n\ge n^{*}. \end{align} Now, for this fixed $n^{*}$, we choose $H_0>1$ such that $e^{-\tilde{\delta}n^{*}}H^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}_0\ge \tfrac{1}{2}$.
Then we have $e^{-\tilde{\delta}n^{*}}\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}\ge \tfrac{1}{2}$ for all $\boldsymbol{\beta}\in \mathcal{O}^{m+1}_K\setminus \{\bold{0}\}$ satisfying $\mathrm{H}(\boldsymbol{\beta})\ge H_0$. Take $\boldsymbol{\beta}\in \mathcal{O}^{m+1}_K\setminus \{\bold{0}\}$ satisfying $\mathrm{H}(\boldsymbol{\beta})\ge H_0$. Let $\tilde{n}=\tilde{n}(\mathrm{H}(\boldsymbol{\beta})) \in \N$ be the least positive integer satisfying $e^{-\tilde{\delta}\tilde{n}}\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}<\tfrac{1}{2}$. Note that $\tilde{n}> n^{*}$. Using inequality $(\ref{conclusion 3})$ for $\tilde{n}$, we have \begin{align} |\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|>\dfrac{\tfrac{1}{2}}{e^{(\nu-\tilde{\delta})\tilde{n}}\cdot \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}}. \end{align} By the definition of $\tilde{n}$, we have $e^{-(\tilde{n}-1)\tilde{\delta}} \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}}\ge \tfrac{1}{2}$ and then $e^{\tilde{n}}\le (2\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]}{[K_{\infty}:\R]}})^{\tfrac{1}{\tilde{\delta}}}e$. Finally, we obtain \begin{align*} |\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|&>\dfrac{1}{2^{\tfrac{\nu}{\tilde{\delta}}} \cdot e^{\nu-\tilde{\delta}} \cdot \mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]\nu}{[K_{\infty}:\R]\tilde{\delta}}}}\\ &\ge \dfrac{1}{\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]\nu}{[K_{\infty}:\R]\tilde{\delta}}}}\\ &\ge \dfrac{1}{\mathrm{H}(\boldsymbol{\beta})^{\tfrac{[K:\Q]\nu}{[K_{\infty}:\R]\delta}+\tfrac{\epsilon[K:\Q]}{2[K_{\infty}:\R]}}}. \end{align*} Note that the last inequality is obtained by the inequality $(\ref{condition tilde mu})$. This completes the proof of Proposition $\ref{critere}$. \end{proof} \begin{remark} \label{how to calculate} In this remark, we explain how to choose the positive number $H_0$ in Proposition $\ref{critere}$. Let $\epsilon>0$.
First, we take $\tilde{\epsilon}:=\dfrac{\epsilon \delta^2}{(2\nu+\epsilon \delta)}$. Then we have \begin{align*} \dfrac{\nu}{\delta}+\dfrac{\epsilon}{2}=\dfrac{\nu}{\delta-\tilde{\epsilon}}. \end{align*} Since $f(n)=o(n)$ as $n\to \infty$, there exists $\tilde{n}^{*}\in \N$ satisfying \begin{align*} f(n)<\dfrac{[K_{\infty}:\R]}{2m[K:\Q]}\tilde{\epsilon} n \ \text{for any} \ n\ge \tilde{n}^{*}. \end{align*} We take $n^{*}\ge \tilde{n}^{*}$ satisfying \begin{align} &{\rm{log}} \left(m!mTn^{c}\prod_{i=1}^m(T^{(1)}n^{c^{(1)}_{i}})\times {\prod_{k}}^{\prime} \left[(m+1)!(T^{(k)})^m n^{\sum_{1\le i \le m}c^{(k)}_{i}}\right]^{\tfrac{1}{[K_{\infty}:\R]}} \right)\le \dfrac{\epsilon \delta^2n^{*}}{4(2\nu+\epsilon \delta)}. \label{important} \end{align} Then the inequalities $(\ref{cond 1})$ and $(\ref{cond 2})$ hold for any $n\ge n^{*}$. Finally, we set $$H_0=\left(\dfrac{1}{2}{\rm{exp}}[{\delta n^{*}}]\right)^{\tfrac{[K_{\infty}:\R]}{[K:\Q]}}.$$ Then, by the proof of Proposition $\ref{critere}$, the positive number $H_0$ satisfies the following property$:$ \begin{align*} |\Lambda(\boldsymbol{\beta},\boldsymbol{\theta})|>{\mathrm{H}(\boldsymbol{\beta})}^{-\tfrac{[K:\Q]\nu}{[K_{\infty}:\R]\delta}-\tfrac{\epsilon[K:\Q]}{2[K_{\infty}:\R]}}, \end{align*} for any $\boldsymbol{\beta}:=(\beta_0,\ldots, \beta_m) \in \mathcal{O}^{m+1}_K \setminus \{ \bold{0} \}$ satisfying $\mathrm{H}(\boldsymbol{\beta})\ge H_0$. \end{remark} \begin{proof} [Proof of Theorem $\ref{power of log indep}$] We use the same notations as in Section $5$. Let $R_{i,n+1}(z)$ be the formal power series defined in $(\ref{exp pade S})$ for $1\le i \le m$ and $n\in\N$. Let $K$ be an algebraic number field. We fix an element $\alpha\in K\setminus\{0,-1\}$ satisfying the assumption in Theorem $\ref{power of log indep}$. Then we have \begin{align*} R_{i,n+1}(\alpha)=\sum_{j=0}^{m-1}A_{i,j,n+1}(\alpha)\log^j(1+\alpha).
\end{align*} Put \begin{align} &D_{n+1}(\alpha):=d^m_{n+1}(n+1)!^m(m-1)!{\rm{den}}^{n+1}(\alpha), \label{D_n}\\ &a_{i,j,n+1}(\alpha):=D_{n+1}(\alpha) A_{i,j,n+1}(\alpha) \ \text{for} \ 1\le i \le m, 0\le j\le m-1. \label{linear form coefficients} \end{align} Then by Lemma $\ref{denominator}$ and Corollary $\ref{key corollary}$, we have \begin{align} \label{non-vanish log} (a_{i,j,n+1}(\alpha))_{1\le i \le m,0\le j \le m-1}\ \in {\rm{GL}}_{m}(K)\cap {\rm{M}}_m(\mathcal{O}_K) \ \text{for all} \ n\in\N. \end{align} Define the set of positive real numbers \begin{align*} &\mathcal{A}^{(k)}(\alpha)=m(1+{\rm{log}}(2))+{\rm{log}}({\rm{den}}(\alpha))+{\rm{log}}(1+|\alpha^{(k)}|) \ \text{for} \ 1\le k \le [K:\Q],\\ &c^{(k)}_{l}=2m \ \text{for} \ 0\le l \le m-1 \ \text{and} \ 1\le k \le [K:\Q],\\ &T^{(k)}(\alpha)=\dfrac{2^m(1+|\alpha^{(k)}|)(m-1)!}{|\alpha^{(k)}|} \ \text{for} \ 1\le k \le [K:\Q], \\ &A(\alpha)=\dfrac{m}{2}{\rm{log}}\left(\dfrac{m}{|{\rm{log}}(1+\alpha)|}\right)-\left(\dfrac{m(1+\sqrt{1+4|{\rm{log}}(1+\alpha)|})}{2}+\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}} \right)-{\rm{log}}({\rm{den}}(\alpha)), \\ &c(\alpha)=\dfrac{m}{2}, \\ &T(\alpha)={\rm{exp}}\left({\dfrac{2|{\rm{log}}(1+\alpha)|}{1+\sqrt{1+4|{\rm{log}}(1+\alpha)|}}}\right)(m-1)!. \end{align*} Define $g(n):=n\left[\sqrt{{\rm{log}}n}\cdot {\rm{exp}}\left(-\sqrt{({\rm{log}}n)/R}\right)\right]$ with $R:=\tfrac{515}{(\sqrt{546}-\sqrt{322})^2}$. Note that, in \cite{R-S1}, Rosser-Schoenfeld gives an estimate of $d_n$ of the form \begin{align} \label{growth dn} {\rm{exp}}(n-g(n))\le d_n \le {\rm{exp}}(n+g(n)). \end{align} Put $f(n):=mg(n)$. Then we see $f(n)=o(n) \ (n\to \infty)$. 
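As a quick numerical sanity check (a sketch, not part of the proof), the following script verifies, for illustrative values of $\nu$, $\delta$ and $\epsilon$, the identity $\nu/\delta+\epsilon/2=\nu/(\delta-\tilde{\epsilon})$ behind the choice of $\tilde{\epsilon}$ in Remark $\ref{how to calculate}$, evaluates the corresponding $H_0$, and tests the Rosser--Schoenfeld-type bound $(\ref{growth dn})$ for $d_n={\rm{lcm}}(1,\ldots,n)$ at a few small values of $n$.

```python
import math
from functools import reduce

def tilde_eps(nu, delta, eps):
    # tilde_epsilon = eps*delta^2/(2*nu + eps*delta), chosen so that
    # nu/delta + eps/2 = nu/(delta - tilde_epsilon) holds exactly
    return eps * delta ** 2 / (2 * nu + eps * delta)

def H0(delta, n_star, deg_K, deg_K_inf):
    # H_0 = ((1/2) * exp(delta * n_star))^([K_inf:R]/[K:Q])
    return (0.5 * math.exp(delta * n_star)) ** (deg_K_inf / deg_K)

R = 515 / (math.sqrt(546) - math.sqrt(322)) ** 2

def g(n):
    # g(n) = n * sqrt(log n) * exp(-sqrt(log(n)/R))
    return n * math.sqrt(math.log(n)) * math.exp(-math.sqrt(math.log(n) / R))

def d(n):
    # d_n = lcm(1, 2, ..., n)
    return reduce(math.lcm, range(1, n + 1), 1)

# illustrative parameter values, not tied to a particular alpha
nu, delta, eps = 3.0, 1.2, 0.1
te = tilde_eps(nu, delta, eps)
assert abs(nu / delta + eps / 2 - nu / (delta - te)) < 1e-12
assert H0(delta, 20, 2, 1) > 1.0

# exp(n - g(n)) <= d_n <= exp(n + g(n)) at a few small n
for n in (10, 50, 200):
    assert n - g(n) <= math.log(d(n)) <= n + g(n)
```

For small $n$ the bound is very loose, since $g(n)$ is then comparable to $n$; it only becomes sharp asymptotically.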
By Lemma $\ref{keisu ookisa}$ and Lemma $\ref{uekara jyouyokou}$, we have \begin{align*} &\max_{\substack{1\le i \le m \\ 0\le j \le m-1}}|a^{(k)}_{i,j,n+1}(\alpha)|\le T^{(k)}(\alpha) (n+1)^{c^{(k)}_{i}}e^{\mathcal{A}^{(k)}(\alpha)(n+1)+f(n+1)} \ \text{for} \ 1\le k \le [K:\Q],\\ &\max_{1\le i \le m} \left|\sum_{j=0}^{m-1}a_{i,j,n+1}(\alpha){\rm{log}}^j(1+\alpha)\right|\le T(\alpha)(n+1)^{c(\alpha)}e^{-A(\alpha)(n+1)+f(n+1)}, \end{align*} for all $n\in \N$. We use Proposition $\ref{critere}$ for $\theta_1={\rm{log}}(1+\alpha),\ldots,\theta_{m-1}={\rm{log}}^{m-1}(1+\alpha)$ and the above data $\{\mathcal{A}^{(k)}(\alpha)\}_{1\le k \le [K:\Q]}$, $\{c^{(k)}_{l}\}_{\substack{0\le l \le m-1 \\ 1\le k \le [K:\Q]}}$, $\{T^{(k)}(\alpha)\}_{1\le k \le [K:\Q]}$, $A(\alpha)$, $c(\alpha)$, and $T(\alpha)$, and we obtain the assertion of Theorem $\ref{power of log indep}$. \end{proof} \subsection{Proof of Theorem $\ref{p power of log indep}$} Before proving Theorem $\ref{p power of log indep}$, we introduce a $p$-adic version of Proposition $\ref{critere}$. \begin{proposition} \label{critere p} Let $K$ be an algebraic number field and fix an embedding $\sigma_p:K\hookrightarrow \C_p$. We denote by $K_{p}$ the completion of $K$ with respect to the fixed embedding $\sigma_{p}$. Let $m\in\N$ and $\theta_0:=1,\theta_1,\ldots,\theta_{m}\in \C_p$.
Suppose there exists a set of matrices $$\{(a_{i,j,n})_{0\le i,j \le m}\}_{n\in \N} \subset {\rm{GL}}_{m+1}(K)\cap {\rm{M}}_{m+1}(\mathcal{O}_K),$$ positive real numbers \begin{align*} &\{\mathcal{A}^{(k)}\}_{1\le k \le [K:\Q]}, \{c^{(k)}_{i}\}_{\substack{0\le i \le m \\ 1\le k \le [K:\Q]}}, \{T^{(k)}\}_{1\le k \le [K:\Q]},A_p,c_p,T_p,N \end{align*} and a function $f:\N\longrightarrow \R_{\ge0}$ satisfying \begin{align*} &c^{(k)}_{0}\le \ldots \le c^{(k)}_{m} \ \text{for} \ 1\le k \le [K:\Q], \nonumber\\ &f(n)=o(n) \ \ (n\to \infty), \end{align*} and \begin{align*} &\max_{0\le j \le m}|a^{(k)}_{i,j,n}|\le T^{(k)}n^{c^{(k)}_{i}}e^{\mathcal{A}^{(k)}n+f(n)} \ \text{for} \ 0\le i \le m \ \text{and} \ 1\le k \le [K:\Q],\\ &\max_{0\le j \le m} |\sum_{i=0}^{m}a_{i,j,n}\theta_i|_p\le T_pn^{c_p}e^{-A_pn}, \end{align*} for all $n\ge N$. Put \begin{align*} &\delta_p:=A_p-\dfrac{m\sum_{k=1}^{[K:\Q]}\mathcal{A}^{(k)}}{[K_p:\Q_p]},\\ &\nu_p:=A_p. \end{align*} Let $\epsilon>0$ and let ${n}^{*}\in \N$ satisfy $n^{*}\ge N$ and \begin{align*} &f(n)\le \dfrac{\epsilon \delta^2_p[K_{p}:\Q_p]n^{*}}{2m(2\nu_p+\epsilon \delta_p)[K:\Q]}, \\ &{\rm{log}}\left(T_{p}n^{c_p} \left[\prod_{k=1}^{[K:\Q]}(m+1)!(T^{(k)})^m n^{\sum_{1\le i \le m}c^{(k)}_{i}}\right]^{\tfrac{1}{[K_{p}:\Q_p]}}\right) \le \dfrac{\epsilon \delta^2_pn^{*}}{4 (2\nu_p+\epsilon \delta_p)}, \end{align*} for all $n\ge n^{*}$.
Suppose $\delta_p>0$. Then the numbers $\theta_0,\ldots,\theta_{m}$ are linearly independent over $K$ and the positive number $H_0:=\left(\dfrac{1}{2}{\rm{exp}}\left[\delta_p n^{*}\right]\right)^{\tfrac{[K_p:\Q_p]}{[K:\Q]}}$ satisfies the following property$:$ For any $\boldsymbol{\beta}:=(\beta_0,\ldots, \beta_m) \in \mathcal{O}^{m+1}_K \setminus \{ \bold{0} \}$ satisfying $H_0\le \mathrm{H}(\boldsymbol{\beta})$, we have \begin{align*} |\Lambda_p(\boldsymbol{\beta},\boldsymbol{\theta})|_p>\mathrm{H}(\boldsymbol{\beta})^{-\tfrac{[K:\Q]\nu_p}{[K_{p}:\Q_p]\delta_p}-\tfrac{\epsilon[K:\Q]}{2[K_{p}:\Q_p]}} . \end{align*} \end{proposition} Since Proposition $\ref{critere p}$ can be proved by the same argument as that of Proposition $\ref{critere}$, we omit the proof. \bigskip \begin{proof} [Proof of Theorem $\ref{p power of log indep}$] We use the same notations as in the proof of Theorem $\ref{power of log indep}$. Let $K$ be an algebraic number field. We fix an element $\alpha\in K\setminus\{0,-1\}$ satisfying the assumption in Theorem $\ref{p power of log indep}$. Put \begin{align*} &T_p(\alpha)=\dfrac{(2m)^{m-1}}{|\alpha|_p},\\ &c_p=m-1,\\ &A_p(\alpha)=-m{\rm{log}}(|\alpha|_p). \end{align*} By Lemma $\ref{keisu ookisa}$ and Lemma $\ref{p upper bound jyouyo}$, we obtain \begin{align*} &\max_{\substack{1\le i \le m \\ 0\le j \le m-1}}|a^{(k)}_{i,j,n+1}(\alpha)|\le T^{(k)}(\alpha) (n+1)^{c^{(k)}_{i}}e^{\mathcal{A}^{(k)}(\alpha)(n+1)+f(n+1)},\\ &\max_{1\le i \le m} \left|\sum_{j=0}^{m-1} a_{i,j,n+1}(\alpha)\log^{j}_p(1+\alpha)\right|_p\le T_p(\alpha) (n+1)^{c_p} e^{-A_p(\alpha)(n+1)}, \end{align*} for all natural numbers $n$ satisfying $1/{\rm{log}}(|\alpha|^{-1}_p)+1/m\le n$. Using Proposition $\ref{critere p}$ for $\theta_1={\rm{log}}_p(1+\alpha),\ldots,\theta_{m-1}={\rm{log}}^{m-1}_p(1+\alpha)$, we obtain the assertion of Theorem $\ref{p power of log indep}$. \end{proof} Acknowledgements.
The author warmly thanks Noriko Hirata-Khono for her comments on an earlier version of this manuscript.
\section{Introduction} In the context of greener and more cost-effective aviation, industrial and academic researchers have proposed and studied a wide range of control methods mainly over the past three decades. Unfortunately, hardly any offers realistic prospects of being implemented in practice. This applies, in particular, to active control schemes, despite some successful implementations of active devices in laboratory tests. Among passive concepts, the use of riblets was originally inspired by the narrow grooves observed on sharks' placoid scales. Although the effectiveness of the dermal denticles of the shark has been questioned by \citet{Boomsma2016}, the use of optimally-chosen longitudinal grooves, aligned with the main flow direction, has been shown to be capable of reducing the turbulent skin-friction drag by levels of order 5--10\% \citep{Choi1993,Bechert1989,Bechert1997,Garcia-Mayoral2011}. However, a practical, cost-effective implementation has yet to be achieved, mostly hindered by the small optimal spacing required (about 15$\mu$m in cruise-speed conditions) and stringent tolerances on the sharpness of the crests. More complex variants, such as sinusoidal riblets, were studied in \citep{Peet2009b,Kramer2010,BannierPhD}, but despite attempts to optimise the geometry, \citet{BannierPhD} showed that conventional (straight) riblets appear to be as effective. 
On the active-control front, based upon the work of \citet{Jung1992} on the drag-reducing properties of transverse wall oscillations, it has been established, computationally, that imparting streamwise-modulated, spanwise in-plane wall motions of the form of a travelling wave ${w_w(x,t) = A \sin(2\pi\, x/\lambda_x - \omega t)}$, giving rise to a `Generalised Stokes Layer' (GSL), results in gross friction-drag-reduction levels of up to about 45\% \citep{Quadrio2009,Quadrio2010} at low Reynolds numbers, the effectiveness being observed to reduce at higher Reynolds numbers \citep{Touber2012,Hurst2014,Gatti2016}. An experimental confirmation of the numerical results for the travelling wave was undertaken by \citet{Auteri2010} in pipe flows, for which drag-reduction levels of up to 33\% were achieved, while \citet{Bird2015} designed a Kagome-lattice-based actuator for boundary layers, achieving a drag-reduction level of around 10\%. However, it is very challenging to implement the latter method in a physical laboratory, let alone in practice. A particular case of the GSL is the Spatial Stokes Layer (SSL), consisting of a standing wave ${w_w=A_\text{SSL}\sin(2\pi\,x/\lambda_x)}$, as shown in \cref{fig:SSL}. \begin{figure}\centering \vspace{10pt} \includegraphics[width=\linewidth]{wall_motions} \caption{Schematic of the in-plane wall motion imparted in the case of a Spatial Stokes Layer (SSL).} \label{fig:SSL} \end{figure} This method was studied by \citet{Viotti2009} by means of DNS for various forcing amplitudes $A_\text{SSL}$ and wavelengths $\lambda_x$. The maximum net-energy savings of 23\%, achieved as a result of \citeauthor{Viotti2009}'s exploration at $Re_\tau =O(200)$, was for a forcing wavelength of $\lambda_x^+ = O(1000)$ that is, about two orders of magnitude larger than the optimal dimension of riblets. 
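For reference, the two waveforms above are straightforward to encode; the short sketch below (plain Python, with illustrative amplitude and wavelength in wall units) evaluates the travelling wave ${w_w = A \sin(2\pi\, x/\lambda_x - \omega t)}$ and the standing wave of the SSL, the latter being the steady, $\omega=0$ case of the former.

```python
import math

def w_gsl(x, t, A, lam_x, omega):
    # travelling-wave spanwise wall velocity: w_w = A sin(2 pi x / lambda_x - omega t)
    return A * math.sin(2.0 * math.pi * x / lam_x - omega * t)

def w_ssl(x, A_ssl, lam_x):
    # spatial Stokes layer: steady standing wave, the omega = 0 case of the GSL
    return A_ssl * math.sin(2.0 * math.pi * x / lam_x)

# illustrative values in wall units: A^+ = 2, lambda_x^+ = 1000
A, lam_x = 2.0, 1000.0
assert w_gsl(x=250.0, t=0.0, A=A, lam_x=lam_x, omega=0.0) == w_ssl(250.0, A, lam_x)
assert abs(w_ssl(250.0, A, lam_x) - A) < 1e-12  # peak at a quarter wavelength
```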
Thus, while the resulting control method is still active, it is steady, and based on geometric dimensions that, for an entirely passive device, would be compatible with a practical implementation. In an effort to address the need for practical control methods, the present research examines a passive means of emulating spanwise in-plane wall motions, as proposed in \cite{Chernyshenko2013}. The geometry considered is a solid wavy wall, with the troughs and crests skewed with respect to the main flow direction. The presence of the skewed wavy wall creates a spanwise pressure gradient that forces the flow in the spanwise direction, thus generating an alternating spanwise motion, as shown in \cref{fig:flowvisu}. \begin{figure*}\centering \includegraphics[width=.8\textwidth]{ww_ssl} \caption{Emulation of the forcing. Visualisation of mean streaklines close to the wall for SSL (left) and wavy wall (right). The background is coloured by the norm of the velocity vector.} \label{fig:flowvisu} \end{figure*} In contrast to the SSL, where the wall is actuated, the velocity has to vanish at the solid wall, so that the spanwise forcing obviously cannot be faithfully emulated. However, the premise is that the wavy geometry will generate a spanwise shear strain, somewhat away from the wall, that will weaken turbulence in a similar manner to that effected by the SSL. Such a passive device would benefit from the favourable actuation characteristics of the SSL (large wavelength), resulting in a practical solution, from a manufacturing and maintenance standpoint. The present study will focus on selected direct numerical simulations of turbulent wavy-channel flows with the aim of examining the degree of Stokes-layer emulation achieved, and the degree to which the drag is reduced relative to the plane channel.
As part of this study, some major similarities and differences between the flow arising from in-plane wall motions and that past a wavy wall, as shown in \cref{fig:flowvisu}, will be brought to light, including the impact on the near-wall turbulence. \section{Methodology} \subsection{Overall strategy} An obvious problem is posed by the absence of any guidance on which combination of geometric parameters offers the promise of maximum drag reduction. An exploration of the three-dimensional parameter space (wave height, wavelength, and flow angle) by a `carpet-bombing' strategy, or classical formal optimisation strategy, is not tenable on cost grounds, especially because of the tight resolution requirements needed for an accurate prediction of the drag increase/decrease margin. For this reason, a preliminary low-order study was undertaken by \citet{Chernyshenko2013} to narrow down the exploration range within which the drag reduction might be maximised. This study yielded an estimate for the streamwise-projected wavelength of the wave ${\lambda_x^+ \approx 1500}$ and the flow angle ${\theta\approx 52^\circ}$, but did not provide an estimate for the height of the wave. Rather, a condition was given for the wave height, subject to the amplitude of the forcing of the emulated SSL $A_\text{SSL}^+=2$. The present strategy was initiated with the configuration given in \citet{Chernyshenko2013}. The wave height was chosen in order to approximately satisfy the above-mentioned condition on the emulated SSL. Other configurations, a selection of which will be presented below, were later simulated, and the exploration was mainly undertaken by trial and error. \subsection{Computational simulations} Direct Numerical Simulations are performed using an in-house code that features collocated-variable storage and second-order spatial approximations, implemented within a finite-volume framework on a body-fitted mesh.
The equations are explicitly integrated in time by a third-order Gear-like scheme, described in~\cite{Fishpool2009}. In the fractional-step procedure, the non-solenoidal intermediate velocity field is projected onto the solenoidal space by solving a pressure--Poisson equation. The latter is solved by Successive Line Over-Relaxation \citep[p.~510]{Hirsch2007}, the convergence of which is accelerated using a multigrid algorithm by \citet{Lien1993}. The multigrid iterations within any time step are terminated when a convergence criterion is met, based on the RMS of the mass residuals, made non-dimensional using the fluid density, bulk velocity, and channel half-height. A typical value used for this criterion is $10^{-10}$. Stable velocity--pressure coupling is ensured by use of the Rhie-and-Chow interpolation \citep{Rhie1983}, preventing odd-even oscillations. The code has been thoroughly verified and validated. A verification of the spatial accuracy of the code was undertaken via the Method of Manufactured Solutions \citep{Roache1997,Roache1998,Roache2002,Roy2004,Salari2000}, which indicated a second-order spatial accuracy for the velocity and pressure fields. The manufactured solution was implemented in a channel with lower and upper walls being wavy and flat, respectively. Validation was performed by independently reproducing the results of flow solutions documented in existing databases. Thus, results were obtained for a turbulent flow past a wavy wall and compared to the experimental and DNS data by \citet{Maass1996}, provided in the ERCOFTAC Classic Database (case~77). Very good agreement was found for all the statistical quantities available, including the velocity, Reynolds stresses, and pressure field. \subsection{Spatial discretisation of the problem} \subsubsection{Simulation of a wavy channel} The simulations were performed using surface-conformal meshing.
The wavy geometry was created by adding an increment $h_w(x,y,z)$ to the wall-normal cell coordinates of a plane channel of half-height $h$, with the walls located at $y=\pm h$. A number of grid configurations have been simulated, with the characteristics of the plane-channel mesh ($h_w = 0$) listed in \cref{tab:grids}, where all quantities are scaled by reference to the target friction Reynolds number. The labels G1 to G6 will be used later to identify these cases. As will transpire, the changes in drag relative to the plane channel are small, pushing the requirements for the spatial resolution to much more stringent levels than for regular DNS. Therefore, particular emphasis is placed on a few simulations performed at the highest tractable resolution within the resources available. These key simulations correspond to the finest grid G6, at a bulk Reynolds number of $Re_b=6200$ ($Re_\tau \approx 360$). Quantitative evidence for the level of refinement is given in \cref{fig:kolmogorov_scale} by scaling the grid dimensions by the Kolmogorov length scale in a plane-channel DNS for the mesh G6, showing that the ratio $\Delta/\eta$, where $\Delta = \sqrt[3]{\Delta x \Delta y \Delta z}$, remains lower than unity throughout the channel. The wavy mesh is generated by adding an increment to the wall-normal coordinate of each and every cell of the plane channel. Two types of wavy geometry are considered herein: one with both walls wavy and the other with one wall flat, as shown in \cref{fig:wavywall}. In the former, shown in \cref{fig:wavywall}\,\textit{a}, both walls are in phase, i.e.~yielding a constant passage height of $2h$ along the entire channel. In the latter, shown in \cref{fig:wavywall}\,\textit{b}, the local height varies from $2h-A_w$ to $2h+A_w$, where $2h$ is the mean channel height, and $A_w$ the amplitude of the sinusoidal wall undulations. 
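The wall shapes of the two configurations can be sketched as follows (a minimal Python illustration, with hypothetical values of $h$, $A_w$ and the wavelength; the actual meshes additionally distribute the increment over the interior cells).

```python
import math

def wall_positions(x, h, A_w, lam, both_wavy=True):
    # lower/upper wall height at streamwise location x;
    # in-phase wavy-wavy walls keep a constant passage height of 2h
    y_w = A_w * math.sin(2.0 * math.pi * x / lam)
    y_lower = -h + y_w
    y_upper = h + y_w if both_wavy else h
    return y_lower, y_upper

h, A_w, lam = 1.0, 0.1, 2.0  # hypothetical, in units of the half-height

# wavy-wavy (w-w): constant gap of 2h everywhere
lo, up = wall_positions(0.3, h, A_w, lam, both_wavy=True)
assert abs((up - lo) - 2.0 * h) < 1e-14

# wavy-flat (w-f): local height varies between 2h - A_w and 2h + A_w
gaps = [up - lo for lo, up in
        (wall_positions(k * lam / 200.0, h, A_w, lam, both_wavy=False)
         for k in range(200))]
assert min(gaps) >= 2.0 * h - A_w - 1e-12
assert max(gaps) <= 2.0 * h + A_w + 1e-12
```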
\begin{table*} \caption{Properties of the grid configurations, on which almost all the simulations presented are based. Viscous units are based on the target value of $Re_\tau$ for the plane channel. (Note that the angle of the flow may vary, e.g.~for an angle of $\pi/2$, the roles of $x$ and $z$ are reversed.)} \label{tab:grids} \begin{minipage}{0.9\textwidth} \begin{ruledtabular} \begin{tabular}{ccccccccccc} Grid label& $Re_\tau$ & $N_x$ & $N_y$ & $N_z$ & $\Delta x^+$ & $\Delta y^+$ & $\Delta z^+$ & $\Delta t^+$ & $L_x$ & $L_z$ \\ [3pt] \hline G1 & 180 & 768 & 192 & 768 & 2.4 & 0.7 -- 2.9 & 2.4 & 0.04 & 10.2 & 10.2\\ G2 & 360 & 768 & 192 & 768 & 4.8 & 0.7 -- 7.3 & 4.8 & 0.05 & 10.2 & 10.2\\ G3 & 360 & 2208 & 192 & 2208 & 2.5 & 0.4 -- 8.5 & 2.5 & 0.02 & 15 & 15\\ G4 & 360 & 1104 & 192 & 1104 & 2.5 & 0.4 -- 8.5 & 2.5 & 0.02 & 7.5 & 7.5\\ G5 & 1000 & 1024 & 768 & 1152 & 7.1 & 0.5 -- 5 & 8.7 & 0.03 & 7.2 & 10\\ G6 & 360 & 1104 & 288 & 2208 & 1.7 & 0.6 -- 4.5 & 1.7 & 0.02 & 5.1 & 10.2\\ \end{tabular} \end{ruledtabular} \end{minipage} \end{table*} \begin{figure*} \centering \includegraphics[scale=1.05]{kolmogorov_scale_plane} \caption{Ratios between grid spacings and the Kolmogorov length scale across the wall-normal direction, for a plane channel, for the grid configuration G6.} \label{fig:kolmogorov_scale} \end{figure*} \begin{figure*}\centering \vspace{-1cm} \includegraphics[scale=1]{ww}\\ \vspace{-7.02cm} \includegraphics[scale=1]{ww_text} \vspace{-1.6cm} \caption{Sketch of the geometrical configurations: (\textit{a}) wavy-wavy channel~(w-w) and (\textit{b}) wavy-flat channel~(w-f). Here, the main flow direction is along the $x$-direction.} \label{fig:wavywall} \end{figure*} \subsubsection{Computational implementation for skewed flow \label{sec:skewness}} The skewed wavy channel can be simulated in two ways: the grid can be aligned with the wave or with the main flow direction. 
Although the two are physically equivalent, keeping the wavy boundary aligned with the numerical box and skewing the flow at an angle to the grid allows greater flexibility. Specifically, the main advantages of this approach are as follows: \begin{enumerate} \item if the crests are aligned with the $z$-direction, this direction becomes statistically homogeneous; \item the post-processing is significantly eased; and \item this option allows continuous variations of the domain extent in the $z$-direction without affecting the periodicity boundary conditions applied to the $z$-direction boundaries. \end{enumerate} However, a disadvantage of this option is that it increases the problem size, since there is no longer an alignment between the $x$--$z$ coordinates and the streamwise and spanwise directions, respectively (cf.~\cref{fig:skewness}), necessitating both the domain sizes and resolution to be increased in the two wall-parallel ($x$--$z$) directions. This is in contrast to the usual practice of increasing the spanwise resolution whilst decreasing the spanwise domain width. Nevertheless, with the flow inclined at an angle to the grid, longer structures can be captured, as the flow traverses diagonally, thus mitigating the requirements for large domain sizes. Additionally, the difference between the two orientation strategies is shown to be small in \cref{sec:gridcv}. 
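The bookkeeping between the grid-aligned coordinates $(x,z)$ and the flow-aligned coordinates $(\check x,\check z)$ is a plane rotation by the flow angle $\theta$; a minimal sketch (the angle value is illustrative):

```python
import math

def to_flow_frame(u, w, theta):
    # rotate grid-aligned components (u, w) into flow-aligned ones:
    # u_check is streamwise, w_check is spanwise
    u_check = u * math.cos(theta) + w * math.sin(theta)
    w_check = -u * math.sin(theta) + w * math.cos(theta)
    return u_check, w_check

# a unit bulk velocity directed at angle theta to the grid maps onto
# (u_check, w_check) = (1, 0), i.e. purely streamwise in the flow frame
theta = math.radians(52.0)
u_check, w_check = to_flow_frame(math.cos(theta), math.sin(theta), theta)
assert abs(u_check - 1.0) < 1e-12
assert abs(w_check) < 1e-12
```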
\begin{figure}\centering \includegraphics[scale=.9]{domain_skewness} \caption{Sketch of the configuration of the skewed-flow DNS, with flow-oriented coordinates $\check x$ and $\check z$, relative to domain coordinates $x$ and $z$.} \label{fig:skewness} \end{figure} An approximately constant flow rate across the channel is maintained by iteratively updating the two orthogonal pressure gradients, $P_x$ and $P_z$, implemented as explicit body forces in the momentum equations, so that the bulk velocity is close to unity in the streamwise ($\check x$) direction and zero in the spanwise ($\check z$) direction. The data show that the target streamwise bulk velocity is maintained to within an error lower than $0.001\%$. Throughout the discussion to follow, only the incremental part of the pressure is reported, relative to a pressure-reference value located at one of the corners of the computational box. Quantities expressed in the frame of reference of the flow will be denoted with an overlaying inverted circumflex accent ($\check{\cdot}$), as in~\cref{fig:skewness}, although this notation will be omitted later when not needed. Unless stated otherwise, all physical interpretations are given in the frame of reference aligned with the flow. \subsection{Flow decomposition} All statistical quantities can be averaged in the homogeneous direction parallel to the wave crests and troughs. This groove-wise-averaging procedure is significantly eased by the choice made in~\cref{sec:skewness} of forcing the flow at an angle $\theta$ to the numerical grid. Phase-averaging is also performed when multiple waves are included within the domain of solution. Furthermore, in the case of a wavy channel with constant wall separation, both boundaries are statistically equivalent, which allows a doubling of the data included in the phase-averaging by shifting one of the walls by half a period and then taking advantage of the symmetry to average over both walls.
Thus, any time- and phase-averaged quantity $\overline{\boldsymbol{q}}$ only depends upon the phase location $x/\lambda$ and the wall-normal location $y$, reducing the dependence to $\overline{\boldsymbol{q}}({x}/{\lambda}, {y})$. Depending on the objective of the analysis, two types of statistical decomposition of the mean turbulence properties can be considered. The first decomposition is relevant to studying how the flow properties vary in phase: \begin{equation} \overline{\boldsymbol{q}} = \boldsymbol{Q} + \widetilde{\boldsymbol{q}} \label{eq:triple1} \end{equation} where $\overline{\boldsymbol{q}}$ is any time- and phase-averaged quantity, $\boldsymbol{Q}(y) = \int_{0}^{1}\overline{\boldsymbol{q}}(x/\lambda,y+y_w)\, \dd (x/\lambda)$, ${y_w=A_w \sin(2\pi\,x/\lambda)}$ is the wall-normal location of the wall, and $\boldsymbol{\widetilde{q}}$ is the phase-varying part of the mean field. The second approach lays emphasis on the action of the wavy boundaries relative to the plane-channel flow: \begin{equation} \overline{\boldsymbol{q}} = \boldsymbol{{Q}_0} + \widehat{\boldsymbol{q}} \label{eq:triple2} \end{equation} where $\boldsymbol{{Q}_0}$ is the baseline plane-channel-flow value, and $\widehat{\boldsymbol{q}}$ is the difference to the plane-channel-flow solution. These decompositions will be referred to as `phase-integrated' (${\boldsymbol{Q}}$), `phase-varying' ($\boldsymbol{\widetilde{q}}$), and `difference' ($\widehat{\boldsymbol{q}}$). \subsection{Calculation of the drag contributions}\label{sec:DRtech} A drag coefficient is defined for each contribution to the drag as \begin{equation} D_{\star} = \frac{\bar F_\star}{\frac12 \rho\, L_x\, L_z\, \| \boldsymbol{U_b}\| ^2}, \end{equation} where $\star$ identifies the contribution (e.g. `$f$' for friction), $\bar F_\star$ is the mean force exerted on the walls opposing the flow direction, and $\boldsymbol{U_b}$ the bulk velocity. 
As mentioned in \cref{sec:skewness}, in all the cases presented, the flow is driven at approximately constant flow rate, so that $\|\boldsymbol{U_b}\| \approx 1$, via the imposition of a spatially-constant (vectorial) pressure gradient in order to balance the total drag force. Given a unit bulk velocity, the correct Reynolds number is set by prescribing the appropriate viscosity. The resulting pressure force driving the flow is: \begin{equation*} \bar F_{tot} = -P_{\check{x}}\, L_x \, 2h \, L_z, \end{equation*} where $P_{\check{x}}$ is the projection of the pressure gradient onto the flow direction: ${P_{\check{x}} = P_x\, \cos \theta + P_z \, \sin \theta}$. For the wavy channel, the total drag force is composed of two contributions: friction and pressure drag. The friction and pressure forces were integrated on the wavy surface and projected onto the flow direction, yielding the drag coefficients $D_f$ and $D_p$, respectively. Since only pressure and friction forces act on the walls, the total drag is: \begin{equation} D_{tot} = D_p + D_f. \label{eq:bal} \end{equation} \section{Simulations} \subsection{Overview of simulations} Simulations were performed at $Re_\tau \approx 360$ for various configurations, grid resolutions, and domain sizes. This Reynolds number was chosen so that the cost of the simulation remained tractable, and that the ratio between the height of the wave and the channel height $A_w/h$ was kept relatively modest. The corresponding parameters are given in \cref{tab:drag}, along with the drag levels calculated, as explained in \cref{sec:DRtech}. The net drag variation is evaluated in two ways: from the imposed pressure gradient and from the sum of the surface-integrated pressure force and friction-drag force. As expressed by \cref{eq:bal}, the two should be identical. However, they differ very slightly in the simulations, and this is expressed by the column `imbal.' in \cref{tab:drag}. 
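The bookkeeping above can be sketched as follows; all numerical values are illustrative, and only the relations $P_{\check{x}} = P_x\cos\theta + P_z\sin\theta$, the force balance, the drag-coefficient normalisation, and the imbalance check are taken from the text:

```python
import math

# Illustrative check of the balance D_tot = D_p + D_f; the pressure gradients,
# geometry, and contribution split below are made-up values.
theta = math.radians(70.0)
P_x, P_z = -3.0e-3, -1.0e-3           # imposed pressure gradients (illustrative)
L_x = L_z = 15.0
h, rho, U_b = 1.0, 1.0, 1.0

# Projection of the pressure gradient onto the flow direction.
P_xc = P_x * math.cos(theta) + P_z * math.sin(theta)
F_tot = -P_xc * L_x * 2.0 * h * L_z   # driving force balancing the total drag

def drag_coeff(F):
    # Normalisation D = F / (0.5 * rho * L_x * L_z * |U_b|^2).
    return F / (0.5 * rho * L_x * L_z * U_b**2)

D_tot = drag_coeff(F_tot)             # reduces to -4*h*P_xc for these definitions
# Hypothetical surface-integrated contributions, off by 0.01% to mimic `imbal.'
D_p = 0.02 * D_tot
D_f = 0.9801 * D_tot
imbalance = (D_p + D_f - D_tot) / D_tot
```

With these definitions the pressure-based total and the surface-integrated sum agree up to the prescribed residual, mirroring the `imbal.' column.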
In all simulations, the imbalance of the forces is regarded as negligible. For comparison with previous DNS studies reporting this value, the force imbalance in \cite{Wang2006} was about 3--4\%. {\setstretch{1.0} \begin{table*} \caption{Configurations simulated, all at ${Re_\tau \approx 360}$. Each simulation is designated by a label consisting of a set of letters plus identifiers, meant to convey as much information as possible in a compact manner, and identifying each calculation uniquely. `G' stands for `Grid' and refers to the grid configurations detailed in \cref{tab:grids}, `W' stands for `Wavy', `P' for `Plane', and `A' for `Amplitude'. The figure following the letter `A' identifies a particular value of the wave slope $A_w/\lambda$. The suffix `f' indicates the presence of a flat upper wall instead of two in-phase wavy walls, and `bis' reflects the fact that the wavelength of `G2W1bis' is equal to that of `G2W1', but the flow angle $\theta$ is different. TDR, PD and FDR respectively stand for Total-Drag Reduction, Pressure Drag and Friction-Drag Reduction.} \begin{ruledtabular} \begin{tabular}{lrccccrrrrrrr} \multicolumn{1}{c}{\textbf{Simulation}} & \multicolumn{4}{c}{\textbf{Flow}} & \multicolumn{4}{c}{\textbf{Drag coefficients} } & \multicolumn{3}{c}{\textbf{Relative}}\\ \multicolumn{1}{c}{\textbf{label}} & \multicolumn{4}{c}{\textbf{configuration}} & \multicolumn{4}{c}{($\times 10^6$)} & \multicolumn{3}{c}{\textbf{drag variation}}\\ [5pt] \hline & \multicolumn{1}{c}{$A_w^+$} & \multicolumn{1}{c}{$\theta (^\circ)$} & \multicolumn{1}{c}{$\lambda^+$} & \multicolumn{1}{c}{$\lambda_x^+$} & \multicolumn{1}{c}{$D_{tot}$} & \multicolumn{1}{c}{$D_{p}$} & \multicolumn{1}{c}{$D_f$} & \multicolumn{1}{c}{imbal.} & \multicolumn{1}{c}{TDR} & \multicolumn{1}{c}{PD} & \multicolumn{1}{c}{FDR}\\ [3pt] \hline \multicolumn{12}{c}{Key simulations}\\ ~G6P1 & 0 & 70 & -- & -- & 6679 & 0 & 6677 & -0.03\% & & &\\ ~G6W1A1 & 11 & 70 & 918 & 2684 & 6659 & 30 & 6628 & -0.02\%
& ~0.3\% & 0.45\% & 0.74\% \\ ~G6W1A2 & 18 & 70 & 918 & 2684 & 6633 & 87 & 6544 & -0.02\% & ~0.7\% & 1.30\% & 1.99\% \\ ~G6W1A3 & 22 & 70 & 918 & 2684 & 6639 & 130 & 6507 & -0.03\% & ~0.6\% & 1.95\% & 2.55\% \\ ~G6W1A4 & 32 & 70 & 918 & 2684 & 6728 & 332 & 6394 & -0.03\% & -0.7\% & 4.97\% & 4.24\% \\ ~G6W2A1 & 7 & 70 & 612 & 1789 & 6671 & 32 & 6637 & -0.02\% & 0.1\% & 0.48\% & 0.60\% \\ ~G6W2A3 & 14 & 70 & 612 & 1789 & 6676 & 138 & 6539 & 0.00\% & 0.0\% & 2.07\% & 2.07\% \\ ~G6W2A4 & 22 & 70 & 612 & 1789 & 6780 & 329 & 6450 & -0.01\% & -1.5\% & 4.92\% & 3.40\% \\ [5pt] \multicolumn{12}{c}{Other simulations}\\ ~G3P1 & 0 & ~0 & -- & -- & 6642 & 0 & 6639 & -0.04\% & & & \\ ~G3P2 & 0 & 45 & -- & -- & 6690 & 0 & 6688 & -0.03\% & & & \\ ~G3W1 & 18 & 70 & 918 & 2684 & 6622 & 87 & 6533 & -0.04\% & 1.1\% & 1.29\% & 2.41\%\\ ~G4P1 & 0 & ~0 & -- & -- & 6640 & 0 & 6637 & -0.04\% & & & \\ ~G4P2 & 0 & 45 & -- & -- & 6694 & 0 & 6691 & -0.04\% & & & \\ ~G2P1 & 0 & 52 & -- & -- & 6805 & 0 & 6809 & -0.01\% & & & \\ ~G2P2 & 0 & 70 & -- & -- & 6755 & 0 & 6755 & 0.01\% & & & \\ ~G2W1 & 18 & 52 & 918 & 1491 & 7021 & 544 & 6479 & 0.02\% & -3.0\% & 7.99\% & 4.85\% \\ ~G2W1f & 18 & 52 & 918 & 1491 & 6911 & 271 & 6641 & 0.01\% & -1.5\% & 3.98\% & 2.47\% \\ ~G2W1bis & 18 & 70 & 918 & 2684 & 6648 & 87 & 6560 & 0.00\% & 1.6\% & 1.29\% & 2.89\% \\ ~G2W2 & 14 & 70 & 612 & 1789 & 6640 & 138 & 6503 & 0.01\% & 1.7\% & 2.04\% & 3.73\% \\ \end{tabular}% \end{ruledtabular} \label{tab:simulations}% \label{tab:drag} \end{table*}% } \subsection{Overall physical characteristics} As the flow travels past the skewed wavy wall, it accelerates on the windward side and then decelerates on the leeward side, a behaviour linked to the pressure being a minimum above the crest and a maximum in the trough region, as shown in \cref{fig:velp}. Because the wave is at an angle to the main flow direction, this pressure variation in phase gives rise to a pressure gradient both in the streamwise and spanwise direction. 
The latter gradient generates a spanwise motion, shown in \cref{fig:spawisemo}, which is asymmetric in phase and penetrates quite far into the boundary layer above $y^+\approx 100$. However, as previously mentioned, it is not the velocity but the shear strain which is important with respect to the emulation of the Stokes layer, and which dictates the orientation of the near-wall streaks. In \cite{Touber2012}, this orientation was observed to vary in time, with strong reduction in turbulence during the reorientation phase. For the wavy wall, the phase-modulation of the shear strain occurs in space, and also results in a reorientation following the shear-strain field, as well as in a weakening of the streaks, as shown in \cref{fig:snap} at a constant distance from the wall around $y^+\approx 10$. However, the effect is far less pronounced than observed in \cite{Touber2012} because the forcing amplitude is much smaller in the present case. \begin{figure*} \includegraphics[scale=1.05]{velp} \caption{Contours of the phase-varying velocity and pressure fields for G6W1A2 ($A_w^+=18$, $\theta=70^\circ$, $\lambda^+=918$).} \label{fig:velp} \end{figure*} \begin{figure*}\centering \includegraphics[scale=1.05]{spanwise_WW_fix} \caption{Mean spanwise motion along the wavy wall for G6W1A2 ($A_w^+=18$, $\theta=70^\circ$, $\lambda^+=918$). Black lines: spanwise-velocity profiles at evenly-spaced phase locations, background: contours of the spanwise velocity.} \label{fig:spawisemo} \end{figure*} \begin{figure*} \includegraphics[width=.87\textwidth]{snap.png} \caption{Turbulent fluctuations of the streamwise velocity $u'^+$ for G6W1A4 (${A_w^+=32}$, ${\theta=70^\circ}$, ${\lambda^+=918}$), normalised by the local mean friction velocity at the particular phase location $\sqrt{\nu (\partial \overline{u}/\partial y)|_w}$ in a wavy horizontal slice located at constant $y/h$, around $y^+\approx 10$. 
Straight white lines: location of the crests, dashed straight white lines: location of the troughs, yellow arrows: mean shear-strain field, yellow circle: region of strong streak weakening. The periodic boundary conditions have been exploited to increase the visualisation area.} \label{fig:snap} \end{figure*} \subsection{Influence of the upper wall}\label{sec:flat} Previous studies on wavy channels often had only a single wavy wall, or featured varicose wall undulations \citep{Cherukat1998,Henn1999,Yoon2009,Nakanishi2012,Mamori2012,Luchini2016}. A distinct feature of the present configuration is that the passage height is constant at any phase location. Thus, it is of interest to compare wall configurations in order to verify that the distance between the two walls is large enough for the upper wavy wall not to influence significantly the flow above the lower one. This is addressed by comparing simulations for channels with one or both walls wavy. The relevant entries in \cref{tab:drag,tab:drag_flat} are G2W1 and G2W1f, and the corresponding plane-channel baseline G2P1. The results in~\cref{tab:drag_flat} show that the friction on the lower (wavy) boundary of G2W1f has a lower value than for G2W1, whilst the friction on the upper (flat) surface of G2W1f is slightly increased with respect to the baseline drag. The latter is a consequence of the control strategy of keeping the flow rate constant. The drag coefficients reported in~\cref{tab:drag_flat} show that the drag change relative to the baseline level (in this case, a drag increase) is about half that of the case with both walls wavy: the drag increase per wavy wall in G2W1 is 1.58\%, compared to 1.54\% for the wavy-flat case G2W1f. This supports the assertion that the two configurations are close to each other in terms of the processes effective at each wavy wall separately.
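The per-wall figures quoted above can be reproduced from the entries of \cref{tab:drag_flat}, assuming the increase is referred to the summed baseline drag of G2P1:

```python
# Cross-check of the per-wavy-wall drag increase from the table entries
# (coefficients in units of 1e-6; assumes the increase is referred to the
# total baseline drag of G2P1).
D_P1  = 3405 + 3402               # plane baseline: lower + upper friction
D_W1  = 3242 + 3237 + 272 + 271   # wavy-wavy: friction + pressure on both walls
D_W1f = 3220 + 3421 + 271         # wavy-flat: wavy lower (Df + Dp) + flat upper Df

per_wall_W1  = (D_W1 - D_P1) / D_P1 / 2   # two wavy walls share the increase
per_wall_W1f = (D_W1f - D_P1) / D_P1      # single wavy wall
print(round(100 * per_wall_W1, 2), round(100 * per_wall_W1f, 2))  # -> 1.58 1.54
```

Both percentages agree with those quoted in the text.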
\begin{table} \caption{Drag coefficients at both upper and lower walls, comparing a wavy-wavy channel (G2W1) to its wavy-flat counterpart (G2W1f).} \label{tab:drag_flat} \begin{ruledtabular} \begin{tabular}{lccccc} \multicolumn{1}{c}{Simulation} & \multicolumn{2}{c}{$D_f$ ($\times 10^6$)} & \multicolumn{2}{c}{$D_p$ ($\times 10^6$)} & DR\\ \multicolumn{1}{c}{label} & \multicolumn{1}{c}{lower} & \multicolumn{1}{c}{upper} & \multicolumn{1}{c}{lower} & \multicolumn{1}{c}{upper} & (total)\\ \hline ~G2P1 & 3405 & 3402 & {--} & -- & -- \\ ~G2W1 & 3242 & 3237 & 272 & {271} & -3.16\%\\ ~G2W1f & 3220 & 3421 & 271 & -- & -1.54\%\\ \end{tabular}% \end{ruledtabular} \end{table} \subsection{Net drag reduction and grid convergence}\label{sec:gridcv} In \cref{tab:drag}, TDR represents the drag reduction relative to the plane channel. Physically, the plane-channel drag is unique regardless of the angle between the flow and the mesh. However, this is not so computationally, owing to variations in solution domain and numerical errors, including the finite time of averaging, grid resolution, and the finite size and orientation of the periodic domain which does not allow very long structures to be captured. As the angle between the main flow direction and the grid varies, the length of the longest structure allowed to exist within the periodic boundaries changes. At the same time, the grid resolution is also altered in the flow-oriented directions. This results, in a plane channel, in a slight dependence of the drag on the flow direction. By way of example of the influence of the domain size on the drag, for a flow aligned with the mesh, \citet{Ricco2004} observed, at $Re_\tau = 200$, that changing the streamwise extent from $4\pi h$ to $21h$ whilst keeping roughly the same spatial resolution resulted in a drag increase of 0.6\%. This difference in drag is already large enough to be of the same order of magnitude as the variations of the total drag level in the cases considered herein.
Therefore, a careful assessment of key computational parameters is required, as detailed below. First, the influence of the domain size is considered for plane-channel flow. This was investigated by means of highly-resolved simulations with constant resolution but varying domain sizes. For this purpose, grids G3 and G4 are chosen to have the same spatial resolution $\Delta x^+ = \Delta z^+ = 2.5$ and $\Delta y^+$ ranging from 0.4 at the wall up to 8.5 in the centre, but the domain size of G4 is one half of that of G3 in the $x$-$z$ directions (cf.~\cref{tab:grids}). The largest domain considered is $L_x = L_z = 15h$, which represents about 5400 wall units, i.e.~longer than the commonly chosen streamwise extent of $8\pi h$, which corresponds to $O(4500)$ wall units at ${Re_\tau = O(180)}$. \Cref{tab:drag} shows that the relative difference in drag between the larger domain (grid G3) and the smaller (grid G4) is lower than 0.05\%, both at an angle of $0^\circ$ (G3P1 and G4P1) and $45^\circ$ (G3P2 and G4P2). Next, the impact of resolution is quantified, still for the case of a plane channel. To this end, simulations were run with the domain size kept constant, as in G3, but with a mesh coarser by a factor of 2 in the wall-parallel and wall-normal directions independently. Then, the drag levels for $\theta = 0^\circ$ and $45^\circ$ configurations were compared. As expected for a consistent discretisation, the difference between the two physically-equivalent configurations ($\theta = 0^\circ, 45^\circ$) decreases as the resolution is increased. However, this difference remains significant at about 0.7\% for grid G3, despite the mesh being fine with respect to usual DNS standards. An important observation is that increasing the number of cells in the wall-normal direction has little impact on the total drag, whereas increasing the resolution in the wall-parallel directions significantly reduces the difference in friction between $\theta= 0^\circ$ and $\theta = 45^\circ$.
Drag differences with respect to the angle of the flow were observed to vary monotonically, the minimum drag coefficient being found at $\theta = 0^\circ$, and the maximum at $\theta = 45^\circ$. A possible approach to reducing the error in the predicted drag-reduction level is to evaluate it by reference to a plane-channel flow simulation at exactly the same spatial resolution, domain size, and flow angle. This is the approach preferred here, in light of the work of \citet{Gatti2013,Gatti2016}, who observed some cancellation of the systematic bias associated with the domain size. Thus, for example, the baseline drag for G2W1 is G2P1 (both at $\theta = 52^\circ$), whereas for G2W1bis ($\theta = 70^\circ$), the baseline is taken to be G2P2 (also at $\theta = 70^\circ$). However, even this elaborate approach may not suffice, in the face of the small drag-reduction margin, to remove uncertainties associated with the numerical aspects that contribute to the total error. A factor that may also be influential is the distortion of the cells fitted to the wavy boundary. As shown in~\cref{tab:draggridconv}, changes in the total drag from grid G2 to the finest grid G6 are not independent of the wave geometry. The variation of the total-drag reduction with grid refinement was studied in greater detail for the most promising case, namely ${(A_w^+, \theta, \lambda^+) = (18,70^\circ,918)}$. Since the drag was found to be mainly sensitive to the wall-parallel resolutions, only $\Delta x$ and $\Delta z$ are used as indicators of the grid refinement. The main outcome of the study, shown in~\cref{fig:grid_cv}, is that the total-drag reduction predicted decreases as the mesh is refined, with a quadratic dependence on the wall-parallel mesh spacing. The drag-reduction value appears to tend, asymptotically, to a positive value of 0.5\%. 
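As a rough cross-check of this quadratic convergence, a least-squares fit of ${\rm TDR} = a + b\,\Delta x^2$ to the three tabulated G*W1 points alone can be performed (the 0.5\% asymptote quoted above derives from the fuller analysis behind \cref{fig:grid_cv}, including error estimates):

```python
# Least-squares fit TDR = a + b*dx^2 to the three G*W1 grid-study points
# (rough cross-check only; the values are the tabulated wall-parallel
# spacings and total-drag reductions, in wall units and per cent).
dx = [4.8, 2.5, 1.7]
tdr = [1.7, 1.1, 0.7]

x = [d * d for d in dx]               # quadratic dependence on mesh spacing
n = len(x)
mx = sum(x) / n
my = sum(tdr) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, tdr)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx                       # extrapolated TDR as dx -> 0
```

The intercept `a` comes out positive and below the finest-grid value, consistent with the positive asymptotic drag reduction inferred from the figure.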
Additionally, the ratios of the friction- and pressure-drag coefficients ($D_f$ and $D_p$, respectively) to the total $D_{tot}$ remain essentially constant from grid G2 to grid G6: thus, the value of $D_p$ is 1.314\% of $D_{tot}$ for the coarsest mesh G2W1A2, compared with 1.315\% for the finest mesh G6W1A2. Therefore, whilst the share between the pressure and friction drag is grid-independent, the absolute value of the total drag is a very sensitive quantity that requires substantial computational efforts to be accurately predicted. \begin{table} \caption{Grid refinement study for wavy calculations, all at $\theta = 70^\circ$. The coarsest grid is G2, and the finest is G6. Abbreviations are consistent with those of \cref{tab:drag}.} \label{tab:draggridconv}% \begin{ruledtabular} \begin{tabular}{lccccc} \multicolumn{1}{c}{Simulation label} & $\Delta x^+$ & $\Delta z^+$ & FDR & PD & TDR \\ \hline ~G2W1bis & 4.8 & 4.8 & 3.0\% & 1.3\% & 1.7\% \\ ~G3W1 & 2.5 & 2.5 & 2.4\% & 1.3\% & 1.1\% \\ ~G6W1A2 & 1.7 & 1.7 & 2.0\% & 1.3\% & 0.7\% \\ \hline ~G2W2 & 4.8 & 4.8 & 3.8\% & 2.0\% & 1.8\% \\ ~G6W2A3 & 1.7 & 1.7 & 2.1\% & 2.1\% & 0.0\% \\ \end{tabular}% \end{ruledtabular} \end{table}% One additional simulation, physically equivalent to G6W1A2, was undertaken, although at a slightly coarser resolution, in a computational box aligned with the main flow direction, with the wavy wall at an angle to the grid. Two wavelengths were included in the streamwise and spanwise directions. The resulting point, shown in~\cref{fig:grid_cv}, is in line with the simulations of the skewed configuration. \begin{figure*}\centering \includegraphics[scale=1.05]{grid_refinement} \caption{Total-drag reduction for $A_w^+=18$, $\theta = 70^\circ$, and $\lambda^+=918$, for grids G2, G3 and G6. Inverted triangle $\triangledown$: supplementary run with the wave at an angle and the flow aligned with the grid.
The baseline drag level is taken at the same angle, except for G3 where the baseline drag is found by interpolation between $\theta=0^\circ$ and $\theta=90^\circ$ cases. Error bars: 90\% confidence interval of the time-averaging error calculated using the method of batch means and batch correlations by \citet{Luchini2017}, and assuming that the baseline and controlled drag levels are independent variables.} \label{fig:grid_cv} \end{figure*} \subsection{Friction-drag reduction and pressure-drag increase}\label{sec:FDR} In \cite{Viotti2009}, the reduction in skin friction was observed to increase linearly with forcing amplitudes up to $A^+_\text{SSL} \approx 5$, and then to increase at a slower rate. Such an observation is not consistent with the expected symmetry of the problem around $A^+_\text{SSL}=0$, which would imply a zero value of the first derivative of the drag reduction as a function of the amplitude. For the wavy wall, the dependence of the friction-drag reduction (FDR) on the wave slope, shown in~\cref{fig:DR}\,(\textit{a}), is compatible with this symmetry condition, the quadratic regime being confined to wave slopes below $2A_w/\lambda \approx 0.04$: for very small actuation amplitudes, the FDR does seem to exhibit a quadratic behaviour, which then becomes close to linear. The amplitude of the spanwise shear strain at $2 A_w/\lambda \approx 0.04$ corresponds to that of an SSL with a forcing amplitude of about $A^+_\text{SSL} \approx 1.1$, thereby corroborating the observations of \cite{Viotti2009}, since the smallest amplitude they considered was $A^+_\text{SSL}=1$, which occurs at the intersection between the quadratic and linear behaviour for the present key simulations. The decrease in $\lambda_x$ from G6W1* (${\lambda^+ = 918}$, ${\lambda_x^+ \approx 2700}$) to G6W2* (${\lambda^+=612}$, ${\lambda_x^+ \approx 1800}$) results in a reduced effectiveness of the wavy wall at the same wave slope, i.e.
the same equivalent forcing amplitude $A_\text{SSL}^+$. This trend contrasts with the drag-reduction trend of the SSL, which features an optimum around ${\lambda_x^+ \approx 1250}$. Therefore, there exists some unfavourable mechanism limiting the drag reduction achievable by the wavy wall. Such considerations will be discussed in \cref{sec:ssldiff}. \begin{figure*} \begin{center} {\hspace{0pt}\includegraphics[scale=1.05]{FDR}}\quad% {\hspace{10pt}\includegraphics[scale=1.05]{PD}}% \end{center} \caption{(\textit{a}) Levels of friction-drag reduction for simulations G6W*A*. $\blacklozenge$ correspond to G6W1A* $({\lambda^+ = 918})$, and $\blacksquare$ to G6W2A* $({\lambda^+ = 612})$. Continuous lines represent quadratic and linear behaviours. The quadratic curve is interpolated from the value at ${2A_w/\lambda=0.024}$ with zero values for the FDR and its derivative at ${A_w/\lambda = 0}$. The linear curve is interpolated from the two largest wave slopes. (\textit{b}) Levels of pressure drag for simulations G6W*A*. Symbols: same as~(\textit{a}), line: quadratic behaviour.} \label{fig:DR} \end{figure*} \section{Emulation of a spatial Stokes layer} \subsection{Shear strain}\label{sec:shearcmp} The present approach of using a skewed wavy wall is based on the assumption that similar longitudinal patterns of the transverse shear strain, whether created by the SSL or the wavy wall, will lead to the same effects on turbulence and hence lead to some turbulent-drag reduction. In the case of a temporal Stokes layer, \citet{Touber2012} showed that the wall oscillations led to a bimodal partial decay, reformation and reorientation of the streaks, a behaviour dictated by the unsteady Stokes strain in the buffer layer. If this mechanism is indeed intimately linked to the friction-drag reduction, the relevant forcing quantity is the resulting shear strain, rather than the forcing velocity itself. 
It is this key element that allows a passive surface, with no slip at the solid wall, to emulate the actuation by in-plane wall oscillations, through the action of a transverse pressure gradient that generates an equivalent shear-strain field. In contrast to the Stokes layer, the wavy wall also induces wall-normal forcing that contributes to the shear strain via ${\partial \overline{v}/\partial z}$, but this additional effect was observed to be small compared to ${\partial \overline{w}/\partial y}$, especially in the near-wall region where the shear is maximum. It follows that there is no significant parasitic effect of the wall-normal velocity on the spanwise forcing, justifying a direct comparison of ${\partial \overline{w}/\partial y}$ between wavy-wall and Stokes-layer configurations. A slight difference between the SSL simulation presented in this section and those of \cite{Viotti2009} has been introduced deliberately in order to maximise the correspondence between the SSL and the wavy-wavy channel configuration. This difference is that the wall forcing on the upper wall is shifted by half a period relative to the lower wall. However, this change does not result in noticeable differences, since the thickness of the Stokes layer is much smaller than the channel half-height. The reference SSL considered is for a forcing amplitude of $A^+_\text{SSL}=2$ (based on unactuated friction velocity) and a wavelength close to the optimum at this forcing amplitude, $\lambda_x^+ \approx 1250$, subject to the assumption that the Reynolds-number change from ${Re_\tau = 200}$ in~\cite{Viotti2009} to the present value of ${Re_\tau \approx 360}$ does not have a significant impact on the optimal wavelength. The simulation was run on a domain ${L_x^+\approx 2500}$, ${L_z^+\approx 1100}$, with a grid resolution $\Delta x^+ = 9.8$, $\Delta z^+ = 5.8$, and $0.7 < \Delta y^+ < 7.3$.
Such a resolution may not be sufficient for an accurate comparison of the drag levels, but is acceptable for the comparison of the shear-strain profiles. Results for some wavy channels are shown in \cref{fig:cmp_shear}. The strain profiles demonstrate that the wavy wall emulates reasonably well the shear layer of the SSL. This observation leads to the expectation that the reduction in turbulent skin friction achieved by the wavy wall would be similar, and arises from the same physical mechanism as in the SSL. In addition, the amplitude of the phase-wise variations in the shear strain -- and hence the corresponding SSL forcing amplitude $A_\text{SSL}^+$ -- appears to be mostly dictated by the streamwise wave slope $A_w/\lambda_x$, as suggested by rescaling the amplitude of the transverse shear strain by this ratio, so as to compensate for the different forcing amplitude. This implies that, for $A_w$ and $\lambda$ kept constant (same wave shape), increasing $\theta$ results in a decrease in the forcing amplitude, at least for angles in the range $\theta \in [50^\circ, 70^\circ]$. \begin{figure*}\centering \includegraphics[scale=1.05]{cmp_shear_fix} \caption{Comparison of $\partial \overline{w}^+/\partial y^+$ scaled by the streamwise-projected wave slope $A_w/\lambda_x$ for various flow configurations. The SSL flow is for a wavelength $\lambda_x^+\approx 1250$ and forcing amplitude $A_\text{SSL}^+ = 2$ based on the unactuated friction velocity. The value of $\partial w^+/\partial y^+$ is based on the actual friction velocity, and the scaling factor $A_w/\lambda_x$ based on the parameters of G2W1.
In all cases, the profiles were shifted in the wall-normal direction in order to match the wave height of G2W1.} \label{fig:cmp_shear} \end{figure*} \subsection{Streamwise velocity and Reynolds stresses}\label{sec:velRS} As is observed with Stokes layers and, more generally, with most control strategies yielding turbulent-drag reduction, an upward shift in the log law appears relative to the uncontrolled flow when actual scaling is used (i.e. with the modified friction velocity). This shift is shown in~\cref{fig:logRS}\,(\textit{a}) for some key simulations. \begin{figure*} \includegraphics[scale=1.05]{loglaw_and_RS} \caption{Comparison of (\textit{a}) the mean streamwise velocity $U^+ = \langle \overline{u}\rangle_{x,z}/u_\tau$ and (\textit{b}) Reynolds stresses $\langle \overline{u'_iu'_j}\rangle_{x,z}/u_\tau^2$ for wavy walls with wave slopes of $2 A_w/\lambda \in \lbrace 0.05,0.07\rbrace$ (corresponding to labels *A3 and *A4) and wavelengths of $\lambda^+ \in \lbrace 612,918\rbrace$ (corresponding to labels *W2* and *W1*), and the baseline plane channel (G6P1) (scaled using the actual friction velocity).} \label{fig:logRS} \end{figure*} The corresponding flow configurations are characterised by two height-to-wavelength ratios and two wavelengths. Although there is a noticeable difference between the two profiles for the two wavelengths, it is observed that similar wave slopes yield approximately similar shifts, the configuration $\lambda^+=918$ featuring a slightly greater shift that implies a higher level of skin-friction reduction (cf.~\cref{sec:FDR}). As the wave slope is linked to the forcing amplitude, the higher this ratio is, the higher the friction-drag reduction is. This close resemblance of the log shift at a given wave slope, but different wavelength, indicates a limited sensitivity of the friction-drag reduction for the range of wavelengths tested, especially at such a small forcing amplitude. 
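The link between an upward log-law shift and friction reduction can be illustrated with a textbook first-order estimate (the constants $\kappa$ and $B$ below are assumed standard values, not fitted to the present data): at matched friction velocity a shift $\Delta B$ raises the mean velocity, so at matched flow rate the friction must drop, roughly as ${\rm DR} \approx 2\Delta B/U_b^+$.

```python
import math

# Textbook log-law illustration with assumed constants (kappa = 0.39, B = 4.3);
# a sketch under stated assumptions, not a result of the present simulations.
kappa, B = 0.39, 4.3
Re_tau = 360.0

def U_plus(y_plus, dB=0.0):
    # Log-law profile with an upward shift dB.
    return math.log(y_plus) / kappa + B + dB

dB = 0.4                                  # illustrative shift
U_centre_base = U_plus(Re_tau)
U_centre_ctrl = U_plus(Re_tau, dB)

# First-order estimate: C_f = 2/(U_b+)^2, so dC_f/C_f ~ -2*dB/U_b+;
# the centreline value is used here as a crude proxy for U_b+.
dr_estimate = 2.0 * dB / U_centre_base
```

With these assumed constants a shift of a few tenths of a wall unit maps to a friction reduction of a few per cent, the right order of magnitude for the present configurations.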
As far as the Reynolds stresses are concerned, there is a substantial decline in the peak of the streamwise normal stresses --~evidence of the weakening of the streaks. The behaviour of the Reynolds stresses is also similar for a given height-to-wavelength ratio, apart from a detrimental increase in the streamwise normal stresses for the shorter wavelength ${\lambda^+ = 612}$, starting within the buffer layer around ${y^+ \approx 20}$ and persisting up to ${y^+\approx 80}$. However, unlike wall-actuated Stokes layers \citep{Agostini2014,Touber2012,Viotti2009}, which entail a larger forcing amplitude, the decline in the streamwise stress only persists in the present configuration up to $y^+ \approx 35$, beyond which it exceeds the baseline value. Similarly, the shear stress is depressed up to about the same wall-normal location, beyond which it also exceeds the baseline level. The difference between the streamwise Reynolds-stress levels for the wavy walls and the baseline case is shown in~\cref{fig:uup} for G6W1A2. \begin{figure*}\centering \includegraphics[scale=1.05]{perturb_uu} \caption{(Colour online) Change in the streamwise Reynolds-stress component with respect to the baseline. Contours of $\widehat{u'^2}$ for G6W1A2 (${A_w^+=18}$, ${\theta=70^\circ}$, ${\lambda^+=918}$).} \label{fig:uup} \end{figure*} This brings to light two different regions showing distinct physical features. One region is close to the wall, where a material weakening of the streaks takes place, and another is further away above the trough, featuring enhanced streamwise turbulence intensity. The latter increase is stronger than the reduction above the crest, thus leading to a net increase in the mean streamwise turbulence intensity above ${y^+\approx 35}$, relative to the baseline (cf.~\cref{fig:logRS}\textit{b}).
\subsection{Detrimental effects of the wavy wall}\label{sec:ssldiff} Despite the similarity between the shear-strain phase variations of wavy walls and Stokes layers, highlighted in~\cref{sec:shearcmp}, the effectiveness of the wavy wall is found to be lower. In order to understand the lower performance, two main mechanisms are considered as possible causes of the degradation. \begin{figure}\centering \includegraphics[width=\linewidth]{friction} \caption{Phase-wise variation of the turbulent skin friction. Thick dash-dotted line: plane channel, dashed line: SSL ($\lambda_x^+\approx 1250$, $A_\text{SSL}^+=2$), continuous lines: G6W1A*, i.e. for $\lambda^+ =918$, $\theta=70^\circ$, and $A_w^+\in\lbrace 11, 18, 22, 32 \rbrace$. The grey area shows the corresponding phase location on the wavy wall.} \label{fig:tauw} \end{figure} First, beyond the observations made in~\cref{sec:velRS}, an important difference is that the friction is increased on the windward side of the wave, as shown in \cref{fig:tauw}. The overall skin-friction reduction arises as a balance between the depressed friction on the leeward side of the wave and the enhanced friction on the windward side, whereas in the SSL case, the friction is decreased at all phases. This variation is associated with the magnitude of the phase-varying streamwise velocity $\widetilde{u}$, which is greater than $\widetilde{w}$, whereas $\widetilde{u}$ is almost negligible in the SSL (in optimum actuation conditions). This phase variation of the mean longitudinal velocity was already identified in \cite{Chernyshenko2013} as the main source of degradation of the performance of the wavy wall relative to that of the SSL. Second, an additional mechanism, specific to the wavy wall, was revealed by the numerical calculations.
As shown in~\cref{fig:proddissip}(\textit{a}), there exists a zone of intense production of turbulent kinetic energy ${\Pi_k = -\,\overline{u'_i u'_k} \,\partial \overline{u}_i/\partial x_k }$ above the leeward side of the wave, reflecting a deeper penetration of the disturbance arising from the wavy wall into the boundary layer. As shown in \cref{fig:proddissip}(\textit{b}), the increase in production relative to the baseline is quickly followed, in phase, by an increase in energy dissipation ${\epsilon_k = -\nu\,\overline{\partial u'_i/\partial x_k\, \partial u'_i/\partial x_k}}$, at about the same wall-normal location. This phenomenon strengthens as the wave amplitude increases, and is stronger for G6W2* than for G6W1* at similar wave slopes. \begin{figure*} \begin{flushleft} \vspace{10pt}\hspace{2.5cm}(\textit{a})\\ \vspace{-25pt} \end{flushleft} {\includegraphics[scale=1.05]{perturb_produc}}\\ \begin{flushleft} \vspace{10pt}\hspace{2.5cm}(\textit{b})\\ \vspace{-25pt} \end{flushleft} {\includegraphics[scale=1.05]{perturb_dissip}}% \caption{Change in the production $\widehat{\Pi}_k$ and dissipation $-\widehat{\epsilon}_k$ from the baseline for G6W1A2 (${A_w^+=18}$, ${\theta=70^\circ}$, ${\lambda^+=918}$). (\textit{a})~production difference (\textit{b})~dissipation difference, $\bigstar$ peak in production difference, $\blacklozenge$ peak in dissipation difference.} \label{fig:proddissip} \end{figure*} \subsection{Reynolds-number effect} An interesting question is whether the flow properties remain similar when the shape of the wall is kept constant in viscous units as the Reynolds number is increased. This has been investigated by reference to the flow configurations listed in~\cref{tab:Re}. 
\begin{table} \centering \def~{\hphantom{0}} \begin{minipage}{0.5\textwidth} \begin{ruledtabular} \begin{tabular}{lcccc} Simulation label & $Re_\tau$ & $A_w^+$ & $\theta$ & $\lambda^+$ \\ \hline ~G1W1bis & 180 & 20 & 70$^\circ$ & 918 \\ ~G2W1bis & 360 & 18 & 70$^\circ$ & 918 \\ ~G5W1 & 1000 & 20 & 70$^\circ$ & 900 \\ \end{tabular}% \end{ruledtabular} \end{minipage} \caption{Flow configurations for friction Reynolds numbers ranging from 180 to 1000.} \label{tab:Re}% \end{table}% Despite the wavy geometry in wall units not being exactly the same across the three flows in \cref{tab:Re}, and the mesh being somewhat coarser for the $Re_\tau = 1000$ case, the shear-strain profiles, scaled in wall units, are almost identical as demonstrated in \cref{fig:reeffect}. \begin{figure*}\centering \includegraphics[scale=1.05]{Re_influence2} \caption{Comparison of the shear strain $\partial \overline w^+ /\partial y^+$ for the same configuration at various Reynolds numbers. $A_w^+ \approx 20$, $\lambda^+\approx 900$, $\theta = 70^\circ$ (cf.~\cref{tab:Re}).} \label{fig:reeffect} \end{figure*} At $Re_\tau = 180$, a wave height of $A_w^+ = 20$ represents a ratio $A_w/h$ of about 11\%, which is significant, while it decreases to 2\% for $Re_\tau = 1000$. Consequently, the wave height $A_w$ is small enough relative to the channel height, so that, even at the lowest Reynolds number where the ratio $A_w/h$ is maximum, the distance between the walls remains sufficient to avoid an interference between the two solid boundaries, adding support to the findings reported in~\cref{sec:flat}. Although net-drag-reduction levels of about 1--2\% were observed for the three Reynolds numbers tested, the value is not quantifiable with any degree of precision, because it is extremely sensitive to various numerical issues, as demonstrated in \cref{sec:gridcv} through a grid-convergence study at ${Re_\tau = 360}$. 
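The percentages quoted above follow directly from viscous scaling: with $A_w^+ = A_w u_\tau/\nu$ and $Re_\tau = h u_\tau/\nu$, the height-to-half-height ratio is simply $A_w/h = A_w^+/Re_\tau$. A minimal, purely illustrative sketch reproducing the two quoted ratios:

```python
def height_ratio(A_w_plus, Re_tau):
    """Wave-height-to-channel-half-height ratio, A_w/h = A_w^+ / Re_tau."""
    return A_w_plus / Re_tau

# Configurations from the table above
ratio_low_Re = height_ratio(20, 180)    # about 0.11 (11%) at Re_tau = 180
ratio_high_Re = height_ratio(20, 1000)  # 0.02 (2%) at Re_tau = 1000
```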
\section{Conclusions} In the present study, skewed wavy-wall channels have been investigated by means of direct numerical simulation as a potential passive open-loop drag-reduction device. The spanwise-shear profiles generated by the wavy wall were shown to resemble closely those of the well-established method of drag reduction by in-plane wall motions, and to depend only weakly on the Reynolds number when expressed in wall units. Various wavy-wall geometries for different combinations of flow angle $\theta$, wavelength $\lambda$, and height $A_w$ were explored with the aim of seeking a configuration that minimises the total drag relative to a plane wall. Unlike for the Stokes layer, no actuation power is required, but the skin-friction reduction is offset by the pressure drag arising from the wavy geometry. This drag increases quadratically with wave height, rapidly exceeding the friction-drag reduction beyond a modest wave height. Consequently, the cumulative effect on the drag is small. The corresponding accuracy requirements of the simulations, in terms of grid density and integration time, are therefore far beyond those generally adopted in DNS of channel flows, making the cost of optimising the wavy-wall parameters prohibitive. Even if relative changes in drag are quantified by reference to drag levels of a baseline plane channel, simulated with similar grid spacing, domain size, and flow-to-mesh angle, the net drag-reduction level is subject to a non-negligible error. Despite the above qualifications, a net drag-reduction value of about 0.7\% (made up of a 2\% friction reduction and a pressure-drag penalty of 1.3\%) was estimated for the configuration ${A_w^+ \approx 20}$, ${\theta = 70^\circ}$, ${\lambda^+ \approx 920}$, at $Re_\tau \approx 360$, at the finest grid resolution, with indications that this performance would drop to 0.5\% for an asymptotically fine mesh.
The generation of a significant phase variation of the mean longitudinal velocity was proposed in earlier studies as a mechanism accounting for the degradation of the performance of the wavy wall relative to the steady Stokes layer. This explanation is supplemented here by the identification of a new mechanism, namely the intense localised production of turbulence kinetic energy above the leeward side of the wave. \begin{acknowledgements} This study was funded by Innovate UK (Technology Strategy Board), as part of the ALFET project, project reference 113022. The authors are grateful to the UK Turbulence Consortium (UKTC) for providing computational resources on the national supercomputing facility ARCHER under the EPSRC grant EP/L000261/1. Access to Imperial College High Performance Computing Service, doi: 10.14469/hpc/2232 is also acknowledged. \end{acknowledgements}
\section{Introduction} Although we are still trying to understand how confinement arises in QCD, we know that under normal conditions the running coupling is large at low energies, causing quarks and gluons to be confined. However, when exposed to extreme conditions such as very high temperatures or densities, quarks are forced to stay at very short distances from one another and there is a transition to a deconfined, quark-gluon-plasma phase. This transition is present both at high temperature and at high density, implying a phase diagram where the hadronic phase exists only near the origin of the plane defined by the temperature and chemical-potential axes. Of course, one wants to know the location of the transition in order to study properties of the new phase of matter, but it is also interesting to determine the nature of this transition. In particular, it is important to establish if the transition is a strong one, of first order, involving a discontinuity in the order parameter, or if it is such that the two phases are connected smoothly. This may have consequences for the understanding of the cosmological QCD phase transition, which occurred a few microseconds after the big bang and formed the hadrons we observe today. In this case the transition lies closer to the temperature axis and its nature is of direct importance to determine the types of cosmological relics that can be associated with it. In particular, a first-order transition would very likely be associated with the formation of cold dark matter clumps \cite{Schwarz:2003du,Hindmarsh:2005ix}. The nature of the transition must also be taken into account at relativistic heavy-ion collision experiments. An additional requirement in this case is the description of dynamic effects \cite{Hama:2004rr}. The task of studying the QCD phase transition theoretically must be carried out by nonperturbative methods and a natural choice is to consider the lattice regularization as a formulation of the theory.
In fact, lattice-QCD simulations allow a nonperturbative description of the phase transition in hadronic matter at high temperatures and there has been some recent progress in the description of the transition also in the case of finite density \cite{Philipsen:2005mj}. In the case of the finite-temperature transition, there is a qualitative difference when dealing with the full-QCD case (i.e.\ considering dynamic quarks) or with the so-called quenched case, in which the gluonic effects are taken into account but sea quarks are taken to be infinitely massive. For the quenched case one studies the deconfining transition itself, by means of the order parameter given by the Polyakov loop. The transition in this case is of first order. In the full-QCD case there is no equivalent order parameter for the deconfinement transition and one must consider the chiral phase transition. This transition occurs when the chiral symmetry --- exact in the limit of zero quark masses and spontaneously broken at low temperatures --- is restored at high temperature. The case of two dynamic quarks, i.e.\ considering dynamic effects of only two degenerate light-quark flavors, corresponding to the up and down quarks, is particularly interesting. In this case, if the transition is of second order, one would expect to observe universal critical scaling in the class of the $3d$ $O(4)$ continuous-spin model \cite{pisarsky,Rajagopal:1992qz}. Also, in the continuum limit, simulations using different discretizations for the fermion fields should give the same results. The fact that the critical behavior should be in the universality class of a spin model can be precisely checked, since the nonperturbative behavior for these models can be obtained with Monte Carlo simulations by so-called {\em global} methods, which avoid the critical slowing-down present in QCD simulations \cite{wolff,multigrid}. 
The determination of the correct nature of the transition in the two-flavor case is one of the present challenges of lattice QCD, as pointed out by Wilczek in \cite{Wilczek:2002wi}. This prediction has been investigated numerically by lattice simulations for over ten years, yet there is still no agreement about the order of the transition or about its scaling properties \cite{katz,Philipsen:2005mj}. More precisely, the predicted $O(4)$ scaling has been observed in the Wilson-fermion case \cite{iwasaki}, but not in the staggered-fermion case, believed to be the appropriate formulation for studies of the chiral region. In this case, extensive numerical studies and scaling tests have been done by the Bielefeld \cite{karsch}, JLQCD \cite{aoki} and MILC \cite{bernard} groups. It was found that the chiral-susceptibility peaks scale reasonably well with the predicted exponents, but no agreement is seen in a comparison with the $O(4)$ scaling function. At the same time, some recent numerical studies with staggered fermions claim that the deconfining transition may be of first order \cite{cea,delia}. In \cite{mendes,Mendes:2002pt} a simple method was introduced to obtain a uniquely defined normalization of the QCD data, allowing an unambiguous comparison to the (normalized) $O(4)$ scaling function. The analysis showed a surprisingly better agreement for the {\em larger} values of the quark masses. Let us note that in previous scaling tests the comparison had been done up to a (non-universal) normalization of the data and a match to the scaling function was tried by fitting it to the data points with the smallest masses. One interpretation of this result is that data at smaller masses (closer to the physical values) suffer more strongly from systematic errors in the simulations. In fact, larger quark masses are much easier to simulate, allowing greater control over errors and more reliable results. 
Here we present a preliminary study at a rather large mass value ($m_q = 0.075$ in lattice units), using staggered fermions and the MILC code. We consider the standard action and temporal lattice extent $N_{\tau}=4$, as in the studies mentioned above. \section{Scaling tests} \label{univ} The behavior of systems around a second-order phase transition (or critical point) may show striking similarities for systems that would otherwise seem completely different. In fact, it is possible to divide systems into so-called universality classes, in such a way that each class will have, e.g., the same critical exponents around the transition. Typical exponents are \begin{eqnarray} M_{h=0,\,t\to 0^-} &\to & |t|^{\beta} \mbox{,} \\ \chi_{h=0,\,t\to 0} &\to & |t|^{-\gamma} \mbox{,} \\ M_{t=0,\,h\to 0} &\to & h^{1/\delta}\,, \end{eqnarray} where $M$ is the order parameter --- e.g.\ the magnetization for a spin system --- $\chi$ is the corresponding susceptibility and \begin{eqnarray} t &=& (T-T_c)/T_0, \\ h &=& H/H_0 \end{eqnarray} are the reduced temperature and magnetic field, respectively. Thus, in principle, one may compare the critical exponents for different systems to check if they belong to the same universality class. In practice, however, the critical exponents may vary little from one class to the other and in order to carry out the comparison one would need to have a very precise determination of the exponents, which is not yet feasible in the QCD case. A more general comparison is obtained through the {\em scaling functions} for both systems. This comparison allows a more conclusive test, and can be applied for cases where the critical exponents cannot be established with great accuracy. In this case we may assume the exponents for a given class and compare the behavior of the whole critical region for one system to the known scaling curve for the proposed universality class. 
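As an illustration of how such an exponent is extracted in practice, the sketch below recovers $\beta$ from synthetic order-parameter data $M \sim |t|^{\beta}$ by a log-log fit. The input value $\beta = 0.38$ is only an assumed, $O(4)$-like number used for this toy demonstration, not a measured one:

```python
import numpy as np

beta_true = 0.38                      # assumed O(4)-like exponent, illustration only
t = -np.logspace(-3, -1, 20)          # reduced temperatures below T_c (h = 0)
M = np.abs(t) ** beta_true            # noiseless synthetic order parameter

# On a log-log plot, M ~ |t|^beta is a straight line whose slope is beta.
slope, intercept = np.polyfit(np.log(np.abs(t)), np.log(M), 1)
```

With noisy data the same fit returns an estimate of $\beta$ with a statistical error, which is why, as the text notes, exponents alone rarely discriminate between nearby universality classes.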
The scaling Ansatz is written for the free energy $F_s$ in the critical region as \begin{equation} F_s(t,h) \;=\; b^{-d}\,F_s(b^{y_t}\,t, b^{y_h}\,h)\,, \label{ansatz} \end{equation} where $b$ is a rescaling factor, $d$ is the dimension and $y_t,y_h$ are related to the usual critical exponents $\beta$, $\gamma$, $\delta$ mentioned above. The scaling Ansatz implies that the order parameter must be described by a universal function \begin{equation} M/h^{1/\delta} = f_M(t/h^{1/\beta \delta})\;. \end{equation} The statement that the function $f_M$ is {\em universal} means that once the non-universal normalization constants $T_0$ and $H_0$ are determined for a given system in the universality class, the order parameter $M$ scales according to the scaling function $f_M$ for all systems in this class. As said above, the comparison of (normalized) scaling functions between two systems is a more general test of universality, especially in the case of the QCD phase transition. A further difficulty in studying the critical behavior at the QCD phase transition is the impossibility of considering the critical point directly, since that would correspond to having zero quark mass, or zero magnetic field $H$ in the language of the spin models above. In order to check scaling with critical exponents of a given class, or to determine the normalization constants $T_0$ and $H_0$ for systems where a study at $H=0$ is not possible, it is important to determine the {\em pseudo-critical line}, defined by the points where the susceptibility $\chi$ shows a (finite) peak. This corresponds to the rounding of the divergence that would be observed for $H=0$, $T=T_c$. The susceptibility scales as \begin{equation} \chi \,=\, \partial M/\partial H \,=\, (1/H_0)\,h^{1/\delta - 1} \,f_{\chi}(t/h^{1/\beta \delta})\;, \end{equation} where $f_{\chi}$ is a universal function related to $f_M$.
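The collapse implied by this universal function can be illustrated with a toy example: data generated at different fields $h$ fall on a single curve once rescaled by the appropriate powers of $h$. In the sketch below the exponents are approximate $3d$ $O(4)$ values and `f_M` is a monotone stand-in, not the true $O(4)$ scaling function:

```python
import numpy as np

beta, delta = 0.38, 4.85  # approximate 3d O(4) exponents, for illustration

def f_M(z):
    """Toy stand-in for the universal scaling function f_M."""
    return 1.0 / (1.0 + np.exp(z))

def magnetization(t, h):
    """Order parameter in the critical region: M = h^{1/delta} f_M(t / h^{1/(beta delta)})."""
    return h ** (1.0 / delta) * f_M(t / h ** (1.0 / (beta * delta)))

# Sample at fixed scaling variable z = t / h^{1/(beta delta)} for several h;
# the rescaled curves M / h^{1/delta} coincide (a "scaling collapse").
z = np.linspace(-2.0, 2.0, 9)
collapsed = []
for h in (0.01, 0.02, 0.05):
    t = z * h ** (1.0 / (beta * delta))
    collapsed.append(magnetization(t, h) / h ** (1.0 / delta))
```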
At each fixed $h$ the peak in $\chi$ is given by \begin{eqnarray} t_{p} &=& z_p\,h^{1/\beta \delta}, \\ M_p &=& h^{1/\delta}\,f_M(z_p), \\ H_0\,\chi_{p} &=& h^{1/\delta - 1} \, f_{\chi}(z_p)\,. \end{eqnarray} Thus, the behavior along the pseudo-critical line is determined by the universal constants $z_p$, $f_M(z_p)$, $f_{\chi}(z_p)$. Critical exponents, the scaling function $f_M$ and the universal constants above are well known for the $3d$ $O(4)$ model \cite{o4,o4o2,o4new}. Note that one may also consider the comparison for finite-size-scaling functions, since they are also universal and have the advantage of being valid for finite values of $L$, the linear size of the system. Such functions were studied for the $3d$ $O(4)$ model in \cite{o4o2}. \section{Comparison of QCD data with the predicted scaling function} We now turn to the comparison of the two-flavor QCD data in the critical region (in the case of small but nonzero quark mass) to the predicted scaling properties of the $3d$ $O(4)$ spin model. As mentioned in the Introduction, we consider the chiral phase transition, since there is no clear order parameter for the deconfinement transition in the case of full QCD. The order parameter for the chiral transition is given by the so-called chiral condensate $\langle \overline{\psi}\,\psi \rangle$, where $\psi$ is a combination of the quark fields entering the QCD Lagrangian \cite{Rajagopal:1992qz}. The analogue of the magnetic field is the quark mass $m_q$, and (on the lattice) the reduced temperature is proportional to \begin{equation} 6/g^2 - 6/g_c^2(0)\,, \end{equation} where $g$ is the lattice bare coupling and $g_c^2(0)$ is its extrapolated critical value. Therefore, referring to the pseudo-critical line described in the previous section, the chiral susceptibility peaks at \begin{equation} t_{p}\sim {m_q}^{1/\beta\delta}\,.
\end{equation} As mentioned in the Introduction, previous results from lattice-QCD simulations in the two-flavor case show good scaling (with the predicted exponents) {\em only} along the pseudo-critical line, which is given by the peaks of the chiral susceptibility. It should be clear from the discussion in the above sections that this is not a sufficient test to prove that the transition is second order, especially if no agreement is seen when comparing the data to the scaling function. As described in \cite{mendes,Mendes:2002pt}, we use the observed scaling along the pseudo-critical line and the universal quantities $z_p$, $f_M(z_p)$ from the $O(4)$ model to determine the normalization constants $H_0$, $T_0$ for the QCD data. This allows an unambiguous comparison of the data to the scaling function $f_M$. More precisely, we note that in previous analyses \cite{bernard} the normalization constants were tentatively adjusted by shifting the $O(4)$ curve so as to get a rough agreement with the data at smaller quark masses, since these are closer to the chiral limit. The problem is that the lighter masses are also more subject to the presence of systematic errors in the simulations. In this case the overall agreement was rather poor, indicating that there were strong systematic effects or that the transition is not in the predicted universality class. Here we fix the constants as described in Section \ref{univ}, following the behavior along the pseudo-critical line. In this way no value of the quark mass is privileged and the comparison is unambiguous. Our comparison is shown in Fig.\ \ref{scaling} below. The pseudo-critical line corresponds to a point in this plot and is marked with an arrow. For clarity we do not show the data (from the Bielefeld and JLQCD collaborations) obtained directly at the pseudo-critical point. These are slightly scattered around $z_p$ but show good scaling within errors.
\begin{figure*}[htbp] \includegraphics[height=0.5\hsize,angle=-0]{sca_2fQCD_lawhep.ps} \caption{ Comparison of QCD (staggered) data to the $O(4)$ scaling function. For clarity, we do not show the data around the pseudo-critical point (indicated by the arrow), which were used to determine the normalization of the remaining data points.} \label{scaling} \end{figure*} We see relatively good scaling in the pseudo-critical region, i.e.\ around [$z_p$, $\,f_M(z_p)$], as expected. Away from this region most MILC points are several standard deviations away from the predicted curve. These data are given for three values of the quark mass in lattice units: 0.008, 0.0125 and 0.025. Note that the points with larger mass come closer to the curve. In particular, we can see that the new data at $m_q = 0.075$ show noticeably better scaling, especially for larger temperatures, where previously the scaling seemed unlikely. The good agreement of these data with the $O(4)$ scaling function motivates a careful study of systematic errors for smaller masses. A possible source of such errors is finite-size corrections, which would be stronger for smaller masses, since then the lattice side may not be large enough to ``contain'' the physical particle. Put differently, finite-size effects are expected when the correlation length (in lattice units) associated with a particle is comparable to or larger than the lattice side. Of course, this is more likely to occur for a lighter particle. \section{Finite-size effects} \label{FSS} In addition to the infinite-volume scaling laws mentioned above, we may also consider finite-size-scaling functions. In fact, the scaling Ansatz in Eq.\ \ref{ansatz} also implies \begin{equation} M = L^{-\beta/\nu} \, Q_z(h\,L^{\beta\delta/\nu}) \end{equation} where $L$ is the linear size of the system and we consider fixed values of the ratio $t/h^{1/\beta \delta} \equiv z$ (e.g.\ $z=0$ as in the critical isotherm, or $z_p$ as along the pseudo-critical line).
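A quick consistency check of this finite-size form: if $Q_z(u)$ behaves as a pure power $f_M(z)\,u^{1/\delta}$, the $L$-dependence cancels and the infinite-volume law $M = h^{1/\delta} f_M(z)$ is recovered. The sketch below verifies this numerically; the exponents are approximate $3d$ $O(4)$ values and the value taken for $f_M(z)$ is arbitrary:

```python
# Finite-size-scaling form M = L^{-beta/nu} * Q_z(h * L^{beta*delta/nu})
# with the pure-power choice Q_z(u) = f_M(z) * u^{1/delta}.
beta, delta, nu = 0.38, 4.85, 0.75  # approximate 3d O(4) exponents (assumed)
fM_at_z = 0.7                       # f_M(z) at some fixed z, arbitrary illustrative value

def M_fss(h, L):
    u = h * L ** (beta * delta / nu)
    return L ** (-beta / nu) * fM_at_z * u ** (1.0 / delta)

# The L-dependence cancels: L^{-beta/nu} * (L^{beta*delta/nu})^{1/delta} = 1.
values = [M_fss(0.01, L) for L in (8, 16, 32, 64)]
```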
Thus, $M$ can be described by a universal finite-size-scaling (FSS) function of one variable. We note that in order to recover the infinite-volume expression $M=h^{1/\delta}\,f_M(z)$ as $L\to\infty$, we must have $\;Q_z(u)\,\to\, f_M(z)\,u^{1/\delta}\;$ for large $u$. Thus, in this limit, the FSS functions are given simply in terms of the scaling function $f_M(z)$. Working with the FSS functions $Q_z$ instead of the infinite-volume scaling function $f_M$ has the disadvantage that one must consider $z$ fixed (thus restricting the regions to be compared in parameter space) but the advantage that a comparison can be made already at finite values of $L$. A finite-size-scaling analysis as described above was carried out in \cite{mendes}, but it was found that the QCD data show good (finite-size) scaling only along the pseudo-critical line. \section{Conclusions} Understanding the nature of the chiral phase transition in two-flavor QCD has proven to be a challenging task. The prediction of a second-order transition with critical behavior in the universality class of the $O(4)$ spin model is not verified for staggered fermions of small masses, although it can be shown (by an unambiguous normalization of the data) that better scaling is obtained for the existing data at larger (unphysical) masses. The fact that data for heavier quarks would show such good scaling was not expected, since the normalization of the data for comparison with the scaling curve did not privilege any particular values of the quark mass. This suggests that the lack of scaling at small masses observed so far may be due to systematic effects, which could be due to finite-size corrections or to uncontrolled errors in the hybrid Monte Carlo algorithm used for updating the configurations. Both these sources of errors would be more significant for the case of smaller masses.
As discussed in Section \ref{FSS} above, the deviations observed are most likely not due to finite-size corrections and we thus argue that the deviations from $O(4)$ scaling at smaller masses may come from systematic errors in the simulation, probably related to the use of the R algorithm for the simulations \cite{clark}. Note that, contrary to what happens in the quenched-QCD case, the algorithm used to update full-QCD configurations is not exact and should have its accuracy tested carefully for each different value of the quark mass used. Let us also mention that a redefinition of the reduced temperature in terms of the physical temperature $T$ including a term in the quark mass $M$, as suggested in \cite{delia}, improves the agreement with the scaling curve further, as has been recently shown in \cite{previous2}. \section{Acknowledgements} This work was supported by FAPESP (Grant 00/05047-5). Partial support from CNPq is also acknowledged. \bibliographystyle{aipproc}
\section{Introduction} The elliptic quantum group has been proposed in the papers \cite{FIJKMY, Felder, Fronsdal, EF, JKOS1}. There are two types of elliptic quantum groups, the vertex type ${\cal A}_{q,p}(\widehat{sl_N})$ and the face type ${\cal B}_{q,\lambda}({g})$, where ${g}$ is a Kac-Moody algebra associated with a symmetrizable Cartan matrix. The elliptic quantum groups have the structure of quasi-triangular quasi-Hopf algebras introduced by V.~Drinfeld \cite{Drinfeld}. H.~Konno \cite{Konno} introduced the elliptic quantum algebra $U_{q,p}(\widehat{sl_2})$ as an algebra of the screening currents of the extended deformed Virasoro algebra in terms of the fusion SOS model \cite{DJKMO}. M.~Jimbo, H.~Konno, S.~Odake and J.~Shiraishi \cite{JKOS2} continued the study of the elliptic quantum algebra $U_{q,p}(\widehat{sl_2})$. They constructed the elliptic analogue of the Drinfeld currents and identified $U_{q,p}(\widehat{sl_2})$ with the tensor product of ${\cal B}_{q,\lambda}(\widehat{sl_2})$ and a Heisenberg algebra ${\cal H}$. The elliptic quantum group ${\cal B}_{q,\lambda}(\widehat{sl_2})$ is a quasi-Hopf algebra while the elliptic algebra $U_{q,p}(\widehat{sl_2})$ is not. The intertwining relation of the vertex operator of ${\cal B}_{q,\lambda}(\widehat{sl_2})$ is based on the quasi-Hopf structure of ${\cal B}_{q,\lambda}(\widehat{sl_2})$. By the above isomorphism $U_{q,p}(\widehat{sl_2})\simeq {\cal B}_{q,\lambda} (\widehat{sl_2}) \otimes {\cal H}$, we can understand the ``intertwining relation'' of the vertex operator for the elliptic algebra $U_{q,p}(\widehat{sl_2})$. Along the above scheme the elliptic analogues of the Drinfeld currents of $U_{q,p}(\widehat{sl_2})$ were extended to those of $U_{q,p}({g})$ for non-twisted affine Lie algebras ${g}$ \cite{JKOS2, KK}. In this paper we are interested in the higher-rank generalization of the level $k$ free field realization of the elliptic quantum algebra.
For the elliptic algebra $U_{q,p}(\widehat{sl_2})$, there exist two kinds of free field realizations for arbitrary level $k$: one is the parafermion realization \cite{Konno, JKOS2}, the other is the Wakimoto realization \cite{CD}. In this paper we are interested in the higher-rank generalization of the Wakimoto realization of $U_{q,p}(\widehat{sl_2})$. We construct a level $k$ free field realization of the Drinfeld currents associated with the elliptic algebra $U_{q,p}(\widehat{sl_3})$. This gives the first example of an arbitrary-level free field realization of a higher-rank elliptic algebra. This free field realization can be applied to the construction of the integrals of motion for the elliptic algebra $U_{q,p}(\widehat{sl_3})$. For this purpose, see the references \cite{KS1, KS2, KS3}. The organization of this paper is as follows. In section 2 we set the notation and introduce bosons. In section 3 we review the level $k$ free field realization of the quantum group $U_q(\widehat{sl_3})$ \cite{AOS}. In section 4 we give the level $k$ free field realization of the elliptic quantum algebra $U_{q,p}(\widehat{sl_3})$. In the appendix we summarize the normal ordering of the basic operators. \section{Boson} The purpose of this section is to set up the basic notation and to introduce the bosons. In this paper we fix three parameters $q,k,r \in {\mathbb C}$. Let us set $r^*=r-k$. We assume $k \neq 0, -3$ and ${\rm Re}(r)>0$, ${\rm Re}(r^*)>0$. We assume $q$ is generic with $0<|q|<1$. Let us set a pair of parameters $p$ and $p^*$ by \begin{eqnarray} p=q^{2r},~~p^*=q^{2r^*}.\nonumber \end{eqnarray} We use the standard symbol of the $q$-integer $[n]$: \begin{eqnarray} [n]=\frac{q^n-q^{-n}}{q-q^{-1}}.\nonumber \end{eqnarray} Let us set the elliptic theta function $\Theta_p(z)$ by \begin{eqnarray} &&\Theta_p(z)=(z;p)_\infty (p/z;p)_\infty (p;p)_\infty,\nonumber\\ &&(z;p)_\infty=\prod_{n=0}^\infty (1-p^nz).\nonumber \end{eqnarray} It is convenient to work with the additive notation.
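The infinite products above converge rapidly for $|p|<1$ and can be truncated for numerical checks. A small, purely illustrative sketch evaluates $\Theta_p(z)$ and verifies the standard functional equation $\Theta_p(pz) = -z^{-1}\Theta_p(z)$, which underlies the quasi-periodicity of the theta functions used below:

```python
def qpoch(z, p, nterms=200):
    """Truncated q-Pochhammer symbol (z; p)_infty = prod_{n>=0} (1 - p^n z)."""
    out = 1.0
    for n in range(nterms):
        out *= 1.0 - p ** n * z
    return out

def theta_p(z, p):
    """Elliptic theta function Theta_p(z) = (z;p)_inf (p/z;p)_inf (p;p)_inf."""
    return qpoch(z, p) * qpoch(p / z, p) * qpoch(p, p)

# Functional equation Theta_p(p z) = -z^{-1} Theta_p(z), checked numerically
p, z = 0.1, 0.37
lhs = theta_p(p * z, p)
rhs = -theta_p(z, p) / z
```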
We use the parametrization \begin{eqnarray} q&=&e^{-\pi \sqrt{-1}/r \tau},\nonumber\\ p&=&e^{-2\pi \sqrt{-1}/\tau},~~~ p^*=e^{-2\pi \sqrt{-1}/\tau^*},~~(r\tau=r^*\tau^*),\nonumber\\ z&=&q^{2u}.\nonumber \end{eqnarray} Let us set the Jacobi elliptic theta functions $[u]_r, [u]_{r^*}$ by \begin{eqnarray} ~[u]_r&=&q^{\frac{u^2}{r}-u}\frac{\Theta_{p}(z)}{(p;p)_\infty^3},~~ ~[u]_{r^*}=q^{\frac{u^2}{r^*}-u}\frac{\Theta_{p^*}(z)}{(p^*;p^*)_\infty^3}.\nonumber \end{eqnarray} The function $[u]_r$ has a zero at $u=0$ and enjoys the quasi-periodicity property \begin{eqnarray} ~[u+r]_r=-[u]_r,~~~~[u+r\tau]_r=-e^{-\pi \sqrt{-1} \tau- \frac{2\pi \sqrt{-1} u}{r}}[u]_r.\nonumber \end{eqnarray} Let us set the delta function $\delta(z)$ as a formal power series: \begin{eqnarray} \delta(z)=\sum_{n \in {\mathbb Z}} z^n.\nonumber \end{eqnarray} Following \cite{AOS} we introduce free bosons $a_n^1,a_n^2, b_n^1,b_n^2,b_n^3,c_n^1,c_n^2,c_n^3, (n \in {\mathbb Z}_{\neq 0})$: \begin{eqnarray} ~[a_n^i,a_m^j]&=&\frac{[(k+3)n][A_{i,j}n]}{n}\delta_{n+m,0},~~ [p_a^i,q_a^j]=(k+3) A_{i,j},~~(i,j=1,2), \\ ~[b_n^i,b_m^j]&=&-\frac{[n]^2}{n} \delta_{i,j}\delta_{n+m,0},~~ [p_b^i,q_b^j]=-\delta_{i,j},~~(i,j=1,2,3), \\ ~[c_n^i,c_m^j]&=&\frac{[n]^2}{n} \delta_{i,j} \delta_{n+m,0},~~ [p_c^i,q_c^j]=\delta_{i,j},~~(i,j=1,2,3). \end{eqnarray} Here we have used the Cartan matrix $\left(\begin{array}{cc} A_{11}&A_{12}\\ A_{21}&A_{22} \end{array}\right)=\left(\begin{array}{cc}2&-1\\ -1&2 \end{array}\right)$.\\ For parameters $a_1,a_2, b_1,b_2,b_3,c_1,c_2,c_3 \in {\mathbb R}$, we set the vacuum vector $|a, b, c\rangle$ of the Fock space ${\cal F}_{a_1 a_2 b_1 b_2 b_3 c_1 c_2 c_3}$ as follows. \begin{eqnarray} &&a_n^i|a,b,c\rangle=b_n^j|a,b,c\rangle =c_n^j|a,b,c\rangle=0,~~(i=1,2; j=1,2,3; n>0), \end{eqnarray} \begin{eqnarray} p_a^i|a,b,c\rangle=a_i|a,b,c\rangle,~ p_b^j|a,b,c\rangle=b_j|a,b,c\rangle,~ p_c^j|a,b,c\rangle=c_j|a,b,c\rangle,\nonumber \\ ~~(i=1,2;j=1,2,3).
\end{eqnarray} The Fock space ${\cal F}_{a_1 a_2 b_1 b_2 b_3 c_1 c_2 c_3}$ is generated by the bosons $a_{-n}^1,a_{-n}^2, b_{-n}^1, b_{-n}^2, b_{-n}^3, c_{-n}^1, c_{-n}^2, c_{-n}^3$ for $n \in {\mathbb N}_{\neq 0}$. The dual Fock space ${\cal F}_{a_1 a_2 b_1 b_2 b_3 c_1 c_2 c_3}^*$ is defined in the same manner. In this paper we construct the elliptic analogue of the Drinfeld currents of $U_{q,p}(\widehat{sl_3})$ from these bosons $a_n^i, b_n^j, c_n^j$ acting on the Fock space. \section{Free Field Realization of $U_q(\widehat{sl_3})$} The purpose of this section is to give the free field realization of the quantum affine algebra $U_{q}(\widehat{sl_3})$. We give a review of the Wakimoto realization of $U_q(\widehat{sl_3})$ \cite{AOS}. Let us set the bosonic operators $a_\pm^i(z), b_\pm^i(z)$, $\gamma^i(z), \beta_s^i(z)$ by \begin{eqnarray} a_\pm^i(z)&=&\pm(q-q^{-1})\sum_{n>0}a_{\pm n}^i z^{\mp n} \pm p_a^i {\rm log}q,~~(i=1,2), \\ b_\pm^i(z)&=&\pm(q-q^{-1})\sum_{n>0}b_{\pm n}^i z^{\mp n} \pm p_b^i {\rm log}q,~~ (i=1,2,3), \\ b^i(z)&=&-\sum_{n \neq 0}\frac{b_n^i}{[n]}z^{-n}+q_b^i+p_b^i{\rm log}z,~~(i=1,2,3), \\ c^i(z)&=&-\sum_{n \neq 0}\frac{c_n^i}{[n]}z^{-n}+q_c^i+p_c^i{\rm log}z,~~(i=1,2,3), \\ \gamma^i(z)&=&-\sum_{n \neq 0}\frac{(b+c)_n^i}{[n]}z^{-n} +(q_b^i+q_c^i)+(p_b^i+p_c^i){\rm log}(-z),~~ (i=1,2,3), \\ \beta_1^i(z)&=&b_+^i(z)-(b^i+c^i)(qz),~\beta_2^i(z)=b_-^i(z)-(b^i+c^i)(q^{-1}z),~~ (i=1,2,3), \\ \beta_3^i(z)&=&b_+^i(z)+(b^i+c^i)(qz),~\beta_4^i(z)=b_-^i(z)+(b^i+c^i)(q^{-1}z),~~ (i=1,2,3). \end{eqnarray} We give a free field realization of the Drinfeld currents of $U_q(\widehat{sl_3})$.
\begin{df}~~ We define the bosonic operators $e_i^\pm(z)$ and $\psi_i^\pm(z)$ $(i=1,2)$ by \begin{eqnarray} e_1^+(z)&=&\frac{-1}{(q-q^{-1})z}(e_1^{+,1}(z)-e_1^{+,2}(z)), \\ e_2^+(z)&=&\frac{-1}{(q-q^{-1})z}(e_2^{+,1}(z)-e_2^{+,2}(z)+ e_2^{+,3}(z)-e_2^{+,4}(z)), \\ e_1^-(z)&=&\frac{-1}{(q-q^{-1})z} (e_1^{-,1}(z)-e_1^{-,2}(z)-e_1^{-,3}(z)+e_1^{-,4}(z)), \\ e_2^-(z)&=&\frac{-1}{(q-q^{-1})z}(e_2^{-,1}(z)-e_2^{-,2}(z)+e_2^{-,3}(z)-e_2^{-,4}(z)). \end{eqnarray} \begin{eqnarray} \psi^\pm_1(z)&=&:\exp\left( b_\pm^1(q^{\pm k}z)+b_\pm^1(q^{\pm (k+2)}z)+b_\pm^2(q^{\pm (k+3)}z)-b_\pm^3(q^{\pm (k+2)}z) +a_\pm^1(q^{\pm \frac{k+3}{2}}z) \right):,\nonumber\\ \\ \psi^\pm_2(z)&=&:\exp\left( -b_\pm^1(q^{\pm (k+1)}z)+b_\pm^2(q^{\pm k}z)+b_\pm^3(q^{\pm (k+1)}z) +b_\pm^3(q^{\pm (k+3)}z) +a_\pm^2(q^{\pm \frac{k+3}{2}}z) \right):.\nonumber\\ \end{eqnarray} Here we have set \begin{eqnarray} e_1^{+,1}(z)&=&:\exp\left(\beta_1^1(z)\right):,\\ e_1^{+,2}(z)&=&:\exp\left(\beta_2^1(z)\right):,\\ e_2^{+,1}(z)&=&:\exp\left(\gamma^1(z)+\beta_1^2(z)\right):,\\ e_2^{+,2}(z)&=&:\exp\left(\gamma^1(z)+\beta_2^2(z)\right):,\\ e_2^{+,3}(z)&=&:\exp\left(\beta_1^3(qz)+b_+^2(z)-b_+^1(qz)\right):,\\ e_2^{+,4}(z)&=&:\exp\left(\beta_2^3(qz)+b_+^2(z)-b_+^1(qz)\right):,\\ \nonumber \\ e_1^{-,1}(z)&=&:\exp\left(\beta_4^1(q^{-k-2}z)+b_-^2(q^{-k-3}z)- b_-^3(q^{-k-2}z)+ a_-^1(q^{-\frac{k+3}{2}}z) \right):, \\ e_1^{-,2}(z)&=&:\exp\left( \beta_3^1(q^{k+2}z)+b_+^2(q^{k+3}z)-b_+^3(q^{k+2}z)+a_+^1(q^{\frac{k+3}{2}}z) \right):, \\ e_1^{-,3}(z)&=& :\exp\left( \gamma^2(q^{k+2}z) +\beta_1^3(q^{k+2}z) +b_+^2(q^{k+3}z)-b_+^3(q^{k+2}z) +a_+^1(q^{\frac{k+3}{2}}z) \right):,\\ e_1^{-,4}(z)&=& :\exp\left( \gamma^2(q^{k+2}z) +\beta_2^3(q^{k+2}z) +b_+^2(q^{k+3}z)-b_+^3(q^{k+2}z) +a_+^1(q^{\frac{k+3}{2}}z) \right):, \\ e_2^{-,1}(z)&=& :\exp\left( \gamma^2(q^{-k-1}z)-\beta_3^1(q^{-k-1}z) +2b_-^3(q^{-k-1}z)+a_-^2(q^{-\frac{k+3}{2}}z) \right):, \\ e_2^{-,2}(z)&=& :\exp\left( \gamma^2(q^{-k-1}z)-\beta_4^1(q^{-k-1}z)
+2b_-^3(q^{-k-1}z)+a_-^2(q^{-\frac{k+3}{2}}z) \right):, \\ e_2^{-,3}(z)&=&:\exp\left(\beta_4^3(q^{-k-3}z)+a_-^2(q^{-\frac{k+3}{2}}z) \right):, \\ e_2^{-,4}(z)&=&:\exp\left(\beta_3^3(q^{k+3}z)+a_+^2(q^{\frac{k+3}{2}}z)\right):. \end{eqnarray} \end{df} Here the symbol $:{\cal O}:$ represents the normal ordering of ${\cal O}$. For example we have \begin{eqnarray} :b_k^i b_l^i:=\left\{ \begin{array}{cc} b_k^i b_l^i,& k<0\\ b_l^i b_k^i,& k>0. \end{array}\right.~~~ :p_b^i q_b^i:=:q_b^i p_b^i:=q_b^i p_b^i.\nonumber \end{eqnarray} \begin{thm}~~\cite{AOS}~~ The bosonic operators $e_i^\pm(z)$, $\psi_i^\pm(z)$, $(i=1,2)$ satisfy the following commutation relations. \begin{eqnarray} (z_1-q^{A_{i,j}}z_2)e_i^+(z_1)e_j^+(z_2)&=&(q^{A_{i,j}}z_1-z_2) e_j^+(z_2)e_i^+(z_1),\\ (z_1-q^{-A_{i,j}}z_2)e_i^-(z_1)e_j^-(z_2)&=&(q^{-A_{i,j}}z_1-z_2) e_j^-(z_2)e_i^-(z_1),\\ ~[\psi_i^\pm(z_1),\psi_j^\pm(z_2)]&=&0, \end{eqnarray} \begin{eqnarray} &&(z_1-q^{A_{i,j}-k}z_2)(z_1-q^{-A_{i,j}+k}z_2)\psi_i^\pm(z_1)\psi_j^\mp(z_2)\nonumber\\ &=&(z_1-q^{A_{i,j}+k}z_2)(z_1-q^{-A_{i,j}-k}z_2)\psi_j^\mp(z_2)\psi_i^\pm(z_1), \end{eqnarray} \begin{eqnarray} (z_1-q^{\pm (A_{i,j}-\frac{k}{2})}z_2)\psi_i^+(z_1)e^\pm_j(z_2)&=& (q^{\pm A_{i,j}}z_1-q^{\mp \frac{k}{2}}z_2)e^\pm_j(z_2)\psi_i^+(z_1),\\ (z_1-q^{\pm (A_{i,j}-\frac{k}{2})}z_2)e^\pm_i(z_1)\psi_j^-(z_2)&=& (q^{\pm A_{i,j}}z_1-q^{\mp \frac{k}{2}}z_2)\psi_j^-(z_2)e^\pm_i(z_1), \end{eqnarray} \begin{eqnarray} &&\left\{ e_i^\pm(z_1)e_i^\pm(z_2)e_j^\pm(z_3)-(q+q^{-1}) e_i^\pm(z_1)e_j^\pm(z_3)e_i^\pm(z_2)+ e_j^\pm(z_3)e_i^\pm(z_1)e_i^\pm(z_2) \right\}\nonumber\\ &&+\left\{ z_1 \leftrightarrow z_2 \right\}=0,~~{\rm for}~~(i \neq j), \end{eqnarray} \begin{eqnarray} &&[e_i^+(z_1),e_j^-(z_2)]=\frac{\delta_{i,j}}{(q-q^{-1})z_1 z_2} \left(\delta\left(q^{-k}\frac{z_1}{z_2}\right)\psi_i^+(q^{-\frac{k}{2}}z_1) -\delta\left(q^k\frac{z_1}{z_2}\right) \psi_i^-(q^{-\frac{k}{2}}z_2)\right).\nonumber\\ \end{eqnarray} \end{thm} Hence $e_i^\pm(z), \psi_i^\pm(z)$ give a level $k$ free 
field realization of $U_q(\widehat{sl_3})$. \section{Free Field Realization of $U_{q,p}(\widehat{sl_3})$} The purpose of this section is to give a free field realization of the elliptic analogue of the Drinfeld current for $U_{q,p}(\widehat{sl_3})$ with arbitrary level $k\neq 0,-3$. Let us set the bosonic operators ${\cal B}_\pm^{* i}(z), {\cal B}_\pm^{i}(z), (i=1,2,3)$, ${\cal A}^{* i}(z), {\cal A}^{i}(z), (i=1,2)$ by \begin{eqnarray} {\cal B}_\pm^{* i}(z)&=&\exp\left(\pm \sum_{n>0} \frac{b_{-n}^i}{[r^*n]}z^n\right), ~~(i=1,2,3), \\ {\cal B}_\pm^{ i}(z)&=&\exp\left(\pm \sum_{n>0} \frac{b_n^i}{[rn]} z^{-n}\right), ~~(i=1,2,3), \\ {\cal A}^{* i}(z)&=&\exp\left(\sum_{n>0}\frac{a_{-n}^i}{[r^*n]}z^{n}\right), ~~(i=1,2), \\ {\cal A}^i(z)&=&\exp\left(-\sum_{n>0}\frac{a_n^i}{[rn]}z^{-n}\right),~~(i=1,2). \end{eqnarray} \begin{df}~~Let us set the bosonic operators $e_i(z), f_i(z), \Psi_i^\pm(z), (i=1,2)$ by \begin{eqnarray} &&e_i(z)={U}^{* i}(z)e_i^+(z),~~(i=1,2),\\ &&f_i(z)=e_i^-(z){U}^i(z),~~(i=1,2),\\ &&\Psi_i^+(z)= U^{*i}(q^{\frac{k}{2}}z) \psi_i^+(z)U^i(q^{-\frac{k}{2}}z), ~~(i=1,2),\\ &&\Psi_i^-(z)=U^{*i}(q^{-\frac{k}{2}}z)\psi_i^-(z) U^i(q^{\frac{k}{2}}z),~~(i=1,2). \end{eqnarray} Here we have set \begin{eqnarray} {U}^{* 1}(z)&=& {\cal B}_+^{* 1}(q^{r^*}z){\cal B}_+^{* 1}(q^{r^*-2}z) {\cal B}_+^{* 2}(q^{r^*-3}z) {\cal B}_-^{* 3}(q^{r^*-2}z){\cal A}^{* 1}(q^{r^*+\frac{k-3}{2}}z), \\ {U}^{* 2}(z)&=& {\cal B}_+^{* 3}(q^{r^*-3}z){\cal B}_+^{* 3}(q^{r^*-1}z) {\cal B}_+^{* 2}(q^{r^*}z) {\cal B}_-^{* 1}(q^{r^*-1}z){\cal A}^{* 2}(q^{r^*+\frac{k-3}{2}}z), \\ {U}^1(z)&=&{\cal B}_-^{1}(q^{-r^*}z){\cal B}_-^1(q^{-r^*+2}z) {\cal B}_-^2(q^{-r^*+3}z){\cal B}_+^3(q^{-r^*+2}z) {\cal A}^{1}(q^{-r^*-\frac{k-3}{2}}z), \\ {U}^2(z)&=&{\cal B}_-^3(q^{-r^*+3}z){\cal B}_-^3(q^{-r^*+1}z) {\cal B}_-^2(q^{-r^*}z){\cal B}_+^1(q^{-r^*+1}z) {\cal A}^2(q^{-r^*-\frac{k-3}{2}}z). 
\end{eqnarray} \end{df} The above free field realization of the twistors $U^{*i}(z), U^i(z)$, $(i=1,2)$ is the main result of this paper. \begin{prop}~~The bosonic operators $e_i(z), f_i(z), \Psi_i^\pm(z)$, $(i=1,2)$ satisfy the following commutation relations. \begin{eqnarray} e_i(z_1)e_j(z_2)&=& q^{-A_{i,j}}\frac{\Theta_{p^*}(q^{A_{i,j}}z_1/z_2)} {\Theta_{p^*}(q^{-A_{i,j}}z_1/z_2)} e_j(z_2)e_i(z_1), \\ f_i(z_1)f_j(z_2)&=& q^{A_{i,j}}\frac{\Theta_{p}(q^{-A_{i,j}}z_1/z_2)}{\Theta_{p}(q^{A_{i,j}}z_1/z_2)} f_j(z_2)f_i(z_1), \end{eqnarray} \begin{eqnarray} \Psi_i^\pm(z_1)\Psi_j^\pm(z_2)&=&\frac{\Theta_p(q^{-A_{i,j}}z_1/z_2) \Theta_{p^*}(q^{A_{i,j}}z_1/z_2)}{ \Theta_p(q^{A_{i,j}}z_1/z_2)\Theta_{p^*}(q^{-A_{i,j}}z_1/z_2)} \Psi_j^\pm(z_2)\Psi_i^\pm(z_1),\\ \Psi_i^\pm(z_1)\Psi_j^\mp(z_2)&=& \frac{ \Theta_p(pq^{-A_{i,j}-k}z_1/z_2) \Theta_{p^*}(p^*q^{A_{i,j}+k}z_1/z_2) }{ \Theta_p(pq^{A_{i,j}-k}z_1/z_2) \Theta_{p^*}(p^*q^{-A_{i,j}+k}z_1/z_2) } \Psi_j^\mp(z_2)\Psi_i^\pm(z_1), \\ \nonumber \\ \Psi_i^\pm(z_1)e_j(z_2)&=& \frac{\Theta_{p^*}(q^{A_{i,j}\pm \frac{k}{2}}z_1/z_2)}{ \Theta_{p^*}(q^{-A_{i,j}\pm \frac{k}{2}}z_1/z_2)} e_j(z_2)\Psi_i^\pm(z_1),\\ \Psi_i^\pm(z_1)f_j(z_2)&=& \frac{\Theta_{p}(q^{-A_{i,j}\mp \frac{k}{2}}z_1/z_2)}{ \Theta_{p}(q^{A_{i,j}\mp \frac{k}{2}}z_1/z_2)} f_j(z_2)\Psi_i^\pm(z_1), \end{eqnarray} \begin{eqnarray} ~[e_i(z_1),f_j(z_2)]=\frac{\delta_{i,j}} {(q-q^{-1})z_1 z_2}\left( \delta\left(q^{-k}\frac{z_1}{z_2}\right) \Psi_i^+(q^{-k/2}z_1)- \delta\left(q^{k}\frac{z_1}{z_2}\right) \Psi_i^-(q^{-k/2}z_2)\right),\nonumber\\ (i,j=1,2).~~~~ \end{eqnarray} \end{prop} We introduce the Heisenberg algebra ${\cal H}$ generated by the following $P_i,Q_i$, $(i=1,2)$. \begin{eqnarray} ~[P_i,Q_j]=\frac{A_{i,j}}{2},~~(i,j=1,2). 
\end{eqnarray} \begin{df}~~Let us define the bosonic operators $E_i(z), F_i(z), H_i^\pm(z) \in U_q(\widehat{sl_3}){\otimes}{\cal H}$, $(i=1,2)$ by \begin{eqnarray} E_1(z)&=&e_1(z)e^{2Q_1}z^{-\frac{P_1-1}{r^*}},~~ E_2(z)=e_2(z)e^{2Q_2}z^{-\frac{P_2-1}{r^*}},\\ F_1(z)&=&f_1(z)z^{\frac{2p_b^1+p_b^2-p_b^3+p_a^1}{r}}z^{\frac{P_1-1}{r}},~~ F_2(z)=f_2(z)z^{\frac{2p_b^3+p_b^2-p_b^1+p_a^2}{r}}z^{\frac{P_2-1}{r}}, \\ H_1^\pm(z)&=&\Psi_1^\pm(z)e^{2Q_1} (q^{\mp \frac{k}{2}}z)^{\frac{2p_b^1+p_b^2-p_b^3+p_a^1}{r}} (q^{\pm (r-\frac{k}{2})}z)^{\frac{P_1-1}{r}-\frac{P_1-1}{r^*}},\\ H_2^\pm(z)&=&\Psi_2^\pm(z)e^{2Q_2} (q^{\mp \frac{k}{2}}z)^{\frac{2p_b^3+p_b^2-p_b^1+p_a^2}{r}} (q^{\pm (r-\frac{k}{2})}z)^{\frac{P_2-1}{r}-\frac{P_2-1}{r^*}}. \end{eqnarray} \end{df} \begin{thm}~~The bosonic operators $E_i(z), F_i(z), H_i^\pm(z)$, $(i=1,2)$ satisfy the following commutation relations. \begin{eqnarray} E_i(z_1)E_j(z_2)&=&\frac{\displaystyle \left[u_1-u_2+\frac{A_{i,j}}{2}\right]_{r^*}} {\displaystyle \left[u_1-u_2-\frac{A_{i,j}}{2}\right]_{r^*}}E_j(z_2)E_i(z_1), \\ F_i(z_1)F_j(z_2)&=& \frac{\displaystyle \left[u_1-u_2-\frac{A_{i,j}}{2}\right]_{r}} {\displaystyle \left[u_1-u_2+\frac{A_{i,j}}{2}\right]_{r}}F_j(z_2)F_i(z_1),\\ H^\pm_i(z_1)H^\pm_j(z_2)&=&\frac{\displaystyle \left[u_1-u_2-\frac{A_{i,j}}{2}\right]_r \left[u_1-u_2+\frac{A_{i,j}}{2}\right]_{r^*}}{ \displaystyle \left[u_1-u_2+\frac{A_{i,j}}{2}\right]_r \left[u_1-u_2-\frac{A_{i,j}}{2}\right]_{r^*}} H^\pm_j(z_2)H^\pm_i(z_1),\\ H^+_i(z_1)H^-_j(z_2)&=& \frac{\displaystyle \left[u_1-u_2-\frac{A_{i,j}}{2}-\frac{k}{2}\right]_r \left[u_1-u_2+\frac{A_{i,j}}{2}+\frac{k}{2}\right]_{r^*}}{ \displaystyle \left[u_1-u_2+\frac{A_{i,j}}{2}-\frac{k}{2}\right]_r \left[u_1-u_2-\frac{A_{i,j}}{2}+\frac{k}{2}\right]_{r^*}} H^-_j(z_2)H^+_i(z_1),\nonumber\\ \\ H^\pm_i(z_1)E_j(z_2)&=& \frac{\displaystyle \left[u_1-u_2\pm\frac{k}{4}+\frac{A_{i,j}}{2}\right]_{r^*} } {\displaystyle \left[u_1-u_2\pm \frac{k}{4}-\frac{A_{i,j}}{2}\right]_{r^*}} 
E_j(z_2)H^\pm_i(z_1),\\ H^\pm_i(z_1)F_j(z_2)&=& \frac{\displaystyle \left[u_1-u_2\mp\frac{k}{4}-\frac{A_{i,j}}{2}\right]_{r} } { \displaystyle \left[u_1-u_2\mp \frac{k}{4}+\frac{A_{i,j}}{2}\right]_{r}} F_j(z_2)H^\pm_i(z_1), \end{eqnarray} \begin{eqnarray} ~[E_i(z_1),F_j(z_2)]=\frac{\delta_{i,j}}{(q-q^{-1})z_1z_2}\left( \delta\left(q^{-k}\frac{z_1}{z_2}\right)H_i^+(q^{-\frac{k}{2}}z_1)- \delta\left(q^{k}\frac{z_1}{z_2}\right)H_i^-(q^{-\frac{k}{2}}z_2)\right). \end{eqnarray} \end{thm} Now we have constructed a level $k$ free field realization of the Drinfeld current $E_i(z), F_i(z), H_i^\pm(z)$ for the elliptic algebra $U_{q,p}(\widehat{sl_3})$. This gives the first example of an arbitrary-level free field realization of a higher-rank elliptic algebra. \section*{Acknowledgement}~The author would like to thank the organizing committee of the 27th International Colloquium on Group Theoretical Methods in Physics held at Yerevan, Armenia, in 2008. The author would like to thank Prof.~A.~Kluemper for his kindness in Armenia. This work is partly supported by the Grant-in-Aid for Young Scientists {\bf B} (18740092) from the Japan Society for the Promotion of Science. \section*{Appendix} In this appendix we summarize the normal ordering of the basic operators. 
\begin{eqnarray} :e^{\gamma^i(z_1)}: {\cal B}_+^{* i}(z_2)&=&: e^{\gamma^i(z_1)} {\cal B}_+^{* i}(z_2): \frac{(q^{r^*+1}z_2/z_1;p^*)_\infty}{(q^{r^*-1}z_2/z_1;p^*)_\infty}, \nonumber\\ :e^{\beta_1^i(z_1)}: {\cal B}_+^{* i}(z_2)&=& :e^{\beta_1^i(z_1)} {\cal B}_+^{* i}(z_2): \frac{(q^{r^*}z_2/z_1;p^*)_\infty}{ (q^{r^*+2}z_2/z_1;p^*)_\infty}, \nonumber\\ :e^{\beta_2^i(z_1)}: {\cal B}_+^{* i}(z_2)&=& :e^{\beta_2^i(z_1)} {\cal B}_+^{* i}(z_2): \frac{(q^{r^*}z_2/z_1;p^*)_\infty}{ (q^{r^*+2}z_2/z_1;p^*)_\infty}, \nonumber\\ :e^{\beta_3^i(z_1)}: {\cal B}_+^{* i}(z_2)&=&: e^{\beta_3^i(z_1)} {\cal B}_+^{* i}(z_2): \frac{(q^{r^*}z_2/z_1;p^*)_\infty}{ (q^{r^*-2}z_2/z_1;p^*)_\infty}, \nonumber\\ :e^{\beta_4^i(z_1)}: {\cal B}_+^{* i}(z_2)&=&: e^{\beta_4^i(z_1)} {\cal B}_+^{* i}(z_2): \frac{(q^{r^*}z_2/z_1;p^*)_\infty}{ (q^{r^*+2}z_2/z_1;p^*)_\infty}, \nonumber \\ {\cal B}_-^i(z_1):e^{\gamma^i(z_2)}:&=&: {\cal B}_-^i(z_1)e^{\gamma^i(z_2)}: \frac{(q^{r+1}z_2/z_1;p)_\infty}{(q^{r-1}z_2/z_1;p)_\infty}, \nonumber \\ {\cal B}_-^i(z_1):e^{\beta_1^i(z_2)}:&=&: {\cal B}_-^i(z_1)e^{\beta_1^i(z_2)} :\frac{(q^rz_2/z_1;p)_\infty}{ (q^{r+2}z_2/z_1;p)_\infty}, \nonumber \\ {\cal B}_-^i(z_1):e^{\beta_2^i(z_2)}:&=& :{\cal B}_-^i(z_1)e^{\beta_2^i(z_2)}: \frac{(q^rz_2/z_1;p)_\infty}{ (q^{r+2}z_2/z_1;p)_\infty}, \nonumber \\ {\cal B}_-^i(z_1):e^{\beta_3^i(z_2)}:&=&: {\cal B}_-^i(z_1)e^{\beta_3^i(z_2)}: \frac{(q^rz_2/z_1;p)_\infty}{ (q^{r-2}z_2/z_1;p)_\infty}, \nonumber \\ {\cal B}_-^i(z_1):e^{\beta_4^i(z_2)}:&=&: {\cal B}_-^i(z_1)e^{\beta_4^i(z_2)} : \frac{(q^rz_2/z_1;p)_\infty}{ (q^{r-2}z_2/z_1;p)_\infty},\nonumber \end{eqnarray} \begin{eqnarray} e^{b_+^i(z_1)}{\cal B}_+^{* i}(z_2)&=& : e^{b_+^i(z_1)}{\cal B}_+^{* i}(z_2) :\frac{ (q^{r^*}z_2/z_1;p^*)_\infty^2}{ (q^{r^*+2}z_2/z_1;p^*)_\infty (q^{r^*-2}z_2/z_1;p^*)_\infty},\nonumber \\ {\cal B}_-^i(z_1) e^{b_-^i(z_2)}&=&: {\cal B}_-^i(z_1) e^{b_-^i(z_2)} :\frac{(q^rz_2/z_1;p)_\infty^2}{ (q^{r+2}z_2/z_1;p)_\infty (q^{r-2}z_2/z_1;p)_\infty},\nonumber 
\\ e^{a_+^i(z_1)}{\cal A}^{*i}(z_2)&=& :e^{a_+^i(z_1)}{\cal A}^{*i}(z_2): \frac{ (q^{r^*+k+5}z_2/z_1;p^*)_\infty (q^{r^*-k-5}z_2/z_1;p^*)_\infty}{ (q^{r^*+k+1}z_2/z_1;p^*)_\infty (q^{r^*-k-1}z_2/z_1;p^*)_\infty},\nonumber\\ e^{a_+^1(z_1)}{\cal A}^{*2}(z_2)&=&: e^{a_+^1(z_1)}{\cal A}^{*2}(z_2): \frac{ (q^{r^*+k+2}z_2/z_1;p^*)_\infty (q^{r^*-k-2}z_2/z_1;p^*)_\infty}{ (q^{r^*+k+4}z_2/z_1;p^*)_\infty (q^{r^*-k-4}z_2/z_1;p^*)_\infty},\nonumber\\ e^{a_+^2(z_1)}{\cal A}^{*1}(z_2)&=& : e^{a_+^2(z_1)}{\cal A}^{*1}(z_2): \frac{ (q^{r^*+k+2}z_2/z_1;p^*)_\infty (q^{r^*-k-2}z_2/z_1;p^*)_\infty}{ (q^{r^*+k+4}z_2/z_1;p^*)_\infty (q^{r^*-k-4}z_2/z_1;p^*)_\infty},\nonumber\\ {\cal A}^{i}(z_1)e^{a_-^i(z_2)}&=&: {\cal A}^{i}(z_1)e^{a_-^i(z_2)} :\frac{(q^{r+k+5}z_2/z_1;p)_\infty (q^{r-k-5}z_2/z_1;p)_\infty}{ (q^{r+k+1}z_2/z_1;p)_\infty (q^{r-k-1}z_2/z_1;p)_\infty}, \nonumber\\ {\cal A}^{1}(z_1)e^{a_-^2(z_2)}&=&: {\cal A}^{1}(z_1)e^{a_-^2(z_2)}: \frac{ (q^{r+k+2}z_2/z_1;p)_\infty (q^{r-k-2}z_2/z_1;p)_\infty}{ (q^{r+k+4}z_2/z_1;p)_\infty (q^{r-k-4}z_2/z_1;p)_\infty},\nonumber\\ {\cal A}^{2}(z_1)e^{a_-^1(z_2)}&=& :{\cal A}^{2}(z_1)e^{a_-^1(z_2)}: \frac{ (q^{r+k+2}z_2/z_1;p)_\infty (q^{r-k-2}z_2/z_1;p)_\infty}{ (q^{r+k+4}z_2/z_1;p)_\infty (q^{r-k-4}z_2/z_1;p)_\infty},\nonumber \end{eqnarray} \begin{eqnarray} {\cal B}_-^i(z_1){\cal B}_{+}^{*i}(z_2)&=&: {\cal B}_-^i(z_1){\cal B}_{+}^{*i}(z_2) :\frac{(q^k z_2/z_1;q^{2k},p^*)_\infty^2}{ (q^{k+2}z_2/z_1;q^{2k},p^*)_\infty (q^{k-2}z_2/z_1;q^{2k},p^*)_\infty }\nonumber\\ &\times&\frac{ (q^{k+2}z_2/z_1;q^{2k},p)_\infty (q^{k-2}z_2/z_1;q^{2k},p)_\infty} {(q^k z_2/z_1;q^{2k},p)_\infty^2}, \nonumber\\ {\cal A}^i(z_1){\cal A}^{*i}(z_2)&=&: {\cal A}^i(z_1){\cal A}^{*i}(z_2): \frac{(q^{2k+5}z_2/z_1;q^{2k},p^*)_\infty (q^{-5}z_2/z_1;q^{2k},p^*)_\infty }{ (q^{2k+1}z_2/z_1;q^{2k},p^*)_\infty (q^{-1}z_2/z_1;q^{2k},p^*)_\infty }\nonumber\\ &\times& \frac{(q^{2k+1}z_2/z_1;q^{2k},p)_\infty (q^{-1}z_2/z_1;q^{2k},p)_\infty }{ (q^{2k+5}z_2/z_1;q^{2k},p)_\infty 
(q^{-5}z_2/z_1;q^{2k},p)_\infty},\nonumber \\ {\cal A}^1(z_1){\cal A}^{*2}(z_2)&=&: {\cal A}^1(z_1){\cal A}^{*2}(z_2): \frac{(q^{2k+2}z_2/z_1;q^{2k},p^*)_\infty (q^{-2}z_2/z_1;q^{2k},p^*)_\infty }{ (q^{2k+4}z_2/z_1;q^{2k},p^*)_\infty (q^{-4}z_2/z_1;q^{2k},p^*)_\infty }\nonumber\\ &\times& \frac{(q^{2k+4}z_2/z_1;q^{2k},p)_\infty (q^{-4}z_2/z_1;q^{2k},p)_\infty }{ (q^{2k+2}z_2/z_1;q^{2k},p)_\infty (q^{-2}z_2/z_1;q^{2k},p)_\infty},\nonumber \\ {\cal A}^2(z_1){\cal A}^{*1}(z_2)&=&: {\cal A}^2(z_1){\cal A}^{*1}(z_2): \frac{(q^{2k+2}z_2/z_1;q^{2k},p^*)_\infty (q^{-2}z_2/z_1;q^{2k},p^*)_\infty }{ (q^{2k+4}z_2/z_1;q^{2k},p^*)_\infty (q^{-4}z_2/z_1;q^{2k},p^*)_\infty }\nonumber\\ &\times& \frac{(q^{2k+4}z_2/z_1;q^{2k},p)_\infty (q^{-4}z_2/z_1;q^{2k},p)_\infty }{ (q^{2k+2}z_2/z_1;q^{2k},p)_\infty (q^{-2}z_2/z_1;q^{2k},p)_\infty}.\nonumber \end{eqnarray} Here we have used the notation $$(z;p_1,p_2)_\infty=\prod_{n_1,n_2=0}^\infty (1-p_1^{n_1}p_2^{n_2}z).$$
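These product conventions can be sanity-checked numerically by truncating the infinite products. The short Python sketch below is my own illustration; it also tests the quasi-periodicity of the theta function $\Theta_p(z)$ entering the commutation relations of the elliptic currents, under the standard convention $\Theta_p(z)=(z;p)_\infty\,(p/z;p)_\infty\,(p;p)_\infty$, which is assumed here because it is not spelled out in the paper.

```python
# Truncated evaluation of the q-Pochhammer products used above.
# The convention Theta_p(z) = (z; p)_inf (p/z; p)_inf (p; p)_inf is
# assumed (standard in the elliptic literature, not stated in the paper).

def qpoch(z, p, N=200):
    """(z; p)_inf = prod_{n>=0} (1 - p^n z), truncated at N factors."""
    out = 1.0
    for n in range(N):
        out *= 1.0 - p ** n * z
    return out

def qpoch2(z, p1, p2, N=60):
    """(z; p1, p2)_inf = prod_{n1,n2>=0} (1 - p1^n1 p2^n2 z), truncated."""
    out = 1.0
    for n1 in range(N):
        for n2 in range(N):
            out *= 1.0 - p1 ** n1 * p2 ** n2 * z
    return out

def theta(z, p):
    """Odd theta function Theta_p(z), standard convention (assumed)."""
    return qpoch(z, p) * qpoch(p / z, p) * qpoch(p, p)

z, p = 1.7, 0.3
# setting p2 = 0 kills every factor with n2 > 0: (z; p1, 0)_inf = (z; p1)_inf
assert abs(qpoch2(0.4, 0.2, 0.0) - qpoch(0.4, 0.2)) < 1e-12
# quasi-periodicity: Theta_p(p z) = -z^{-1} Theta_p(z)
assert abs(theta(p * z, p) + theta(z, p) / z) < 1e-12
```

The identity $\Theta_p(pz)=-z^{-1}\Theta_p(z)$ follows from $(pz;p)_\infty=(z;p)_\infty/(1-z)$ and $(z^{-1};p)_\infty=(1-z^{-1})(p/z;p)_\infty$.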
\section{Introduction} Given two polytopes $P$ and $Q$, we can ask: What is a polytope $P'$ of largest volume such that $P'$ is similar to $P$ and contained in~$Q$? By ``similar'' we understand that $P'$ can be transformed into $P$ by a dilation and rigid motions. Instead of ``largest volume'' we might as well ask for a polytope that maximizes the dilation factor between $P$ and~$P'$. An equivalent question asks for the smallest polytope $Q'$ which is similar to $Q$ and contains~$P$. The earliest work on this topic might already be found in Kepler's work, \cite[libri V, caput I, p.~181]{K}. One finds descriptions of the largest regular tetrahedron included in a cube and of the largest cube included in a regular dodecahedron, although no claim of maximality is made. A substantial contribution was made by Croft, \cite{C80}, who considered the case where $P$ and $Q$ are three-dimensional. He notes that apart from exceptional cases \emph{local} maxima must be immobile and therefore satisfy $7$ linear constraints, see \cite[Theorem, p.~279]{C80}. Using this information he calculates all local maxima and obtains \emph{global} maximal configurations, see \cite[p.~283--295]{C80}. Letting $P$ and $Q$ range over the platonic solids, Croft gives a complete answer for $14$ out of the $20$ non-trivial cases. This is the problem described by the same author, Falconer and Guy as Problem B3 in \cite[p.~52]{CFG91}; see below for an answer to the remaining six cases. Containment problems for (simple) polygons are discussed for example in \cite{C83} and \cite{AAS98}, and some algorithms are given. Taking $P$ to be a regular $n$-gon and $Q$ to be a regular $m$-gon, the size of the largest copy of $P$ inside $Q$ is known if and only if $n$ and $m$ share a common prime factor. If they are coprime only conjectural results are known; see the article by Dilworth and Mane, \cite{DM10}. More general containment problems are studied by Gritzmann and Klee, \cite{GK94}. 
They also allow groups other than the group of similarities to act on the polytopes. Gritzmann and Klee state the problem where the acting group is the group of similarities, \cite[p.~143]{GK94}, but do not discuss a computational approach. The problem of finding largest, not necessarily regular, $j$-simplices in $k$-cubes is related to Hadamard matrices and is discussed in \cite{HKL96}. In some cases the maximizer is indeed a \emph{regular} simplex, see \cite{MRT09} for details. A short summary of the results of this paper by the author has been posted on mathoverflow, \cite{F13}. \vspace*{.5cm} In Section~\ref{methods} we present a method for finding solutions to this problem in general. In the last section we apply this method to some special cases and thereby offer a solution to Problem B3 in \cite[p.~52]{CFG91}. \section{Methods}\label{methods} \subsection{Setting up the optimization problem} Let $P$ and $Q$ be polytopes, let $p$ be the dimension of $P$ and $q$ be the dimension of~$Q$. We assume $q\geq p$; otherwise $P$ cannot be included in~$Q$. Let $H_1,\dots,H_m$ be the defining half spaces of $Q$, such that \[Q=\bigcap_{k=1}^mH_k\] and let $w_1,\dots,w_n$ denote the vertices of~$P$. We formulate the problem of finding the largest polytope $P'$ such that $P'$ is contained in $Q$ and similar to $P$ as a quadratic maximization problem. 
\begin{problem}\label{prob} {\centering\fbox{% \begin{minipage}{.77\textwidth} \begin{description} \item[Input data:] \[\text{halfspaces }H_1,\dots,H_m\text{ of }Q\text{, vertices }w_1,\dots,w_n\text{ of }P.\]% \item[Variables:] \[s\text{ and }v_{ij}\text{ for }1\leq i\leq n, 1\leq j\leq q\] \item[Objective function:] \[\text{maximize } s\] \item[Linear constraints:] \[(v_{i1},\dots,v_{iq})\in H_k \text{ for }1\leq i \leq n, 1\leq k\leq m\] \item[Quadratic constraints:] \[\sum_{l=1}^q(v_{il}-v_{jl})^2=s||w_i-w_j||^2_2\text{ for }1\leq i<j\leq n\] \end{description} \end{minipage}}} \end{problem} \vspace*{.1cm} \noindent In this formulation the variable $s$ can be thought of as the square of the dilation factor between $P$ and~$P'$. The other variables are supposed to be the coordinates of the vertices of~$P'$. The linear constraints consist of $nm$ weak inequalities and make sure that $P'\subset Q$. The quadratic constraints assert that the distances between vertices of $P'$ agree with those of $P$ up to a dilation factor $\sqrt{s}$, which is the same for all pairs of vertices. Hence the quadratic equalities make sure that $P'$ is similar to~$P$. A global optimum of the optimization problem gives us a largest polytope $P'$ as desired. It might happen that there are combinatorially different optimal solutions to our problem. The goal in Section~\ref{loesung} is to identify \emph{one} of the optimal solutions. From that we can deduce the optimal dilation factor and hence answer the question: how large is the largest polytope $P'$ similar to $P$ and contained in~$Q$? We do not explain in full generality in what combinatorially different ways $P'$ can then be contained in $Q$, but rather describe one possible inclusion. \subsubsection{Improved formulation} The above formulation of Problem~\ref{prob} is particularly simple and straightforward. However an equivalent formulation using fewer variables and fewer quadratic constraints can be obtained as follows. 
Choose an affine basis from the set of vertices of~$P$. For the optimization problem we then keep only those variables $v_{ij}$ for which $w_i$ belongs to that affine basis and substitute all occurrences of the other variables by linear combinations of the former. These linear combinations can be obtained from the vertices of $P$, using the fact that we chose an affine basis. Using this substitution, we have $(p+1)q+1$ variables in total, and this number only depends on the dimensions of $P$ and $Q$ and not on the number of vertices of~$P$. In order to obtain fewer quadratic constraints we also focus on the chosen affine basis: it is enough to make sure that all the distances between pairs of vectors in the affine basis are scaled by the same factor $\sqrt{s}$. Since there are $p+1$ vectors in the affine basis, we obtain $\binom{p+1}{2}=\frac{1}{2}p(p+1)$ quadratic equations. Counting the number of linear inequalities we see that there are $nm$ of them, independently of the dimension of~$Q$. An axis-aligned bounding box for $Q$ gives bounds on the variables $v_{ij}$. We can trivially include a copy of $P$ whose circumsphere coincides with the insphere of $Q$, so a lower bound for $s$ is given by the Keplerian ratio \[s\geq\left(\frac{\text{inradius of }Q}{\text{circumradius of }P}\right)^2.\] In a similar way we could give an upper bound for $s$, but in view of the objective function this does not seem necessary. The equations used in setting up Problem~\ref{prob} depend on the position of~$Q$. If many of the defining hyperplanes of $Q$ are parallel to coordinate axes, then fewer variables appear in the linear equations. Also the choice of an affine basis of $P$ might influence the number of variables used in the equations. When solving Problem~\ref{prob} numerically, the input polytopes should be specified with higher precision than the precision desired for the solution. 
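As a small sanity check of these counts, the following sketch (my own illustration; the function and its name are not part of the paper) evaluates the size of the reduced formulation, e.g.\ for a tetrahedron ($p=3$, $n=4$ vertices) inside an icosahedron ($q=3$, $m=20$ facets):

```python
# Size of the reduced formulation of Problem 1 as derived in the text:
# (p+1)q + 1 variables, C(p+1, 2) quadratic equations (pairwise distances
# among the p+1 affine-basis points), and n*m linear inequalities.
from math import comb

def problem_size(p, q, n, m):
    """p = dim P, q = dim Q, n = #vertices of P, m = #half spaces of Q."""
    variables = (p + 1) * q + 1   # coordinates of the affine basis plus s
    quadratic = comb(p + 1, 2)    # one equation per pair of basis points
    linear    = n * m             # every vertex of P in every half space
    return variables, quadratic, linear

# Tetrahedron (p=3, n=4) inside an icosahedron (q=3, m=20 facets):
print(problem_size(3, 3, 4, 20))   # -> (13, 6, 80)
```

The $6$ quadratic equations in this example correspond to the pairwise distances among the $4$ vertices of the tetrahedron.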
If $P$ and $Q$ possess symmetry one can use this symmetry to get additional constraints. For example if $P$ and $Q$ are centrally symmetric, then it suffices to search for a maximal $P'$ among those copies of $P$ which are concentric with~$Q$. See \cite[\textsc{Observation} p.~288]{C80} for a simple proof. If $P$ and $Q$ are regular polytopes, one can say without loss of generality that one vertex of $P'$ must lie in one face of~$Q$. \subsubsection{Solving the optimization problem numerically} In order to solve Problem~\ref{prob} numerically we can use SCIP, which is a solver for mixed integer non-linear programming. This solver uses branch and bound techniques in order to find a \emph{global} optimum within a certain precision; see \cite{A09} and \cite{ABKW08} for details. We do not use SCIP's capability to handle integer variables, since all of our variables are continuous. \subsection{From numerical solutions to exact solutions} \subsubsection{Setting up the quadratic system} We obtain an approximate result for the global optimum of Problem~\ref{prob} with a certain precision; let us call the resulting polytope~$\widetilde{P}$. The goal is to derive exact values for the coordinates of a polytope $P'$ which is indistinguishable from $\widetilde{P}$ within the precision of the approximation. We can identify the vertices of $\widetilde{P}$ that lie in a face of~$Q$. If $\widetilde{P}$ has been calculated with sufficiently high precision (see the assumptions in Section~\ref{limits}), $\widetilde{P}$ will satisfy the same vertex-face incidences as an optimal solution~$P'$. In fact $P'$ is given by a real solution of a system of quadratic equations, which is derived from these incidences. An approximate real solution of this system is given by~$\widetilde{P}$. 
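Refining such an approximate real solution to higher precision can be sketched with a plain Newton iteration. The following Python toy (my own illustration; the helper and the $2\times 2$ example system are not the paper's actual incidence system) refines an approximate intersection point of two quadrics:

```python
# A generic multidimensional Newton iteration with a finite-difference
# Jacobian, sketching how an approximate solution such as P~ can be
# refined to high precision.  The 2x2 toy system below is my own
# illustration, not the actual incidence system of the paper.

def newton(F, x, steps=25, h=1e-7):
    """Refine x towards a root of F : R^n -> R^n."""
    n = len(x)
    for _ in range(steps):
        fx = F(x)
        # finite-difference Jacobian J[i][j] ~ dF_i / dx_j
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = list(x)
            xp[j] += h
            fp = F(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / h
        # solve J d = -fx by Gaussian elimination with partial pivoting
        A = [J[i][:] + [-fx[i]] for i in range(n)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= f * A[c][k]
        d = [0.0] * n
        for c in reversed(range(n)):
            d[c] = (A[c][n] - sum(A[c][k] * d[k] for k in range(c + 1, n))) / A[c][c]
        x = [x[i] + d[i] for i in range(n)]
    return x

# toy quadratic system: unit circle intersected with the parabola y = x^2
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[1] - v[0] ** 2]
x = newton(F, [0.8, 0.6])
```

With exact arithmetic replaced by higher-precision floats, the same loop drives the residual down to the working precision, which is what Step 1 of the next subsection requires.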
\subsubsection{Solving the quadratic system}\label{method} A numerical solution to this quadratic system with arbitrary precision can be obtained using Newton's method, and the approximate solution $\widetilde{P}$ gives a good starting point. If all the defining hyperplanes of $Q$ are defined in terms of algebraic numbers, solutions of the quadratic system must be algebraic. In case the system obtained in this way is too complicated to be solved by hand or automatically by a computer algebra system, we can attempt to find solutions by using the following three-step approach. We already have an approximate real solution given by~$\widetilde{P}$. \begin{enumerate} \item[Step 1]\label{step1} Numerically approximate the solution to high precision, for example using the multi-dimensional Newton's method. \item[Step 2]\label{step2} For each variable guess the algebraic number close to the approximation using integer relation algorithms such as LLL (\cite{LLL82}). \item[Step 3]\label{step3} Verify the solution by exact calculation in the field of real algebraic numbers. \end{enumerate} We can expect to find solutions if they are algebraic numbers with minimal polynomials of low degree and small coefficients. See Section~\ref{loesung} for two successful applications of this method. This method can in principle be applied to any given system of equations with algebraic solutions for which we can obtain high-precision numerical approximate solutions. \subsection{Limitations of the method}\label{limits} The solver SCIP, which can be used for solving Problem~\ref{prob}, finds a \emph{global} optimum, but the calculations are done only with a certain prescribed precision. In general it might be the case that there exists a maximizer $P'$, which attains the maximal dilation factor $\sqrt{s}$, and a second locally maximal feasible solution $P''$, with dilation factor $\sqrt{s-\varepsilon}$, for a small~$\varepsilon>0$. 
Indeed it is possible to construct examples of $P$ and $Q$ where this is the case for arbitrarily small $\varepsilon$; take for example $P$ and $Q$ to both be the same rectangle with almost equal side lengths. Hence in order to make sure that we have indeed found an optimal solution to Problem~\ref{prob}, we make the following assumptions. \begin{assumption}\label{assu1} The solution $\widetilde{P}$ to Problem~\ref{prob} has sufficient precision such that there is only one local maximum $P'$ near~$\widetilde{P}$. \end{assumption} \begin{assumption}\label{assu2} Problem~\ref{prob} has been solved with sufficient precision such that the dilation factor $\sqrt{s}$ of the local maximum $P'$ near $\widetilde{P}$ is the \emph{global} maximum. \end{assumption} \begin{assumption}\label{assu3} Problem~\ref{prob} has been solved with sufficient precision such that $\widetilde{P}$ and the local maximum $P'$ near $\widetilde{P}$ satisfy the same vertex-face incidences with~$Q$. \end{assumption} The precision necessary for the solution to satisfy these properties depends on $P$ and $Q$, and since there exist examples where the global maximum and the second largest local maximum are arbitrarily close, it is in general not possible to prescribe the precision necessary for Assumptions~\ref{assu1}-\ref{assu3} to hold. Assumptions~\ref{assu1}-\ref{assu3} also deal with possible numerical mistakes or bugs of a solver for Problem~\ref{prob}. If Assumptions~\ref{assu1} and \ref{assu2} hold and we can, because of Assumption~\ref{assu3}, identify an exact algebraic solution near $P'$, this will be a maximizer of the problem. In any case, even if the assumptions do not hold, we get a lower bound if we can solve the system derived from the approximate solution~$\widetilde{P}$. In the calculations in Section~\ref{loesung} we do not attempt to prove that Assumptions~\ref{assu1}-\ref{assu3} hold, but we state the precision which was used to solve the problems. 
In this sense our calculations below do not prove optimality but provide putatively optimal results. \section{Results}\label{loesung} \subsection{Inclusions of platonic solids} When each of $P$ and $Q$ is taken to be one of the $5$ platonic solids, i.e.\ regular three-dimensional polyhedra, we can consider $20$ non-trivial inclusions. Croft found optimal pairs in $14$ out of these $20$ cases and proved optimality in \cite{C80}. In the following we assume that the regular three-dimensional polyhedron $Q$ has side length~$1$. We abbreviate tetrahedron, cube, octahedron, dodecahedron and icosahedron by $T$, $C$, $O$, $D$ and $I$ respectively and denote the golden ratio by~$\phi$. With the methods described above we are able to confirm all the known cases and answer all six unknown cases. The solver used was SCIP version 3.1.0 with a precision set to $10^{-10}$. With the improved formulation described above the calculations for all 20 inclusions took a few hours on a single core of a Xeon CPU running at 3 GHz, using less than 8 GB of RAM. Some cases were solved in less than a second. \begin{table}[H] \centering \begin{tabular}{|m{.16\textwidth}|m{.16\textwidth}|m{.16\textwidth}|m{.16\textwidth}|m{.16\textwidth}|} \hline &\input{./CT.tikz}& \input{./OT.tikz}& \input{./DT.tikz}& \input{./IT.tikz}\\ \hline \input{./TC.tikz}&& \input{./OC.tikz}& \input{./DC.tikz}& \input{./IC.tikz}\\ \hline \input{./TO.tikz}& \input{./CO.tikz}&& \input{./DO.tikz}& \input{./IO.tikz}\\ \hline \input{./TD.tikz}& \input{./CD.tikz}& \input{./OD.tikz}&& \input{./ID.tikz}\\ \hline \input{./TI.tikz}& \input{./CI.tikz}& \input{./OI.tikz}& \input{./DI.tikz}&\\ \hline \end{tabular} \caption{Maximal platonic solids included in a platonic solid}\label{alltikz} \end{table} \pagebreak The tables below give decimal approximations and symbolic values of the side length of a largest copy of $P$ inside $Q$, where $P$ and $Q$ range over the platonic solids. 
For completeness we restate the results of Croft; he gives a similar but incomplete table in \cite[p.~295]{C80}. We correct three typos in his table; the corresponding cells are \emph{emphasized}. New results are marked with a star~($\star$). \begin{table}[H] \noindent\resizebox{1 \textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline\backslashbox{\scriptsize$Q$}{\scriptsize$P$} &$T$&$C$&$O$&$D$&$I$\\ \hline $T$& & 0.29590654 & 0.50000000 & $\star$ 0.16263158 & 0.27009076 \\ \hline $C$&1.4142136 & & 1.0606602 & \emph{0.39428348} & 0.61803399 \\ \hline $O$&1.0000000 & 0.58578644 & &$\star$ 0.31340182 & 0.54018151 \\ \hline $D$& \emph{2.2882456} & 1.6180340 & \emph{1.8512296} & & $\star$ 1.3090170\\ \hline $I$&$\star$ 1.3474429 & $\star$ 0.93874890 & 1.1810180 & $\star$ 0.58017873 &\\ \hline \end{tabular}} \end{table} \begin{table}[H] \extrarowsep=2mm \resizebox{1 \textwidth}{!}{ \begin{tabu}{|c|c|c|c|c|c|} \hline\backslashbox{\scriptsize$Q$}{\scriptsize$P$} &$T$&$C$&$O$&$D$&$I$\\\hline $T$& & $\frac{1}{1+\frac{2}{3}\sqrt{3}+\frac{1}{2}\sqrt{6}}$& $\frac{1}{2}$ & $\star d$ & $\frac{1}{\phi^2\sqrt{2}}$ \\ \hline $C$& $\sqrt{2}$ & & $\frac{3}{4}\sqrt{2}$ & \scriptsize$\frac{1}{\sqrt{2}\phi^3}(1-\frac{1}{2}\sqrt{10}+\frac{1}{2}\sqrt{2}+\sqrt{5})$ & $\frac{1}{\phi}$ \\ \hline $O$& $1$ & $2-\sqrt{2}$ &&\large$\star\frac{(25\sqrt{2})-(9 \sqrt{10})}{22}$ & $\frac{\sqrt{2}}{\phi^2}$ \\ \hline $D$& $\phi\sqrt{2}$ &$\phi$&$\frac{\phi^2}{\sqrt{2}}$ & & \large$\star\frac{1}{2\phi}+1$ \\ \hline $I$&$\star t$ &\large $\star\frac{5+7\sqrt{5}}{22}$ &\scriptsize$\frac{1}{2}(1-\frac{1}{2}\sqrt{10}+\frac{1}{2}\sqrt{2}+\sqrt{5})$&\large $\star\frac{15-\sqrt{5}}{22}$&\\ \hline \end{tabu}}\caption*{ \noindent$\phi = \text { golden ratio }$\\ $t = \text{ zero near } 1.3 \text{ of } 5041x^{32} - 1318386 x^{30} + 60348584 x^{28} - 924552262 x^{26} + 5246771058 x^{24} - 15736320636 x^{22} + 29448527368 x^{20} - 37805732980 x^{18}\\ + 35173457839 x^{16} - 24298372458 x^{14} + 12495147544 x^{12} - 
4717349124x^{10}\\ + 1256858478 x^8- 217962112 x^6+21904868 x^4 - 1536272 x^2 + 160801$\\ $d = \text{ zero near } 0.16 \text{ of } 4096x^{16} - 3701760x^{14} + 809622720x^{12} - 17054118000x^{10} + 79233311025x^8 - 94166084250x^6 + 31024053000x^4 - 3236760000x^2 + 65610000$ } \end{table} \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \input{./DI2.tikz} \subcaption{D in I} \end{subfigure}~ \begin{subfigure}[b]{0.49\textwidth} \input{./ID2.tikz} \subcaption{I in D} \end{subfigure} \caption{Self reciprocal cases} \end{figure} \noindent For the $6$ previously unknown cases we give a description of an optimal position. \subsubsection{Dodecahedron in icosahedron} For $D$ in $I$ we are in a concentric situation. The five vertices of one face of $D$ lie on the five edges of $I$ incident to a common vertex, one on each. The five vertices of the opposite face of $D$ also lie on five edges of $I$ incident to a common vertex, namely the vertex of $I$ antipodal to the one mentioned before. The other ten vertices of $D$ lie in the interior of faces of~$I$. The side length is $$\frac{15-\sqrt{5}}{22}\approx 0.58017873.$$ \subsubsection{Icosahedron in dodecahedron} For $I$ in $D$ we are also in a concentric situation; each of the $12$ vertices of $I$ lies in the interior of one of the $12$ faces of $D$ and in each face of $D$ there is one vertex of~$I$. Let us position $D$ in the usual fashion such that $6$ of its edges are parallel to the $3$ coordinate axes. To each of the $12$ vertices on these edges of $D$ we associate the unique face which contains one but not the other vertex of the edge in its boundary. This gives us pairs $v,f$ of vertices and faces of~$D$. For each pair $v,f$ a vertex of $I$ lies on the bisector of $f$ which goes through $v$, and its position on the bisector is the point where the bisector is divided into two parts such that the larger part has $\frac{\phi}{2}$ times the length of the whole bisector. 
The position of the vertex of $I$ is closer to $v$, and the absolute distance to $v$ is $(1-\frac{\phi}{2})\cdot \frac{1}{2}\sqrt[4]{5}\phi^{\frac{3}{2}}=\frac{\sqrt[4]{5}}{4\sqrt{\phi}}$. (Remember we assume that $D$ has side length $1$, which results in a bisector of length $\frac{1}{2}\sqrt[4]{5}\phi^{\frac{3}{2}}$.) The edge length of $I$ obtained in this way is \[\frac{1}{2\phi}+1\approx 1.3090170.\] \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \input{./CI2.tikz} \subcaption{C in I} \end{subfigure}~ \begin{subfigure}[b]{0.49\textwidth} \input{./DO2.tikz} \subcaption{D in O}\label{f2b} \end{subfigure} \caption{Two reciprocal cases} \end{figure} \subsubsection{Cube in icosahedron} This is again a concentric situation. For $C$ in $I$, two vertices of one edge of $C$ lie in the interior of two adjacent edges of $I$, which are not contained in the same face. The vertices of the antipodal edge of $C$ lie in the interior of the corresponding antipodal edges of $I$. The other $4$ vertices of $C$ lie in the interior of faces of $I$. The side length is $$\frac{5+7\sqrt{5}}{22}\approx 0.93874890.$$ \subsubsection{Dodecahedron in octahedron} Again this is a concentric situation. Put two opposite edges of $D$ in a hyperplane spanned by $4$ vertices of $O$. Four faces of $O$ each contain an edge of $D$, and the other four faces of $O$ each contain only one vertex of $D$. The incidences can be seen in Figure~\ref{f2b}; vertices of $D$ which lie in the interior of a face of $O$ are marked white. See the considerations about reciprocity below. \noindent For $D$ in $O$ the maximum is \[\frac{(25\sqrt{2})-(9 \sqrt{10})}{22}\approx0.31340182.\] \subsubsection*{Reciprocity of \texorpdfstring{$C\subset I$}{C in I} and \texorpdfstring{$D\subset O$}{D in O}} If $P\subset Q$ are concentric and $P$ is maximal in $Q$, we can take polar reciprocals and get $Q^\circ\subset P^\circ$, such that $Q^\circ$ is maximal in $P^\circ$. 
Since $C^\circ=O$ and $I^\circ=D$, we can check that the two previous cases are reciprocal: \[\frac{(25\sqrt{2})-(9 \sqrt{10})}{22}\left(\frac{\phi^3}{\sqrt{2}}\right)=\frac{5+7\sqrt{5}}{22}.\] Concentric $C$ and $O$, which are reciprocal with respect to the unit sphere, have a constant product of their edge lengths, namely $2\sqrt{2}$. Similarly, for concentric, reciprocal $I$ and $D$ this product equals $\frac{4}{\phi^3}$. The factor $\frac{\phi^3}{\sqrt{2}}$ is the quotient of these two numbers. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \input{./TI2.tikz} \subcaption{T in I}\label{f3a} \end{subfigure}~ \begin{subfigure}[b]{0.49\textwidth} \input{./DT2.tikz} \subcaption{D in T}\label{f3b} \end{subfigure} \caption{Two cases with more involved solutions}\label{f3} \end{figure} \subsubsection{Tetrahedron in icosahedron} The incidences of $T$ in $I$ are best seen in Figure~\ref{f3a}: one vertex of $T$ coincides with one vertex $v$ of $I$, another vertex of $T$ lies on an edge of $I$ which is incident neither to the vertex $v$ nor to its antipode, and the two remaining vertices lie in the interior of faces of $I$. Although in this case the resulting system can be solved more or less automatically by the computer algebra system Mathematica 9 (version 8 was not able to perform the calculation), we use the methods described in Section~\ref{method}. We choose two variables each for the barycentric coordinates of the two vertices in the interior of faces of $I$, and one variable for the barycentric coordinates of the vertex in the interior of an edge of $I$. Together with a variable $t$ for the side length of $T$, i.e.\ the dilation factor, this results in a system of $6$ quadratic equations in $6$ variables. The $6$ equations express that all $6$ edges are of length $t$. We use the open source computer algebra system \emph{sage} \cite{sage}. 
For the Newton method, i.e.\ Step~1, we use scipy \cite{scipy}, and for the integer relation, i.e.\ Step~2, PARI \cite{PARI2} is used. It is sufficient to obtain 800 decimal digits in Step~1 of the method described in Section~\ref{method} in order to obtain the exact values for the variables in Step~2. The exact edge length is the zero near $1.3474429$ of this polynomial: \[ \begin{array}{l}5041 t^{32}-1318386 t^{30}+60348584 t^{28} -924552262 t^{26}+5246771058 t^{24}\\ -15736320636 t^{22} +29448527368 t^{20}-37805732980 t^{18}+35173457839 t^{16} \\ -24298372458 t^{14}+12495147544 t^{12}-4717349124 t^{10} +1256858478 t^8\\-217962112 t^6 +21904868 t^4-1536272 t^2 +160801.\end{array}\] \subsubsection{Dodecahedron in tetrahedron} The incidences are best seen in Figure~\ref{f3b}: a complete face of $D$ is contained in one face of $T$, two vertices of $D$ lie in another face of $T$, and the two other faces of $T$ contain one vertex of $D$ each. We choose a variable $d$ for the side length of $D$ and four additional variables that describe the position of the vertices of $D$ that lie in a face of $T$ which is not the face containing a complete face of $D$. Making sure that the edges between these four vertices have the correct length results again in a system of $6$ quadratic equations, now with $5$ variables, which can be successfully solved as in the previous case. In this case 350 decimal digits suffice to find solutions in the field of real algebraic numbers. The exact edge length is the zero near $0.16263158$ of this polynomial: \[ \begin{array}{l} 4096 d^{16} - 3701760 d^{14} + 809622720 d^{12} - 17054118000 d^{10} + 79233311025 d^8 - \\94166084250 d^6 + 31024053000 d^4 - 3236760000 d^2 + 65610000. \end{array}\] \section{Further applications} Possibly interesting situations where the method of this paper could be applied include the following cases. \begin{enumerate}[a)] \item\label{e1} Take $P$ and $Q$ to be (regular) polygons. 
\item Take $P$ and $Q$ to be regular polytopes of dimension greater than~$3$. \item Take $P$ to be an $n$-cube and $Q$ an $m$-cube with $n<m$. \item Take $P$ to be a regular $n$-simplex and $Q$ an $m$-cube with $n\leq m$. \item Take $P$ to be a regular $n$-simplex and $Q$ a regular $m$-simplex with $n<m$. \item Take $Q$ to be any polytope and $P$ some projection of $Q$. \end{enumerate} For the first case, i.e.\ finding the largest regular $n$-gon in a regular $m$-gon, the author has checked the conjecture of Dilworth and Mane \cite[Section 9]{DM10} for coprime $m$ and $n$ up to a precision of $10^{-10}$ for all pairs $m,n$ with $m,n\leq120$. It is possible to modify Problem~\ref{prob} in order to solve similar packing problems. \section*{Acknowledgements}\label{ackref} I would like to thank Ambros Gleixner, Günter M. Ziegler, Hartmut Monien, Louis Theran and Peter Bürgisser for fruitful discussions. \bibliographystyle{alpha}
\section{Introduction} Continuously time evolving dynamical systems are one of the fundamental theoretical tools extensively used in every branch of the natural sciences for modeling the evolution of natural phenomena. Their usefulness in scientific applications is largely determined by their predictive power, which in turn is mostly determined by the stability of their solutions. In a realistic setup, some uncertainty in the measured initial conditions of a physical system always exists. Therefore a mathematical model that is physically meaningful must offer information and control on the time evolution of the deviations of the trajectories of the dynamical system with respect to a given reference trajectory. It is important to note that a local understanding of the stability is as important as the global understanding of the late-time deviations. From a mathematical point of view, the global stability of the solutions of dynamical systems is described by the well known theory of Lyapunov stability. In this approach to stability the basic quantities are the Lyapunov exponents, measuring the exponential deviations from a given reference trajectory \cite{1,2}. However, it is usually very difficult to determine the Lyapunov exponents analytically. Therefore, various numerical methods for their calculation have been developed, and are applied in the study of dynamical systems \cite{3}-\cite{12}. Even though the methods of the Lyapunov stability analysis are well established and well understood, it is important to adopt different points of view in the study of the stability of dynamical systems. Once such a study is done, one can compare the obtained alternative results with the corresponding Lyapunov linear stability analysis. An important alternative approach to the study of dynamical systems is represented by what we may call the geometro-dynamical approach. 
An example of such an approach is the Kosambi-Cartan-Chern (KCC) theory, which was initiated in the pioneering works of Kosambi \cite{Ko33}, Cartan \cite{Ca33} and Chern \cite{Ch39}, respectively. The KCC theory is inspired by, and based on, the geometry of Finsler spaces. Its basic idea is the fundamental assumption that there is a one-to-one correspondence between a second-order dynamical system and the geodesic equations in an associated Finsler space (for a recent review of the KCC theory see \cite{rev}). From a geometric point of view, the KCC theory is a differential geometric theory of the variational equations describing the deviations of the whole trajectory of a dynamical system with respect to the nearby ones \cite{An00}. In this geometrical description, to each dynamical system one associates a non-linear connection and a Berwald type connection. With the help of these two connections five geometrical invariants are obtained. The most important of them is the second invariant, also called the curvature deviation tensor. Its importance lies in the fact that it gives the Jacobi stability of the system \cite{rev, An00, Sa05,Sa05a}. The KCC theory has been extensively applied to the study of different physical, biochemical or engineering systems \cite{Sa05, Sa05a, An93, YaNa07, Ha1, Ha2, T0, T1, Ab1, Ab2, Ab3, Ab4, Ha3}. An alternative geometrization method for dynamical systems was introduced in \cite{Pet1} and \cite{Kau}, and further developed in \cite{Pet0}-\cite{Pet4}. Applications to the Henon-Heiles system and Bianchi type IX cosmological models were also investigated. In particular, in \cite{Pet0} a theoretical approach describing geometrically the behavior of dynamical systems, and of their chaotic properties, was considered. In this case a Finsler space was adopted as the underlying manifold. 
The properties of the Finsler space allow the description of a wide class of dynamical systems, including those with potentials depending on time and velocities. These are systems for which the Riemannian geometry approach is generally unsuitable. The Riemannian geometric approach to dynamical systems is based on the well-known result that the flow associated with a time-independent Hamiltonian \begin{equation} H=\frac{1}{2}\delta ^{ab}p_{a}p_{b}+V\left( x^{a}\right) , \end{equation} can be described as a geodesic flow in a curved, but conformally flat, manifold \cite{Kau}. With the introduction of a metric of the form $ds^{2}=W\left( x^{a}\right) \delta _{ab}dx^{a}dx^{b}$, in which the conformal factor is given by $W\left( x^{a}\right) =E-V\left( x^{a}\right)$, where $E$ is the conserved energy associated with the time-independent Hamiltonian $H$, it follows that in the metric $g_{ab}=W\left( x^{a}\right) \delta _{ab}$ the geodesic equation of motion is completely equivalent to the Hamilton equations \cite{Kau} \begin{equation} \frac{dx^{a}}{dt}=\frac{\partial H}{\partial p_{a}},\qquad \frac{dp_{a}}{dt}=-\frac{\partial H}{\partial x^{a}}. \label{Ham} \end{equation} This result implies that the confluence, or divergence, of nearby trajectories $x^{a}(s)$ and $[x+\xi ]^{a}(s)$ of the Hamiltonian dynamical system is determined by the Jacobi equation, i.e., the equation of geodesic deviation, which for the present case takes the following form, \begin{equation} \frac{D^{2}\xi ^{a}}{Ds^{2}}=R_{bcd}^{a}u^{b}u^{d}\xi ^{c}\equiv -K_{c}^{a}\xi ^{c}. \label{JR} \end{equation} In Eq.~(\ref{JR}) $R_{bcd}^{a}$ is the Riemann tensor associated with the metric $g_{ab}$, and $D/Ds=u^{a}\nabla _{a}$ denotes a directional derivative along the velocity field $u^{a}=dx^{a}/ds$. Linear stability of the trajectory $x^{a}(s)$ is thus related to the Riemann curvature $R_{bcd}^{a}$ or, more exactly, to the curvature $K_{c}^{a}$. 
If, for example, $R_{bcd}^{a}$ is everywhere negative, then it follows that $K_{c}^{a}$ always has one or more negative eigenvalues, and therefore the trajectory must be linearly unstable \cite{Kau}. There are a large number of mathematical results on the geometrization of dynamical systems. For example, from a geometric point of view, in \cite{Punzi} the global and local stability of solutions of continuously evolving dynamical systems was reconsidered, and the local stability was defined based on the choice of a linear connection. Note that an important point in favor of the use of a linear connection is that it is naturally defined for any dynamical system $(S,X)$, and not only for those related to second-order evolution equations. An important testing ground of the KCC theory is represented by the study of two-dimensional autonomous systems and of their stability properties. Such a study was performed in \cite{Sa05a, Sa05} for two dimensional systems of the form \begin{equation}\label{4n} \frac{du}{dt}=f(u,v),\qquad \frac{dv}{dt}=g(u,v), \end{equation} under the assumption that the point $(0,0)$ is a fixed point, i.e. $f(0,0)=g(0,0)=0$. By relabeling $v$ as $x$, and $g(u,v)$ as $y$, and by assuming that $g_u|_{(0,0)}\neq 0$, one can eliminate the variable $u$. Moreover, since $(u,v)=(0,0)$ is a fixed point, from the Implicit Function Theorem it follows that the equation $g(u,x)-y=0$ has a solution $u=u(x,y)$ in the vicinity of $(x,y)=(0,0)$. Since $\ddot x = \dot g = g_u \, f + g_v \, y$, we obtain an autonomous one-dimensional second-order equation, equivalent to the system (\ref{4n}), namely \be\label{5n} \ddot x^1 + g^1(x,y) = 0, \ee where \begin{equation} g^1(x,y)=-g_u(u(x,y),x) \, f(u(x,y),x) - g_v(u(x,y),x) \, y. \end{equation} The Jacobi stability properties of Eq.~(\ref{5n}) can be studied by using the KCC theory \cite{Sa05a,Sa05}, and the comparative study of the Jacobi and Lyapunov stability can be performed in detail. 
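As a quick numerical sanity check of this reduction (an illustration added here, not taken from the cited papers), one can integrate a concrete planar system and its reduced second-order equation side by side. The choice $f(u,v)=-u-v$, $g(u,v)=u$ is a hypothetical example: then $g_u=1$, $g_v=0$, $y=g(u,v)=u$, and the reduced equation is $\ddot x+\dot x+x=0$ with $x=v$.

```python
# Hypothetical example (not from the cited papers): check that the planar
# system du/dt = f(u,v), dv/dt = g(u,v) with f = -u - v, g = u and the
# reduced second-order equation x'' + x' + x = 0 (x = v, y = u) produce
# the same trajectory for x = v.

def rk4(deriv, state, t, dt, steps):
    """Classical fourth-order Runge-Kutta integrator for a list-valued state."""
    for _ in range(steps):
        k1 = deriv(t, state)
        k2 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
        k3 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
        k4 = deriv(t + dt, [s + dt * k for s, k in zip(state, k3)])
        state = [s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
        t += dt
    return state

def planar(t, s):              # the first-order system in (u, v)
    u, v = s
    return [-u - v, u]         # f(u, v), g(u, v)

def reduced(t, s):             # the reduced equation, written in (x, y)
    x, y = s
    return [y, -y - x]         # x' = y, y' = -g^1(x, y) = -(x + y)

u0, v0 = 0.7, 0.2              # initial data; note that y0 = g(u0, v0) = u0
end_planar = rk4(planar, [u0, v0], 0.0, 1e-3, 5000)
end_reduced = rk4(reduced, [v0, u0], 0.0, 1e-3, 5000)
print(abs(end_planar[1] - end_reduced[0]))   # v(t) and x(t) agree
```

Both integrations realize the same discrete map in permuted coordinates, so the printed difference is at round-off level.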
In the present paper we introduce an alternative view of the KCC theory by adopting the perspective of first-order dynamical systems. After a brief presentation of the general formalism, we concentrate our attention on the simple, but important, case of two dimensional autonomous dynamical systems, whose properties are studied in detail. Instead of reducing the two-dimensional autonomous system to a single second-order differential equation of the form (\ref{5n}), we transform the system, by taking the time derivative of each equation, into an equivalent system of two second-order differential equations. We also clarify the relation of the Jacobi stability approach with the classical Lyapunov stability theory. As a physical application of our formalism we apply it to the study of two dimensional Hamiltonian systems, describing physical processes with one degree of freedom. The Hamiltonian system is transformed into an equivalent system of two second-order differential equations, which can be studied geometrically, similarly to the geodesic equations of an associated Finsler space. We obtain the geometrical quantities describing the geometrized Hamiltonian system, as well as the conditions for its Jacobi stability. The present paper is organized as follows. The KCC theory is presented in Section \ref{kcc}. The applications of the KCC stability theory to the case of two-dimensional systems are considered in Section~\ref{2dim}. The comparison between the linear Lyapunov stability and the Jacobi stability is performed in Section~\ref{comp}. The application of the KCC theory to two dimensional Hamiltonian systems is discussed in Section~\ref{aHam}. We discuss and conclude our results in Section~\ref{concl}. 
\section{Brief review of the KCC theory and of the Jacobi stability} \label{kcc} In the present Section we briefly summarize the basic concepts and results of the KCC theory, and we introduce the relevant notations (for a detailed presentation see \cite{rev} and \cite{An00}). \subsection{Geometrical interpretation of dynamical systems} In the following we assume that $\mathcal{M}$ is a real, smooth $n$-dimensional manifold, and we denote by $T\mathcal{M}$ its tangent bundle. On an open connected subset $\Omega $ of the Euclidean $(2n+1)$-dimensional space $R^{n}\times R^{n}\times R^{1}$, we introduce a $(2n+1)$-dimensional coordinate system $\left(x^i,y^i,t\right)$, $i=1,2,...,n$, where $\left( x^{i}\right) =\left( x^{1},x^{2},...,x^{n}\right) $, $\left( y^{i}\right) =\left( y^{1},y^{2},...,y^{n}\right) $ and $t$ is the usual time coordinate. The coordinates $y^i$ are defined as \begin{equation} \left( y^{i}\right) =\left( \frac{dx^{1}}{dt},\frac{dx^{2}}{dt},...,\frac{dx^{n}}{dt}\right) . \end{equation} A basic assumption in our approach is that the time coordinate $t$ is an absolute invariant. Therefore, the only admissible coordinate transformations are \begin{equation} \tilde{t}=t,\qquad \tilde{x}^{i}=\tilde{x}^{i}\left( x^{1},x^{2},...,x^{n}\right) ,\qquad i\in \left\{1,2,...,n\right\} . \label{ct} \end{equation} Following \cite{Punzi}, we assume that a deterministic dynamical system can be defined as a set of formal rules that describe the evolution of points in a set $S$ with respect to an external time parameter $t\in T$, which can be discrete or continuous. More exactly, a dynamical system is a map \cite{Punzi} \begin{equation} \phi:T \times S \rightarrow S,\qquad (t,x)\mapsto \phi (t,x), \end{equation} which satisfies the condition $\phi (t , \cdot) \circ \phi (s , \cdot)=\phi (t+s , \cdot)$, $\forall t ,s\in T$. For realistic dynamical systems that can model natural phenomena, additional structures need to be added to the above definition. 
In many situations of physical interest the equations of motion of a dynamical system follow from a Lagrangian $L$ via the Euler-Lagrange equations, \begin{equation} \frac{d}{dt}\frac{\partial L}{\partial y^{i}}-\frac{\partial L}{\partial x^{i}}=F_{i},\qquad i=1,2,...,n, \label{EL} \end{equation} where $F_{i}$, $i=1,2,...,n$, is the external force. Note that the triplet $\left( M,L,F_{i}\right) $ is called a Finslerian mechanical system \cite{MiFr05,MHSS}. If the Lagrangian $L$ is regular, it follows that the Euler-Lagrange equations defined in Eq.~(\ref{EL}) are equivalent to a system of second-order ordinary (usually nonlinear) differential equations \begin{equation} \frac{d^{2}x^{i}}{dt^{2}}+2G^{i}\left( x^{j},y^{j},t\right) =0,\qquad i\in \left\{ 1,2,...,n\right\} , \label{EM} \end{equation} where each function $G^{i}\left( x^{j},y^{j},t\right) $ is $C^{\infty }$ in a neighborhood of some initial conditions $\left( \left( x\right) _{0},\left( y\right) _{0},t_{0}\right) $ in $\Omega $. The fundamental idea of the KCC theory is that if an arbitrary system of second-order differential equations of the form (\ref{EM}) is given, with no \textit{a priori} Lagrangian function assumed, one can still study the behavior of its trajectories by analogy with the trajectories of the Euler-Lagrange system. \subsection{The non-linear connection and the KCC invariants associated to a dynamical system} As a first step in the analysis of the geometry associated to the dynamical system defined by Eqs.~(\ref{EM}), we introduce a nonlinear connection $N$ on $M$, with coefficients $N_{j}^{i}$, defined as \cite{MHSS} \begin{equation} \label{NC} N_{j}^{i}=\frac{\partial G^{i}}{\partial y^{j}}. 
\end{equation} Geometrically, the nonlinear connection $N_{j}^{i}$ can be interpreted in terms of a dynamical covariant derivative $\nabla ^N$: for two vector fields $v$, $w$ defined over a manifold $M$, we define the covariant derivative $\nabla ^N$ as \cite{Punzi} \begin{equation} \label{con} \nabla _v^Nw=\left[v^j\frac{\partial }{\partial x^j}w^i+N^i_j(x,y)w^j\right]\frac{\partial }{\partial x^i}. \end{equation} For $N_i^j(x,y)=\Gamma _{il}^j(x)y^l$, from Eq.~(\ref{con}) we recover the definition of the covariant derivative for the special case of a standard linear connection, as defined in Riemannian geometry. For the non-singular coordinate transformations introduced through Eqs.~(\ref{ct}), the KCC-covariant differential of a vector field $\xi ^{i}(x)$ on the open subset $\Omega \subseteq R^{n}\times R^{n}\times R^{1}$ is defined as \cite{An93,An00,Sa05,Sa05a} \begin{equation} \frac{D\xi ^{i}}{dt}=\frac{d\xi ^{i}}{dt}+N_{j}^{i}\xi ^{j}. \label{KCC} \end{equation} For $\xi ^{i}=y^{i}$ we obtain \begin{equation} \frac{Dy^{i}}{dt}=N_{j}^{i}y^{j}-2G^{i}=-\epsilon ^{i}. \end{equation} The contravariant vector field $\epsilon ^{i}$ defined on $\Omega $ is called the first KCC invariant. Now we vary the trajectories $x^{i}(t)$ of the system (\ref{EM}) into nearby ones according to the rule \begin{equation} \tilde{x}^{i}\left( t\right) =x^{i}(t)+\eta \xi ^{i}(t), \label{var} \end{equation} where $\left| \eta \right| $ is a small parameter, and $\xi ^{i}(t)$ are the components of a contravariant vector field defined along the trajectory $x^{i}(t)$. By substituting Eqs.~(\ref{var}) into Eqs.~(\ref{EM}), and by taking the limit $\eta \rightarrow 0$, we obtain the deviation, or Jacobi, equations in the form \cite{An93,An00,Sa05,Sa05a} \begin{equation} \frac{d^{2}\xi ^{i}}{dt^{2}}+2N_{j}^{i}\frac{d\xi ^{j}}{dt}+2\frac{\partial G^{i}}{\partial x^{j}}\xi ^{j}=0. 
\label{def} \end{equation} Eq.~(\ref{def}) can be rewritten in a covariant form with the use of the KCC-covariant derivative as \begin{equation} \frac{D^{2}\xi ^{i}}{dt^{2}}=P_{j}^{i}\xi ^{j}, \label{JE} \end{equation} where we have denoted \begin{equation} \label{Pij} P_{j}^{i}=-2\frac{\partial G^{i}}{\partial x^{j}}-2G^{l}G_{jl}^{i}+y^{l}\frac{\partial N_{j}^{i}}{\partial x^{l}}+N_{l}^{i}N_{j}^{l}+\frac{\partial N_{j}^{i}}{\partial t}, \end{equation} and we have introduced the Berwald connection $G_{jl}^{i}$, defined as \cite{rev, An00, An93, MHSS, Sa05, Sa05a} \begin{equation} G_{jl}^{i}\equiv \frac{\partial N_{j}^{i}}{\partial y^{l}}. \end{equation} The tensor $P_{j}^{i}$ is called the second KCC-invariant, or the deviation curvature tensor, while Eq.~(\ref{JE}) is called the Jacobi equation. When the system of equations (\ref{EM}) describes the geodesic equations in either Riemann or Finsler geometry, Eq.~(\ref{JE}) is the Jacobi field equation. The trace $P$ of the curvature deviation tensor can be obtained from the relation \begin{equation} P=P_{i}^{i}=-2\frac{\partial G^{i}}{\partial x^{i}}-2G^{l}G_{il}^{i}+y^{l}\frac{\partial N_{i}^{i}}{\partial x^{l}}+N_{l}^{i}N_{i}^{l}+\frac{\partial N_{i}^{i}}{\partial t}. \end{equation} One can also introduce the third, fourth and fifth invariants of the system (\ref{EM}), which are defined as \cite{An00} \begin{equation} \label{31} P_{jk}^{i}\equiv \frac{1}{3}\left( \frac{\partial P_{j}^{i}}{\partial y^{k}}-\frac{\partial P_{k}^{i}}{\partial y^{j}}\right) ,\qquad P_{jkl}^{i}\equiv \frac{\partial P_{jk}^{i}}{\partial y^{l}},\qquad D_{jkl}^{i}\equiv \frac{\partial G_{jk}^{i}}{\partial y^{l}}. \end{equation} Geometrically, the third invariant $P_{jk}^{i}$ can be interpreted as a torsion tensor. The fourth and fifth invariants $P_{jkl}^{i}$ and $D_{jkl}^{i}$ are called the Riemann-Christoffel curvature tensor, and the Douglas tensor, respectively \cite{rev, An00}. Note that in a Berwald space these tensors always exist. 
In the KCC theory they describe the geometrical properties and interpretation of a system of second-order differential equations. \subsection{The definition of the Jacobi stability} The behavior of the trajectories of the dynamical system given by Eqs.~(\ref{EM}) in a vicinity of a point $x^{i}\left( t_{0}\right) $ is extremely important in many physical, chemical or biological applications. For simplicity we take $t_{0}=0$ in what follows. We consider the trajectories $x^{i}=x^{i}(t)$ as curves in the Euclidean space $\left( R^{n},\left\langle .,.\right\rangle \right) $, where $\left\langle .,.\right\rangle $ is the canonical inner product of $R^{n}$. We assume that the deviation vector $\xi $ obeys the initial conditions $\xi \left( 0\right) =O$ and $\dot{\xi}\left( 0\right) =W\neq O$, where $O\in R^{n}$ is the null vector \cite{rev, An00, Sa05,Sa05a}. Thus, for the focusing tendency of the trajectories around $t_{0}=0$ we introduce the following description: if $\left| \left| \xi \left( t\right) \right| \right| <t^{2}$, $t\approx 0^{+}$, then the trajectories are bunching together. But if $\left| \left| \xi \left( t\right) \right| \right| >t^{2}$, $t\approx 0^{+}$, the trajectories have a dispersing behavior \cite{rev, An00, Sa05,Sa05a}. The focusing/dispersing tendency of the trajectories of a dynamical system can also be described in terms of the deviation curvature tensor in the following way: the trajectories of the system of equations (\ref{EM}) are bunching together for $t\approx 0^{+}$ if and only if the real parts of the eigenvalues of the deviation tensor $P_{j}^{i}\left( 0\right) $ are strictly negative. On the other hand, the trajectories are dispersing if and only if the real parts of the eigenvalues of $P_{j}^{i}\left( 0\right) $ are strictly positive \cite{rev, An00, Sa05,Sa05a}. 
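For a single autonomous equation $\ddot{x}+2G(x,y)=0$ ($n=1$, $\partial N/\partial t=0$), Eq.~(\ref{Pij}) reduces to $P=-2\,\partial G/\partial x-2G\,G_{11}^{1}+y\,\partial N/\partial x+N^{2}$ with $N=\partial G/\partial y$, and the sign criterion above can be illustrated numerically. The two sample choices below are added here for illustration only: $G=x/2$ (harmonic oscillator, $P=-1<0$, Jacobi stable) and $G=-x/2$ ($P=+1>0$, Jacobi unstable).

```python
# Illustrative sketch (one-dimensional, autonomous case): evaluate the
# deviation curvature P = -2 dG/dx - 2 G G^1_11 + y dN/dx + N^2, with
# N = dG/dy and G^1_11 = dN/dy, via central finite differences.

def d(fun, x, y, wrt, h=1e-5):
    """Central difference of fun(x, y) with respect to 'x' or 'y'."""
    if wrt == 'x':
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

def deviation_curvature(G, x, y):
    N = lambda x, y: d(G, x, y, 'y')     # nonlinear connection N^1_1
    berwald = d(N, x, y, 'y')            # Berwald coefficient G^1_11
    return (-2 * d(G, x, y, 'x') - 2 * G(x, y) * berwald
            + y * d(N, x, y, 'x') + N(x, y) ** 2)

# Harmonic oscillator x'' + x = 0, i.e. G = x/2: Jacobi stable.
P_stable = deviation_curvature(lambda x, y: x / 2, 0.3, 0.1)
# Inverted oscillator x'' - x = 0, i.e. G = -x/2: Jacobi unstable.
P_unstable = deviation_curvature(lambda x, y: -x / 2, 0.3, 0.1)
print(P_stable, P_unstable)
```

Since both sample functions are linear, the central differences are exact and the two printed values are $-1$ and $+1$.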
Based on the above considerations we introduce the concept of the Jacobi stability for a dynamical system as follows \cite{rev, An00,Sa05,Sa05a}: \textbf{Definition:} If the system of differential equations Eqs.~(\ref{EM}) satisfies the initial conditions $\left| \left| x^{i}\left( t_{0}\right) -\tilde{x}^{i}\left( t_{0}\right) \right| \right| =0$, $\left| \left| \dot{x}^{i}\left( t_{0}\right) -\dot{\tilde{x}}^{i}\left( t_{0}\right) \right| \right| \neq 0$, with respect to the norm $\left| \left| .\right| \right| $ induced by a positive definite inner product, then the trajectories of Eqs.~(\ref{EM}) are Jacobi stable if and only if the real parts of the eigenvalues of the deviation tensor $P_{j}^{i}$ are strictly negative everywhere. Otherwise, the trajectories are Jacobi unstable. \section{The case of two dimensional dynamical systems} \label{2dim} In the present Section we consider the case of the two-dimensional dynamical systems already studied in \cite{Sa05,Sa05a}. However, the present formulation is in a parametric form. Let's consider the following two dimensional dynamical system, \begin{equation} \frac{dx^{1}}{dt}=f\left( x^{1},x^{2}\right) , \label{ex1} \end{equation} \begin{equation} \frac{dx^{2}}{dt}=g\left( x^{1},x^{2}\right) . \label{ex2} \end{equation} We point out that we regard a solution of Eqs. (\ref{ex1}) and (\ref{ex2}) as a flow $\varphi _{t}:D\subset \mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$, or, more generally, $\varphi _{t}:D\subset M\rightarrow M$, where $M$ is a smooth surface in $\mathbb{R}^{3}$. The canonical lift of $\varphi _{t}$ to the tangent bundle $TM$ can be geometrically defined as \begin{equation} \hat{\varphi}_{t}:TM\rightarrow TM, \label{3.2} \end{equation} \begin{equation} \label{3.2a} \hat{\varphi}_{t}(u)=\left( \varphi _{t}(u),\dot{\varphi}_{t}(u)\right) . 
\end{equation} In terms of dynamical systems, we simply take the derivative of Eqs.~(\ref{ex1}) and (\ref{ex2}) and obtain \begin{equation} \frac{d^{2}x^{1}}{dt^{2}}=f_{1}\left( x^{1},x^{2}\right) y^{1}+f_{2}\left( x^{1},x^{2}\right) y^{2}, \label{3.3a} \end{equation} \begin{equation} \frac{d^{2}x^{2}}{dt^{2}}=g_{1}\left( x^{1},x^{2}\right) y^{1}+g_{2}\left( x^{1},x^{2}\right) y^{2}, \label{3.3b} \end{equation} where we have denoted \begin{equation} f_{1}:=\frac{\partial f}{\partial x^{1}},\quad f_{2}:=\frac{\partial f}{\partial x^{2}},\quad g_{1}:=\frac{\partial g}{\partial x^{1}},\quad g_{2}:=\frac{\partial g}{\partial x^{2}},\quad y^{1}=\frac{dx^{1}}{dt},\quad y^{2}=\frac{dx^{2}}{dt}. \end{equation} In other words, on $TM$ we obtain \begin{equation} \frac{dy^{1}}{dt}=f_{1}\left( x^{1},x^{2}\right) y^{1}+f_{2}\left( x^{1},x^{2}\right) y^{2}, \end{equation} \begin{equation} \frac{dy^{2}}{dt}=g_{1}\left( x^{1},x^{2}\right) y^{1}+g_{2}\left( x^{1},x^{2}\right) y^{2}, \end{equation} where $\left( x^{1},x^{2},y^{1},y^{2}\right) $ are local coordinates on $TM$. Hence we can see that the above system is actually a linear dynamical system on the fiber $T_{\left( x^{1},x^{2}\right) }M$. Moreover, the system of Eqs. (\ref{3.3a}) and (\ref{3.3b}) can be written as \begin{equation} \frac{d^{2}x^{1}}{dt^{2}}+\left( -f_{1}y^{1}-f_{2}y^{2}\right) =0, \end{equation} \begin{equation} \frac{d^{2}x^{2}}{dt^{2}}+\left( -g_{1}y^{1}-g_{2}y^{2}\right) =0. \end{equation} By comparison with Eqs. (\ref{EM}) we have \begin{equation} \begin{pmatrix} G^{1} \\ G^{2} \end{pmatrix} =-\frac{1}{2}\begin{pmatrix} f_{1}y^{1}+f_{2}y^{2} \\ g_{1}y^{1}+g_{2}y^{2} \end{pmatrix} =-\frac{1}{2}\begin{pmatrix} f_{1} & f_{2} \\ g_{1} & g_{2} \end{pmatrix} \begin{pmatrix} y^{1} \\ y^{2} \end{pmatrix} =-\frac{1}{2}J\cdot y, \end{equation} where \begin{equation} J=J\left( f,g\right) =\begin{pmatrix} f_{1} & f_{2} \\ g_{1} & g_{2} \end{pmatrix} \end{equation} is the Jacobian of the dynamical system Eqs.~(\ref{ex1})-(\ref{ex2}). Using Eq. 
(\ref{NC}) we obtain for the nonlinear connection \begin{equation} \left( N_{j}^{i}\right) _{i,j=1,2}=\begin{pmatrix} N_{1}^{1} & N_{2}^{1} \\ N_{1}^{2} & N_{2}^{2} \end{pmatrix} =\begin{pmatrix} \frac{\partial G^{1}}{\partial y^{1}} & \frac{\partial G^{1}}{\partial y^{2}} \\ \frac{\partial G^{2}}{\partial y^{1}} & \frac{\partial G^{2}}{\partial y^{2}} \end{pmatrix} =-\frac{1}{2}\begin{pmatrix} f_{1} & f_{2} \\ g_{1} & g_{2} \end{pmatrix} =-\frac{1}{2}J\left( f,g\right) . \end{equation} Therefore all the components of the Berwald connection vanish, \begin{equation} G_{jl}^{i}:=\frac{\partial N_{j}^{i}}{\partial y^{l}}\equiv 0. \end{equation} Then, for the components of the deviation curvature tensor $\left( P_{j}^{i}\right) $, given in Eq.~(\ref{Pij}), we obtain \begin{eqnarray*} P_{1}^{1} &=&-2\left( \frac{\partial G^{1}}{\partial x^{1}}\right) +y^{1}\frac{\partial N_{1}^{1}}{\partial x^{1}}+y^{2}\frac{\partial N_{1}^{1}}{\partial x^{2}}+N_{l}^{1}N_{1}^{l}=\frac{\partial }{\partial x^{1}}\left( f_{1}y^{1}+f_{2}y^{2}\right) -\frac{1}{2}y^{1}\frac{\partial f_{1}}{\partial x^{1}}-\frac{1}{2}y^{2}\frac{\partial f_{1}}{\partial x^{2}}+N_{l}^{1}N_{1}^{l} \\ &=&\frac{1}{2}f_{11}y^{1}+\frac{1}{2}f_{12}y^{2}+N_{l}^{1}N_{1}^{l}, \end{eqnarray*} \begin{equation*} P_{1}^{2}=-2\left( \frac{\partial G^{2}}{\partial x^{1}}\right) +y^{1}\frac{\partial N_{1}^{2}}{\partial x^{1}}+y^{2}\frac{\partial N_{1}^{2}}{\partial x^{2}}+N_{l}^{2}N_{1}^{l}=\frac{\partial }{\partial x^{1}}\left( g_{1}y^{1}+g_{2}y^{2}\right) -\frac{1}{2}y^{1}\frac{\partial g_{1}}{\partial x^{1}}-\frac{1}{2}y^{2}\frac{\partial g_{1}}{\partial x^{2}}+N_{l}^{2}N_{1}^{l}, \end{equation*} and so on. 
Therefore we get \begin{eqnarray*} \left( P_{j}^{i}\right) &=&\begin{pmatrix} P_{1}^{1} & P_{2}^{1} \\ P_{1}^{2} & P_{2}^{2} \end{pmatrix} =\frac{1}{2}\begin{pmatrix} f_{11}y^{1}+f_{12}y^{2} & f_{12}y^{1}+f_{22}y^{2} \\ g_{11}y^{1}+g_{12}y^{2} & g_{12}y^{1}+g_{22}y^{2} \end{pmatrix} +\frac{1}{4}J_{l}^{i}\left( f,g\right) \times J_{j}^{l}\left( f,g\right) \\ &=&\frac{1}{2}\begin{pmatrix} f_{11}y^{1}+f_{12}y^{2} & g_{11}y^{1}+g_{12}y^{2} \\ f_{12}y^{1}+f_{22}y^{2} & g_{12}y^{1}+g_{22}y^{2} \end{pmatrix}^{t}+\frac{1}{4}J_{l}^{i}\left( f,g\right) \times J_{j}^{l}\left( f,g\right) \\ &=&\frac{1}{2}\begin{pmatrix} \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix} \begin{pmatrix} y^{1} \\ y^{2} \end{pmatrix} \Bigg| \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} \begin{pmatrix} y^{1} \\ y^{2} \end{pmatrix} \end{pmatrix}^{t}+\frac{1}{4}J_{l}^{i}\left( f,g\right) \times J_{j}^{l}\left( f,g\right) . \end{eqnarray*} Therefore we have obtained the following \textbf{Proposition 3.1.} \textit{The curvature deviation tensor associated to a second order dynamical system is given by} \begin{equation} P=\frac{1}{2}\begin{pmatrix} H_{f}\cdot y & H_{g}\cdot y \end{pmatrix}^{t}+\frac{1}{4}J^{2}\left( f,g\right) , \end{equation} \textit{where $H_{f}=\begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix}$ is the Hessian of $f$, and similarly for $g$.} \section{Lyapunov and Jacobi stability of two dimensional dynamical systems}\label{comp} As in \cite{rev}, and without losing generality, we assume that $p=(0,0)$ is a fixed point of Eqs.~(\ref{ex1}) and (\ref{ex2}). Then the Lyapunov stability is governed by the characteristic equation \begin{equation} \mu ^{2}-\mathrm{tr}\;A\cdot \mu +\det A=0, \label{p} \end{equation} where $\mathrm{tr}\;A$ and $\det A$ are the trace and determinant of the matrix \begin{equation} A:=J(f,g)|_{(0,0)}=\left.\begin{pmatrix} f_{1} & f_{2} \\ g_{1} & g_{2} \end{pmatrix}\right|_{(0,0)}. 
\end{equation}
We also denote by $\Delta =\left( \mathrm{tr}\;A\right) ^{2}-4\det A$ the discriminant of Eq.~(\ref{p}). By descending again on $\mathbb{R}^{2}$ (or $M$), and evaluating at the fixed point $\left( 0,0\right) $, we obtain $\left( y^{1},y^{2}\right) |_{(0,0)}=(0,0)$, and therefore
\begin{equation}
P|_{(0,0)}=\left( \frac{1}{2}A\right) ^{2}.
\end{equation}
To be more precise, we have
\begin{equation}
P|_{(0,0)}=\frac{1}{4}
\begin{pmatrix}
f_{1} & f_{2} \\
g_{1} & g_{2}
\end{pmatrix}
\begin{pmatrix}
f_{1} & f_{2} \\
g_{1} & g_{2}
\end{pmatrix}
=\frac{1}{4}
\begin{pmatrix}
f_{1}^{2}+f_{2}g_{1} & f_{1}f_{2}+f_{2}g_{2} \\
f_{1}g_{1}+g_{1}g_{2} & f_{2}g_{1}+g_{2}^{2}
\end{pmatrix}
.
\end{equation}
Hence we have
\begin{equation}
\mathrm{tr}\;P=\frac{1}{4}\left( f_{1}^{2}+2f_{2}g_{1}+g_{2}^{2}\right) =\frac{1}{4}\left( f_{1}^{2}+2f_{1}g_{2}+g_{2}^{2}-2f_{1}g_{2}+2f_{2}g_{1}\right) =\frac{1}{4}\left[ \left( \mathrm{tr}\;A\right) ^{2}-2\det A\right] ,
\end{equation}
\begin{equation}
\det P=\frac{1}{16}\left( f_{1}g_{2}-f_{2}g_{1}\right) ^{2}=\left( \frac{1}{4}\det A\right) ^{2}.
\end{equation}
The eigenvalues of the matrix $\left( P_{j}^{i}\right) $ are given by the characteristic equation
\begin{equation}
\lambda ^{2}-\mathrm{tr}\;P\cdot \lambda +\det P=0,
\end{equation}
and its discriminant is
\begin{equation}
\tilde{\Delta}=\left( {\rm tr}\;P\right) ^{2}-4\det P=\frac{1}{16}\left[ \left( {\rm tr}\;A\right) ^{2}-2\det A\right] ^{2}-\frac{1}{4}\left( \det A\right) ^{2}=\frac{1}{16}\left( {\rm tr}\;A\right) ^{2}\left[ \left( {\rm tr}\;A\right) ^{2}-4\det A\right] .
\end{equation}
Thus we obtain the following

\textbf{Computational Lemma 4.1.} \textit{The trace, determinant and discriminant of the characteristic equation of the deviation curvature matrix $P$ are}:
\begin{equation*}
\mathrm{tr}\;P=\frac{1}{4}\left[ \left( {\rm tr}\;A\right) ^{2}-2\det A\right] ,
\end{equation*}
\begin{equation*}
\det P=\frac{1}{16}\left( \det A\right) ^{2},
\end{equation*}
\begin{equation*}
\tilde{\Delta }=\frac{1}{16}\left( {\rm tr}\;A\right) ^{2}\Delta ,
\end{equation*}
\textit{where $\Delta =\left( {\rm tr}\;A\right) ^{2}-4\det A$ is the discriminant of Eq.~(\ref{p}).}

We recall now some general results of linear algebra.

{\bf Lemma 4.2.} Let $A$ be a $2\times 2$ matrix, and denote by $\lambda _{1}$, $\lambda _{2}$ its eigenvalues. Then (i) the eigenvalues of $k\cdot A$ are $k\cdot \lambda _{1}$, $k\cdot \lambda _{2}$, for any scalar $k\neq 0$; (ii) the eigenvalues of $A^{k}:=\underbrace{A\cdot A\cdots A}_{k\ \mathrm{times}}$ are $\left( \lambda _{1}\right) ^{k}$, $\left( \lambda _{2}\right) ^{k}$.

From here it follows:

\textbf{Lemma 4.3.} \textit{If $\lambda _{1}$, $\lambda _{2}$ are the eigenvalues of $A$, then $\mu _{1}=\left( \frac{1}{2}\lambda _{1}\right) ^{2} $, $\mu _{2}=\left( \frac{1}{2}\lambda _{2}\right) ^{2}$ are the eigenvalues of $P$.}

Remark: The formulas of Lemma 4.3 imply
\begin{equation*}
S:=\mu _{1}+\mu _{2}=\frac{1}{4}\left( \lambda _{1}^{2}+\lambda _{2}^{2}\right) =\frac{1}{4}\left[ \left( \lambda _{1}+\lambda _{2}\right) ^{2}-2\lambda _{1}\lambda _{2}\right] ,
\end{equation*}
\begin{equation*}
\rho :=\mu _{1}\cdot \mu _{2}=\left( \frac{1}{2}\lambda _{1}\right) ^{2}\cdot \left( \frac{1}{2}\lambda _{2}\right) ^{2}=\frac{1}{16}\left( \lambda _{1}\lambda _{2}\right) ^{2}.
\end{equation*}
The above formulas are consistent with the Computational Lemma 4.1.
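The relations of Lemmas 4.1 and 4.3 can be verified numerically for a randomly chosen Jacobian; the following NumPy sketch checks them at the fixed point, where $P|_{(0,0)}=(A/2)^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))      # Jacobian J(f,g) at the fixed point (0,0)
P = (A / 2.0) @ (A / 2.0)        # P|_{(0,0)} = (A/2)^2

trA, detA = np.trace(A), np.linalg.det(A)
trP, detP = np.trace(P), np.linalg.det(P)

# Computational Lemma 4.1
assert np.isclose(trP, (trA**2 - 2*detA) / 4)
assert np.isclose(detP, detA**2 / 16)
assert np.isclose(trP**2 - 4*detP, trA**2 * (trA**2 - 4*detA) / 16)

# Lemma 4.3: the eigenvalues of P are (lambda_i / 2)^2
lam = np.linalg.eigvals(A).astype(complex)
mu = np.linalg.eigvals(P).astype(complex)
assert np.allclose(np.sort_complex(mu), np.sort_complex((lam / 2)**2))
```

The same assertions hold for any seed, since the identities are exact matrix identities in dimension two.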
\subsection{Comparison of Jacobi and Lyapunov stability}

\textbf{Definition} Let $p$ be a fixed point of the two-dimensional system $\dot{\mathbf{x}}=f(\mathbf{x})$, and denote by $\lambda_1$, $\lambda_2$ the two eigenvalues of $A := (Df)_{|p}$. The following classification of the fixed point $p$ is standard.
\begin{itemize}
\item[(I)] $\Delta >0$: \textbf{$\lambda_1$, $\lambda_2$ are real and distinct.}
\begin{itemize}
\item[I.1.] \textbf{$\lambda_1\cdot \lambda_2>0$ (the eigenvalues have the same sign)}: $p$ is called a \textit{node} or type I singularity; that is, every orbit tends to the origin in a definite direction as $t \to \infty$.
\begin{itemize}
\item[I.1.1.] \textbf{$\lambda_1, \lambda_2>0$}: $p$ is an \textit{unstable node}. \index{unstable node}
\item[I.1.2.] \textbf{$\lambda_1, \lambda_2<0$}: $p$ is a \textit{stable node}. \index{stable node}
\end{itemize}
\item[I.2.] \textbf{$\lambda_1\cdot \lambda_2<0$ (the eigenvalues have different signs)}: $p$ is an \textit{unstable fixed point}, or a \textit{saddle} point singularity.
\end{itemize}
\item[(II)] $\Delta <0$: \textbf{$\lambda_1$, $\lambda_2$ are complex, i.e. $\lambda_{1,2}=\alpha\pm i \beta$, $\beta \neq 0$.}
\begin{itemize}
\item[II.1.] \textbf{$\alpha\neq 0$}: $p$ is a \textit{spiral}, or a \textit{focus}; that is, the solutions approach the origin as $t\to \infty$, but not from a definite direction.
\begin{itemize}
\item[II.1.1.] \textbf{$\alpha<0$}: $p$ is a \textit{stable focus}.
\item[II.1.2.] \textbf{$\alpha>0$}: $p$ is an \textit{unstable focus}.
\end{itemize}
\item[II.2.] \textbf{$\alpha = 0$}: $p$ is a \textit{center}; \index{center} this means it is not stable in the usual sense, and we have to look at higher order derivatives.
\end{itemize}
\item[(III)] $\Delta =0$: \textbf{$\lambda_1$, $\lambda_2$ are equal, i.e. $\lambda_1=\lambda_2=\lambda$.}
\begin{itemize}
\item[III.1.]
If there are two linearly independent eigenvectors, we have a \textit{star singularity}, or a \textit{stable singular node} (the orbits are straight lines through the origin).
\item[III.2.] If there is only one linearly independent eigenvector, we have an \textit{improper node}, or \textit{unstable degenerate node}.
\end{itemize}
\end{itemize}
By combining the above results with the Computational Lemma 4.1, it follows that

I. $\tilde{\Delta}>0$: $\mu _1, \mu _2 \in \mathbb{R}$, and, being squares of real halves of eigenvalues, $\mu _1, \mu _2 \geq 0$.

I.1. $S>0$: $p$ is Jacobi unstable $\Leftrightarrow$
\begin{equation} \label{3.4}
\left(\mathrm{tr}\;A\right)^2-4\det A>0,\quad \left(\mathrm{tr}\;A\right)^2-2\det A>0.
\end{equation}

- for $\det A>0$, we must have $\left(\mathrm{tr}\;A\right)^2>4\det A>2\det A$, and hence both cases $\left(\mathrm{tr}\;A>0,\det A>0\right)$ and $\left(\mathrm{tr}\;A<0,\det A>0\right)$ imply Jacobi instability.

- for $\det A<0$, we obtain again Jacobi instability, because now the relations (\ref{3.4}) are identically satisfied.

I.2. $S<0$: this case is not possible algebraically.

II. $\tilde{\Delta}<0$: $\mu _{1}$, $\mu _{2} \in \mathbb{C}$ are complex conjugate. (Remark that $\tilde{\Delta}<0 \Rightarrow \Delta <0$, i.e. $\lambda _1$, $\lambda _2$ are complex, and hence $\det A>0$.)

II.1. $S>0$: $p$ is Jacobi unstable $\Leftrightarrow$
\begin{equation} \label{3.41}
\left(\mathrm{tr}\;A\right)^2-4\det A<0,\quad \left(\mathrm{tr}\;A\right)^2-2\det A>0,
\end{equation}
that is, $2\det A<\left(\mathrm{tr}\;A\right)^2<4\det A$. This case is possible both for $\left(\mathrm{tr}\;A>0,\det A>0\right)$ and for $\left(\mathrm{tr}\;A<0,\det A>0\right)$. For $\det A<0$ this case is not possible algebraically.

II.2. $S<0$: $p$ is Jacobi stable $\Leftrightarrow$
\begin{equation} \label{3.42}
\left(\mathrm{tr}\;A\right)^2-4\det A<0,\quad \left(\mathrm{tr}\;A\right)^2-2\det A<0,
\end{equation}
that is, $\left(\mathrm{tr}\;A\right)^2<2\det A<4\det A$. This is possible both for $\left(\mathrm{tr}\;A>0,\det A>0\right)$ and for $\left(\mathrm{tr}\;A<0,\det A>0\right)$.

III.
$\tilde{\Delta}=0$: $\mu _1=\mu _2\in \mathbb{R}$, equivalent to $\Delta =0$ or $\mathrm{tr}\;A=0$.

- if $\Delta =0$, then $\lambda _1=\lambda _2\in \mathbb{R}$, and $p$ is a singular node or a degenerate node;

- if $\mathrm{tr}\;A=0$, $\Delta \neq 0$, it follows that either (i) $\Delta >0$, $\det A<0$, i.e. $p$ is a saddle point, or (ii) $\Delta <0$, $\det A>0$, i.e. $p$ is a center.

\subsection{Important remarks}

1. Assume $p$ is Jacobi stable; that is, by definition, we must have one of the following situations: (i) $\tilde{\Delta }>0$, $S<0$, or (ii) $\tilde{\Delta }<0$, $S<0$. As we have seen already, (i) is algebraically impossible; hence, if $p$ is Jacobi stable, it must follow that $\tilde{\Delta }<0 \Leftrightarrow \Delta <0$. Hence we have proved: If $p$ is Jacobi stable, then $\Delta <0$.

2. Conversely, assume $\Delta <0$. This is equivalent to $\tilde {\Delta }<0$ due to the Computational Lemma 4.1. Since $\Delta <0$ implies $\det A>0$, and $\lambda _{1,2}=\alpha \pm i\beta $, we have
\begin{equation}
\mu _1=\left(\frac{1}{2}\lambda _1\right)^2=\frac{1}{4}\left(\alpha +i\beta \right)^2=\frac{1}{4}\left[\left(\alpha ^2-\beta ^2\right)+2i\alpha \beta \right],
\end{equation}
\begin{equation}
\mu _2=\left(\frac{1}{2}\lambda _2\right)^2=\frac{1}{4}\left(\alpha -i\beta \right)^2=\frac{1}{4}\left[\left(\alpha ^2-\beta ^2\right)-2i\alpha \beta \right].
\end{equation}
From the above equations we obtain
\begin{equation}
S=\frac{1}{2}\left(\alpha ^2-\beta ^2\right).
\end{equation}
Therefore it follows that (i) $\alpha ^2-\beta ^2>0$ $\Rightarrow$ $p$ is Jacobi unstable, and (ii) $\alpha ^2-\beta ^2<0$ $\Rightarrow$ $p$ is Jacobi stable. Condition (ii) above does not appear in \cite{rev} due to the reduction of the two dimensional dynamical system to a one-dimensional SODE. We obtain

\textbf{Theorem 4.1.} 1. If $p$ is a Jacobi stable fixed point, then $\Delta <0$. 2.
If $\Delta <0$ for the fixed point $p$, then (i) if $\alpha ^2-\beta ^2>0$, then $p$ is Jacobi unstable; (ii) if $\alpha ^2-\beta ^2<0$, then $p$ is Jacobi stable, where $\lambda _{1,2}=\alpha \pm i\beta $ are the eigenvalues of $A$.

\section{Applications to two dimensional Hamiltonian systems} \label{aHam}

As a physical application of the formalism developed in this paper, in the present Section we consider the geometrical description, and the stability properties, of a physical system described by a Hamiltonian function $H=H(x,p)=H\left(x^1,x^2\right)$, where $H(x,p)\in C^n$, $n\geq 2$. From a physical point of view $x$ represents the particle's coordinate, while $p$ is its momentum. The motion of the system is described by a Hamiltonian system of equations in the plane.

\textbf{Definition.} A system of differential equations on $\mathbf{R}^{2}$ is called a conservative Hamiltonian system with one degree of freedom if it can be expressed in the form
\begin{equation}
\frac{dx^{1}}{dt}=\frac{\partial H\left( x^{1},x^{2}\right) }{\partial x^{2}}=H_{2}\left( x^{1},x^{2}\right) ,\quad \frac{dx^{2}}{dt}=-\frac{\partial H\left( x^{1},x^{2}\right) }{\partial x^{1}}=-H_{1}\left( x^{1},x^{2}\right) .  \label{54}
\end{equation}
By taking the total derivative of the Hamiltonian function, and with the use of Eqs.~(\ref{54}), we obtain
\begin{equation} \label{55}
\frac{dH}{dt}=\frac{\partial H\left(x^1,x^2\right)}{\partial x^1}\frac{dx^1}{dt}+\frac{\partial H\left(x^1,x^2\right)}{\partial x^2}\frac{dx^2}{dt}\equiv 0.
\end{equation}
Eq.~(\ref{55}) shows that $H\left(x^1,x^2\right)$ is constant along the solution curves of Eqs.~(\ref{54}). Hence the Hamiltonian function is a first integral and a constant of motion. Moreover, all the trajectories of the dynamical system lie on the contours defined by $H\left(x^1,x^2\right) = C$, where $C$ is a constant. From a physical point of view $H\left(x^1,x^2\right)$ represents the total energy, which is a conserved quantity.
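The conservation statement of Eq.~(\ref{55}) can be checked symbolically. The following SymPy sketch uses a sample pendulum-like Hamiltonian (a hypothetical choice; any smooth $H(x^1,x^2)$ works the same way) and substitutes Hamilton's equations into the total time derivative:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)

# sample pendulum-like Hamiltonian (hypothetical illustration)
H = x2**2 / (2*m) + (1 - sp.cos(x1))

# Hamilton's equations (54): dx1/dt = H_2, dx2/dt = -H_1
flow = {sp.Derivative(x1, t): sp.diff(H, x2),
        sp.Derivative(x2, t): -sp.diff(H, x1)}

# Eq. (55): the total time derivative of H vanishes along the flow
dHdt = sp.diff(H, t).subs(flow)
assert sp.simplify(dHdt) == 0
```

The two chain-rule terms cancel identically, which is the algebraic content of Eq.~(\ref{55}).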
By taking the derivatives of Eqs.~(\ref{54}) with respect to the time parameter $t$ we obtain first
\begin{equation}
\frac{d^{2}x^{1}}{dt^{2}}=H_{21}\left( x^{1},x^{2}\right) y^{1}+H_{22}\left( x^{1},x^{2}\right) y^{2},
\end{equation}
\begin{equation}
\frac{d^{2}x^{2}}{dt^{2}}=-H_{11}\left( x^{1},x^{2}\right) y^{1}-H_{12}\left( x^{1},x^{2}\right) y^{2},
\end{equation}
which can be written in the equivalent form
\begin{equation}
\frac{d^{2}x^{i}}{dt^{2}}+2G^{i}\left( x^{1},x^{2},y^{1},y^{2}\right) =0,\quad i=1,2,  \label{56}
\end{equation}
where
\begin{equation}
\begin{pmatrix}
G^{1} \\
G^{2}
\end{pmatrix}
=-\frac{1}{2}
\begin{pmatrix}
H_{21} & H_{22} \\
-H_{11} & -H_{12}
\end{pmatrix}
\begin{pmatrix}
y^{1} \\
y^{2}
\end{pmatrix}
=-\frac{1}{2}J_{H}\cdot y,
\end{equation}
where
\begin{equation}
J_{H}=J_{H}\left( H_{2},-H_{1}\right) =
\begin{pmatrix}
H_{21} & H_{22} \\
-H_{11} & -H_{12}
\end{pmatrix}
,
\end{equation}
is the Jacobian of the Hamiltonian system given by Eqs.~(\ref{54}). Eqs.~(\ref{56}) give the geometrical interpretation of a bidimensional (one degree of freedom) Hamiltonian system, showing that it can be studied by similar methods as the geodesics in a Finsler space. Using Eq.~(\ref{NC}) we obtain for the nonlinear connection associated to a Hamiltonian system the expressions
\begin{equation}
\left( N_{j}^{i}\right) _{i,j=1,2}=
\begin{pmatrix}
N_{1}^{1} & N_{2}^{1} \\
N_{1}^{2} & N_{2}^{2}
\end{pmatrix}
=-\frac{1}{2}
\begin{pmatrix}
H_{21} & H_{22} \\
-H_{11} & -H_{12}
\end{pmatrix}
=-\frac{1}{2}J_{H}\left( H_{2},-H_{1}\right) .
\end{equation}
Therefore, for a Hamiltonian system all the components of the Berwald connection vanish,
\begin{equation}
G_{jl}^{i}:=\frac{\partial N_{j}^{i}}{\partial y^{l}}\equiv 0.
\end{equation}
The components of the deviation curvature tensor of a Hamiltonian system can be obtained as
\begin{eqnarray*}
\left( P_{j}^{i}\right) _{H} &=&
\begin{pmatrix}
P_{1}^{1} & P_{2}^{1} \\
P_{1}^{2} & P_{2}^{2}
\end{pmatrix}
_{H}=\frac{1}{2}
\begin{pmatrix}
H_{211}y^{1}+H_{212}y^{2} & H_{212}y^{1}+H_{222}y^{2} \\
-H_{111}y^{1}-H_{112}y^{2} & -H_{112}y^{1}-H_{122}y^{2}
\end{pmatrix}
+ \\
&&\frac{1}{4}\left( J_{H}\right) _{l}^{i}\left( H_{2},-H_{1}\right) \times \left( J_{H}\right) _{j}^{l}\left( H_{2},-H_{1}\right) \\
&=&\frac{1}{2}
\begin{pmatrix}
\begin{pmatrix}
H_{211} & H_{212} \\
-H_{111} & -H_{112}
\end{pmatrix}
\begin{pmatrix}
y^{1} \\
y^{2}
\end{pmatrix}
\Bigg| &
\begin{pmatrix}
H_{212} & H_{222} \\
-H_{112} & -H_{122}
\end{pmatrix}
\begin{pmatrix}
y^{1} \\
y^{2}
\end{pmatrix}
\end{pmatrix}
^{t}+ \\
&&\frac{1}{4}\left( J_{H}\right) _{l}^{i}\left( H_{2},-H_{1}\right) \times \left( J_{H}\right) _{j}^{l}\left( H_{2},-H_{1}\right) .
\end{eqnarray*}
Therefore we have the following

\textbf{Proposition 5.1} \textit{The curvature deviation tensor associated to a Hamiltonian dynamical system is given by}
\begin{equation}
P_{H}=\frac{1}{2}
\begin{pmatrix}
\mathit{H}_{H_{2}}\cdot y & \mathit{H}_{-H_{1}}\cdot y
\end{pmatrix}
^{t}+\frac{1}{4}J_{H}^{2}\left( H_{2},-H_{1}\right) ,
\end{equation}
\textit{where $\mathit{H}_{H_{2}}=
\begin{pmatrix}
H_{211} & H_{212} \\
H_{212} & H_{222}
\end{pmatrix}
$ is the Hessian of $H_{2}$, and similarly $\mathit{H}_{-H_{1}}$ is the Hessian of $-H_{1}$.}

Hence in the following we can introduce the concept of the Jacobi stability of a Hamiltonian system by means of the following

\textbf{Definition.
}If the Hamiltonian system of Eqs.~(\ref{54}) satisfies the initial conditions $\left\vert \left\vert x^{i}\left( t_{0}\right) -\tilde{x}^{i}\left( t_{0}\right) \right\vert \right\vert =0$, $\left\vert \left\vert \dot{x}^{i}\left( t_{0}\right) -\dot{\tilde{x}}^{i}\left( t_{0}\right) \right\vert \right\vert \neq 0$, with respect to the norm $\left\vert \left\vert \cdot \right\vert \right\vert $ induced by a positive definite inner product, then the trajectories of the Hamiltonian dynamical system are Jacobi stable if and only if the real parts of the eigenvalues of the curvature deviation tensor $P_{H}$ are strictly negative everywhere. Otherwise, the trajectories are Jacobi unstable.

To illustrate the implications of the geometric approach introduced here we consider the simple case of the one dimensional conservative motion of a point particle with mass $m>0$ under the influence of an external potential $V(x)$, described by the Hamiltonian
\begin{equation} \label{Ham}
H=\frac{p^2}{2m}+V(x)=\frac{1}{2m}\left(x^2\right)^2+V\left(x^1\right).
\end{equation}
The equations of motion of the particle are
\begin{equation} \label{65}
\frac{dx^1}{dt}=\frac{1}{m}x^2,\quad \frac{dx^2}{dt}=-V^{\prime }\left(x^1\right),
\end{equation}
where in the following we denote by a prime the derivative with respect to the coordinate $x^1$. By taking the derivative with respect to time of Eqs.~(\ref{65}) we obtain first
\begin{equation} \label{66}
\frac{d^2x^1}{dt^2}=\frac{1}{m}\frac{dx^2}{dt}=\frac{1}{m}y^2,\quad \frac{d^2x^2}{dt^2}=-V^{\prime \prime }\left(x^1\right)\frac{dx^1}{dt}=-V^{\prime \prime }\left(x^1\right)y^1.
\end{equation}
Eqs.~(\ref{66}) can be written as
\begin{equation} \label{67}
\frac{d^2x^i}{dt^2}+2G^i\left(x^1,x^2,y^1,y^2\right)=0,\quad i=1,2,
\end{equation}
where
\begin{equation}
G^1\left(x^1,x^2,y^1,y^2\right)=-\frac{1}{2m}y^2,\quad G^2\left(x^1,x^2,y^1,y^2\right)=\frac{1}{2}V^{\prime \prime }\left(x^1\right)y^1.
\end{equation}
The components of the non-linear connection $N^i_j=\partial G^i/\partial y^j$ can be obtained as
\begin{equation}
N_1^1=\frac{\partial G^1}{\partial y^1}=0,\quad N^1_2=\frac{\partial G^1}{\partial y^2}=-\frac{1}{2m},\quad N^2_1=\frac{\partial G^2}{\partial y^1}=\frac{1}{2}V^{\prime \prime }\left(x^1\right),\quad N_2^2=\frac{\partial G^2}{\partial y^2}=0.
\end{equation}
With the use of Eq.~(\ref{Pij}) we obtain for the components of the deviation curvature tensor $P^i_j$ the expressions
\begin{equation}
P_1^1=-\frac{1}{4m}V^{\prime \prime }\left(x^1\right),\quad P^1_2=0,\quad P^2_1=-\frac{1}{2}V^{\prime \prime \prime }\left(x^1\right)y^1,\quad P_2^2=-\frac{1}{4m}V^{\prime \prime }\left(x^1\right).
\end{equation}
The eigenvalues $\lambda _{1,2}$ of the deviation curvature tensor are given by
\begin{equation}
\lambda _{1,2}=\frac{1}{2}\left[P_1^1+P_2^2\pm\sqrt{4P^1_2P^2_1+\left(P_1^1-P_2^2\right)^2}\right],
\end{equation}
and can be obtained explicitly as
\begin{equation}
\lambda_{1,2}=-\frac{1}{4m}V^{\prime \prime }\left(x^1\right).
\end{equation}
Therefore we have obtained the following

\textbf{Theorem 5.1} The trajectories of a one-dimensional Hamiltonian dynamical system with the point particle Hamiltonian given by Eq.~(\ref{Ham}) are Jacobi stable if and only if the potential $V(x)$ of the external forces satisfies the condition $V^{\prime \prime }(x)>0$ for all $x$.

From a physical point of view the condition $V^{\prime \prime }(x)>0$ implies that the potential $V(x)$ is at a minimum. More exactly, if $V^{\prime }\left(x_0\right)=0$, so that the physical system has an equilibrium state at the point $x_0$, the condition for Jacobi stability, $V^{\prime \prime }\left(x_0\right)>0$, requires that the potential $V$ has a minimum at $x=x_0$.
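Theorem 5.1 can be illustrated numerically. The sketch below builds the lower-triangular matrix $P^i_j$ of the point-particle system (note that $P^2_1$ carries a third derivative of $V$, consistent with the geodesic deviation equation) for a hypothetical double-well potential $V(x)=x^4-x^2$, and checks that both eigenvalues equal $-V''(x^1)/(4m)$:

```python
import numpy as np

# deviation curvature matrix for H = p^2/(2m) + V(x):
# P^1_1 = P^2_2 = -V''(x1)/(4m), P^1_2 = 0, P^2_1 = -(1/2) V'''(x1) y1
def P_matrix(m, Vpp, Vppp, x1, y1):
    return np.array([[-Vpp(x1) / (4*m), 0.0],
                     [-0.5 * Vppp(x1) * y1, -Vpp(x1) / (4*m)]])

# hypothetical double-well potential V(x) = x^4 - x^2
Vpp  = lambda x: 12*x**2 - 2
Vppp = lambda x: 24*x

m = 1.0
P = P_matrix(m, Vpp, Vppp, x1=0.8, y1=0.3)
lam = np.linalg.eigvals(P)

# both eigenvalues equal -V''(x1)/(4m): Jacobi stable exactly where V'' > 0
assert np.allclose(lam, -Vpp(0.8) / (4*m))
assert Vpp(0.8) > 0 and np.all(lam.real < 0)   # near a minimum: Jacobi stable
assert Vpp(0.0) < 0                            # at the local maximum: unstable
```

Near the wells ($V''>0$) the eigenvalues are negative and the trajectories are Jacobi stable; near the central maximum ($V''<0$) they are positive and the trajectories are Jacobi unstable, exactly as Theorem 5.1 states.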
For a point particle like two dimensional Hamiltonian system with one degree of freedom the equations of the geodesic deviation, satisfied by the deviation vector $\xi ^i$, $i=1,2$, given by Eq.~(\ref{def}), can be obtained as
\begin{equation}
\frac{d^2\xi ^1}{dt^2}-\frac{1}{m}\frac{d\xi ^2}{dt}=0,
\end{equation}
\begin{equation}
\frac{d^2\xi ^2}{dt^2}+V^{\prime \prime }\left(x^1\right)\frac{d\xi ^1}{dt}+V^{\prime \prime \prime }\left(x^1\right)y^1\xi ^1=0.
\end{equation}
Let us assume now that $\left( x^{1}=x_{0},x^{2}=0,y^{1}=0,y^{2}=0\right) $ is a critical point of the Hamiltonian system of Eqs.~(\ref{65}). Then the system of the geodesic deviation equations takes the form
\begin{equation}
\frac{d^{2}\xi ^{1}}{dt^{2}}-\frac{1}{m}\frac{d\xi ^{2}}{dt}=0,
\end{equation}
\begin{equation}
\frac{d^{2}\xi ^{2}}{dt^{2}}+V^{\prime \prime }\left( x_{0}\right) \frac{d\xi ^{1}}{dt}=0,
\end{equation}
and must be integrated with the initial conditions $\xi ^{1}(0)=0$, $\xi ^{2}(0)=0$, $\dot{\xi}^{1}(0)=\xi _{10}$, and $\dot{\xi}^{2}(0)=\xi _{20}$, respectively. The geodesic deviation equations for the point mass Hamiltonian have the general solution
\begin{equation}
\xi ^{1}(t)=\frac{1}{V^{\prime \prime }\left( x_{0}\right) }\left[ \sqrt{m}\xi _{10}\sqrt{V^{\prime \prime }\left( x_{0}\right) }\sin \left( \frac{\sqrt{V^{\prime \prime }\left( x_{0}\right) }}{\sqrt{m}}t\right) -\xi _{20}\cos \left( \frac{\sqrt{V^{\prime \prime }\left( x_{0}\right) }}{\sqrt{m}}t\right) +\xi _{20}\right] ,
\end{equation}
\begin{equation}
\xi ^{2}(t)=m\xi _{10}\left[ \cos \left( \frac{\sqrt{V^{\prime \prime }\left( x_{0}\right) }}{\sqrt{m}}t\right) -1\right] +\frac{\sqrt{m}\xi _{20}}{\sqrt{V^{\prime \prime }\left( x_{0}\right) }}\sin \left( \frac{\sqrt{V^{\prime \prime }\left( x_{0}\right) }}{\sqrt{m}}t\right) .
\end{equation}

\section{Discussions and final remarks}\label{concl}

In the present paper we have considered an alternative approach to the standard KCC theory for first order autonomous dynamical systems, based on a different transformation of the system to second order differential equations. The approach was presented in detail for two dimensional dynamical systems, for which the two basic stability analysis methods -- the (Lyapunov) linear stability analysis and the Jacobi stability analysis -- were discussed in detail. From the point of view of the KCC theory the present approach allows an extension of the geometric framework to first order systems, thus increasing the parameter space, and the predictive power, of the method. We have also found that there is a good correlation between the linear stability of the critical points and the Jacobi stability of the same points, the latter describing the robustness of the corresponding trajectory to a small perturbation \cite{Sa05}. On the other hand, the Jacobi stability is a very convenient way of describing the resistance of limit cycles to small perturbations of the trajectories. As an application of our approach we have considered the study of a bi-dimensional Hamiltonian system, describing the one dimensional (one degree of freedom) motion of a physical system. The KCC theory can provide an alternative, and very powerful, method for the geometrization of classical mechanical systems whose properties can be described by a Hamiltonian function. The transformation of the corresponding Hamilton equations to second order differential equations naturally allows their study in the same way as geodesics in an associated Finsler space, and gives the possibility of a full geometric description of the properties of the dynamical system in a {\it non-metric} setting.
This represents one of the basic differences, and advantages, of the KCC approach as compared to the alternative Jacobi and Eisenhart methods for geometrization \cite{PR}, which essentially require a metric as a starting point. It is important to emphasize that the advantages of the geometric approach to the description of dynamical systems are not only conceptual; the method also has predictive value. By starting from the deviation curvature tensor we can obtain effective stability conditions for physical systems. Moreover, in the present approach, the geodesic deviation equation can be formulated and solved rather easily (either analytically or numerically), and thus the behavior of the full perturbations of the trajectories near critical points can be studied in detail. To summarize our results, in the present paper we have introduced and studied in detail some geometrical theoretical tools necessary for an in-depth analysis and description of the stability properties of dynamical systems, which may play a fundamental role in our understanding of the evolution of natural phenomena.

\section*{Acknowledgments}
\section{Introduction to JLab}

The Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) is devoted to the investigation of the electromagnetic structure of mesons, nucleons, and nuclei using high energy and high duty-cycle electron and photon beams. CEBAF is a superconducting electron accelerator \cite{CEBAF} with a maximum energy of 6~GeV and 100\% duty cycle. Three electron beams with a maximum total current of 200~$\mu$A can be used simultaneously for electron scattering experiments in the experimental areas. The accelerator design concept is based on two parallel superconducting continuous-wave linear accelerators joined by magnetic recirculation arcs. The accelerating structures are five-cell superconducting niobium cavities with a nominal average energy gain of 5~MeV/m. The accelerator operation has met all design goals, achieving 5.7~GeV for physics running, and delivering three high quality beams with intensity ratios exceeding 10$^6$:1. The electron beam is produced using a strained GaAs photocathode, allowing the delivery of polarized electrons (P$_{e}~\geq$~75\%) simultaneously to all three halls. Three experimental areas are available for simultaneous experiments, the only restriction being that the beam energies have to be multiples of the single pass energy. The halls contain complementary equipment which covers a wide range of physics topics: Hall A \cite{halla} has two high resolution magnetic spectrometers with 10$^{-4}$ momentum resolution in a 10\% momentum bite, and a solid angle of 8~msr. Hall B houses the large acceptance spectrometer, CLAS \cite{clas}. Hall C uses a combination of a high momentum spectrometer ($10^{-3}$ momentum resolution, 7~msr solid angle, and maximum momentum of 7~GeV/c) and a short orbit spectrometer.
To illustrate the physics which is being addressed at Jefferson Lab, we have chosen three topics of current interest: measurements of parity violation in electron elastic scattering, the N-$\Delta$ transition form factors, and high-statistics searches for the $\Theta^+$ exotic strangeness +1 baryon with the CLAS. These programs have the common goal of probing the baryon structure which comes from quark pairs beyond the contribution from the standard three valence quarks.

\section{Strange quarks in the nucleon}

Early deep-inelastic scattering experiments demonstrated that the structure of the proton cannot be described by its $uud$ valence structure alone. For example, quarks carry only 50\% of the proton momentum and are responsible for less than 30\% of its spin \cite{spinannrev}. The fraction and distribution of $s$ and $\overline{s}$ quarks were also determined in $\nu$ and $\overline{\nu}$ scattering experiments. For example, NuTeV \cite{NuTeV} reports the fraction $(s+\overline{s})/(\overline{u}+\overline{d})$ to be 0.42 $\pm$ 0.07 $\pm$ 0.06 at a $Q^2$ of 16 GeV$^2$. But most of these sea quarks are very short lived ($\sim \hbar/\sqrt{Q^2}$) and arise from the perturbative evolution of QCD \cite{burkardt92}. The focus of this section is to describe experiments that are sensitive to the long-lived component of $s\overline{s}$ pairs which contributes to the static properties of the nucleon, such as its magnetic moment and charge radius. It is easy to understand how $s\overline{s}$ pairs in the sea can contribute to the proton magnetic moment in a simple hadronic picture \cite{hannelius}. During the process of $uuds\overline{s}$ fluctuations, the quarks will tend to arrange themselves into energetically favorable configurations, the lowest state being a $\Lambda K^+$. The $K^+$ must be emitted in an L=1 state to conserve angular momentum and parity.
The configuration in which the $\Lambda$ has its spin anti-parallel to the proton spin is twice as likely as the one in which the spins are parallel. In this case, the (positive) kaon will have $l_z=+1$ and contribute a positive amount to the magnetic moment of the proton. In addition, the spin of the (negative) s quark in the $\Lambda$ is anti-aligned with the proton spin, so it also gives a positive contribution to the magnetic moment. By convention, however, the usual definition of the strange magnetic moment ($G_M^s(Q^2=0)$) does not include the s-quark charge of $-\frac{1}{3}$, and therefore this picture actually predicts a \emph{negative} strange magnetic moment. Over the past decade there have been many different models \cite{pvmodels} used to estimate both the magnitude and sign of $G_M^s$, but all have been forced to make non-trivial approximations. Most models predict a negative $G_M^s$, as in the case of the naive hadronic model above, with a value in the range of $-0.8$~to~$0$. But it is clear for the moment that experimental measurements are required to guide our understanding of the contributions of strange quarks to the properties of the proton. Parity violation in electron scattering is a unique tool for probing the contribution of $s\overline{s}$ sea quarks to the structure of the nucleon. The weak interaction is probed by measuring the electron helicity dependence of the elastic scattering rate off unpolarized targets.
Experimental sensitivities of the order of a part per million are required to measure the parity violating asymmetry on a nucleon, which can be expressed as
\begin{eqnarray}
A^p_{PV} & = & {{d\sigma_R - d\sigma_L} \over {d\sigma_R + d\sigma_L}} = {{G_F Q^2} \over {4\pi\alpha\sqrt{2}}} \left[ {A_E + A_M + A_A} \over {\epsilon \left(G_E^\gamma\right)^2 + \tau\left(G_M^\gamma\right)^2} \right] ,
\label{eq:eq1}
\end{eqnarray}
where $A_E = \epsilon G_E^Z(Q^2) G_E^\gamma(Q^2)$, $A_M = \tau G_M^Z(Q^2) G_M^\gamma(Q^2)$ are the electric and magnetic $\gamma$-Z interference terms, $\tau = Q^2/4M_N^2$, and $\epsilon$ is the polarization of the virtual photon. The isolation of the electric and magnetic terms can be accomplished by taking advantage of the sensitivity to kinematics (through $\epsilon$ and $\tau$) and taking data at different scattering angles. The last term, $A_A$, picks out the contribution due to the axial form factor, but the forward-angle measurements are insensitive to this term, so we refer the reader to our references for details \cite{sample}. Finally, the flavor decomposition of the form factors can be determined uniquely assuming charge symmetry and by combining parity-violating asymmetries with measurements of the electric and magnetic form factors on the proton and neutron. The scattering amplitude from a zero spin isoscalar target, such as $^4He$, is particularly simple because it does not allow magnetic transitions, so the asymmetry is sensitive only to the isoscalar electric form factor:
\begin{eqnarray}
A^{He}_{PV} & = & {{G_F Q^2} \over {4\pi\alpha\sqrt{2}}} \left[ 4\sin^2{\theta_W} + {G_E^s \over {\frac{1}{2} \left( G_E^{\gamma p} + G_E^{\gamma n} \right)}} \right] .
\label{eq:eq2}
\end{eqnarray}
The contribution of the asymmetry which is attributable to strange quarks is obtained by subtracting the calculated asymmetry assuming no strange contribution, including radiative corrections and best estimates of the electromagnetic form factors.
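The part-per-million scale quoted above follows directly from the prefactor $G_F Q^2/(4\pi\alpha\sqrt{2})$. The following sketch evaluates it at $Q^2=0.1$~GeV$^2$, using the standard values $G_F \approx 1.166\times10^{-5}$~GeV$^{-2}$ and $\alpha \approx 1/137$ (not quoted in the text):

```python
import math

# overall scale of the PV asymmetry, G_F Q^2 / (4 pi alpha sqrt(2));
# G_F and alpha are standard values, assumed here rather than taken from the text
G_F = 1.166e-5          # Fermi constant, GeV^-2
alpha = 1 / 137.036     # fine-structure constant

def a_scale(Q2_GeV2):
    return G_F * Q2_GeV2 / (4 * math.pi * alpha * math.sqrt(2))

# at Q^2 = 0.1 GeV^2 the scale is of order 10^-5, i.e. a few ppm,
# which is why ppm-level experimental sensitivity is required
assert 5e-6 < a_scale(0.1) < 1.5e-5
print(a_scale(0.1))
```

The form-factor bracket in Eq.~(\ref{eq:eq1}) is of order unity, so the measured asymmetries are indeed at the few-ppm level.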
The uncertainties in the calculations are estimated and included separately when quoting the uncertainties in the strange form factors. Thus a fairly extensive experimental program is required to isolate all contributions.

\begin{figure}
\includegraphics[width=10cm]{2004_ALL_Elton.eps}
\caption{Constraints on the $G_E^s$ and $G_M^s$ strange form factors at $Q^2 = 0.1$~GeV$^2$. The curves give the 95\% contours. Sample model calculations are also indicated, labeled 1 \cite{pvtheory1}, 2 \cite{pvtheory2}, 3 \cite{pvtheory3}, and 4 \cite{pvtheory4}.
\label{fig:pv1}}
\end{figure}

There are two major experimental programs at JLab studying parity violation: HAPPEX \cite{happexprc} and G0 \cite{G005}. Both collaborations have recently reported results. The precision of these experiments, together with other experimental and theoretical efforts world-wide \cite{sample,A404,A405,parityannrev,parityprog,emworkshop}, has now achieved sufficient accuracy to be sensitive to the contribution of strange quarks to the nucleon form factors. The first measurements of parity-violating electron scattering at JLab were carried out by the HAPPEX collaboration. The experiment detected the scattered electrons in each of the Hall A high resolution spectrometers, with sufficient hardware resolution to spatially separate the elastically scattered events from inelastic events; the elastic events were directed into special lead-scintillator absorption counters whose output was integrated during a 30~ms window. Window pairs of opposite helicity were chosen randomly at 15~Hz. A half wave plate in the CEBAF injector was inserted or removed about once a day, switching the helicity of the beam without changing the electronics. HAPPEX has previously reported measurements of parity violation in $ep$ elastic scattering at $Q^2$=0.48 GeV$^2$ \cite{happex01}. The asymmetry errors are completely dominated by statistical uncertainties, and the helicity correlated systematic uncertainties are about 0.1 ppm.
Recently it has completed asymmetry measurements at $Q^2$=0.1 GeV$^2$ in $ep$ scattering, giving $G_E^s + 0.080\;G_M^s = 0.030\pm0.025\pm0.006\pm0.012$ \cite{happex05}, and a measurement of elastic scattering on $^4$He at $Q^2$=0.091 GeV$^2$ which yields a direct measurement of $G_E^s = -0.038\pm0.042\pm0.010$, consistent with zero \cite{happexhe4}. These measurements were performed at electron scattering angles of 6$^\circ$, made possible by the new superconducting septum magnets installed in Hall A \cite{septum}. We note that HAPPEX continues to take data and is expected to improve the statistical uncertainty of each measurement by a factor of 2.5--3 this year. The G0 experiment uses a dedicated detector \cite{G0detector} to determine $G_E^s$, $G_M^s$, and $G_A^e$ over a broad kinematical range. To date measurements have been completed at forward angles. The result of their first experimental run has been the determination of the linear combination $G_E^s + \eta G_M^s$ for $Q^2$ between 0.12 and 1.0 GeV$^2$ ($\eta = \tau G_M^p/\epsilon G_E^p$). These measurements are consistent with HAPPEX and A4, but they provide broad coverage in $Q^2$ to probe the spatial dependence of strange quarks in the nucleon. The three lowest $Q^2$ points (out of 16 measurements) can be used together with the other experiments (see Fig.\,\ref{fig:pv1}) to determine central values for $G_E^s$ and $G_M^s$ separately at $Q^2=0.1$ GeV$^2$. We close by summarizing the experimental situation, and relating the measured quantities back to our simple hadronic picture for possible strange contributions to the proton magnetic moment. Analysis of the world measurements leads to $G_M^s = 0.55\pm0.28$ and $G_E^s = -0.01\pm0.03$. To compare to the proton magnetic moment of +2.79 n.m. we multiply $G_M^s$ by $-\frac{1}{3}$ and, taken at face value, it is approximately 7\% of the magnitude of the proton magnetic moment, and opposite in sign.
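The closing arithmetic can be written out explicitly; the sketch below uses only the central values quoted above:

```python
GMs = 0.55            # combined-analysis central value quoted above
mu_p = 2.79           # proton magnetic moment, n.m.

mu_s = (-1/3) * GMs   # restore the s-quark charge factor
frac = abs(mu_s) / mu_p

assert mu_s < 0                 # opposite in sign to the proton moment
assert 0.05 < frac < 0.08       # roughly 7% of the proton moment in magnitude
```

With the quoted $\pm0.28$ uncertainty on $G_M^s$, this fraction is of course only indicative, as the text's "taken at face value" caveat emphasizes.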
The interpretation of positive values of $G_M^s$ has been investigated within the context of the quark model \cite{zou}. This study shows that, even though the quark model is very successful at predicting relations between baryon magnetic moments, it does not lead naturally to positive values of $G_M^s$. The authors find that positive moments are generated when the $\overline{s}$ is in the ground state and the $uuds$ quarks are in excited states. Although these configurations do not seem very natural, they are closely analogous to quark-model configurations for pentaquarks, which is the topic of the last section of this paper. \section{N-$\Delta$ deformation} Within the spin-flavor SU(6) symmetry of the quark model, the $\Delta$ resonance is a completely symmetric object with all quark spins aligned. Within this model, the photo-excitation $\gamma^* N \rightarrow \Delta(1232) \rightarrow N\pi$ proceeds via a single quark spin flip in the L=0 nucleon ground state. This magnetic dipole transition, characterized by the M$_{1+}$ multipole, predicts negligible contributions from the electric (E$_{1+}$) and scalar (S$_{1+}$) quadrupole multipoles. However, strong-interaction dynamics modifies this picture, and several models predict deviations from this simple picture which would indicate deformations of both the nucleon and the $\Delta$ \cite{henley01}. For example, the deformation of the nucleon and $\Delta$ could result from D-state admixtures into the baryon wave function. D-wave admixtures are expected to be small, however, and quark-antiquark pairs which are present in the nucleon can also contribute to the quadrupole moment. The latter contribute via a two-body spin tensor in the charge operator, even when the valence quarks are in pure S states. The physical interpretation in this case is that the distribution of $q \overline{q}$ sea quarks in the nucleon deviates from spherical symmetry.
These models interpret the negative values of the measured transition quadrupole moments as corresponding to a positive intrinsic quadrupole moment (cigar shape) for the nucleon and a negative moment (pancake shape) for the $\Delta$. Complete angular distributions of the pion in the reaction $e p \rightarrow ep \pi^0$ at the peak of the $\Delta$ allow the experimental determination of the quadrupole moments. The differential cross section for a polarized beam (helicity $h=\pm 1$) and unpolarized target can be written as a function of the transverse and longitudinal structure functions $\sigma_T$ and $\sigma_{L}$, and the interference terms $\sigma_{TT}$, $\sigma_{LT}$ and $\sigma_{LT'}$: \begin{eqnarray} {d^2\sigma \over d \Omega_\pi^*} & = & {p_\pi^* \over k_\gamma^*} \left( \sigma_T + \epsilon_L\sigma_L + \epsilon\sigma_{TT} \; \sin^2{\theta_\pi^*} \cos{2\phi_\pi^*} \right. \\ & & + \left. \sqrt{2\epsilon_L\left(\epsilon + 1 \right)} \;\sigma_{LT}\;\sin{\theta_\pi^*}\cos{\phi_\pi^*} +{} h\;\sqrt{2\epsilon_L\left(1 - \epsilon \right)} \;\sigma_{LT'}\;\sin{\theta_\pi^*}\sin{\phi_\pi^*} \right), \;\nonumber \end{eqnarray} where $p_\pi^*$, $\theta_\pi^*$, and $\phi_\pi^*$ are the pion center-of-mass (c.m.) momentum and angles, $\epsilon$ and $\epsilon_L$ are the usual transverse and longitudinal polarizations of the virtual photon, and $k_\gamma^*$ is the real-photon-equivalent c.m. energy. The moments of interest can be selected as terms in the partial wave expansion of the structure functions using Legendre polynomials \cite{burkertlee,burkert05}. Unique solutions can be obtained by truncating the multipole expansion to terms involving only $M_{1+}$. Fitting for the coefficients of the expansion allows the extraction of the electric ($E_{1+}$) and scalar ($S_{1+}$) quadrupole moments through their interference with the dominant $M_{1+}$.
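For orientation, the cross-section formula above can be evaluated directly once the five structure functions are given at a kinematic point. This is a sketch with arbitrary illustrative values, not code from any of the analyses cited:

```python
import math

def dsigma_domega(p_pi, k_gamma, sig_t, sig_l, sig_tt, sig_lt, sig_ltp,
                  eps, eps_l, theta, phi, h):
    """Virtual-photon cross section d^2(sigma)/d(Omega_pi*) built from the
    five structure functions; angles in radians, h = +/-1 is the beam helicity."""
    st = math.sin(theta)
    return (p_pi / k_gamma) * (
        sig_t
        + eps_l * sig_l
        + eps * sig_tt * st ** 2 * math.cos(2.0 * phi)
        + math.sqrt(2.0 * eps_l * (1.0 + eps)) * sig_lt * st * math.cos(phi)
        # helicity-dependent (fifth) structure function term;
        # 1 - eps >= 0 in the physical region, so the square root is real
        + h * math.sqrt(2.0 * eps_l * (1.0 - eps)) * sig_ltp * st * math.sin(phi))
```

Since $\sigma_{LT'}$ enters multiplied by $h\sin\phi_\pi^*$, forming the helicity difference of such cross sections at fixed angles isolates the fifth structure function, which is how the $\sigma_{LT'}$ measurements discussed below are made.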
The systematic errors due to the truncation of higher multipoles are estimated by calculating the effects of higher partial waves using realistic parameterizations of higher-mass resonances and backgrounds. At the peak of the $\Delta$ and for $Q^2$ of a few GeV$^2$, this procedure results in a largely model-independent extraction of the moments. Early interest in the quadrupole moments stemmed from the perturbative QCD prediction of quark helicity conservation, which requires that $E_{1+} \rightarrow M_{1+}$ and that $S_{1+}$ approach a constant at very high $Q^2$. Therefore, the first experiments at JLab in Hall C \cite{frolov99} determined the quadrupole moments at the highest accessible $Q^2 \sim 3-4$ GeV$^2$, followed by results from CLAS \cite{joo02} at intermediate momentum transfer. These measurements showed that $E_{1+}$ was not only a few percent of $M_{1+}$, but also of opposite sign. The asymptotic kinematics of perturbative QCD is clearly well beyond the range of these measurements. \begin{figure} \includegraphics[width=10cm]{mrat-had05.eps} \caption{$Q^2$ dependence of the ratios of the electric (E$_{1+}$) and scalar (S$_{1+}$) quadrupoles to the dominant dipole moment. The curves are model calculations from references \cite{satolee}, \cite{dmtmodel} and \cite{lidongma}. \label{fig:remrsm}} \end{figure} Recently the focus has shifted to understanding the deformation of the nucleon at very low $Q^2$, which probes chiral symmetry breaking, models of heavy-baryon chiral perturbation theory, and the shape of the pion cloud surrounding the nucleon. Data at $Q^2 = 0.12$~GeV$^2$ from MAMI \cite{mami01} and Bates \cite{bates01} show a surprisingly large scalar quadrupole ratio of 6.5\%, twice that expected from the trend of higher-energy measurements and most models. New data from CLAS \cite{lcsmith05} were taken with an electron beam energy of 1.046 GeV, a polarization of $\sim$70\% and a 10 nA beam current on a 2-cm liquid hydrogen target.
The $\pi^0$ electroproduction data from this experiment have provided nearly complete angular distributions for $Q^2 = 0.14-0.38$ GeV$^2$ and $W = 1.2-1.3$ GeV. The broad acceptance of the CLAS detector allows measurements of the full angular distribution over a broad range of $Q^2$, covering the W range of the $\Delta$. The ratios $E_{1+}/M_{1+}$ and $S_{1+}/M_{1+}$ from this experiment, along with other measurements, are shown in Fig.\,\ref{fig:remrsm}. Important constraints on the non-resonant backgrounds have been obtained by measuring the helicity-dependent structure function $\sigma_{LT'}$ \cite{joo03}. Weak non-resonant backgrounds underlying the $\Delta$ peak can be enhanced through their interference with the imaginary part of the magnetic multipole, ${\rm Im}\,M_{1+}$, which is suppressed in $\sigma_{LT}$ due to the vanishing of the real part of the resonant amplitude at the pole. In summary, there is general agreement on the value of $E_{1+}/M_{1+}\sim -2.5$\%, which is approximately constant at small $Q^2$. The recent CLAS analysis indicates that the $S_{1+}/M_{1+}$ ratio shows a $Q^2$-dependence in this same kinematic region, in apparent disagreement with previous measurements that show it leveling off. Global parameterizations of electroproduction data are underway and will be used along with $\pi^+$ data to extract the nucleon parameters with refined accuracy. \section{Search for pentaquarks} Pentaquark states have been studied both theoretically and experimentally for many years \cite{pdg86}. Recent interest was revived by predictions within the chiral soliton model \cite{diakonov} for the existence of an anti-decuplet of 5-quark resonances with spin and parity $J^\pi = \frac{1}{2}^+$. The lowest mass member of the anti-decuplet, now called the $\Theta^+$, is an isosinglet with valence quark configuration $uudd\bar{s}$ and strangeness $S=+1$, predicted to have a mass of 1.53 GeV/c$^2$ and a width of $\sim 0.015$ GeV/c$^2$.
These definite predictions prompted experimental searches to focus attention on this mass region. Shortly after the first observation of an exotic $S=+1$ state by the LEPS collaboration \cite{leps03}, many experiments presented evidence confirming the initial report, including two observations using the CLAS detector \cite{clas-d,clas-p}. However, there have been an increasing number of experiments reporting null results, albeit in very different reactions and disparate kinematic regions \cite{dzierba04}. This paper reports on two searches for a narrow $S=+1$ baryon in the mass range between 1520 and 1600 MeV using the CLAS detector. These new high-statistics searches were conducted using the tagged photon beam facility in Hall B \cite{tagger} with deuterium \cite{g10} and proton \cite{g11} targets. The deuterium data can be compared directly with the first reported observation by CLAS \cite{clas-d}, which analyzed a data set referred to as ``G2a.'' The G2a data set consisted of two run periods at E$_e$ = 2.478 GeV and 3.115 GeV, corresponding to 0.83 and 0.35 pb$^{-1}$, respectively. The photon energy was tagged from below the reaction threshold of 1.51 GeV up to 95\% of the electron beam energy. The target was 10 cm long and located at the center of CLAS, and the CLAS torus field was set at 90\% of its maximum value. \begin{figure} \begin{minipage}[t]{7cm} \begin{flushleft} \includegraphics[width=8cm]{g2apub_g10mix_x0p17.eps} \end{flushleft} \end{minipage} \hfill \vspace{-7.5cm} \begin{minipage}[t]{7cm} \begin{flushright} \includegraphics[width=8cm]{prl_fig3.eps} \end{flushright} \end{minipage} \caption{(Left) $MM(pK^-)$ distribution of the G10 data (red histogram) compared to the G2a distribution (black points with error bars). Selection cuts have been made on the G10 data set to best represent the experimental conditions of G2a, and the data are scaled for direct comparison. \label{fig:gdata} (Right) Distribution of the $nK^+$ mass spectrum for the G11 data.
No evidence for narrow structures is apparent. The inset shows the mass distribution with selection cuts to reproduce the SAPHIR analysis \cite{saphir}. Note: All results from CLAS are preliminary.} \end{figure} The new deuterium data set, referred to as G10, used a primary electron beam energy of E$_e$ = 3.767 GeV, creating a tagged photon beam with energies E$_\gamma$ between 1 and 3.6 GeV. The run was divided evenly between two field settings of the torus magnet (60\% and 90\% of the full-field current of 3860 A). The hardware trigger required two charged particles in any two sectors, and the target was a 24-cm long cell containing liquid deuterium, located 25 cm upstream of the nominal CLAS center. The integrated luminosity for this data set was 38 pb$^{-1}$ for E$_\gamma$ greater than 1.5 GeV. Evidence for the exotic baryon was sought using the reaction $\gamma d \rightarrow K^- K^+ p n$. The momenta of the charged particles were determined using magnetic analysis, and their masses were determined using time-of-flight techniques. The analysis selected a detected proton, $K^+$ and $K^-$ in the final state, all originating from the same beam bucket. The neutron momentum and energy were reconstructed from the measured charged-particle tracks and the known energy of the incident photon. The exotic $\Theta^+$ baryon was searched for in the decay $\Theta^+ \rightarrow K^+ n$, where the $K^+$ uniquely identifies the positive strangeness of the baryonic state. The mass of the $K^+ n$ system is shown in Fig.\,\ref{fig:gdata}(left) for both the G2a and G10 data samples. The spectrum is relatively smooth and does not exhibit the peak found in the original smaller data set. The figure overlays the two spectra, with selection cuts on G10 chosen to mimic the G2a data set as closely as possible, including the photon energy spectrum. If we use the G10 data as a background shape, the peak at 1.54 GeV in the G2a data set is consistent with a 3 $\sigma$ fluctuation.
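The reconstruction described above (undetected neutron from the measured charged tracks and the tagged photon energy) amounts to four-vector arithmetic. A minimal sketch, with the deuteron mass value and all four-vectors purely illustrative:

```python
import math

M_D = 1.8756  # deuteron mass in GeV (illustrative constant)

def add4(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub4(a, b):
    return tuple(x - y for x, y in zip(a, b))

def minv(p):
    """Invariant mass of a four-vector (E, px, py, pz) in GeV."""
    e, px, py, pz = p
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def nkplus_mass(e_gamma, p_proton, p_kplus, p_kminus):
    """Reconstruct the undetected neutron from four-momentum conservation in
    gamma d -> K- K+ p n (photon along z, deuteron at rest), then return the
    n K+ invariant mass."""
    p_initial = add4((e_gamma, 0.0, 0.0, e_gamma), (M_D, 0.0, 0.0, 0.0))
    p_neutron = sub4(sub4(sub4(p_initial, p_proton), p_kplus), p_kminus)
    return minv(add4(p_neutron, p_kplus))
```

Histogramming this quantity event by event produces the $M(nK^+)$ spectrum of Fig.\,\ref{fig:gdata}(left), and the same arithmetic applied to the measured neutron missing mass provides the consistency check on the event reconstruction.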
The new data show no indication of a peak in the $\Theta^+ \rightarrow K^+ n$ exotic S=+1 channel. The measurement on the proton (G11) was conducted at lower photon energy and using a different reaction from the published CLAS result \cite{clas-p}. This new experiment used a primary electron beam energy E$_e$ = 4.0 GeV to produce a tagged photon beam in the range of E$_\gamma$ between 1.6 and 3.8 GeV. The target consisted of a 40-cm long cylindrical cell with liquid hydrogen. The data were taken over a period of 50 days, corresponding to an integrated luminosity of 70 pb$^{-1}$, which is an order of magnitude greater than previous experiments with tagged photons. Evidence for the $S=+1$ exotic baryon $\Theta^+$ was sought in the reaction $\gamma p \rightarrow \overline{K}^0 K^+ n$. The search was conducted for the baryon produced in association with the $\overline{K}^0$, with subsequent decay $\Theta^+ \rightarrow K^+ n$. The positive strangeness of the baryon was uniquely tagged using the $K^+$ of the decay. The charged kaon, as well as the pions from the $\overline{K}^0 \rightarrow \pi^+\pi^-$ decay, were reconstructed in the CLAS detector. The neutron momentum was reconstructed using energy and momentum conservation and the known energy of the incident photon. The missing-mass peak of the neutron was reconstructed to be 939 MeV with a width of 10 MeV. The $\Sigma^+\rightarrow \pi^+n$ and $\Sigma^-\rightarrow \pi^-n$ reactions are also present in this data sample and are reconstructed with an accuracy of better than 1 MeV and a resolution of 3 MeV. These baryons, as well as the $\Lambda^*(1520)$, were excluded from the present analysis. After all cuts, the reconstructed $n K^+$ mass spectrum is shown in Fig.\,\ref{fig:gdata}(right). The spectrum is smooth and structureless, with no indication of a peak near 1540 MeV where the $\Theta^+$ was previously reported \cite{saphir,ostrick}.
An upper limit on the $\Theta^+$ cross section of less than 1 nb was set, assuming a t-channel exchange production mechanism for the reaction. We conclude with a brief summary of the status of pentaquark measurements with CLAS. There are two published observations \cite{clas-d,clas-p} of the $\Theta^+$ baryon in photon beam reactions on deuterium and proton targets. The measurement on deuterium has been repeated with higher statistics, and the new data do not confirm the original observation. A new experiment to verify the production off the proton in the reaction $\gamma p \rightarrow \pi^+ K^- K^+ n$ for photon energies between 3 and 5.5 GeV has been approved \cite{superg6}, but has not yet received beam time. However, a recent high-statistics experiment on the proton at lower energy \cite{g11pub} has searched for the narrow exotic baryon in the reaction $\gamma p \rightarrow \overline{K}^0 K^+ n$ and found no evidence for the state in the $n K^+$ mass range between 1520 and 1600 MeV. \begin{theacknowledgments} I would like to thank Alberto dos Reis and all the organizers of the conference for their gracious hospitality. I would also like to thank K. Paschke, D. Armstrong and D. Beck of the HAPPEX and G0 collaborations for making available the results on parity violation. From the CLAS collaboration I would like to express special thanks to K. Joo and L. Smith for help in the preparation of materials on $\Delta$ production, and S. Stepanyan, R. De Vita and M. Battaglieri for discussions of the pentaquark search. The Southeastern Universities Research Association (SURA) operates the Thomas Jefferson National Accelerator Facility for the U.S. Department of Energy under contract DE-AC05-84ER40150. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} The object of this paper is a review and a complement of our results in \cite{ANO2}, \cite{ANO3}, \cite{ANH} and \cite{ANH1}. All considered objects are smooth. Let $M$ be a connected paracompact differentiable manifold of dimension $n\geq 2$, $J$ the vector $1$-form defining the tangent structure, $C$ the Liouville field on the tangent space $TM$, and $S$ a spray. We denote $\Gamma=[J,S]$; $\Gamma$ is an almost product structure: $\Gamma^2=I$, $I$ being the identity vector $1$-form. We can consider $\Gamma$ \cite{GRI} as a linear connection with vanishing torsion. The curvature of $\Gamma$ is then the Nijenhuis tensor of $h$, $R=\frac{1}{2}[h,h]$, with $h=\frac{I+\Gamma}{2}$. We will give some properties of $R$. We then study a linear connection coming from a metric. At the end, we are interested in the Lie algebra $A_S=\{X\in\chi(TM)\ \text{such that } [X,S]=0\}$, where $\chi(TM)$ denotes the set of all vector fields on $TM$. \section{Preliminaries} We recall the bracket of two vector $1$-forms $K$ and $L$ on a manifold $M$ \cite{FN1}: \begin{eqnarray*} [K,L](X,Y)&=&[KX,LY]+[LX,KY]+KL[X,Y]+LK[X,Y]-K[LX,Y]\\&&-L[KX,Y]-K[X,LY]-L[X,KY] \end{eqnarray*} for all $X,\ Y\in\chi(M)$.\\ The bracket $ N_L = \frac{1}{2} [L, L] $ is called the Nijenhuis tensor of $ L $. The Lie derivative with respect to a vector field $ X $ applied to $ L $ can be written \begin{equation*} [X,L]Y=[X,LY]-L[X,Y]. \end{equation*} The exterior derivation $ d_L $ is defined in \cite{ANO1} by $ d_L = [i_L, d] $.\\ Let $ \Gamma $ be an almost product structure. We denote \begin{eqnarray*} h=\frac{1}{2}(I+\Gamma)\ \text{and} \ v=\frac{1}{2}(I-\Gamma). \end{eqnarray*} The vector $ 1$-form $ h $ is the horizontal projector, the projector onto the subspace corresponding to the eigenvalue $ + 1 $, and $ v $ the vertical projector, corresponding to the eigenvalue $ -1 $.
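Note that $\Gamma^2=I$ immediately yields the projector identities
\begin{equation*}
h^2=\frac{1}{4}(I+2\Gamma+\Gamma^2)=h,\qquad v^2=v,\qquad h\circ v=v\circ h=\frac{1}{4}(I-\Gamma^2)=0,\qquad h+v=I.
\end{equation*}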
The curvature of $ \Gamma $ is defined by $ R = \frac{1}{2}[h, h] $, which is also equal to $ \frac{1}{8}[\Gamma, \Gamma] $.\\ The Lie algebra $ A_\Gamma $ is defined by \begin{eqnarray*} A_\Gamma=\{X\in\chi(TM)\ \text{such that}\ [X,\Gamma]=0 \}. \end{eqnarray*} The nullity space of the curvature $ R $ is \begin{eqnarray*} N_R=\{X\in \chi(TM)\ \text{ such that}\ R(X,Y)=0,\ \forall\ Y\in \chi(TM)\}. \end{eqnarray*} \begin{definition} A second order differential equation on a manifold $M$ is a vector field $S$ on the tangent space $TM$ such that $JS=C$.\\ Such a vector field on $TM$ is also called a semi-spray on $M$; $S$ is a spray on $M$ if $S$ is homogeneous of degree $1$: $[C,S]=S$.\\ In what follows, we use the notation of \cite{GRI} to express the geodesic spray of a linear connection. In natural local coordinates on an open set $U$ of $M$, with $(x^i, y^j)$ the corresponding coordinates on $TU$, a spray $ S $ is written \begin{equation*} S= y^i\frac{\partial}{\partial x^i}-2G^i(x^1,\ldots,x^n,y^1,\ldots,y^n)\frac{\partial}{\partial y^i}. \end{equation*} \end{definition} For the connection $\Gamma=[J,S]$, the coefficients of $ \Gamma $ are $ \Gamma^j_i = \frac{\partial G^j} {\partial y^i} $, the horizontal projector is \begin{eqnarray*} h(\frac{\partial}{\partial x^i})=\frac{\partial}{\partial x^i}-\Gamma^j_i\frac{\partial}{\partial y^j},\ h(\frac{\partial}{\partial y^j})=0, \end{eqnarray*} and the vertical projector is \begin{eqnarray*} v(\frac{\partial}{\partial x^i})=\Gamma^j_i\frac{\partial}{\partial y^j},\ v(\frac{\partial}{\partial y^j})= \frac{\partial}{\partial y^j}. \end{eqnarray*} The curvature $R=\frac{1}{2}[h,h]$ becomes \begin{eqnarray*} R=\frac{1}{2}R^k_{ij}dx^i\wedge dx^j\otimes \frac{\partial}{\partial y^k}\ \text{with}\ R^k_{ij}=\frac{\partial \Gamma^k_i}{\partial x^j}-\frac{\partial \Gamma^k_j}{\partial x^i}+\Gamma^l_i \frac{\partial \Gamma^k_j}{\partial y^l}-\Gamma^l_j \frac{\partial \Gamma^k_i}{\partial y^l}, \ i,j,k,l\in\{1,\ldots,n\}.
\end{eqnarray*} As the functions $ G^k $ are homogeneous of degree $ 2 $ in $y$, the coefficients $ \Gamma^k_{ij} = \frac{\partial^2 G^k} {\partial y^i \partial y^j} $ do not depend on the $ y^i $, $ i \in \{1, \ldots, n \} $. We then have $ R^k_{ij} = y^l R^k_{l, ij} (x) $, where the $ R^k_{l, ij} (x) $ depend only on the coordinates of the manifold $ M $.\\ \section{Properties of the curvature $R$} \begin{proposition}[\cite{RRA1}]\label{P3.1} The horizontal nullity space of the curvature $R$ is involutive. The elements of $A_\Gamma$ are projectable vector fields. \end{proposition} \begin{proof} From the expression of the curvature $R$, and taking into account $h^2=h$, we have \begin{equation*} R(hX,Y)=v[hX,hY]. \end{equation*} If $hX\in N_R$, we obtain $v[hX,hY]=0$ $\forall Y\in\chi(TM)$.\\ Using the Jacobi identity, for all $hX$ and $hY\in N_R$, we find $v[[hX,hY],hZ]=0$ $\forall Z\in\chi(TM)$. As we have $h[hX,hY]=[hX,hY]$, the horizontal nullity space of the curvature $R$ is involutive.\\ We notice that $A_\Gamma=A_h=A_v$.\\ For $X\in A_h$, we obtain \begin{equation*} [X,hY]=h[X,Y]\ \forall Y\in\chi(TM). \end{equation*} If $Y$ is a vertical vector field, we have $h[X,Y]=0$. This means that $X$ is a projectable vector field. \end{proof} \begin{proposition}[\cite{ANO2}]\label{P3.2} Let $X$ be a projectable vector field. The following two relations are equivalent: \begin{enumerate} \item[$i)$] $[hX,J]=0$ \item[$ii)$] $[JX,h]=0$ \end{enumerate} \end{proposition} \begin{proof} See proposition 3 of \cite{ANO2}. \end{proof} \pagestyle{fancy} \fancyhead{} \fancyhead[ER]{Manelo Anona} \fancyhead[OR, EL]{\thepage} \fancyhead[OL]{On almost product structure defined by a spray} \fancyfoot{} \renewcommand\headrulewidth{0.5 pt} \begin{proposition}[\cite{ANH}]\label{P3.3} We assume that $hN_R$ is generated as a module by projectable vector fields. If the rank of the nullity space $hN_R$ of the curvature $R$ is constant, there exists a local basis of $hN_R$ satisfying Proposition \ref{P3.2}.
\end{proposition} \begin{proof} See proposition 4 of \cite{ANH}. \end{proof} \section{Riemannian manifolds} Let $E$ be a function from $\mathcal{T}M=TM-\{0\}$ to $\mathbb{R}^+$, with $E(0)=0$, $\mathcal{C}^\infty$ on $\mathcal{T}M$, $\mathcal{C}^2$ on the null section, homogeneous of degree two, and such that $dd_JE$ has maximal rank. The function $E$ defines a Riemannian structure on $M$. The map $E$ is called an energy function; its fundamental form $\Omega=dd_JE$ defines a spray $S$ by $i_Sdd_JE=-dE$ \cite{KLE}, the derivation $i_S$ being the inner product with respect to $S$. The vector $1$-form $\Gamma=[J,S]$ is called the canonical connection \cite{GRI}. The fundamental form $\Omega$ defines a metric $g$ on the vertical bundle by $g(JX,JY)=\Omega(JX,Y)$, for all $X$, $Y\in\chi(TM)$. There is \cite{GRI} one and only one metric lift $D$ of the canonical connection such that: \begin{eqnarray*} &&J\mathbb{T}(hX,hY)=0,\ \mathbb{T}(JX,JY)=0\ (\mathbb{T}(X,Y)=D_XY-D_YX-[X,Y]);\\ && DJ=0;\ DC=v;\ D\Gamma=0;\ Dg=0. \end{eqnarray*} The linear connection $D$ is called the Cartan connection. We have \begin{equation*} D_{JX}JY=[J,JY]X,\ D_{hX}JY=[h,JY]X. \end{equation*} To the linear connection $D$ we associate the curvature \begin{equation}\label{E4.1} \mathcal{R}(X,Y)Z=D_{hX}D_{hY}JZ-D_{hY}D_{hX}JZ-D_{[hX,hY]}JZ \end{equation} for all $X$, $Y$, $Z\in\chi(TM)$. The relationship between the curvatures $\mathcal{R}$ and $R$ is \begin{equation*} \mathcal{R}(X,Y)Z=J[Z,R(X,Y)]-[JZ,R(X,Y)]+R([JZ,X],Y)+R(X,[JZ,Y]) \end{equation*} for all $X$, $Y$, $Z\in\chi(TM)$. In particular, \begin{equation*} \mathcal{R}(X,Y)S=-R(X,Y). \end{equation*} In natural local coordinates on an open set $U$ of $M$, with $(x^i,y^j)\in TU$, the energy function is written \begin{equation*} E=\frac{1}{2}g_{ij}(x^1,\ldots,x^n)y^iy^j, \end{equation*} where the $g_{ij}(x^1,\ldots,x^n)$ are symmetric in $i,j$ and the matrix $(g_{ij}(x^1,\ldots,x^n))$ is positive definite, hence invertible.
And the relation $i_Sdd_JE=-dE$ gives the spray $S$ \begin{equation*} S= y^i\frac{\partial}{\partial x^i}-2G^i(x^1,\ldots,x^n,y^1,\ldots,y^n)\frac{\partial}{\partial y^i}, \end{equation*} with $G_k=\frac{1}{2}y^iy^j\gamma_{ikj}$,\\ where $\gamma_{ikj}=\frac{1}{2}(\frac{\partial g_{kj}}{\partial x^i}+\frac{\partial g_{ik}}{\partial x^j}-\frac{\partial g_{ij}}{\partial x^k})$ and $\gamma_{ij}^k=g^{kl}\gamma_{ilj}$.\\ We have $G^k=\frac{1}{2}y^iy^j\gamma_{ij}^k$. \begin{proposition}\label{P4.1} Let $E$ be an energy function and $\Gamma$ a connection such that $\Gamma=[J,S]$. The following two relations are equivalent: \begin{enumerate} \item[$i)$] $i_Sdd_JE=-dE$; \item[$ii)$] $d_hE=0$. \end{enumerate} \end{proposition} \begin{proof} See proposition 1 of \cite{ANH}. \end{proof} \begin{proposition}\label{P4.2} For a connection satisfying Proposition \ref{P4.1}, the scalar $1$-form $d_vE$ is completely integrable. \end{proposition} \begin{proof} The kernel of $d_vE$ is formed by the vector fields belonging to the horizontal space $Im h$ ($v\circ h=0$) and the vertical vector fields $JY$ such that $L_{JY}E=0$, $Y\in Im h$, taking into account $vJ=J$.\\ As we have \begin{equation*} [hX,hY]=h[hX,hY]+v[hX,hY]= h[hX,hY]+R(X,Y) \end{equation*} for all $X$, $Y\in\chi(TM)$, and since $d_hE=0$ implies $d_RE=0$, we obtain \begin{equation*} [hX,hY]\in Ker d_vE. \end{equation*} It remains to show that $L_{v[hX,JY]}E=0$ $\forall X\in Im h$ and $Y\in Im h$ satisfying $L_{JY}E=0$. This is immediate since we have $v=I-h$. \end{proof} \begin{proposition}\label{P4.3} On a Riemannian manifold $(M,E)$, the horizontal nullity space $hN_R$ of the curvature $R$ is generated as a module by projectable vector fields belonging to $hN_R$ and orthogonal to the image space $Im R$ of the curvature $R$, and $hN_R=hN_\mathcal{R}$.
\end{proposition} \begin{proof} If $R^\circ=i_SR$ is zero, then the curvature $R$ is zero; in this case, the horizontal space $Im h$ is the horizontal nullity space of $R$, isomorphic to $\chi(U)$, $U$ being an open set of $M$ \cite{ANO1}.\\ In what follows, we assume that $R^{\circ}\neq0$. According to relation (4.2) of \cite{ANH}, $JX\perp Im R\Longleftrightarrow\mathcal{R}(S,X)Y=0$ $\forall Y\in\chi(TM)$. We obtain $R(X,Y)=R^{\circ}([JY,X])$ $\forall Y\in \chi(TM)$. As $R$ is a semi-basic vector $2$-form, the above relation is only possible if $X=S$ or if $X\in hN_R$; then $hN_R$ is generated as a module by projectable vector fields belonging to $hN_R$. We get $hN_R=hN_\mathcal{R}$. \end{proof} \begin{theorem}\label{T4.4} Let $\Gamma=[J,S]$ be a linear connection. The connection $\Gamma$ comes from an energy function if and only if \begin{enumerate} \item[$(1)$] there is an energy function $E_0$ such that $d_RE_0=0$; \item[$(2)$] the scalar $1$-form $d_vE_0$ is completely integrable. \end{enumerate} Then there exists a function $\varphi(x)$, constant on the fibres of the bundle, such that $e^{\varphi(x)}E_0$ is the energy function of $\Gamma$. \end{theorem} \begin{proof} Both conditions are necessary according to Propositions \ref{P4.1} and \ref{P4.2}. \\Conversely, let $E_0$ be an energy function such that $d_RE_0=0$. We will show that there exists a function $\varphi$, constant on the fibres, such that $d_h(e^\varphi E_0)=0$.\\ The equation is equivalent to \begin{equation*} d\varphi=-\frac{1}{E_0}d_hE_0. \end{equation*} The integrability condition of such an equation is \begin{eqnarray*} d(\frac{1}{E_0})\wedge d_hE_0+\frac{1}{E_0}dd_h E_0=0, \end{eqnarray*} namely \begin{equation*} dd_hE_0=\frac{dE_0}{E_0}\wedge d_hE_0.
\end{equation*} As $d_v E_0$ is completely integrable, we have, according to the Frobenius theorem, \begin{equation*} dd_vE_0\wedge d_vE_0=0. \end{equation*} Applying the inner product $i_C$ to the above equality, we get \begin{equation*} dd_vE_0=\frac{dE_0}{E_0}\wedge d_vE_0, \end{equation*} that is to say \begin{equation*} dd_hE_0=\frac{dE_0}{E_0}\wedge d_hE_0. \end{equation*} This is the integrability condition sought.\\ For more information see \cite{ANH}. \end{proof} \section{Lie algebra defined by a spray} Let $A_S=\{X\in\chi(TM)\ \text{such that }[X,S]=0 \}$. Expanding the equation $[X,S]=0$, we note that the projectable elements of $A_S$ are, on an open set $U$ of $M$, of the form \begin{equation*} X=X^i(x)\frac{\partial}{\partial x^i}+y^j\frac{\partial X^i}{\partial x^j}\frac{\partial}{\partial y^i}. \end{equation*} Denoting by $\overline{\chi(M)}$ the complete lift of the vector fields $\chi(M)$ to $TM$, the projectable elements of $A_S$ are in $\overline{A_S}=A_S\cap\overline{\chi(M)}$. The geodesic spray of a linear connection is defined locally by \begin{equation*} \ddot{x}^i=-\Gamma^i_{jk}\dot{x}^j\dot{x}^k. \end{equation*} A result of \cite{LOO} shows that the dimension of the Lie algebra $\overline{A_S}$ is at most equal to $n^2+n$. If the dimension of $\overline{A_S}$ is equal to $n^2+n$, then $(M,S)$ is isomorphic to $(\mathbb{R}^n,Z_\lambda)$ for a unique $\lambda\in \mathbb{R}$, where $Z_\lambda$ is given by the equations $\ddot{x}^i=\lambda \dot{x}^i$, $i=1,\dots, n$. This condition is equivalent to the vanishing of the curvature $R$ of $\Gamma$, cf. \cite{ANO1}. We can see this property in example 5 of \cite{ANH1}. In the following, we are interested in the nature of the algebra $\overline{A_S}$. Combining the equality $[\overline{X},S]=0$ with the tangent structure $J$ via the Jacobi identity \cite{FN1}, we can write \begin{equation*} [[\overline{X},S],J]+[[S,J],\overline{X}]+[[J,\overline{X}],S]=0.
\end{equation*} Taking into account the hypothesis $[\overline{X},S]=0$ and a result of \cite{LEH}, $[J,\overline{X}]=0$, we find \begin{equation*} [\overline{X},\Gamma]=0\ \text{with}\ \Gamma=[J,S]. \end{equation*} Noting that $[C,J]=-J$ and $[C,\overline{X}]=0$, we then take $\Gamma=[J,S]$ with $[C,S]=S$.\\ The vector $1$-form $\Gamma$ is a linear connection without torsion in the sense of \cite{GRI}. \begin{proposition}[\cite{ANO2}]\label{P5.1} The Lie algebra $\overline{A_S}$ coincides with $\overline{A_\Gamma}=A_\Gamma\cap\overline{\chi(M)}$. \end{proposition} \begin{proof} See proposition 9 of \cite{ANO2}. \end{proof} \begin{proposition}[\cite{RRA1}]\label{P5.2} Let $H^\circ$ denote the set of projectable horizontal vector fields and $A_\Gamma\cap H^\circ=A_\Gamma^h$; then we have $A_\Gamma^h=N_R\cap H^\circ$ and $A_\Gamma^h$ is an ideal of $A_\Gamma$. \end{proposition} \begin{proof} The curvature $R$ is written, for all $X,Y\in\chi(TM)$, \begin{equation*} R(X,Y)=v[hX,hY]. \end{equation*} If $hX\in A_\Gamma^h$, we have $R(X,Y)=v\circ h[hX,Y]=0$, $\forall Y\in\chi(TM)$. That means $X\in N_R$.\\ The curvature $R$ is also written, for all $X,Y\in\chi(TM)$, \begin{equation*} R(X,Y)=[hX,hY]+h^2[X,Y]-h[hX,Y]-h[X,hY]. \end{equation*} If $X\in N_R\cap H^\circ$, given $hX=X$ and $R(X,Y)=0$ for all $Y\in\chi(TM)$, we find $[X,hY]=h[X,hY]$.\\ If $Y$ is a vertical vector field, the above equality still holds, because both sides are zero.\\ That $A_\Gamma^h$ is an ideal is immediate from the expression of $A_\Gamma$. \end{proof} \begin{proposition}\label{P5.3} Let $\overline{A_\Gamma}^h=A_\Gamma^h\cap\overline{A_\Gamma}$; the horizontal vector fields of $\overline{A_\Gamma}^h$ form a commutative ideal of $\overline{A_\Gamma}$. The dimension of $\overline{A_\Gamma}^h$ corresponds to the dimension of $A_\Gamma^h$ if the rank of $A_\Gamma^h$ is constant.
\end{proposition} \begin{proof} By Proposition \ref{P5.2}, $A_\Gamma^h$ is an ideal of $A_\Gamma$, so $\overline{A_\Gamma}^h=A_\Gamma^h\cap\overline{\chi(M)}$ is an ideal of $\overline{A_\Gamma}=A_\Gamma\cap\overline{\chi(M)}$; moreover $v[\overline{X},\overline{Y}]=0$ for all $\overline{X},\overline{Y}\in\overline{A_\Gamma}^h$. Proposition \ref{P3.2} and proposition 2 of \cite{ANO2} give $J[\overline{X},\overline{Y}]=0$ for all $\overline{X},\overline{Y}\in\overline{A_\Gamma}^h$, noting that $[J,\Gamma]=0$. The horizontal and vertical parts of $[\overline{X},\overline{Y}]$ are therefore zero, that is, $[\overline{X},\overline{Y}]=0$.\\ The existence of such an element of $\overline{A_\Gamma}^h$ is given by Proposition \ref{P3.3}. \end{proof} \section{Case of the constant elements of $\overline{A_\Gamma}$} If we expand the equation $[\overline{X},S]=0$ with $S=[C,S]$, we get \begin{equation*} X^l\frac{\partial \Gamma^k_{ij}}{\partial x^l}+\frac{\partial X^l}{\partial x^j}\Gamma^k_{il}+\frac{\partial X^l}{\partial x^i}\Gamma^k_{lj}+\frac{\partial^2 X^k}{\partial x^i\partial x^j}-\frac{\partial X^k}{\partial x^l}\Gamma^l_{ij}=0. \end{equation*} We note that the constant elements of $\overline{A_\Gamma}$ satisfy \begin{equation}\label{E6.1} X^l\frac{\partial \Gamma^k_{ij}}{\partial x^l}=0. \end{equation} \begin{proposition}\label{P6.1} Let $\Gamma$ be a linear connection without torsion. If the constant vector fields of $\overline{A_\Gamma}$ form a commutative ideal of $\overline{A_\Gamma}$, they are at most the constant elements of an ideal $I$ of affine vector fields containing these constants, such that every $\overline{X}\in\overline{A_\Gamma}$ is written $\overline{X}=\overline{X_1}+\overline{X_2}$ with $\overline{X_2}\in I$ and $[\overline{X_1},\overline{X_2}]=0$; the derived ideal of $\overline{A_\Gamma}$ then never coincides with $\overline{A_\Gamma}$.
\end{proposition} \begin{proof} \begin{description} \item[1st case:] The functions $G^k$ do not depend on some coordinates in an open set $U$ of $M$. To simplify, after possibly reordering the coordinates, the spray $S$ is such that $\frac{\partial G^k}{\partial x^{p+1}}=0$, $\dots$, $\frac{\partial G^k}{\partial x^{n}}=0$, $k\in\{1,\dots, n\}$ and $1\leq p\leq n-1$. Then, we have $\frac{\partial}{\partial x^{p+1}},\dots, \frac{\partial}{\partial x^{n}}\in\overline{A_\Gamma}(U)$.\\ For any $\overline{X}\in \overline{A_\Gamma}(U)$, we can write \begin{eqnarray*} \overline{X}&=& X^i\frac{\partial}{\partial x^i}+y^j\frac{\partial X^i}{\partial x^j}\frac{\partial}{\partial y^i}\\ &=& \sum_{l=1}^{p}(X^l\frac{\partial}{\partial x^l}+y^j\frac{\partial X^l}{\partial x^j}\frac{\partial}{\partial y^l})+\sum_{r=p+1}^{n}(X^r\frac{\partial}{\partial x^r}+y^j\frac{\partial X^r}{\partial x^j}\frac{\partial}{\partial y^r}),\ 1\leq j\leq n. \end{eqnarray*} For the Lie subalgebra generated by $\{\frac{\partial}{\partial x^{p+1}},\dots, \frac{\partial}{\partial x^{n}}\}$ to form an ideal of $\overline{A_\Gamma}(U)$, the bracket $[\frac{\partial}{\partial x^{h}},\overline{X}]$ must belong to this ideal for all $h$, $p+1\leq h\leq n$.\\ This implies $\frac{\partial X^l}{\partial x^{h}}=0$ for all $l$ such that $1\leq l\leq p$ and all $h$ such that $p+1\leq h\leq n$.\\ We then have $X^r=a_s^r x^s+b^r$, $p+1\leq r,s\leq n$; $a_s^r,b^r\in \mathbb{R}$.\\ Setting \begin{eqnarray*} \overline{X_1}&=&\sum_{l=1}^{p}(X^l\frac{\partial}{\partial x^l}+y^j\frac{\partial X^l}{\partial x^j}\frac{\partial}{\partial y^l}),\ 1\leq j\leq n\\ \overline{X_2}&=&\sum_{r=p+1}^{n}(a_s^r x^s+b^r)\frac{\partial}{\partial x^r}+a_s^r y^s\frac{\partial}{\partial y^r},\ p+1\leq s\leq n, \end{eqnarray*} every element $\overline{X}\in\overline{A_\Gamma}$ is written $\overline{X}=\overline{X_1}+\overline{X_2}$ with $[\overline{X_1},\overline{X_2}]=0$.
\item[2nd case:] The elements of $\overline{A_\Gamma}$ are of the form $a^l\frac{\partial}{\partial x^l}$, $l\in\{1,\dots, p\}$. The decomposition of the elements of $\overline{A_\Gamma}$ proceeds in the same way. In either case, the derived ideal of $\overline{A_\Gamma}$ never coincides with $\overline{A_\Gamma}$. \end{description} \end{proof} \begin{theorem}\label{T6.2} The Lie algebra $\overline{A_\Gamma}$ is semi-simple if and only if the space of horizontal projectable vector fields in the nullity space of the curvature $R$ is zero and the derived ideal of $\overline{A_\Gamma}$ coincides with $\overline{A_\Gamma}$. \end{theorem} \begin{proof} If the Lie algebra $\overline{A_\Gamma}$ is semi-simple, any commutative ideal of $\overline{A_\Gamma}$ reduces to zero by definition. According to Proposition~\ref{P5.3}, the space of horizontal projectable vector fields in the nullity space of the curvature $R$ of $\Gamma$ is then zero. The derived ideal of $\overline{A_\Gamma}$ coincides with $\overline{A_\Gamma}$ by a classical result.\\ Conversely, if $\overline{X}\in\overline{A_\Gamma}$, we have $[\overline{X},h]=0$. According to the Jacobi identity (cf.~\cite{FN1}), $[\overline{X},[h,h]]=0$, i.e.\ $[\overline{X},R]=0$. We can write $[\overline{X},R(Y,Z)]=R([\overline{X},Y],Z)+R(Y,[\overline{X},Z])$, for all $Y,Z\in\chi(TM)$. If $\overline{X}$ and $\overline{Y}$ are elements of a commutative ideal of $\overline{A_\Gamma}$, we find \begin{eqnarray}\label{E6.2} [\overline{X},R(\overline{Y},Z)]=R(\overline{Y},[\overline{X},Z]),\ \forall Z\in\chi(TM). \end{eqnarray} If the space of horizontal projectable vector fields in the nullity space of the curvature $R$ is zero, the semi-basic vector $2$-form $R$ is non-degenerate on $\overline{\chi(M)}\times\chi(TM)$.
The only possible case for equation (\ref{E6.2}) is that the commutative ideal of $\overline{A_\Gamma}$ is at most formed by constant vector fields of $\overline{A_\Gamma}$; according to Proposition~\ref{P6.1}, the derived ideal of $\overline{A_\Gamma}$ never coincides with $\overline{A_\Gamma}$ if this ideal formed by constant vector fields is not zero. \end{proof} \begin{example} We take $ M = \mathbb{R}^3 $ and a spray $ S $: \begin{equation*} S= y^1\frac{\partial}{\partial x^1}+y^2\frac{\partial}{\partial x^2}+y^3\frac{\partial}{\partial x^3} -2(e^{x^3}(y^1)^2+y^2y^3) \frac{\partial}{\partial y^1}, \end{equation*} and the linear connection $\Gamma=[J,S]$. The non-zero coefficients of $ \Gamma $ are \begin{eqnarray*} &&\Gamma^1_1=2e^{x^3}y^1,\ \Gamma^1_2= y^3,\ \Gamma^1_3=y^2. \end{eqnarray*} A basis of the horizontal space of $\Gamma$ is written \begin{eqnarray*} &&\frac{\partial}{\partial x^1}-2e^{x^3}y^1\frac{\partial}{\partial y^1},\\ &&\frac{\partial}{\partial x^2}-y^3\frac{\partial}{\partial y^1},\\ &&\frac{\partial}{\partial x^3}-y^2\frac{\partial}{\partial y^1}. \end{eqnarray*} The horizontal nullity space of the curvature is generated as a module by \begin{eqnarray*} (y^1-y^2)\frac{\partial}{\partial x^2}+y^3\frac{\partial}{\partial x^3}-y^1y^3\frac{\partial}{\partial y^1}. \end{eqnarray*} The horizontal nullity space is not generated as a module by projectable vector fields in $hN_R$. According to Proposition~\ref{P4.3}, this linear connection cannot come from an energy function.\\ The Lie algebra $ \overline {A_\Gamma} $ is generated as a Lie algebra by \begin{eqnarray*} g_1= x^1\frac{\partial}{\partial x^1}+x^2\frac{\partial}{\partial x^2}-\frac{\partial}{\partial x^3}+y^1\frac{\partial}{\partial y^1}+y^2\frac{\partial}{\partial y^2},\ g_2=\frac{\partial}{\partial x^1},\ g_3=\frac{\partial}{\partial x^2}. \end{eqnarray*} The Lie algebra $\overline{A_\Gamma}$ is that of the affine vector fields containing the commutative ideal $\{g_2,g_3\}$.
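The coefficients listed above can be checked symbolically: for a spray $S=y^i\frac{\partial}{\partial x^i}-2G^k\frac{\partial}{\partial y^k}$, the connection $\Gamma=[J,S]$ has coefficients $\Gamma^k_j=\frac{\partial G^k}{\partial y^j}$, consistent with the table in this example. A small sympy sketch (variable names purely illustrative):

```python
import sympy as sp

# Spray of the example: S = y^i d/dx^i - 2 G^1 d/dy^1 with
# G^1 = e^{x3} (y1)^2 + y2*y3 (and G^2 = G^3 = 0).
x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')
G1 = sp.exp(x3) * y1**2 + y2 * y3

# Coefficients Gamma^1_j = dG^1/dy^j of the connection Gamma = [J, S]
Gamma = [sp.diff(G1, y) for y in (y1, y2, y3)]
assert Gamma == [2 * sp.exp(x3) * y1, y3, y2]  # matches the table above
```

The horizontal basis listed above is then $\frac{\partial}{\partial x^j}-\Gamma^1_j\frac{\partial}{\partial y^1}$.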
\end{example} \section{Lie algebras of infinitesimal isometries} \begin{definition} A vector field $X$ on a Riemannian manifold $(M,E)$ is called an infinitesimal automorphism of the symplectic form $\Omega$ if $L_X\Omega=0$.\\ The set of infinitesimal automorphisms of $\Omega$ forms a Lie algebra, in general of infinite dimension, which we denote by $A_g$. \end{definition} \begin{theorem}\label{T7.2} We denote $\overline{A_g}=A_g\cap \overline{\chi(M)}$. The Lie algebra $\overline{A_g}$ is semi-simple if and only if the horizontal nullity space of the Nijenhuis tensor of $\Gamma$ is zero and the derived ideal of $\overline{A_g}$ coincides with $\overline{A_g}$. \end{theorem} \begin{proof} This is an application of Proposition~\ref{P4.3} and Theorem~\ref{T6.2}.\\ For more information, see \cite{ANO3} and \cite{ANH1}. \end{proof} \begin{example} We take $ M = \mathbb{R}^4 $ and the energy function $$E=\frac{1}{2}(e^{x^3}(y^1)^2+(y^2)^2+e^{x^1}(y^3)^2+e^{x^2}(y^4)^2).$$ The non-zero linear connection coefficients are \begin{eqnarray*} &&\Gamma^1_1=\frac{y^3}{2},\ \Gamma^1_3= -\frac{y^3e^{x^1-x^3}-y^1}{2},\ \Gamma^2_4=-\frac{y^4e^{x^2}}{2},\\ &&\Gamma^3_1=-\frac{y^1e^{x^3-x^1}-y^3}{2},\ \Gamma^3_3=\frac{y^1}{2},\ \Gamma^4_2= \frac{y^4}{2},\ \Gamma^4_4= \frac{y^2}{2}. \end{eqnarray*} The horizontal nullity space of the curvature is zero. \\ The Lie algebra $ \overline {A_\Gamma} $ is generated as a Lie algebra by \begin{eqnarray*} g_1&=& x^4 \frac{\partial}{\partial x^2}-(-e^{-x^2}+\frac{(x^4)^2}{4})\frac{\partial}{\partial x^4}+y^4\frac{\partial}{\partial y^2}-(\frac{x^4y^4}{2}+y^2 e^{-x^2})\frac{\partial}{\partial y^4},\\ g_2&=& -2\frac{\partial}{\partial x^2}+x^4\frac{\partial}{\partial x^4}+y^4\frac{\partial}{\partial y^4},\ g_3=\frac{\partial}{\partial x^4},\ g_4= \frac{\partial}{\partial x^1}+\frac{\partial}{\partial x^3}.
\end{eqnarray*} We see that $g_4$ spans the center of $\overline{A_\Gamma}$, corresponding to the second case of Proposition~\ref{P6.1}, while the Lie algebra $\overline{A_g}$ is generated as a Lie algebra by $g_1,\ g_2,\ g_3$. The Lie algebra $\overline{A_g}$ is simple and isomorphic to $sl(2)$. \end{example} \section{Finite dimensional Lie algebra} Let $A$ be a finite dimensional Lie algebra over $\mathbb{R}$. According to Theorem 3, p.~198 of \cite{BOU2}, there exists a simply connected Lie group $G$ such that $T_e(G)$, the tangent space to $G$ at the identity $e$ of $G$, generates a Lie algebra of vector fields isomorphic to $A$. Let $X_1,\dots,X_m$ be the elements of such an algebra; considering the module over the ring of functions on $G$ generated by $\{X_1,\dots,X_m\}$, the system of vector fields $\{X_1,\dots,X_m\}$ is completely integrable \cite{DDN} (19.7.4). Since we have the algebra isomorphism $X_i\longmapsto \overline{X_i}$ from $\chi(G)$ to $\overline{\chi(G)}$, according to the Frobenius theorem, the system of equations $[\overline{X_i},S]=0$, $i\in\{1,\dots,m\}$, admits a spray $S$ as a solution. With this spray $S$ we have the linear connection $\Gamma=[J,S]$, and we can therefore apply Theorem~\ref{T6.2}. \begin{theorem}\label{T8.2} A Lie algebra $A$ over $\mathbb{R}$ of finite dimension is semi-simple if and only if the derived ideal of $A$ coincides with $A$ and every derivation of $A$ is inner. \end{theorem} \begin{proof} If $\overline{A_\Gamma}^h$ is nonzero, by Proposition~\ref{P5.3}, $\overline{A_\Gamma}^h$ is a commutative ideal of $\overline{A_\Gamma}$, and the derivation $D(e_i)=e_i$, $i\in\{1,\dots, p\}$, with $e_i\in\overline{A_\Gamma}^h$, is an outer derivation if the derived ideal of $\overline{A_\Gamma}$ coincides with $\overline{A_\Gamma}$. Indeed, if the derivation $D$ were inner, there would exist $\overline{X}\in\overline{A_\Gamma}$ such that $D=ad_{\overline{X}}$.
The linear form $\overline{X}\longmapsto Tr(ad_{\overline{X}})$ (the trace of $ad_{\overline{X}}$) vanishes when $\overline{X}$ is of the form $[\overline{Y},\overline{Z}]$, $\overline{Y},\overline{Z}\in\overline{A_\Gamma}$; since $[\overline{A_\Gamma},\overline{A_\Gamma}]=\overline{A_\Gamma}$ (\cite{BOU}, p.~71), this trace vanishes for every $\overline{X}\in\overline{A_\Gamma}$, which is absurd. \end{proof} \begin{remark}\label{R.8.2} We studied in \cite{RRA2} the Lie algebra of polynomial vector fields. This result is not true for a Lie algebra of countable dimension. \end{remark} \begin{remark}\label{R.8.3} The counterexamples of \cite{BEN} are incorrect, taking into account the result of \cite{BOU}, p.~71, and the reasoning developed above. \end{remark}
\section{Introduction} Although no experimental evidence has yet been detected, it is possible that the coordinates of spacetime are noncommutative \cite{snyder}. One theoretical support comes from the fact that field theories constructed on a Moyal space can be viewed as low-energy effective theories from open string theory with a constant NS-NS $B$ field \cite{s-w}. In the context of field theories, spacetime coordinates satisfy \begin{equation} [x^{\mu}, x^{\nu}]=i \theta^{\mu\nu} \label{noncomu} \end{equation} where $\theta^{\mu\nu}$ is a real constant antisymmetric matrix. Energy-momentum conservation still holds, following from translational invariance of the Moyal space. Usually the commutation relations~(\ref{noncomu}) spoil Lorentz invariance. However, it is clear that both Lorentz symmetry and translation symmetry are preserved in $(1+1)$ dimensional spacetime. It is shown in Ref.~\cite{s-w} that a noncommutative gauge theory should be gauge equivalent to an ordinary counterpart defined on a commutative spacetime. The two equivalent descriptions are related to each other by the Seiberg-Witten map. Since then, noncommutative field theories have been studied extensively and many of their properties have been understood. In particular, the unitarity of scalar field theories in noncommutative spacetime was investigated~\cite{gomis} in the approach of covariant perturbation theory and it was shown that the uncertainty relation between temporal and spatial coordinates would lead to non-unitarity of quantum field theories. In other words, noncommutative field theories with $\theta^{0i} \neq 0$ do not satisfy Cutkosky's cutting rules\footnote{Although it has been shown that~\cite{bahns,liao} unitarity is manifest in the approach of time-ordered perturbation theory, this formalism causes other serious flaws~\cite{ohl,reichenbach}. In this paper we restrict our discussion to the context of covariant perturbation theory.}.
Therefore, noncommutative quantum field theories in two dimensional spacetime are not unitary. Unitarity was traditionally treated as a criterion to judge the fate of a quantum field theory. A theory that violates unitarity contains negative-norm states. However, a modern point of view within the framework of effective field theories is that a field theory violating unitarity might still be sensible at low energies as long as the ghost states are unstable so that one cannot have them as asymptotic states (for more details, see Ref.~\cite{antoniadis}). The Schwinger model \cite{schwinger}, quantum electrodynamics of massless fermions in two dimensional spacetime, has been a subject of interest for a long time (see Ref.~\cite{lowenstein} for a review). Ever since Schwinger's pioneering work, it is known that in two dimensional spacetime the theory of a massless Dirac fermion $\psi$ is equivalent to the theory of a massless scalar field $\phi$. In particular, the current operator $\bar{\psi} \gamma_{\mu} \psi$ is equivalent to ${1\over \pi} \epsilon_{\mu\nu} \partial^{\nu} \phi$ and the chiral composites $\bar{\psi}(1\pm \gamma_{5} )\psi$ are $e^{\pm i \sqrt{4 \pi} \phi}$ up to a prescription dependent constant \cite{coleman}. The emergence of phenomena such as confinement of fermions, gauge boson mass generation, the axial anomaly, and nontrivial topological sectors has made this exactly solvable model a productive theoretical laboratory. Calculations in the Schwinger model are easier than in other gauge theories, which provides theorists with a testing ground for new ideas. The aim of this paper is to study the Schwinger model in a Moyal spacetime manifold. We show that, after the Seiberg-Witten map, up to the first nontrivial order in the noncommutative parameter, the counterpart of the noncommutative Schwinger model has only one additional term compared with the ordinary Schwinger model.
This term is of the form $\theta^{\alpha\beta}F_{\beta\gamma}F^{\gamma\delta}F_{\delta\alpha}$. In other words, the fermion sector is not modified by the noncommutative structure of spacetime. It should be stressed that the above statement is only correct in two dimensional spacetime. As is well known, the fermion anomaly has many important applications in physics. Since the fermion sector is the same as in the ordinary Schwinger model, the ABJ anomaly is left unchanged up to the first nontrivial order in the noncommutative parameter. Moreover, we will show perturbatively that the gauge boson mass is not modified by the self-energy diagram constructed from the three-photon vertex. Note that the gauge boson self-interaction terms in an ordinary non-Abelian gauge theory originate from the noncommutativity of the internal space. Here the photon self-interaction term is due to the noncommutative structure of the external spacetime. What are the physical effects in the presence of the photon self-interaction term? Answering this question is the subject of this paper. Indeed, we will show by explicit calculations that a higher derivative operator of Lee-Wick type \cite{lee-wick} is generated dynamically. Once this is done, we argue that the noncommutativity between temporal and spatial coordinates provides a natural explanation of the presence of higher derivative operators in theories such as Lee-Wick QED \cite{lee-wick} and the Lee-Wick standard model \cite{wise} (see Ref.~\cite{wu} for another prescription). For early works on the noncommutative Schwinger model, see Ref.~\cite{grosse}. The rest of the paper is organized as follows. In the next section we consider the $\theta$-expansion of the noncommutative Schwinger model by first studying $(n+1)$ dimensional noncommutative QED. In this way one can see manifestly which features are dimensionality-dependent. Global and local symmetries are discussed. Most of the remarks in this section are well-known.
We then proceed with calculating the one-loop vacuum polarization diagrams in section III. We show that because of the noncommutativity of spacetime a higher derivative operator is generated and the photon mass is not modified by the structure of spacetime. In section IV we discuss the relation between the emergence of higher derivative operators and the unitarity of a noncommutative field theory. We conclude our investigations in the last section. \section{From $(n+1)$ noncommutative QED to noncommutative Schwinger model } We start with quantum electrodynamics of massless fermions in a noncommutative $\mathbb{R}^{1,n} $ space characterized by Eqn.~(\ref{noncomu}). Its action is given by \begin{equation} S=\int d^{n+1} x \left( -{1\over 4} \hat{F}_{\mu\nu} \star \hat{F}^{\mu\nu} + \hat{\bar{\psi}} \star i \hat{D\hskip-0.65em /}\hat{\psi} \right) \label{action1} \end{equation} where the field strength of the gauge connection $\hat{A}_{\mu}$ and the covariant derivative are defined as \begin{eqnarray} \hat{F}_{\mu\nu} &=& \partial_{\mu} \hat{A}_{\nu} - \partial_{\nu} \hat{A}_{\mu} - i g [ \hat{A}_{\mu} , \hat{A}_{\nu} ]_{\star}, \\ \hat{D}_{\mu} \hat{\psi} &=& \partial_{\mu} \psi -ig \hat{A}_{\mu} \star \hat{\psi}. \end{eqnarray} Here $[A,B]_{\star}$ denotes Moyal bracket: \begin{equation} [A,B]_{\star} \equiv A \star B - B \star A . \end{equation} The $\star$-product of two fields $\phi_{1}(x)$ and $\phi_{2}(x)$ is defined as: \begin{equation} \phi_{1}(x) \star \phi_{2}(x) = exp \left( i {\theta^{\mu\nu} \over 2} {\partial \over \partial \xi^{\mu}}{\partial \over \partial \eta^{\nu}} \right) \phi_{1}(x+\xi) \phi_{2}(x+\eta) \vert_{\xi=\eta=0}. \end{equation} The coupling constant $g$ has mass dimension $({3-n \over 2})$. Therefore in four dimensional spacetime it is dimensionless while in two dimensional spacetime it has dimension 1. 
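For coordinate functions the $\theta$-expansion of the $\star$-product terminates at first order, so the defining relation $[x^{\mu},x^{\nu}]_{\star}=i\theta^{\mu\nu}$ can be checked exactly from the truncated product $\phi_{1}\star\phi_{2}=\phi_{1}\phi_{2}+\frac{i}{2}\theta^{\mu\nu}\partial_{\mu}\phi_{1}\partial_{\nu}\phi_{2}+O(\theta^{2})$. A sympy sketch in two dimensions (names illustrative):

```python
import sympy as sp

# First-order Moyal star product; for linear functions the expansion is exact,
# so the commutator [x^0, x^1]_star = i theta^{01} can be verified directly.
t, x, theta = sp.symbols('t x theta', real=True)
coords = (t, x)
th = sp.Matrix([[0, theta], [-theta, 0]])  # theta^{mu nu}, antisymmetric

def star(f, g):
    """f * g truncated at first order in theta."""
    corr = sum(sp.I * th[m, n] * sp.diff(f, coords[m]) * sp.diff(g, coords[n]) / 2
               for m in range(2) for n in range(2))
    return sp.expand(f * g + corr)

commutator = sp.simplify(star(t, x) - star(x, t))
assert commutator == sp.I * theta  # [x^0, x^1]_star = i theta
```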
The action~(\ref{action1}) is invariant under the noncommutative $U(1)$ gauge transformations: \begin{eqnarray} \hat{\delta}_{\hat{\lambda}} \hat{A}_{\mu} &=& {1\over g} \partial_{\mu} \hat{\lambda} +i [ \hat{\lambda} , \hat{A}_{\mu}]_{\star},\\ \hat{\delta}_{\hat{\lambda}} \hat{\psi} &=& i \hat{\lambda} \star \hat{\psi},\\ \hat{\delta}_{\hat{\lambda}} \hat{\bar{\psi}} &=&-i \hat{\bar{\psi}} \star \hat{\lambda}. \end{eqnarray} To the lowest order in the noncommutative parameter, the Seiberg-Witten map gives \cite{s-w} \begin{eqnarray} \hat{A}_{\mu}&=& A_{\mu} - {g\over 2} \theta^{\rho\sigma} A_{\rho} \left( \partial_{\sigma} A_{\mu} + F_{\sigma\mu} \right) +O(\theta^{2}),\\ \hat{\psi}&=& \psi -{g\over 2} \theta^{\rho\sigma} A_{\rho} \partial_{\sigma} \psi +O(\theta^{2}),\\ \hat{\lambda}&=& \lambda - {g\over 2} \theta^{\rho \sigma} A_{\rho} \partial_{\sigma} \lambda +O(\theta^{2}), \end{eqnarray} where $A_{\mu}$, $\psi$ and $\lambda$ are the ordinary gauge field, fermion field and gauge transformation parameter respectively. Using these expressions, up to order $O(\theta)$ \cite{jurco}, we have \begin{equation} \begin{split} S= \int d^{n+1} x & \left( -{1\over 4} \left[ \left( 1+{g\over2} \theta^{\mu\rho}F_{\rho\mu} \right) F^{\alpha\beta}F_{\alpha\beta} + 2g \theta^{\mu\rho} {F_{\rho}}^{\nu} {F_{\mu}}^{\sigma} F_{\sigma\nu} \right] \right. \\ &\left. + \left(1+{g\over 4} \theta^{\mu\rho}F_{\rho\mu}\right) \bar{\psi} i D\hskip-0.65em / \psi -{g\over 2} \theta^{\alpha\beta} \bar{\psi} i \gamma^{\mu} F_{\mu\alpha} D_{\beta} \psi \right).\label{1staction} \end{split} \end{equation} This expansion makes sense since the noncommutativity of spacetime, if it exists, is small. The second and third terms in~(\ref{1staction}) are the photon self-interaction terms. Similar to Yang-Mills theories in commutative spacetime, the existence of gauge boson self-interaction terms causes the theory to be asymptotically free \cite{martin}.
However, in noncommutative quantum electrodynamics it is the structure of spacetime that causes the nonlinearity of the field strength in the gauge connection. A remarkable fact is that for $n > 1$ all the $\theta$-dependent terms in Eqn.~(\ref{1staction}) are Lorentz violating. Thus in four dimensional spacetime the action~(\ref{1staction}) provides an interesting Lorentz-violating extension of QED. For further details on this topic, we refer to Ref.~\cite{carroll}. The canonical momenta conjugate to the fermion field $\psi$ and the gauge field $A_{\mu}$ are \begin{equation} \Pi_{\psi}\equiv {\partial \mathcal{L} \over \partial \dot{\psi}}=\left( 1+{g\over 4} \theta^{\mu\rho} F_{\rho\mu} \right) i \psi^{\dagger} -{g\over 2} \theta^{\alpha 0}\bar{\psi} i \gamma^{\mu} F_{\mu\alpha},\label{pi} \end{equation} and \begin{equation} \begin{split} \Pi^{\mu} \equiv {\partial \mathcal{L} \over \partial \dot{A}_{\mu}} = & {g\over 4} \theta^{0 \mu } F^{\alpha\beta} F_{\alpha\beta} +\left( 1+ {g\over 2} \theta^{\mu\rho} F_{\rho\mu} \right)F^{ \mu 0}- g \left( \theta^{\lambda [ 0} F_{\lambda \sigma} F^{\sigma \mu ]} + \theta^{\lambda \rho} {F_{\rho}}^{\mu} {F_{\lambda}}^{0}\right) \\ & +{g\over 2} \theta^{\mu 0}\bar{\psi} i D\hskip-0.65em / \psi + {g\over 2} \theta^{[ 0 \beta} \bar{\psi} i \gamma^{\mu ] } D_{\beta} \psi.\label{piA} \end{split} \end{equation} It is obvious from the above result that $\Pi^{0} = 0$, which means that $A_{0}$ does not propagate. This result is the same as the one in commutative quantum electrodynamics. The well-known physical meaning of this is that there are redundant modes in the Lagrangian. Eqn.~(\ref{pi}) and Eqn.~(\ref{piA}) show that in $(n+1)$ dimensional noncommutative spacetime the canonical momenta $\Pi_{\psi}$ and $\Pi^{\mu}$ depend on both the gauge connection field $A_{\mu}$ and the fermion field $\psi$.
However, as is shown in the following, in two dimensional spacetime, commutative or not, $\Pi_{\psi}$ depends only on the fermion field and $\Pi^{\mu}$ depends only on the gauge connection field. The equations of motion for the Lagrangian in~(\ref{1staction}) are \begin{equation} \left(1+{g\over 4} \theta^{\mu\rho} F_{\rho\mu} \right) i D\hskip-0.65em / \psi - {g \over 2} \theta^{\alpha\beta} F_{\mu\alpha} i \gamma^{\mu} D_{\beta} \psi =0, \end{equation} and \begin{equation} \begin{split} \partial_{\xi}& \left( - F^{\xi \lambda} \left(1+ {g \over 2} \theta^{\mu\rho} F_{\rho \mu} \right) +{g \over 4} \theta^{ \xi \lambda} F^{\alpha\beta} F_{\alpha\beta} - g \left( \theta^{\mu [ \xi} F_{\mu \sigma} F^{\sigma \lambda ]} + \theta^{\mu\rho} {F_{\rho}}^{\lambda} {F_{\mu}}^{\xi} \right) \right. \\ &\left. -{g \over 2} \left(\theta^{\xi \lambda} \bar{\psi} i D\hskip-0.65em / \psi - \theta^{[ \xi \beta} \bar{\psi} i \gamma^{\lambda ]} D_{\beta} \psi \right) \right) = g \left( 1+ {g\over 4} \theta^{\mu\rho} F_{\rho\mu} \right) \bar{\psi} \gamma^{\lambda} \psi - {g\over 2} \theta^{\alpha \lambda} \bar{\psi} \gamma^{\mu} F_{\mu \alpha} \psi . \end{split} \end{equation} For $n \neq 1$, the above equations are the Lorentz-breaking extensions of the Dirac equation and the inhomogeneous Maxwell equations in the presence of sources. At this point, the dimensionality of spacetime is not specified. From now on, we will focus on two dimensional spacetime. Our conventions for the spacetime are \begin{equation} \theta^{\mu\nu}= \theta \epsilon^{\mu\nu} \end{equation} with $\epsilon^{01}=g^{00} = - g^{11} =1$. The dimension of the Lorentz invariant parameter $\theta$ is length-squared. Thus $\sqrt{\theta} $ is related to the noncommutativity length scale. In two dimensional spacetime, the action~(\ref{action1}) is the noncommutative Schwinger model. 
It is straightforward to show from Eqn.~(\ref{1staction}) that in this context the Lagrangian is \begin{equation} \begin{split} \mathcal{L} &= - {1\over 4} \left( F_{\alpha\beta} F^{\alpha\beta} + g \theta \epsilon^{\alpha \beta}F_{\beta\gamma} F^{\gamma\delta} F_{\delta\alpha} \right) + \bar{\psi} i D\hskip-0.65em / \psi \\ &=- {1\over 4} \left( 1+{g\over 2} \theta \epsilon^{\alpha\beta} F_{\alpha\beta} \right) F_{\mu\nu} F^{\mu\nu} + \bar{\psi} i D\hskip-0.65em / \psi . \label{schwinger} \end{split} \end{equation} From the first line to the second line in Eqn.~(\ref{schwinger}), we used the relation $\epsilon^{\alpha\beta}\epsilon_{\beta\gamma}\epsilon^{\gamma\delta}\epsilon_{\delta\alpha}={1\over 2} ( \epsilon^{\alpha\beta} \epsilon_{\alpha\beta} )^2$, which holds in two dimensional spacetime. Therefore, with spacetime noncommutativity, up to the first nontrivial order in $\theta$ we end up with a ``modified" Schwinger model. The difference between the Lagrangian~(\ref{schwinger}) and the Lagrangian of the ordinary Schwinger model is the three-photon interaction term. Besides the local $U(1)$ gauge symmetry, the Lagrangian~(\ref{schwinger}) is invariant under global $U(1) \otimes U(1)^{\prime}$ symmetries, where $U(1)$ and $U(1)^{\prime}$ are the charge and chirality symmetries of the massless fermion. Discrete symmetries of noncommutative quantum electrodynamics have been investigated in \cite{sheikh}. One can do the same analysis for the ``modified" Schwinger model defined by~(\ref{schwinger}). It is not difficult to show that it is parity ($P$) invariant. However, both charge conjugation ($C$) and time reversal ($T$) are violated by the three-photon interaction term. A theory with a positive noncommutative parameter is related to the one with a negative noncommutative parameter by charge conjugation. Consequently, the theory is still $CPT$ invariant while $C$ and $CP$ are violated.
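The two dimensional rearrangement used in Eqn.~(\ref{schwinger}) amounts to the identity $\epsilon^{\alpha\beta}F_{\beta\gamma}F^{\gamma\delta}F_{\delta\alpha}=\frac{1}{2}\,\epsilon^{\alpha\beta}F_{\alpha\beta}\,F_{\mu\nu}F^{\mu\nu}$, which can be verified by brute-force index summation with $g^{\mu\nu}=\mathrm{diag}(1,-1)$ and $\epsilon^{01}=1$. A sympy sketch:

```python
import sympy as sp

# In 2D, F_{mu nu} has a single independent component f = F_{01}.
f = sp.Symbol('f')
g = sp.diag(1, -1)                     # g^{mu nu} = g_{mu nu}
eps_up = sp.Matrix([[0, 1], [-1, 0]])  # eps^{mu nu}
F_dn = sp.Matrix([[0, f], [-f, 0]])    # F_{mu nu}
F_up = g * F_dn * g                    # F^{mu nu} = g^{ma} F_{ab} g^{bn}

lhs = sum(eps_up[a, b] * F_dn[b, c] * F_up[c, d] * F_dn[d, a]
          for a in range(2) for b in range(2) for c in range(2) for d in range(2))
epsF = sum(eps_up[a, b] * F_dn[a, b] for a in range(2) for b in range(2))
FF = sum(F_dn[m, n] * F_up[m, n] for m in range(2) for n in range(2))
rhs = sp.Rational(1, 2) * epsF * FF

assert sp.expand(lhs - rhs) == 0  # both sides equal -2 f^3
```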
Note that since the noncommutative Schwinger model is manifestly Lorentz invariant, unlike four dimensional noncommutative quantum electrodynamics, $CPT$ invariance is not a surprising result. One interesting result in Eqn.~(\ref{schwinger}) is that, to first order in $\theta$, the fermion sector is not modified in $(1+1)$ dimensional spacetime. This means that, unlike the gauge boson, a two dimensional fermion does not know whether the spacetime is commutative and responds in the same way in either case. Since there is an exact mapping between the fermionic and bosonic theories in two dimensional spacetime, the same is true for a scalar. From this one can easily conclude that the ABJ anomaly is the same as the ordinary result up to the first nontrivial order in $\theta$. This agrees with the analysis in Ref.~\cite{banerjee}, which calculated the axial anomaly in an arbitrary even dimensional noncommutative field theory by using the point-splitting regularization. The canonical momenta are \begin{eqnarray} \Pi_{\psi}&=&i \psi^{\dagger},\\ \Pi^{0}&=&0, \qquad \textrm{and}\qquad \Pi^{1}= F^{10} + {3\over2}g\theta (F^{01})^2. \end{eqnarray} The equations of motion arising from the Lagrangian~(\ref{schwinger}) are \begin{equation} iD\hskip-0.65em / \psi =0 \label{dirac} \end{equation} and \begin{equation} \partial_{\xi} \left( F^{\lambda \xi} - {3\over 2} g \theta \epsilon^{\alpha\xi}F^{\lambda\delta} F_{\delta\alpha} \right)= g \bar{\psi}\gamma^{\lambda} \psi. \label{eom} \end{equation} Eqn.~(\ref{dirac}) is nothing but the Dirac equation. From Eqn.~(\ref{eom}), it is straightforward to show that the current operator $\bar{\psi}\gamma^{\mu} \psi$ is conserved. \section{Vacuum polarization} In analogy to commutative gauge theories, perturbative analysis begins with gauge fixing. Feynman rules for the fermion and photon propagators and the fermion-photon vertex are the same as the ones in the commutative Schwinger model.
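The expression for $\Pi^{1}$ can be recovered directly from the gauge part of~(\ref{schwinger}): writing $f=F_{01}=\partial_{0}A_{1}-\partial_{1}A_{0}$, one has $\epsilon^{\alpha\beta}F_{\alpha\beta}=2f$ and $F_{\mu\nu}F^{\mu\nu}=-2f^{2}$, so the gauge Lagrangian reduces to $\frac{1}{2}f^{2}+\frac{g\theta}{2}f^{3}$. A sympy sketch (symbol names illustrative):

```python
import sympy as sp

# Check Pi^1 = F^{10} + (3/2) g theta (F^{01})^2 from the 2D gauge Lagrangian.
g_c, theta, A1dot, dA0 = sp.symbols('g theta A1dot dA0')
f = A1dot - dA0  # f = F_{01} = d_0 A_1 - d_1 A_0

# L_gauge = -(1/4)(1 + (g/2) theta eps^{ab} F_{ab}) F_{mn} F^{mn}
#         = -(1/4)(1 + g theta f)(-2 f^2)
L_gauge = -sp.Rational(1, 4) * (1 + g_c * theta * f) * (-2 * f**2)

Pi1 = sp.expand(sp.diff(L_gauge, A1dot))  # Pi^1 = dL/d(A1dot)
F10_up = f   # F^{10} = g^{11} g^{00} F_{10} = f
F01_up = -f  # F^{01} = -f
assert sp.expand(Pi1 - (F10_up + sp.Rational(3, 2) * g_c * theta * F01_up**2)) == 0
```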
It is straightforward to derive from Eqn.~(\ref{schwinger}) the three-photon vertex and it reads: \begin{equation} \begin{split} V^{\mu\nu\rho}(k,p,q) ={1\over 2} \theta g \epsilon^{\alpha\beta} \left( k_{\beta} \left( \left(p^{\mu}q^{\nu}-p\cdot q g^{\mu\nu} \right)g^{\rho}_{\alpha} + \left(q^{\mu} p^{\rho} -p\cdot q g^{\mu\rho} \right) g^{\nu}_{\alpha} \right) + p_{\beta} \left( \left( k^{\rho}q^{\nu}-k\cdot q g^{\rho\nu}\right) g^{\mu}_{\alpha} \right. \right. \\ \left. + \left( q^{\mu} k^{\nu}-k\cdot q g^{\mu\nu}\right) g^{\rho}_{\alpha} \right) + q_{\beta} \left( \left( k^{\rho}p^{\mu}-k\cdot p g^{\mu\rho}\right) g^{\nu}_{\alpha} + \left( p^{\rho} k^{\nu}-k\cdot p g^{\nu\rho}\right) g^{\mu}_{\alpha} \right) \\ \left. -p^{\mu} \left(q_{\alpha}k_{\beta} g^{\nu\rho}+g^{\nu}_{\alpha} g^{\rho}_{\beta} k\cdot q \right) - p^{\rho} \left(k_{\alpha} q_{\beta} g^{\nu\mu}+g^{\nu}_{\alpha} g^{\mu}_{\beta} k\cdot q \right) -q^{\mu} \left(p_{\alpha}k_{\beta} g^{\nu\rho}+g^{\rho}_{\alpha} g^{\nu}_{\beta} k\cdot p \right) \right. \\ \left. - q^{\nu} \left(k_{\alpha}p_{\beta} g^{\mu\rho}+g^{\rho}_{\alpha} g^{\mu}_{\beta} k\cdot p \right) -k^{\nu} \left(q_{\alpha}p_{\beta} g^{\mu\rho}+g^{\mu}_{\alpha} g^{\rho}_{\beta} p\cdot q \right)-k^{\rho} \left(p_{\alpha}q_{\beta} g^{\mu\nu}+g^{\mu}_{\alpha} g^{\nu}_{\beta} p\cdot q \right) \right) \end{split} \end{equation} where photon momenta $k^{\mu}$, $p^{\nu}$ and $q^{\rho}$ satisfy $k+p+q=0$. \begin{figure}[t] \begin{center} \includegraphics[width=10cm,clip=true,keepaspectratio=true]{loop.eps} \caption{\small The lowest-order vacuum polarization diagrams. Wave lines denote photons, solid lines denote fermions.} \end{center}\label{loop} \end{figure} There are two one-loop diagrams for the self-energy of the gauge boson, as shown in Fig. 1. We do not include the tadpole diagrams since they are identically zero. 
In the commutative Schwinger model it is a well-known fact that the fermion-loop diagram dynamically generates a mass for the photon field; the result is $m_{\gamma}={g \over \sqrt \pi}$ \cite{schwinger}. The mass generation is due to the IR behavior of the intermediate state formed by a fermion-antifermion pair. Recall that the Lagrangian defined by (\ref{schwinger}) has another Lorentz invariant dimensionful constant, the noncommutativity parameter $\theta$. It is natural at this point to expect that the photon mass is going to be changed by the photon-loop diagram. However, as we shall show by explicit calculation, this is not the case. The photon-loop diagram gives \begin{equation} {1\over2} \int {d^2p \over (2\pi)^2} V^{\alpha\beta \mu} (-(p+q),p,q ) G_{\alpha\rho}(p+q) V^{\rho\lambda\nu}( p+q, -p, -q ) G_{\beta \lambda}(p) \end{equation} where $G_{\alpha\rho}(p+q)$ and $ G_{\beta \lambda}(p)$ are photon propagators and ${1\over2}$ is a symmetry factor. Note that this integral is gauge independent. By naive power counting, the photon-loop diagram is quadratically divergent. We use the dimensional regularization scheme and evaluate the integrals in $(2-\epsilon)$ dimensional spacetime to extract possible singularities. As a matter of fact, explicit calculation shows that all spurious poles cancel each other and the diagram is well-defined and finite. Remarkably, the degree of divergence is lower than expected because of the gauge symmetry. The same striking phenomenon happens for the fermion-loop diagram, whose superficial degree of divergence is 0. The calculation is straightforward and we report here the final result for the photon-loop diagram: \begin{equation} i \Pi^{\mu\nu}(q) = -i {(\theta g)^2 \over 2 \pi} q^2 \left(q^{\mu} q^{\nu} - g^{\mu\nu} q^2\right) .\label{piloop} \end{equation} This result shows that instead of contributing to the photon mass, the radiative correction due to the photon loop generates a new type of operator.
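As a consistency check, gauge invariance requires $q_{\mu}\Pi^{\mu\nu}(q)=0$, and the tensor structure $q^{2}(q^{\mu}q^{\nu}-g^{\mu\nu}q^{2})$ above is indeed transverse. A sympy sketch with $g=\mathrm{diag}(1,-1)$:

```python
import sympy as sp

# Transversality of the one-loop tensor q^2 (q^mu q^nu - g^{mu nu} q^2).
q0, q1 = sp.symbols('q0 q1')
g = sp.diag(1, -1)
q_up = sp.Matrix([q0, q1])  # q^mu
q_dn = g * q_up             # q_mu
q2 = (q_dn.T * q_up)[0]     # q^2 = q_mu q^mu

Pi = sp.Matrix(2, 2, lambda m, n: q2 * (q_up[m] * q_up[n] - g[m, n] * q2))
contraction = q_dn.T * Pi   # q_mu Pi^{mu nu}, a 1x2 row
assert all(sp.simplify(e) == 0 for e in contraction)
```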
The independence of the Schwinger mass from the noncommutativity of spacetime makes sense since the mechanism for the photon mass generation is purely an IR effect. In fact, one can show that the term $i q^2(q^{\mu}q^{\nu}- q^2 g^{\mu\nu})$ corresponds to the dimension four operator $-{1\over 2}\partial_{\mu}F^{\mu\nu}\partial^{\lambda}F_{\lambda\nu}$. Therefore, a higher derivative term is dynamically generated due to the noncommutativity of spacetime. Note that at quadratic order in the fields the $\theta$-expanded action of a noncommutative theory is the same as in the commutative theory. This means that the $O(\theta^2)$ part of the classical noncommutative Schwinger model contains only interaction terms. The higher derivative kinetic terms will not show up in the classical action. Thus the appearance of the operator $\partial_{\mu}F^{\mu\nu}\partial^{\lambda}F_{\lambda\nu}$ is a pure quantum effect. Higher derivative terms were also found in Ref.~\cite{bichl}, where the quantization of noncommutative QED via the Seiberg-Witten map is investigated. It is argued that because of the existence of nonrenormalizable vertices in the $\theta$-expanded action, higher derivative terms allowed by symmetries should be added to extend the action in order to absorb the divergences. It was later realized \cite{bichl2} that higher derivative terms are in fact a part of the Seiberg-Witten map. A remarkable difference here is that while the term $\partial F \partial F $ is of order $\theta$ in the noncommutative QED action, it appears at second order in $\theta$ in the noncommutative Schwinger model. This is due to the fact that the coupling constant $g$ is dimensionful in two dimensional spacetime. Besides, our calculation shows that while the radiative corrections to the photon self-energy are divergent in noncommutative QED, they are finite in the noncommutative Schwinger model.
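The stated correspondence can be checked at quadratic order in $A_{\mu}$: replacing $\partial_{\mu}\to -iq_{\mu}$ and dropping overall factors of $i$, the operator $-\frac{1}{2}\partial_{\mu}F^{\mu\nu}\partial^{\lambda}F_{\lambda\nu}$ has the momentum-space kernel $\frac{1}{2}A_{\mu}\,q^{2}(q^{\mu}q^{\nu}-q^{2}g^{\mu\nu})A_{\nu}$. A sympy sketch:

```python
import sympy as sp

# Momentum-space form of -1/2 d_mu F^{mu nu} d^lam F_{lam nu} in 2D, g = diag(1,-1).
q0, q1, A0, A1 = sp.symbols('q0 q1 A0 A1')
g = sp.diag(1, -1)
q_up = sp.Matrix([q0, q1]); A_up = sp.Matrix([A0, A1])
q_dn = g * q_up;            A_dn = g * A_up
q2 = (q_dn.T * q_up)[0]
qA = (q_dn.T * A_up)[0]

# d_mu F^{mu nu} -> q^2 A^nu - q^nu (q.A), with the overall i's dropped
dF = sp.Matrix([q2 * A_up[n] - q_up[n] * qA for n in range(2)])
op = -sp.Rational(1, 2) * (dF.T * g * dF)[0]  # -1/2 dF^nu dF_nu

kernel = sum(sp.Rational(1, 2) * A_dn[m] * q2 * (q_up[m] * q_up[n] - g[m, n] * q2) * A_dn[n]
             for m in range(2) for n in range(2))
assert sp.expand(op - kernel) == 0
```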
Hence, like the generation of the photon mass, the appearance of the operator $\partial_{\mu}F^{\mu\nu}\partial^{\lambda}F_{\lambda\nu}$ is a dynamical effect. \section{Unitarity} We next consider the issue of unitarity. Including the higher derivative term generated by the photon loop into the Lagrangian (\ref{schwinger}), the extended gauge sector becomes\footnote{To avoid unnecessary complexity, we do not include the term $\sim F_{\mu\nu}{1\over \Box} F^{\mu\nu}$ generated by the fermion loop since the following discussion on unitarity does not depend on the pole of the photon propagator.} \begin{equation} - {1\over 4} \left( F_{\alpha\beta} F^{\alpha\beta} + {\sqrt{2 \pi} \over M_{\theta}} \epsilon^{\alpha \beta}F_{\beta\gamma} F^{\gamma\delta} F_{\delta\alpha} \right) + {1 \over 2 M_{\theta}^{2}}\partial_{\alpha}F^{\alpha\mu}\partial^{\beta}F_{\beta\mu} \label{gauge} \end{equation} where $M_{\theta} \equiv {\sqrt{2 \pi} \over g\theta}$. The positive sign of the higher derivative term ensures vacuum stability. $M_{\theta}$ and the photon mass $m_{\gamma}$ are related by \begin{equation} M_{\theta} m_{\gamma} ={\sqrt{2} \over \theta}. \end{equation} Theories with the lowest-order higher derivative operators contain states with negative norm, that is, ghost states.
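As a consistency check of the relation just stated, the following sketch plugs $m_\gamma = g/\sqrt{\pi}$ and $M_\theta = \sqrt{2\pi}/(g\theta)$ into the product $M_\theta m_\gamma$; the numerical values of $g$ and $\theta$ are illustrative assumptions, not taken from the text.

```python
import math

# Check M_theta * m_gamma = sqrt(2)/theta with m_gamma = g/sqrt(pi)
# (the Schwinger mass) and M_theta = sqrt(2*pi)/(g*theta) as defined
# above; g and theta are arbitrary sample values.
g, theta = 0.8, 2.5
m_gamma = g / math.sqrt(math.pi)
M_theta = math.sqrt(2 * math.pi) / (g * theta)
print(M_theta * m_gamma, math.sqrt(2) / theta)   # the two numbers agree
```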
In fact, defining \cite{hawking} \begin{eqnarray} \tilde{A}_{\mu} &=& {1 \over M_{\theta}^{2}} \left[ \left( \Box + M_{\theta}^{2} \right) A_{\mu} - \partial_{\mu} \left( \partial \cdot A \right) \right],\\ \bar{ A}_{\mu} &=& {1 \over M_{\theta}^{2}} \left[ \Box A_{\mu} - \partial_{\mu} \left( \partial \cdot A \right) \right], \end{eqnarray} Eqn.~(\ref{gauge}) can be rewritten as \begin{equation} \begin{split} -{1\over 4} \tilde{F}_{\alpha\beta} \tilde{F}^{\alpha\beta} + {1\over 4} \bar{F}_{\alpha\beta} \bar{F}^{\alpha\beta}-{M_{\theta}^{2} \over 2} \bar{A}_{\alpha}\bar{A}^{\alpha} -\sqrt{{ \pi\over 8}}{1\over M_{\theta}} \epsilon^{\alpha\beta} & \left( \tilde{F}_{\beta\gamma} \tilde{F}^{\gamma\delta} \tilde{F}_{\delta\alpha}- \bar{F}_{\beta\gamma} \bar{F}^{\gamma\delta} \bar{F}_{\delta\alpha} \right. \\ & \left. -3 \tilde{F}_{\beta\gamma} \tilde{F}^{\gamma\delta} \bar{F}_{\delta\alpha} +3 \bar{F}_{\beta\gamma} \bar{F}^{\gamma\delta} \tilde{F}_{\delta\alpha}\right). \label{gauge2} \end{split} \end{equation} The unusual positive sign of the kinetic term of the field $\bar{A}_{\mu}$ means that $\bar{A}_{\mu}$ particles are massive ghosts. The last two terms in Eqn.~(\ref{gauge2}) connect the two Hilbert spaces where $\tilde{A}_{\mu}$ and $\bar{A}_{\mu}$ particles live respectively and cause the nonconservation of the ghost number. This would result in the loss of unitarity. Note that, unlike the photon mass, which is independent of the noncommutativity parameter $\theta$, the mass of the ghost fields $\bar{A}_{\mu}$ depends on both dimensionful parameters of the theory. Since states with $\bar{A}_{\mu}$ particles decouple from the theory in the limit $\theta \rightarrow 0$, unitarity violation is caused by noncommutativity of spacetime. This is consistent with the well-known fact that $\theta^{0i} \neq 0$ leads to unitarity violation \cite{gomis}.
However, as was argued in Ref.~\cite{hawking}, a physical $S$ matrix defined between stable states is still well-behaved as long as ghost particles can decay and do not show up as asymptotic states. In the noncommutative Schwinger model this requires \begin{equation} m_{\gamma}^{2}\theta \leq {1\over \sqrt{2}}. \end{equation} As a result, an interacting field theory with higher derivative operators can still make sense even though unitarity is lost at a high energy scale of order $M_{\theta}$, the ghost mass. In that sense, even though interaction terms between massive ghost fields and physical fields cause a flaw in the theory, they also provide a cure. As a final point, let us consider higher-loop effects. Because of the existence of the three-photon interaction term, which is non-renormalizable by naive power counting, higher-order 1PI vacuum polarization diagrams will in principle generate an infinite number of higher derivative operators. However, it is not a difficult exercise to show that a theory with higher derivative kinetic terms can always be rewritten as another equivalent theory which involves more fields (at least one of them is a ghost if the lowest-order higher derivative kinetic terms exist in the original theory) and contains terms with at most two derivatives in it. Therefore, the above argument still holds. Even though we have only done calculations for the noncommutative Schwinger model, we have reason to believe that the emergence of higher derivative operators is a feature of other theories with space-time noncommutativity. This is because the $\theta$-expansion of any such theory necessarily contains nonrenormalizable vertices. As a result, by power-counting analysis higher derivative operators obeying symmetries will be generated in loop diagrams. Since higher derivative kinetic terms obey the same symmetries as the classical two-derivative kinetic terms, they will naturally be generated by the polarization diagrams.
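Using $M_\theta m_\gamma = \sqrt{2}/\theta$, the bound $m_\gamma^2\theta \le 1/\sqrt{2}$ is algebraically equivalent to $m_\gamma \le M_\theta/2$, i.e. the ghost mass is at least twice the photon mass. The sketch below checks this equivalence numerically; the sample couplings are our own illustrative values.

```python
import math

# Equivalence of m_gamma^2 * theta <= 1/sqrt(2) and m_gamma <= M_theta/2,
# checked for a few arbitrary sample couplings (illustrative values only).
results = []
for g, theta in [(0.3, 1.0), (1.1, 0.4), (2.0, 3.0)]:
    m_gamma = g / math.sqrt(math.pi)
    M_theta = math.sqrt(2 * math.pi) / (g * theta)
    lhs = m_gamma ** 2 * theta <= 1 / math.sqrt(2)
    rhs = m_gamma <= M_theta / 2
    results.append((lhs, rhs))
print(results)   # each pair agrees; the last coupling violates the bound
```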
Thus, from the above discussion we conclude that in the framework of the Seiberg-Witten map the feature of non-unitarity for a noncommutative field theory with $\theta^{0i}\neq 0$ is characterized by the presence of higher derivative kinetic terms. That is, the $\theta$-expanded version of a unitary theory will not generate the lowest-order higher derivative kinetic terms. \section{Conclusion} The purpose of this paper has been to show via a study of the noncommutative Schwinger model that the noncommutativity of space-time, though still awaiting experimental confirmation, provides an explanation for the emergence of higher derivative kinetic terms in a field theory. Our focus on the noncommutative Schwinger model bypasses irrelevant complexities, as can be seen by comparing Eqn.~(\ref{1staction}) and Eqn.~(\ref{schwinger}). This simplicity is mainly because a two dimensional fermion is insensitive to the noncommutativity of spacetime up to order $O(\theta)$. In addition, issues related to the breakdown of Lorentz invariance are avoided in two dimensional spacetime. As an aside, we showed that the Schwinger mass is not modified by the noncommutativity of spacetime up to the first nontrivial order in $\theta$. \begin{acknowledgments} The research of F.W. was supported in part by the National Natural Science Foundation of China under grant No. 10805024 and the project of Chinese Ministry of Education under grant No. 208072. M.Z. was supported in part by the research fund of National University of Defense Technology. \end{acknowledgments}
\section{Introduction}\label{sec:intro} In the algebra of symmetric functions there is interest in determining when two skew Schur functions are equal \cite{HDL, gut, HDL3, HDL2, vW}. The equalities are described in terms of equivalence relations on skew diagrams. It is consequently natural to investigate whether new equivalence relations on skew diagrams arise when we restrict our attention to the subalgebra of skew Schur $Q$-functions. This is a particularly interesting subalgebra to study since the combinatorics of skew Schur $Q$-functions also arises in the representation theory of the twisted symmetric group \cite{Bess, ShawvW, StemP} and the theory of enriched $P$-partitions \cite{Stem}, and hence skew Schur $Q$-function equality would impact these areas. The study of skew Schur $Q$-function equality was begun in \cite{QEq}, where a series of technical conditions classified when a skew Schur $Q$-function is equal to a Schur $Q$-function. In this paper we extend this study to the equality of ribbon Schur $Q$-functions. Our motivation for focussing on this family is that the study of ribbon Schur function equality is fundamental to the general study of skew Schur function equality, as evidenced by \cite{HDL, HDL3, HDL2}. Our method of proof is to study a slightly more general family of skew Schur $Q$-functions, and then restrict our attention to ribbon Schur $Q$-functions. Since the combinatorics of skew Schur $Q$-functions is more technical than that of skew Schur functions, we provide detailed proofs to highlight the subtleties that need to be considered for the general study of equality of skew Schur $Q$-functions. The rest of this paper is structured as follows. In the next section we review operations on skew diagrams, introduce the skew diagram operation \emph{composition of transpositions} and derive some basic properties for it, including associativity in Proposition~\ref{prop:assoc}.
In Section~\ref{sec:schurq} we recall $\Omega$, the algebra of Schur $Q$-functions, discover new bases for this algebra in Proposition~\ref{prop:sbasis} and Corollary~\ref{cor:rbasis}. We see the prominence of ribbon Schur $Q$-functions in the latter, which states \begin{theorem*} The set of all ribbon Schur $Q$-functions ${\mathfrak r} _\lambda$, indexed by strict partitions $\lambda$, forms a $\mathbb{Z}$-basis for $\Omega$. \end{theorem*} Furthermore we determine all relations between ribbon Schur $Q$-functions in Theorems~\ref{ribbonrelations} and \ref{ribbonrelations2}. The latter is particularly succinct: \begin{theorem*}All relations amongst ribbon Schur $Q$-functions are generated by the multiplication rule ${\mathfrak r} _\alpha {\mathfrak r} _\beta = {\mathfrak r} _{\alpha \cdot \beta} + {\mathfrak r} _{\alpha \odot \beta}$ for compositions $\alpha, \beta$, and ${\mathfrak r} _{2m} = {\mathfrak r} _{1^{2m}}$ for $m\geq 1$. \end{theorem*} In Section~\ref{sec:eqskewschurq} we determine a number of instances when two ordinary skew Schur $Q$-functions are equal including a necessary and sufficient condition in Proposition~\ref{prop:power2}. 
Our main theorem on equality is Theorem~\ref{the:bigone}, which is dependent on composition of transpositions denoted $\bullet$, transposition denoted $^t$, and antipodal rotation denoted $^\circ$: \begin{theorem*} For ribbons $\alpha _1, \ldots , \alpha _m$ and skew diagram $D$ the ordinary skew Schur $Q$-function indexed by $$\alpha _1 \bullet \cdots \bullet \alpha _m \bullet D$$is equal to the ordinary skew Schur $Q$-function indexed by $$\beta _1 \bullet \cdots \bullet \beta _m \bullet E$$where $$ \beta _i \in \{ \alpha _i, \alpha _i ^t, \alpha _i ^\circ , (\alpha _i ^t)^\circ = (\alpha _i ^\circ)^t\} \quad 1\leq i \leq m,$$ $$ E\in \{ D, D^t, D^\circ , (D^t)^\circ = (D^\circ)^t\}.$$ \end{theorem*} We restrict our attention to ribbon Schur $Q$-functions again in Section~\ref{sec:ribbons}, and derive further ribbon specific properties including irreducibility in Proposition~\ref{prop:irrrib}, and that the non-commutative analogue of ribbon Schur $Q$-functions is the flag $h$-vector of Eulerian posets in Theorem~\ref{the:commconnection}. \section*{Acknowledgements}\label{sec:ack} The authors would like to thank Christine Bessenrodt, Louis Billera and Hugh Thomas for helpful conversations, Andrew Rechnitzer for programming assistance, and the referee for helpful comments. John Stembridge's QS package helped to generate the pertinent data. Both authors were supported in part by the National Sciences and Engineering Research Council of Canada. \section{Diagrams}\label{sec:diagrams} A \emph{partition}, $\lambda$, of a positive integer $n$, is a list of positive integers $\lambda _1 \geq \cdots \geq \lambda _k >0$ whose sum is $n$. We denote this by $\lambda \vdash n$, and for convenience denote the empty partition of 0 by 0. We say that a partition is \emph{strict} if $\lambda _1 > \cdots > \lambda _k >0$. If we remove the weakly decreasing criterion from the partition definition, then we say the list is a composition. 
That is, a \emph{composition}, $\alpha$, of a positive integer $n$ is a list of positive integers $\alpha _1 \cdots \alpha _k$ whose sum is $n$. We denote this by $\alpha \vDash n$. Notice that any composition $\alpha = \alpha _1 \cdots \alpha _k$ determines a partition, denoted $\lambda (\alpha)$, where $\lambda (\alpha)$ is obtained by reordering $\alpha _1 , \ldots , \alpha _k$ in weakly decreasing order. Given a composition $\alpha = \alpha _1 \cdots \alpha _k\vDash n$ we call the $\alpha _i$ the \emph{parts} of $\alpha$, $n= :|\alpha |$ the \emph{size} of $\alpha$ and $k=: \ell (\alpha)$ the \emph{length} of $\alpha$. There also exist three partial orders on compositions, which will be useful to us later. Firstly, given two compositions $\alpha = \alpha _1\cdots \alpha _{\ell (\alpha)}$, $\beta = \beta _1 \cdots \beta _{\ell(\beta)} \vDash n$ we say $\alpha $ is a \emph{coarsening} of $\beta$ (or $\beta$ is a \emph{refinement} of $\alpha$), denoted $\alpha \succcurlyeq \beta$, if adjacent parts of $\beta$ can be added together to yield the parts of $\alpha$, for example, $5312 \succcurlyeq 1223111$. Secondly, we say $\alpha$ \emph{dominates} $\beta$, denoted $\alpha \geq \beta$, if $\alpha _1 + \cdots +\alpha _i \geq \beta _1 + \cdots + \beta _i $ for $i=1, \ldots , \min\{\ell (\alpha), \ell(\beta)\}.$ Thirdly, we say $\alpha$ is \emph{lexicographically greater than} $\beta$, denoted $\alpha >_{lex} \beta$, if $\alpha \neq \beta$ and the first $i$ for which $\alpha _i \neq \beta _i$ satisfies $\alpha _i > \beta _i$. From partitions we can also create diagrams as follows. Let $\lambda $ be a partition. Then the array of left justified cells containing $\lambda _i$ cells in the $i$-th row from the top is called the \emph{(Ferrers or Young) diagram} of $\lambda$, and we abuse notation by also denoting it by $\lambda$.
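The three partial orders can be sketched in code; the following minimal Python implementations (our own, with compositions encoded as tuples of positive integers) reproduce the example $5312 \succcurlyeq 1223111$:

```python
# Minimal sketches of the three partial orders on compositions.
def is_coarsening(alpha, beta):
    """True if alpha is a coarsening of beta (adjacent parts of beta
    can be added together to yield the parts of alpha)."""
    it = iter(beta)
    for a in alpha:
        s = 0
        while s < a:
            b = next(it, None)
            if b is None:          # beta exhausted before reaching a
                return False
            s += b
        if s != a:                 # overshot: parts of beta do not fit
            return False
    return next(it, None) is None  # all of beta must be used

def dominates(alpha, beta):
    """True if alpha dominates beta (partial sums at least as big)."""
    return all(sum(alpha[:i + 1]) >= sum(beta[:i + 1])
               for i in range(min(len(alpha), len(beta))))

def lex_greater(alpha, beta):
    """Lexicographic order; Python tuples compare lexicographically."""
    return alpha != beta and alpha > beta

print(is_coarsening((5, 3, 1, 2), (1, 2, 2, 3, 1, 1, 1)))   # True
```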
Given two diagrams $\lambda, \mu$ we say $\mu$ is \emph{contained} in $\lambda$, denoted $\mu \subseteq \lambda$ if $\mu _i \leq \lambda _i$ for all $i=1, \ldots ,\ell(\mu)$. Moreover, if $\mu \subseteq \lambda$ then the \emph{skew diagram} $D=\lambda /\mu$ is obtained from the diagram of $\lambda$ by removing the diagram of $\mu$ from the top left corner. The \emph{disjoint union} of two skew diagrams $D_1$ and $D_2$, denoted $D_1 \oplus D_2$, is obtained by placing $D_1$ strictly north and east of $D_2$ such that $D_1$ and $D_2$ occupy no common row or column. We say a skew diagram is \emph{connected} if it cannot be written as $D_1\oplus D_2$ for two non-empty skew diagrams $D_1, D_2$. If a connected skew diagram additionally contains no $2\times 2$ subdiagram then we call it a \emph{ribbon}. Ribbons will be an object of focus for us later, and hence for ease of referral we now recall the well-known correspondence between ribbons and compositions. Given a ribbon with $\alpha _1$ cells in the 1st row, $\alpha _2$ cells in the 2nd row, $\ldots$, $\alpha _{\ell(\alpha)}$ cells in the last row, we say it corresponds to the composition $\alpha _1 \cdots \alpha _{\ell(\alpha)}$, and we abuse notation by denoting the ribbon by $\alpha$ and noting it has $|\alpha|$ cells. \begin{example} $\lambda / \mu = 3221 / 11 = \tableau{&\ &\ \\&\ \\ \ &\ \\\ } = 2121 = \alpha$. \end{example} \subsection{Operations on diagrams}\label{subsec:ops} In this subsection we introduce operations on skew diagrams that will enable us to describe more easily when two skew Schur $Q$-functions are equal. We begin by recalling three classical operations: transpose, antipodal rotation, and shifting. Given a diagram $\lambda = \lambda _1 \cdots \lambda _{\ell(\lambda)}$ we define the \emph{transpose} (or \emph{conjugate}), denoted $\lambda ^t$, to be the diagram containing $\lambda _i$ cells in the $i$-th column from the left. 
We extend this definition to skew diagrams by defining the transpose of $\lambda /\mu$ to be $(\lambda /\mu)^t:=\lambda ^t/\mu ^t$ for diagrams $\lambda, \mu$. Meanwhile, the \emph{antipodal rotation} of $\lambda /\mu$, denoted $(\lambda /\mu )^\circ$, is obtained by rotating $\lambda /\mu$ 180 degrees in the plane. Lastly, if $\lambda , \mu$ are strict partitions then we define the \emph{shifted} skew diagram of $\lambda /\mu$, denoted $(\widetilde{\lambda/\mu})$, to be the array of cells obtained from $\lambda /\mu$ by shifting the $i$-th row from the top $(i-1)$ cells to the right for $i>1$. \begin{example} If $\lambda = 5421, \mu = 31$ then $$\lambda/\mu = \tableau{&&&\ &\ \\ &\ &\ &\ \\ \ &\ \\ \ }, (\lambda /\mu)^t = \tableau{&&\ &\ \\ &\ &\ \\ & \ \\ \ &\ \\ \ }, (\lambda /\mu )^\circ = \tableau{&&&&\ \\ &&&\ &\ \\ & \ &\ &\ \\ \ &\ }, (\widetilde{\lambda/\mu}) = \tableau{&\ &\ \\ \ &\ &\ \\ \ &\ \\ &\ }.$$ \end{example} We now recall three operations that are valuable in describing when two skew Schur functions are equal, before introducing a new operation. The first two operations, concatenation and near concatenation, are easily obtained from the disjoint union of two skew diagrams $D_1, D_2$. Given $D_1\oplus D_2$ their \emph{concatenation} $D_1\cdot D_2$ (respectively, \emph{near concatenation} $D_1 \odot D_2$) is formed by moving all the cells of $D_1$ exactly one cell west (respectively, south). 
\begin{example}If $D_1 = 21, D_2=32$ then $$D_1\oplus D_2 = \tableau{&&&\ &\ \\ &&& \ \\ \ &\ &\ \\ \ &\ }\ ,\quad D_1 \cdot D_2 = \tableau{&&\ &\ \\ &&\ \\ \ &\ &\ \\ \ &\ }\ , \quad D_1\odot D_2 = \tableau{&&&\ &\ \\ \ &\ &\ &\ \\ \ &\ }\ .$$ \end{example} For the third operation recall that $\cdot$ and $\odot$ are each associative and associate with each other \cite[Section 2.2]{HDL2} and hence any string of operations on diagrams $D_1, \ldots , D_k$ $$D_1\bigstar _1 D_2 \bigstar _2 \cdots \bigstar _{k-1} D_k$$ in which each $\bigstar _i$ is either $\cdot$ or $\odot$ is well-defined without parenthesization. Also recall from \cite{HDL2} that a ribbon with $|\alpha |=k$ can be uniquely written as $$\alpha =\square\bigstar _1 \square \bigstar _2 \cdots \bigstar _{k-1} \square$$where $\square$ is the diagram with one cell. Consequently, given a composition $\alpha$ and skew diagram $D$ the operation \emph{composition of compositions} is $$\alpha \circ D = D\bigstar _1 D\bigstar _2 \cdots \bigstar _{k-1} D.$$This third operation was introduced in this way in \cite{HDL2} and we modify this description to define our fourth, and final, operation \emph{composition of transpositions} as \begin{equation} \alpha \bullet D=\left \{ \begin{array}{ll} D\bigstar _1 D^t\bigstar _2 D \bigstar _3 D ^t \cdots \bigstar _{k-1} D & \hbox{if } |\alpha| \hbox{ is odd } \\ D\bigstar _1 D^t\bigstar _2 D \bigstar _3 D ^t \cdots \bigstar _{k-1} D^t & \hbox{if } |\alpha| \hbox{ is even. } \end{array} \right. \label{eq:compoftrans} \end{equation} We refer to $\alpha \circ D$ and $\alpha \bullet D$ as consisting of blocks of $D$ when we wish to highlight the dependence on $D$.
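When the block $D$ is itself a ribbon, $\alpha \bullet D$ can be computed entirely with compositions. The sketch below (an illustration, not part of the formal development) encodes a ribbon by its tuple of row lengths, working internally with the word of $k-1$ ``stars'' between consecutive cells; transposing a ribbon flips each star and reverses the word.

```python
# Sketch of alpha bullet D via compositions, for D itself a ribbon.
# A ribbon is a tuple of row lengths (top row first); its word records,
# between consecutive cells, 0 for near concatenation (same row) and
# 1 for concatenation (new row).  This encoding is our own choice.
def comp_to_word(alpha):
    word = []
    for j, part in enumerate(alpha):
        word.extend([0] * (part - 1))          # cells within one row
        if j < len(alpha) - 1:
            word.append(1)                     # drop to the next row
    return word

def word_to_comp(word):
    comp, part = [], 1
    for s in word:
        if s == 0:
            part += 1
        else:
            comp.append(part)
            part = 1
    comp.append(part)
    return tuple(comp)

def transpose(alpha):
    # transposing a ribbon flips every star and reverses the word
    return word_to_comp([1 - s for s in reversed(comp_to_word(alpha))])

def bullet(alpha, D):
    stars, Dt = comp_to_word(alpha), transpose(D)
    word = comp_to_word(D)                     # first block is D
    for i, s in enumerate(stars, start=1):
        block = D if i % 2 == 0 else Dt        # blocks alternate D, D^t, ...
        word += [s] + comp_to_word(block)
    return word_to_comp(word)

print(bullet((2, 1), (3, 1)))   # (3, 3, 1, 1, 3, 1)
```

As a check, $21 \bullet 31$ and $312 \bullet 2$ give the same ribbon, matching the example below, and the associativity of Proposition~\ref{prop:assoc} can be tested on small cases.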
\begin{example} Considering our block to be $D = 31$ and using coloured $\ast$ to highlight the blocks $$21 \circ D = \tableau{&&&&&\tcb{\ast} &\tcb{\ast} &\tcb{\ast} \\ &&\tcm{\ast} &\tcm{\ast} &\tcm{\ast} &\tcb{\ast} \\ &&\tcm{\ast} \\ \tcb{\ast} &\tcb{\ast} &\tcb{\ast} \\ \tcb{\ast} }\ \mbox{ and }\ 21\bullet D = \tableau{&&&&\tcb{\ast} &\tcb{\ast} &\tcb{\ast} \\ &&\tcm{\ast} &\tcm{\ast} &\tcb{\ast} \\ &&\tcm{\ast} \\ &&\tcm{\ast} \\ \tcb{\ast} &\tcb{\ast} &\tcb{\ast} \\ \tcb{\ast} }\ .$$Observe that if we consider the block $D=2$, then the latter ribbon can also be described as $312 \bullet 2$: $$\tableau{&&&&\tcm{\ast} &\tcb{\ast} &\tcb{\ast} \\ &&\tcb{\ast} &\tcb{\ast} &\tcm{\ast} \\ &&\tcm{\ast} \\ &&\tcm{\ast} \\ \tcm{\ast} &\tcb{\ast} &\tcb{\ast} \\ \tcm{\ast} }\ .$$ \end{example} This last operation will be the focus of our results, and hence we now establish some of its basic properties. \subsection{\texorpdfstring{Preliminary properties of $\bullet$}{Preliminary properties of bullet}}\label{subsec:bulletprops} Given a ribbon $\alpha$ and skew diagram $D$ it is straightforward to verify using \eqref{eq:compoftrans} that \begin{equation} ( \alpha \bullet D)^\circ=\left \{ \begin{array}{ll} \alpha^\circ \bullet D^\circ & \hbox{if } |\alpha| \hbox{ is odd } \\ \alpha^\circ \bullet (D^t)^\circ & \hbox{if } |\alpha| \hbox{ is even } \end{array} \right. \label{dirotation} \end{equation} and \begin{equation} ( \alpha \bullet D)^t=\left \{ \begin{array}{ll} \alpha^t \bullet D^t & \hbox{if } |\alpha| \hbox{ is odd } \\ \alpha^t \bullet D & \hbox{if } |\alpha| \hbox{ is even. } \end{array} \right. \label{ditransposition} \end{equation} We can also verify that $\bullet$ satisfies an associativity property, whose proof illustrates some of the subtleties of $\bullet$. \begin{proposition}\label{prop:assoc} Let $\alpha, \beta$ be ribbons and $D$ a skew diagram. 
Then $$\alpha \bullet (\beta \bullet D) = (\alpha \bullet \beta) \bullet D.$$ \end{proposition} \begin{proof} First notice that, if we decompose the $\beta \bullet D$ components of $\alpha \bullet (\beta \bullet D)$ into blocks of $ D$ then the $ D$ blocks are alternating in appearance as $D$ or $D^t$, as in $(\alpha \bullet \beta) \bullet D$. Furthermore, both $\alpha \bullet (\beta \bullet D)$ and $(\alpha \bullet \beta) \bullet D$ consist of $|\alpha|\times|\beta|$ blocks of $ D$. It remains to show that the $i$-th and $i+1$-th blocks of $ D$ are joined in the same manner (i.e. near concatenated or concatenated) in both $\alpha \bullet (\beta \bullet D)$ and $(\alpha \bullet \beta) \bullet D$. \\ For a ribbon $\gamma$ let $$f^{\gamma}(i)=\left \{ \begin{array}{ll} -1 & \hbox{if in the ribbon $\gamma$, the $i$-th and $i+1$-th cell are near concatenated} \\ 1 & \hbox{if in the ribbon $\gamma$, the $i$-th and $i+1$-th cell are concatenated.} \end{array} \right. $$ {\it Case 1: i=$|\beta|q$.} Note that $\beta \bullet D$ has $|\beta|$ blocks of $ D$. Therefore, the way that the $i$-th and $i+1$-th blocks of $ D$ are joined in $\alpha \bullet (\beta \bullet D)$ is given by $f^{\alpha}(q)$. Now in $(\alpha \bullet \beta) \bullet D$ the way that the $i$-th and $i+1$-th blocks of $ D$ are joined is given by $f^{\alpha \bullet \beta}(i)$, which is equal to $f^{\alpha}(q)$. \\ {\it Case 2: i=$|\beta|q+r$ where $r\neq 0$.} Note that $f^{\gamma^t}(i)=-f^{\gamma}(i)$. Since in $\alpha \bullet \beta$, the $\beta$ components are alternating in appearance as $\beta$, $\beta ^t$, the way that the $i$-th and $i+1$-th block of $ D$ are joined in $(\alpha \bullet \beta) \bullet D$ is given by $f^{\alpha\bullet\beta}(i)=(-1)^{q}f^{\beta}(r)$.
For $\alpha \bullet (\beta \bullet D)$, note that the $i$-th and $i+1$-th blocks of $ D$ are part of $\beta \bullet D$, hence the way they are joined is given by $(-1)^{q}f^{\beta}(r)$, where $(-1)^{q}$ comes from the fact that we are using $\beta \bullet D$ and its transpose alternately to form $\alpha \bullet (\beta \bullet D)$. \end{proof} \section{\texorpdfstring{Skew Schur $Q$-functions}{Skew Schur Q-functions}}\label{sec:schurq} We now introduce our objects of study, skew Schur $Q$-functions. Although they can be described in terms of Hall-Littlewood functions at $t=-1$, we define them combinatorially for later use. Consider the alphabet $$1'<1<2'<2<3'<3 \cdots .$$Given a shifted skew diagram $(\widetilde{\lambda /\mu})$ we define a \emph{weakly amenable tableau}, $T$, of \emph{shape} $(\widetilde{\lambda /\mu})$ to be a filling of the cells of $(\widetilde{\lambda /\mu})$ such that \begin{enumerate} \item the entries in each row of $T$ weakly increase \item the entries in each column of $T$ weakly increase \item each row contains at most one $i'$ for each $i\geq 1$ \item each column contains at most one $i$ for each $i\geq 1$. \end{enumerate} We define the \emph{content} of $T$ to be $$c(T)=c_1(T)c_2(T)\cdots$$where $$c_i(T)= |\ i\ | +|\ i'\ |$$ and $|\ i\ |$ is the number of times $i$ appears in $T$, whilst $|\ i'\ |$ is the number of times $i'$ appears in $T$. The monomial associated to $T$ is given by $$x^T:=x_1 ^{c_1(T)}x_2 ^{c_2(T)}\cdots $$and the \emph{skew Schur $Q$-function}, $Q_{\lambda /\mu}$, is then $$Q_{\lambda /\mu} = \sum _T x^T$$where the sum is over all weakly amenable tableaux $T$ of shape $(\widetilde{\lambda /\mu})$. Two skew Schur $Q$-functions that we will be particularly interested in are ordinary skew Schur $Q$-functions and ribbon Schur $Q$-functions. If $(\widetilde{\lambda /\mu}) = D$ where $D$ is a skew diagram then we define $${\mathfrak s} _D:= Q_{\lambda /\mu}$$and call it an \emph{ordinary skew Schur $Q$-function}.
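The combinatorial definition of $Q_{\lambda/\mu}$ lends itself to a brute-force check on tiny shapes. The following sketch (our own encoding: cells as $(\mathrm{row},\mathrm{column})$ pairs, and the entries $i'$ and $i$ ranked by $2i-1$ and $2i$ to realize the order $1'<1<2'<2<\cdots$) enumerates weakly amenable fillings in a fixed number of variables; for a single row of two cells it recovers $q_2$ in two variables.

```python
from itertools import product

# Brute-force sketch of the combinatorial definition of Q_{lambda/mu}.
# Entries are pairs (i, primed) with primed in {0, 1}.
def schur_q_poly(cells, nvars):
    alphabet = [(i, p) for i in range(1, nvars + 1) for p in (1, 0)]
    rank = lambda e: 2 * e[0] - e[1]
    poly = {}
    for filling in product(alphabet, repeat=len(cells)):
        T = dict(zip(cells, filling))
        ok = True
        for (r, c), e in T.items():
            for nb in (T.get((r, c + 1)), T.get((r + 1, c))):
                if nb is not None and rank(nb) < rank(e):
                    ok = False             # rows and columns weakly increase
        for r in {rc[0] for rc in cells}:  # at most one i' in each row
            primed = [e for rc, e in T.items() if rc[0] == r and e[1]]
            ok = ok and len(primed) == len(set(primed))
        for c in {rc[1] for rc in cells}:  # at most one i in each column
            unprimed = [e for rc, e in T.items() if rc[1] == c and not e[1]]
            ok = ok and len(unprimed) == len(set(unprimed))
        if ok:
            expo = [0] * nvars
            for i, _ in filling:
                expo[i - 1] += 1
            poly[tuple(expo)] = poly.get(tuple(expo), 0) + 1
    return poly

# A single row of two cells in two variables recovers q_2.
print(schur_q_poly([(0, 0), (0, 1)], 2))   # {(2, 0): 2, (1, 1): 4, (0, 2): 2}
```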
If, furthermore, $(\widetilde{\lambda /\mu})$ is a ribbon, $\alpha$, then we define $${\mathfrak r} _\alpha := Q_{\lambda /\mu}$$and call it a \emph{ribbon Schur $Q$-function}. Skew Schur $Q$-functions lie in the algebra $\Omega$, where $$\Omega = \mathbb{Z} [ q_1, q_2, q_3, \ldots ] \equiv \mathbb{Z} [ q_1, q_3, q_5, \ldots ]$$and $q_n = Q_n$. The $q_n$ satisfy \begin{equation}\sum _{r+s = n} (-1)^r q_rq_s = 0, \label{eq:qrels}\end{equation}which will be useful later, but for now note that for any countable set of indeterminates $x_1, x_2, \ldots$ the expression $\sum _{r+s = n} (-1)^r x_rx_s$ is often denoted $\chi _n$ and is called the \emph{$n$-th Euler form}. Moreover, if $\lambda = \lambda _1 \cdots \lambda _{\ell(\lambda)}$ is a partition and we define $$q_\lambda := q_{\lambda _1}\cdots q_{\lambda _{\ell(\lambda)}},\quad q_0=1$$then \begin{proposition}\cite[8.6(ii)]{MacD}\label{prop:qbasis} The set $\{ q_\lambda \} _{\lambda \vdash n\geq 0}$, for $\lambda$ strict, forms a $\mathbb{Z}$-basis of $\Omega$. \end{proposition} This is not the only basis of $\Omega$ as we will see in Proposition~\ref{prop:sbasis}. \subsection{\texorpdfstring{Symmetric functions and $\theta$}{Symmetric functions and theta}}\label{subsec:symmap} It transpires that the ${\mathfrak s} _D$ and ${\mathfrak r} _\alpha$ can also be obtained from symmetric functions. Let $\Lambda$ be the subalgebra of $\mathbb{Z}[x_1, x_2, \ldots]$ with countably many variables $x_1, x_2, \ldots$ given by $\Lambda = \mathbb{Z}[e_1, e_2, \ldots ] = \mathbb{Z}[h_1, h_2, \ldots ]$ where $e_n = \sum _{ i_1 < \cdots <i_n} x_{i_1}\cdots x_{i_n}$ is the \emph{$n$-th elementary symmetric function} and $h_n = \sum _{ i_1 \leq \cdots \leq i_n} x_{i_1}\cdots x_{i_n}$ is the \emph{$n$-th homogeneous symmetric function}.
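The relations \eqref{eq:qrels} can be verified numerically in a small number of variables. The sketch below computes $q_n$ from the standard generating function $\prod_i (1+x_i t)/(1-x_i t) = \sum_{n\ge 0} q_n t^n$ (a fact from the general theory, assumed here rather than taken from the text) and checks that the even Euler forms $\chi_2$ and $\chi_4$ vanish identically.

```python
# Numerical sketch verifying the even Euler relations for the q_n,
# computed in two variables from their generating function.
NVARS, DEG = 2, 4

def pmul(p, q):
    """Multiply polynomials stored as {exponent tuple: coefficient}."""
    out = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(x + y for x, y in zip(ea, eb))
            out[e] = out.get(e, 0) + ca * cb
    return out

def q_series():
    series = [{(0,) * NVARS: 1}] + [{} for _ in range(DEG)]
    for i in range(NVARS):
        # coefficients of (1+x_i t)/(1-x_i t): 1, then 2*x_i^n for n >= 1
        fac = []
        for n in range(DEG + 1):
            e = [0] * NVARS
            e[i] = n
            fac.append({tuple(e): 1 if n == 0 else 2})
        new = [{} for _ in range(DEG + 1)]
        for a in range(DEG + 1):
            for b in range(DEG + 1 - a):
                for e, c in pmul(series[a], fac[b]).items():
                    new[a + b][e] = new[a + b].get(e, 0) + c
        series = new
    return series          # series[n] holds the monomials of q_n

q = q_series()
for m in (1, 2):           # check chi_2 and chi_4
    chi = {}
    for r in range(2 * m + 1):
        for e, c in pmul(q[r], q[2 * m - r]).items():
            chi[e] = chi.get(e, 0) + (-1) ** r * c
    print({e: c for e, c in chi.items() if c})   # prints {} twice
```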
Moreover, if $\lambda = \lambda _1\cdots \lambda _{\ell(\lambda)}$ is a partition and we define $e_\lambda := e_{\lambda _1}\cdots e_{\lambda _{\ell(\lambda)}}$, $h_\lambda := h_{\lambda _1}\cdots h_{\lambda _{\ell(\lambda)}}$, and $e_0=h_0=1$ then \begin{proposition}\cite[I.2]{MacD}\label{prop:ehbasis} The sets $\{ e_\lambda \} _{\lambda \vdash n\geq 0}$ and $\{ h_\lambda \} _{\lambda \vdash n\geq 0}$ each form a $\mathbb{Z}$-basis of $\Lambda$. \end{proposition} Given a skew diagram $\lambda /\mu$, we can use the \emph{Jacobi-Trudi determinant formula} to describe the \emph{skew Schur function} $s_{\lambda /\mu}$ as \begin{equation}\label{eq:jth}s_{\lambda /\mu} = \det (h _{\lambda _i-\mu _j -i+j}) _{i,j = 1} ^{\ell(\lambda)}\end{equation}and via the involution $\omega:\Lambda \rightarrow \Lambda$ mapping $\omega (e_n)=h_n$ we can deduce \begin{equation}\label{eq:jte}s_{(\lambda /\mu)^t} = \det (e _{\lambda _i-\mu _j -i+j}) _{i,j = 1} ^{\ell(\lambda)}\end{equation}where $\mu_i = 0, i>\ell(\mu)$ and $h_n=e_n=0$ for $n<0$. If, furthermore, $\lambda/\mu$ is a ribbon $\alpha$ then we define $$r_\alpha := s _{\lambda /\mu}$$and call it a \emph{ribbon Schur function}. To obtain an algebraic description of our ordinary and ribbon Schur $Q$-functions we need the graded surjective ring homomorphism $$\theta : \Lambda \longrightarrow \Omega$$that satisfies \cite{Stem} $$\theta (h_n)=\theta (e_n)=q_n, \quad \theta (s_D) = {\mathfrak s} _D,\quad \theta(r_\alpha )={\mathfrak r} _\alpha$$for any skew diagram $D$ and ribbon $\alpha$. The homomorphism $\theta$ enables us to immediately determine a number of properties of ordinary skew and ribbon Schur $Q$-functions. \begin{proposition} Let $\lambda /\mu$ be a skew diagram and $\alpha $ a ribbon.
Then \begin{equation}\label{eq:Qrot} {\mathfrak s} _{\lambda /\mu} = {\mathfrak s} _{(\lambda /\mu)^\circ} \end{equation} \begin{equation}\label{eq:Qtr} {\mathfrak s} _{\lambda /\mu} = \det (q _{\lambda _i-\mu _j -i+j}) _{i,j = 1} ^{\ell(\lambda)} = {\mathfrak s} _{(\lambda /\mu)^t} \end{equation} \begin{equation}\label{eq:Qrib} {\mathfrak r} _\alpha = (-1)^{\ell(\alpha)} \sum _{\beta \succcurlyeq \alpha} (-1) ^{\ell(\beta)} q _{\lambda (\beta)}. \end{equation} Moreover, for $D,E$ being skew diagrams and $\alpha, \beta$ being ribbons \begin{equation}\label{eq:Qmult} {\mathfrak s} _D{\mathfrak s} _E = {\mathfrak s} _{D\cdot E} + {\mathfrak s} _{D\odot E} \end{equation} \begin{equation}\label{eq:Qribmult} {\mathfrak r} _\alpha{\mathfrak r} _\beta = {\mathfrak r} _{\alpha\cdot \beta} + {\mathfrak r} _{\alpha\odot \beta}. \end{equation} \end{proposition} \begin{proof} The first equation follows from applying $\theta$ to \cite[Exercise 7.56(a)]{ECII}. The second equation follows from applying $\theta$ to \eqref{eq:jth} and \eqref{eq:jte}. The third equation follows from applying $\theta$ to \cite[Proposition 2.1]{HDL}. The fourth and fifth equations follow from applying $\theta$ to \cite[Proposition 4.1]{HDL2} and \cite[(2.2)]{HDL}, respectively. \end{proof} \subsection{\texorpdfstring{New bases and relations in $\Omega$}{New bases and relations in Omega}}\label{subsec:basesandrels} The map $\theta$ is also useful for describing bases for $\Omega$ other than the basis given in Proposition~\ref{prop:qbasis}. \begin{definition} If $D$ is a skew diagram, then let $srl(D)$ be the partition determined by the (multi)set of row lengths of $D$. 
\end{definition} \begin{example} $$D = \tableau{ &\ &\ \\ \ &\ &\ \\ \ &\ \\ \ }\ \quad srl(D) = 3221$$ \end{example} \begin{proposition} \label{prop:sbasis} Let $\mathfrak{D}$ be a set of skew diagrams such that for all $D\in \mathfrak{D}$ we have $srl(D)$ is a strict partition, and for all strict partitions $\lambda$ there exists exactly one $D\in \mathfrak{D}$ satisfying $srl(D)=\lambda$. Then the set $\{ {\mathfrak s} _D \} _{D\in \mathfrak{D}}$ forms a $\mathbb{Z}$-basis of $\Omega$. \end{proposition} \begin{proof} Let $D$ be any skew diagram such that $srl(D)=\lambda$. By \cite[Proposition 6.2(ii)]{HDL2}, we know that $h_\lambda$ has the lowest subscript in dominance order when we expand the skew Schur function $s_D$ in terms of complete symmetric functions. That is $$s_D=h_\lambda+\hbox{\scriptsize a sum of $h_\mu$'s where $\mu$ is a partition with $\mu>\lambda$}.$$ Now applying $\theta$ to this equation and using \cite[(8.4)]{MacD}, we conclude that \begin{equation} {\mathfrak s}_D=q_\lambda+\hbox{\scriptsize a sum of $q_\mu$'s where $\mu$ is a strict partition with $\mu>\lambda$} \label{sdtoq}.\end{equation} Hence by Proposition~\ref{prop:qbasis}, the set of ${\mathfrak s}_D$, $D\in{\mathfrak D}$, forms a basis of $\Omega$. The equation \eqref{sdtoq} implies that if we order $\lambda$'s and $srl(D)$'s in lexicographic order the transition matrix that takes ${\mathfrak s}_D$'s to $q_\lambda$'s is unitriangular with integer coefficients. Thus, the transition matrix that takes $q_\lambda$'s to ${\mathfrak s}_D$'s is unitriangular with integer coefficients. Hence \begin{equation}q_\lambda={\mathfrak s}_D+ \hbox{\scriptsize a sum of ${\mathfrak s}_E$'s where $srl(E)$ is a strict partition and $srl(E)>srl(D)$} \label{qtosd}\end{equation} where $E,D \in {\mathfrak D}$ and $srl(D)=\lambda$. Combining Proposition~\ref{prop:qbasis} with \eqref{qtosd} it follows that the set of ${\mathfrak s}_D$, $D\in {\mathfrak D}$, forms a ${\mathbb Z}$-basis of $\Omega$. 
\end{proof} \begin{corollary}\label{cor:rbasis} The set $\{ {\mathfrak r} _\lambda \} _{\lambda \vdash n \geq 0}$, for $\lambda$ strict, forms a $\mathbb{Z}$-basis of $\Omega$. \end{corollary} We can now describe a set of relations that generate \emph{all} relations amongst ribbon Schur $Q$-functions. \begin{theorem} \label{ribbonrelations} Let {$z_{\alpha}, \alpha \vDash n, n\ge 1$} be commuting indeterminates. Then as algebras, ${\Omega}$ is isomorphic to the quotient $${\Q[z_{\alpha}]/ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \chi _2, \chi_4, \ldots\rangle}$$where $\chi_{2m}$ is the even Euler form $\chi _{2m} = \sum _{r+s = 2m} (-1)^{r} z_rz_s$. Thus, all relations amongst ribbon Schur $Q$-functions are generated by ${\mathfrak r}_{\alpha}\ {\mathfrak r}_{\beta}= {\mathfrak r}_{\alpha\cdot \beta} + {\mathfrak r}_{\alpha \odot \beta}$ and $\sum _{r+s = 2m} (-1)^{r} {\mathfrak r}_r{\mathfrak r}_s = 0$, $m\geq 1$. \end{theorem} \begin{proof} Consider the map $\varphi:\Q[z_{\alpha}] \rightarrow \Omega$ defined by $z_{\alpha} \mapsto {\mathfrak r}_\alpha$. This map is surjective since the ${\mathfrak r}_{\alpha}$ generate $\Omega$ by Corollary~\ref{cor:rbasis}. Grading $\Q[z_{\alpha}]$ by setting the degree of $z_{\alpha}$ to be $n=|\alpha|$ makes $\varphi$ homogeneous. To see that $\varphi$ induces an isomorphism with the quotient, note that $\Q[z_{\alpha}]/ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta},\chi _2, \chi_4, \ldots\rangle$ maps onto $\Q[z_{\alpha}]/\ker \varphi \simeq\Omega $, since $ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \chi _2, \chi_4,\ldots\rangle \subset \ker\varphi$ as we will see below. 
It then suffices to show that the degree $n$ component of $$\Q[z_{\alpha}]/ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \chi _2, \chi_4,\ldots \rangle$$is generated by the images of the $z_{\lambda}$, $\lambda \vdash n$ with $\lambda$ a strict partition, and so has dimension at most the number of partitions of $n$ with distinct parts. We show $ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \chi _2, \chi_4,\ldots\rangle \subset \ker\varphi$ as follows. From \cite[p 251]{MacD} we know that \begin{E} 2q_{2x}=q_{2x-1}q_1-q_{2x-2}q_2+\cdots+q_1q_{2x-1} \label{qrelations} \end{E}and since $q_i={\mathfrak r}_i$, we can rewrite the above equation as $$2{\mathfrak r}_{2x}={\mathfrak r}_{2x-1}{\mathfrak r}_1-{\mathfrak r}_{2x-2}{\mathfrak r}_2+\cdots+{\mathfrak r}_1{\mathfrak r}_{2x-1}.$$ Substituting ${\mathfrak r}_{2x-i}{\mathfrak r}_i={\mathfrak r}_{2x}+{\mathfrak r}_{(2x-i)i}$ and simplifying, we get $${\mathfrak r}_{2x}={\mathfrak r}_{(2x-1)1}-{\mathfrak r}_{(2x-2)2}+\cdots+(-1)^{x+1}{\mathfrak r}_{xx}+\cdots+{\mathfrak r}_{1(2x-1)}.$$Together with \eqref{eq:Qribmult} we have $ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \chi _2, \chi_4,\ldots\rangle \subset \ker\varphi$. Now we show that if we have the following relations then every $z_\gamma$ can be written as a sum of $z_\lambda$'s where the $\lambda$'s are strict partitions. \begin{E} \left \{ \begin{array}{l} z_\alpha z_\beta=z_{\alpha\cdot\beta}+z_{\alpha\odot\beta} \\ z_2=z_{11}\\ z_4=z_{31}-z_{22}+z_{13}\\ \vdots \\ z_{2x}=z_{(2x-1)1}-z_{(2x-2)2}+\cdots+z_{1(2x-1)} \end{array} \right. \label{zrelations} \end{E}where $\alpha, \beta$ are compositions. Note that the last equation in \eqref{zrelations} is equivalent to \begin{E} z_{xx}=(-1)^{x+1}(z_{2x}-z_{(2x-1)1}+z_{(2x-2)2}-\cdots\widehat{z_{xx}}\cdots-z_{1(2x-1)}) \label{zxx}.\end{E} Let $\gamma$ be a composition of length $k$. 
Using the first equation in \eqref{zrelations}, we have \begin{E} z_{\alpha\cdot\beta}+z_{\alpha\odot\beta}=z_{\beta\cdot\alpha}+z_{\beta\odot\alpha} \label{switch}.\end{E} By \cite[Proposition 2.2]{HDL} we can sort $z_\gamma$, that is $z_\gamma=z_{\lambda(\gamma)}+$ a sum of $z_\delta$'s with $\delta$ having $k-1$ or fewer parts. For $\alpha=\alpha_1\cdots\alpha_m$, define $prod(\alpha)$ to be the product of the parts of the composition $\alpha$, that is $prod(\alpha)=\alpha_1\times\alpha_2\times\cdots\times\alpha_m$. The partition $\alpha$ is called a {\it semi-strict} partition if it can be written in the form $\alpha=\alpha_1\alpha_2\cdots\alpha_k1\cdots1$ where $\alpha_1\alpha_2\cdots\alpha_k$ is a strict partition. Suppose that $\lambda(\gamma)=g_1g_2\ldots g_k$. If there is no $i$, $1\leq i\leq k-1$, such that $g_i=g_{i+1}=t>1$ then $\lambda(\gamma)$ is a semi-strict partition and we have \eqref{zgamma}, otherwise \begin{E}\begin{array}{lll} z_\gamma &=&z_{\lambda(\gamma)}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \\ & = & z_{g_ig_{i+1}\ldots g_kg_1\ldots g_{i-1}}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \\ & =& z_{g_ig_{i+1}}z_{g_{i+2}\ldots g_kg_1\ldots g_{i-1}}+ \hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \\ & =& (-1)^{t+1}[(z_{2t}-z_{(2t-1)1}+z_{(2t-2)2}-\cdots\widehat{z_{tt}}\cdots-z_{1(2t-1)})z_{g_{i+2}\ldots g_kg_1\ldots g_{i-1}}]\\ &&+ \hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}\\ &= & (-1)^{t+1}[-z_{(2t-1)1g_{i+2}\ldots g_kg_1\ldots g_{i-1}}+z_{(2t-2)2g_{i+2}\ldots g_kg_1\ldots g_{i-1}}\\ &&-\cdots \widehat{z_{ttg_{i+2}\ldots g_kg_1\cdots g_{i-1}}} \cdots- z_{1(2t-1)g_{i+2}\ldots g_kg_1\ldots g_{i-1}}]+ \hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}\\ &=& (-1)^{t+1}[-z_{\lambda((2t-1)1g_{i+2}\ldots g_kg_1\ldots g_{i-1})}+z_{\lambda((2t-2)2g_{i+2}\ldots g_kg_1\ldots g_{i-1})}\\ &&-\cdots\widehat{z_{\lambda(ttg_{i+2}\ldots 
g_kg_1\ldots g_{i-1})}} \cdots- z_{\lambda(1(2t-1)g_{i+2}\ldots g_kg_1\ldots g_{i-1})}]+ \hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}\\ \end{array}\label{process}\end{E}where we used \eqref{switch} for the second, the first equation of \eqref{zrelations} for the third, \eqref{zxx} for the fourth, the first equation of \eqref{zrelations} for the fifth, and sorting for the sixth equality. Although $\lambda((2t-1)1g_{i+2}\ldots g_kg_1\ldots g_{i-1})$, $\lambda((2t-2)2g_{i+2}\ldots g_kg_1\ldots g_{i-1})$, $\ldots$, $\lambda(1(2t-1)g_{i+2}\ldots g_kg_1\ldots g_{i-1})$ have $k$ parts, the product of their parts is smaller than $prod(\lambda(\gamma))$ since each of the products $1\times(2t-1), 2\times(2t-2),\ldots,(2t-1)\times 1$, the omitted $t\times t$ aside, is smaller than $t^2$. We repeat the process in \eqref{process} for each of the terms with $k$ parts in the last line of \eqref{process}. Since $prod$ takes positive integer values and strictly decreases with each iteration, the process terminates, which yields \begin{E} z_\gamma= (\hbox{\scriptsize a sum of $z_\sigma$'s such that $\sigma$ is a semi-strict partition with $\ell(\sigma)=k$})\hspace{5pt}+\hspace{5pt}(\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}) \label{zgamma}.\end{E} Now if $\sigma$ is a semi-strict partition with at least two 1's, that is $\sigma=\sigma'11$ where $\sigma'$ is a semi-strict partition and $\ell(\sigma')=k-2$, then we have \begin{E}\begin{array}{lll} z_\sigma & =&z_{11\sigma'}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \\ &=& z_{11}z_{\sigma'}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}\\ &=&z_2z_{\sigma'}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \\ &=&z_{2\sigma'}+z_{2\odot\sigma'}+\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$} \end{array} \label{zsigma} \end{E}where we used \eqref{switch} for the first, the first equation of \eqref{zrelations} for the second, the second equation of \eqref{zrelations} for the third, and the first equation of \eqref{zrelations} for the 
fourth equality. Note that $\ell(2\sigma')=k-1$ and $\ell(2\odot\sigma')=k-2$. If $\sigma$ does not have two 1's then it is a strict partition. Now applying \eqref{zsigma} to each $z_\sigma$ with $\sigma$ having at least two 1's in \eqref{zgamma}, we have $$ z_\gamma= (\hbox{\scriptsize a sum of $z_\sigma$'s such that $\sigma$ is a strict partition with $\ell(\sigma)=k$})\hspace{5pt}+\hspace{5pt} (\hbox{\scriptsize a sum of $z_\delta$'s such that $\ell(\delta)<k$}).$$ A trivial induction on the length of $\gamma$ now shows that any $z_\gamma$ in the quotient can be written as a linear combination of $z_\lambda$, $\lambda \vdash n$ with $\lambda$ a strict partition. \end{proof} However, this is not the only possible set of relations, and we now develop another set. This alternative set will help simplify some of our subsequent proofs, in addition to being of independent interest. \begin{theorem} \label{ribbonrelations2} Let {$z_{\alpha}, \alpha \vDash n, n\ge 1$} be commuting indeterminates. Then as algebras, ${\Omega}$ is isomorphic to the quotient $${\Q[z_{\alpha}]/ \langle z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}, \xi _2, \xi_4, \ldots\rangle}$$where $\xi_{2m}$ is the even transpose form $\xi _{2m} = z_{2m} - z_{\underbrace{1\ldots1}_{2m}}$. Thus, all relations amongst ribbon Schur $Q$-functions are generated by ${\mathfrak r}_{\alpha}\ {\mathfrak r}_{\beta}= {\mathfrak r}_{\alpha\cdot \beta} + {\mathfrak r}_{\alpha \odot \beta}$ and ${\mathfrak r}_{2m} = {\mathfrak r}_{\underbrace{1\ldots1}_{2m}}$, $m\geq 1$. \end{theorem} We devote the next subsection to the proof of this theorem. \subsection{Equivalence of relations} We say that a set of relations $A$ {\it implies} a set of relations $B$ if we can deduce $B$ from $A$. Two sets of relations are {\it equivalent} if each one implies the other. 
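Before proceeding, we note that the generating relations of Theorem~\ref{ribbonrelations2} can be checked concretely in low degree. The following sketch is an illustration only and not part of the proof; it assumes SymPy is available, and all names in it are ours. It realizes $q_r=\sum_{i+j=r}e_ih_j$ in three variables and computes ribbon Schur $Q$-functions by repeatedly splitting off the first part, using ${\mathfrak r}_{\alpha_1}\,{\mathfrak r}_{(\alpha_2,\ldots,\alpha_k)}={\mathfrak r}_\alpha+{\mathfrak r}_{(\alpha_1+\alpha_2,\alpha_3,\ldots,\alpha_k)}$.

```python
# Illustration only: check instances of the relations of Theorem ribbonrelations2
# in Z[x1, x2, x3].  Identities in Omega restrict to any number of variables,
# so a failure here would disprove a relation; agreement is merely evidence.
import sympy as sp
from itertools import combinations, combinations_with_replacement

xs = sp.symbols('x1:4')  # three variables

def e(i):
    """Elementary symmetric polynomial e_i in xs (0 when i exceeds #variables)."""
    return sp.Add(*[sp.Mul(*c) for c in combinations(xs, i)])

def h(j):
    """Complete homogeneous symmetric polynomial h_j in xs."""
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(xs, j)])

def q(r):
    """q_r = sum_{i+j=r} e_i h_j, the coefficient of t^r in E(t)H(t)."""
    return sp.expand(sp.Add(*[e(i) * h(r - i) for i in range(r + 1)]))

def ribbon_q(alpha):
    """Ribbon Schur Q-function, via r_{a1} r_{rest} = r_alpha + r_{merged}."""
    alpha = tuple(alpha)
    if len(alpha) == 1:
        return q(alpha[0])
    merged = (alpha[0] + alpha[1],) + alpha[2:]
    return sp.expand(q(alpha[0]) * ribbon_q(alpha[1:]) - ribbon_q(merged))

# multiplication: r_{12} r_{11} = r_{1211} + r_{131}
assert sp.expand(ribbon_q((1, 2)) * ribbon_q((1, 1))
                 - ribbon_q((1, 2, 1, 1)) - ribbon_q((1, 3, 1))) == 0
# even transpose forms xi_2 and xi_4: r_2 = r_{11} and r_4 = r_{1111}
assert sp.expand(ribbon_q((2,)) - ribbon_q((1, 1))) == 0
assert sp.expand(ribbon_q((4,)) - ribbon_q((1, 1, 1, 1))) == 0
```

A disagreement in three variables would disprove a proposed identity, whereas agreement is only supporting evidence, since an identity could in principle fail only in more variables.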
\begin{itemize} \item For all compositions $\alpha$ and $\beta$, refer to $$z_\alpha z_\beta=z_{\alpha\cdot\beta}+z_{\alpha\odot\beta}$$ as multiplication. \item For all positive integers $x$, refer to the set of $$z_{2x}=z_{(2x-1)1}-z_{(2x-2)2}+\cdots-z_{2(2x-2)}+z_{1(2x-1)}$$ as $EE$. \item For all positive integers $x$, refer to the set of $$2z_{2x}=z_{2x-1}z_1-z_{2x-2}z_2+\cdots-z_2z_{2x-2}+z_1z_{2x-1}$$ as $EI$. \item For all positive integers $x$, refer to the set of $$z_{x}=z_{\underbrace{1\ldots1}_{x}}$$ as $T$. \item For all positive integers $x$, refer to the set of $$z_{2x}=z_{\underbrace{1\ldots1}_{2x}}$$ as $ET$. \end{itemize} \begin{lemma} Multiplication and $EE$ is equivalent to multiplication and $EI$. \label{eeei} \end{lemma} \begin{proof} $$\begin{array}{ll} &z_{2x}=z_{(2x-1)1}-z_{(2x-2)2}+\cdots-z_{2(2x-2)}+z_{1(2x-1)}\\ \Leftrightarrow &z_{2x}=(z_{2x-1}z_1-z_{2x})-(z_{2x-2}z_2-z_{2x})+\cdots-(z_2z_{2x-2}-z_{2x})+(z_1z_{2x-1}-z_{2x})\\ \Leftrightarrow & 2z_{2x}=z_{2x-1}z_1-z_{2x-2}z_2+\cdots-z_2z_{2x-2}+z_1z_{2x-1} \end{array}$$ where we used multiplication for the first equivalence.\end{proof} \begin{lemma} Multiplication and $T$ is equivalent to multiplication and $EI$. \label{tei}\end{lemma} \begin{proof} First we show that the set of $T$ and multiplication implies $EI$. $$ \begin{array}{ll} & z_{2x-1}z_1-z_{2x-2}z_2+z_{2x-3}z_3-\cdots-z_2z_{2x-2}+z_1z_{2x-1}\\ = & z_{2x-1}z_1-z_{2x-2}z_{11}+z_{2x-3}z_{111}-\cdots-z_2z_{\underbrace{1\ldots1}_{2x-2}}+z_1z_{\underbrace{1\ldots1}_{2x-1}}\\ = & (z_{2x}+z_{(2x-1)1})-(z_{(2x-1)1}+z_{(2x-2)11})+(z_{(2x-2)11}+z_{(2x-3)111})-\cdots-\\ &(z_{3\underbrace{1\ldots1}_{2x-3}}+z_{2\underbrace{1\ldots1}_{2x-2}})+(z_{2\underbrace{1\ldots1}_{2x-2}}+z_{\underbrace{1\ldots1}_{2x}})\\ = & z_{2x}+ z_{\underbrace{1\ldots1}_{2x}}\\ = & 2z_{2x} \end{array} $$ where we used $T$ for the first, multiplication for the second, and $T$ for the fourth equality. 
Now we proceed by induction to show that the set of $EI$ and multiplication implies $T$. The base case is $z_1=z_1$. Assume the assertion is true for all $n$ smaller than $2x$, so the set of $EI$ and multiplication implies $z_n=z_{\underbrace{1\ldots1}_{n}}$ for all $n<2x$. We show that it is true for $2x$ and $2x+1$ as well. $$ \begin{array}{lll} 2z_{2x} & = & z_{2x-1}z_1-z_{2x-2}z_2+z_{2x-3}z_3-\cdots-z_2z_{2x-2}+z_1z_{2x-1}\\ & = & z_{2x-1}z_1-z_{2x-2}z_{11}+z_{2x-3}z_{111}-\cdots-z_2z_{\underbrace{1\ldots1}_{2x-2}}+z_1z_{\underbrace{1\ldots1}_{2x-1}}\\ & = & (z_{2x}+z_{(2x-1)1})-(z_{(2x-1)1}+z_{(2x-2)11})+(z_{(2x-2)11}+z_{(2x-3)111})-\cdots-\\ & &(z_{3\underbrace{1\ldots1}_{2x-3}}+z_{2\underbrace{1\ldots1}_{2x-2}})+(z_{2\underbrace{1\ldots1}_{2x-2}}+z_{\underbrace{1\ldots1}_{2x}})\\ & = & z_{2x}+ z_{\underbrace{1\ldots1}_{2x}}\\ \end{array}$$ where we used $EI$ for the first, the induction hypothesis for the second, and multiplication for the third equality. Thus $z_{2x}=z_{\underbrace{1\ldots1}_{2x}}$. Now we show that $z_{2x+1}=z_{\underbrace{1\ldots1}_{2x+1}}$. $$\begin{array}{lll} 0 & =& z_{2x}z_1-z_{2x-1}z_2+z_{2x-2}z_3-\cdots+z_2z_{2x-1}-z_1z_{2x}\\ & =& z_{2x}z_1-z_{2x-1}z_{11}+z_{2x-2}z_{111}-\cdots+z_2z_{\underbrace{1\ldots1}_{2x-1}}-z_1z_{\underbrace{1\ldots1}_{2x}}\\ & =& (z_{2x+1}+z_{(2x)1})-(z_{(2x)1}+z_{(2x-1)11})+(z_{(2x-1)11}+z_{(2x-2)111})-\cdots+\\ & & (z_{3\underbrace{1\ldots1}_{2x-2}}+z_{2\underbrace{1\ldots1}_{2x-1}})-(z_{2\underbrace{1\ldots1}_{2x-1}}+z_{\underbrace{1\ldots1}_{2x+1}})\\ & =& z_{2x+1}-z_{\underbrace{1\ldots1}_{2x+1}} \end{array}$$ where we used the induction hypothesis and $z_{2x}=z_{\underbrace{1\ldots1}_{2x}}$ for the second, and multiplication for the third equality. Thus $z_{2x+1}=z_{\underbrace{1\ldots1}_{2x+1}}$, which completes the induction. \end{proof} \begin{lemma} Multiplication and $T$ is equivalent to multiplication and $ET$. 
\label{tet}\end{lemma} \begin{proof} The set of relations $ET$ is a subset of $T$, thus $T$ implies $ET$. To prove the converse, we need to show $z_{2x+1}=z_{\underbrace{1\ldots1}_{2x+1}}$ given $ET$ and multiplication. We proceed by induction. The base case is $z_1=z_1$. Assume the result is true for all odd positive integers smaller than $2x+1$, then $$\begin{array}{lll} 0 & =& z_{2x}z_1-z_{2x-1}z_2+z_{2x-2}z_3-\cdots+z_2z_{2x-1}-z_1z_{2x}\\ & =& z_{2x}z_1-z_{2x-1}z_{11}+z_{2x-2}z_{111}-\cdots+z_2z_{\underbrace{1\ldots1}_{2x-1}}-z_1z_{\underbrace{1\ldots1}_{2x}}\\ & =& (z_{2x+1}+z_{(2x)1})-(z_{(2x)1}+z_{(2x-1)11})+(z_{(2x-1)11}+z_{(2x-2)111})-\cdots+\\ & & (z_{3\underbrace{1\ldots1}_{2x-2}}+z_{2\underbrace{1\ldots1}_{2x-1}})-(z_{2\underbrace{1\ldots1}_{2x-1}}+z_{\underbrace{1\ldots1}_{2x+1}})\\ & =& z_{2x+1}-z_{\underbrace{1\ldots1}_{2x+1}} \end{array}$$ where we used $ET$ and the induction hypothesis for the second, and multiplication for the third equality. Thus $z_{2x+1}=z_{\underbrace{1\ldots1}_{2x+1}}$, which completes the induction. \end{proof} Combining Lemmas \ref{eeei}, \ref{tei} and \ref{tet} we get \begin{proposition} Multiplication and $EE$ is equivalent to multiplication and $ET$. \label{coreeet}\end{proposition} Theorem~\ref{ribbonrelations2} now follows from Theorem~\ref{ribbonrelations} and Proposition~\ref{coreeet}. \section{\texorpdfstring{Equality of ordinary skew Schur $Q$-functions}{Equality of ordinary skew Schur Q-functions}}\label{sec:eqskewschurq} We now turn our attention to determining when two ordinary skew Schur $Q$-functions are equal. Illustrative examples of the results in this section can be found in the next section, when we restrict our attention to ribbon Schur $Q$-functions. In order to prove our main result on equality, Theorem~\ref{the:bigone}, which is analogous to \cite[Theorem 7.6]{HDL2}, we need to prove an analogue of \cite[Proposition 2.1]{HDL}. 
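As a concrete low-degree check of the equivalences just established (Proposition~\ref{coreeet}), the forms $EI$, $EE$ and $ET$ can be verified to vanish for small $x$ when the $z$'s are specialized to ribbon Schur $Q$-functions. The following sketch is an illustration only, not part of the proof; it assumes SymPy, and it uses that two-row ribbons satisfy ${\mathfrak r}_{ab}=q_aq_b-q_{a+b}$, an instance of multiplication.

```python
# Illustration only: the forms EI, EE and ET vanish for small x in Z[x1, x2, x3].
import sympy as sp
from itertools import combinations, combinations_with_replacement

xs = sp.symbols('x1:4')

def e(i):
    # elementary symmetric polynomial e_i (0 when i exceeds the number of variables)
    return sp.Add(*[sp.Mul(*c) for c in combinations(xs, i)])

def h(j):
    # complete homogeneous symmetric polynomial h_j
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(xs, j)])

def q(r):
    # q_r = sum_{i+j=r} e_i h_j
    return sp.expand(sp.Add(*[e(i) * h(r - i) for i in range(r + 1)]))

def r2(a, b):
    # two-row ribbon: r_{ab} = q_a q_b - q_{a+b}, an instance of multiplication
    return sp.expand(q(a) * q(b) - q(a + b))

# EI at x = 2:  2 q_4 = q_3 q_1 - q_2 q_2 + q_1 q_3
assert sp.expand(2*q(4) - (q(3)*q(1) - q(2)*q(2) + q(1)*q(3))) == 0
# EE at x = 2:  q_4 = r_{31} - r_{22} + r_{13}
assert sp.expand(q(4) - (r2(3, 1) - r2(2, 2) + r2(1, 3))) == 0
# ET at x = 1:  q_2 = r_{11}
assert sp.expand(q(2) - r2(1, 1)) == 0
```

Each assertion corresponds to one rung of the ladder of Lemmas \ref{eeei}, \ref{tei} and \ref{tet}.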
First we need to prove a Jacobi-Trudi style determinant formula. Let $D_1,D_2,\ldots,D_k$ denote skew diagrams, and recall from Section~\ref{sec:diagrams} that $$D_1\bigstar_1D_2\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k$$in which $\bigstar_i$ is either $\cdot$ or $\odot$ is a well-defined skew diagram. Set $$\bar{\bigstar}_i=\left \{ \begin{array}{ll} \odot & \hbox{if }\bigstar_i=\cdot\\ \cdot & \hbox{if }\bigstar_i=\odot . \end{array} \right. $$ With this in mind we have \begin{proposition} Let $s_D$ denote the skew Schur function indexed by the ordinary skew diagram $D$. Then $$s_{D_1\bigstar_1D_2\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k}= \det \left [ \begin{array}{ccccc} s_{D_1} & s_{D_1\bar{\bigstar}_1D_2} & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2\cdots\bar{\bigstar}_{k-1}D_k}\\ 1 & s_{D_2} & s_{D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_2\bar{\bigstar}_2D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & 1 & s_{D_3} & \cdots & s_{D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & & \ddots & & \vdots \\ 0 & & &1 &s_{D_k} \end{array} \right ] .$$ \label{propsd}\end{proposition} \begin{proof} We proceed by induction on $k$. Assuming the assertion is true for $k-1$, we show that it is true for $k$ as well. Note that the base case, $k=2$, is the identity \begin{E} s_{D_1}s_{D_2}=s_{D_1\cdot D_2}+s_{D_1\odot D_2} \label{skewschur0}\end{E}for skew diagrams $D_1, D_2$ (see, for example, \cite[Proposition 4.1]{HDL2}). By the induction hypothesis, we have \begin{E} \det \left [ \begin{array}{cccc} s_{D} & s_{D\bar{\bigstar}_2D_3} & \cdots & s_{D\bar{\bigstar}_2D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k}\\ 1 & s_{D_3} & \cdots & s_{D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & \ddots & & \vdots \\ 0 & &1 &s_{D_k} \end{array} \right ] =s_{D\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k} \label{skewschur1}\end{E} where $D$ can be any skew diagram. 
Now expanding over the first column yields \begin{E}\begin{array}{ll} \det \left [ \begin{array}{ccccc} s_{D_1} & s_{D_1\bar{\bigstar}_1D_2} & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2\cdots\bar{\bigstar}_{k-1}D_k}\\ 1 & s_{D_2} & s_{D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_2\bar{\bigstar}_2D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & 1 & s_{D_3} & \cdots & s_{D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & & \ddots & & \vdots \\ 0 & & &1 &s_{D_k} \end{array} \right ] &=\\ s_{D_1}\times \det \left [ \begin{array}{cccc} s_{D_2} & s_{D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_2\bar{\bigstar}_2D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k}\\ 1 & s_{D_3} & \cdots & s_{D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & \ddots & & \vdots \\ 0 & &1 &s_{D_k} \end{array} \right ] & -\\ \det \left [ \begin{array}{cccc} s_{D_1\bar{\bigstar}_1D_2} & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2D_3} & \cdots & s_{D_1\bar{\bigstar}_1D_2\bar{\bigstar}_2D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k}\\ 1 & s_{D_3} & \cdots & s_{D_3\bar{\bigstar}_3\cdots\bar{\bigstar}_{k-1}D_k} \\ & \ddots & & \vdots \\ 0 & &1 &s_{D_k} \end{array} \right ] \ . \label{skewschur2}\end{array}\end{E}\\ Note that the first and second determinants on the right side of \eqref{skewschur2} are equal to the determinant in \eqref{skewschur1} for, respectively, $D=D_2$ and $D=D_1\bar{\bigstar}_1D_2$. Thus, the equality in \eqref{skewschur1} implies that \eqref{skewschur2} is equal to $$s_{D_1}\times s_{D_2\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k}-s_{D_1\bar{\bigstar}_1D_2\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k}$$and because of \eqref{skewschur0}, the last expression is equal to $$s_{D_1\bigstar_1D_2\bigstar_2D_3\bigstar_3\cdots\bigstar_{k-1}D_k}.$$ This completes the induction.\end{proof} Let $\alpha$ be a ribbon such that $$\alpha =\square\bigstar _1 \square \bigstar _2 \cdots \bigstar _{k-1} \square$$and $|\alpha|=k$. 
In Proposition~\ref{propsd} set $D_i=D$ for $i$ odd and $D_i=D^t$ for $i$ even for some skew diagram $D$ so that $D\bigstar_1 D^t \bigstar_2 D\bigstar_3\cdots = \alpha\bullet D$. Note that $$\alpha^t\bullet D=D\bar{\bigstar}_{k-1}D^t\bar{\bigstar}_{k-2}D\bar{\bigstar}_{k-3}\cdots$$ therefore, $$(\alpha^t)^\circ\bullet D=D\bar{\bigstar}_1D^t\bar{\bigstar}_2D\bar{\bigstar}_3\cdots.$$ Using Proposition~\ref{propsd} with the above setting, we have the following corollary. \begin{corollary} $$s_{\alpha \bullet D}=\det \left [ \begin{array}{ccccc} \ast & \ast & \ast & \cdots & s_{(\alpha^t)^\circ\bullet D}\\ 1 & \ast & \ast & \cdots & \ast \\ & 1 & \ast & \cdots & \ast \\ & & \ddots & & \vdots \\ 0 & & &1 &\ast \end{array} \right ] $$where the skew Schur functions indexed by skew diagrams with fewer than $|\alpha|$ blocks of $D$ or $D^t$ are denoted by $\ast$. \label{dihamel} \end{corollary} We are now ready to derive our first ordinary skew Schur $Q$-function equalities. \begin{proposition} If $\alpha$ is a ribbon and $D$ is a skew diagram then ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }$ and ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha \bullet D ^t}.$ \label{diproprottrans}\end{proposition} \begin{proof} We induct on $|\alpha|$. The base case is easy as $1=1^\circ$ and ${\mathfrak s}_ D ={\mathfrak s}_{ D ^t}$ by \eqref{eq:Qtr}. Assume the proposition is true for $|\alpha|<n$. We first show that ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }$ for all $\alpha$'s with $|\alpha|=n$, by inducting on the number of parts in $\alpha$, that is $\ell(\alpha)$. The base case, $\ell(\alpha)=1$, is straightforward as $n=n^\circ$. Assume ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }$ is true for all compositions $\alpha$ with fewer than $k$ parts (the hypothesis for the second induction). Let $\alpha=\alpha_1\cdots\alpha_k$. 
Using \eqref{eq:Qmult}, we know that for all skew diagrams $V$ and $L$ we have $${\mathfrak s}_{ V\cdot L }={\mathfrak s}_ V{\mathfrak s}_L -{\mathfrak s}_{ V\odot L } . $$ We consider the following four cases. Note that in each case we set $V$ and $L$ such that $ V\cdot L =\alpha_1\cdots \alpha_{k-1}\alpha_k \bullet D =\alpha\bullet D$ and $V\odot L =\alpha_1\cdots(\alpha_{k-1}+\alpha_k) \bullet D$. Also, note that since $|\alpha_1\cdots\alpha_{k-1}|<n$ and $|\alpha_k|<n$, we can use the induction hypothesis of the first induction (i.e. we can rotate the first and transpose the second component). Furthermore, even though $|\alpha_1\cdots(\alpha_{k-1}+\alpha_k)|=n$, the number of parts in $\alpha_1\cdots(\alpha_{k-1}+\alpha_k)$ is $k-1$ and therefore we can use the induction hypothesis of the second induction: \\ {\it Case 1: $|\alpha_1\cdots \alpha_{k-1}|$ is even and $|\alpha_k|$ is even.} Set $V=\alpha_1 \cdots \alpha_{k-1} \bullet D$ and $L=\alpha_k \bullet D$. Then \begin{eqnarray*}{\mathfrak s}_{\alpha \bullet D}&=&{\mathfrak s}_{\alpha_1\cdots\alpha_{k-1}\bullet D } {\mathfrak s}_{\alpha_k \bullet D }-{\mathfrak s}_{\alpha_1\cdots(\alpha_{k-1}+\alpha_k)\bullet D }\\&=&{\mathfrak s}_{\alpha_k \bullet D }{\mathfrak s}_{\alpha_{k-1}\cdots\alpha_{1}\bullet D }-{\mathfrak s}_{(\alpha_k+\alpha_{k-1})\cdots \alpha_1\bullet D }={\mathfrak s}_{\alpha_k\alpha_{k-1}\cdots\alpha_{1}\bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }. \end{eqnarray*} \\ {\it Case 2: $|\alpha_1\cdots \alpha_{k-1}|$ is even and $|\alpha_k|$ is odd.} Set $ V =\alpha_1 \cdots \alpha_{k-1} \bullet D $ and $ L =\alpha_k \bullet D $. 
Then \begin{eqnarray*}{\mathfrak s}_{\alpha \bullet D }&=&{\mathfrak s}_{\alpha_1\cdots\alpha_{k-1}\bullet D } {\mathfrak s}_{\alpha_k \bullet D }-{\mathfrak s}_{\alpha_1\cdots(\alpha_{k-1}+\alpha_k)\bullet D }\\&=&{\mathfrak s}_{\alpha_k \bullet D }{\mathfrak s}_{\alpha_{k-1}\cdots\alpha_{1}\bullet D ^t} -{\mathfrak s}_{(\alpha_k+\alpha_{k-1})\cdots \alpha_1\bullet D }={\mathfrak s}_{\alpha_k\alpha_{k-1}\cdots\alpha_{1}\bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }. \end{eqnarray*} \\ {\it Case 3: $|\alpha_1\cdots \alpha_{k-1}|$ is odd and $|\alpha_k|$ is even.} Set $ V =\alpha_1 \cdots \alpha_{k-1} \bullet D $ and $ L =\alpha_k \bullet D ^t$. Then \begin{eqnarray*}{\mathfrak s}_{\alpha \bullet D }&=&{\mathfrak s}_{\alpha_1\cdots\alpha_{k-1}\bullet D } {\mathfrak s}_{\alpha_k \bullet D ^t}-{\mathfrak s}_{\alpha_1\cdots(\alpha_{k-1}+\alpha_k)\bullet D }\\&=&{\mathfrak s}_{\alpha_k \bullet D }{\mathfrak s}_{\alpha_{k-1}\cdots\alpha_{1}\bullet D }-{\mathfrak s}_{(\alpha_k+\alpha_{k-1})\cdots \alpha_1\bullet D }={\mathfrak s}_{\alpha_k\alpha_{k-1}\cdots\alpha_{1}\bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }. \end{eqnarray*} \\ {\it Case 4: $|\alpha_1\cdots \alpha_{k-1}|$ is odd and $|\alpha_k|$ is odd.} Set $ V =\alpha_1 \cdots \alpha_{k-1} \bullet D $ and $ L =\alpha_k \bullet D ^t$. Then \begin{eqnarray*}{\mathfrak s}_{\alpha \bullet D }&=&{\mathfrak s}_{\alpha_1\cdots\alpha_{k-1}\bullet D } {\mathfrak s}_{\alpha_k \bullet D ^t}-{\mathfrak s}_{\alpha_1\cdots(\alpha_{k-1}+\alpha_k)\bullet D }\\&=&{\mathfrak s}_{\alpha_k \bullet D }{\mathfrak s}_{\alpha_{k-1}\cdots\alpha_{1}\bullet D ^t}-{\mathfrak s}_{(\alpha_k+\alpha_{k-1})\cdots \alpha_1\bullet D }={\mathfrak s}_{\alpha_k\alpha_{k-1}\cdots\alpha_{1}\bullet D }={\mathfrak s}_{\alpha^\circ \bullet D }. \end{eqnarray*} This completes the second induction. Now to complete the first induction, we show that ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha \bullet D ^t}$ where $|\alpha|=n$. 
\\ Suppose $n$ is odd. By Corollary \ref{dihamel}, we have $$s_{\alpha \bullet D }=\det \left [ \begin{array}{ccccc} \ast & \ast & \ast & \cdots & s_{(\alpha^t)^\circ\bullet D }\\ 1 & \ast & \ast & \cdots & \ast \\ & 1 & \ast & \cdots & \ast \\ & & \ddots & & \vdots \\ 0 & & &1 &\ast \end{array} \right ] . $$ Expanding the above determinant we have $$ s_{\alpha \bullet D }=X+s_{(\alpha^t)^\circ\bullet D } $$ where $X$ is comprised of skew Schur functions indexed by skew diagrams with fewer than $|\alpha|$ blocks of $ D $ or $ D ^t$. Applying $\theta$ to both sides of the above equation yields \begin{E} {\mathfrak s}_{\alpha \bullet D }={\mathfrak X}+{\mathfrak s}_{(\alpha^t)^\circ\bullet D }={\mathfrak X}+{\mathfrak s}_{\alpha^t\bullet D }= {\mathfrak X}+{\mathfrak s}_{(\alpha^t\bullet D )^t}={\mathfrak X}+{\mathfrak s}_{\alpha\bullet D ^t} \label{ditransodd1} \end{E}where we used the result of the second induction for the second, \eqref{eq:Qtr} for the third and \eqref{ditransposition} for the fourth equality. Similarly, $$s_{\alpha \bullet D ^t}=\det \left [ \begin{array}{ccccc} \ast & \ast & \ast & \cdots & s_{(\alpha^t)^\circ\bullet D ^t}\\ 1 & \ast & \ast & \cdots & \ast \\ & 1 & \ast & \cdots & \ast \\ & & \ddots & & \vdots \\ 0 & & &1 &\ast \end{array} \right ] $$and expanding the determinant we have $$ s_{\alpha \bullet D ^t}=X'+s_{(\alpha^t)^\circ\bullet D ^t} $$where $X'$ is again comprised of skew Schur functions indexed by skew diagrams with fewer than $|\alpha|$ blocks of $ D $ or $ D ^t$. By the induction hypothesis of the first induction (i.e. the induction on $|\alpha|$), we can assume $\theta(X')=\theta(X)={\mathfrak X}$. 
Now we apply $\theta$ to both sides of the above equation, thus \begin{E} {\mathfrak s}_{\alpha \bullet D ^t}={\mathfrak X}+{\mathfrak s}_{(\alpha^t)^\circ\bullet D ^t}={\mathfrak X}+{\mathfrak s}_{\alpha^t\bullet D ^t}= {\mathfrak X}+{\mathfrak s}_{(\alpha^t\bullet D ^t)^t}={\mathfrak X}+{\mathfrak s}_{\alpha\bullet D } \label{ditransodd2} \end{E}where, again, we used the result of the second induction for the second, \eqref{eq:Qtr} for the third and \eqref{ditransposition} for the fourth equality. Now \eqref{ditransodd1} and \eqref{ditransodd2} imply ${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\alpha \bullet D ^t}$ for the case $|\alpha|=n$ odd. \comment{Now suppose $n$ is even. Again by Corollary \ref{dihamel}, we have $$s_{\alpha \bullet D }=\det \left [ \begin{array}{ccccc} \ast & \ast & \ast & \cdots & s_{(\alpha^t)^\circ\bullet D }\\ 1 & \ast & \ast & \cdots & \ast \\ & 1 & \ast & \cdots & \ast \\ & & \ddots & & \vdots \\ 0 & & &1 &\ast \end{array} \right ] . $$ Expanding the above determinant we have $$ s_{\alpha \bullet D }=Y-s_{(\alpha^t)^\circ\bullet D } $$ where $Y$ is comprised of skew Schur functions indexed by skew diagrams with fewer than $|\alpha|$ blocks of $ D $ or $ D ^t$. Applying $\theta$ to both sides of the above equation yields \begin{E} {\mathfrak s}_{\alpha \bullet D }={\mathfrak Y}-{\mathfrak s}_{(\alpha^t)^\circ\bullet D }={\mathfrak Y}-{\mathfrak s}_{\alpha^t\bullet D }= {\mathfrak Y}-{\mathfrak s}_{(\alpha^t\bullet D )^t}={\mathfrak Y}-{\mathfrak s}_{\alpha\bullet D } \label{ditranseven1} \end{E}where we used the result of the second induction for the second, \eqref{eq:Qtr} for the third and \eqref{ditransposition} for the fourth equality. 
Similarly, $$s_{\alpha \bullet D ^t}=\det \left [ \begin{array}{ccccc} \ast & \ast & \ast & \cdots & s_{(\alpha^t)^\circ\bullet D ^t}\\ 1 & \ast & \ast & \cdots & \ast \\ & 1 & \ast & \cdots & \ast \\ & & \ddots & & \vdots \\ 0 & & &1 &\ast \end{array} \right ] $$and expanding the determinant we have $$ s_{\alpha \bullet D ^t}=Y'-s_{(\alpha^t)^\circ\bullet D ^t} $$where $Y'$ is again comprised of skew Schur functions indexed by skew diagrams with fewer than $|\alpha|$ blocks of $ D $ or $ D ^t$. By the induction hypothesis on $|\alpha|$ we can assume $\theta(Y')=\theta(Y)={\mathfrak Y}$. Now we apply $\theta$ to both sides of the above equation, thus \begin{E} {\mathfrak s}_{\alpha \bullet D ^t}={\mathfrak Y}-{\mathfrak s}_{(\alpha^t)^\circ\bullet D ^t}={\mathfrak Y}-{\mathfrak s}_{\alpha^t\bullet D ^t}= {\mathfrak Y}-{\mathfrak s}_{(\alpha^t\bullet D ^t)^t}={\mathfrak Y}-{\mathfrak s}_{\alpha\bullet D ^t} \label{ditranseven2} \end{E}where, again, we used the result of the second induction for the second, \eqref{eq:Qtr} for the third and \eqref{ditransposition} for the fourth equality. Now \eqref{ditranseven1} and \eqref{ditranseven2} imply ${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\alpha \bullet D ^t}$ for the case $|\alpha|=n$ even.} The case $n$ is even is similar. 
This completes the first induction and yields the proposition.\end{proof} \begin{corollary} If $\alpha$ is a ribbon and $D$ is a skew diagram then ${\mathfrak s}_{\alpha \bullet D }={\mathfrak s}_{\alpha \bullet D ^\circ}.$ \label{dicorrotation} \end{corollary} \begin{proof} Both cases $|\alpha|$ odd and $|\alpha|$ even follow from Proposition~\ref{diproprottrans}, \eqref{eq:Qrot} and \eqref{dirotation}.\end{proof} \comment{For $|\alpha|$ odd, $${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\alpha^\circ\bullet D }={\mathfrak s}_{(\alpha^\circ\bullet D )^\circ}={\mathfrak s}_{\alpha\bullet D ^\circ}$$ where we used Proposition~\ref{diproprottrans} for the first, \eqref{eq:Qrot} for the second and \eqref{dirotation} for the third equality. For $|\alpha|$ even, $${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\alpha^\circ\bullet D }={\mathfrak s}_{(\alpha^\circ\bullet D )^\circ}={\mathfrak s}_{\alpha\bullet( D ^t)^\circ}={\mathfrak s}_{\alpha\bullet D ^\circ}$$ where we used Proposition~\ref{diproprottrans} for the first, \eqref{eq:Qrot} for the second, \eqref{dirotation} for the third and Proposition \ref{diproprottrans} for the fourth equality. } \begin{corollary}If $\alpha$ is a ribbon and $D$ is a skew diagram then ${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\alpha^t \bullet D }$. \label{dicortranspose} \end{corollary} \begin{proof} Both cases $|\alpha|$ odd and $|\alpha|$ even follow from Proposition~\ref{diproprottrans}, \eqref{eq:Qtr} and \eqref{ditransposition}.\end{proof} \comment{For $|\alpha|$ even, $$ {\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{(\alpha\bullet D )^t}={\mathfrak s}_{\alpha^t\bullet D } $$ where we used \eqref{eq:Qtr} for the first and \eqref{ditransposition} for the second equality. 
For $|\alpha|$ odd, $$ {\mathfrak s}_{\alpha\bullet D }= {\mathfrak s}_{(\alpha\bullet D )^t}= {\mathfrak s}_{\alpha^t\bullet D ^t}= {\mathfrak s}_{\alpha^t\bullet D } $$ where we used \eqref{eq:Qtr} for the first, \eqref{ditransposition} for the second and Proposition~\ref{diproprottrans} for the third equality. } We can also derive new ordinary skew Schur $Q$-function equalities from known ones. \begin{proposition} For skew diagrams $D$ and $E$, if ${\mathfrak s}_ D ={\mathfrak s}_ E $ then ${\mathfrak s}_{ D \odot D ^t}={\mathfrak s}_{ D \cdot D ^t}={\mathfrak s}_{ E \cdot E ^t}={\mathfrak s}_{ E \odot E ^t} $. \label{diunexplained}\end{proposition} \begin{proof} Note that $ D \odot D ^t=2\bullet D $ and $ D \cdot D ^t=11\bullet D $. Since $2^t = 11$, we have by Corollary \ref{dicortranspose} that ${\mathfrak s}_{ D \odot D ^t}={\mathfrak s}_{ D \cdot D ^t}$. The result follows by applying \eqref{eq:Qmult} to the pair $D$ and $D^t$ and using ${\mathfrak s}_{D}={\mathfrak s}_{D^t}$, yielding \begin{equation}{\mathfrak s}^2_ D =2{\mathfrak s}_{ D \odot D ^t} \label{diunexplainedalpha}. \end{equation}\end{proof} \comment{Now by \eqref{eq:Qmult} we have $${\mathfrak s}_ D {\mathfrak s}_{ D ^t}={\mathfrak s}_{ D \cdot D ^t}+{\mathfrak s}_{ D \odot D ^t}.$$ Substituting ${\mathfrak s}_ D ={\mathfrak s}_{ D ^t}$, ${\mathfrak s}_{ D \odot D ^t}={\mathfrak s}_{ D \cdot D ^t}$ and simplifying we get \begin{E}{\mathfrak s}^2_ D =2{\mathfrak s}_{ D \odot D ^t} \label{diunexplainedalpha}.\end{E} Similarly \begin{E}{\mathfrak s}^2_ E =2{\mathfrak s}_{ E \odot E ^t} \label{diunexplainedbeta}. \end{E} Since ${\mathfrak s}_ D ={\mathfrak s}_ E $, \eqref{diunexplainedalpha} and \eqref{diunexplainedbeta} yield ${\mathfrak s}_{ D \odot D ^t}={\mathfrak s}_{ E \odot E ^t}$. 
The result now follows.} \begin{proposition}\label{prop:power2} For skew diagrams $ D $ and $ E $, ${\mathfrak s}_ D ={\mathfrak s}_ E $ if and only if $${\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet D }={\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet E }.$$ \end{proposition} \begin{proof} This follows from a straightforward application of \eqref{diunexplainedalpha}.\end{proof} \comment{From \eqref{diunexplainedalpha}, we have $${\mathfrak s}^2_ D =2{\mathfrak s}_{2\bullet D }$$ therefore $${\mathfrak s}^2_{\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet D }=2{\mathfrak s}_{2\bullet(\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet D )}=2{\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet D } $$and similarly $${\mathfrak s}^2_{\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet E }=2{\mathfrak s}_{2\bullet(\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet E )}=2{\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet E }. $$ Hence, it is straightforward to see $${\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet D }={\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet E } \Longleftrightarrow {\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet D }={\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n-1}\bullet E }.$$ Applying the above equivalence repeatedly, we have $${\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet D }={\mathfrak s}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet E } \Longleftrightarrow {\mathfrak s}_ D ={\mathfrak s}_ E . $$} Before we prove our main result on equality we require the following map, which is analogous to the map $\circ s_D$ in \cite[Corollary 7.4]{HDL2}. 
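It may help to make the composition-level operations explicit. Encoding a ribbon $\alpha$ of $n$ by the set of partial sums $S(\alpha)\subseteq\{1,\ldots,n-1\}$ of its parts, rotation $\alpha^\circ$ reverses the composition, and transposition $\alpha^t$ is, in one standard convention, complementation of the reversed subset; a competing convention differs only by a rotation, which leaves the skew Schur $Q$-function unchanged by \eqref{eq:Qrot}. The following short sketch is an illustration only; the encoding is a choice of convention on our part, not quoted from the results above.

```python
# Illustration: ribbons as compositions, with rotation and (one convention of)
# transposition via the subset-of-partial-sums encoding.
def subset(alpha):
    """S(alpha): the proper partial sums of alpha, a subset of {1, ..., n-1}."""
    s, out = 0, set()
    for part in alpha[:-1]:
        s += part
        out.add(s)
    return out

def composition(S, n):
    """Inverse of subset(): the composition of n with cut points S."""
    cuts = [0] + sorted(S) + [n]
    return tuple(b - a for a, b in zip(cuts, cuts[1:]))

def rotate(alpha):
    """alpha deg: rotating the ribbon by 180 degrees reverses the composition."""
    return tuple(reversed(alpha))

def transpose(alpha):
    """alpha^t: complement of the reversed subset (fixed only up to rotation)."""
    n = sum(alpha)
    return composition(set(range(1, n)) - subset(rotate(alpha)), n)

# a row transposes to a column; in particular 2^t = 11, as used in
# Proposition diunexplained
assert transpose((4,)) == (1, 1, 1, 1) and transpose((2,)) == (1, 1)
# transposition is an involution and commutes with rotation
assert transpose(transpose((2, 1, 1))) == (2, 1, 1)
assert rotate(transpose((1, 3, 2))) == transpose(rotate((1, 3, 2)))
```

The last assertion is the compatibility $(\alpha^t)^\circ=(\alpha^\circ)^t$ appearing in Theorem~\ref{the:bigone}.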
\begin{proposition}\label{prop:wdmap} For a fixed skew diagram $D$, the map \begin{eqnarray*} \Q[z_{\alpha}] &\stackrel{(-)\bullet{\mathfrak s}_ D }{\longrightarrow} &\Omega\\ z_\alpha&\mapsto &{\mathfrak s}_{\alpha\bullet D } \\ 0&\mapsto &0 \end{eqnarray*} descends to a well-defined map $\Omega \rightarrow \Omega$. Hence it is well-defined to set $${\mathfrak r} _\alpha \bullet {\mathfrak s} _D = {\mathfrak s} _{\alpha \bullet D}$$where we abuse notation by using $\bullet$ for both the map and the composition of transpositions. \end{proposition} \begin{proof} Observe that by Theorem~\ref{ribbonrelations2} it suffices to prove that the expressions $$z_{\alpha}\ z_{\beta}-z_{\alpha\cdot \beta} - z_{\alpha \odot \beta}$$for ribbons $\alpha, \beta$ and $$z_{2m} - z_{\underbrace{1\ldots1}_{2m}}$$ for all positive integers $m$, are mapped to 0 by $(-)\bullet{\mathfrak s}_ D$. For the first expression, observe that for ribbons $\alpha, \beta$ and skew diagram $D$ $$\begin{array}{c} (\alpha\cdot\beta)\bullet D =(\alpha\bullet D )\cdot(\beta\bullet D ') \\ (\alpha\odot\beta)\bullet D =(\alpha\bullet D )\odot(\beta\bullet D ') \end{array}$$ where $ D '= D $ when $|\alpha|$ is even and $ D '= D ^t$ otherwise. Therefore $$z_\alpha z_\beta-z_{\alpha\cdot\beta}-z_{\alpha\odot\beta}$$is mapped to $$\begin{array}{ll} & {\mathfrak s}_{\alpha\bullet D }{\mathfrak s}_{\beta\bullet D }-{\mathfrak s}_{(\alpha\cdot\beta)\bullet D }-{\mathfrak s}_{(\alpha\odot\beta)\bullet D }\\ = & {\mathfrak s}_{\alpha\bullet D }{\mathfrak s}_{\beta\bullet D '}-{\mathfrak s}_{(\alpha\bullet D )\cdot(\beta\bullet D ')}-{\mathfrak s}_{(\alpha\bullet D )\odot(\beta\bullet D ')}\\ = & 0 \end{array}$$ where we used the above observation and Proposition~\ref{diproprottrans} for the first, and \eqref{eq:Qmult} for the second equality. 
For the second expression, observe $$z_{2m}-z_{\underbrace{1\ldots1}_{2m}}$$ goes to $${\mathfrak s}_{2m\bullet D }-{\mathfrak s}_{\underbrace{1\ldots1}_{2m}\bullet D }= {\mathfrak s}_{2m\bullet D }-{\mathfrak s}_{2m\bullet D }=0$$where we used Corollary~\ref{dicortranspose} for the first equality. \end{proof} \begin{proposition} For ribbons $\alpha$, $\beta$ and skew diagram $D$, if ${\mathfrak r}_\alpha={\mathfrak r}_\beta$ then ${\mathfrak s}_{\alpha\bullet D }={\mathfrak s}_{\beta\bullet D }$. \label{dibulletonright} \end{proposition} \begin{proof} This follows by Proposition~\ref{prop:wdmap}. \comment{ we have $${\mathfrak s}_{\alpha\bullet D }={\mathfrak r}_\alpha \bullet {\mathfrak s}_ D ={\mathfrak r}_\beta \bullet {\mathfrak s}_ D ={\mathfrak s}_{\beta\bullet D }$$ where the second equality holds because we are given ${\mathfrak r}_\alpha={\mathfrak r}_\beta$. This completes the proof.} \end{proof} We now come to our main result on equality of ordinary skew Schur $Q$-functions. \begin{theorem}\label{the:bigone} For ribbons $\alpha _1, \ldots , \alpha _m$ and skew diagram $D$ the ordinary skew Schur $Q$-function indexed by $$\alpha _1 \bullet \cdots \bullet \alpha _m \bullet D$$is equal to the ordinary skew Schur $Q$-function indexed by $$\beta _1 \bullet \cdots \bullet \beta _m \bullet E$$where $$ \beta _i \in \{ \alpha _i, \alpha _i ^t, \alpha _i ^\circ , (\alpha _i ^t)^\circ = (\alpha _i ^\circ)^t\} \quad 1\leq i \leq m,$$ $$ E\in \{ D, D^t, D^\circ , (D^t)^\circ = (D^\circ)^t\}.$$ \end{theorem} \begin{proof} We begin by restricting our attention to ribbons and proving that for ribbons $\alpha _1, \ldots , \alpha _m$ $${\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _m} = {\mathfrak r} _{\beta _1 \bullet \cdots \bullet \beta _m}$$where $ \beta _i \in \{ \alpha _i, \alpha _i ^t, \alpha _i ^\circ , (\alpha _i ^t)^\circ = (\alpha _i ^\circ)^t\} \quad 1\leq i \leq m$. 
To simplify notation let $\lambda= \alpha_1 \bullet \cdots \bullet \alpha_m$ and $\mu=\beta_1 \bullet \cdots \bullet \beta_m$ where $\beta_i \in \{\alpha_i, {\alpha_i}^t, {\alpha_i}^\circ, ({\alpha_i}^t)^\circ\}$ for $1 \leq i \leq m$. Let $i$ be the smallest index in $\mu$ such that $\alpha_i \neq \beta_i$. Suppose $\beta_i={\alpha_i}^t$; then by the associativity of $\bullet$ \begin{E} {\mathfrak r}_\mu={\mathfrak r}_{(\alpha_1 \bullet \cdots \bullet \alpha_{i-1}) \bullet({\alpha_i}^t\bullet \beta_{i+1}\bullet \cdots \bullet \beta_m)}={\mathfrak r}_{(\alpha_1 \bullet \cdots \bullet \alpha_{i-1}) \bullet({\alpha_i}^t\bullet \beta_{i+1}\bullet \cdots \bullet \beta_m)^t}={\mathfrak r}_{\alpha_1 \bullet \cdots \bullet \alpha_{i-1} \bullet{\alpha_i}\bullet (\beta_{i+1}\bullet \cdots \bullet \beta_m)'} \label{multitranspose} \end{E}where we used Proposition~\ref{diproprottrans} for the second and \eqref{ditransposition} for the third equality. Note that $(\beta_{i+1}\bullet \cdots \bullet \beta_m)'=\beta_{i+1}\bullet \cdots \bullet \beta_m$ if $|\alpha_i|$ is even, and $(\beta_{i+1}\bullet \cdots \bullet \beta_m)'=(\beta_{i+1}\bullet \cdots \bullet \beta_m)^t$ if $|\alpha_i|$ is odd. Now suppose $\beta_i={\alpha_i}^\circ$; then \begin{E} {\mathfrak r}_\mu={\mathfrak r}_{(\alpha_1 \bullet \cdots \bullet \alpha_{i-1}) \bullet({\alpha_i}^\circ \bullet \beta_{i+1}\bullet \cdots \bullet \beta_m)}={\mathfrak r}_{(\alpha_1 \bullet \cdots \bullet \alpha_{i-1}) \bullet({\alpha_i}^\circ\bullet \beta_{i+1}\bullet \cdots \bullet \beta_m)^\circ}={\mathfrak r}_{\alpha_1 \bullet \cdots \bullet \alpha_{i-1} \bullet{\alpha_i}\bullet (\beta_{i+1}\bullet \cdots \bullet \beta_m)'} \label{multirotation} \end{E}where we used Corollary~\ref{dicorrotation} for the second and \eqref{dirotation} for the third equality.
Note that $(\beta_{i+1}\bullet \cdots \bullet \beta_m)'=(\beta_{i+1}\bullet \cdots \bullet \beta_m)^\circ$ if $|\alpha_i|$ is odd, and $(\beta_{i+1}\bullet \cdots \bullet \beta_m)'=((\beta_{i+1}\bullet \cdots \bullet \beta_m)^t)^\circ$ if $|\alpha_i|$ is even. For the case $\beta_i=({\alpha_i}^t)^\circ$ we can combine \eqref{multitranspose} and \eqref{multirotation} to arrive at $${\mathfrak r}_\mu ={\mathfrak r}_{\alpha_1 \bullet \cdots \bullet \alpha_{i-1} \bullet{\alpha_i}\bullet (\beta_{i+1}\bullet \cdots \bullet \beta_m)'} $$and $$(\beta_{i+1}\bullet \cdots \bullet \beta_m)'\in \{ (\beta_{i+1}\bullet \cdots \bullet \beta_m), (\beta_{i+1}\bullet \cdots \bullet \beta_m)^t, (\beta_{i+1}\bullet \cdots \bullet \beta_m)^\circ, ((\beta_{i+1}\bullet \cdots \bullet \beta_m)^t)^\circ\}.$$ Iterating the above process for each of the three cases, we recover ${\mathfrak r}_\lambda$. Applying Proposition~\ref{dibulletonright}, we have $${\mathfrak s}_{\alpha_1\bullet\cdots\bullet\alpha_m\bullet D}={\mathfrak s}_{\beta_1\bullet\cdots\bullet\beta_m\bullet D}.$$ Using Corollary \ref{dicorrotation} and Proposition \ref{diproprottrans} we know that ${\mathfrak s}_{\beta_1\bullet\cdots\bullet\beta_m\bullet D}={\mathfrak s}_{\beta_1\bullet\cdots\bullet\beta_m\bullet E}$ where $E\in\{D, D^t,D^\circ,(D^t)^\circ\}$. The assertion follows from combining the latter equality with the above equality. \end{proof} \section{\texorpdfstring{Ribbon Schur $Q$-functions}{Ribbon Schur Q-functions}}\label{sec:ribbons} We have seen that ribbon Schur $Q$-functions yield a natural basis for $\Omega$ in Corollary~\ref{cor:rbasis} and establish a generating set of relations for $\Omega$ in Theorems~\ref{ribbonrelations} and \ref{ribbonrelations2}. Now we will see how they relate to enumeration in graded posets.
Let $NC=\mathbb{Q} \langle y_1, y_2, \ldots \rangle$ be the free associative algebra on countably many generators $y_1, y_2, \ldots$. Then \cite{BilleraLiu} showed that $NC$ is isomorphic to the non-commutative algebra of flag-enumeration functionals on graded posets. Furthermore, they showed that the non-commutative algebra of flag-enumeration functionals on Eulerian posets is isomorphic to $$A_\mathcal{E} = NC / \langle \chi _2, \chi _4 , \ldots\rangle$$where $\chi _{2m}$ is the even Euler form $\chi _{2m}=\sum _{r+s = 2m} (-1)^{r} y_ry_s$. Given a composition $\alpha = \alpha _1 \alpha _2\cdots \alpha _{\ell(\alpha)}$, the \emph{flag-$f$ operator} $y_\alpha$ is $$y_\alpha = y _{\alpha _1}y_{\alpha _2} \cdots y_{\alpha _{\ell(\alpha)}}$$and the \emph{flag-$h$ operator} $\mathfrak{h} _\alpha$ is $$\mathfrak{h} _\alpha= (-1)^{\ell(\alpha)} \sum _{\beta \succcurlyeq \alpha} (-1) ^{\ell(\beta)} y _{\beta}$$and $y _\alpha$ and $\mathfrak{h} _\alpha$ are described as being of Eulerian posets if we view them as elements of $A_\mathcal{E}$. We can now give the relationship between $A_\mathcal{E}$ and $\Omega$. \begin{theorem}\label{the:commconnection} Let $\alpha$ be a composition. The non-commutative analogue of $q_\alpha $ is the flag-$f$ operator of Eulerian posets, $y_\alpha$. Furthermore, the non-commutative analogue of ${\mathfrak r} _\alpha$ is the flag-$h$ operator of Eulerian posets, $\mathfrak{h}_\alpha$. \end{theorem} \begin{proof} Consider the map \begin{eqnarray*} \psi: A _{\mathcal{E}}&\rightarrow&\Omega\\ y_i&\mapsto &q_i \end{eqnarray*} extended multiplicatively and by linearity. By \cite[Proposition 3.2]{BilleraLiu} all relations in $A _{\mathcal{E}}$ are generated by all $\chi _{n}= \sum _{i+j=n} (-1)^iy_iy_j$. Hence $\psi (\chi _n)= \sum _{i+j=n} (-1)^iq_iq_j = 0$ by \eqref{eq:qrels}, and hence $\psi$ is a well-defined algebra homomorphism.
Since the flag-$h$ operator of Eulerian posets, $\mathfrak{h}_\alpha$, is defined to be $$\mathfrak{h}_\alpha= \sum _{\beta \succcurlyeq \alpha} (-1)^{\ell(\alpha)-\ell (\beta)} y _\beta $$ we have $\psi (\mathfrak{h}_\alpha) = {\mathfrak r} _\alpha $ by \eqref{eq:Qrib}.\end{proof} \comment{\begin{eqnarray*}\psi (\mathfrak{h}_\alpha) &=& \psi ( \sum _{\beta \succcurlyeq \alpha} (-1)^{\ell(\alpha)-\ell (\beta)} y _\beta)\\ &=& \sum _{\beta \succcurlyeq \alpha} (-1)^{\ell(\alpha)-\ell (\beta)} q _\beta\\ &=& (-1)^{\ell(\alpha)}\sum _{\beta \succcurlyeq \alpha} (-1)^{\ell (\beta)} q _{\lambda(\beta)}\\ &=& {\mathfrak r} _\alpha \mbox{ by \eqref{eq:Qrib}. } \end{eqnarray*}} \begin{remark} Note that we have the following commutative diagram $$\xymatrix{ NC\ar@{->}[r] ^{\theta ^N} \ar@{->>}[d] _\phi& A_{\mathcal{E}} \ar@{->>}[d] ^\psi \\ \Lambda \ar@{->}[r]^{\theta } & \Omega}$$ where $\phi(y_i)=h_i$ and $h_i$ is the $i$-th homogeneous symmetric function, and $\theta ^N (y_i)=y_i$ is the non-commutative analogue of the map $\theta$. Abusing notation, and denoting $\theta ^N$ by $\theta$ we summarize the relationships between non-symmetric, symmetric and quasisymmetric functions as follows $$\xymatrix{ NC\ar@{->}[r] ^{\theta } \ar@{->>}[rrdd] _\phi \ar@/^3pc/@{<->}[rrrr]^\ast & A_{\mathcal{E}} \ar@{->>}[rd] _\psi \ar@/^1pc/@{<->}[rr]^\ast && \Pi & \mathcal{Q}\ar@{->}[l] _\theta\\ && \Omega \ar@{_{(}->}[ru]\\ &&\Lambda \ar@{->}[u]^{\theta } \ar@{_{(}->}[rruu] }$$where $\mathcal{Q}$ is the algebra of quasisymmetric functions and $\Pi$ is the algebra of peak quasisymmetric functions. For the interested reader, the duality between $NC$ and $\mathcal{Q}$ was established through \cite{Gelfand, Gessel, MR}, and between $A_\mathcal{E}$ and $\Pi$ in \cite{BMSvW}. The commutative diagram connecting $\Omega, \Lambda, \Pi$ and $\mathcal{Q}$ can be found in \cite{Stem}, and the relationship between $NC$ and $\Lambda$ in \cite{Gelfand}.
\end{remark} \subsection{\texorpdfstring{Equality of ribbon Schur $Q$-functions}{Equality of ribbon Schur Q-functions}} From the above uses and connections it seems worthwhile to restrict our attention to ribbon Schur $Q$-functions in the hope that they will yield some insight into the general solution of when two skew Schur $Q$-functions are equal, as was the case with ribbon Schur functions \cite{HDL, HDL3, HDL2}. Certainly our search space is greatly reduced due to the following proposition. \begin{proposition} Equality of skew Schur $Q$-functions restricts to ribbons. That is, if ${\mathfrak r} _\alpha = Q_D$ for a skew diagram $D$ then the shifted skew diagram $\tilde{D}$ must be a ribbon. \end{proposition} \begin{proof} Recall that by definition $$Q_{D}=\sum _T x^T$$where the sum is over all weakly amenable tableaux of shape $\tilde{D}$. If $D$ has $n$ cells, we now consider the coefficient of $x_1 ^n$ in three scenarios. \begin{enumerate} \item $\tilde{D}$ is a ribbon: $[Q_D] _{x_1^n}=2$, which arises from the weakly amenable tableaux where every cell that has a cell to its left must be occupied by $1$, every cell that has a cell below it must be occupied by $1'$, and the bottommost and leftmost cell can be occupied by either $1$ or $1'$. $$\begin{matrix} &&&&&&\cdots&1'&1&\cdots&1\\ &&&&&\vdots\\ &&&&&1'\\ &&1'&1&\cdots&1\\ &&\vdots\\ &&1'\\ (1\mbox{ or }1')&\cdots & 1\\ \end{matrix}$$ \item $\tilde{D}$ is disconnected and each connected component is a ribbon: $[Q_D] _{x_1^n}=2^{c}$ where $c$ is the number of connected components. This is because the leftmost cell in the bottom row of all components can be filled with $1$ or $1'$ to create a weakly amenable tableau, and the remaining cells of each connected component can be filled as in the last case. \item $\tilde{D}$ contains a $2\times 2$ subdiagram: $[Q_D] _{x_1^n}=0$ as the $2\times 2$ subdiagram cannot be filled only with $1$ or $1'$ to create a weakly amenable tableau. 
\end{enumerate} Now note that if ${\mathfrak r} _\alpha = Q_D$ then the coefficient of $x_1 ^n$ must be the same in both ${\mathfrak r} _\alpha$ and $Q_D$. From the above case analysis we see that the coefficient of $x_1 ^n$ in ${\mathfrak r} _\alpha$ is 2, and hence also in $Q_D$. Therefore, by the above case analysis, $\tilde{D}$ must also be a ribbon. \end{proof} We now recast our main results from the previous section in terms of ribbon Schur $Q$-functions, and use this special case to illustrate our results. \begin{proposition} For ribbons $\alpha$ and $\beta$, ${\mathfrak r}_\alpha ={\mathfrak r}_\beta $ if and only if $${\mathfrak r}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet \alpha }={\mathfrak r}_{\underbrace{2\bullet\cdots\bullet 2}_{n}\bullet \beta }.$$ \label{prop:twos} \end{proposition} \begin{example} If we know ${\mathfrak r} _{2\bullet 2 \bullet 2} = {\mathfrak r} _{3311}= {\mathfrak r} _{1511} = {\mathfrak r} _{2\bullet 2 \bullet 11}$ then we have ${\mathfrak r} _2 = {\mathfrak r} _{11}$. This would be an alternative to deducing this result from \eqref{eq:Qtr}. $$2\bullet 2\bullet 2 = \tableau{&&\ &\ &\ \\ \ &\ &\ \\\ \\ \ } \quad 2\bullet 2\bullet 11 = \tableau{&&&&\ \\\ &\ &\ & \ &\ \\ \ \\ \ } $$ \end{example} \begin{remark} Note that the factor 2 appearing in the above proposition is of some fundamental importance since ${\mathfrak r} _{21\circ 14} = {\mathfrak r} _{12\circ 14}$ but ${\mathfrak r} _{3\bullet (21\circ 14)} \neq {\mathfrak r} _{3\bullet(12\circ 14)}$. 
\end{remark} \begin{proposition} For ribbons $\alpha, \beta, \gamma$, if ${\mathfrak r}_\alpha={\mathfrak r}_\beta$ then ${\mathfrak r}_{\alpha\bullet \gamma }={\mathfrak r}_{\beta\bullet \gamma }$.\label{prop:ribtorib} \end{proposition} \begin{example} Since ${\mathfrak r} _3 = {\mathfrak r} _{111}$ by \eqref{eq:Qtr} we have ${\mathfrak r} _{33141} = {\mathfrak r} _{3\bullet 31} = {\mathfrak r} _{111\bullet 31} = {\mathfrak r} _{3121131}.$ $$3\bullet 31 = \tableau{&&&&&\ &\ &\ \\ &&&\ &\ &\ \\ &&&\ \\ \ &\ &\ &\ \\\ }\quad 111\bullet 31 = \tableau{&&&\ &\ &\ \\&&&\ \\&&\ &\ \\&&\ \\&&\ \\\ &\ &\ \\\ }$$ However, we could also have deduced ${\mathfrak r} _{33141} = {\mathfrak r} _{3121131}$ from the following theorem. \end{example} \begin{theorem} For ribbons $\alpha _1, \ldots , \alpha _m$ the ribbon Schur $Q$-function indexed by $$\alpha _1 \bullet \cdots \bullet \alpha _m$$ is equal to the ribbon Schur $Q$-function indexed by $$\beta _1 \bullet \cdots \bullet \beta _m$$where $$\beta _i \in \{ \alpha _i, \alpha _i ^t, \alpha _i ^\circ , (\alpha _i ^t)^\circ = (\alpha _i ^\circ)^t\} \quad 1\leq i \leq m.$$ \end{theorem} \begin{example} If $\alpha _1=2$ and $\alpha _2 = 21$ then $${\mathfrak r} _{231} = {\mathfrak r} _{2121} = {\mathfrak r} _{132} = {\mathfrak r} _{1212}$$as $$2\bullet 21 = \tableau{&&\ &\ \\\ &\ &\ \\\ }\ , 2^t\bullet 21 = \tableau{&\ &\ \\&\ \\\ &\ \\\ }\ , 2\bullet (21)^\circ = \tableau{&&&\ \\&\ &\ &\ \\\ &\ }\ , 2^t\bullet (21)^\circ = \tableau{&&\ \\&\ &\ \\&\ \\\ &\ }\ ,$$but we could have equally well just chosen $\alpha = 231$ and concluded again \begin{eqnarray*} {\mathfrak r} _{231} &=& {\mathfrak r} _{(231)^t} = {\mathfrak r} _{(231)^\circ} = {\mathfrak r} _{((231)^t )^\circ}\\ &=&{\mathfrak r} _{2121} = {\mathfrak r} _{132} = {\mathfrak r} _{1212}.\\ \end{eqnarray*} \end{example} We begin to draw our study of ribbon Schur $Q$-functions to a close with the following conjecture, which we prove in one direction, and has been confirmed for 
ribbons with up to 13 cells. \begin{conjecture} For ribbons $\alpha, \beta$ we have ${\mathfrak r} _\alpha = {\mathfrak r} _\beta$ if and only if there exist $j, k, \ell$ so that $$\alpha = \alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)\bullet \varepsilon _1 \bullet \cdots \bullet\varepsilon _\ell$$and $$\beta = \beta _1 \bullet \cdots \bullet \beta _j \bullet (\delta _1 \circ \cdots \circ \delta _k)\bullet \eta _1 \bullet \cdots \bullet \eta _\ell$$where $$\alpha _i, \beta _i \in \{ 2, 11\}\quad 1\leq i \leq j,$$$$ \delta _i \in \{\gamma _i , \gamma _i ^\circ\} \quad 1\leq i \leq k,$$$$ \eta _i \in \{ \varepsilon _i, \varepsilon _i ^t, \varepsilon _i ^\circ , (\varepsilon _i ^t)^\circ = (\varepsilon _i ^\circ)^t\} \quad 1\leq i \leq \ell.$$ \end{conjecture} To prove one direction note that certainly if $\alpha$ and $\beta$ satisfy the criteria then ${\mathfrak r} _\alpha = {\mathfrak r} _\beta$ since by applying $\theta$ to \cite[Theorem 4.1]{HDL} we have $${\mathfrak r} _{\gamma _1 \circ \cdots \circ \gamma _k} = {\mathfrak r} _{\delta _1 \circ \cdots \circ \delta _k}.$$By Proposition~\ref{prop:twos} and Corollary~\ref{dicortranspose} we get $${\mathfrak r} _{11\bullet (\gamma _1 \circ \cdots \circ \gamma _k)} = {\mathfrak r} _{2\bullet (\gamma _1 \circ \cdots \circ \gamma _k)}= {\mathfrak r} _{2\bullet (\delta _1 \circ \cdots \circ \delta _k)}= {\mathfrak r} _{11\bullet (\delta _1 \circ \cdots \circ \delta _k)}$$and performing this repeatedly we get $${\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)}= {\mathfrak r} _{\beta _1 \bullet \cdots \bullet \beta _j\bullet (\delta _1 \circ \cdots \circ \delta _k)}.$$By Proposition~\ref{prop:ribtorib}, Proposition~\ref{diproprottrans} and Corollary~\ref{dicorrotation} we get \begin{eqnarray*}{\mathfrak r} _{\beta _1 \bullet \cdots \bullet \beta _j\bullet (\delta _1 \circ \cdots \circ \delta _k)\bullet\varepsilon _1} &=&
{\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)\bullet\varepsilon _1}\\ &=& {\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)\bullet\varepsilon _1^t}\\ &=& {\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)\bullet\varepsilon _1^\circ}\\ &=& {\mathfrak r} _{\alpha _1 \bullet \cdots \bullet \alpha _j \bullet (\gamma _1 \circ \cdots \circ \gamma _k)\bullet(\varepsilon _1^t)^\circ} \end{eqnarray*}and performing this repeatedly and noting the associativity of $\bullet$ we obtain one direction of our conjecture. Proving the other direction may be difficult, as a useful tool in studying equality of skew Schur functions was the irreducibility of those indexed by a connected skew diagram \cite{HDL2}. However, irreducibility is a more complex issue when studying the equality of skew Schur $Q$-functions, as illustrated by restricting to ribbon Schur $Q$-functions. \begin{proposition}\label{prop:irrrib} Let $\alpha$ be a ribbon \begin{enumerate} \item for $|\alpha|$ odd, ${\mathfrak r}_\alpha$ is irreducible \item for $|\alpha|$ even, there are infinitely many examples in which ${\mathfrak r}_\alpha$ is irreducible and infinitely many examples in which ${\mathfrak r}_\alpha$ is reducible \end{enumerate} considered as an element of ${\mathbb Z}[q_1,q_3,\ldots]$. \end{proposition} \begin{proof} We first prove the first assertion. Let $|\alpha|=n$, where $n$ is an odd integer. Using \eqref{eq:Qrib}, we have $${\mathfrak r}_\alpha= \pm q_n+r$$ in which $r$ involves only $q_1,q_3,\ldots,q_{n-2}$. This shows that ${\mathfrak r}_\alpha$ is irreducible in ${\mathbb Z}[q_1,q_3,\ldots]$. For the second assertion, note that $${\mathfrak r} _\alpha ^2 = 2{\mathfrak r} _{\alpha \odot \alpha ^t}$$by \eqref{diunexplainedalpha}. Hence, ${\mathfrak r}_{\alpha\odot\alpha^t}$ is reducible for every choice of $\alpha$. 
Further, we show that ${\mathfrak r}_{(4x)2}$ is irreducible in ${\mathbb Z}[q_1,q_3,\ldots]$ for every positive integer $x$. By \eqref{eq:Qrib} we have $${\mathfrak r}_{(4x)2}=q_{4x}q_2-q_{4x+2}=-q_{4x+1}q_1\underbrace{+2q_{4x}q_2-q_{4x-1}q_3+\cdots+q_{2x+2}q_{2x}}_{A}-\frac{q_{2x+1}^2}{2} $$ where we substituted $q_{4x+2}$ using \eqref{qrelations} and simplified for the second equality. We use \eqref{qrelations} to reduce the terms in part $A$ into $q$'s with odd subscripts; however, note that no term in $A$ would contain $q_{4x+1}$ and the terms that contain $q_{2x+1}$ have at least two other $q$'s in them. Since the expansion of ${\mathfrak r}_{(4x)2}$ has $-q_{4x+1}q_1$ and no other term containing $q_{4x+1}$, if ${\mathfrak r}_{(4x)2}$ is reducible then $q_1$ has to be a factor of it. But because we have the non-vanishing term $-\frac{q_{2x+1}^2}{2}$ in the expansion of ${\mathfrak r}_{(4x)2}$, $q_1$ cannot be a factor.\end{proof}
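Since all of the identities in this section reduce, via \eqref{eq:Qrib} and \eqref{qrelations}, to polynomial arithmetic in the odd generators $q_1, q_3, \ldots$, small cases can be checked mechanically. The following sketch (the helper names are ours, not part of the paper) verifies ${\mathfrak r}_2={\mathfrak r}_{11}$, ${\mathfrak r}_3={\mathfrak r}_{111}=q_3$ and ${\mathfrak r}_{21}={\mathfrak r}_{12}$ in the odd generators:

```python
from fractions import Fraction
from functools import lru_cache

# A polynomial in q_1, q_3, q_5, ... is a dict {monomial: coefficient},
# where a monomial is a sorted tuple of odd indices, e.g. q_1^2 q_3 -> (1, 1, 3).

def pmul(a, b):
    """Product of two polynomials in the odd generators."""
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            m = tuple(sorted(m1 + m2))
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c}

def padd(a, b, scale=1):
    """a + scale*b, dropping zero coefficients so dicts are canonical."""
    out = dict(a)
    for m, c in b.items():
        out[m] = out.get(m, 0) + scale * c
    return {m: c for m, c in out.items() if c}

@lru_cache(maxsize=None)
def q(n):
    """q_n in the odd generators; for even n use 2q_n = sum_{i=1}^{n-1} (-1)^{i+1} q_i q_{n-i}."""
    if n == 0:
        return {(): Fraction(1)}
    if n % 2 == 1:
        return {(n,): Fraction(1)}
    out = {}
    for i in range(1, n):
        out = padd(out, pmul(q(i), q(n - i)), scale=Fraction((-1) ** (i + 1), 2))
    return out

def coarsenings(alpha):
    """All compositions obtained from alpha by merging adjacent parts."""
    if len(alpha) == 1:
        return [tuple(alpha)]
    rest = coarsenings(tuple(alpha[1:]))
    return ([(alpha[0],) + b for b in rest] +
            [(alpha[0] + b[0],) + b[1:] for b in rest])

def r(alpha):
    """Ribbon Schur Q-function of alpha, via the alternating sum over coarsenings."""
    alpha, out = tuple(alpha), {}
    for beta in coarsenings(alpha):
        term = {(): Fraction(1)}
        for part in beta:
            term = pmul(term, q(part))
        out = padd(out, term, scale=(-1) ** (len(alpha) + len(beta)))
    return out
```

Because every even $q_n$ is rewritten in terms of the odd generators, two ribbon Schur $Q$-functions agree exactly when the resulting dictionaries coincide.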
\section{Introduction} An important part in the design of non-player characters (NPCs) in video-games has to do with the artificial intelligence (AI) of the characters in terms of their actions in the game. This amounts to handling a variety of problems that also involve low-level aspects of the game, including for example pathfinding, detection of conditions, execution monitoring for the actions of the characters, and many more. While many of these aspects have been studied extensively separately, for instance pathfinding has traditionally been a very active research topic with significant impact on the video-game industry, others such as execution monitoring are typically treated on a per-case basis as part of the implementation of the underlying game engine or the particular character in question. As NPCs become more like real autonomous entities in the game, handling such components in a more principled way becomes very important. This is relevant both from the point of view of designers and developers, who benefit from an architecture that is easier to maintain and reuse, but also from the point of view of the more refined interactions with the game-world that NPCs can achieve. In this paper we propose an architecture for specifying the interaction of NPCs in the game-world in a way that abstracts common tasks into four main conceptual components, namely \emph{perception, deliberation, control, action}. The architecture is inspired by AI research on autonomous agents and robots, in particular the notion of \emph{high-level control} for \emph{cognitive robotics} as it is used in the context of deliberative agents and robots such as \cite{levesque98highlevel,Shanahan01Highlevel}. The motivation is that by adopting a clear role for each component and specifying a simple and clean interface between them, we can obtain several benefits, as we discuss next.
First, as there are many techniques for specifying how an NPC decides on the next action to pursue in the game world, an architecture that abstracts this part in an appropriate way could allow developers to switch easily between approaches. For example, it could facilitate experimentation with a finite-state machine \cite{Rabin02FSM}, a behavior tree \cite{Isla05BehaviorTrees} or a goal-oriented action planning approach \cite{orkin06fear} for deciding on NPC actions, in a way that keeps all other parts of the architecture agnostic to the actual method for deliberation. Second, this can provide the groundwork for a thorough investigation of the different ways of combining the available methodologies for different components, possibly leading to novel ways of using existing approaches. Also, this clear-cut separation of roles can encourage the development of modules that encapsulate existing approaches that have not been abstracted out of their application setting before, increasing re-usability of components. We also believe that this type of organization is a necessary prerequisite for enabling more advanced behaviors that rely on each NPC holding a \emph{personalized view} of the game-world that is separated from the current (updated and completely specified) state of affairs. In particular, we argue that it is important for believable NPCs to adopt a high-level view of relevant aspects of the game-world including the topology and connectivity of available areas in the world. In such cases, where the deliberation is more tightly connected with low-level perception and action, we believe that a well-principled AI architecture becomes important for maintaining and debugging the development process. For example, typically the game engine includes a pathfinding module that is used by all NPCs in order to find their way in the game world. But what happens when one NPC knows that one path to the target destination is blocked while another NPC does not possess this information?
The approach of handling all requests with a single pathfinding module cannot handle this differentiation unless the pathfinder takes into account the personalized knowledge of each NPC. This kind of mixing of perception, deliberation, and action can be handled at the low level of pathfinding using various straightforward tricks, but making a separation between the high-level knowledge of each NPC and the low-level game-world state can be very useful. Adopting this separation, each NPC may keep a personalized high-level view of the game-world in terms of available areas or zones in the game and their connectivity, and use the low-level pathfinding module as a service only for the purpose of finding paths between areas that are \emph{known to the NPC} to be connected or points in the same area. A simple coarse-grained break-down of the game-world into large areas can ensure that the deliberation needed at the high level is very simple and does not require effort that is at all similar to the low-level pathfinding. Alternatively, a more sophisticated approach would be to model this representation based on the actual one used by a hierarchical pathfinding approach such as \cite{Botea04HPAstar}, so that the high-level personalized deliberation is also facilitated by the pathfinding module but using the NPC's version of the higher level of the map. The rest of the paper is organized as follows. We continue with the description of our proposed \emph{CogBot} architecture. We then introduce a motivating example that shows the need for personalized knowledge of the game-world for NPCs, and report on an implementation of CogBots in the popular Unity game engine.\footnote{\url{www.unity3d.com}} Then we continue with a discussion on the state-of-the-art for approaches that deal with AI for NPCs with respect to actions in the game-world, and finally, we conclude with our view on what are interesting future directions to investigate.
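To make the proposed separation concrete, the following sketch (all names are hypothetical; a real implementation would delegate each leg of the resulting route to the engine's low-level pathfinder) shows how each NPC could keep its own view of area connectivity and plan only over connections it believes to be passable:

```python
from collections import deque

class NPCAreaKnowledge:
    """Per-NPC view of the game-world's area connectivity.

    Each NPC stores which area-to-area connections it currently believes
    to be passable; the shared low-level pathfinder would only be consulted
    for legs that the NPC's own knowledge says are traversable.
    """
    def __init__(self, connections):
        # copy, so each NPC can diverge from the shared world description
        self.connections = {a: set(bs) for a, bs in connections.items()}

    def mark_blocked(self, a, b):
        # e.g. this NPC observed that the passage between a and b is blocked
        self.connections.get(a, set()).discard(b)
        self.connections.get(b, set()).discard(a)

    def area_route(self, start, goal):
        """BFS over the areas this NPC believes are connected."""
        frontier, parent = deque([start]), {start: None}
        while frontier:
            area = frontier.popleft()
            if area == goal:
                route = []
                while area is not None:
                    route.append(area)
                    area = parent[area]
                return route[::-1]
            for nxt in self.connections.get(area, ()):
                if nxt not in parent:
                    parent[nxt] = area
                    frontier.append(nxt)
        return None  # the NPC knows of no route
```

Two NPCs sharing the same game-world can then answer route queries differently: the one that observed a blocked connection removes the edge from its own knowledge only, without affecting the shared low-level pathfinder or the other NPCs.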
\section{The CogBot NPC AI architecture} The proposed architecture formalizes the behavior of NPCs as far as their actions in the game-world are concerned, in terms of four basic components as follows. \medskip \centerline{\includegraphics[width=0.95\linewidth]{img/GeneralArchitecture.png}} \smallskip \begin{itemize} \item The \emph{perception} component (\emph{CogBotPerception}) is responsible for identifying objects and features of the game-world in the field of view of the NPC, including conditions or events that occur, which can be useful for the deliberation component. \item The \emph{deliberation} component (\emph{CogBotDeliberation}) is responsible for deciding the immediate action that should be performed by the NPC by taking into account the input from the perception component as well as internal representations. This component may be used to abstract the logic or strategy that the NPC should follow which could be for instance expressed in terms of reactive or proactive behavior following any of the existing approaches. \item The \emph{control} component (\emph{CogBotControl}) is responsible for going over a loop that passes information between the perception and deliberation components, and handling the execution of actions as they are decided. In particular, the controller is agnostic of the way that perception, deliberation and action is implemented, but is responsible for coordinating the information between the components while handling exceptions, monitoring conditions and actions, and allocating resources to the deliberator accordingly. \item The \emph{action} component (\emph{CogBotAction}) is responsible for realizing the conceptual actions that are decided by the deliberator in the game-world and provide information about the state of action execution, e.g., success or failure. \end{itemize} Essentially, the control component acts as a mediator that distributes information between the other components.
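As an illustration only (our implementation discussed later is built in Unity; the Python below and all its method names are a sketch of ours, mirroring the callback functions described in the following subsections), the intended component interfaces could look as follows:

```python
from abc import ABC, abstractmethod

class CogBotDeliberation(ABC):
    """Decides the next immediate action from perceptions and internal state."""
    @abstractmethod
    def get_next_action(self): ...
    @abstractmethod
    def notify_object(self, obj): ...
    @abstractmethod
    def notify_event(self, event): ...

class CogBotAction(ABC):
    """Realizes a conceptual action in the game-world; reports success/failure."""
    @abstractmethod
    def execute(self, action): ...

class CogBotControl:
    """Mediator: routes perception input to the deliberator, actions to the actuator."""
    def __init__(self, deliberator, actuator):
        self.deliberator = deliberator
        self.actuator = actuator

    # callbacks invoked asynchronously by a perception component
    def on_object_entering_fov(self, obj):
        self.deliberator.notify_object(obj)

    def on_notify_event(self, event):
        self.deliberator.notify_event(event)

    # one iteration of the control loop
    def step(self):
        action = self.deliberator.get_next_action()
        return None if action is None else self.actuator.execute(action)
```

Any concrete deliberator (finite-state machine, behavior tree, planner) and any actuator can be plugged into the same controller, which stays agnostic to how either is implemented.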
Note that more than one instance of each component may be used in an NPC architecture. This particular specification of components should be seen as a means for structuring the various processes that need to operate and coordinate so that an NPC can perform challenging tasks in a game-world environment. One may think of cases where a single controller is used as a hub that manages many perception, deliberation, and action components, or other cases where networks of one instance of each of these four components are used to manage different aspects of the NPC behavior. \subsection{Perception} The perception component is the main information source for the NPC control component. In the typical case it is attached to a mesh object surrounding the NPC and it provides instant information about all the game objects that lie in the area of the mesh object, e.g., a sight cone positioned on the head of the NPC. Also, the perception component may monitor the field of view for conditions or events which are also propagated to the control component. The communication with the control component is asynchronous as the perception component pushes information to the control component by calling appropriate callback functions as follows. \begin{itemize} \item An ``Object Entering FoV'' and ``Object Leaving FoV'' callback function is called when an object enters or leaves the field of view of the NPC. \item A ``Notify Object Status'' callback function is called when the internal state of an object in the field of view is changed. \item A ``Notify Event'' callback function is called whenever an implemented monitoring condition check is triggered. \end{itemize} \medskip \centerline{\includegraphics[width=.95\linewidth]{img/PerceptionControl.png}} \medskip Moreover, the perception component provides a filtering mechanism based on the type of objects and conditions so that only the ones that the NPC registers for will be reported and others will be ignored.
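The registration and filtering mechanism just described could be realized as in the following minimal sketch (hypothetical names, not the actual implementation): only object types and event kinds that the NPC registered for are pushed to the controller.

```python
class FilteredPerception:
    """Pushes only registered object types and event kinds to the controller.

    The controller is any object exposing the callbacks named in the text,
    e.g. on_object_entering_fov and on_notify_event.
    """
    def __init__(self, controller, object_types=(), event_kinds=()):
        self.controller = controller
        self.object_types = set(object_types)   # types this NPC registered for
        self.event_kinds = set(event_kinds)     # monitored condition kinds

    def object_entered(self, obj_type, obj):
        # called by the engine when an object enters the perception mesh
        if obj_type in self.object_types:
            self.controller.on_object_entering_fov(obj)

    def event_triggered(self, kind, payload):
        # called by the engine when a monitored condition fires
        if kind in self.event_kinds:
            self.controller.on_notify_event((kind, payload))
```

Several such instances (e.g. a sight cone and a hearing sphere with different ranges and filters) can feed the same controller, matching the multi-instance setup described below.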
In general an NPC may have more than one instance of a perception component, each of which may have a different range and can be used to track different object types. A simple example is an NPC having one perception mesh for sight and one for hearing, which are both set up to communicate with the same control component. Finally, note that conditions that may be triggered in the environment are propagated as notifications of events. This type of information is also used in other parts of the communication between components, as it provides a simple uniform communication mechanism. \subsection{Deliberation} The deliberation component is the bridge between the low-level space of perception and action in the game-world and the high-level space of conceptual representation of the state of affairs as far as the NPC is concerned. The deliberation component exposes an interface for the control component to asynchronously invoke communication as follows. \begin{itemize} \item A ``Get Next Action'' function abstracts the decision by the deliberation component with respect to the next immediate action to be performed by the NPC. \item A ``Notify Object'' function notifies the deliberation component about relevant objects that become visible or have changed their state. \item A ``Notify Event'' function notifies the deliberation component about relevant conditions received by the perception component that may affect the internal representation of the state or knowledge of the NPC. \end{itemize} \medskip \centerline{\includegraphics[width=0.95\linewidth]{img/DeliberatorControl.png}} \medskip The deliberation component is responsible for modeling the decision making of the NPC with respect to action execution in the game-world, using the information that is provided by the control component and other internal representations that are appropriate for each approach or specific implementation.
In particular, one useful view is to think of the deliberation component as maintaining an internal model of the game-world that is initialized and updated by the low-level input coming from the perception component, and using this model along with other models of action-based behavior in order to specify the course of action of the NPC. The actual implementation or method for the model of the game-world and the model of desired behavior is not constrained in any way other than the type of input that is provided and the type of action descriptions that can be passed to the action component. Under this abstraction an NPC may, for example, just keep track of simple conditions in the game-world and a representation of its internal state in terms of health and inventory, and use a finite-state machine approach to specify the next immediate action to be performed at any time. Similarly, but following a totally different methodology, an NPC may keep track of a propositional logic literal-based model of the current state of the game-world, and use an automated planning decision-making process for deciding on the next immediate action to be performed as part of a longer plan that achieves a desired goal for the NPC. Observe that in the general case this level of abstraction allows the decision making of the NPC to possess information that is different from the true state of affairs in the game-world. This may be either because some change happened for which the NPC was not informed by the perception component, for example because the NPC was simply not able to perceive this information, or even because the perception component itself is implemented in such a way as to provide filtered or altered information, serving some aspect of the game design and game-play.
\subsection{Control} As we mentioned earlier, the control component acts as a mediator that distributes information between the other components, including notifications about the state of objects, notifications about conditions in the game-world, as well as feedback about action execution. In order to do so, it propagates a callback invocation from one component to the other (as, for example, in the case of the perception and deliberation components). The way in which this is performed depends on the particular case and implementation choices. For example, different types of information may be delivered to different instances of the same component, as we discussed earlier in this section. Also, the control component runs a loop that handles action execution. In its simplest form this could be just repeatedly calling the function of the deliberation component that reports the next action to be performed, and then passing this information on to the action component so that the action is actually executed in the game-world. The architecture does not limit or prescribe the way this loop should be implemented; indeed, the implementation may vary depending on the characteristics of the game. Nonetheless, the intention is that under this abstraction the control component can encourage more principled approaches for action execution monitoring, handling exceptions, and recovering from errors. \subsection{Action} The action component abstracts the actions of the NPC in the game-world, allowing the rest of the architecture to work at a symbolic level and the other components to be agnostic about the implementation details of each action. Note that the architecture view we adopt does not prescribe the level of detail that these actions should be abstracted to.
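The simplest form of the control loop described above can be sketched as follows, assuming stand-in deliberation and action components (all names are illustrative, not the CogBot code).

```python
def control_loop(deliberator, action_component, max_steps=100):
    """Repeatedly ask for the next action and hand it to the action
    component, watching the execution feedback."""
    executed = []
    for _ in range(max_steps):
        action = deliberator.get_next_action()
        if action is None:          # nothing left to do
            break
        outcome = action_component.invoke_action(action)
        executed.append((action, outcome))
        if outcome == "error":      # hook for exception handling and
            break                   # recovery strategies
    return executed


class ScriptedDeliberator:
    """Returns actions from a fixed plan, then None."""
    def __init__(self, plan):
        self.plan = list(plan)

    def get_next_action(self):
        return self.plan.pop(0) if self.plan else None


class DummyActions:
    def invoke_action(self, action):
        return "ok"


trace = control_loop(ScriptedDeliberator(["move", "pick-up"]), DummyActions())
```

More principled variants would insert monitoring between the two calls, e.g. re-querying the deliberator when the action component reports an error event.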
For some cases an action could be an atomic low-level task in the game-world, as for example the task of turning the face of the NPC toward some target, while in other cases a more high-level view would be appropriate, essentially structuring NPC action behavior in terms of strategies or macro-actions. In the latter case, the action component may be used, for example, to connect a conceptual high-level view with low-level implementations of behaviors, as would typically be done when a finite-state machine is used for reactive behavior. In terms of the communication of the action component with the control component, again a very simple interface is adopted for asynchronous interaction as follows. \begin{itemize} \item An ``Invoke Action'' function abstracts the action execution from the point of view of the architecture, initiating an implemented internal function that operates in the game-world. \item A ``Notify Event'' callback function is called to inform the control component about information related to action execution, such as that the action has finished with success or that some error occurred, etc. \end{itemize} \medskip \centerline{\includegraphics[width=0.95\linewidth]{img/ActionControl.png}} \medskip Just as the deliberation component is assumed to maintain an internal representation modeling the NPC's personalized view of the game-world, the action component needs a similar conceptual representation to capture the available actions and their characteristics. Since in this case the representation can be more straightforward, we also assume the following simple schema for registering actions in the architecture. A new action can be registered by means of calling an internal ``Register Action'' function which requires i) a string representing the action name (for instance, move-to or pick-up), and ii) a reference to a function that implements the action.
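The ``Register Action'' scheme can be sketched as a mapping from action names to implementing functions. The class name, return values, and parameter handling here are assumptions for illustration, not the actual implementation.

```python
class ActionComponent:
    """Maps symbolic action names to the functions implementing them."""
    def __init__(self):
        self._actions = {}

    def register_action(self, name, fn):
        # i) the action name, ii) a reference to the implementing function
        self._actions[name] = fn

    def invoke_action(self, name, *args):
        # Abstracts action execution; in a fuller sketch the outcome
        # would be reported back to control as a "Notify Event".
        if name not in self._actions:
            return "error: unknown action"
        self._actions[name](*args)
        return "success"


actions = ActionComponent()
actions.register_action("move-to", lambda target: None)
actions.register_action("pick-up", lambda item: None)
ok = actions.invoke_action("move-to", (3, 4))
bad = actions.invoke_action("fly")
```

Invoking a registered action succeeds, while an unregistered name yields an error outcome that control can react to.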
An appropriate account for additional parameters for actions is needed, but this is an implementation detail. We now proceed to discuss a concrete scenario that shows how the CogBot architecture, with an appropriate separation between the low-level state of affairs in the game-world and a personalized conceptual view of the game-world, can enable novel behaviors for NPCs. \section{A motivating example} As a motivating running example we will consider a simple case of a room-based game-world in which the human player and some NPCs can move between rooms, pick up and drop objects, and use some of them to block pathways or set traps. Suppose also that the goal of the game is to eventually get hold of a special item and deliver it at a designated spot. Now consider the scenario according to which the player decides to block a passage that works as a shortcut to some room, by putting some obstacle in the way after passing through. This means that the characters will have to go round using a different route in order to get close to the human player. This is an important piece of information that should greatly affect how the NPCs decide to move in the game-world. For example, at first, no NPC knows that the passage is blocked, so an NPC that wants to go to the room in question will take the normal (shortest) route that goes through the shortcut, see that it is blocked, and then go round using the alternative route. From then on, and until the NPC sees or assumes that the passage is cleared, this is what the NPC would be expected to do in order to act in a believable way, and this line of reasoning should be adopted for each one of the NPCs separately. How would this behavior be realized in a game though?
This simple scenario suggests that the pathfinding module of the game-engine should somehow keep track of which obstacles each NPC is aware of, and return a different \emph{personalized path} for each one that is consistent with their knowledge. To see why simpler ways to handle this would hurt the believability of NPCs, consider an NPC-agnostic pathfinding module that either i) ignores object obstacles or ii) always takes object obstacles into account. In both cases, NPCs are assumed to replan if the planned route turns out to be unrealizable. It is easy to see that both approaches are problematic. In the first case, an NPC may try to go through the shortcut more than once, exposing that they have no way of remembering that the shortcut is blocked. In the second case, an NPC that did not first try the shortcut will nonetheless immediately choose the alternative route even though they did not observe that the shortcut is blocked, probably ruining the player's plan to slow down other NPCs. Essentially, in order to maintain the believability of NPCs, each one should be performing pathfinding based on the information they possess, which needs to be updated accordingly from their observations. Note how this account of knowledge of topology sets the ground for other, more advanced features for NPCs, for example the capability of exchanging information: if the player sees NPCs gather together and talk, his little trick may no longer slow down the other NPCs. We now continue to discuss an application of the CogBot architecture in an implemented version of this scenario, which is able to handle the intended behavior for NPCs by making use of an appropriate separation of the actual state of the game-world and the personalized view of the NPCs.
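The personalized-path idea can be sketched over a toy room graph: each NPC plans against the obstacles it believes in, not against the ground truth, and updates its beliefs only upon observation. The room names and data layout are purely illustrative.

```python
from collections import deque

def find_route(graph, start, goal, believed_blocked):
    """Breadth-first search over passages the NPC believes to be open."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            edge = frozenset((path[-1], nxt))
            if nxt not in seen and edge not in believed_blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

rooms = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
shortcut = frozenset(("A", "C"))

# Before observing the obstacle the NPC still plans through the shortcut.
beliefs = set()
before = find_route(rooms, "A", "C", beliefs)

# After seeing the blocked passage, the belief is updated and the NPC
# takes the longer route -- and keeps doing so until it sees otherwise.
beliefs.add(shortcut)
after = find_route(rooms, "A", "C", beliefs)
```

With an empty belief set the NPC plans `["A", "C"]` through the shortcut; after the observation it plans `["A", "B", "C"]`, avoiding both failure modes discussed above.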
\section{CogBots in GridWorld} \emph{GridWorld} is a grid-based environment developed in the Unity game engine, modeled after the motivating example we introduced in the previous section. It was specifically designed to be used for performing automated tests with respect to NPC action-based behavior; therefore all of the components of the game-world are procedurally generated from an input text file that specifies the topology as well as the available objects and their state. For example, we can design a map consisting of rooms interconnected through doors and use it to observe the behavior of one (or more) NPCs navigating in the environment while the user opens or closes the doors. In particular, we will experiment with a prototype implementation of our proposed architecture. \subsection{GridWorld} The main component of GridWorld is the \emph{map} generated from an ASCII file that is taken as input. Each element in the grid, including wall sections, floor tiles, and doors, is a low-level game-object of Unity with a component containing some basic information about the object, such as its type and a flag that indicates whether the object is static or not. Non-static objects, that is, objects that may be in more than one state, have an additional component for handling the internal state representation and the way it changes by means of interactions in the game-world. A simple A* heuristic-search pathfinding procedure is implemented based on the grid representation in order to provide a low-level navigation system. Moreover, in the initialization phase a processing of the map takes place that decomposes the map into interconnected areas using the standard method of connected-component labeling \cite{Dillencourt92Component} from the area of image processing. The resulting high-level topology is represented as: \begin{itemize} \item A list of \emph{areas}. An area is a fully connected set of tiles in which the movement from any tile to any tile is guaranteed to succeed.
\item A list of \emph{way-points} between areas. In our case these way-points are explicitly represented by \emph{doors}. Each door connects two adjacent areas and has an internal state that can be open or closed. \item A list of \emph{points of interest} that can be used to model the tiles in the map that are potential target destinations for the NPC with respect to the scenario in question. \end{itemize} This type of information will be maintained by our prototype CogBot NPC and, as we will see shortly, combined with the low-level pathfinding procedure it can address the challenges raised by the motivating example we introduced earlier. The ground truth of the map data is stored in a global object at the root of the game scene graph. This object can be accessed by any other object in the game to ask for world information, such as the size of the map, the object type at some $(i,j)$ position, etc. \subsection{CogBots} A prototype NPC in GridWorld is implemented following the CogBot architecture described in the previous section. The CogBot NPC consists of a standard game character object with a mesh, colliders, and animations. In addition to these, the NPC has the following components: \begin{itemize} \item A \emph{conic collider} representing the field of view of the NPC. This collider is attached to the main body of the NPC, positioned in such a way as to simulate its sight cone. \item A \emph{CogBotPerception} component attached to the conic collider. The perception component is implemented so as to pass information about all visible objects in the field of view of the NPC. In this example, no conditions are raised as events by the perception component. \item A \emph{CogBotController} component that simply acts as a bridge for communication between perception, deliberation, and action. \item A \emph{PlayerAction} component. This component is a collection of actions of the human player/spectator in GridWorld that instruct the NPC to perform some activity.
In this simple example, the only actions in the PlayerAction component are those that invoke moving actions for the NPC in order to reach a target destination tile. \item A \emph{CogBotAction} component that implements the moving actions of the NPC. \item A \emph{ManualDeliberator} component. This is an implementation of the CogBotDeliberator interface that provides the immediate action to be performed by the NPC based on an internal representation and a model of activity that we will describe with an example next. \end{itemize} \subsection{A simple example} Consider the following setting for the map of GridWorld.\footnote{ Our prototype implementation of the GridWorld test-bed and the CogBots architecture are available at \url{https://github.com/THeK3nger/gridworld} and \url{https://github.com/THeK3nger/unity-cogbot}. The simple example reported here can be found in the ``KBExample'' folder in the GridWorld repository.} \medskip \centerline{\includegraphics[width=0.8\linewidth]{img/GridWorldShortcutMap.png}} \medskip After the decomposition of the map we have three areas: area 1 where the NPC is located, area 2 that is symmetrical to area 1 and connected to it through a door, and area 3, a little room that intervenes between areas 1 and 2 and connects to both with a door. In particular, we call $D_{1,2}$ the door connecting areas 1 and 2, and $D_{1,3}$, $D_{3,2}$ the doors that connect the little room depicted as area 3 with the other two areas. In this example the human player/observer can instruct the NPC to move to particular tiles by clicking on them. Suppose for example that the player asks the NPC to go from the initial position in area 1 into area 2, to a symmetrical position. The actual path to follow would be different depending on which doors are open allowing the NPC to go through.
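Which path is actually available depends on the NPC's beliefs about the doors. A minimal sketch of area-level planning over believed door states, with door and area names following the example map (the code itself is illustrative, not the ManualDeliberator implementation):

```python
from collections import deque

def area_plan(doors, start, goal, believed_open):
    """doors maps a door name to the pair of areas it connects.
    Returns a coarse plan of 'move to area' steps, or None when no
    plan is consistent with the NPC's knowledge."""
    adjacency = {}
    for door, (a, b) in doors.items():
        if door in believed_open:
            adjacency.setdefault(a, []).append(b)
            adjacency.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return ["move to area %s" % a for a in path[1:]]
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

doors = {"D12": (1, 2), "D13": (1, 3), "D32": (3, 2)}

# All doors believed closed: no plan, even if they are really open.
no_plan = area_plan(doors, 1, 2, believed_open=set())

# After observing that the door between areas 1 and 2 is open,
# the straightforward plan appears.
plan = area_plan(doors, 1, 2, believed_open={"D12"})
```

Each high-level ``move to area'' step would then be expanded by the low-level A* pathfinding procedure.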
Similarly, assuming an internal representation of the connectivity of areas and a personalized internal representation of the state of the doors, the NPC can \emph{deliberate at the level of areas} about the path to take. In other words, the NPC can first build a coarse-grained high-level plan of the form $\{$``move to area X'', ``move to area Y''$\}$, and then use the low-level pathfinding procedure to generate paths for each part of the plan. Apart from achieving a hierarchical form of pathfinding that could be beneficial for various reasons, this approach actually delivers the type of believability property that we discussed in the motivating example. At the beginning of the simulation the internal knowledge of the NPC assumes all doors to be closed. In this situation, instructing the NPC to move to area 2, the deliberation component returns no plan, since according to the NPC's knowledge it is impossible to get there while all doors are closed. If we open the doors between 1 and 2 by clicking on them, the deliberation component is still unable to find a plan, because the NPC was not able to get this information through the perception component and its field of view. Even though the pathfinding procedure would return a valid path, this is irrelevant to the personalized view of the NPC. Now assume that we move the NPC close to the doors, so that its internal representation is updated to reflect the real world state, and then take it back to the starting point. If we instruct the NPC to move to area 2, the deliberation component produces the straightforward plan, which is executed using the pathfinding procedure. Similarly, if we open the door $D_{1,3}$, the deliberation component would still return the same plan, since to find the shorter path through area 3 the NPC would need to see that the door is open. \section{Challenges and related work} There is a variety of work that aims for action-based AI for NPCs.
Traditionally, a combination of scripts and finite state machines is used in interactive games for controlling NPCs. These methods, even if fairly limited, allow the game designer to control every aspect of the NPCs' actions. This approach has been employed in different types of successful video games, such as the Role Playing Game (RPG) \textit{Neverwinter Nights} or the First Person Shooter (FPS) \textit{Unreal Tournament}. Scripts are written off-line in a high-level language and are used to define simple behaviors for NPCs. Procedural script generation has been proposed in \cite{Mcnaughton04scriptease:generative} using simple pattern templates which are tuned and combined by hand. Complex behaviors can be developed in a short amount of time, but many of the intricacies of the classical scripting approach are not solved, and it remains difficult to manage the NPCs as the complexity of the virtual world increases. State machines are still the preferred way to control NPCs in modern games, from FPSs like the \textit{Quake} series to RTSs like Blizzard's \textit{Warcraft III}. A problem with state machines is that they allow little reusability and must often be rebuilt for every different case \cite{Orkin2003}. Furthermore, the number of states grows exponentially if the behavior of the character becomes even slightly more sophisticated, which is problematic for the design, maintenance, and debugging of NPC behavior. In Bungie's \textit{Halo 2}, a form of hierarchical state machines or behavior trees is used \cite{Isla05BehaviorTrees}. In Monolith's FPS \textit{F.E.A.R.} and Epic's \textit{Unreal Tournament}, STRIPS and HTN planning have been used to define complex behaviors, such as NPCs able to coordinate as squads and perform advanced strategies such as sending for backup or flanking \cite{orkin06fear}. In Bethesda's \textit{Elder Scrolls IV: Oblivion}, NPCs are controlled by goals that involve scheduling and that are given for the NPCs to achieve.
This makes it possible to define behaviors and tasks that depend on preconditions and scheduled times. Nonetheless, there is much less effort on standardizing approaches for action-based AI in a framework that would allow better comparison or collaboration between the existing techniques. To that end, our proposed architecture allows one to: \begin{itemize} \item Try out different existing approaches and decision algorithms by abstracting them as different deliberation components that can be easily switched, allowing a comparative AI performance analysis. \item Build NPCs that are able to dynamically change the underlying AI method depending on the state of the game or the vicinity of the player. For example, a simple FSM can be used while an NPC is idle or far away from the player, switching to a BT or GOAP when it must defend a location or attack the player. \item Build NPCs that are able to use a combination of existing decision algorithms. For example, we could think of a system that combines BTs for low-level decisions and GOAP for high-level tactical reasoning. The high-level planner would then return a sequence of actions, each of which would invoke a corresponding BT. \item Develop NPCs with a \emph{personalized} conceptual representation of the game-world that enables rich and novel behaviors. The motivating example we examined is just a very simple case which, to the best of our knowledge, has not been handled by approaches in the literature. Moreover, this view can lead to more interesting cases involving communication between NPCs and the exchange of their personalized knowledge. \end{itemize} \section{Conclusions} In this paper we have introduced a robotics-inspired architecture for handling non-player character artificial intelligence for the purposes of specifying action in the game world.
We demonstrated that certain benefits can come out of a principled approach for decomposing the artificial intelligence effort into components, in particular with respect to a separation between the low-level ground truth of the state of the game-world and a personalized conceptual representation of the world for each NPC. Our proposed architecture provides modularity, allowing each of the four main components of the architecture, namely perception, deliberation, control, and action, to encapsulate a self-contained independent functionality through a clear interface. We expect that this way of developing characters can enable better code reusability and speed up prototyping, testing, and debugging. Moreover, our proposed architecture provides the ground for revisiting problems and techniques that have been extensively studied, such as pathfinding, and arriving at feasible methods for developing believable characters with their own view of the topology and connectivity of the game-world areas. \clearpage \bibliographystyle{aaai}
\section*{\refname}} \usepackage[utf8]{inputenc} \usepackage{mathtools} \usepackage[T1]{fontenc} \usepackage{mathptmx} \usepackage{graphicx} \usepackage{subcaption} \usepackage{amsmath} \usepackage{gensymb} \usepackage{amssymb} \usepackage{dcolumn} \usepackage{bm} \usepackage{xr} \externaldocument{Supplementary} \usepackage{cleveref} \usepackage{soul} \usepackage{color} \usepackage{ulem} \usepackage[usenames,dvipsnames]{xcolor} \newcommand{\hlred}[1]{{\sethlcolor{red}\hl{#1}}} \newcommand{\hlmag}[1]{{\sethlcolor{magenta}\hl{#1}}} \newcommand{\hlgreen}[1]{{\sethlcolor{green}\hl{#1}}} \newcommand{\hlblue}[1]{{\sethlcolor{cyan}\hl{#1}}} \newcommand{\stred}[1]{\textcolor{red}{\sout{#1}}} \begin{document} \title{A novel heterogeneous structure formed by a single multiblock copolymer chain} \author{Artem Petrov} \email{[email protected]} \affiliation{Faculty of Physics, Lomonosov Moscow State University, 119991 Moscow, Russia} \author{Alexey Gavrilov} \affiliation{Faculty of Physics, Lomonosov Moscow State University, 119991 Moscow, Russia} \author{Alexander Chertovich} \affiliation{Semenov Federal Research Center for Chemical Physics, 119991 Moscow, Russia} \affiliation{Faculty of Physics, Lomonosov Moscow State University, 119991 Moscow, Russia} \date{\today} \begin{abstract} We studied structures formed by a single $(AB)_k$ multiblock copolymer chain in which the interaction between A-type beads is purely repulsive and B-type beads tend to aggregate. We studied how attraction between A-type and B-type beads affects the structure of the chain. We discovered the formation of an equilibrium globular structure with a unique heterogeneous, checkerboard-like distribution of contact density.
Unlike the structures usually formed by a single $(AB)_k$ multiblock copolymer chain, this structure had contact enrichment at the boundaries of the A and B blocks. This structure was formed by a multiblock copolymer chain in which B-type beads could form at most two reversible bonds with either A-type or B-type beads, A-type beads could form at most one reversible bond with a B-type bead, and interactions between A-type beads were purely repulsive. Multiblock copolymer chains with this type of intrachain interactions can model the structure of chromatin in various organisms. \end{abstract} \maketitle \section{Introduction} Melts and blends of block copolymers are extensively studied polymer systems that are widely used in industry. The main difference between these systems and homopolymer melts is the phenomenon of microphase separation occurring in block copolymer systems. Phase diagrams of melts consisting of block copolymers with beads of two types (usually denoted as A and B) are well understood by theory \cite{leibler1980theory,semenov1985contribution,matsen1996unifying}, computer simulations \cite{gavrilov2013phase} and experiments \cite{khandpur1995polyisoprene}. However, the phase behavior of a single $(AB)_k$ multiblock copolymer chain placed in a selective solvent is less clear. A number of authors observed microphase separation in a globule formed by a single $(AB)_k$ multiblock copolymer chain in a poor solvent. If A-type and B-type beads tended to segregate, microphase separation occurred, leading to the formation of unusual structures due to the finite size of the globule \cite{theodorakis2011microphase, parsons2007single,theodorakis2011phase,ivanov1999computer}. A multiblock copolymer chain placed in a selective solvent, which is poor for B-type beads and good for A-type beads, was predicted to form either a swollen chain of molecular micelles or a single micelle, depending on the length of the chain and its composition \cite{halperin1991collapse}.
The results of computer simulations supported this prediction \cite{pham2010collapse,hugouvieux2009amphiphilic,lewandowski2008protein,rissanou2014collapse,ulianov2016active,woloszczuk2008alternating,wang2014coil}. The authors also observed the formation of layered and tubular structures \cite{hugouvieux2009amphiphilic}, as well as dynamic switching between a swollen chain of micelles and a single micelle \cite{rissanou2014collapse}. However, the aforementioned simulation works \cite{pham2010collapse,hugouvieux2009amphiphilic,lewandowski2008protein,rissanou2014collapse,ulianov2016active,woloszczuk2008alternating,wang2014coil} had several limitations. First, either the length of a block or the number of blocks in a multiblock copolymer chain was limited. The typical length of a block in the studied chains rarely exceeded 10 monomer units (beads) \cite{pham2010collapse,hugouvieux2009amphiphilic,lewandowski2008protein,woloszczuk2008alternating,wang2014coil}. The authors of ref. \cite{rissanou2014collapse} studied chains with long blocks, but there were only 5 blocks in a chain. These limitations were overcome in the work \cite{ulianov2016active}, in which the authors performed simulations of a multiblock copolymer chain consisting of many long blocks. However, the effects of the chain composition, the block length, and the strength of interactions on the chain conformation were not studied in that work. Second, the authors of refs. \cite{pham2010collapse,hugouvieux2009amphiphilic,lewandowski2008protein,rissanou2014collapse,ulianov2016active,woloszczuk2008alternating,wang2014coil,halperin1991collapse} studied the case of the so-called amphiphilic multiblock copolymers, in which attraction between A-type and B-type beads is either absent or significantly weaker than attraction between B-type beads. It is still unknown how the strength of attraction between A-type and B-type beads affects the structure of a single multiblock copolymer chain.
On the other hand, a single multiblock copolymer chain with different types of interactions between beads is a promising model of chromatin organization in various organisms \cite{ulianov2016active,barbieri2012complexity,jost2014modeling}. We have proposed to model interactions between nucleosomes as reversible bonds in our previous works \cite{ulianov2016active,petrov2020kinetic}. In this study, we introduce a modified version of this model. It is known that lysine 16 in the histone H4 ``tail'' can form a complex with the acidic region on the H2A-H2B histone dimer (the so-called ``acidic patch'') \cite{histoneinteractions,shahbazian2007functions}. We may treat this interaction as a reversible bond between two beads representing nucleosomes \cite{ulianov2016active,petrov2020kinetic}. Each nucleosome has two histone H4 tails and two acidic regions on the H2A-H2B histone dimer. It is also known that histone H4 tails are mostly acetylated in active chromatin \cite{shahbazian2007functions}. In addition, the energy gain of forming a complex with the acidic patch is much smaller for an acetylated histone H4 tail than for a non-acetylated one. Thus, the acidic patches of nucleosomes in actively transcribed regions will have a high affinity for the histone H4 tails in inactive chromatin. In some sense, nucleosomes from actively transcribed regions may act as a surfactant for inactive chromatin. We suggest a simple and robust treatment of these complex interactions between active (A) and inactive (B) regions of chromatin. An ``inactive'' nucleosome may form two reversible bonds with any nucleosome. However, an ``activated'' (via acetylation) nucleosome may form only one reversible bond, and only with an ``inactive'' nucleosome. In this work, we investigated the behavior of a single multiblock copolymer chain with various interactions between the A and B blocks. We assessed how attraction between A-type beads and B-type beads affected the structure of the chain.
If this interaction was purely repulsive, we observed the formation of a chain of intramolecular micelles. However, an unusual compact structure was formed by the chain that modeled interactions between nucleosomes by the formation of reversible bonds between A-type and B-type beads. This structure had enrichment of contact density at the boundaries of the A and B blocks, opposite to the situation observed in a chain of micelles, in which contact enrichment occurred inside the B blocks. We investigated the structure of this unusual state and described how the distribution of contact density depended on the composition of the chain. \section{Methods} We studied a single flexible $(AB)_k$ multiblock copolymer chain of length $N=10^4$ in an implicit solvent. The repeating unit of the copolymer consisted of one A and one B block (i.e. the copolymer sequence was $(A)_{n_A}(B)_{n_B}(A)_{n_A}(B)_{n_B}\dots$, where $n_A$ and $n_B$ were the lengths of the A and B blocks, respectively). The total length of such a repeating unit $(A)_{n_A}(B)_{n_B}$ was set to $n=n_A+n_B=400$. We characterized the copolymer composition by the fraction of B-type beads in the chain: $f = n_B/n$. We varied the value of $f$ from $f=0.2$ to $f=0.5$; the lengths of the B blocks therefore varied from 80 to 200. The LAMMPS package was used to perform Brownian dynamics simulations; the 12-6 Lennard-Jones (LJ) potential with the following parameters was applied: $\sigma=1.0$, $\epsilon=0.8$. To simulate purely repulsive interactions (i.e. good solvent conditions), we set the cutoff radius of the LJ potential to $R_\text{cut}=1.12$. To model poor solvent conditions, the cutoff radius was increased to $R_\text{cut}=2.00$, as, according to our preliminary simulations, such a value was sufficient to observe the coil-to-globule transition of a homopolymer chain of length $N=10^4$. In what follows, we denote the cutoff radius of the LJ potential between beads of types X and Y ($X=A,B$, $Y=A,B$) as $R_\text{cut}^{XY}$.
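The truncated LJ interaction above can be sketched in a few lines. This is a minimal illustration with the quoted parameters ($\sigma=1.0$, $\epsilon=0.8$), not the LAMMPS implementation; energy shifting at the cutoff is omitted.

```python
def lj(r, sigma=1.0, epsilon=0.8, r_cut=2.0):
    """Truncated 12-6 Lennard-Jones potential."""
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Purely repulsive (good-solvent) interactions use r_cut = 1.12, close
# to the potential minimum at r = 2^(1/6) * sigma, so the attractive
# part of the potential is cut off entirely.
beyond_cutoff = lj(2.5)          # 0.0: outside the cutoff
at_sigma = lj(1.0)               # 0.0: the potential crosses zero at r = sigma
at_minimum = lj(2 ** (1 / 6))    # negative: the attractive well
```

With $R_\text{cut}=2.00$ the well survives (poor solvent); with $R_\text{cut}=1.12$ the same function returns zero beyond the minimum, leaving only repulsion.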
Periodic boundary conditions were applied in our simulations, and the side of the cubic simulation box was equal to $350$. This choice ensured that a polymer chain of length $N=10^4$ does not interact with its periodic images in a good solvent. We used the harmonic potential $U=K(r-r_0)^2$ to simulate the bonded interactions, with the following parameters: $K=5.0$ and $r_0=0.5$. Under these conditions the chain was phantom, i.e. the bonds could easily cross each other. The same parameters were applied to simulate the backbone and the pairwise reversible bonds (see below). In order to simulate the presence of reversible (dynamic) bonds between nucleosomes, we used the standard stochastic procedure for the creation and removal of bonds implemented in LAMMPS. These bonds were created in addition to the existing bonds in the chain backbone. Such bonds were created and broken every $N_\text{stp}=200$ MD steps. The probability of bond formation was fixed and equal to $1$, and the probability of breaking a bond was equal to $0.1$. The distance within which reversible bonds could be formed was set to $R_\text{max}^\text{create}=1.30$. This value was chosen so that the average lengths of the forming and breaking bonds were approximately equal: $\langle r_\text{form}\rangle=1.1599$, $\langle r_\text{break}\rangle=1.1601$. In addition, according to our preliminary simulations, these parameters led to the collapse of a homopolymer chain placed in an athermal solvent if the monomer units could form at most two reversible bonds with each other. In order to characterize the conformation of a polymer chain under different conditions, we calculated the dependencies of the average spatial distance between two monomer units on the distance between them along the chain, $R(s)$. We also determined the dependencies of the contact probability of the monomer units on the distance along the chain, $P(s)$, and the contact maps.
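The valence constraints on reversible bonds used in this model (at most two bonds for a B-type bead, at most one for an A-type bead, and never between two A-type beads; see the Introduction) can be sketched as a simple admissibility check. The function and data layout are illustrative, not the actual LAMMPS bond-creation mechanism.

```python
# Maximum number of reversible bonds per bead type (an assumption of
# this sketch, matching the rules stated in the text).
MAX_BONDS = {"A": 1, "B": 2}

def bond_allowed(type_i, bonds_i, type_j, bonds_j):
    """Can beads i and j form a new reversible bond, given their types
    and their current numbers of reversible bonds?"""
    if type_i == "A" and type_j == "A":
        return False  # AA bonds never form
    return bonds_i < MAX_BONDS[type_i] and bonds_j < MAX_BONDS[type_j]
```

A few consequences of the rules: BB bonding is allowed until a B bead carries two bonds, AB bonding is allowed once per A bead, and AA bonding is always forbidden.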
In order to generate the initial chain conformations, we equilibrated a chain without reversible bonds in good solvent for $t_1=1.5\times 10^8$ MD steps. After this equilibration, we performed simulations for $t_2=1.5\times 10^8$ time steps under the desired system conditions, and the last $5\times 10^7$ steps of that run were used to average $R(s)$ and $P(s)$ (11 conformations were used for averaging in total). To average the contact maps, we also performed 10 additional runs starting from different initial chain conformations; the total number of chain conformations used for calculation of the contact maps was therefore $11\times 11=121$. To obtain the contact probabilities $P(s)$, we calculated $K(s)$, the total number of bead pairs separated by $s$ beads along the chain and located closer than $r_\text{c}=1.5$ to each other in space; the contact probability was then calculated as $P(s) = K(s)/(N-s)$. A similar methodology has been used previously to study the collapse of a homopolymer globule \cite{chertovich2014crumpled}. To analyze the heterogeneous distribution of contacts, we calculated the $P(s)$ dependencies separately for the contacts between A-type beads ($P_\text{AA}(s)$), between A-type and B-type beads ($P_\text{AB}(s)$), and between B-type beads ($P_\text{BB}(s)$). To obtain a $P_\text{XY}(s)$ dependency ($X=A,B$; $Y=A,B$), we calculated $K_\text{XY}(s)$, the analogue of $K(s)$ restricted to bead pairs of types X and Y, and then divided $K_\text{XY}(s)$ by the total number of bead pairs of types X and Y separated by $s$ beads along the chain. For brevity, in what follows we refer to contacts or interactions between beads of type X and beads of type Y as XY contacts or interactions, respectively. Contact maps are a visual tool showing how frequently different parts of the chain are in contact with each other: at position $(i,j)$, a contact map contains the contact probability of the $i$th and $j$th monomer units.
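The definition $P(s)=K(s)/(N-s)$ translates directly into code. A minimal $O(N^2)$ sketch (function names are ours; restricting the same loop to beads of types X and Y yields $P_\text{XY}(s)$):

```python
import numpy as np

R_C = 1.5  # spatial cutoff defining a contact, as in the text

def contact_probability(coords, r_c=R_C):
    """Return P(s) for s = 0 .. N-1.

    coords : (N, 3) array of bead positions.
    P(s) is the fraction of the N - s bead pairs at chain separation s
    whose spatial distance is below r_c, i.e. K(s) / (N - s).
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    p = np.zeros(n)
    for s in range(1, n):
        # distances between all pairs (i, i + s)
        d = np.linalg.norm(coords[s:] - coords[:-s], axis=1)
        p[s] = np.mean(d < r_c)   # = K(s) / (N - s)
    return p
```
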
Since the simulated chain was rather long, we coarsened the contact map for better visual appearance, so that the resulting contact maps had the size $0.1N \times 0.1N$. Further, we denote an element of the coarsened contact map by $p(i,j)$. In addition, we performed a ``sliding window'' averaging procedure to obtain the averaged probability of contact between beads lying within five repeating units. To calculate an element of the contact map after the ``sliding window'' averaging, $p_\text{sliding}(i,j)$ with $|i-j|<200$, we proceeded as follows: first, we summed the values $p(i+kn, j+kn)$ over all integers $k\in [1;\, 1 + N/n - 5]$; second, we obtained $p_\text{sliding}(i,j)$ by dividing this sum by $1 + N/n - 5$. \section{Results} We investigated the behavior of a multiblock copolymer chain with different interactions between beads. The structure of a multiblock copolymer chain whose beads interact only via the LJ potential is described in Section III.A. The behavior of a chain placed in athermal solvent (all LJ interactions purely repulsive) with reversible bonds forming between beads is described in Section III.B. \subsection{Volume Interactions} In this section, we describe the behavior of chains in which either only the BB interaction, or both the BB and AB interactions, are attractive; the AA interactions are purely repulsive. First, we studied a chain in which only the BB interaction was attractive and all other interactions were purely repulsive. To model such interactions, we set the cutoff radius of the LJ potential acting between B-type beads to $R_\text{cut}^{BB}=2.0$; the cutoff radii of the LJ potentials acting between the other bead pairs were set to $R_\text{cut}^{AB}=R_\text{cut}^{AA}=1.12$.
This case has been studied extensively by theory and computer simulations and is the simplest model of a multiblock copolymer chain in a highly selective solvent \cite{pham2010collapse,hugouvieux2009amphiphilic,lewandowski2008protein,rissanou2014collapse,ulianov2016active,woloszczuk2008alternating,wang2014coil,halperin1991collapse}. We observed formation of intramolecular micelles along the chain, as predicted by theory \cite{halperin1991collapse}. The $R(s)$ dependencies of such conformations had a characteristic step-like shape, consistent with the visual observation of micelle formation along the chain (Fig. \ref{rsvol}a). The $P(s)$ dependencies showed oscillations demonstrating local aggregation of blocks into micelles (Fig. \ref{psvol}a). It is worth mentioning that the contact probability was almost equal to zero beyond a certain large $s$ (Fig. \ref{psvol}a). This finding is not surprising, since the chain forms a self-avoiding walk of intramolecular micelles \cite{halperin1991collapse}. Halperin predicted how the average number of B-type beads in a micelle, $N_\text{Bmic}$, scales with the number of B-type beads in a block: $N_\text{Bmic}\propto N_B^{9/5}$ \cite{halperin1991collapse}. We observed excellent agreement of the simulation results with this prediction (Fig. \ref{blockvol}). \begin{figure}[htb] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{rsvol_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{rsvol_2.eps} \caption{} \end{subfigure} \caption{$R(s)$ dependencies in multiblock copolymer chains with different fractions of B-type beads $f$; attraction is realized via the LJ potential. Snapshots of structures formed by a chain with $f=0.5$ are shown in each panel. Thin lines represent B-type beads, spheres represent A-type beads. The beads are colored according to their position along the chain.
(a) Only the BB interaction is attractive ($R_\text{cut}^{BB}=2.0$), other interactions are purely repulsive. (b) AA interactions are purely repulsive, AB and BB interactions are attractive (i.e. $R_\text{cut}^{AB}=R_\text{cut}^{BB}=2.0$).} \label{rsvol} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{psvol_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{psvol_2.eps} \caption{} \end{subfigure} \caption{$P(s)$ dependencies in multiblock copolymer chains with different fractions of B-type beads $f$; attraction is realized via the LJ potential. (a) Only the BB interaction is attractive ($R_\text{cut}^{BB}=2.0$), other interactions are purely repulsive. (b) AA interactions are purely repulsive, AB and BB interactions are attractive (i.e. $R_\text{cut}^{AB}=R_\text{cut}^{BB}=2.0$).} \label{psvol} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth,keepaspectratio]{2.eps} \caption{The dependency of the average number of B-type beads in a micelle on the fraction of B-type beads in a chain $f$. Black dots: only the BB interaction is attractive ($R_\text{cut}^{BB}=2.0$), other interactions are purely repulsive. Red dots: AA interactions are purely repulsive, AB and BB interactions are attractive (i.e. $R_\text{cut}^{AB}=R_\text{cut}^{BB}=2.0$). Dashed and dotted lines represent the minimal and maximal possible values for the number of B-type beads in a micelle, respectively.} \label{blockvol} \end{figure} Second, we studied the structure of a multiblock copolymer chain with strong attraction between A-type and B-type beads ($R_\text{cut}^{AB}=R_\text{cut}^{BB}=2.0$); interactions between A-type beads were purely repulsive. We observed that the chain still resembled a chain of micelles (Fig.
\ref{rsvol}b, \ref{psvol}b), but the number of B-type beads comprising one micelle was larger than in the previous case (Fig. \ref{blockvol}). This indicates that a portion of the A-type beads acted as a ``glue'', sticking the B-type beads together. It is worth mentioning that $N_\text{Bmic}$ scaled with $f$ similarly to the system without AB attraction (Fig. \ref{blockvol}); therefore, switching on the AB attraction increased the number of B-type beads comprising one micelle by a constant factor independent of $f$. \subsection{Reversible Bonds} In this section, we describe the structure of a multiblock copolymer chain placed in athermal solvent (all volume interactions are purely repulsive, $R_\text{cut}^{AA} = R_\text{cut}^{AB}=R_\text{cut}^{BB}=1.12$), in which reversible bonds may form between beads. The average lifetime of a bond is $\tau=2\times 10^3$ MD steps, and the probability of bond formation is equal to unity. The first studied system was a multiblock copolymer chain in which B-type beads could form at most two reversible bonds, and only with each other; all other interactions were purely repulsive. In this case, the chain formed a string of micelles (Fig. \ref{rssat}a, \ref{pssat}a), qualitatively similar to the structures observed in the chain in which the attraction between B-type beads was realized via the LJ potential. The only difference we observed was a slight mixing of two micelles in the chain with $f=0.5$ (Fig. \ref{rssat}a). The average number of B-type beads in a micelle scaled approximately as predicted by theory \cite{halperin1991collapse} (Fig. \ref{blocksat}).
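Checking the Halperin prediction amounts to comparing the slope of $\ln N_\text{Bmic}$ versus $\ln N_B$ against $9/5$. A sketch of such a fit on synthetic data (the numbers below are generated exactly on the $9/5$ law for illustration; they are not our simulation results):

```python
import numpy as np

def fit_exponent(block_len, mic_size):
    """Least-squares slope of log(N_Bmic) versus log(N_B)."""
    slope, _intercept = np.polyfit(np.log(block_len), np.log(mic_size), 1)
    return slope

# Synthetic data lying exactly on N_Bmic ~ N_B^(9/5), for block
# lengths in the range studied here (80 to 200 beads).
n_b = np.array([80.0, 120.0, 160.0, 200.0])
n_mic = 0.5 * n_b ** (9.0 / 5.0)
```
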
\begin{figure}[htb] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{rssat_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{rssat_2.eps} \caption{} \end{subfigure} \caption{$R(s)$ dependencies in multiblock copolymer chains with different fractions of B-type beads $f$. Snapshots of structures formed by a chain with $f=0.5$ are shown in each panel. Thin lines represent B-type beads, spheres represent A-type beads. The beads are colored according to their position along the chain. (a) All interactions were purely repulsive with one exception: B-type beads could form at most two reversible bonds with each other. (b) Interactions between A-type beads were purely repulsive. A-type beads could form at most one bond with a B-type bead, B-type beads could form at most two bonds with either A-type or B-type beads.} \label{rssat} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{pssat_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{pssat_2.eps} \caption{} \end{subfigure} \caption{$P(s)$ dependencies in multiblock copolymer chains with different fractions of B-type beads $f$. (a) All interactions were purely repulsive with one exception: B-type beads could form at most two reversible bonds with each other. (b) Interactions between A-type beads were purely repulsive. A-type beads could form at most one bond with a B-type bead, B-type beads could form at most two bonds with either A-type or B-type beads.} \label{pssat} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth,keepaspectratio]{3.eps} \caption{The dependency of the average number of B-type beads in a micelle on the fraction of B-type beads in a chain $f$.
Green dots: all interactions were purely repulsive with one exception: B-type beads could form at most two reversible bonds with each other. Blue dots: interactions between A-type beads were purely repulsive; A-type beads could form at most one reversible bond with a B-type bead, and B-type beads could form at most two reversible bonds with either A-type or B-type beads. Dashed and dotted lines represent the minimal and maximal possible values for the number of B-type beads in a micelle, respectively.} \label{blocksat} \end{figure} The second studied case was a multiblock copolymer chain in which an A-type bead could form at most one reversible bond with a B-type bead, and a B-type bead could form at most two bonds with either A-type or B-type beads. Surprisingly, the chain behaved qualitatively differently from the previous cases. First of all, the chain did not form a string of intramolecular micelles with a well-defined core constituted by the B-type beads and a corona consisting of A-type beads. Instead, we observed formation of a compact structure, since the $R(s)$ dependencies reached a plateau at large $s$ (Fig. \ref{rssat}b). In addition, the B-type beads formed a single cluster for all values of $f$ (Fig. \ref{blocksat}). We did not study the case $f=0.2$, since the insufficient number of B-type beads hampered formation of a unified collapsed structure and led to strong fluctuations of the structure. To study the collapsed structure in more detail, we built the $P(s)$ dependencies (Fig. \ref{pssat}b). The contact probability did not vanish at any $s$ (Fig. \ref{pssat}b), suggesting a globular structure of the chain, in accord with Fig. \ref{rssat}b. Notably, we observed periodic behavior of the dependency for all values of $f$; therefore, the globular structure formed in this case had an internally heterogeneous distribution of contact density. We also built contact maps for the chains with $f=0.3$ and $f=0.5$ (Fig. \ref{pssat_2}a, \ref{pssat_3}a).
The contact maps exhibited a unique checkerboard-like structure, demonstrating enrichment of contacts between A-type and B-type beads (yellow ``stripes'') and depletion of contacts between blocks containing beads of the same type (black ``holes''). Contact maps after the ``sliding window'' averaging procedure demonstrated this picture even more clearly (Fig. \ref{pssat_2}b, \ref{pssat_3}b). We also analyzed the dependencies of the contact probability between beads of specific types, $P_\text{XY}(s)$ ($X=A,B$; $Y=A,B$), as described in Methods (Fig. \ref{pssat_2}c, \ref{pssat_3}c). These dependencies demonstrated that AB contacts occurred much more frequently than AA or BB contacts. Moreover, the $P(s)$ dependencies had local minima at $s\approx q\times n$, where $q$ is an integer (Fig. \ref{pssat_2}c, \ref{pssat_3}c). Our data thus suggest that contact enrichment occurs at the boundaries of the A and B blocks. We also studied how this unusual structure depends on the fraction of B-type beads in the chain, $f$. In the chain with $f=0.5$, the checkerboard-like pattern contained three types of ``squares'' on the contact map (Fig. \ref{pssat_3}b): AB contacts occurred with the highest probability (green squares), BB contacts had an intermediate frequency of occurrence (cyan squares), and AA contacts had the lowest probability of occurrence (squares containing blue dots, Fig. \ref{pssat_3}b). We did not observe such a hierarchy of contact frequencies in the chain with $f=0.3$, in which the probabilities of AA and BB contacts were almost equal on large scales (Fig. \ref{pssat_2}c). These data suggest that the distribution of contacts within the structure can be governed by altering the chain composition. \section{Discussion} In this work, we have studied the structure of a single multiblock copolymer chain. We have shown that the behavior of the chain becomes very nontrivial if B-type beads can form reversible bonds not only with B-type beads, but also with A-type beads.
Such a chain can also be treated as a model of chromatin organization, since the reversible bonds can model interactions between nucleosomes in hetero- and euchromatin. Our data demonstrate that the structure formed by such a multiblock copolymer chain has a unique internal distribution of contacts that depends on the chain composition: contact enrichment occurs at the boundaries of the blocks, while contact density is depleted within the A and B blocks. Further research is needed to elucidate the physical mechanisms of formation of such structures. \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{contactmap_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{contactmap_1_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{pssat_3.eps} \caption{} \end{subfigure} \caption{Coarsened contact map (a), contact map after ``sliding window'' averaging (b), and the dependencies of contact probability on the distance along the chain (c) for a multiblock copolymer chain with reversible bonds, $f=0.3$. Interactions between A-type beads are purely repulsive. A-type beads could form at most one bond with a B-type bead, B-type beads could form at most two bonds with either A-type or B-type beads.
The data were averaged over 11 initial conformations.} \label{pssat_2} \end{figure} \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{contactmap_2.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{contactmap_2_1.eps} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{pssat_4.eps} \caption{} \end{subfigure} \caption{Coarsened contact map (a), contact map after ``sliding window'' averaging (b), and the dependencies of contact probability on the distance along the chain (c) for a multiblock copolymer chain with reversible bonds, $f=0.5$. Interactions between A-type beads are purely repulsive. A-type beads could form at most one bond with a B-type bead, B-type beads could form at most two bonds with either A-type or B-type beads. The data were averaged over 11 initial conformations.} \label{pssat_3} \end{figure} \section*{Acknowledgements} We thank Pavel Kos for fruitful discussions and comments. The research was carried out using the equipment of the shared research facilities of HPC computing resources at Lomonosov Moscow State University. The reported study was funded by RFBR, research project \# 18-29-13041.
\chapter{Introduction} \REF\lovelace{C. Lovelace, ``Stability of String Vacua. 1. A New Picture of the Renormalization Group,'' Nucl. Phys. {\bf B273} (1986) 413.} \REF\banks{T. Banks and E. Martinec, ``The Renormalization Group And String Field Theory,'' Nucl. Phys. {\bf B294} (1987) 733.} \REF\hughes{J. Hughes, J. Liu, and J. Polchinski, ``Virasoro-Shapiro {}From Wilson,'' Nucl. Phys. {\bf B316} (1989) 15.} \REF\periwal{V. Periwal, ``The Renormalization Group, Flows Of Two Dimensional Field Theory, and Connes' Geometry,'' Comm. Math. Phys. {\bf 120} (1988) 71.} \REF\ubanks{T. Banks, ``The Tachyon Potential In String Theory,'' Nucl. Phys. {\bf B361} (1991) 166.} \REF\bru{R. Brustein and S. de Alwis, ``Renormalization Group Equation And Non-Perturbative Effects in String Field Theory,'' Nucl. Phys. {\bf B352} (1991) 451.} \REF\rol{R. Brustein and K. Roland, ``Space-Time Versus World-Sheet Renormalization Group Equation In String Theory,'' Nucl. Phys. {\bf B372} (1992) 201.} Though gauge invariant open-string and closed-string field theories are now known, the problem of background dependence of string field theory has not been successfully addressed. This problem is fundamental because it is here that one really has to address the question of what kind of geometrical object the string represents. The world-sheet or $\sigma$-model formulation of string theory is the one known formulation in which anything can be done in a manifestly background independent way. It has therefore been widely suspected that somehow one should do string field theory in the ``space of all two-dimensional field theories,'' by finding an appropriate gauge invariant Lagrangian on that space. The tangent space to the ``space of all two-dimensional field theories'' should be the space of all local operators, including operators of very high dimension, time-dependent operators of negative dimension, and operators containing ghost fields. 
This approach, which has been pursued in [\lovelace-\rol], has two glaring difficulties: (1) because of the ultraviolet difficulties of quantum field theory, it is hard to define a ``space of all two-dimensional field theories'' with the desired tangent space (this is why the sigma model approach to string theory is limited in practice to a long wavelength expansion); (2) one has not known what properties such a space should have to enable the definition of a gauge invariant Lagrangian. In the present paper, I will propose a solution to the second problem, for the case of open (bosonic) strings, leaving the first problem to the future. Considering open strings means that we consider world-sheet actions of the form $I=I_0+I'$, where $I_0$ is a fixed bulk action (corresponding to a choice of closed string background) and $I'$ is a boundary term representing the open strings. For instance, the standard closed-string background is $$I_0=\int_\Sigma {\rm d}^2x \sqrt h\left({1\over 8\pi}h^{ij}\partial_iX^\mu\partial_j X_\mu +b^{ij}D_ic_j\right). \eqn\abo$$ Here $\Sigma$ is the world-sheet with metric $h$ with coordinates $x^k$, and $c_i$ and $b_{jk}$ are the usual ghost and antighost fields. This theory has the usual conserved BRST current $J^i$. The corresponding BRST charge $Q=\oint {\rm d} \sigma J^0$ ($\sigma$ is an angular parameter on a closed-string and ``0'' is the normal direction) obeys the usual relations, $$Q^2=0 ~ {\rm and}~ T_{ij}=\{Q,b_{ij}\},\eqn\aabo$$ with $T_{ij}$ being here the stress tensor. We then take $I'$ to be an arbitrary boundary interaction, $$I'=\int_{\partial\Sigma}{\rm d} {\sigma} ~~{\cal V}, \eqn\bbo$$ where ${\cal V}$ is an arbitrary local operator constructed from $X,b,c$; in this paper we consider two ${\cal V}$'s equivalent if they differ by a total derivative. A two dimensional theory with action $I=I_0+I'$, with $I_0$ defined as above and $I'$ allowed to vary, will be called an open-string world-sheet field theory. 
Our goal will be to define a gauge invariant Lagrangian on the space of all such open-string world-sheet theories (or actually a space introduced later with some additional degrees of freedom). \REF\bv{I. A. Batalin and G. A. Vilkovisky, ``Quantization Of Gauge Theories With Linearly Dependent Generators,'' Phys. Rev. {\bf D28} (1983) 2567, ``Existence Theorem For Gauge Algebras,'' J. Math. Phys. {\bf 26} (1985) 172.} \REF\stash{J. Fisch, M. Henneaux, J. Stasheff, and C. Teitelboim, ``Existence, Uniqueness, and Cohomology of the Classical BRST Charge With Ghosts for Ghosts,'' Comm. Math. Phys. {\bf 120} (1989) 379; M. Henneaux and C. Teitelboim, Comm. Math. Phys. {\bf 115} (1988) 213; J. Stasheff, Bull. Amer. Math. Soc. {\bf 19} (1988) 287.} \REF\henn{M. Henneaux, ``Lectures On The Antifield-BRST Formalism For Gauge Theories,'' proceedings of the XX GIFT meeting; M. Henneaux and C. Teitelboim, {\it Quantization Of Gauge Systems}, to be published by Princeton University Press.} \REF\ew{E. Witten, ``A Note On The Antibracket Formalism,'' Mod. Phys. Lett. A {\bf 5} (1990) 487.} \REF\schw{A. Schwarz, UC Davis preprint (1992).} \REF\siegel{W. Siegel, ``Covariantly Second-Quantized Strings, II, III,'' Phys. Lett. {\bf 151B} (1985) 391,396.} \REF\zwiebach{B. Zwiebach, ``Closed String Field Theory: Quantum Action And The B-V Master Equation,'' IASSNS-HEP-92/41.} \REF\thorn{C. Thorn, ``Perturbation Theory For Quantized String Fields,'' Nucl. Phys. {\bf B287} (1987) 61.} \REF\bocc{M. Bochicchio, ``Gauge Fixing For The Field Theory Of The Bosonic String,'' Phys. Lett. {\bf B193} (1987) 31.} \REF\wz{E. Witten and B. Zwiebach, ``Algebraic Structures And Differential Geometry In 2D String Theory,'' IASSNS-HEP-92/4.} \REF\everlinde{E. Verlinde, ``The Master Equation Of 2D String Theory,'' IASSNS-HEP-92/5.} This will be easier than it may sound because the Batalin-Vilkovisky formalism [\bv--\schw] will do much of the work for us. 
The use of this formalism was suggested by its role in constructing and understanding classical and quantum closed-string field theory [\zwiebach], its elegant use in quantizing open-string field theory [\thorn,\bocc], and its role in string theory Ward identities [\wz,\everlinde]. In particular, while the BV formalism was first invented for quantizing gauge invariant classical field theories that are already known, it was used in closed-string field theory [\zwiebach] as an aid in finding the unknown theory; that is how we will use it here. The BV formalism also has an interesting analogy with the renormalization group [\bru]. Here is a brief sketch of the relevant aspects of the BV formalism. (For more information see [\henn].) One starts with a super-manifold ${\cal M}$ with a $U(1)$ symmetry that we will call ghost number, generated by a vector field $U$. The essential structure on ${\cal M}$ is a non-degenerate fermionic two-form $\omega$ of $U=-1$ which is closed, ${\rm d}\omega=0$. One can think of $\omega$ as a fermionic symplectic form. As in the usual bosonic case, such an $\omega$ has no local invariants; $\omega$ can locally be put in the standard form $\omega=\sum_a {\rm d} \theta_a {\rm d} q^a$ with $q^a$ and $\theta_a$ bosonic and fermionic, respectively. Just as in the usual case, one can define Poisson brackets $$\{A,B\}={\partial_rA\over \partial u^K}\omega^{KL}{\partial_lB\over \partial u^L} \eqn\hbo$$ with $\omega^{KL}$ the inverse matrix to $\omega_{KL}$ and $u^I$ local coordinates on ${\cal M}$. (The subscripts $r$ and $l$ refer to derivatives from the right or left.) These Poisson brackets, which are the BV antibrackets, obey a graded Jacobi identity. (At the cost of some imprecision, I will sometimes refer to $\omega$ rather than the Poisson brackets derived from it as the antibracket.) The BV master equation is $$\{S,S\}=0 \eqn\nbo$$ (which would be vacuous if $\omega$ were bosonic). 
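For orientation, it may help to write the antibracket and the master equation in local Darboux coordinates. This is a standard restatement (with sign conventions as in [\henn]), not a new result: with $u^I=(q^a,\theta_a)$ and $\omega=\sum_a{\rm d}\theta_a\,{\rm d} q^a$, the bracket \hbo\ takes the familiar Batalin-Vilkovisky form

```latex
\{A,B\}={\partial_r A\over\partial q^a}\,{\partial_l B\over\partial\theta_a}
       -{\partial_r A\over\partial\theta_a}\,{\partial_l B\over\partial q^a},
\qquad
\{S,S\}=2\,{\partial_r S\over\partial q^a}\,{\partial_l S\over\partial\theta_a},
```

so the master equation \nbo\ couples the dependence of $S$ on the coordinates $q^a$ to its dependence on the conjugate ``antifield'' variables $\theta_a$.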
An action function $S$ obeying the master equation is automatically gauge invariant, with the gauge transformation law $$\delta u^I=\left(\omega^{IJ}{\partial^2 S\over \partial u^J\partial u^K} +{1\over 2}{\partial \omega^{IJ}\over\partial u^K}{\partial S\over\partial u^J}\right)\epsilon^K \eqn\pbo$$ with arbitrary infinitesimal parameters $\epsilon^K$. It is straightforward to see that $\delta S=\epsilon^K\partial_K\{S,S\}/2 =0$. (The gauge transformations \pbo\ will only close -- and are only well-defined, independent of the choice of coordinates $u^I$ -- modulo ``trivial'' gauge transformations that vanish on shell. These are of the form $\delta u^I=\lambda^{IJ}\partial S/\partial u^J$, with $\lambda^{IJ}=-\lambda^{JI}$.) Let ${\cal N}$ be the subspace of ${\cal M}$ on which $U=0$. We define the ``classical action'' $S_0$ to be the restriction of $S$ to ${\cal N}$. The classical action has a gauge invariance given, again, by \pbo, with the $\epsilon^K$ restricted to have $U=-1$. In usual applications of the BV formalism to gauge fixing, ${\cal N}$ and $S_0$ are given, and the first step is the construction of ${\cal M}$ and $S$ (the latter is required to obey a certain cohomological condition as well as the master equation). A general theorem shows that suitable ${\cal M}$ and ${S}$ exist, but their actual construction is usually rather painful. The insight of Thorn and Bochicchio [\thorn,\bocc] (extending earlier ideas, beginning with Siegel [\siegel], on the role of the ghosts in string theory) was that in string theory ${\cal M}$ and ${S}$ are related to ${\cal N}$ and $S_0$ just by relaxing the condition on the ghost number of the fields. Anticipating this structure was a help in developing closed-string field theory, as explained in [\zwiebach], and will be essential here. If $S$ is any function, not necessarily obeying the master equation, one can define a vector field $V$ by $$ V^K\omega_{KL} ={\partial_l S\over\partial u^L}.
\eqn\ibo$$ If $S$ has $U=0$, then $V$ has $U=1$. If we take $S$ as an action functional, then the Euler-Lagrange equations $0={\rm d} S$ are equivalent to $V^I=0$. As we will see later, the master equation implies that $V^2=0$ or in components $$V^K{\partial \over\partial u^K}V^I = 0. \eqn\kubo$$ If we let $i_V$ be the operation of contraction with $V$, then the definition \ibo\ of $V$ can be written as $$ i_V\omega ={\rm d} S. \eqn\jbo$$ Under an infinitesimal diffeomorphism $u^I\rightarrow u^I+\epsilon V^I$ of ${\cal M}$, a two-form $\omega$ transforms as $\omega\rightarrow \omega +\epsilon (i_V{\rm d}+{\rm d} i_V) \omega$. $V$ therefore generates a symmetry of $\omega$ precisely if $$\left({\rm d} i_V+i_V{\rm d}\right) \omega = 0 . \eqn\jjbo$$ As ${\rm d}\omega=0$, this reduces to $${\rm d}(i_V\omega) = 0, \eqn\nnbo$$ and so is a consequence of \jbo. Therefore any vector field derived as in \ibo\ from a function $S$ generates a symmetry of $\omega$. Conversely, if $V$ is any symmetry of $\omega$, that is any vector field obeying \nnbo, then a function $S$ obeying \ibo\ always exists at least locally (and is unique up to an overall additive constant). Possible failure of global existence of $S$ would be analogous to the multi-valuedness of the Wess-Zumino and Chern-Simons functionals in field theory. Since topological questions analogous to this multi-valuedness would be out of reach at present in string theory, we will in this paper content ourselves with local construction of $S$. Suppose that one is given a vector field $V$ that generates a symmetry of $\omega$ and also obeys $V^2= 0$. One might wonder if it then follows that the associated function $S$ obeys the master equation. This is not quite true, but almost.
The actual situation is that because of the Jacobi identity of the antibracket, the map \jbo\ from functions to vector fields is a homomorphism of Lie algebras; consequently, $V^2$ is the vector field derived from the function $\{S,S\}/2$, and vanishes precisely if $\{S,S\}$ is constant. To verify this, one can begin by writing the equation $V^2=0$ in the form $$ \left[{\rm d} i_V+i_V{\rm d},i_V\right] = 0 . \eqn\lbo$$ \jjbo\ then implies that $$\left({\rm d} i_V+i_V{\rm d}\right)i_V\omega = 0. \eqn\mbo$$ Using \nnbo, we get $$ {\rm d}\left(i_Vi_V\omega\right) = 0. \eqn\inbo$$ This is equivalent to $${\rm d}\{S,S\}=0, \eqn\obo$$ so that $\{S,S\}$ is a constant, perhaps not zero. Since this argument can also be read backwards, we have verified that $V^2=0$ if and only if $\{S,S\}$ is constant. Looking back at the proof of gauge invariance, we see that the master equation is stronger than necessary. A function $S$ obeying \obo\ is automatically gauge invariant, with gauge invariance \pbo. The generalization of permitting $\{S,S\}$ to be a non-zero constant is not very interesting in practice, for the following reason. If we take $S$ to be an action, then the corresponding Euler-Lagrange equations are $V=0$. If these equations have at least one solution, then by evaluating the constant $\{S,S\}$ at the zero of $V$, one finds that in fact $\{S,S\}=0$. Therefore, $\{S,S\}$ can be a non-zero constant only if the classical equations of motion are inconsistent. I can now explain the strategy for constructing a gauge invariant open-string Lagrangian. There are two steps. (1) On the space of all open-string world-sheet theories, we will find a fermionic vector field $V$, of ghost number 1, obeying $V^2=0$. (2) Then we will find, on the same space, a $V$-invariant antibracket, that is, a $V$-invariant fermionic symplectic form $\omega$ of ghost number $-1$. 
The Lagrangian $S$ is then determined (up to an additive constant) from ${\rm d} S=i_V\omega$; it is gauge invariant for reasons explained above. \foot{On the basis of what happens in field theory, I expect that when space-time is not compact, the formula ${\rm d} S=i_V\omega$ is valid only for variations of the fields of compact support; otherwise there are additional surface terms in the variation of $S$. Of course, a formula for the change of $S$ in variations of compact support suffices, together with locality, to determine $S$ up to an additive constant.} Of these two steps, the definition of $V$ is straightforward, as we will see. The definition of $\omega$ is less straightforward, and a proper understanding would depend on really understanding what is ``the space of all open-string world-sheet theories.'' I will give only a preliminary, formal definition of $\omega$. At least the discussion should serve to make clear what structures one should want ``the space of all two dimensional field theories'' to have. \chapter{Definition Of $V$} In this paper, our open-string quantum field theories will be formulated on a disc $\Sigma$. As one might expect, this is the relevant case in describing the classical Lagrangian. The open-string quantum field theories will be required to be invariant under rigid rotations of the disc, but are not required to have any other symmetries such as conformal invariance. That being so, $\Sigma$ must be endowed with a metric (not just a conformal structure). Since rotation invariance will eventually be important, we consider a rotationally invariant metric on $\Sigma$, say $$ {\rm d} s^2={\rm d} r^2+f(r){\rm d}\theta^2, ~~~~~0\leq r\leq 1,~~0\leq \theta\leq 2\pi. \eqn\mumo$$ The choice of $f$ does not matter; a change in $f$ would just induce a reparametrization of the space of possible boundary interactions. In any event, the metric on $\Sigma$ can be held fixed throughout this paper. 
As explained in the introduction, by an open-string world-sheet field theory we mean a two dimensional theory with action $I=I_0+I'$, where $I_0$ is the fixed bulk action \abo, and $I'$ is a boundary interaction that does not necessarily conserve the ghost number. Our first goal in the present section is to describe an anticommuting vector field, of ghost number one, on the space of such theories. (Later, in defining $\omega$, we will add new degrees of freedom to the open-string field theories. The construction of $V$ is sufficiently natural that it will automatically carry over to the new case.) One way to explain the definition of $V$ is as follows. An open-string field theory can be described by giving all possible correlation functions of local operators in the {\it interior} of the disc. Thus, the correlation functions we consider are $$\langle\prod_{i=1}^n{\cal O}_i(P_i)\rangle \eqn\dbo$$ with arbitrary local operators ${\cal O}_i$ inserted at arbitrary points $P_i$ in the interior of $\Sigma$. The correlation functions \dbo\ obey Ward identities. Since we choose the $P_i$ to be {\it interior} points, the Ward identities are entirely determined by the bulk action $I_0$ of equation \abo\ and are independent of the choice of boundary contribution in the action. The boundary interactions determine not the structure of the Ward identities but the choice of a specific solution of them. It is reasonable to expect that the space of all solutions of the Ward identities, for all correlation functions in the interior of $\Sigma$, can be identified with the space of possible boundary interactions, since, roughly speaking, the boundary interaction determines how a left-moving wave incident on the boundary is scattered and returns as a right-moving wave. We will use this identification of the space of solutions of the Ward identities with the space of open-string theories to define a vector field on the space of theories.
We also will give an alternative definition that does not use this identification. If one is given one solution of the Ward identities, corresponding to one boundary interaction, then another solution of the Ward identities can be found by conjugating by any symmetry of the interior action $I_0$. An important symmetry is the one generated by the BRST charge $Q$. Conjugating by $Q$ is particularly simple since $Q^2=0$. If $\epsilon$ is an anticommuting $c$-number, we can form a one-parameter family of solutions of the Ward identities with $$\langle\prod_{i=1}^n{\cal O}_i(P_i)\rangle_\epsilon =\langle\prod_{i=1}^n\left({\cal O}_i(P_i)-i\epsilon\{Q,{\cal O}_i(P_i)\} \right) \rangle. \eqn\ebo$$ At the tangent space level, this group action on the space of theories is generated by a vector field $V$, which is anticommuting and has ghost number 1, since those are the quantum numbers of $Q$, and obeys $V^2=0$ (or $\{V,V\}=0$) since $Q^2=0$. Here is an alternative description of $V$. Let $J^i$ be the conserved BRST current. Let $j=\epsilon_{ij}J^i {\rm d} x^j$ be the corresponding closed one-form. Let $C_\alpha$ be a circle that winds once around all of the $P_i$; for instance, $C_\alpha$ may be a circle a distance $\alpha$ from the boundary of $\Sigma$, for small $\alpha$. Since $j$ is closed, the contour integral $\oint_{C_\alpha}j$ is invariant under homotopically trivial displacements of $C_\alpha$. The term in \ebo\ proportional to $\epsilon$ is just $$\langle \oint_{C_\alpha}j\cdot \prod_i{\cal O}_i(P_i)\rangle,\eqn\fbo$$ as one sees upon shrinking the contour $C_\alpha$ to pick up terms of the form $\{Q,{\cal O}_i\}$. On the other hand, we can evaluate \fbo\ by taking the limit as $\alpha\rightarrow 0$, so that $C_\alpha$ approaches the boundary of the disc. In this limit, $\oint_{C_\alpha} j$ approaches $\int_{\partial \Sigma}{\cal V}$ for some local operator ${\cal V}$ defined on the boundary.
There is no general formula for ${\cal V}$; its determination depends on the behavior of local operators (in this case the BRST current) near the boundary of $\Sigma$, and so on the details of the boundary interaction in the open-string field theory. But in general, we can interpret $\oint_{\partial\Sigma}{\cal V}$ as a correction to the boundary interaction of the theory, and as such it defines a tangent vector field to the space of all open-string field theories. This is an alternative description of the vector field $V$ defined in the previous paragraph. The correction $\oint_{\partial\Sigma}{\cal V}$ to the boundary Lagrangian resulted from a BRST transformation of that Lagrangian. Therefore, ${\cal V}$ vanishes when, and only when, the boundary interactions are BRST invariant. The BRST invariant world-sheet open-string theories are therefore precisely the zeros of $V$. In other words, the equations $$V^I=0 \eqn\micoco$$ are the equations of world-sheet BRST invariance. These equations are certainly background independent in the relevant sense; no {\it a priori} choice of an open-string background entered in the construction. As explained in the introduction, a gauge invariant Lagrangian with $V^I=0$ as the equations of motion can be constructed provided we can find a $V$-invariant antibracket on the space of open-string field theories. Before undertaking this task, let us make a few remarks about the relation of the vector field $V$ to BRST invariance. At a point at which a vector field does not vanish, there is no invariant way (lacking an affine connection) to differentiate it. However, at a zero of a vector field, that vector field has a well-defined derivative which is a linear transformation of the tangent space. For instance, if $V$ has a zero at -- say -- $u^K=0$, then we can expand $V^K=\sum_Lq^K{}_Lu^L+O(u^2)$, and $q^K{}_L$ is naturally defined as a tensor; in fact it can be regarded as a matrix acting on tangent vectors. 
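To see schematically what the equation $V^2=0$ says about the tensor $q$ (a sketch, with the signs coming from statistics suppressed): for an odd vector field, $V^2={1\over 2}\{V,V\}$ is itself a vector field, with components $V^L\partial_LV^K$, so near the zero one has

```latex
0 \;=\; V^L\,{\partial V^K\over\partial u^L}
  \;=\; \bigl(q^L{}_M u^M+O(u^2)\bigr)\bigl(q^K{}_L+O(u)\bigr)
  \;=\; q^K{}_L\,q^L{}_M\,u^M+O(u^2),
```

and the vanishing of the term linear in $u$ is precisely the statement that $q$ squares to zero.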
Upon expanding the equation $V^2=0$ in powers of $u$, one finds that $q^K{}_Lq^L{}_M=0$, or more succinctly $$ q^2 = 0. \eqn\qbo$$ In the case of the vector field $V$ on the space of open-string world-sheet theories, the tangent space on which the matrix $q$ acts is the space of local operators that can be added to the boundary interaction; so it is closely related to the space of first-quantized open-string states. Thus essentially $q$ is an operator of ghost number one and square zero in the open-string Hilbert space; it is in fact simply the usual BRST operator, for the world-sheet theory with that particular boundary interaction. What we have come upon here seems to be the natural off-shell framework for BRST invariance. Off-shell one has a vector field $V$ obeying $V^2=0$. $V$ vanishes precisely on shell, and then the derivative of $V$ is the usual BRST operator $q$ obeying $q^2=0$. In fact, this structure can be seen -- but is perhaps not usually isolated -- in conventional versions of string field theory. \chapter{Definition Of The Antibracket} We now come to the more difficult part of our problem -- defining the antibracket. What will be said here is in no way definitive. It might be helpful first to explain how the antibracket is defined on shell; see also [\everlinde,\wz]. We start with a conformally invariant and BRST invariant world-sheet theory with action $I=I_0+I'$, where $$I'=\int_{\partial\Sigma}{\rm d}\sigma \,\,\,{\cal V}, \eqn\poco$$ for some ${\cal V}$. A tangent vector to the space of classical solutions of open-string theory is represented by a spin one primary field $\delta {\cal V}$. This perturbation must be BRST invariant in the sense that $$\{Q,\delta {\cal V}\}={\rm d} {\cal O} \eqn\mopo$$ for some ${\cal O}$, of ghost number one. If we are given two such tangent vectors $\delta_i{\cal V},\,\,\,i=1,2$, then $\{Q,\delta_i{\cal V}\}={\rm d}{\cal O}_i$ for two operators ${\cal O}_i$.
Then we can define the antibracket: $$\omega(\delta_1{\cal V},\delta_2{\cal V})=\langle {\cal O}_1{\cal O}_2\rangle . \eqn\lopo$$ Here $\langle \dots\rangle$ is the expectation value of a product of operators inserted on the disc, in the world-sheet field theory, and the ${\cal O}_i$ are inserted at arbitrary points on the boundary of the disc. Conformal invariance ensures that the positions at which the ${\cal O}_i$ are inserted do not matter. With a view, however, to the later off-shell generalization, I prefer to write $$\omega(\delta_1{\cal V},\delta_2{\cal V})= \oint {\rm d}\sigma_1\oint {\rm d}\sigma_2\langle {\cal O}_1(\sigma_1){\cal O}_2 (\sigma_2)\rangle \eqn\loppo$$ with the length element ${\rm d}\sigma$ (determined from the metric on $\Sigma$) now normalized so that the circumference is 1. The correlation function in \loppo\ is BRST invariant and vanishes if either of the ${\cal O}_i$ is of the form $\{Q,\dots\}$, so $\omega$ can be regarded as a two-form on the space of classical solutions. $\omega$ has ghost number $-1$ since the ghost number of the vacuum is $-3$ on the disc, and the shifts $\delta_i{\cal V}\rightarrow {\cal O}_i,\,\,i=1,2$ have shifted the ghost number by $+2$. Non-degeneracy of $\omega$ follows from its relation to the Zamolodchikov metric $g(\cdot,\cdot)$ on the space of conformal field theories. Indeed, if $V$ and $W$ are two spin one primary fields containing no ghost or antighost fields, and $\delta_1{\cal V}=V$, $\delta_2{\cal V}=\partial c \cdot W$, then $\omega(\delta_1{\cal V},\delta_2{\cal V})=g(V,W)$. According to the standard analysis of world-sheet BRST cohomology, every tangent vector to the space of open-string solutions can be put in the form of $\delta_1{\cal V}$ or $\delta_2{\cal V}$. The non-degeneracy of $\omega$ thus is a consequence of the non-degeneracy of the Zamolodchikov metric. $\omega(\cdot,\cdot)$ is really the correct analog of $g(\cdot,\cdot)$ when one includes the ghosts.
In many respects, ${\cal O}$ is more fundamental than $\delta {\cal V}$. In string field theory, for instance, the classical string field is an object of ghost number $1$, corresponding to ${\cal O}$. At the level of states, the relation between $\delta{\cal V}$ and ${\cal O}$ can be written $$b_{-1}|{\cal O}\rangle =|\delta {\cal V}\rangle.\eqn\hoho$$ This equation has the following immediate consequence: $$ b_{-1}|\delta{\cal V}\rangle = 0 . \eqn\ofo$$ I want to reexpress these formulas in terms of operators inserted on the boundary of the disc (rather than states), so that they can be taken off-shell. A useful way to do this is as follows. Let $v^i$ be the Killing vector field that generates a rotation of the disc, and let $\epsilon^j{}_k$ be the complex structure of the disc. Since $ v$ is a Killing vector field, the operator-valued one-form $b(v)=v^ib_{ij}\epsilon^j{}_k{\rm d} x^k$ is closed. Let $$ b_\alpha =\oint_{C_\alpha} b(v)\eqn\ucu$$ where the contour $C_\alpha$ is a distance $\alpha$ from the boundary of the disc. Since $b(v)$ is closed, the operator $b_\alpha$, inserted in correlation functions, is independent of $\alpha$ except when the contour $C_\alpha$ crosses operator insertions. The operator $b_\alpha$ acts like $b_{-1}$ on an open string insertion on the boundary of the disc (it acts as $b_0-\overline b_0$ on a closed string insertion at the center of the disc). A version of \ofo\ that involves no assumption of conformal or BRST invariance, and hence makes sense off-shell, is the statement $$ \lim_{\alpha\rightarrow 0}b_\alpha = 0 . \eqn\nurmo$$ This captures the idea that the operators on the boundary of the disc, which is at $\alpha=0$, are annihilated by $b_{-1}$. A similar version of \hoho\ that makes sense off-shell is $$ \lim_{\alpha\rightarrow 0}b_\alpha {\cal O}(\sigma)=\delta{\cal V}(\sigma), \eqn\urmo$$ with $\sigma$ an arbitrary point on the boundary of the disc. 
We will use the symbol $b_{-1}$ as an abbreviation for $\lim_{\alpha\rightarrow 0} b_\alpha$, and so write \urmo\ as $b_{-1}{\cal O}=\delta{\cal V}$. On shell, when $\delta{\cal V}$ is given, ${\cal O}$ is uniquely determined, either by \mopo\ or by the pair of equations $$\delta{\cal V}=b_{-1}{\cal O} \eqn\mmopo$$ and $$ 0 =\{Q,{\cal O}\}. \eqn\mmmopo$$ Off-shell, neither \mopo\ nor \mmmopo\ makes sense. \mmopo\ still makes sense, but it does not determine ${\cal O}$ uniquely. It determines ${\cal O}$ only modulo addition of an operator of the form $b_{-1}(\dots)$. Actually, since we consider $\delta {\cal V}$ to be trivial if it is of the form ${\rm d}(\dots)$, ${\cal O}$ is also indeterminate up to addition of an operator of the form ${\rm d}(\dots)$. The possibility of adding a total derivative to $\delta{\cal V}$ or ${\cal O}$ causes no problem. The indeterminacy that causes a problem is the possibility of adding $b_{-1}(\dots)$ to ${\cal O}$. We might want to define the antibracket off-shell by the same formula we used on-shell: $\omega(\delta_1{\cal V},\delta_2{\cal V}) =\oint {\rm d}\sigma_1\oint {\rm d}\sigma_2\langle {\cal O}_1(\sigma_1) {\cal O}_2(\sigma_2)\rangle$. But this formula is ambiguous, since the ${\cal O}$'s are not uniquely determined by the $\delta {\cal V}$'s. I will make a proposal, though far from definitive, for solving this problem. \section{The Enlarged Space Of Theories} By comparison to string field theory, it is easy to see the origin of the problem. In string field theory, the basic field is an object of ghost number 1 -- an ${\cal O}$, in our present terminology -- and the antibracket is defined, accordingly, by a two point function of ${\cal O}$'s. Since the perturbation of the (boundary term in the) Lagrangian of the two dimensional field theory is defined by $\delta{\cal V}=b_{-1}{\cal O}$, in passing from ${\cal O}$ to $\delta{\cal V}$, we are throwing away some of the degrees of freedom, namely the operators annihilated by $b_{-1}$.
To solve the problem, one must find a role in the formalism for those operators. I will simply include them by hand. Instead of saying that the basic object is a world-sheet Lagrangian of the form $$I=I_0+\int_{\partial\Sigma}{\rm d}\sigma\,\,\,{\cal V}, \eqn\koko$$ I will henceforth say that the basic object is such a world-sheet Lagrangian together with a local operator ${\cal O}$ such that $$ {\cal V}=b_{-1}{\cal O}. \eqn\uru$$ The left hand side is now ${\cal V}$, not $\delta{\cal V}$, so we are changing the meaning of ${\cal O}$. Since ${\cal V}$ is determined by ${\cal O}$, we can consider the basic variable to be ${\cal O}$, just as in string field theory. (However, just as in string field theory, one defines the statistics of the field to be the natural statistics of ${\cal V}$, and the opposite of the natural statistics of ${\cal O}$.) Now we can define the antibracket: $$\omega(\delta_1{\cal O},\delta_2{\cal O})=\oint{\rm d}\sigma_1\oint {\rm d} \sigma_2\langle\delta_1{\cal O}(\sigma_1)\delta_2{\cal O}(\sigma_2)\rangle. \eqn\hogo$$ To formally prove that ${\rm d}\omega=0$, one proceeds as follows. First of all, if $U_i(\sigma_i)$ are any local operators inserted at points $\sigma_i\in\partial\Sigma$, then $$ 0 =\langle b_{-1}\bigl(U_1(\sigma_1)\dots U_n(\sigma_n)\bigr)\rangle. \eqn\hodoc$$ This is a consequence of the fact that (as all the operator insertions are on $\partial\Sigma$) the correlation function $\langle b_\alpha\cdot\prod_iU_i(\sigma_i)\rangle$ is independent of $\alpha$. Taking the limit as the contour $C_\alpha$ shrinks to a point, this correlation function vanishes; taking it to approach $\partial\Sigma$, we get \hodoc.
This Ward identity can be written out in more detail as $$\eqalign{ 0 &=\langle \bigl( b_{-1}U_1(\sigma_1)\bigr) U_2(\sigma_2)\dots U_n(\sigma_n)\rangle -(-1)^{\eta_1}\langle U_1(\sigma_1)\bigl(b_{-1}U_2(\sigma_2)\bigr) \dots U_n(\sigma_n)\rangle \cr &+(-1)^{\eta_1+\eta_2} \langle U_1(\sigma_1)U_2(\sigma_2)\left(b_{-1}U_3(\sigma_3)\right) \dots\rangle \pm \dots, \cr} \eqn\huccu$$ with $\eta_i$ such that $(-1)^{\eta_i}$ is $\mp 1$ for $U_i$ bosonic or fermionic (and $\pm 1$ for $b_{-1}U_i$ bosonic or fermionic). Now if ${\cal O}={\cal O}_0+\sum_it_i{\cal O}_i$, then $${\rm d}\omega(\delta_i{\cal O},\delta_j{\cal O},\delta_k{\cal O}) ={\partial\over\partial t_i}\omega(\delta_j{\cal O},\delta_k{\cal O}) \pm {\rm cyclic~permutations}.\eqn\alsoo$$ Also, since $\partial/\partial t_i$ is generated by an insertion of $\delta_i{\cal V}=b_{-1}\delta_i{\cal O}$, we have $${\partial\over\partial t_i}\omega(\delta_j{\cal O},\delta_k{\cal O}) =\oint{\rm d}\sigma_1\,\,{\rm d}\sigma_2\,\,{\rm d}\sigma_3 \langle \bigl(b_{-1}{\delta_i {\cal O}}(\sigma_1)\bigr) \cdot \delta_j{\cal O}(\sigma_2) \cdot \delta_k{\cal O}(\sigma_3)\rangle. \eqn\balsoo$$ Combining these formulas, we see that ${\rm d}\omega=0$ is a consequence of \huccu. To establish BRST invariance of $\omega$, one must show that ${\rm d}(i_V\omega)=0$, or in other words that $$0={\partial\over\partial t_i}\oint{\rm d}\sigma_1{\rm d}\sigma_2\langle \delta_j{\cal O}(\sigma_1)\cdot\{Q,{\cal O}\}(\sigma_2)\rangle \pm i\leftrightarrow j. \eqn\omigo$$ This is proved similarly, using the additional facts that $\{b_{-1},Q\}=v^i \partial_i$ (the operator that generates the rotation of the circle) and $\oint{\rm d}\sigma\,\, v^i\partial_i{\cal O}=0$. \section{Critique} What is unsatisfactory about all this?
To begin with, we have been working formally in a ``space of all open-string world-sheet theories,'' totally ignoring the ultraviolet divergences that arise when one starts adding arbitrary local operators (perhaps of very large positive or negative dimension) to the boundary action. Even worse, in my view, we have tacitly accepted the view that a theory is canonically determined by its Lagrangian, in this case $I=I_0+\int_{\partial\Sigma}{\rm d}\sigma \,\,\,{\cal V}$. That is fine for cutoff theories with a particular cutoff in place, but runs into difficulties when one tries to remove the cutoff. In the limit in which one removes the cutoff, the theory really depends on both ${\cal V}$ and the cutoff procedure that is used. In our construction, can we work with a cutoff theory or do we need to remove the cutoff? The ingredients we needed were rotation invariance, invariance under $b_{-1}$, and $Q$ invariance. There is no problem in picking a cutoff (such as a Pauli-Villars regulator in the interior of the disc) that preserves the first two (with a modified definition of $b_{-1}$), but there is presumably no cutoff that preserves $Q$. Therefore, we need to take the limit of removing the cutoff. With a cutoff in place, one can use the above procedure to define $\omega$ and prove ${\rm d}\omega=0$, but the cutoff $\omega$ will not be BRST invariant; one will have to hope to recover BRST invariance of $\omega$ in the limit in which the cutoff is removed. This may well work, if a ``space of all world-sheet theories'' (with the desired tangent space) does exist. The main point that arouses skepticism is actually the existence of the wished-for theory space. Even if such a space exists, there is something missing (even at a formal level) in my above definition of $\omega$. Because of the cutoff dependence at intermediate stages, an open-string field theory does not really have a naturally defined local operator ${\cal V}$ representing the boundary interaction. 
Even formally, there is some work to be done to explain what type of objects ${\cal V}$ and ${\cal O}$ are (independently of the particular cutoff procedure) such that the key equation ${\cal V}=b_{-1}{\cal O}$ makes sense. If this were accomplished, one could perhaps give a direct formal definition of $\omega$ manifestly independent of cutoff procedure. \chapter{Conclusions} I hope that I have at least demonstrated in this paper that in trying to make sense of the ``space of all open-string world-sheet field theories,'' the important structure that this space should possess is a BRST invariant antibracket. This will automatically lead to a natural, background independent open-string field theory in which classical solutions are BRST invariant world-sheet theories, and on-shell gauge transformations are generated by the world-sheet BRST operator. The reasons for hoping that the appropriate antibracket exists are that it exists on-shell, it exists in string field theory, and it would exist (as we saw in the last section) if one could totally ignore ultraviolet questions. Moreover, the antibracket is the one important structure that always exists in (appropriate) gauge fixing of classical field theory. Other structures, such as metrics in field space, etc., may or may not exist, but have no general significance in off shell classical field theory. Perhaps it is worth mentioning that although our considerations may appear abstract, they can be made concrete to the extent that one can make sense of the space of open-string field theories. One does not even need the space of {\it all} open-string field theories, since the considerations of this paper are local in theory space and never involve sums over unknown degrees of freedom. 
If one understands any concrete family of two-dimensional field theories, one can determine the function $S$ on the parameter space of this family (up to an additive constant) by integrating the formula $V^I\omega_{IJ}=\partial_JS$; this formula can be made entirely concrete (in terms of correlation functions in the given class of theories). I hope to give some examples of this elsewhere. It seems reasonable to expect that a natural antibracket also exists on the space of all two-dimensional closed-string field theories. It would be nice to understand at least a formal definition (even at the imprecise level of \S3). As for defining an anticommuting vector field on the space of closed-string theories, I hope that this can be done by embedding the two-dimensional world-sheet as a non-topological defect in a topological theory of higher dimension, and using the higher dimensional world much as we used the disc in the present paper. Background independent closed string field theory may therefore be closer than it appears. \ack{I would like to thank G. Segal and B. Zwiebach for discussions.} \refout \end
\section{Introduction} In new generation communication systems, the number of devices participating in the network grows exponentially. Furthermore, data rate requirements become challenging to satisfy as the network density increases. Standard cellular systems, where a set of mobile stations (MSs) is served by a single central base station (BS), have a limited performance due to inter/intra-cell interference. In 5G, the Cloud Radio Access Network (C-RAN) is a candidate solution based on the idea of multi-cell cooperation. In the C-RAN hierarchy, base stations are simple radio units called remote radio heads (RRHs), which only implement radio functionality such as RF conversion, filtering, and amplification. All baseband processing is done in a pool of central processors (CPs), which are connected to the RRHs through finite-capacity fronthaul links. This approach decreases the cost of deployment compared to traditional systems, where each BS has its own on-site baseband processor. Furthermore, multi-cell cooperation enables better resource allocation and enhances the performance. The main architecture of a typical C-RAN system is described in \cite{C-RAN}. In a C-RAN cluster of RRHs and MSs, all RRH-to-MS transmissions are performed in the same time and frequency band to use the spectrum efficiently. In traditional C-RAN networks, all RRHs are connected to a CP by means of wired fronthaul links with high capacity. User data is shared among RRHs using the fronthaul links, enabling optimized resource allocation. On the other hand, in some situations the cost of deploying wired links can be high, especially in urban areas. As an alternative approach, one can use a large base station located close to the CP to send the user data from the CP to the RRHs through wireless links. By this method, the data rate over the fronthaul links can be adjusted adaptively using proper power allocation and beamforming schemes.
In the wireless fronthaul case, the frequency bands of the fronthaul and the access links (the links between RRHs and MSs) may be the same or different. In the in-band scenario, where the two frequency bands coincide, the RRHs have to perform self-interference cancellation, which increases the equipment complexity. To keep the RRHs simple, either the two frequency bands may be separated or a time-division based transmission can be used. In a C-RAN system with wireless fronthaul, the main aim is to design proper beamformers to optimize the network. This problem is similar to a two-hop relay design problem. In relay systems, there are different types of multi-hop mechanisms, such as amplify-and-forward (AF), decode-and-forward (DF), and decompress-and-forward (DCF). The corresponding method is determined by the operation applied by the RRHs to the signal received over the fronthaul links before transmitting to the users. AF type systems are the simplest ones, where RRHs only apply a scaling to the received data \cite{AF-C-RAN}-\cite{AF-Relay2}. In DF based systems, RRHs decode the user data, which requires baseband processing capability at the RRHs \cite{DF-1}-\cite{DF-2}. In DCF based systems, both decoding and decompressing abilities are necessary \cite{DCF-1}-\cite{DCF-2}. In DF and DCF based systems, there is some cooperation between the CP and the RRHs to decide which RRHs decode which user data. In general, this requires a combinatorial search, which makes the design complex. On the other hand, as the user data is decoded, and assuming perfect decoding at sufficiently high signal-to-noise ratio (SNR), the interference between user signals can be eliminated at the RRHs, allowing a higher performance to be offered to the users. In general, AF systems are simpler, but the interference cannot be perfectly eliminated at the RRHs. In C-RAN systems, it is intended to make the RRHs as simple as possible to decrease the deployment cost, which makes AF systems more attractive.
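As a concrete illustration of AF relaying (a self-contained numerical sketch of a standard scalar two-hop AF link with a power-normalizing relay gain, not a model taken from any of the cited papers): the end-to-end SNR obeys the well-known expression $\gamma_{\rm eq}=\gamma_1\gamma_2/(\gamma_1+\gamma_2+1)$, where $\gamma_1,\gamma_2$ are the per-hop SNRs. All variable names below are illustrative.

```python
def af_end_to_end_snr(h1, h2, P1, P2, N1, N2):
    """End-to-end SNR of a scalar two-hop AF link.

    The relay receives y_r = h1*s + n1 and retransmits G*y_r,
    with the gain G chosen so that the relay's average transmit
    power equals P2.
    """
    G2 = P2 / (P1 * abs(h1) ** 2 + N1)           # squared AF gain
    sig = abs(h2) ** 2 * G2 * abs(h1) ** 2 * P1  # signal power at destination
    noise = abs(h2) ** 2 * G2 * N1 + N2          # forwarded noise + local noise
    return sig / noise

# Per-hop SNRs for one channel realization.
h1, h2, P1, P2, N1, N2 = 0.8, 1.3, 2.0, 1.5, 0.1, 0.2
g1 = P1 * abs(h1) ** 2 / N1
g2 = P2 * abs(h2) ** 2 / N2

# The direct computation matches the closed form g1*g2/(g1+g2+1).
assert abs(af_end_to_end_snr(h1, h2, P1, P2, N1, N2)
           - g1 * g2 / (g1 + g2 + 1)) < 1e-9
```

The closed form makes visible the AF drawback mentioned above: the "+1" term and the forwarded noise cap the end-to-end SNR below both per-hop SNRs, whereas ideal DF would be limited only by the weaker hop.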
To optimize a C-RAN network by designing beamformers, the channel coefficients should be known with some accuracy. In general, perfect channel state information (CSI) is not available, as the channel estimation is done via pilot signals with finite power. There are different models for the channel estimation error. It can be shown that linear channel estimation methods with orthogonal pilot signals yield an additive channel estimation error. The error is a random vector whose statistics may or may not be known. Some works assume that first or second order statistics are known \cite{AF-Relay1}, \cite{Ch-Add-1}-\cite{AF-Relay3}, while other works use a model where the error is norm-bounded \cite{AF-C-RAN}, \cite{AF-Relay2}, \cite{AF-Relay4}. The first approach is used when the quantization error in channel estimation is negligible, and the second one when the quantization error is dominant \cite{AF-Relay5-ch-err}. Using the knowledge about the channel error vectors, the beamforming design problem can be well optimized and robustness against errors can be achieved. In this paper, we consider a downlink C-RAN system with wireless fronthaul where the transmissions of the fronthaul and access links take place in the same frequency band but in different time slots. We assume partial channel knowledge, where the second order statistics of the channel error are perfectly known. We optimize the fronthaul and access link beamformers with AF type relaying at the RRHs. The optimization is performed to minimize the total power spent under user signal-to-interference-plus-noise-ratio (SINR) constraints. In the literature, this power minimization problem is referred to as the Quality-of-Service (QoS) problem \cite{AF-Relay2}. In this approach, a certain quality of service is guaranteed to each user, and the total power spent, which is one of the major costs of an operator, is minimized. In this work, our main aim is to find a theoretical lower bound for the total power spent in the system.
To show that a lower bound is tight, it is sufficient to exhibit an algorithm that comes close to the bound, since no algorithm can perform better than a lower bound. To show that the given bound is tight enough, we consider four different design methods with different complexities. The first method is Alternating Optimization (AO), which consecutively solves a series of beamforming design problems using convex optimization with the semi-definite relaxation (SDR) approach. Both fronthaul and access link beamformers are designed using convex optimizations. The performance of this method is close to the bound, but its complexity is high in general. The second method is a modified version of AO called Total SNR Maximization (TSM), where the fronthaul beamformer design is based on the maximization of the total SNR at the RRHs. The access link beamformers are found as in AO. The third and fourth methods are proposed as mixtures of standard beamforming design methods, namely maximal ratio combining (MRC), zero forcing (ZF), and singular value decomposition (SVD). The third method is a combination of MRC and ZF, and hence is named MRC-ZF. In this method, the CP beamformers (related to the fronthaul link) are found using MRC, whereas the RRH beamformers (related to the access link) are found using ZF. The fourth method is called SVD-ZF, and the corresponding CP and RRH beamformers are designed accordingly. MRC-ZF and SVD-ZF can find the beamformers directly, without convex optimization, and hence they are simpler than AO and TSM. They are considered in order to compare the well-known beamforming methods with the high-complexity convex optimization based methods. The contributions of the paper can be listed as follows: \begin{itemize} \item We derive a theoretical lower bound for the total power spent in the system to serve multiple users for a given set of network parameters. By detailed simulations, we show the tightness of the bound.
In general, the papers related to C-RAN propose different design methods whose optimality is not known due to the lack of a theoretical bound or a globally optimum solution. To the best of our knowledge, there is no other work deriving such a bound. \item We propose four novel design methods. Two of them are based on convex optimization with the SDR approach, and the other two are based on combinations of well-known methods. Because of the mixed structure of the CP and RRH beamformers in the SINR expressions, convex optimization cannot be applied directly. We reorganize the related expressions so that the SDR approach becomes applicable. For similar reasons, the direct application of the well-known methods is also not possible. We solve a system of matrix equations to apply MRC, ZF and SVD. \item We perform detailed simulations to observe the performances of the proposed methods. We make a comparison to the theoretical bound for different network parameters. \end{itemize} The organization of the paper is as follows. In Section II, related works are reviewed. Section III describes the general system model. In Section IV, a novel theoretical performance bound for the proposed problem is derived. Section V includes the convex optimization based methods AO and TSM. The modified beamforming methods MRC-ZF and SVD-ZF are described in Section VI. In Section VII, simulation results are presented. Finally, Section VIII concludes the paper. \subsection*{Notation} Throughout the paper, vectors are denoted by bold lowercase letters and matrices are denoted by bold uppercase letters. $(\cdot)^T, (\cdot)^H,$ and $\tr(\cdot)$ indicate the transpose, conjugate transpose and trace operators, respectively. $\textbf{0}$ denotes the all-zero matrix, and $\textbf{A} \succeq 0$ implies that the matrix $\textbf{A}$ is Hermitian and positive semi-definite.
$\text{diag}(x_1, x_2, \ldots, x_n)$ denotes the diagonal matrix with diagonal elements $x_1, x_2, \ldots, x_n$ and $\textbf{I}_n$ denotes the $n \times n$ identity matrix. $\lmin{\cdot}, \lambda_i(\cdot), e_i(\cdot)$ denote the minimum eigenvalue, the $i$-th largest eigenvalue, and the corresponding unit-norm eigenvector of the corresponding Hermitian positive semi-definite matrix, respectively. $\norm{\cdot}$ denotes the $\ell_2$-norm of the corresponding matrix, and $\mathbb{E}[\cdot]$ denotes the expectation operator. $\tvec{\textbf{A}}$ is the column vector consisting of the columns of $\textbf{A}$. The symbol $\otimes$ denotes the Kronecker product. Finally, $\mathbb{C}$ denotes the set of complex numbers and $\delta[\cdot]$ is the discrete impulse function satisfying $\delta[0]=1, \: \delta[x]=0$ for all $x \neq 0$. \section{Related Studies} In this section, we review the related studies in the literature. Firstly, we present the works on wired fronthaul links and mention the main differences compared to the wireless case. Secondly, we review the studies related to AF, DF and DCF type wireless fronthaul systems and indicate the main differences with our work. Thirdly, we mention papers with different channel uncertainty models used in C-RAN system designs. Finally, we state the major differences from the papers on standard relay networks. \vspace{-5mm} \subsection{Wired Fronthaul} There are many studies in the literature on multi-cell cooperation techniques for wired fronthaul. In \cite{Wired-Rate1}-\cite{Wired-Rate3}, optimization is performed to maximize the data rates of the users under certain transmit power and fronthaul capacity constraints. The optimization of the SINRs of the users is analyzed in \cite{Wired-UDD} using uplink-downlink duality. In \cite{Wired-LimitedFronthaul}, the total transmit power is minimized under fronthaul capacity constraints.
In \cite{Wired-SDR}, the cost function consists of a weighted sum of the total transmit power and the total fronthaul data. As another approach, \cite{Wired-UserMax} aims at finding the largest set of users which can be served by the system where each user's data is sent only by a single RRH. The power consumption of RRHs under active and sleeping modes can also be included in the power minimization problem as done in \cite{Wired-GreenCRAN}. In \cite{Wired-ZF}, a standard ZF beamformer design is used; however, its performance in eliminating the interference is limited. In \cite{Wired-Heuristic1}-\cite{Wired-Heuristic3}, the cooperation strategy is found using heuristic search techniques. Possible strategies for the imperfect channel case are considered in \cite{Ch-Add-3}, \cite{Wired-Imperfect-CSI}. Cluster formation \cite{Wired-ClusterFormation} and the effect of user traffic delay \cite{Wired-Delay} are also analyzed in the literature. In the wired fronthaul case, as there is no interference between different users at RRHs, there is a natural combinatorial user selection problem. The CP determines the (possibly overlapping) sets of users to be served by each RRH and sends the corresponding data through the fronthaul links. In general, most of the studies assume that perfect user data is available at RRHs after fronthaul transmission, while some works also take the decompression error effect into account. Since the fronthaul transmission takes place over cables, there is no beamforming at the CP. The design problem is to decide on the cooperation strategy and the beamforming coefficients for the access link. On the other hand, in wireless fronthaul networks, both fronthaul and access links have their own beamformers, which are the main design parameters. Considering the differences in fronthaul structures, the methods proposed for the wired case cannot be directly applied to the wireless case.
\vspace{-4mm} \subsection{Relaying Mechanism for Wireless Fronthaul} Works related to the wireless fronthaul case are limited in number compared to the standard wired case. The problem for the wireless fronthaul case is similar to two-hop relaying. The most studied relaying mechanisms for the C-RAN with wireless fronthaul concept are AF, DF and DCF. In \cite{DF-1}, DF based relaying is assumed where each RRH can decode only a single user's data at once. If more than one user's data is to be decoded, decoding is done by time division. The combinatorial problem of choosing the set of user data to be decoded by each RRH is solved in \cite{DF-1}, while an SDR based beamformer optimization is performed under the perfect CSI assumption. \cite{DF-2} also analyzes DF based relaying, where a weighted sum of user data rates is maximized under a transmit power limit. There is a constraint that each RRH can serve a single user. Beamformer optimization is performed using SDR and perfect CSI is assumed. In \cite{DCF-1}, both DF and DCF based approaches are considered, where the set of user data to be decoded by each RRH is assumed to be known and beamforming optimization is done using the difference-of-convex method. Data rate maximization under a power limit is analyzed for the perfect CSI case. \cite{DCF-2} is the generalized version of \cite{DCF-1}, where there is more than one RRH cluster, each controlled by a different CP. \cite{AF-C-RAN} uses AF type relaying with a norm-bounded channel estimation error model. Using worst-case SINR formulas, the total power is minimized under SINR constraints. In that work, fronthaul beamformers are assumed to be known and access link beamformers are designed using SDR based methods, along with a ZF based approach implemented for comparison. In \cite{AF-Relay2}, a two-hop AF relaying problem is studied under a norm-bounded channel error model. As all independent sources have a single antenna, fronthaul beamforming is not applicable, and only the access link beamforming design is studied.
SDR based optimization is used to minimize the total transmit power under SINR constraints. Because of the combinatorial nature of DF and DCF based relaying schemes, the methods used there for fronthaul beamforming design cannot be directly adapted to AF type relaying. For access link beamforming design, the SDR based approach is widely used for all types of relaying schemes. Some works also consider well-known beamforming methods (such as ZF) for comparison. To the best of our knowledge, there is a very limited amount of work on C-RAN with wireless fronthaul and AF relaying. Furthermore, in such studies, the fronthaul and access link beamformers are not jointly designed, nor is a theoretical bound derived. \vspace{-5mm} \subsection{Channel Error Model} In the C-RAN literature, three types of channel error models are commonly used. The first one is the perfect CSI model, where the channel coefficients are assumed to be perfectly known. Although it is unrealistic, the methods proposed for this case may provide some insights. Furthermore, in most cases, it is possible to modify the corresponding algorithms accordingly when the channel is partially known. The papers \cite{DF-1}-\cite{DCF-2}, \cite{Wired-Rate1}-\cite{Wired-Heuristic3} all assume perfect CSI. The second approach is the norm-bounded error model, in which the error vectors are assumed to lie in a sphere of known radius. The works with this assumption perform beamforming design using worst-case SINRs, defined as the minimum SINR values over the given error norm bounds. \cite{AF-C-RAN}, \cite{AF-Relay2}, \cite{AF-Relay4}-\cite{AF-Relay5-ch-err} and some references therein use this method. The third approach, which is also used in our work, assumes that the second-order statistics (mean and covariance matrices) of the channel estimation error vectors are known. When this approach is used, the mean powers of the signal, interference and noise terms are used in the design process.
\cite{AF-Relay1}, \cite{Ch-Add-1}-\cite{AF-Relay3} use the last approach. \vspace{-5mm} \subsection{Standard Relay Networks} The C-RAN with wireless fronthaul concept is similar to two-hop multi-source/destination multi-antenna relaying networks, and some beamforming design techniques used in the standard relaying literature (such as SDR) can be adapted to the C-RAN framework. On the other hand, the joint optimization of fronthaul and access link beamformers is not widely considered in standard relaying problems. \cite{AF-Relay1}-\cite{AF-Relay2}, \cite{AF-Relay3}-\cite{AF-Relay5-ch-err}, \cite{AF-Relay6}-\cite{AF-Relay7} include beamforming designs for standard relaying problems which are all special cases of the problem considered here. Hence, some methods proposed for relaying problems can be used for our purposes, but none of them directly provides a solution. \section{System Model} We consider the downlink of a C-RAN cluster including a CP with $M$ antennas, $N$ RRHs each with $L$ transmit/receive antennas, and $K$ MSs each with a single antenna. All CP-to-RRH and RRH-to-MS channels are assumed to be flat, constant over a transmission period, and known by the CP up to an additive Gaussian error with known second-order statistics. We assume a two-stage transmission scheme where fronthaul and access link transmissions are performed in different time slots. In the first stage, the user data is sent from the CP to the RRHs over wireless channels. The RRHs apply a linear transformation to the received data using beamforming matrices, as in AF relaying, and forward the transformed signal to the MSs in the second stage. We assume that RRHs are simple radio units without baseband processing capability, and hence they cannot decode the user data. Therefore, the AF relaying mechanism is considered in this model. Fig. 1 shows the general block diagram of the model used.
\vspace{-5mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.62\textwidth]{wireless_fronthaul_v3.png} \caption{Block Diagram of Downlink C-RAN with Wireless Fronthaul.} \end{figure} \vspace{-5mm} We denote the channel between CP and $n$-th RRH as $\textbf{G}_n \in \mathbb{C}^{M \times L}$, the channel between $n$-th RRH and $k$-th MS as $\textbf{h}_{kn} \in \mathbb{C}^L$, the beamformer vector of CP for $k$-th user as $\textbf{v}_{k}\in \mathbb{C}^M$, and beamforming matrix for $n$-th RRH as $\textbf{W}_n\in \mathbb{C}^{L \times L}$. The received signal of the $n$-th RRH in the first transmission stage can be written as \begin{equation} \textbf{x}_n = \textbf{G}_n^H\displaystyle\sum_{k=1}^K\textbf{v}_{k}s_k+\textbf{z}_n, \quad n=1, 2, \ldots, N \end{equation} where $s_k$ denotes the $k$-th user data which satisfies $\mathbb{E}[|s_k|^2]=1, \quad \forall k=1, 2, \ldots, K$ and $\textbf{z}_n \sim \mathcal{C}\mathcal{N}(\textbf{0}, \sigma_{\text{RRH}}^2\textbf{I}_{L})$ is the noise term in the corresponding RRH. After the first stage, the transformed signal by $n$-th RRH is given by \begin{equation} \textbf{y}_n = \textbf{W}_n \textbf{x}_n, \quad n=1, 2, \ldots, N. \end{equation} In this case, the received signal by the $k$-th MS can be expressed by \begin{equation} \begin{aligned} r_k &= \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H\textbf{y}_n + n_k = \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \left(\textbf{G}_n^H\displaystyle\sum_{\ell=1}^K\textbf{v}_{\ell}s_{\ell} + \textbf{z}_n\right) + n_k \\ &= \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \textbf{G}_n^H \textbf{v}_{k}s_{k}+\displaystyle\sum_{n=1}^N \displaystyle\sum_{\ell \neq k}^K \textbf{h}_{kn}^H \textbf{W}_n \textbf{G}_n^H \textbf{v}_{\ell}s_{\ell} + \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \textbf{z}_n + n_k. 
\end{aligned} \end{equation} Here $n_k \sim \mathcal{C}\mathcal{N}(0, \sigma_{\text{MS}}^2)$ denotes the noise term in the $k$-th MS. In order to simplify expressions, we define augmented channel, beamformer and noise vectors/matrices as given below: \begin{equation} \begin{aligned} \textbf{h}_k &= [\textbf{h}_{k1}^T \: \textbf{h}_{k2}^T \: \cdots \: \textbf{h}_{kN}^T]^T \: : \: NL \times 1, \quad \textbf{W} = \text{diag}\left(\textbf{W}_1, \: \textbf{W}_2, \: \ldots, \: \textbf{W}_N \right) \: : \: NL \times NL, \\ \textbf{G} &= \left[\textbf{G}_1 \: \textbf{G}_2 \: \cdots \: \textbf{G}_N \right] \: : \: M \times NL, \quad \textbf{z} = [ \textbf{z}_1^T \: \textbf{z}_2^T \: \cdots \: \textbf{z}_N^T]^T \: : \: NL \times 1. \\ \end{aligned} \end{equation} Using the augmented variables, we can write $r_k$ as \begin{equation} \label{r_k_1} r_k = \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k s_k + \displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell} + \textbf{h}_k^H\textbf{W}\textbf{z} + n_k. \end{equation} We model the channel estimates as $\textbf{G}_n = \widehat{\textbf{G}}_n + \Delta \textbf{G}_n, \: \: \textbf{h}_{kn} = \widehat{\textbf{h}}_{kn} + \Delta \textbf{h}_{kn}$ where $\widehat{\textbf{G}}_n $ and $\widehat{\textbf{h}}_{kn}$ are channel estimates, $\Delta \textbf{G}_n$ is a zero-mean complex Gaussian matrix with independent entries each with variance $\sigma_{1,n}^2$ and $\Delta \textbf{h}_{kn} \sim \mathcal{C}\mathcal{N}\left(\textbf{0}, \sigma_{2,k,n}^2\textbf{I}_{L}\right)$ is a circularly symmetric Gaussian vector. We also assume that $\Delta \textbf{G}_n$ and $\Delta \textbf{h}_{kn}$ are independent for all $n$ and $k$. Using the error vectors and matrices, we can form the corresponding augmented variables as shown in (\ref{h_aug}). 
\begin{equation} \label{h_aug} \begin{aligned} \widehat{\textbf{h}}_k &= [\widehat{\textbf{h}}_{k1}^T \: \widehat{\textbf{h}}_{k2}^T \: \cdots \: \widehat{\textbf{h}}_{kN}^T]^T \: : \: NL \times 1, \quad \widehat{\textbf{G}} = \left[\widehat{\textbf{G}}_1 \: \widehat{\textbf{G}}_2 \: \cdots \: \widehat{\textbf{G}}_N \right] \: : \: M \times NL, \\ \Delta \textbf{h}_k &= [\Delta \textbf{h}_{k1}^T \: \Delta \textbf{h}_{k2}^T \: \cdots \: \Delta \textbf{h}_{kN}^T]^T \: : \: NL \times 1, \quad \Delta \textbf{G} = \left[\Delta \textbf{G}_1 \: \Delta \textbf{G}_2 \: \cdots \: \Delta \textbf{G}_N \right] \: : \: M \times NL. \end{aligned} \end{equation} Using the new variables and (\ref{r_k_1}), we can write $r_k$ as \begin{equation} \label{r_k_2} r_k = \underbrace{\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k s_k}_{\text{desired}} + \underbrace{\left(\textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k - \widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right) s_k}_{\text{interference part 1}} + \underbrace{\displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell}}_{\text{interference part 2}} + \underbrace{\textbf{h}_k^H\textbf{W}\textbf{z} + n_k}_{\text{noise}}. \end{equation} In (\ref{r_k_2}), the desired part includes the desired signal for the $k$-th MS. Notice that it contains only the channel estimates for the $k$-th user, which is the only part useful to the receiver of the corresponding MS. Interference part 1 is related to the channel mismatch for the $k$-th user's signal. Although it includes the $s_k$ term, the corresponding signal is not useful, as its coefficient is not known by the receiver due to the uncertainty in the channel estimates. Interference part 2 is the actual interference signal, consisting of the signals intended for the other users. The noise term is the combination of the amplified and forwarded RRH receiver noise and the MS receiver noise.
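To make the augmented notation concrete, the following sketch builds the augmented quantities and verifies numerically that the compact expression (\ref{r_k_1}) reproduces the per-RRH model; all dimensions and random draws below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch (assumed sizes, random draws): build the augmented
# h_k, W, G, z and check that the compact received-signal expression
# matches the per-RRH two-stage AF model.
rng = np.random.default_rng(0)
M, N, L, K = 4, 3, 2, 2  # CP antennas, RRHs, antennas per RRH, users

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

G_n = [crandn(M, L) for _ in range(N)]                    # CP-to-RRH channels
h_kn = [[crandn(L) for _ in range(N)] for _ in range(K)]  # RRH-to-MS channels
v = [crandn(M) for _ in range(K)]                         # CP beamformers v_k
W_n = [crandn(L, L) for _ in range(N)]                    # RRH beamforming matrices
s = crandn(K)                                             # user symbols
z_n = [crandn(L) for _ in range(N)]                       # RRH noise
n_MS = crandn(K)                                          # MS noise

# Per-RRH form: RRH receive, linear transform, MS receive
tx = sum(v[k] * s[k] for k in range(K))
x = [G_n[n].conj().T @ tx + z_n[n] for n in range(N)]
y = [W_n[n] @ x[n] for n in range(N)]
r_direct = [sum(h_kn[k][n].conj() @ y[n] for n in range(N)) + n_MS[k]
            for k in range(K)]

# Augmented form: stacked h_k, G, z and block-diagonal W
h = [np.concatenate(h_kn[k]) for k in range(K)]           # h_k : NL x 1
G = np.hstack(G_n)                                        # G   : M x NL
z = np.concatenate(z_n)                                   # z   : NL x 1
W = np.zeros((N * L, N * L), dtype=complex)
for n in range(N):
    W[n * L:(n + 1) * L, n * L:(n + 1) * L] = W_n[n]

r_aug = [h[k].conj() @ W @ (G.conj().T @ tx + z) + n_MS[k] for k in range(K)]
```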
Using the equation in (\ref{r_k_2}), we define \begin{equation} \label{SINR_1} \text{SINR}_k = \dfrac{P_d}{P_{I,1}+P_{I,2}+P_n} \end{equation} where \begin{equation} \label{P_d} \begin{aligned} P_d &= \mathbb{E}\left\{\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k s_k\right|^2\right\}, \: \: &&P_{I,1} = \mathbb{E}\left\{\left|\left(\textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k - \widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right) s_k \right|^2\right\} \\ P_{I,2} &= \mathbb{E}\left\{\left|\displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell} \right|^2\right\}, \: \: &&P_n = \mathbb{E}\left\{\left|\textbf{h}_k^H\textbf{W}\textbf{z} + n_k \right|^2\right\}. \end{aligned} \end{equation} Using the fact that $\mathbb{E}\left[s_k^Hs_{\ell}\right]=\delta[k-\ell]$ and statistics of the channel error matrices/vectors and noise terms, we find that \begin{equation} \label{SINR_2} \text{SINR}_k = \dfrac{\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2}{\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 + \sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) + \sigma_{\text{MS}}^2} \end{equation} where \begin{equation} \begin{aligned} \textbf{D}_k &= \widehat{\textbf{h}}_k\widehat{\textbf{h}}_k^H + \bm{\Sigma}_{2,k}, \: \: \textbf{C}_k = \widehat{\textbf{G}}^H\textbf{v}_k\textbf{v}_k^H \widehat{\textbf{G}} + (\textbf{v}_k^H\textbf{v}_k)\bm{\Sigma}_1, \\ \bm{\Sigma}_1 &= \text{diag}\left(\sigma_{1,1}^2\textbf{I}_L, \sigma_{1,2}^2\textbf{I}_L, \ldots, \sigma_{1,N}^2\textbf{I}_L\right), \: \: \bm{\Sigma}_{2,k} = \text{diag}\left(\sigma_{2,k,1}^2\textbf{I}_L, \sigma_{2,k,2}^2\textbf{I}_L, \ldots, \sigma_{2,k,N}^2\textbf{I}_L\right). 
\end{aligned} \end{equation} In Appendix A, we show that the rate $\log_2(1+\text{SINR}_k)$ is achievable for the $k$-th user. Hence, the SINR that we defined can be used as a design criterion. Another design metric that can be optimized is the total power spent in the system. The total power $P$ has two components, $P_{\text{CP}}$ and $P_{\text{RRH}}$, which correspond to the powers transmitted by the CP and the RRHs, respectively. Using the fact that $\mathbb{E}\left[s_k^H s_{\ell}\right]=\delta[k-\ell]$, we can write\footnote{Actual power terms include a constant multiplier which does not affect the solution, and hence they are omitted.} \begin{equation} P_{\text{CP}} = \mathbb{E}\left[\left|\displaystyle\sum_{k=1}^K\textbf{v}_ks_k\right|^2\right] = \displaystyle\sum_{k=1}^K \textbf{v}_k^H \textbf{v}_k, \end{equation} and \begin{equation} \begin{aligned} P_{\text{RRH}} &= \displaystyle\sum_{n=1}^N \mathbb{E}\left[\left|\textbf{y}_n\right|^2\right] = \displaystyle\sum_{n=1}^N \mathbb{E}\left[\left|\textbf{W}_n\left(\textbf{G}_n^H\displaystyle\sum_{k=1}^K\textbf{v}_{k}s_k+\textbf{z}_n\right)\right|^2\right] \\ &= \displaystyle\sum_{k=1}^K \textbf{v}_k^H \textbf{G}\textbf{W}^H \textbf{W} \textbf{G}^H \textbf{v}_k + \sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right). \end{aligned} \end{equation} Due to imperfect channel state information, $P_{\text{RRH}}$ includes random terms. Therefore, we optimize the mean power $P=P_{\text{CP}} + \mathbb{E}\left\{P_{\text{RRH}}\right\}$, which can be evaluated as \begin{equation} \label{P_eqn} P = \displaystyle\sum_{k=1}^K \tr\left(\bm{\tau_0} \textbf{v}_k\textbf{v}_k^H\right) + \sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right) \end{equation} where $\bm{\tau_0}= \textbf{I}_M + \widehat{\textbf{G}}\textbf{W}^H\textbf{W}\widehat{\textbf{G}}^H + \tr\left(\textbf{W}^H \textbf{W} \bm{\Sigma}_1\right)\textbf{I}_M$.
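The following sketch illustrates how (\ref{SINR_2}) and (\ref{P_eqn}) can be evaluated numerically from the channel estimates; all dimensions, error variances and random channel draws are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative evaluation of SINR_k and the mean power P from the channel
# estimates; every numeric value here is an assumption for demonstration.
rng = np.random.default_rng(1)
M, N, L, K = 4, 3, 2, 2
sig_RRH2, sig_MS2 = 0.1, 0.1
sig1 = np.array([0.01, 0.02, 0.01])       # sigma_{1,n}^2, per RRH
sig2 = np.full((K, N), 0.01)              # sigma_{2,k,n}^2

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

Ghat = crandn(M, N * L)                   # augmented estimate of G
hhat = [crandn(N * L) for _ in range(K)]  # augmented estimates of h_k
v = [crandn(M) for _ in range(K)]
W = np.zeros((N * L, N * L), dtype=complex)
for n in range(N):                        # block-diagonal RRH beamformer
    W[n * L:(n + 1) * L, n * L:(n + 1) * L] = crandn(L, L)

Sigma1 = np.kron(np.diag(sig1), np.eye(L))

def C(l):   # C_l = Ghat^H v_l v_l^H Ghat + (v_l^H v_l) Sigma1
    u = Ghat.conj().T @ v[l]
    return np.outer(u, u.conj()) + (v[l].conj() @ v[l]).real * Sigma1

def D(k):   # D_k = hhat_k hhat_k^H + Sigma_{2,k}
    return np.outer(hhat[k], hhat[k].conj()) + np.kron(np.diag(sig2[k]), np.eye(L))

def sinr(k):  # SINR_k as a ratio of quadratic forms in W
    num = abs(hhat[k].conj() @ W @ Ghat.conj().T @ v[k]) ** 2
    den = (sum(np.trace(D(k) @ W @ C(l) @ W.conj().T).real for l in range(K))
           - num + sig_RRH2 * np.trace(D(k) @ W @ W.conj().T).real + sig_MS2)
    return num / den

trWW = np.trace(W.conj().T @ W).real
tau0 = (np.eye(M) + Ghat @ W.conj().T @ W @ Ghat.conj().T
        + np.trace(W.conj().T @ W @ Sigma1).real * np.eye(M))
P = sum(np.trace(tau0 @ np.outer(v[k], v[k].conj())).real
        for k in range(K)) + sig_RRH2 * trWW
```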
In this study, we aim to minimize the total mean power $P$ under the SINR constraints $\text{SINR}_k \geq \gamma_k$, where $\left\{\gamma_k\right\}_{k=1}^K$ are given SINR thresholds.\footnote{Feasibility cannot be guaranteed. Bad channel conditions and/or high SINR thresholds may yield infeasible results.} As shown in Appendix A, the SINR constraints guarantee that the rate $\log_2(1+\gamma_k)$ is achievable for the $k$-th user. This type of problem is studied under the Quality-of-Service (QoS) framework in the literature, where the power spent in the system is minimized while a certain rate (or SINR) is guaranteed for each user. User rates can be adjusted according to the priority of users by changing the corresponding threshold values. The main optimization problem (P0) can be formulated as \begin{equation} (\text{P}0) \: \: \min_{\textbf{W}, \{\textbf{v}_k\}_{k=1}^K} P \quad \text{such that} \quad \text{SINR}_k \geq \gamma_k, \quad \forall k=1, 2, \ldots, K. \end{equation} \section{A Theoretical Performance Bound} In this section, we derive a novel performance bound for (P0), namely a lower bound on the total mean power $P$ under the SINR constraints. Using the SINR constraints, for all $k$ we have \begin{equation} \label{B1} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \geq \gamma_k\left(\tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 + \sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) + \sigma_{\text{MS}}^2\right).
\end{equation} Algebraic manipulations reveal that \begin{equation} \label{B2} \begin{aligned} \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= \tr\left(\bm{\Sigma}_{2,k} (\textbf{W}\Gcap^H\textbf{v}_k)(\textbf{W}\Gcap^H\textbf{v}_k)^H\right) + \\ &\textbf{v}_k^H \textbf{v}_k\left[\tr\left((\hcap_k^H\textbf{W})^H(\hcap_k^H\textbf{W})\bm{\Sigma}_1\right) + \tr\left(\textbf{W}^H\textbf{W}\bm{\Sigma}_{2,k}\bm{\Sigma}_1\right)\right]. \end{aligned} \end{equation} To show (\ref{B2}), we use the facts that $\textbf{W}\bm{\Sigma}_1=\bm{\Sigma}_1\textbf{W}$ and $\textbf{W}\bm{\Sigma}_{2,k}=\bm{\Sigma}_{2,k}\textbf{W}$. By Von Neumann's inequality \cite{Von-Neumann}, for any two $c \times c$ Hermitian positive semi-definite matrices $\textbf{A}$ and $\textbf{B}$, we have $\tr\left(\textbf{A}\textbf{B}\right) \geq \displaystyle\sum_{i=1}^c \lambda_i(\textbf{A}) \lambda_{c-i+1}(\textbf{B}) \geq \lambda_c(\textbf{A}) \lambda_1(\textbf{B}) = \lmin{\textbf{A}}\norm{\textbf{B}}$. Using this fact and (\ref{B2}), we have \begin{equation} \begin{aligned} \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &\geq \lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \\ &\textbf{v}_k^H\textbf{v}_k\left[\lmin{\bm{\Sigma}_1}\norm{\hcap_k^H\textbf{W}}^2 + \lmin{\bm{\Sigma}_{2,k}\bm{\Sigma}_1}\norm{\textbf{W}}^2\right]. \end{aligned} \end{equation} Similarly, we get $\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) = \tr\left((\hcap_k\hcap_k^H+\bm{\Sigma}_{2,k})\textbf{W}\textbf{W}^H\right) \geq \norm{\hcap_k^H\textbf{W}}^2+\lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}}^2$.
Therefore, we obtain that \begin{align} \label{B3} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &\geq \gamma_k \Big[\lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \lmin{\textbf{v}_k^H\textbf{v}_k\bm{\Sigma}_1+\sigma_{\text{RRH}}^2\textbf{I}_{NL}}\norm{\hcap_k^H\textbf{W}}^2 + \notag \\ &\lmin{\textbf{v}_k^H\textbf{v}_k\bm{\Sigma}_{2,k}\bm{\Sigma}_1 + \sigma_{\text{RRH}}^2\bm{\Sigma}_{2,k}}\norm{\textbf{W}}^2 + \sigma_{\text{MS}}^2 \Big] \\ &= \gamma_k \left[\sigma_{2,k}^2\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2+(\textbf{v}_k^H\textbf{v}_k\sigma_1^2+\sigma_{\text{RRH}}^2)\left(\norm{\hcap_k^H\textbf{W}}^2 + \sigma_{2,k}^2\norm{\textbf{W}}^2\right) + \sigma_{\text{MS}}^2 \right] \notag \end{align} where $\sigma_1^2 = \min\limits_{n} \sigma_{1,n}^2$ and $\sigma_{2,k}^2=\min\limits_{n} \sigma_{2,k,n}^2$. Similarly, we obtain that \begin{equation} \label{B4} \begin{aligned} P &\geq \displaystyle\sum_{k=1}^K \left(\textbf{v}_k^H\textbf{v}_k + \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \textbf{v}_k^H\textbf{v}_k \lmin{\bm{\Sigma}_1}\norm{\textbf{W}}^2\right)+\sigma_{\text{RRH}}^2\tr\left(\textbf{W}^H\textbf{W}\right) \\ &\geq \displaystyle\sum_{k=1}^K \Bigl( \underbrace{\textbf{v}_k^H\textbf{v}_k + \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \textbf{v}_k^H\textbf{v}_k\sigma_1^2\norm{\textbf{W}}^2 + \dfrac{\sigma_{\text{RRH}}^2}{K}\norm{\textbf{W}}^2}_{P_k} \Bigr). \end{aligned} \end{equation} We will find a lower bound for $P_k$ for all $k$ using (\ref{B3}). 
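The trace inequality used in this derivation is easy to check numerically; the following sketch (random positive semi-definite matrices, purely illustrative) verifies the chain $\tr(\textbf{A}\textbf{B}) \geq \sum_{i=1}^c \lambda_i(\textbf{A})\lambda_{c-i+1}(\textbf{B}) \geq \lmin{\textbf{A}}\norm{\textbf{B}}$.

```python
import numpy as np

# Illustrative check of the trace inequality for Hermitian PSD A, B:
# tr(AB) >= sum_i l_i(A) l_{c-i+1}(B) >= lmin(A) * ||B|| (spectral norm).
rng = np.random.default_rng(2)
c = 6
Xa = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
Xb = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
A = Xa @ Xa.conj().T                          # Hermitian positive semi-definite
B = Xb @ Xb.conj().T

tr_AB = np.trace(A @ B).real
lamA = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, descending
lamB = np.sort(np.linalg.eigvalsh(B))[::-1]
middle = float(lamA @ lamB[::-1])             # sum_i l_i(A) l_{c-i+1}(B)
lower = lamA[-1] * lamB[0]                    # lmin(A) * spectral norm of B
```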
To simplify the notations, we define \begin{equation} \label{B_def} \begin{aligned} x_1=\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2, \: x_2=\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2, \: x_3&=\norm{\hcap_k^H\textbf{W}}^2, \: x_4=\norm{\textbf{W}}^2, \: x_5=\textbf{v}_k^H\textbf{v}_k, \: y=P_k \\ c_1=\gamma_k, \: c_2=\sigma_{2,k}^2, \: c_3= \sigma_1^2, \: c_4=\sigma_{\text{RRH}}^2, \: c_5&=\sigma_{\text{MS}}^2, \: c_6 = \dfrac{\sigma_{\text{RRH}}^2}{K}, \: d_1=\norm{\hcap_k}^2, \: d_2 = \norm{\Gcap}^2. \end{aligned} \end{equation} (\ref{B3}) and (\ref{B4}) can be written in terms of new variables as \begin{equation} \label{B5} x_1 \geq c_1\left[c_2x_2 + (c_3x_5+c_4)(x_3+c_2x_4)+c_5\right], \: \: y = x_2+x_5+c_3x_4x_5+c_6x_4. \end{equation} By Cauchy-Schwarz Inequality \cite{CS} and submultiplicativity of $\ell_2$-norm, we get \begin{equation} \label{B6} \norm{\hcap_k}^2 \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 \geq \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \: \Longrightarrow \: x_2 d_1 \geq x_1. \end{equation} \begin{equation} \label{B7} \norm{\textbf{W}}^2 \norm{\hcap_k}^2 \geq \norm{\hcap_k^H\textbf{W}}^2 \: \Longrightarrow \: x_4 d_1 \geq x_3. \end{equation} \begin{equation} \label{B8} \norm{\hcap_k^H\textbf{W}}^2 \norm{\textbf{v}_k}^2 \norm{\Gcap}^2 \geq \norm{\hcap_k^H\textbf{W}}^2 \norm{\textbf{v}_k^H\Gcap}^2 \geq \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \: \Longrightarrow \: x_3 x_5 d_2 \geq x_1. \end{equation} In Appendix B, using (\ref{B5})-(\ref{B8}) and Arithmetic-Geometric Mean Inequality \cite{AM-GM}, we show that \begin{equation} \label{B_bound} y \geq \dfrac{1}{a}\left(b+c_3c_5+2\sqrt{c_3c_5b+c_5(d_2+c_6)a}\right), \end{equation} where $a=\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3, \: b=c_4(d_1+c_2)$. 
Together with the feasibility condition also found in Appendix B, we can express the bound as \begin{equation} P \geq \displaystyle\sum_{k=1}^K \dfrac{\Htilde_k\sigma_{\text{RRH}}^2+\Gtilde\sigma_{\text{MS}}^2+2\sigma_{\text{RRH}}\sigma_{\text{MS}}\sqrt{\Htilde_k\Gtilde+\dfrac{\Delta_k}{K}}}{\Delta_k}, \: \: \Delta_k>0, \: \forall k \end{equation} where \begin{equation} \Htilde_k = \norm{\hcap_k}^2+\sigma_{2,k}^2, \: \Gtilde = \norm{\Gcap}^2+\sigma_1^2, \: \Delta_k=\left(1+\dfrac{1}{\gamma_k}\right)\norm{\hcap_k}^2\norm{\Gcap}^2-\Htilde_k\Gtilde. \end{equation} Using (\ref{B_def}), it can be shown that $a=\Delta_k$. In Appendix B, we show that $a>0$ (equivalently $\Delta_k>0, \: \forall k$) is a necessary (but not sufficient) feasibility condition which has to be satisfied to obtain a proper solution for (P0).\footnote{We can find upper bounds for the SINR thresholds by considering $\Delta_k=0$ to obtain a feasible solution.} It is easy to show that the lower bound is an increasing function of $\sigma_{\text{RRH}}, \sigma_{\text{MS}}, \sigma_1, \sigma_{2,k}, \gamma_k$ and a decreasing function of $\norm{\hcap_k}$ and $\norm{\Gcap}$, as expected. \section{Convex Optimization Methods} In the previous section, we found a performance bound for problem (P0). To observe the tightness of the proposed lower bound, we consider different methods to solve the joint beamformer design problem. In this section, we present two convex optimization based methods to solve (P0). Both methods apply successive convex optimizations with the SDR idea. First, we show that each of the fronthaul and access link beamformers can be found using convex optimization with SDR when the other is fixed. Using this observation, we then propose two methods with different complexities. \vspace{-5mm} \subsection{Access Link Beamformer Design} Let the $\textbf{v}_k$'s be given. In this case, the matrices $\textbf{D}_k$ and $\textbf{C}_{\ell}$ become constant.
For any matrices $\textbf{X}, \textbf{Y}, \textbf{Z}$ with suitable dimensions, we have $\tr\left(\textbf{X}^H\textbf{Y}\textbf{X}\textbf{Z}\right) = (\tvec{\textbf{X}})^H\left(\textbf{Z}^T\otimes\textbf{Y}\right)\tvec{\textbf{X}}$ \cite{vec-eqn}. Using this property, we get \begin{align} \label{C1} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= (\tvec{\textbf{W}})^H\left((\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes (\hcap_k\hcap_k^H) \right) \tvec{\textbf{W}} \notag \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) &= (\tvec{\textbf{W}})^H\left(\textbf{C}_{\ell}^T \otimes \textbf{D}_k\right) \tvec{\textbf{W}}, \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) &= (\tvec{\textbf{W}})^H\left(\textbf{I}_{NL} \otimes \textbf{D}_k\right) \tvec{\textbf{W}}, \: \: \tr\left(\textbf{W}^H\textbf{W}\right) = (\tvec{\textbf{W}})^H\tvec{\textbf{W}}. \notag \end{align} Similarly, we obtain that \begin{align} \label{C2} \tr\left(\bm{\tau_0} \textbf{v}_k\textbf{v}_k^H\right) &= \textbf{v}_k^H\textbf{v}_k + \tr\left(\textbf{W}^H\textbf{W}\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap\right) + \tr\left(\textbf{W}^H\textbf{W}\bm{\Sigma}_1\right)\textbf{v}_k^H\textbf{v}_k \notag \\ &= \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left((\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes \textbf{I}_{NL} + (\textbf{v}_k^H\textbf{v}_k)(\bm{\Sigma}_1 \otimes \textbf{I}_{NL}) \right) \tvec{\textbf{W}} \notag \\ &= \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left(\textbf{C}_k^T \otimes \textbf{I}_{NL}\right)\tvec{\textbf{W}}. \end{align} Define $\textbf{T}_k = (\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes (\hcap_k\hcap_k^H), \: \textbf{F}_{\ell,k}=\textbf{C}_{\ell}^T \otimes \textbf{D}_k, \: \textbf{E}_k=\textbf{I}_{NL} \otimes \textbf{D}_k, \: \textbf{J}_k = \textbf{C}_k^T \otimes \textbf{I}_{NL}$ for all $k$. 
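The vectorization identity underlying (\ref{C1}) is easy to verify numerically; the following sketch (random complex matrices, purely illustrative) checks $\tr(\textbf{X}^H\textbf{Y}\textbf{X}\textbf{Z})=(\tvec{\textbf{X}})^H\left(\textbf{Z}^T\otimes\textbf{Y}\right)\tvec{\textbf{X}}$ with column-stacking vectorization.

```python
import numpy as np

# Illustrative check of tr(X^H Y X Z) = vec(X)^H (Z^T kron Y) vec(X),
# where vec(.) stacks the columns of its argument.
rng = np.random.default_rng(3)
p, q = 3, 4
X = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
Y = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Z = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))

vecX = X.reshape(-1, order="F")        # column-major (column-stacking) vec
lhs = np.trace(X.conj().T @ Y @ X @ Z)
rhs = vecX.conj() @ np.kron(Z.T, Y) @ vecX
```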
Then we can write the SINR conditions and the total mean power as \begin{equation} \label{C3} \begin{aligned} (\tvec{\textbf{W}})^H\left[(1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right]\tvec{\textbf{W}} \geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ P = \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\tvec{\textbf{W}}. \end{aligned} \end{equation} The matrix $\textbf{W}$ is block diagonal and contains only $NL^2$ unknowns; the other $(N^2-N)L^2$ entries are zero. Hence, there exist a matrix $\textbf{U}: \: N^2L^2 \times NL^2$ and a vector of unknown variables $\textbf{w}_0: \: NL^2 \times 1$ such that $\tvec{\textbf{W}} = \textbf{U} \textbf{w}_0$. Each column of $\textbf{U}$ contains a single 1, with all other entries equal to 0; the 1's are placed at the positions corresponding to the unknown variables in $\tvec{\textbf{W}}$. After this observation, we can write the problem in terms of $\textbf{w}_0$: \begin{equation} \label{C4} \begin{aligned} \textbf{w}_0^H\textbf{U}^H\left[(1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right]\textbf{U}\textbf{w}_0\geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ P = \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + \textbf{w}_0^H\textbf{U}^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\textbf{U}\textbf{w}_0. \end{aligned} \end{equation} Finally, we define $\bm{\mathcal{W}} = \textbf{w}_0\textbf{w}_0^H$, which satisfies $\bm{\mathcal{W}} \succeq 0$ and $\text{rank}(\bm{\mathcal{W}})=1$.
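The construction of the selection matrix $\textbf{U}$ can be sketched as follows; $N$, $L$ and the random diagonal blocks are illustrative assumptions.

```python
import numpy as np

# Illustrative construction of the selection matrix U with vec(W) = U w0
# for a block-diagonal W; each column of U holds a single 1.
rng = np.random.default_rng(4)
N, L = 3, 2
blocks = [rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
          for _ in range(N)]

W = np.zeros((N * L, N * L), dtype=complex)
for n in range(N):
    W[n * L:(n + 1) * L, n * L:(n + 1) * L] = blocks[n]

# Column-stacked indices of vec(W) that fall inside the diagonal blocks
idx = [r + c * N * L                     # entry (r, c) of W in vec(W)
       for n in range(N)
       for c in range(n * L, (n + 1) * L)
       for r in range(n * L, (n + 1) * L)]
U = np.zeros((N ** 2 * L ** 2, N * L ** 2))
for j, i in enumerate(idx):
    U[i, j] = 1.0                        # one 1 per column of U

w0 = W.reshape(-1, order="F")[idx]       # the N*L^2 unknown entries
vecW = U @ w0                            # reconstructs vec(W)
```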
Using the variable $\bm{\mathcal{W}}$, we can formulate the problem as \begin{equation} \label{C5} \begin{aligned} &(\text{P}1) \: \: \underset{\bm{\mathcal{W}}}{\min} \: \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + \tr\left[\left(\textbf{U}^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\textbf{U}\right)\bm{\mathcal{W}}\right] \\ &\text{such that} \: \tr\left[\left(\textbf{U}^H\left((1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right)\textbf{U}\right)\bm{\mathcal{W}} \right] \geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ &\bm{\mathcal{W}} \succeq 0, \: \text{rank}(\bm{\mathcal{W}})=1. \end{aligned} \end{equation} In (P1), the cost and all constraints except the rank constraint are convex. By omitting the rank constraint, the problem can be solved with SDR using standard convex optimization tools such as SeDuMi \cite{SeDuMi}, CVX \cite{CVX} and Mosek \cite{Mosek}. \vspace{-5mm} \subsection{Fronthaul Link Beamformer Design} In this part, we consider the case where $\textbf{W}$ is fixed. In this case, we can write \begin{equation} \label{C6} \begin{aligned} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= \textbf{v}_k^H\Gcap\textbf{W}^H\hcap_k\hcap_k^H\textbf{W}\Gcap^H\textbf{v}_k \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) &= \tr\left(\textbf{W}^H\textbf{D}_k\textbf{W}\left(\Gcap^H\textbf{v}_{\ell}\textbf{v}_{\ell}^H\Gcap+(\textbf{v}_{\ell}^H\textbf{v}_{\ell})\bm{\Sigma}_1\right)\right) \\ &= \textbf{v}_{\ell}^H\left[\Gcap\textbf{W}^H\textbf{D}_k\textbf{W}\Gcap^H+\tr(\textbf{W}^H\textbf{D}_k\textbf{W}\bm{\Sigma}_1)\textbf{I}_M\right]\textbf{v}_{\ell}.
\end{aligned} \end{equation} Let $\textbf{A}_k=\Gcap\textbf{W}^H\hcap_k\hcap_k^H\textbf{W}\Gcap^H, \: \textbf{B}_k = \Gcap\textbf{W}^H\textbf{D}_k\textbf{W}\Gcap^H+\tr(\textbf{W}^H\textbf{D}_k\textbf{W}\bm{\Sigma}_1)\textbf{I}_M, \: \textbf{V}_k=\textbf{v}_k\textbf{v}_k^H, \: \forall k$ and $a=\sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right) , \: b=\sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right)+\sigma_{\text{MS}}^2$. Using (\ref{P_eqn}) and (\ref{C6}), we formulate the problem as \begin{equation} \label{C7} \begin{aligned} &(\text{P}2) \: \: \underset{\{\textbf{V}_k\}_{k=1}^K}{\min} \: \displaystyle\sum_{k=1}^K \tr\left(\bm{\tau_0} \textbf{V}_k\right) + a \\ &\text{such that} \: \dfrac{\tr\left(\textbf{A}_k\textbf{V}_k\right)}{\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{B}_k\textbf{V}_{\ell}\right)-\tr\left(\textbf{A}_k\textbf{V}_k\right)+b} \geq \gamma_k, \: \: \forall k, \quad \textbf{V}_k \succeq 0, \: \text{rank}(\textbf{V}_k)=1, \: \: \forall k. \end{aligned} \end{equation} (P2) can also be solved using convex optimization tools by omitting the rank constraints. \vspace{-5mm} \subsection{Rank-1 Approximation for SDR} In both the fronthaul and access link beamformer designs, we find a solution by omitting the rank constraint. If the result is rank-1, the solution is optimal. Otherwise, we apply a widely used randomization method \cite{AF-Relay1}-\cite{DF-2}, \cite{Ch-Add-1}. Let $\textbf{X}$ be the matrix found after convex optimization. We want to find a vector $\textbf{x}$ satisfying $\textbf{X}=\textbf{x}\textbf{x}^H$, which is not possible if $\text{rank}(\textbf{X})>1$. In such a case, we select $\textbf{x}=\textbf{E}\bm{\Lambda}^{1/2}\textbf{y}$, where $\textbf{X}=\textbf{E}\bm{\Lambda}\textbf{E}^H$ is the eigenvalue decomposition of $\textbf{X}$ and $\textbf{y}$ is a zero-mean real Gaussian random vector with identity covariance matrix.
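The randomization step can be sketched as follows; the matrix $\textbf{X}$ below is a random positive semi-definite stand-in for an SDR output, not an actual solution. The candidates satisfy $\mathbb{E}[\textbf{x}\textbf{x}^H]=\textbf{X}$, which the sketch checks empirically.

```python
import numpy as np

# Illustrative Gaussian randomization: for X = E Lam E^H (PSD), candidates
# x = E Lam^{1/2} y with y a real standard Gaussian satisfy E[x x^H] = X.
rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = A @ A.conj().T                       # PSD stand-in for an SDR output

lam, E = np.linalg.eigh(X)               # eigenvalue decomposition of X
lam = np.clip(lam, 0.0, None)            # guard against tiny negative values

def candidate():
    y = rng.standard_normal(n)           # zero-mean real Gaussian, identity cov.
    return E @ (np.sqrt(lam) * y)

T = 20000
S = np.zeros((n, n), dtype=complex)      # empirical second moment of x
for _ in range(T):
    x = candidate()
    S += np.outer(x, x.conj())
S /= T
```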
\vspace{-5mm} \subsection{Alternating Optimization (AO) Method} Each of the fronthaul and access link beamformers can be found using convex optimization with the SDR approach when the other one is fixed. Using this idea, we can find a solution of (P0) by alternately optimizing the fronthaul and access link beamformers. In general, alternating optimization methods converge to locally optimal points. The choice of the initial point affects the performance. We consider the CP-to-RRH transmissions and use the total SNR at the RRHs to find a suitable initial point. Let $\text{SNR}_{kn}$ be the SNR of the $k$-th user at the $n$-th RRH, i.e., $\text{SNR}_{kn} = \dfrac{\lVert \Gcap_n^H\textbf{v}_k \rVert^2}{\sigma_{\text{RRH}}^2}.$ The total SNR is given by $\text{SNR}_{\text{tot}} = \displaystyle\sum_{n=1}^N\displaystyle\sum_{k=1}^K \text{SNR}_{kn} = \tr\left(\textbf{V}^H\textbf{G}_0\textbf{V}\right)$ where $\textbf{G}_0 = \dfrac{1}{\sigma_{\text{RRH}}^2}\displaystyle\sum_{n=1}^N \Gcap_n\Gcap_n^H$ and $\textbf{V} = \left[ \textbf{v}_1 \: \textbf{v}_2 \: \cdots \: \textbf{v}_K \right]$. We know that $P_{\text{CP}}=\tr\left(\textbf{V}^H\textbf{V}\right)$. Furthermore, in order to send the user data from the CP to the RRHs properly, we need $M \geq K$ and $\text{rank}(\textbf{V})=K$. To satisfy these constraints, we choose $\textbf{V}$ such that $\textbf{V}^H\textbf{V}=\dfrac{P_{\text{CP}}}{K} \textbf{I}_K$. We aim to find $\textbf{V}$ maximizing $\text{SNR}_{\text{tot}}$. By von Neumann's trace inequality, we have \begin{equation} \tr\left(\textbf{V}\textbf{V}^H\textbf{G}_0 \right) \leq \displaystyle\sum_{i=1}^M \lambda_i\left(\textbf{V}\textbf{V}^H\right)\lambda_i\left(\textbf{G}_0 \right) = \displaystyle\sum_{i=1}^K \lambda_i\left(\textbf{V}\textbf{V}^H\right)\lambda_i\left(\textbf{G}_0 \right) = \dfrac{P_{\text{CP}}}{K}\displaystyle\sum_{i=1}^K \lambda_i\left(\textbf{G}_0 \right).
\end{equation} Notice that the $K$ largest eigenvalues of $\textbf{V}\textbf{V}^H$ are equal to $\dfrac{P_{\text{CP}}}{K}$ and the other $M-K$ are zero. Equality holds when we have \begin{equation} \label{Init} \textbf{v}_k = \sqrt{\dfrac{P_{\text{CP}}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k. \end{equation} To find a suitable initial point, we select the CP beamformers as in (\ref{Init}). An initial value of $P_{\text{CP}}$ is also required; to select it, we use Algorithm 0. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 0} (Initialization for Alternating Optimization) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Set $P_{\text{CP}}^{(0)}=1, \: \mu_0 = 1.05, \: t_{\text{max}, 0}=100$. For $t=0, 1, 2, \ldots, t_{\text{max}, 0}$ repeat the following steps: \begin{itemize} \item Form $\textbf{v}_k^{(t)} = \sqrt{\dfrac{P_{\text{CP}}^{(t)}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k$. Solve (P1) to find $\textbf{W}^{(t)}$. \item If the problem is feasible, then set the initial value of $\textbf{W}$ as $\textbf{W}^{(t)}$ and terminate. \item Set $P_{\text{CP}}^{(t+1)}=\mu_0 P_{\text{CP}}^{(t)}$. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} Algorithm 0 is used to find the initial value of $\textbf{W}$. Starting from this value, we apply alternating optimization by solving (P1) and (P2) iteratively. At each iteration, $P$ decreases since both (P1) and (P2) minimize $P$ when one of the fronthaul and access link beamformers is fixed. As the power is bounded below ($P \geq 0$), we conclude by the monotone convergence theorem \cite{MCT} that this method is convergent. When the relative change of $P$ is small enough, we stop the iteration and output the final solution. The method is summarized in Algorithm 1.
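The choice in (\ref{Init}) can be verified numerically: scaling the top-$K$ eigenvectors of $\textbf{G}_0$ attains the von Neumann bound with equality. A short NumPy sketch (random matrices stand in for the estimated CP-to-RRH channels; the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L, K = 8, 4, 4, 4
sigma_rrh2, P_cp = 1.0, 10.0

# Random stand-ins for the estimated CP-to-RRH channel matrices (M x L each).
G = [rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)) for _ in range(N)]
G0 = sum(Gn @ Gn.conj().T for Gn in G) / sigma_rrh2   # Hermitian PSD, M x M

lam, E = np.linalg.eigh(G0)                  # eigenvalues in ascending order
V = np.sqrt(P_cp / K) * E[:, -K:]            # columns: scaled top-K eigenvectors

snr_tot = np.real(np.trace(V.conj().T @ G0 @ V))   # achieved total SNR
bound = (P_cp / K) * lam[-K:].sum()                # von Neumann upper bound
power = np.real(np.trace(V.conj().T @ V))          # total CP power, equals P_cp
```

The achieved total SNR matches the bound exactly, confirming the optimality of the top-$K$ eigenvector choice under this power normalization.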
\vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 1} (Alternating Optimization) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Using Algorithm 0, find the initial value $\textbf{W}^{(0)}$. Define $t_{\text{max}, 1}=100, \: \eta=10^{-3}$. For $t=0, 1, \ldots, t_{\text{max}, 1}$, repeat the following steps: \begin{itemize} \item Solve (P2) to find $\textbf{v}_k^{(t)}, \: \forall k$. Solve (P1) to find $\textbf{W}^{(t)}$. \item If $|P^{(t)}-P^{(t-1)}|<\eta P^{(t)}$, then terminate. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} \vspace{-5mm} \subsection{Total SNR Max (TSM) Method} Algorithm 0 is used to find an initial point for the AO method. By extending Algorithm 0, we propose another iterative method, called the Total SNR Max (TSM) method, which is computationally less complex than AO. First, we make an observation on how $P$ behaves as $P_{\text{CP}}$ increases. Assume that we use (\ref{Init}) to form the CP beamformers. Starting from a small value, we increase $P_{\text{CP}}$ gradually and each time find the corresponding RRH beamforming matrix by solving (P1), as in Algorithm 0. We observe that in general there exist two iteration indices $0<t_1<t_2$ such that the problem is infeasible for $t<t_1$, $P^{(t)}$ is decreasing for $t_1<t<t_2$, and increasing for $t>t_2$. This shows that the optimal value of $P$ is achieved at $t=t_2$. Motivated by this observation, we propose Algorithm 2. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 2} (Total SNR Max Method) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Set $P_{\text{CP}}^{(0)}=1, \: P^{(0)}=0, \: \mu_2=1.05$. For $t=0, 1, \ldots, t_{\text{max}, 2}=100$, repeat the following steps: \begin{itemize} \item Form $\textbf{v}_k^{(t)}=\sqrt{\dfrac{P_{\text{CP}}^{(t)}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k$. \item Solve (P1) to find $\textbf{W}^{(t)}$.
If the problem is feasible, then evaluate $P^{(t)}$ using the CP and RRH beamformers. Otherwise, set $P^{(t)}=0$. \item If $P^{(t)}>P^{(t-1)}>0$, then terminate. \item Set $P_{\text{CP}}^{(t+1)}=\mu_2 P_{\text{CP}}^{(t)}$. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} This algorithm finds the CP beamformers using the approach given in (\ref{Init}) by iteratively changing the $P_{\text{CP}}$ value. The RRH beamformer is selected as in the AO method. \vspace{-5mm} \subsection{Complexity of Convex Optimization Methods} In general, we can measure the computational complexity of AO and TSM as the product of the number of iterations and the complexity of each iteration. At each iteration, the dominant component of the complexity is the convex optimization; all other operations can be neglected. We use SeDuMi as the convex optimization tool to implement AO and TSM. In both methods, at each iteration, we minimize $c^Hx$ subject to $Ax=b$ where $x \in \mathbb{C}^{n}$ is the vector of all unknowns and $A \in \mathbb{C}^{m \times n}, \: b\in\mathbb{C}^m, \: c\in \mathbb{C}^{n}$ are known vectors/matrices. By \cite{SeDuMi}, the corresponding computational complexity of SeDuMi is $\mathcal{O}(n^2m^{2.5}+m^{3.5})$. The corresponding $m$ and $n$ values for the fronthaul and access link beamforming designs are calculated as \begin{equation} \text{Fronthaul Link}: \: \: m=K, \: \: n=K+KM^2, \: \: \text{Access Link}: \: \: m=K, \: \: n=K+N^2L^4. \end{equation} In AO, both fronthaul and access link beamformer designs are done by convex optimization, whereas TSM uses convex optimization only for the access link.
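Plugging the $(m,n)$ pairs above into the SeDuMi estimate $\mathcal{O}(n^2m^{2.5}+m^{3.5})$, the per-iteration costs of the two methods can be compared with a short script (a back-of-the-envelope sketch in Python; constants and lower-order terms are ignored, so only the relative orders are meaningful):

```python
def sedumi_order(m, n):
    """Leading-order SeDuMi cost for min c^H x s.t. Ax = b with A of size m x n."""
    return n**2 * m**2.5 + m**3.5

def per_iteration_order(K, N, L, M):
    m = K
    n_fronthaul = K + K * M**2   # (P2): variables {V_k}, each M x M
    n_access = K + N**2 * L**4   # (P1): variable is (N L^2)^2-dimensional after lifting
    ao = sedumi_order(m, n_fronthaul) + sedumi_order(m, n_access)  # AO solves (P1) and (P2)
    tsm = sedumi_order(m, n_access)                                # TSM solves only (P1)
    return ao, tsm

ao, tsm = per_iteration_order(K=4, N=4, L=4, M=8)   # setting used in the simulations
```

For this setting the access link term dominates, so the per-iteration gap between AO and TSM stays moderate even though AO solves two problems per iteration.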
Hence, the corresponding computational complexities are given by \begin{equation} \begin{aligned} \text{Complexity of AO}: &\quad \mathcal{O}\left(N_{\text{AO}}K^{2.5}\left[(K+KM^2)^2+(K+N^2L^4)^2+2K\right]\right), \\ \text{Complexity of TSM}: &\quad \mathcal{O}\left(N_{\text{TSM}}K^{2.5}\left[(K+N^2L^4)^2+K\right]\right) \end{aligned} \end{equation} where $N_{\text{AO}}$ and $N_{\text{TSM}}$ are the numbers of iterations of AO and TSM, respectively. In the simulation results, we show that the numbers of iterations of the two methods are similar and that the average complexity of AO is larger than that of TSM, as expected. \vspace{-4mm} \section{Standard Beamforming Methods} In this section, we present two algorithms adapted from well-known beamforming methods based on MRC, ZF, and SVD. The purpose of considering these methods is to observe how well-known methods perform in our joint beamforming design problem. We also compare them with the performance bound and with the more complex convex optimization based methods described in the previous section. In the first method, called MRC-ZF, we design the fronthaul beamformers using the MRC idea. The access link beamformers are chosen as in ZF to cancel the interference due to other user signals. The second method is called SVD-ZF, where the fronthaul beamformers are designed by an SVD-based algorithm. The access link beamformers are chosen to cancel the interference, as in MRC-ZF. Because of the nature of the problem, a direct implementation is not possible; some adaptations are needed to use MRC, ZF, and SVD. \vspace{-5mm} \subsection{MRC-ZF} MRC maximizes the signal power by coherent reception, while ZF eliminates the interference and hence enhances the SINR.
Motivated by these beamforming methods, we choose the fronthaul and access link beamformers as \begin{equation} \label{S1} \textbf{v}_k = \left(\hcap_k^H\textbf{W}\Gcap^H\right)^H, \: \: \forall k, \quad \hcap_k^H\textbf{W}\Gcap^H\textbf{v}_{\ell} = \delta[k-\ell], \: \: \forall k, \ell. \end{equation} In this method, the $\textbf{v}_k$'s are chosen as the conjugate-transpose of the corresponding effective channel $\hcap_k^H\textbf{W}\Gcap^H$. The matrix $\textbf{W}$ is chosen to cancel the interference due to undesired user signals. Notice that both beamformers are chosen in terms of the channel estimates only; this keeps the algorithm simple. Using (\ref{S1}), we get \begin{equation} \hcap_k^H\textbf{W}\Gcap^H\Gcap\textbf{W}^H\hcap_{\ell}=\tr\left(\textbf{W}^H\hcap_{\ell}\hcap_k^H\textbf{W}\Gcap^H\Gcap\right)=\delta[k-\ell] , \: \: \forall k, \ell. \end{equation} Using the fact that $\tvec{\textbf{W}}=\textbf{U}\textbf{w}_0$, we obtain \begin{equation} \label{S2} \textbf{w}_0^H \textbf{U}^H\left[(\Gcap^H\Gcap)^T \otimes (\hcap_{\ell}\hcap_k^H)\right]\textbf{U}\textbf{w}_0=\delta[k-\ell], \: \: \forall k, \ell. \end{equation} (\ref{S2}) is a quadratically constrained quadratic program (QCQP) type problem consisting of a set of second-order matrix equations with $NL^2$ unknowns and $K^2$ equations. If $NL^2 \geq K^2$, then we can find a solution using a standard QCQP solver. Let $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ be a solution of (\ref{S1}). We use $\textbf{v}_k=\sqrt{a}\textbf{v}_{k,0}, \: \forall k$ and $\textbf{W}=\sqrt{b}\textbf{W}_0$ where $a$ and $b$ are two non-negative real numbers. We use $a$ and $b$ to optimize the power allocation and minimize the total power spent.
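The Kronecker form in (\ref{S2}) rests on the standard identity $\tr(\textbf{W}^H\textbf{A}\textbf{W}\textbf{B}) = \tvec{\textbf{W}}^H\left(\textbf{B}^T\otimes\textbf{A}\right)\tvec{\textbf{W}}$, here with $\textbf{A}=\hcap_{\ell}\hcap_k^H$ and $\textbf{B}=\Gcap^H\Gcap$. A quick NumPy check with random matrices (column-stacking is assumed for the $\tvec{\cdot}$ operator):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 5                       # illustrative dimensions

W  = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
hk = rng.standard_normal((p, 1)) + 1j * rng.standard_normal((p, 1))
hl = rng.standard_normal((p, 1)) + 1j * rng.standard_normal((p, 1))
Gq = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))

A = hl @ hk.conj().T              # rank-1 term  h_l h_k^H
B = Gq.conj().T @ Gq              # Gramian      G^H G
w = W.reshape(-1, order='F')      # vec(W): stack columns

lhs = np.trace(W.conj().T @ A @ W @ B)        # trace form of the constraint
rhs = w.conj() @ np.kron(B.T, A) @ w          # quadratic form in vec(W)
```

The two quantities agree to machine precision, which is exactly the rewriting used to obtain (\ref{S2}).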
Using the beamformer expressions, we can write the SINR constraints and the total mean power as \vspace{-5mm} \begin{equation} \label{S3} \dfrac{ab\cdot c_{k,1}}{ab\cdot c_{k,2}-ab\cdot c_{k,1}+b\cdot c_{k,3}+c_{k,4}} \geq \gamma_k, \: \: \forall k, \: \: P = a\cdot d_5+ab\cdot d_6+b\cdot d_7 \end{equation} where \begin{equation} \begin{aligned} c_{k,1}&=\left|\widehat{\textbf{h}}_k^H\textbf{W}_0\widehat{\textbf{G}}^H\textbf{v}_{k,0}\right|^2, \: \: c_{k,2}=\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{D}_k\textbf{W}_0\left(\Gcap^H\textbf{v}_{\ell,0}\textbf{v}_{\ell,0}^H\Gcap+(\textbf{v}_{\ell,0}^H\textbf{v}_{\ell,0})\bm{\Sigma}_1\right)\textbf{W}_0^H\right) \\ c_{k,3}&=\sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}_0\textbf{W}_0^H\right), \: \: c_{k,4}=\sigma_{\text{RRH}}^2, \: \: d_5=\displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\textbf{v}_{k,0} \\ d_6 &= \displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\Gcap\textbf{W}_0^H\textbf{W}_0\Gcap^H\textbf{v}_{k,0} + \tr\left(\textbf{W}_0^H\textbf{W}_0\bm{\Sigma}_1\right)\displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\textbf{v}_{k,0}, \: \: d_7 = \sigma_{\text{RRH}}^2\tr\left(\textbf{W}_0^H\textbf{W}_0\right). \end{aligned} \end{equation} Using the SINR constraints in (\ref{S3}), we get \begin{equation} \label{S4} a \geq d_{k,1}+\dfrac{d_{k,2}}{b}, \: \: (1+\gamma_k)c_{k,1}>\gamma_kc_{k,2}, \: \: \forall k \end{equation} where $d_{k,1}=\dfrac{\gamma_kc_{k,3}}{(1+\gamma_k)c_{k,1}-\gamma_kc_{k,2}}, \: d_{k,2}=\dfrac{\gamma_kc_{k,4}}{(1+\gamma_k)c_{k,1}-\gamma_kc_{k,2}}, \: \forall k$. The first condition in (\ref{S4}) provides $K$ inequalities relating $a$ and $b$. The second condition must be satisfied to obtain a feasible solution. The problem of minimizing $P$ in (\ref{S3}) under the SINR constraints given by (\ref{S4}) is a two-variable QCQP problem which can be solved directly. The solution steps are explained in Algorithm 3.
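The scalar minimization underlying this procedure can be checked numerically: for a given $k$, substituting $a=d_{k,1}+d_{k,2}/b$ into $P=a d_5+ab\,d_6+b\,d_7$ gives $P(b)=d_{k,1}d_5+d_{k,2}d_6+d_{k,2}d_5/b+(d_{k,1}d_6+d_7)b$, whose minimum over $b>0$ follows from the AM-GM inequality. A pure-Python sketch with illustrative (made-up) coefficients:

```python
import math

# Illustrative coefficients (not taken from the paper's simulations).
d5, d6, d7 = 2.0, 0.5, 1.5
dk1, dk2 = 0.8, 1.2

def P(b):
    a = dk1 + dk2 / b                   # SINR constraint of user k active with equality
    return a * d5 + a * b * d6 + b * d7

# P(b) = C + A/b + B*b with the constants below; AM-GM gives the minimum.
A, B, C = dk2 * d5, dk1 * d6 + d7, dk1 * d5 + dk2 * d6
b_star = math.sqrt(A / B)               # unconstrained minimizer
P_min = C + 2.0 * math.sqrt(A * B)

# Grid check around the minimizer.
grid_min = min(P(b_star * (x / 100)) for x in range(10, 1001))
```

Within the interval $[b_1, b_2]$, Algorithm 3 additionally evaluates the endpoints, which covers the case where the unconstrained minimizer falls outside the interval.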
\vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 3} (MRC-ZF) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \begin{itemize} \item Find $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ by solving (\ref{S2}) using a QCQP solver. \item Check the feasibility condition given by (\ref{S4}). If it is not satisfied, then terminate. \item For all $k$ evaluate $d_{k,1}, d_{k,2}, d_5, d_6, d_7$ using $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$. \end{itemize} For each $k=1, 2, \ldots, K$ repeat the following steps: \begin{itemize} \item Find the solution interval $[b_1, b_2] \subseteq [0, \infty)$ of $b$ satisfying $d_{k,1}+\dfrac{d_{k,2}}{b} \geq d_{\ell,1}+\dfrac{d_{\ell,2}}{b}, \: \: \forall \ell \neq k$. \item Evaluate the minimum value $P_{k,0}$ of $P = a\cdot d_5+ab\cdot d_6+b\cdot d_7$ over $b>0$ for $a=d_{k,1}+\dfrac{d_{k,2}}{b}$, which is given by $P_{k,0}=d_{k,1}d_5+d_{k,2}d_6+2\sqrt{d_{k,2}d_5\left(d_{k,1}d_6+d_7\right)}$. \item Evaluate the values of $P = a\cdot d_5+ab\cdot d_6+b\cdot d_7$ for $a=d_{k,1}+\dfrac{d_{k,2}}{b_1}, \: \: b=b_1$ and $a=d_{k,1}+\dfrac{d_{k,2}}{b_2}, \: \: b=b_2$ as $P_{k,1}$ and $P_{k,2}$. \item Evaluate the global minimum candidate for $k$ as $P_{\text{min},k}=\min (P_{k,0}, P_{k,1}, P_{k,2})$. \end{itemize} Find the solution as $P_{\text{min}}=\min\limits_{k} P_{\text{min},k}$. \vspace{-3mm} \noindent\rule{\textwidth}{0.4pt} Algorithm 3 optimally solves the beamforming design problem defined by the MRC-ZF method. Notice that there is a feasibility condition, defined by (\ref{S4}), which has to be satisfied in order to find a suitable beamformer. By design, the algorithm cancels the interference due to undesired user signals. As it uses the channel estimates only, the interference due to the channel mismatch cannot be canceled. The channel estimation error should be small enough to satisfy the feasibility condition.
There is also the condition $NL^2 \geq K^2$ required to find a solution to the matrix equation in (\ref{S2}). These conditions imply that MRC-ZF can be used if the channel estimation quality is good enough and the number of users is small enough. \vspace{-5mm} \subsection{SVD-ZF} In the TSM method, the fronthaul beamformers are designed by maximizing the total SNR at the RRHs. We have shown that the corresponding beamformer is found using the SVD of a sum of channel components related to the CP-to-RRH channels. We use this approach to design the fronthaul beamformers, and the access link beamformers are found as in MRC-ZF. Motivated by the SVD and ZF type operations, we call this method SVD-ZF. We first consider the system of equations \begin{equation} \label{S5} \textbf{v}_k = e_k\left(\textbf{G}_0\right), \: \: \forall k, \quad \hcap_k^H\textbf{W}\Gcap^H\textbf{v}_{\ell} = \delta[k-\ell], \: \: \forall k, \ell \end{equation} where $\textbf{G}_0 = \dfrac{1}{\sigma_{\text{RRH}}^2}\displaystyle\sum_{n=1}^N \Gcap_n\Gcap_n^H$. The first condition in (\ref{S5}) maximizes the total SNR at the RRHs and the second condition eliminates the interference. As in MRC-ZF, we only use the channel estimates to design the beamformers, for simplicity. Using the transformation $\tvec{\textbf{W}}=\textbf{U}\textbf{w}_0$, we obtain \begin{equation} \label{S6} \hcap_k^H\textbf{W}\Gcap^He_{\ell}\left(\textbf{G}_0\right) = \left(\tvec{[\Gcap^He_{\ell}\left(\textbf{G}_0\right)\hcap_k^H]^T}\right)^T\textbf{U}\textbf{w}_0 =\delta[k-\ell], \: \: \forall k, \ell. \end{equation} (\ref{S6}) is a system of linear equations with $NL^2$ unknowns and $K^2$ equations. For $NL^2 \geq K^2$, we can find a solution using generalized matrix inversion. As in MRC-ZF, we optimize the power allocation to minimize the total power spent. Let $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ be a solution of (\ref{S5}).
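The linear step in (\ref{S6}) amounts to stacking the $K^2$ constraints into a single system $\textbf{M}\textbf{w}_0=\textbf{d}$ and solving it with the Moore-Penrose pseudo-inverse. A minimal NumPy sketch with random stand-ins for the stacked coefficient rows (the structure of $\textbf{U}$ and of the channel quantities is abstracted away here):

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_unknowns = 2, 8                  # requires n_unknowns >= K^2

# Each row encodes one (k, l) constraint; random stand-ins here.
Mmat = rng.standard_normal((K * K, n_unknowns)) \
     + 1j * rng.standard_normal((K * K, n_unknowns))
d = np.eye(K).reshape(-1)             # targets delta[k - l], stacked

w0 = np.linalg.pinv(Mmat) @ d         # generalized (Moore-Penrose) inverse solution
residual = np.linalg.norm(Mmat @ w0 - d)
```

Since a random wide system of this size has full row rank almost surely, the residual is at machine-precision level; with $NL^2<K^2$ the system would generally be inconsistent.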
We use $\textbf{v}_k=\sqrt{a}\textbf{v}_{k,0}, \: \forall k$ and $\textbf{W}=\sqrt{b}\textbf{W}_0$ where $a$ and $b$ are two non-negative real numbers. From this point on, we can formulate the problem in terms of $a$ and $b$ as in MRC-ZF and find the optimal values by following the same procedure. SVD-ZF is summarized in Algorithm 4. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 4} (SVD-ZF) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \begin{itemize} \item Find $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ by solving (\ref{S6}) using generalized matrix inversion. \item Apply the same procedure as in MRC-ZF to find the solution. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} As in MRC-ZF, this method has a feasibility condition involving $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$. We also need $NL^2 \geq K^2$ to find a solution of (\ref{S6}). Hence, SVD-ZF also requires good channel estimation quality and a relatively small number of users. SVD-ZF is computationally less complex than MRC-ZF, as it does not require a QCQP solver. \vspace{-4mm} \section{Numerical Results} In this section, we compare the performances of the proposed methods with the performance bound by Monte Carlo simulations. Throughout the simulations, we assume that $\gamma_k = \gamma, \: \forall k$. We use a realistic channel model including path-loss, shadowing and small-scale fading defined in a 3GPP standard \cite{3GPP}. We consider a circular region in which the CP is at the center and the RRHs and MSs are distributed uniformly.\footnote{We choose the configurations where CP-to-RRH, CP-to-MS and RRH-to-MS distances are all at least 50 meters.} In Table I, the model parameters are presented.
\vspace{-4mm} {\renewcommand{\arraystretch}{0.7} \begin{table}[ht] \caption{Model parameters used in simulations} \centering \begin{tabular}{| c | c |} \hline Cell radius & $1$ km \\ \hline Path-loss for Fronthaul Link ($P_{L,1}$) & $P_{L,1}=24.6+39.1\log_{10}d$ where $d$ is in meters \\ \hline Path-loss for Access Link ($P_{L,2}$) & $P_{L,2}=36.8+36.7\log_{10}d$ where $d$ is in meters \\ \hline Antenna gain (CP, RRH, MS) & $(9, 0, 0)$ dBi \\ \hline Noise Figure (RRH, MS) & $(2, 10)$ dB \\ \hline Bandwidth & 10 MHz \\ \hline Noise power spectral density & $-174$ dBm/Hz \\ \hline Small-scale fading model & Rayleigh, $\mathcal{C}\mathcal{N}(\textbf{0},\textbf{I})$ \\ \hline Log-normal shadowing variance (CP, RRH) & $(6, 4)$ dB \\ \hline \end{tabular} \end{table} } \vspace{-4mm} To generate channel estimates and channel estimation errors, we assume that pilot signal powers are adjusted according to the channel amplitudes so that the power ratios of $\mathbb{E}(|\Delta\textbf{G}_n|^2)/|\textbf{G}_n|^2$ and $\mathbb{E}(|\Delta\textbf{h}_{kn}|^2)/|\textbf{h}_{kn}|^2, \: \forall n, k$ are all equal to some known constant $\gamma_{\text{ch}}$. Here $\gamma_{\text{ch}}$ is a measure of channel estimation quality. Using the channel estimates and $\gamma_{\text{ch}}$, one can evaluate $\sigma_{1,n}^2, \: \forall n$ and $\sigma_{2,k,n}^2, \: \forall n, k$ accordingly. In simulations, we observe the effect of parameters $\gamma, K, N, L, M, \gamma_{\text{ch}}$. We know that there is always a non-zero probability of having an infeasible solution. To measure the ratio of feasibility, we define $P_{\text{success}}$ showing the percentage of feasible designs. We run $100$ Monte Carlo trials in each case. To evaluate the $P$ values for a method, we average the results over Monte Carlo trials with feasible solutions. 
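The error generation described above can be sketched as follows (a NumPy sketch; here a single error realization is normalized so that the power ratio equals $\gamma_{\text{ch}}$ exactly, which is one possible implementation choice rather than necessarily the one used in the paper's simulations):

```python
import numpy as np

def add_channel_error(G, gamma_ch, rng):
    """Return (G_hat, Delta) with ||Delta||_F^2 / ||G||_F^2 == gamma_ch."""
    E = rng.standard_normal(G.shape) + 1j * rng.standard_normal(G.shape)
    Delta = np.sqrt(gamma_ch) * (np.linalg.norm(G) / np.linalg.norm(E)) * E
    return G + Delta, Delta

rng = np.random.default_rng(4)
G = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))  # fading stand-in
G_hat, Delta = add_channel_error(G, gamma_ch=0.01, rng=rng)
ratio = np.linalg.norm(Delta)**2 / np.linalg.norm(G)**2             # equals gamma_ch
```

Given the estimates and $\gamma_{\text{ch}}$, the error variances $\sigma_{1,n}^2$ and $\sigma_{2,k,n}^2$ follow directly from this scaling.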
\vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.32\textwidth]{convergence.eps} \caption{Convergence characteristics of AO and TSM.} \end{figure} \vspace{-6mm} In Fig. 2, we see a typical convergence graph of AO and TSM for $(K, N, L, M)=(4, 4, 4, 8)$, $\gamma=5$ dB and $\gamma_{\text{ch}}=0.01$. We observe that the $P$ values decrease smoothly and both algorithms obtain a solution after a few iterations. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.27\textwidth]{P_vs_gamma.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_gamma.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $\gamma$. $(K, N, L, M)=(4, 4, 4, 8), \: \gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Fig. 3, we observe the effect of the SINR threshold $\gamma$. For all $\gamma$ values, the performance losses compared to the bound are roughly $3$ and $6$ dB for AO and TSM, respectively. MRC-ZF and SVD-ZF perform significantly worse than the convex optimization based methods. We observe that the $P_{\text{success}}$ values of both methods decrease with $\gamma$. Even when $\gamma=0$ dB, the infeasibility ratio is about $30$ percent for both methods. The results imply that even for a relatively low channel estimation error, MRC-ZF and SVD-ZF may fail to solve the joint beamforming design problem with large probability. We observe that AO can solve the problem with almost 100 percent feasibility, whereas the feasibility ratio of TSM is slightly smaller than that of AO. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.27\textwidth]{P_vs_K.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_K.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $K$. $(N, L, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} Fig.
4 shows the performances as the number of MSs $K$ varies. We observe that the performance loss of all methods compared to the bound increases with $K$. This is due to the fact that the bound can only be achieved when the interference due to undesired users is completely eliminated, which becomes harder as $K$ increases. We see that for large $K$ values, the feasibility ratios of MRC-ZF and SVD-ZF become very small, meaning that these methods cannot be used unless the number of users is small enough. Although it outperforms MRC-ZF and SVD-ZF, the performance of TSM also degrades for a large number of users. On the other hand, AO can successfully design beamformers with 100 percent feasibility and it requires less power for all $K$ values compared to the other three methods. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_N.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_N.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $N$. $(K, L, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_L.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_L.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $L$. $(K, N, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Figs. 5-6, we observe the effects of the number of RRHs $N$ and the number of RRH antennas $L$. The results show that AO has the best performance in all cases. Its feasibility ratio is always 100 percent in these two simulations and the power difference with respect to the bound is generally less than $5$ dB. The difference becomes smaller as $N$ or $L$ increases. As in the previous cases, MRC-ZF performs better than SVD-ZF and worse than TSM.
We also observe that there is a significant difference in the bound values between $N=2$ and $N=8$, and the same holds for $L=2$ and $L=8$. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_M.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_M.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $M$. $(K, N, L)=(4, 4, 4), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Fig. 7, we see the effect of the number of CP antennas $M$. The main observation is that the performance enhancement obtained by increasing $M$ is very limited. Adding an extra antenna to the CP mainly affects the power spent in the fronthaul transmissions. In our channel model, the CP-to-RRH channels are better than the RRH-to-MS channels in terms of path-loss, antenna gains and receiver characteristics. This is due to the fact that RRHs are stationary and one can place them by optimizing the corresponding fronthaul channel conditions. Therefore, the portion of $P_{\text{CP}}$ in the total power $P$ is small in general and hence the effect of $M$ on the performance is small compared to the effects of $N$ and $L$.
\vspace{-4mm} {\renewcommand{\arraystretch}{0.7} \begin{table}[H] \caption{$P$ values in dBW for various quadruples of $(K, N, L, M)$ for $\gamma=5$ dB and $\gamma_{\text{ch}}=0.01$} \centering \begin{tabular}{| c | c | c | c | c | c | c | c | c | c |} \hline $K$ & $N$ & $L$ & $M$ & $P$ (AO) & $P$ (TSM) & $P$ (MRC-ZF) & $P$ (SVD-ZF) & $P$ (Bound) & $P$ (AO) $\: - \: P$ (Bound) \\ \hline $2$ & $2$ & $4$ & $4$ & $27.52$ & $28.15$ & $33.3$ & $33.61$ & $24.43$ & $3.11$ \\ \hline $3$ & $2$ & $4$ & $6$ & $27.15$ & $29.42$ & $30.26$ & $31.23$ & $23.82$ & $3.33$ \\ \hline $4$ & $2$ & $4$ & $8$ & $27.5$ & $29.47$ & $32.41$ & $36.49$ & $24.03$ & $3.47$ \\ \hline $3$ & $3$ & $4$ & $4$ & $24.07$ & $27.02$ & $30.67$ & $31.32$ & $20.54$ & $3.53$ \\ \hline $4$ & $4$ & $4$ & $4$ & $25.69$ & $26.23$ & $35.31$ & $36.62$ & $20.48$ & $5.21$ \\ \hline $3$ & $4$ & $3$ & $4$ & $23.73$ & $25.54$ & $30.44$ & $32.83$ & $19.7$ & $4.03$ \\ \hline $2$ & $4$ & $2$ & $4$ & $23.59$ & $25.11$ & $31.4$ & $33.33$ & $21.28$ & $2.31$ \\ \hline \end{tabular} \end{table} } \vspace{-5mm} In Table II, we compare the performances when the ratios $\dfrac{K}{M}, \dfrac{K}{N}, \dfrac{K}{L}$ are fixed. The first three rows show the cases where $\dfrac{K}{M}, N, L$ are fixed; the first, fourth and fifth rows are related to the case where $\dfrac{K}{N}, M, L$ are fixed, and finally the last three rows correspond to the case where $\dfrac{K}{L}, M, N$ are fixed. We observe that for each of the three cases, the performance loss of the best method AO compared to the bound is an increasing function of the number of users $K$. This is due to the fact that achieving the bound requires perfect elimination of the interference due to undesired users, which becomes harder as the number of users increases.
\vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_gamma_ch_1.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_gamma_ch_2.eps}}} \caption{$P_{\text{success}}$ vs $\gamma_{\text{ch}}$. $(K, N, L, M)=(4, 4, 4, 8), \: \gamma=5$ dB.} \end{figure} \vspace{-7mm} Fig. 8 presents the feasibility ratios with respect to the channel estimation error level $\gamma_{\text{ch}}$. We observe that if the channel estimation error is large enough, all methods completely fail in the design process. We conclude that the convex optimization based methods are more robust to channel estimation errors than the methods adapted from known beamforming algorithms. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.30\textwidth]{run_times.eps} \caption{Average Complexity Comparison.} \end{figure} \vspace{-7mm} Fig. 9 shows the normalized average run-times of all methods, averaged over all previously described simulations. We observe that the complexity of the convex optimization based methods is high. The average run-time of AO is slightly larger than that of TSM. Among all methods considered, SVD-ZF is the least complex one, since it directly finds the solution (if feasible) by solving a linear matrix equation without any solver. On the other hand, its performance is not satisfactory in most cases. In the second part of the simulations, we observe the power allocation among users, the power sharing between the fronthaul and access links, and the effect of different user SINR thresholds. We consider two scenarios where the RRH and MS locations are fixed. In both cases, there are a CP with $4$ antennas, $2$ RRHs each with $4$ antennas, and $4$ MSs. We only consider the AO method when presenting the results. The first scenario includes various RRH-to-MS distances and the second one considers a symmetric placement. In Fig.
10, we present the RRH and MS placements of the two scenarios. \vspace{-4mm} \begin{figure}[ht] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.31\textwidth]{cp_rrh_ms_locations.eps}}} \qquad \subfloat{{\includegraphics[width=0.31\textwidth]{cp_rrh_ms_locations2.eps}}} \caption{Scenario 1 (left) and Scenario 2 (right) RRH and MS placements.} \end{figure} \vspace{-7mm} To present the power allocation of users for both fronthaul and access links, we define \begin{equation} \label{Pow_alloc1} \begin{aligned} P_{\text{CP},k}&=\textbf{v}_k^H\textbf{v}_k, \: P_{\text{RRH},k}=\textbf{v}_k^H \left(\widehat{\textbf{G}}\textbf{W}^H\textbf{W}\widehat{\textbf{G}}^H + \tr\left(\textbf{W}^H \textbf{W} \bm{\Sigma}_1\right)\textbf{I}_M\right) \textbf{v}_k, \\ P_{\text{RRH,amp-noise},k}&=\dfrac{1}{K}\sigma_{\text{RRH}}^2\tr(\textbf{W}^H\textbf{W}), \: \forall k \end{aligned} \end{equation} where $P_{\text{CP},k}, P_{\text{RRH},k}, P_{\text{RRH,amp-noise},k}$ are the fronthaul link power, access link power and RRH amplified noise power for $k$-th user. Notice that we have \vspace{-4mm} \begin{equation} \label{Pow_alloc2} P_{\text{CP}}=\displaystyle\sum_{k=1}^K P_{\text{CP},k}, \: P_{\text{RRH}} = \displaystyle\sum_{k=1}^K \left(P_{\text{RRH},k} + P_{\text{RRH,amp-noise},k}\right). \end{equation} We know that RRH receiver noise is amplified and forwarded to users in AF type relaying. The related term is given in (\ref{r_k_2}) as the first part of the noise term. We equally divide RRH amplified noise power between users as shown in (\ref{Pow_alloc1}). \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.32\textwidth]{power_allocation1.eps}}} \qquad \subfloat{{\includegraphics[width=0.32\textwidth]{power_allocation2.eps}}} \caption{Power Allocations for Scenario 1 (left) and Scenario 2 (right).} \end{figure} \vspace{-7mm} In the left part of Fig. 
11, we observe the power allocation of the users for Scenario 1. We take equal SINR thresholds $\gamma_k=\gamma=5$ dB for all users. Notice that the fronthaul powers are smaller than the access link powers. This is due to the path-loss and antenna gain model that we use. As the CP and RRHs are stationary, we assume that their locations can be optimized so that the corresponding channel conditions are good. We also assume that the CP antenna array design is more flexible compared to the RRH and MS equipment, and hence we use higher-gain antennas for the CP. We also observe that $P_{\text{RRH},4} > P_{\text{RRH},1}>P_{\text{RRH},2} \approx P_{\text{RRH},3}$. This is expected considering the locations of the users. The distance between MS $4$ and both RRHs is large and hence it requires the largest power. On the other hand, since MSs $2$ and $3$ are each close to an RRH, they require the smallest powers. The distance from MS $1$ to both RRHs is intermediate, and hence its power lies between those of the other MSs. As a final remark, we observe that the RRH amplified noise powers are significant, which shows that a well-optimized network design is needed to obtain sufficiently large user SINRs with AF type relaying. We present the power allocation of the users for Scenario 2 in the right part of Fig. 11. In this case, we use a symmetric placement of RRHs and MSs and consider the effect of different user SINR thresholds by taking $\gamma_1=4, \gamma_2=6, \gamma_3=8, \gamma_4=10$ dB. We observe that as the SINR threshold increases, the corresponding user power on both the fronthaul and access links also increases. The operator can adjust the user SINR thresholds according to the priority of the users. The power required to serve a higher-priority user is larger, as illustrated in this example scenario. \vspace{-5mm} \section{Conclusions} In this study, we analyzed the joint beamformer design problem in downlink C-RAN with wireless fronthaul.
We considered the case where AF type relaying is used in RRHs without the capability of baseband processing. We assumed that the channel coefficients are available with some additive error with known second order statistics. We derived a novel theoretical lower bound for the total power spent under SINR constraints. We proposed two convex optimization based methods and two other methods adapted from known beamforming strategies to observe the tightness of the bound. We have shown that the first two methods perform better, but their complexities are also higher. In general, the performance of the best method is close to the bound and the difference is less than $1$ dB for some cases. The results show the effectiveness of the bound as well as the performances of various solution techniques. For C-RAN systems, there are other beamforming design techniques that are not analyzed in this study but studied in the literature. We have found at least one method performing close to the bound, and this is enough to show the tightness of the proposed bound. As a future work, the approach used in this study to derive a performance bound can be adapted to DF and DCF based relaying and also to the full-duplex RRH case. In all simulations, we observed that SDR based methods always produce rank-1 results. Proving this fact is left for a future study. Finally, one can investigate the conditions required for the equality case of the bound to gain insight into the optimal algorithm. \vspace{-3mm} \appendices \section{Achievability of Rate} We use the idea given in \cite{InformationTheory} to show that the rate $\log_2(1+\text{SINR}_k)$ is achievable for the $k$-th user, where $\text{SINR}_k$ is defined by (\ref{SINR_2}). We find a lower bound to the mutual information $I(r_k; s_k)$ between the received signal $r_k$ and the information signal $s_k$.
Using the facts that conditioning decreases entropy $h(\cdot)$, the entropy is maximized for Gaussian distribution when the variance is fixed, the entropy is invariant under translation, and $s_k$ and $r_k$ are zero-mean, we can write \begin{equation} \label{App1} \begin{aligned} I(r_k; s_k)&=h(s_k)-h(s_k | r_k) = h(s_k)-h(s_k-\alpha r_k | r_k) \geq h(s_k)-h(s_k-\alpha r_k) \\ &\geq \log\left(\pi e \mathbb{E}\left[|s_k|^2\right]\right) - \log\left(\pi e \mathbb{E}\left[|s_k-\alpha r_k|^2\right]\right) = \log\left(\dfrac{\mathbb{E}\left[|s_k|^2\right]}{\mathbb{E}\left[|s_k-\alpha r_k|^2\right]}\right). \end{aligned} \end{equation} Here we assume that $s_k$ is complex Gaussian and $\alpha$ is any complex constant. (\ref{App1}) is true for any $\alpha$ and specifically we choose $\alpha=\mathbb{E}\left[r_k^{*}s_k\right]/\mathbb{E}\left[|r_k|^2\right]$ to get \vspace{-5mm} \begin{equation} \label{App2} I(r_k; s_k) \geq \log\left(1 + \dfrac{|\mathbb{E}\left[r_k^{*}s_k\right]|^2}{\mathbb{E}\left[|r_k|^2\right] \cdot \mathbb{E}\left[|s_k|^2\right] - |\mathbb{E}\left[r_k^{*}s_k\right]|^2}\right). \end{equation} \noindent Using the equation of $r_k$ in (\ref{r_k_2}) and the fact $\mathbb{E}\left[|s_k|^2\right]=1$, we obtain that $|\mathbb{E}\left[r_k^{*}s_k\right]|^2 = P_d$ and $\mathbb{E}\left[|r_k|^2\right] = P_d + P_{I,1}+P_{I,2}+P_n$ where $P_d, P_{I,1}, P_{I,2}, P_n$ are defined in (\ref{P_d}). Therefore we conclude that $I(r_k; s_k)$ is at least $\log_2\left(1+\dfrac{P_d}{P_{I,1}+P_{I,2}+P_n}\right) = \log_2(1+\text{SINR}_k)$ bits. \section{Proof of (\ref{B_bound})} Using (\ref{B5}) and (\ref{B7}), we get \begin{equation} \label{B9} x_1 \geq c_1\left[c_2x_2 + (c_3x_5+c_4)\left(x_3+\dfrac{c_2}{d_1}x_3\right)+c_5\right], \: \: y \geq x_2+x_5+(c_3x_5+c_6)\dfrac{x_3}{d_1}. 
\end{equation} (\ref{B6}) and (\ref{B9}) yield \begin{equation} \label{B10} (d_1-c_1c_2)x_2 \geq c_1\left[(c_3x_5+c_4)\left(x_3+\dfrac{c_2}{d_1}x_3\right)+c_5\right] \end{equation} and (\ref{B8}) and (\ref{B9}) yield \begin{equation} \label{B11} \left[\dfrac{x_5d_2}{c_1}-(c_3x_5+c_4)\left(1+\dfrac{c_2}{d_1}\right)\right]x_3 \geq c_2x_2+c_5. \end{equation} (\ref{B10}) implies that $d_1>c_1c_2$. By (\ref{B10}), (\ref{B11}) and some simplifications, we obtain that \begin{equation} \label{B12} x_3 \geq \dfrac{d_1c_5}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)} \end{equation} and the denominator in (\ref{B12}) should be positive. Using (\ref{B10}) and (\ref{B12}), we get \begin{equation} \label{B13} x_2 \geq \dfrac{d_2c_5}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)}. \end{equation} Using (\ref{B9}), (\ref{B12}) and (\ref{B13}), we find that \begin{equation} \label{B14} y \geq x_5+\dfrac{d_2c_5+c_5(c_3x_5+c_6)}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)}. \end{equation} Define $x=ax_5-b$ where $a=\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3, \: b=c_4(d_1+c_2)$. Since $x$ is the denominator of (\ref{B12}), it is positive. As $x_5$ and $b$ are positive, we conclude that $a$ is also positive. We can write (\ref{B14}) in terms of $x$ as \begin{equation} \label{B15} y \geq \dfrac{1}{a}\left(b+c_3c_5+x+\dfrac{c_3c_5b+c_5(d_2+c_6)a}{x}\right). \end{equation} Finally, using (\ref{B15}) and the Arithmetic-Geometric Mean Inequality, we get the desired result in (\ref{B_bound}). \section{Introduction} In new generation communication systems, the number of devices participating in the network grows exponentially. Furthermore, data rate requirements become challenging to satisfy as the network density increases. Standard cellular systems, where a set of mobile stations (MSs) are served by a single central base station (BS), have limited performance due to inter/intra-cell interference.
In 5G, the Cloud Radio Access Network (C-RAN) is a candidate solution that uses the multi-cell cooperation idea. In the C-RAN hierarchy, base stations are simple radio units called remote radio heads (RRHs), which only implement radio functionality such as RF conversions, filtering, and amplifying. All baseband processing is done over a pool of central processors (CPs) which are connected to RRHs with finite capacity fronthaul links. This approach decreases the cost of deployment as compared to the traditional systems where each BS has its own on-site baseband processor. Furthermore, multi-cell cooperation enables better resource allocation and enhances the performance. The main architecture of a typical C-RAN system is described in \cite{C-RAN}. In a C-RAN cluster of RRHs and MSs, all RRH-to-MS transmissions are performed in the same time and frequency band to use the spectrum efficiently. In traditional C-RAN networks, all RRHs are connected to a CP by means of wired fronthaul links with high capacity. User data is shared among RRHs using fronthaul links, enabling an optimized resource allocation. On the other hand, in some situations, the cost of using wired links can be high, especially in urban areas. As an alternative approach, one can use a large base station located close to the CP to send the user data from the CP to RRHs through wireless links. By this method, the rate of data transmission in fronthaul links can be adaptively adjusted using proper power allocations and beamforming schemes. In the wireless fronthaul case, the frequency bands of the fronthaul and the access links (links between RRHs and MSs) may be the same or different. In the in-band scenario, where the two frequency bands are the same, the RRHs should be capable of performing self-interference cancellation, which increases the equipment complexity. To make the RRHs simpler, either the two frequency bands may be separated or a time-division based transmission can be used.
In a C-RAN system with wireless fronthaul, the main aim is to design proper beamformers to optimize the network. This problem is similar to a two-hop relay design problem. In relay systems, there are different types of multi-hop mechanisms such as amplify-and-forward (AF), decode-and-forward (DF), decompress-and-forward (DCF), etc. The corresponding method is determined by the operation applied by the RRHs to the signal received from the fronthaul links before transmitting to the users. AF type systems are the simplest ones, where RRHs only apply some scaling to the received data \cite{AF-C-RAN}-\cite{AF-Relay2}. In DF based systems, RRHs decode the user data, which requires baseband processing capability at the RRHs \cite{DF-1}-\cite{DF-2}. In DCF based systems, both decoding and decompressing abilities are necessary \cite{DCF-1}-\cite{DCF-2}. In DF and DCF based systems, there is some cooperation between the CP and RRHs to decide which RRHs decode which user data. In general, this requires a combinatorial search, making the design complex. On the other hand, as the user data is decoded, assuming perfect decoding for sufficiently high signal-to-noise-ratio (SNR), the interference between user signals can be eliminated at the RRHs, allowing a higher performance to be achieved for the users. In general, AF systems are simpler, but the interference cannot be perfectly eliminated at the RRHs. In C-RAN systems, it is intended to make RRHs as simple as possible to decrease the deployment cost, making AF systems more attractive. To optimize a C-RAN network by designing beamformers, the channel coefficients should be known with some accuracy. In general, perfect channel state information (CSI) is not available as the channel estimation is done via pilot signals with finite power. There are different models for the channel estimation error. It can be shown that linear channel estimation methods with orthogonal pilot signals yield an additive channel estimation error.
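As a small numerical illustration of this last point (a hedged sketch with assumed toy values, not part of the system model), a least-squares estimate from a unit-modulus pilot of length $T$ has the form $\widehat{\textbf{h}}=\textbf{h}+\textbf{N}\textbf{p}^*/T$, i.e., the error is additive, zero-mean Gaussian, independent of the channel, and has a known per-entry variance $\sigma_n^2/T$:

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, sigma_n = 4, 8, 0.1   # antennas, pilot length, noise std (assumed toy values)

# Unit-modulus pilot sequence; here "orthogonal" reduces to p^H p = T.
p = np.exp(2j * np.pi * np.arange(T) / T)

# A fixed channel realization to be estimated.
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

errs = []
for _ in range(20000):
    Noise = sigma_n * (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T))) / np.sqrt(2)
    Y = np.outer(h, p) + Noise        # received pilot block: Y = h p^T + N
    h_hat = Y @ p.conj() / T          # least-squares estimate
    errs.append(h_hat - h)            # additive error N p^* / T, independent of h
errs = np.array(errs)

emp_var = np.mean(np.abs(errs) ** 2)  # empirical per-entry error variance
print(emp_var, sigma_n**2 / T)        # close to the predicted sigma_n^2 / ||p||^2
```

The empirical error variance matches the second order statistic $\sigma_n^2/T$, which is exactly the kind of side information assumed available in the error models discussed next.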
The error is a random vector whose statistics may or may not be known. Some works assume that first or second order statistics are known \cite{AF-Relay1}, \cite{Ch-Add-1}-\cite{AF-Relay3}, and some other works use the model where the error is norm-bounded \cite{AF-C-RAN}, \cite{AF-Relay2}, \cite{AF-Relay4}. The first approach is used when the quantization error in channel estimation is negligible and the second one is used when the quantization error is dominant \cite{AF-Relay5-ch-err}. Using the knowledge about the channel error vectors, the beamforming design problem can be well optimized and robustness against errors can be achieved. In this paper, we consider a downlink C-RAN system with wireless fronthaul where the transmissions of the fronthaul and access links are in the same frequency band but in different time-slots. We assume that there is partial channel knowledge where the second order statistics of the channel error are perfectly known. We optimize the fronthaul and access link beamformers with AF type relaying in the RRHs. Optimization is performed to minimize the total power spent under user signal-to-interference-and-noise-ratio (SINR) constraints. In the literature, the power minimization problem is referred to as Quality-of-Service (QoS) \cite{AF-Relay2}. In this approach, each user is guaranteed a certain quality of service while the total power spent, which is one of the major costs of an operator, is minimized. In this work, our main aim is to find a theoretical lower bound for the total power spent in the system. In showing the tightness of a lower bound, the existence of an algorithm that comes close to the bound is sufficient since no algorithm can perform better than a lower bound. To show that the given bound is tight enough, we consider four different design methods with different complexities.
The first method is Alternating Optimization (AO), which consecutively solves a series of beamforming design problems using convex optimization with the semi-definite relaxation (SDR) approach. Both fronthaul and access link beamformers are designed using convex optimizations. The performance of this method is close to the bound, but its complexity is high in general. The second method is a modified version of AO called Total SNR Maximization (TSM), where the fronthaul beamformer design is based on the maximization of the total SNR at the RRHs. The access link beamformers are found as in AO. The third and fourth methods are proposed as a mixture of standard beamforming design methods, namely maximal ratio combining (MRC), zero forcing (ZF) and singular value decomposition (SVD). The third method is a combination of MRC and ZF, and hence it is named MRC-ZF. In this method, the CP beamformers (related to the fronthaul link) are found using MRC, whereas the RRH beamformers (related to the access link) are found using ZF. The fourth method is called SVD-ZF and the corresponding CP and RRH beamformers are designed accordingly. MRC-ZF and SVD-ZF can directly find beamformers without using convex optimization, and hence they are simpler compared to AO and TSM. They are considered to make a comparison between the well-known beamforming methods and the high-complexity convex optimization based methods. The contributions of the paper can be listed as below: \begin{itemize} \item We derive a theoretical lower bound for the total power spent in the system to serve multiple users for a given set of network parameters. By detailed simulations, we show the tightness of the bound. In general, the papers related to C-RAN propose different design methods whose optimality is not known due to the lack of a theoretical bound or a globally optimum solution. To the best of our knowledge, there is no other work deriving such a bound. \item We propose four novel design methods.
Two of them are based on convex optimization with the SDR approach and the other two are based on a combination of well-known methods. Because of the mixed structure of the CP and RRH beamformers in the SINR expressions, convex optimization cannot be directly applied. We reorganize the related expressions so that the SDR approach becomes applicable. For similar reasons, the direct application of the well-known methods is also not possible. We solve a system of matrix equations to apply MRC, ZF and SVD. \item We perform detailed simulations to observe the performances of the proposed methods. We make a comparison to the theoretical bound for different network parameters. \end{itemize} The organization of the paper is as follows. In Section II, related works are reviewed. Section III describes the general system model. In Section IV, a novel theoretical performance bound for the proposed problem is derived. Section V includes the convex optimization based methods AO and TSM. The modified beamforming methods MRC-ZF and SVD-ZF are described in Section VI. In Section VII, simulation results are presented. Finally, Section VIII concludes the paper. \subsection*{Notation} Throughout the paper, vectors are denoted by bold lowercase letters and matrices are denoted by bold uppercase letters. $(\cdot)^T, (\cdot)^H,$ and $\tr(\cdot)$ indicate the transpose, conjugate transpose and trace operators, respectively. $\textbf{0}$ denotes the all-zero matrix, and $\textbf{A} \succeq 0$ implies that the matrix $\textbf{A}$ is Hermitian and positive semi-definite. $\text{diag}(x_1, x_2, \ldots, x_n)$ denotes the diagonal matrix with diagonal elements $x_1, x_2, \ldots, x_n$ and $\textbf{I}_n$ denotes the $n \times n$ identity matrix. $\lmin{\cdot}, \lambda_i(\cdot), e_i(\cdot)$ denote the minimum eigenvalue, the $i$-th largest eigenvalue and the corresponding unit-norm eigenvector of a Hermitian positive semi-definite matrix, respectively.
$\norm{\cdot}$ denotes the $\ell_2$-norm of the corresponding matrix and $\mathbb{E}[\cdot]$ denotes the expectation operator. $\tvec{\textbf{A}}$ is the column vector consisting of the columns of $\textbf{A}$. The symbol $\otimes$ denotes the Kronecker product. Finally, $\mathbb{C}$ denotes the set of complex numbers and $\delta[\cdot]$ corresponds to the discrete impulse function satisfying $\delta[0]=1, \: \delta[x]=0$ for all $x \neq 0$. \section{Related Studies} In this section, we review related studies existing in the literature. Firstly, we present the works related to wired fronthaul links and mention the main differences compared to the wireless case. Secondly, we review the studies related to AF, DF and DCF type wireless fronthaul systems and indicate the main differences from our work. Thirdly, we mention papers with different channel uncertainty models used in C-RAN system designs. Finally, we state the major differences from papers on standard relay networks. \vspace{-5mm} \subsection{Wired Fronthaul} There are many studies in the literature related to multi-cell cooperation techniques for wired fronthaul. In \cite{Wired-Rate1}-\cite{Wired-Rate3}, optimization is performed to maximize the data rates of users under certain transmit power and fronthaul capacity constraints. The optimization of the SINRs of users is analyzed in \cite{Wired-UDD} using uplink-downlink duality. In \cite{Wired-LimitedFronthaul}, the total transmit power is minimized under fronthaul capacity constraints. In \cite{Wired-SDR}, the cost function consists of a weighted sum of the total transmit power and the total fronthaul data. As another approach, \cite{Wired-UserMax} aims at finding the largest set of users which can be served by the system where each user's data is sent only by a single RRH. The power consumption of RRHs under active and sleeping modes can also be included in the power minimization problem as done in \cite{Wired-GreenCRAN}.
In \cite{Wired-ZF}, a standard ZF beamformer design is used; however, its performance in eliminating the interference is limited. In \cite{Wired-Heuristic1}-\cite{Wired-Heuristic3}, the cooperation strategy is found using some heuristic search techniques. Possible strategies for the imperfect channel case are considered in \cite{Ch-Add-3}, \cite{Wired-Imperfect-CSI}. Cluster formation \cite{Wired-ClusterFormation} and the effect of user traffic delay \cite{Wired-Delay} are also analyzed in the literature. For the wired fronthaul case, as there is no interference between different users at the RRHs, there is a natural combinatorial user selection problem. The CP determines the set of users to be served by each RRH (possibly intersecting) and sends the corresponding data through the fronthaul links. In general, most of the studies assume that perfect user data is available at the RRHs after fronthaul transmission, while some works also take the decompression error into account. Since the fronthaul transmission takes place over cables, there is no beamforming at the CP. The design problem is to decide on the cooperation strategy and the beamforming coefficients for the access link. On the other hand, in wireless fronthaul networks, both the fronthaul and access links have their own beamformers, which are the main design parameters. Considering the differences in fronthaul structures, the methods proposed for the wired case cannot be directly applied to the wireless case. \vspace{-4mm} \subsection{Relaying Mechanism for Wireless Fronthaul} Works related to the wireless fronthaul case are limited in number compared to the standard wired case. The problem for the wireless fronthaul case is similar to two-hop relaying. The most studied relaying mechanisms for the C-RAN with wireless fronthaul concept are AF, DF and DCF. In \cite{DF-1}, DF based relaying is assumed, where each RRH can decode only a single user's data at once. If more than one user's data is to be decoded, decoding is done by time division.
The combinatorial problem of choosing the set of user data to be decoded by each RRH is solved in \cite{DF-1}, while an SDR based beamformer optimization is done under the perfect CSI assumption. \cite{DF-2} also analyzes DF based relaying, where a weighted sum of user data rates is maximized under a transmit power limit. There is a constraint that each RRH can serve a single user. Beamformer optimization is performed using SDR and perfect CSI is assumed. In \cite{DCF-1}, both DF and DCF based approaches are considered, where the set of user data to be decoded by each RRH is assumed to be known and beamforming optimization is done using the difference of convex method. Data rate maximization under a power limit is analyzed for the perfect CSI case. \cite{DCF-2} is the generalized version of \cite{DCF-1} where there are multiple RRH clusters, each controlled by a different CP. \cite{AF-C-RAN} uses AF type relaying with a norm-bounded channel estimation error model. Using worst-case SINR formulas, the total power is minimized under SINR constraints. In that work, the fronthaul beamformers are assumed to be known and the access link beamformers are designed using SDR based methods, along with a ZF based approach implemented for comparison. In \cite{AF-Relay2}, a two-hop AF relaying problem is studied under the norm-bounded channel error model. As all independent sources have a single antenna, fronthaul beamforming is not applicable and only access link beamforming design is studied. SDR based optimization is used to minimize the total transmit power under SINR constraints. Because of the combinatorial nature of DF and DCF based relaying schemes, the methods used for their fronthaul beamforming design cannot be directly adapted to AF type relaying. For access link beamforming design, the SDR based approach is widely used for all types of relaying schemes. Some works also consider well-known beamforming methods (such as ZF) for comparison.
To the best of our knowledge, there is a very limited amount of work on C-RAN with wireless fronthaul and AF relaying. Furthermore, in such studies, neither is the fronthaul and access link beamforming design jointly considered nor is a theoretical bound derived. \vspace{-5mm} \subsection{Channel Error Model} In the C-RAN concept, three types of channel error models are mostly used. The first one is the perfect CSI model where the channel coefficients are assumed to be perfectly known. Although it is unrealistic, the methods proposed for this case may provide some insights. Furthermore, in most of the cases, it is possible to modify the corresponding algorithms accordingly when the channel is partially known. The papers \cite{DF-1}-\cite{DCF-2}, \cite{Wired-Rate1}-\cite{Wired-Heuristic3} all assume perfect CSI. The second approach is the norm-bounded error model, where the error vectors are assumed to lie in a sphere of known radius. The works with this assumption perform beamforming design using worst-case SINRs, defined as the minimum SINR values for the given error norm bounds. \cite{AF-C-RAN}, \cite{AF-Relay2}, \cite{AF-Relay4}-\cite{AF-Relay5-ch-err} and some references therein use this method. The third approach, which is also used in our work, assumes that the second order statistics (mean and covariance matrices) of the channel estimation error vectors are known. When this approach is used, the mean powers of the signal, interference and noise terms are used in the design process. \cite{AF-Relay1}, \cite{Ch-Add-1}-\cite{AF-Relay3} use the last approach. \vspace{-5mm} \subsection{Standard Relay Networks} The C-RAN with wireless fronthaul concept is similar to two-hop multi source/destination multi-antenna relaying networks, and some beamforming design techniques used in the standard relaying literature (such as SDR) can be adapted to the C-RAN framework.
On the other hand, joint optimization of fronthaul and access link beamformers is not widely considered in standard relaying problems. \cite{AF-Relay1}-\cite{AF-Relay2}, \cite{AF-Relay3}-\cite{AF-Relay5-ch-err}, \cite{AF-Relay6}-\cite{AF-Relay7} include beamforming designs for standard relaying problems, which are all special cases of the problem we consider. Hence, some methods proposed for relaying problems can be used for our purposes, but none of them directly provides a solution. \section{System Model} We consider the downlink of a C-RAN cluster including a CP with $M$ antennas, $N$ RRHs each with $L$ transmit/receive antennas, and $K$ MSs each with a single antenna. All CP-to-RRH and RRH-to-MS channels are assumed to be flat, constant over a transmission period, and known by the CP with some additive Gaussian error with known second order statistics. We assume a two stage transmission scheme where the fronthaul and access link transmissions are performed in different time slots. In the first stage, the user data is sent from the CP to the RRHs over wireless channels. The RRHs apply a linear transformation to the received data using beamforming matrices, as in AF relaying, and forward the transformed signal to the MSs in the second stage. We assume that the RRHs are simple radio units without the capability of baseband processing and hence they cannot decode the user data. Therefore, the AF relaying mechanism is considered in this model. In Fig. 1, we see the general block diagram of the model used.
\vspace{-5mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.62\textwidth]{wireless_fronthaul_v3.png} \caption{Block Diagram of Downlink C-RAN with Wireless Fronthaul.} \end{figure} \vspace{-5mm} We denote the channel between the CP and the $n$-th RRH as $\textbf{G}_n \in \mathbb{C}^{M \times L}$, the channel between the $n$-th RRH and the $k$-th MS as $\textbf{h}_{kn} \in \mathbb{C}^L$, the beamformer vector of the CP for the $k$-th user as $\textbf{v}_{k}\in \mathbb{C}^M$, and the beamforming matrix for the $n$-th RRH as $\textbf{W}_n\in \mathbb{C}^{L \times L}$. The received signal of the $n$-th RRH in the first transmission stage can be written as \begin{equation} \textbf{x}_n = \textbf{G}_n^H\displaystyle\sum_{k=1}^K\textbf{v}_{k}s_k+\textbf{z}_n, \quad n=1, 2, \ldots, N \end{equation} where $s_k$ denotes the $k$-th user data, which satisfies $\mathbb{E}[|s_k|^2]=1, \quad \forall k=1, 2, \ldots, K$, and $\textbf{z}_n \sim \mathcal{C}\mathcal{N}(\textbf{0}, \sigma_{\text{RRH}}^2\textbf{I}_{L})$ is the noise term in the corresponding RRH. After the first stage, the signal transformed by the $n$-th RRH is given by \begin{equation} \textbf{y}_n = \textbf{W}_n \textbf{x}_n, \quad n=1, 2, \ldots, N. \end{equation} In this case, the received signal at the $k$-th MS can be expressed as \begin{equation} \begin{aligned} r_k &= \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H\textbf{y}_n + n_k = \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \left(\textbf{G}_n^H\displaystyle\sum_{\ell=1}^K\textbf{v}_{\ell}s_{\ell} + \textbf{z}_n\right) + n_k \\ &= \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \textbf{G}_n^H \textbf{v}_{k}s_{k}+\displaystyle\sum_{n=1}^N \displaystyle\sum_{\ell \neq k}^K \textbf{h}_{kn}^H \textbf{W}_n \textbf{G}_n^H \textbf{v}_{\ell}s_{\ell} + \displaystyle\sum_{n=1}^N \textbf{h}_{kn}^H \textbf{W}_n \textbf{z}_n + n_k.
\end{aligned} \end{equation} Here $n_k \sim \mathcal{C}\mathcal{N}(0, \sigma_{\text{MS}}^2)$ denotes the noise term in the $k$-th MS. In order to simplify expressions, we define augmented channel, beamformer and noise vectors/matrices as given below: \begin{equation} \begin{aligned} \textbf{h}_k &= [\textbf{h}_{k1}^T \: \textbf{h}_{k2}^T \: \cdots \: \textbf{h}_{kN}^T]^T \: : \: NL \times 1, \quad \textbf{W} = \text{diag}\left(\textbf{W}_1, \: \textbf{W}_2, \: \ldots, \: \textbf{W}_N \right) \: : \: NL \times NL, \\ \textbf{G} &= \left[\textbf{G}_1 \: \textbf{G}_2 \: \cdots \: \textbf{G}_N \right] \: : \: M \times NL, \quad \textbf{z} = [ \textbf{z}_1^T \: \textbf{z}_2^T \: \cdots \: \textbf{z}_N^T]^T \: : \: NL \times 1. \\ \end{aligned} \end{equation} Using the augmented variables, we can write $r_k$ as \begin{equation} \label{r_k_1} r_k = \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k s_k + \displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell} + \textbf{h}_k^H\textbf{W}\textbf{z} + n_k. \end{equation} We model the channel estimates as $\textbf{G}_n = \widehat{\textbf{G}}_n + \Delta \textbf{G}_n, \: \: \textbf{h}_{kn} = \widehat{\textbf{h}}_{kn} + \Delta \textbf{h}_{kn}$ where $\widehat{\textbf{G}}_n $ and $\widehat{\textbf{h}}_{kn}$ are channel estimates, $\Delta \textbf{G}_n$ is a zero-mean complex Gaussian matrix with independent entries each with variance $\sigma_{1,n}^2$ and $\Delta \textbf{h}_{kn} \sim \mathcal{C}\mathcal{N}\left(\textbf{0}, \sigma_{2,k,n}^2\textbf{I}_{L}\right)$ is a circularly symmetric Gaussian vector. We also assume that $\Delta \textbf{G}_n$ and $\Delta \textbf{h}_{kn}$ are independent for all $n$ and $k$. Using the error vectors and matrices, we can form the corresponding augmented variables as shown in (\ref{h_aug}). 
\begin{equation} \label{h_aug} \begin{aligned} \widehat{\textbf{h}}_k &= [\widehat{\textbf{h}}_{k1}^T \: \widehat{\textbf{h}}_{k2}^T \: \cdots \: \widehat{\textbf{h}}_{kN}^T]^T \: : \: NL \times 1, \quad \widehat{\textbf{G}} = \left[\widehat{\textbf{G}}_1 \: \widehat{\textbf{G}}_2 \: \cdots \: \widehat{\textbf{G}}_N \right] \: : \: M \times NL, \\ \Delta \textbf{h}_k &= [\Delta \textbf{h}_{k1}^T \: \Delta \textbf{h}_{k2}^T \: \cdots \: \Delta \textbf{h}_{kN}^T]^T \: : \: NL \times 1, \quad \Delta \textbf{G} = \left[\Delta \textbf{G}_1 \: \Delta \textbf{G}_2 \: \cdots \: \Delta \textbf{G}_N \right] \: : \: M \times NL. \end{aligned} \end{equation} Using the new variables and (\ref{r_k_1}), we can write $r_k$ as \begin{equation} \label{r_k_2} r_k = \underbrace{\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k s_k}_{\text{desired}} + \underbrace{\left(\textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k - \widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right) s_k}_{\text{interference part 1}} + \underbrace{\displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell}}_{\text{interference part 2}} + \underbrace{\textbf{h}_k^H\textbf{W}\textbf{z} + n_k}_{\text{noise}}. \end{equation} In (\ref{r_k_2}), the desired part includes the desired signal for the $k$-th MS. Notice that it contains only the channel estimates for the $k$-th user, which is the only part useful to the receiver of the corresponding MS. Interference part 1 is related to the channel mismatch for the $k$-th user signal. Although it includes an $s_k$ term, the corresponding signal is not useful as its coefficient is not known by the receiver due to the uncertainty in the channel estimates. Interference part 2 is the actual interference signal including the signals intended for the other users. The noise term is the combination of the amplified and forwarded RRH receiver noise and the MS receiver noise.
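The augmented form in (\ref{r_k_1}) is a purely notational rewriting of the per-RRH sums. A minimal numerical sketch (with assumed toy dimensions; the MS noise $n_k$ is omitted since it is common to both forms) confirming that the block-diagonal construction of $\textbf{W}$ reproduces the per-RRH expression:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L, K = 4, 2, 3, 2                  # toy dimensions (assumed for illustration)

crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

G_n  = [crandn(M, L) for _ in range(N)]              # CP-to-RRH channels G_n
h_kn = [[crandn(L) for n in range(N)] for k in range(K)]  # RRH-to-MS channels h_kn
v    = [crandn(M) for _ in range(K)]                 # CP beamformers v_k
W_n  = [crandn(L, L) for _ in range(N)]              # RRH beamforming matrices W_n
s    = crandn(K)                                     # user symbols
z_n  = [crandn(L) for _ in range(N)]                 # RRH noise

k = 0
# Per-RRH form: r_k = sum_n h_kn^H W_n (G_n^H sum_l v_l s_l + z_n)
u = sum(v[l] * s[l] for l in range(K))
x = [Gn.conj().T @ u + zn for Gn, zn in zip(G_n, z_n)]
r_direct = sum(h_kn[k][n].conj() @ (W_n[n] @ x[n]) for n in range(N))

# Augmented form: r_k = h_k^H W (G^H sum_l v_l s_l + z) with block-diagonal W
h_k = np.concatenate(h_kn[k])
W = np.zeros((N * L, N * L), dtype=complex)
for n in range(N):
    W[n*L:(n+1)*L, n*L:(n+1)*L] = W_n[n]
G = np.hstack(G_n)
z = np.concatenate(z_n)
r_aug = h_k.conj() @ W @ (G.conj().T @ u + z)

print(abs(r_direct - r_aug))             # ~0 up to floating point
```

The two evaluations agree to machine precision, so any design done in the augmented variables carries over to the per-RRH quantities.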
Using the equation in (\ref{r_k_2}), we define \begin{equation} \label{SINR_1} \text{SINR}_k = \dfrac{P_d}{P_{I,1}+P_{I,2}+P_n} \end{equation} where \begin{equation} \label{P_d} \begin{aligned} P_d &= \mathbb{E}\left\{\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k s_k\right|^2\right\}, \: \: &&P_{I,1} = \mathbb{E}\left\{\left|\left(\textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_k - \widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right) s_k \right|^2\right\} \\ P_{I,2} &= \mathbb{E}\left\{\left|\displaystyle\sum_{\ell\neq k} \textbf{h}_k^H\textbf{W}\textbf{G}^H\textbf{v}_{\ell}s_{\ell} \right|^2\right\}, \: \: &&P_n = \mathbb{E}\left\{\left|\textbf{h}_k^H\textbf{W}\textbf{z} + n_k \right|^2\right\}. \end{aligned} \end{equation} Using the fact that $\mathbb{E}\left[s_k^Hs_{\ell}\right]=\delta[k-\ell]$ and statistics of the channel error matrices/vectors and noise terms, we find that \begin{equation} \label{SINR_2} \text{SINR}_k = \dfrac{\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2}{\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 + \sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) + \sigma_{\text{MS}}^2} \end{equation} where \begin{equation} \begin{aligned} \textbf{D}_k &= \widehat{\textbf{h}}_k\widehat{\textbf{h}}_k^H + \bm{\Sigma}_{2,k}, \: \: \textbf{C}_k = \widehat{\textbf{G}}^H\textbf{v}_k\textbf{v}_k^H \widehat{\textbf{G}} + (\textbf{v}_k^H\textbf{v}_k)\bm{\Sigma}_1, \\ \bm{\Sigma}_1 &= \text{diag}\left(\sigma_{1,1}^2\textbf{I}_L, \sigma_{1,2}^2\textbf{I}_L, \ldots, \sigma_{1,N}^2\textbf{I}_L\right), \: \: \bm{\Sigma}_{2,k} = \text{diag}\left(\sigma_{2,k,1}^2\textbf{I}_L, \sigma_{2,k,2}^2\textbf{I}_L, \ldots, \sigma_{2,k,N}^2\textbf{I}_L\right). 
\end{aligned} \end{equation} In Appendix A, we show that the rate $\log_2(1+\text{SINR}_k)$ is achievable for the $k$-th user. Hence, the SINR that we defined can be used as a design criterion. Another design quantity that can be optimized is the total power spent in the system. The total power $P$ has two components $P_{\text{CP}}$ and $P_{\text{RRH}}$ which correspond to the power transmitted by the CP and the RRHs, respectively. Using the fact that $\mathbb{E}\left[s_k^H s_{\ell}\right]=\delta[k-\ell]$, we can write\footnote{Actual power terms include a constant multiplier which does not affect the solution, and hence they are omitted.} \begin{equation} P_{\text{CP}} = \mathbb{E}\left[\left|\displaystyle\sum_{k=1}^K\textbf{v}_ks_k\right|^2\right] = \displaystyle\sum_{k=1}^K \textbf{v}_k^H \textbf{v}_k, \end{equation} and \begin{equation} \begin{aligned} P_{\text{RRH}} &= \displaystyle\sum_{n=1}^N \mathbb{E}\left[\left|\textbf{y}_n\right|^2\right] = \displaystyle\sum_{n=1}^N \mathbb{E}\left[\left|\textbf{W}_n\left(\textbf{G}_n^H\displaystyle\sum_{k=1}^K\textbf{v}_{k}s_k+\textbf{z}_n\right)\right|^2\right] \\ &= \displaystyle\sum_{k=1}^K \textbf{v}_k^H \textbf{G}\textbf{W}^H \textbf{W} \textbf{G}^H \textbf{v}_k + \sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right). \end{aligned} \end{equation} Due to imperfect channel state information, $P_{\text{RRH}}$ includes random terms. Therefore, we optimize the mean power $P=P_{\text{CP}} + \mathbb{E}\left\{P_{\text{RRH}}\right\}$ which can be evaluated as \begin{equation} \label{P_eqn} P = \displaystyle\sum_{k=1}^K \tr\left(\bm{\tau_0} \textbf{v}_k\textbf{v}_k^H\right) + \sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right) \end{equation} where $\bm{\tau_0}= \textbf{I}_M + \widehat{\textbf{G}}\textbf{W}^H\textbf{W}\widehat{\textbf{G}}^H + \tr\left(\textbf{W}^H \textbf{W} \bm{\Sigma}_1\right)\textbf{I}_M$.
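The closed form (\ref{P_eqn}) can be sanity-checked by Monte Carlo averaging of $P_{\text{CP}}+P_{\text{RRH}}$ over the channel error; the sketch below assumes, for brevity, a common error variance $\sigma_1^2$ for all RRHs (so $\bm{\Sigma}_1=\sigma_1^2\textbf{I}$) and uses illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(8)
M, NL, K = 4, 6, 2                  # illustrative sizes
sig1_sq, sig_rrh_sq = 0.3, 0.5      # common error variance, RRH noise variance
G_hat = rng.standard_normal((M, NL)) + 1j * rng.standard_normal((M, NL))
W = rng.standard_normal((NL, NL)) + 1j * rng.standard_normal((NL, NL))
V = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

WhW = W.conj().T @ W
# tau_0 = I + G_hat W^H W G_hat^H + tr(W^H W Sigma_1) I, with Sigma_1 = sig1^2 I
tau0 = (np.eye(M) + G_hat @ WhW @ G_hat.conj().T
        + sig1_sq * np.trace(WhW) * np.eye(M))
closed = sum(np.trace(tau0 @ np.outer(V[:, k], V[:, k].conj())).real
             for k in range(K)) + sig_rrh_sq * np.trace(WhW).real

T = 20000                           # Monte Carlo draws of the channel error
acc = 0.0
for _ in range(T):
    dG = np.sqrt(sig1_sq / 2) * (rng.standard_normal((M, NL))
                                 + 1j * rng.standard_normal((M, NL)))
    G = G_hat + dG
    p_cp = np.linalg.norm(V) ** 2   # sum_k v_k^H v_k
    p_rrh = sum((V[:, k].conj() @ G @ WhW @ G.conj().T @ V[:, k]).real
                for k in range(K)) + sig_rrh_sq * np.trace(WhW).real
    acc += p_cp + p_rrh
assert np.isclose(acc / T, closed, rtol=0.05)
```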
In this study, we aim to minimize the total mean power $P$ under SINR constraints $\text{SINR}_k \geq \gamma_k$ where $\left\{\gamma_k\right\}_{k=1}^K$ are given SINR thresholds.\footnote{Feasibility cannot be guaranteed. Bad channel conditions and/or high SINR thresholds may yield infeasible results.} As shown in Appendix A, the SINR constraints guarantee that the rate $\log_2(1+\gamma_k)$ is achievable for the $k$-th user. In the literature, this type of problem is studied under the Quality-of-Service (QoS) framework, where the power spent in the system is minimized while a certain rate (or SINR) is satisfied for each user. User rates can be adjusted according to the priority of users by changing the corresponding threshold values. The main optimization problem (P0) can be formulated as \begin{equation} (\text{P}0) \: \: \min_{\textbf{W}, \{\textbf{v}_k\}_{k=1}^K} P \quad \text{such that} \quad \text{SINR}_k \geq \gamma_k, \quad \forall k=1, 2, \ldots, K. \end{equation} \section{A Theoretical Performance Bound} In this section, we derive a novel performance bound for (P0), namely a lower bound on the total mean power $P$ under the SINR constraints. Using the SINR constraints in (\ref{SINR_2}) and keeping only the $\ell=k$ term of the interference sum, for all $k$ we have \begin{equation} \label{B1} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \geq \gamma_k\left(\tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 + \sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) + \sigma_{\text{MS}}^2\right).
\end{equation} Algebraic manipulations reveal that \begin{equation} \label{B2} \begin{aligned} \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= \tr\left(\bm{\Sigma}_{2,k} (\textbf{W}\Gcap^H\textbf{v}_k)(\textbf{W}\Gcap^H\textbf{v}_k)^H\right) + \\ &\textbf{v}_k^H \textbf{v}_k\left[\tr\left((\hcap_k^H\textbf{W})^H(\hcap_k^H\textbf{W})\bm{\Sigma}_1\right) + \tr\left(\textbf{W}^H\textbf{W}\bm{\Sigma}_{2,k}\bm{\Sigma}_1\right)\right]. \end{aligned} \end{equation} To show (\ref{B2}), we use the facts $\textbf{W}\bm{\Sigma}_1=\bm{\Sigma}_1\textbf{W}$ and $\textbf{W}\bm{\Sigma}_{2,k}=\bm{\Sigma}_{2,k}\textbf{W}$. By Von Neumann's trace inequality \cite{Von-Neumann}, for any two $c \times c$ Hermitian positive semi-definite matrices $\textbf{A}$ and $\textbf{B}$ with eigenvalues sorted in descending order, we have $\tr\left(\textbf{A}\textbf{B}\right) \geq \displaystyle\sum_{i=1}^c \lambda_i(\textbf{A}) \lambda_{c-i+1}(\textbf{B}) \geq \lambda_c(\textbf{A}) \lambda_1(\textbf{B}) = \lmin{\textbf{A}}\norm{\textbf{B}}$. Using this fact and (\ref{B2}), we have \begin{equation} \begin{aligned} \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_k\textbf{W}^H\right) - \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &\geq \lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \\ &\textbf{v}_k^H\textbf{v}_k\left[\lmin{\bm{\Sigma}_1}\norm{\hcap_k^H\textbf{W}}^2 + \lmin{\bm{\Sigma}_{2,k}\bm{\Sigma}_1}\norm{\textbf{W}}^2\right]. \end{aligned} \end{equation} Similarly, we get $\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) = \tr\left((\hcap_k\hcap_k^H+\bm{\Sigma}_{2,k})\textbf{W}\textbf{W}^H\right) \geq \norm{\hcap_k^H\textbf{W}}^2+\lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}}^2$.
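The trace bound $\tr(\textbf{A}\textbf{B}) \geq \lmin{\textbf{A}}\lambda_1(\textbf{B})$ used above can be checked numerically on randomly generated Hermitian PSD matrices (illustrative dimension):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 6
X = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
Y = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
A = X @ X.conj().T          # Hermitian PSD by construction
B = Y @ Y.conj().T

lhs = np.trace(A @ B).real
# eigvalsh returns eigenvalues in ascending order: [0] is the min, [-1] the max
rhs = np.linalg.eigvalsh(A)[0] * np.linalg.eigvalsh(B)[-1]
assert lhs >= rhs - 1e-9
```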
Therefore, we obtain that \begin{align} \label{B3} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &\geq \gamma_k \Big[\lmin{\bm{\Sigma}_{2,k}}\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \lmin{\textbf{v}_k^H\textbf{v}_k\bm{\Sigma}_1+\sigma_{\text{RRH}}^2\textbf{I}_{NL}}\norm{\hcap_k^H\textbf{W}}^2 + \notag \\ &\lmin{\textbf{v}_k^H\textbf{v}_k\bm{\Sigma}_{2,k}\bm{\Sigma}_1 + \sigma_{\text{RRH}}^2\bm{\Sigma}_{2,k}}\norm{\textbf{W}}^2 + \sigma_{\text{MS}}^2 \Big] \\ &= \gamma_k \left[\sigma_{2,k}^2\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2+(\textbf{v}_k^H\textbf{v}_k\sigma_1^2+\sigma_{\text{RRH}}^2)\left(\norm{\hcap_k^H\textbf{W}}^2 + \sigma_{2,k}^2\norm{\textbf{W}}^2\right) + \sigma_{\text{MS}}^2 \right] \notag \end{align} where $\sigma_1^2 = \min\limits_{n} \sigma_{1,n}^2$ and $\sigma_{2,k}^2=\min\limits_{n} \sigma_{2,k,n}^2$. Similarly, we obtain that \begin{equation} \label{B4} \begin{aligned} P &\geq \displaystyle\sum_{k=1}^K \left(\textbf{v}_k^H\textbf{v}_k + \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \textbf{v}_k^H\textbf{v}_k \lmin{\bm{\Sigma}_1}\norm{\textbf{W}}^2\right)+\sigma_{\text{RRH}}^2\tr\left(\textbf{W}^H\textbf{W}\right) \\ &\geq \displaystyle\sum_{k=1}^K \Bigl( \underbrace{\textbf{v}_k^H\textbf{v}_k + \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 + \textbf{v}_k^H\textbf{v}_k\sigma_1^2\norm{\textbf{W}}^2 + \dfrac{\sigma_{\text{RRH}}^2}{K}\norm{\textbf{W}}^2}_{P_k} \Bigr). \end{aligned} \end{equation} We will find a lower bound for $P_k$ for all $k$ using (\ref{B3}). 
To simplify the notation, we define \begin{equation} \label{B_def} \begin{aligned} x_1=\left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2, \: x_2=\norm{\textbf{W}\Gcap^H\textbf{v}_k}^2, \: x_3&=\norm{\hcap_k^H\textbf{W}}^2, \: x_4=\norm{\textbf{W}}^2, \: x_5=\textbf{v}_k^H\textbf{v}_k, \: y=P_k \\ c_1=\gamma_k, \: c_2=\sigma_{2,k}^2, \: c_3= \sigma_1^2, \: c_4=\sigma_{\text{RRH}}^2, \: c_5&=\sigma_{\text{MS}}^2, \: c_6 = \dfrac{\sigma_{\text{RRH}}^2}{K}, \: d_1=\norm{\hcap_k}^2, \: d_2 = \norm{\Gcap}^2. \end{aligned} \end{equation} (\ref{B3}) and (\ref{B4}) can be written in terms of the new variables as \begin{equation} \label{B5} x_1 \geq c_1\left[c_2x_2 + (c_3x_5+c_4)(x_3+c_2x_4)+c_5\right], \: \: y = x_2+x_5+c_3x_4x_5+c_6x_4. \end{equation} By the Cauchy-Schwarz inequality \cite{CS} and the submultiplicativity of the $\ell_2$-norm, we get \begin{equation} \label{B6} \norm{\hcap_k}^2 \norm{\textbf{W}\Gcap^H\textbf{v}_k}^2 \geq \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \: \Longrightarrow \: x_2 d_1 \geq x_1. \end{equation} \begin{equation} \label{B7} \norm{\textbf{W}}^2 \norm{\hcap_k}^2 \geq \norm{\hcap_k^H\textbf{W}}^2 \: \Longrightarrow \: x_4 d_1 \geq x_3. \end{equation} \begin{equation} \label{B8} \norm{\hcap_k^H\textbf{W}}^2 \norm{\textbf{v}_k}^2 \norm{\Gcap}^2 \geq \norm{\hcap_k^H\textbf{W}}^2 \norm{\textbf{v}_k^H\Gcap}^2 \geq \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 \: \Longrightarrow \: x_3 x_5 d_2 \geq x_1. \end{equation} In Appendix B, using (\ref{B5})-(\ref{B8}) and the arithmetic-geometric mean inequality \cite{AM-GM}, we show that \begin{equation} \label{B_bound} y \geq \dfrac{1}{a}\left(b+c_3c_5+2\sqrt{c_3c_5b+c_5(d_2+c_6)a}\right), \end{equation} where $a=\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3, \: b=c_4(d_1+c_2)$.
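The three inequalities (\ref{B6})-(\ref{B8}) can be verified numerically for randomly drawn complex matrices and vectors; vector norms are $\ell_2$-norms and matrix norms are spectral norms, with illustrative sizes below.

```python
import numpy as np

rng = np.random.default_rng(1)
M, NL = 4, 6
h = rng.standard_normal(NL) + 1j * rng.standard_normal(NL)            # \hat h_k
W = rng.standard_normal((NL, NL)) + 1j * rng.standard_normal((NL, NL))
G = rng.standard_normal((M, NL)) + 1j * rng.standard_normal((M, NL))  # \hat G
v = rng.standard_normal(M) + 1j * rng.standard_normal(M)              # v_k

x1 = abs(h.conj() @ W @ G.conj().T @ v) ** 2
x2 = np.linalg.norm(W @ G.conj().T @ v) ** 2
x3 = np.linalg.norm(h.conj() @ W) ** 2
x4 = np.linalg.norm(W, 2) ** 2          # spectral norm
x5 = np.linalg.norm(v) ** 2
d1 = np.linalg.norm(h) ** 2
d2 = np.linalg.norm(G, 2) ** 2

tol = 1e-9
assert x2 * d1 >= x1 - tol        # (B6): Cauchy-Schwarz
assert x4 * d1 >= x3 - tol        # (B7): submultiplicativity
assert x3 * x5 * d2 >= x1 - tol   # (B8)
```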
Together with the feasibility condition also found in Appendix B, we can express the bound as \begin{equation} P \geq \displaystyle\sum_{k=1}^K \dfrac{\Htilde_k\sigma_{\text{RRH}}^2+\Gtilde\sigma_{\text{MS}}^2+2\sigma_{\text{RRH}}\sigma_{\text{MS}}\sqrt{\Htilde_k\Gtilde+\dfrac{\Delta_k}{K}}}{\Delta_k}, \: \: \Delta_k>0, \: \forall k \end{equation} where \begin{equation} \Htilde_k = \norm{\hcap_k}^2+\sigma_{2,k}^2, \: \Gtilde = \norm{\Gcap}^2+\sigma_1^2, \: \Delta_k=\left(1+\dfrac{1}{\gamma_k}\right)\norm{\hcap_k}^2\norm{\Gcap}^2-\Htilde_k\Gtilde. \end{equation} Using (\ref{B_def}), it can be shown that $a=\Delta_k$. In Appendix B, we show that $a>0$ (equivalently $\Delta_k>0, \: \forall k$) is a necessary (but not sufficient) feasibility condition which has to be satisfied to obtain a proper solution for (P0).\footnote{We can find upper bounds for SINR thresholds considering $\Delta_k=0$ to obtain a feasible solution.} It is easy to show that the lower bound is an increasing function of $\sigma_{\text{RRH}}, \sigma_{\text{MS}}, \sigma_1, \sigma_{2,k}, \gamma_k$ and a decreasing function of $\norm{\hcap_k}$ and $\norm{\Gcap}$, as expected. \section{Convex Optimization Methods} In the previous section, we have found a performance bound for problem (P0). To observe the tightness of the proposed lower bound, we consider different methods to solve the joint beamformer design problem. In this section, we present two convex optimization based methods to solve (P0). Both methods apply successive convex optimizations with the SDR idea. First, we show that each of the fronthaul and access link beamformers can be found via convex optimization with SDR when the other is fixed. Using this observation, we will propose two methods with different complexities. \vspace{-5mm} \subsection{Access Link Beamformer Design} Let $\textbf{v}_k$'s be given. In this case, the matrices $\textbf{D}_k$ and $\textbf{C}_{\ell}$ become constant.
For any matrices $\textbf{X}, \textbf{Y}, \textbf{Z}$ with suitable dimensions, we have $\tr\left(\textbf{X}^H\textbf{Y}\textbf{X}\textbf{Z}\right) = (\tvec{\textbf{X}})^H\left(\textbf{Z}^T\otimes\textbf{Y}\right)\tvec{\textbf{X}}$ \cite{vec-eqn}. Using this property, we get \begin{align} \label{C1} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= (\tvec{\textbf{W}})^H\left((\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes (\hcap_k\hcap_k^H) \right) \tvec{\textbf{W}} \notag \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) &= (\tvec{\textbf{W}})^H\left(\textbf{C}_{\ell}^T \otimes \textbf{D}_k\right) \tvec{\textbf{W}}, \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right) &= (\tvec{\textbf{W}})^H\left(\textbf{I}_{NL} \otimes \textbf{D}_k\right) \tvec{\textbf{W}}, \: \: \tr\left(\textbf{W}^H\textbf{W}\right) = (\tvec{\textbf{W}})^H\tvec{\textbf{W}}. \notag \end{align} Similarly, we obtain that \begin{align} \label{C2} \tr\left(\bm{\tau_0} \textbf{v}_k\textbf{v}_k^H\right) &= \textbf{v}_k^H\textbf{v}_k + \tr\left(\textbf{W}^H\textbf{W}\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap\right) + \tr\left(\textbf{W}^H\textbf{W}\bm{\Sigma}_1\right)\textbf{v}_k^H\textbf{v}_k \notag \\ &= \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left((\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes \textbf{I}_{NL} + (\textbf{v}_k^H\textbf{v}_k)(\bm{\Sigma}_1 \otimes \textbf{I}_{NL}) \right) \tvec{\textbf{W}} \notag \\ &= \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left(\textbf{C}_k^T \otimes \textbf{I}_{NL}\right)\tvec{\textbf{W}}. \end{align} Define $\textbf{T}_k = (\Gcap^H\textbf{v}_k\textbf{v}_k^H\Gcap)^T \otimes (\hcap_k\hcap_k^H), \: \textbf{F}_{\ell,k}=\textbf{C}_{\ell}^T \otimes \textbf{D}_k, \: \textbf{E}_k=\textbf{I}_{NL} \otimes \textbf{D}_k, \: \textbf{J}_k = \textbf{C}_k^T \otimes \textbf{I}_{NL}$ for all $k$. 
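The vectorization property $\tr\left(\textbf{X}^H\textbf{Y}\textbf{X}\textbf{Z}\right) = (\tvec{\textbf{X}})^H\left(\textbf{Z}^T\otimes\textbf{Y}\right)\tvec{\textbf{X}}$ used above can be checked numerically (with column-stacking vectorization and random complex matrices of illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 4
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Y = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

vecX = X.flatten(order='F')                 # column stacking, vec(X)
lhs = np.trace(X.conj().T @ Y @ X @ Z)
rhs = vecX.conj() @ np.kron(Z.T, Y) @ vecX  # vec(X)^H (Z^T kron Y) vec(X)
assert np.allclose(lhs, rhs)
```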
Then we can write the SINR conditions and the total mean power as \begin{equation} \label{C3} \begin{aligned} (\tvec{\textbf{W}})^H\left[(1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right]\tvec{\textbf{W}} \geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ P = \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + (\tvec{\textbf{W}})^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\tvec{\textbf{W}}. \end{aligned} \end{equation} The matrix $\textbf{W}$ is block diagonal and contains $NL^2$ unknowns; the other $(N^2-N)L^2$ entries are zero. There exists a matrix $\textbf{U}: \: N^2L^2 \times NL^2$ and a vector of unknown variables $\textbf{w}_0: \: NL^2 \times 1$ such that $\tvec{\textbf{W}} = \textbf{U} \textbf{w}_0$. Each column of $\textbf{U}$ contains a single 1, placed at the entry of $\tvec{\textbf{W}}$ corresponding to one of the unknown variables, and all its other entries are 0. After this observation, we can write the problem in terms of $\textbf{w}_0$: \begin{equation} \label{C4} \begin{aligned} \textbf{w}_0^H\textbf{U}^H\left[(1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right]\textbf{U}\textbf{w}_0\geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ P = \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + \textbf{w}_0^H\textbf{U}^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\textbf{U}\textbf{w}_0. \end{aligned} \end{equation} Finally, we define $\bm{\mathcal{W}} = \textbf{w}_0\textbf{w}_0^H$ satisfying $\bm{\mathcal{W}} \succeq 0$ and $\text{rank}(\bm{\mathcal{W}})=1$.
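The construction of the selection matrix $\textbf{U}$ for the block-diagonal $\textbf{W}$ can be sketched as follows (illustrative $N$, $L$; the column ordering of $\textbf{w}_0$ is arbitrary, here column-major):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 3, 2
NL = N * L
# Block-diagonal W with N blocks of size L x L
W = np.zeros((NL, NL), dtype=complex)
for nb in range(N):
    W[nb*L:(nb+1)*L, nb*L:(nb+1)*L] = (rng.standard_normal((L, L))
                                       + 1j * rng.standard_normal((L, L)))

# U (N^2 L^2 x N L^2) has a single 1 per column, at the vec(W) position of
# one of the NL^2 block-diagonal (unknown) entries, scanned column-major.
U = np.zeros((NL*NL, N*L*L))
w0 = np.zeros(N*L*L, dtype=complex)
col = 0
for j in range(NL):            # column-major scan of W
    for i in range(NL):
        if i // L == j // L:   # entry (i, j) lies in a diagonal block
            U[j*NL + i, col] = 1.0
            w0[col] = W[i, j]
            col += 1

assert np.allclose(U @ w0, W.flatten(order='F'))   # vec(W) = U w0
```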
Using the variable $\bm{\mathcal{W}}$, we can formulate the problem as \begin{equation} \label{C5} \begin{aligned} &(\text{P}1) \: \: \underset{\bm{\mathcal{W}}}{\min} \: \displaystyle\sum_{k=1}^K \textbf{v}_k^H\textbf{v}_k + \tr\left[\left(\textbf{U}^H\left(\sigma_{\text{RRH}}^2\textbf{I}_{N^2L^2}+\displaystyle\sum_{k=1}^K\textbf{J}_k\right)\textbf{U}\right)\bm{\mathcal{W}}\right] \\ &\text{such that} \: \tr\left[\left(\textbf{U}^H\left((1+\gamma_k)\textbf{T}_k-\gamma_k\displaystyle\sum_{\ell=1}^K\textbf{F}_{\ell,k}-\gamma_k\sigma_{\text{RRH}}^2\textbf{E}_k\right)\textbf{U}\right)\bm{\mathcal{W}} \right] \geq \gamma_k\sigma_{\text{MS}}^2, \: \: \forall k \\ &\bm{\mathcal{W}} \succeq 0, \: \text{rank}(\bm{\mathcal{W}})=1. \end{aligned} \end{equation} In (P1), the cost and all constraints except the rank constraint are convex. Omitting the rank constraint, the problem can be solved via SDR using standard convex optimization tools such as SeDuMi \cite{SeDuMi}, CVX \cite{CVX}, and Mosek \cite{Mosek}. \vspace{-5mm} \subsection{Fronthaul Link Beamformer Design} In this part, we consider the case where $\textbf{W}$ is fixed. In this case, we can write \begin{equation} \label{C6} \begin{aligned} \left|\widehat{\textbf{h}}_k^H\textbf{W}\widehat{\textbf{G}}^H\textbf{v}_k\right|^2 &= \textbf{v}_k^H\Gcap\textbf{W}^H\hcap_k\hcap_k^H\textbf{W}\Gcap^H\textbf{v}_k \\ \tr\left(\textbf{D}_k\textbf{W}\textbf{C}_{\ell}\textbf{W}^H\right) &= \tr\left(\textbf{W}^H\textbf{D}_k\textbf{W}\left(\Gcap^H\textbf{v}_{\ell}\textbf{v}_{\ell}^H\Gcap+(\textbf{v}_{\ell}^H\textbf{v}_{\ell})\bm{\Sigma}_1\right)\right) \\ &= \textbf{v}_{\ell}^H\left[\Gcap\textbf{W}^H\textbf{D}_k\textbf{W}\Gcap^H+\tr(\textbf{W}^H\textbf{D}_k\textbf{W}\bm{\Sigma}_1)\textbf{I}_M\right]\textbf{v}_{\ell}.
\end{aligned} \end{equation} Let $\textbf{A}_k=\Gcap\textbf{W}^H\hcap_k\hcap_k^H\textbf{W}\Gcap^H, \: \textbf{B}_k = \Gcap\textbf{W}^H\textbf{D}_k\textbf{W}\Gcap^H+\tr(\textbf{W}^H\textbf{D}_k\textbf{W}\bm{\Sigma}_1)\textbf{I}_M, \: \textbf{V}_k=\textbf{v}_k\textbf{v}_k^H, \: \forall k$ and $a=\sigma_{\text{RRH}}^2 \tr\left(\textbf{W}^H \textbf{W}\right) , \: b_k=\sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}\textbf{W}^H\right)+\sigma_{\text{MS}}^2$. Using (\ref{P_eqn}) and (\ref{C6}), we formulate the problem as \begin{equation} \label{C7} \begin{aligned} &(\text{P}2) \: \: \underset{\{\textbf{V}_k\}_{k=1}^K}{\min} \: \displaystyle\sum_{k=1}^K \tr\left(\bm{\tau_0} \textbf{V}_k\right) + a \\ &\text{such that} \: \dfrac{\tr\left(\textbf{A}_k\textbf{V}_k\right)}{\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{B}_k\textbf{V}_{\ell}\right)-\tr\left(\textbf{A}_k\textbf{V}_k\right)+b_k} \geq \gamma_k, \: \: \forall k, \quad \textbf{V}_k \succeq 0, \: \text{rank}(\textbf{V}_k)=1, \: \: \forall k. \end{aligned} \end{equation} (P2) can also be solved using convex optimization tools by omitting the rank constraints. \vspace{-5mm} \subsection{Rank-1 Approximation for SDR} In both fronthaul and access link beamformer designs, we find a solution by omitting the rank constraint. If the resulting matrix is rank-1, the solution is optimal. Otherwise, we apply a widely used randomization method \cite{AF-Relay1}-\cite{DF-2}, \cite{Ch-Add-1}. Let $\textbf{X}$ be the matrix found after convex optimization. We want to find a vector $\textbf{x}$ satisfying $\textbf{X}=\textbf{x}\textbf{x}^H$, which is not possible if $\text{rank}(\textbf{X})>1$. In such a case, we select $\textbf{x}=\textbf{E}\bm{\Lambda}^{1/2}\textbf{y}$ where $\textbf{X}=\textbf{E}\bm{\Lambda}\textbf{E}^H$ is the eigenvalue decomposition of $\textbf{X}$ and $\textbf{y}$ is a zero-mean real Gaussian random vector with identity covariance matrix.
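The rank-1 extraction and the randomization fallback can be sketched as follows; the assertion covers only the rank-1 case (the randomized candidate is a heuristic and satisfies $\textbf{x}\textbf{x}^H=\textbf{X}$ only in expectation over $\textbf{y}$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = np.outer(x0, x0.conj())               # a rank-1 PSD "SDR solution"

lam, E = np.linalg.eigh(X)                # X = E diag(lam) E^H, lam ascending
lam = np.clip(lam, 0.0, None)             # guard against tiny negative values

if np.sum(lam > 1e-9 * lam.max()) == 1:   # already rank-1: read x off directly
    x = np.sqrt(lam[-1]) * E[:, -1]
else:                                     # randomization: x = E Lambda^(1/2) y
    y = rng.standard_normal(n)            # real Gaussian, identity covariance
    x = E @ (np.sqrt(lam) * y)

# For a rank-1 input, the extracted x reproduces X exactly (up to phase)
assert np.allclose(np.outer(x, x.conj()), X)
```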
\vspace{-5mm} \subsection{Alternating Optimization (AO) Method} We know that each of the fronthaul and access link beamformers can be found using convex optimization with the SDR approach by fixing the other. Using this idea, we can find a solution for (P0) by alternating optimization of the fronthaul and access link beamformers. In general, alternating optimization methods converge to locally optimal points. The choice of initial point affects the performance. We consider the CP-to-RRH transmissions and use the total SNR at the RRHs to find a suitable initial point. Let $\text{SNR}_{kn}$ be the SNR of the $k$-th user at the $n$-th RRH, i.e., $\text{SNR}_{kn} = \dfrac{\lVert \Gcap_n^H\textbf{v}_k \rVert^2}{\sigma_{\text{RRH}}^2}.$ The total SNR is given by $\text{SNR}_{\text{tot}} = \displaystyle\sum_{n=1}^N\displaystyle\sum_{k=1}^K \text{SNR}_{kn} = \tr\left(\textbf{V}^H\textbf{G}_0\textbf{V}\right)$ where $\textbf{G}_0 = \dfrac{1}{\sigma_{\text{RRH}}^2}\displaystyle\sum_{n=1}^N \Gcap_n\Gcap_n^H$ and $\textbf{V} = \left[ \textbf{v}_1 \: \textbf{v}_2 \: \cdots \: \textbf{v}_K \right]$. We know that $P_{\text{CP}}=\tr\left(\textbf{V}^H\textbf{V}\right)$. Furthermore, in order to send the user data from CP to RRHs properly, we need $M \geq K$ and $\text{rank}(\textbf{V})=K$. To satisfy these constraints, we choose $\textbf{V}$ such that $\textbf{V}^H\textbf{V}=\dfrac{P_{\text{CP}}}{K} \textbf{I}_K$. We aim to find $\textbf{V}$ maximizing $\text{SNR}_{\text{tot}}$. By Von Neumann's inequality, we have \begin{equation} \tr\left(\textbf{V}\textbf{V}^H\textbf{G}_0 \right) \leq \displaystyle\sum_{i=1}^M \lambda_i\left(\textbf{V}\textbf{V}^H\right)\lambda_i\left(\textbf{G}_0 \right) = \displaystyle\sum_{i=1}^K \lambda_i\left(\textbf{V}\textbf{V}^H\right)\lambda_i\left(\textbf{G}_0 \right) = \dfrac{P_{\text{CP}}}{K}\displaystyle\sum_{i=1}^K \lambda_i\left(\textbf{G}_0 \right).
\end{equation} Notice that the $K$ largest eigenvalues of $\textbf{V}\textbf{V}^H$ are equal to $\dfrac{P_{\text{CP}}}{K}$ and the other $M-K$ are equal to zero. The equality holds when we have \begin{equation} \label{Init} \textbf{v}_k = \sqrt{\dfrac{P_{\text{CP}}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k, \end{equation} where $e_k\left(\textbf{G}_0\right)$ denotes the unit-norm eigenvector of $\textbf{G}_0$ corresponding to its $k$-th largest eigenvalue. To find a suitable initial point we select the CP beamformers as in (\ref{Init}). On the other hand, the initial value of $P_{\text{CP}}$ must also be selected. To perform this task, we use Algorithm 0. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 0} (Initialization for Alternating Optimization) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Set $P_{\text{CP}}^{(0)}=1, \: \mu_0 = 1.05, \: t_{\text{max}, 0}=100$. For $t=0, 1, 2, \ldots, t_{\text{max}, 0}$ repeat the following steps: \begin{itemize} \item Form $\textbf{v}_k^{(t)} = \sqrt{\dfrac{P_{\text{CP}}^{(t)}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k$. Solve (P1) to find $\textbf{W}^{(t)}$. \item If the problem is feasible, then set the initial value of $\textbf{W}$ as $\textbf{W}^{(t)}$ and terminate. \item Set $P_{\text{CP}}^{(t+1)}=\mu_0 P_{\text{CP}}^{(t)}$. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} Algorithm 0 is used to find the initial value of $\textbf{W}$. Starting from this value, we apply alternating optimization by solving (P1) and (P2) iteratively. At each iteration, $P$ decreases since both (P1) and (P2) minimize $P$ when one of the fronthaul and access link beamformers is fixed. As the power is bounded below ($P \geq 0$), we conclude by the Monotone Convergence Theorem \cite{MCT} that this method is convergent. When the rate of change of $P$ is small enough, we stop the iteration and return the final solution. The method is summarized in Algorithm 1.
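The optimality of the eigenvector initialization in (\ref{Init}) can be checked numerically: the scaled top-$K$ eigenvectors of a random PSD matrix (standing in for $\textbf{G}_0$) attain $\frac{P_{\text{CP}}}{K}\sum_{i=1}^K \lambda_i(\textbf{G}_0)$, and any other $\textbf{V}$ with the same Gram matrix does no better.

```python
import numpy as np

rng = np.random.default_rng(5)
M, K, P_cp = 6, 3, 2.0
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
G0 = A @ A.conj().T                        # Hermitian PSD, stands in for G_0

lam, E = np.linalg.eigh(G0)                # ascending eigenvalues
V_opt = np.sqrt(P_cp / K) * E[:, -K:]      # top-K eigenvectors, scaled

snr_opt = np.trace(V_opt.conj().T @ G0 @ V_opt).real
assert np.isclose(snr_opt, (P_cp / K) * lam[-K:].sum())

# Any other V with V^H V = (P_cp/K) I_K does no better (Von Neumann bound)
Q, _ = np.linalg.qr(rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
V_rnd = np.sqrt(P_cp / K) * Q
snr_rnd = np.trace(V_rnd.conj().T @ G0 @ V_rnd).real
assert snr_rnd <= snr_opt + 1e-9
```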
\vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 1} (Alternating Optimization) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Using Algorithm 0, find the initial value $\textbf{W}^{(0)}$. Define $t_{\text{max}, 1}=100, \: \eta=10^{-3}$. For $t=0, 1, \ldots, t_{\text{max}, 1}$, repeat the following steps: \begin{itemize} \item Solve (P2) to find $\textbf{v}_k^{(t)}, \: \forall k$. Solve (P1) to find $\textbf{W}^{(t)}$. \item If $|P^{(t)}-P^{(t-1)}|<\eta P^{(t)}$, then terminate. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} \vspace{-5mm} \subsection{Total SNR Max (TSM) Method } Algorithm 0 is used to find an initial point for the AO method. By extending Algorithm 0, we propose another iterative method, called the Total SNR Max (TSM) Method, which is computationally less complex than AO. First, we make an observation about the behavior of $P$ as $P_{\text{CP}}$ increases. Assume that we use (\ref{Init}) to form the CP beamformers. Starting from a small value, we increase $P_{\text{CP}}$ gradually and each time we find the corresponding RRH beamforming matrix by solving (P1) as in Algorithm 0. We observe that in general there exist two iteration indices $0<t_1<t_2$ such that the problem is infeasible for $t<t_1$, $P^{(t)}$ is decreasing for $t_1<t<t_2$, and increasing for $t>t_2$. This shows that the optimal value of $P$ is achieved at $t=t_2$. Motivated by this observation, we propose Algorithm 2. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 2} (Total SNR Max Method) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \\ Set $P_{\text{CP}}^{(0)}=1, \: P^{(0)}=0, \: \mu_2=1.05$. For $t=0, 1, \ldots, t_{\text{max}, 2}=100$, repeat the following steps: \begin{itemize} \item Form $\textbf{v}_k^{(t)}=\sqrt{\dfrac{P_{\text{CP}}^{(t)}}{K}}e_k\left(\textbf{G}_0\right), \: \forall k$. \item Solve (P1) to find $\textbf{W}^{(t)}$.
If the problem is feasible then evaluate $P^{(t)}$ using the CP and RRH beamformers. Otherwise, set $P^{(t)}=0$. \item If $P^{(t)}>P^{(t-1)}>0$, then terminate. \item Set $P_{\text{CP}}^{(t+1)}=\mu_2 P_{\text{CP}}^{(t)}$. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} This algorithm finds the CP beamformers using the approach given in (\ref{Init}) by iteratively changing the $P_{\text{CP}}$ value. RRH beamformer selection is done as in the AO method. \vspace{-5mm} \subsection{Complexity of Convex Optimization Methods} In general, we can measure the computational complexity of AO and TSM as the product of the number of iterations and the per-iteration complexity. At each iteration, the main component of the complexity is the convex optimization step; all other operations can be neglected. We use SeDuMi as the convex optimization tool to implement AO and TSM. In both methods, at each iteration, we minimize $c^Hx$ subject to $Ax=b$ where $x \in \mathbb{C}^{n}$ is the vector of all unknowns and $A \in \mathbb{C}^{m \times n}, \: b\in\mathbb{C}^m, \: c\in \mathbb{C}^{n}$ are known vectors/matrices. We know by \cite{SeDuMi} that the corresponding computational complexity is $\mathcal{O}(n^2m^{2.5}+m^{3.5})$ for SeDuMi. The corresponding $m$ and $n$ values for the fronthaul and access link beamforming designs are calculated as \begin{equation} \text{Fronthaul Link}: \: \: m=K, \: \: n=K+KM^2, \: \: \text{Access Link}: \: \: m=K, \: \: n=K+N^2L^4. \end{equation} In AO, both fronthaul and access link beamformer designs are done by convex optimization, whereas TSM uses convex optimization only for the access link.
Hence, the corresponding computational complexities are given by \begin{equation} \begin{aligned} \text{Complexity of AO}: &\quad \mathcal{O}\left(N_{\text{AO}}K^{2.5}\left[(K+KM^2)^2+(K+N^2L^4)^2+2K\right]\right), \\ \text{Complexity of TSM}: &\quad \mathcal{O}\left(N_{\text{TSM}}K^{2.5}\left[(K+N^2L^4)^2+K\right]\right) \end{aligned} \end{equation} where $N_{\text{AO}}$ and $N_{\text{TSM}}$ are the numbers of iterations for AO and TSM, respectively. In the simulation results, we show that the numbers of iterations of the two methods are similar and the average complexity of AO is larger than that of TSM, as expected. \vspace{-4mm} \section{Standard Beamforming Methods} In this section, we present two algorithms adapted from well-known beamforming methods based on MRC, ZF, and SVD. The purpose of considering these methods is to observe how well-known methods perform in our joint beamforming design problem, and to compare them with the performance bound and the relatively complex convex optimization methods described in the previous section. In the first method, called MRC-ZF, we design the fronthaul beamformers using the MRC idea. The access link beamformers are chosen as in ZF to cancel the interference due to other user signals. The second method is called SVD-ZF, where the fronthaul beamformers are designed by an SVD-based algorithm. The access link beamformers are chosen to cancel the interference as in MRC-ZF. Because of the nature of the problem, a direct implementation is not possible; we need some adaptations to use MRC, ZF, and SVD. \vspace{-5mm} \subsection{MRC-ZF} We know that MRC optimizes the signal power by coherent reception, while ZF eliminates the interference and hence enhances the SINR.
Motivated by these beamforming methods, we choose the fronthaul and access link beamformers as \begin{equation} \label{S1} \textbf{v}_k = \left(\hcap_k^H\textbf{W}\Gcap^H\right)^H, \: \: \forall k, \quad \hcap_k^H\textbf{W}\Gcap^H\textbf{v}_{\ell} = \delta[k-\ell], \: \: \forall k, \ell. \end{equation} In this method, $\textbf{v}_k$'s are chosen as the conjugate-transpose of the corresponding effective channel $\hcap_k^H\textbf{W}\Gcap^H$. The matrix $\textbf{W}$ is chosen to cancel the interference due to undesired user signals. Notice that both beamformers are chosen in terms of the channel estimates only; this keeps the algorithm simple. Using (\ref{S1}), we get \begin{equation} \hcap_k^H\textbf{W}\Gcap^H\Gcap\textbf{W}^H\hcap_{\ell}=\tr\left(\textbf{W}^H\hcap_{\ell}\hcap_k^H\textbf{W}\Gcap^H\Gcap\right)=\delta[k-\ell] , \: \: \forall k, \ell. \end{equation} Using the fact that $\tvec{\textbf{W}}=\textbf{U}\textbf{w}_0$, we obtain that \begin{equation} \label{S2} \textbf{w}_0^H \textbf{U}^H\left[(\Gcap^H\Gcap)^T \otimes (\hcap_{\ell}\hcap_k^H)\right]\textbf{U}\textbf{w}_0=\delta[k-\ell], \: \: \forall k, \ell. \end{equation} (\ref{S2}) is a quadratically constrained quadratic program (QCQP) type problem including a set of second-order matrix equations with $NL^2$ unknowns and $K^2$ equations. If $NL^2 \geq K^2$, then we can find a solution using a standard QCQP solver. Let $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ be some solutions of (\ref{S1}). We use $\textbf{v}_k=\sqrt{a}\textbf{v}_{k,0}, \: \forall k$ and $\textbf{W}=\sqrt{b}\textbf{W}_0$ where $a$ and $b$ are two non-negative real numbers. We use $a$ and $b$ to optimize the power allocation and minimize the total power spent.
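The rewriting of the ZF condition as the quadratic form in (\ref{S2}) can be checked numerically; for brevity the sketch below takes $\textbf{U}=\textbf{I}$ (i.e., it ignores the block-diagonal structure of $\textbf{W}$, which only selects a subset of the entries) and uses illustrative sizes.

```python
import numpy as np

rng = np.random.default_rng(9)
M, N, L = 4, 3, 2
NL = N * L
hk = rng.standard_normal(NL) + 1j * rng.standard_normal(NL)   # \hat h_k
hl = rng.standard_normal(NL) + 1j * rng.standard_normal(NL)   # \hat h_l
G = rng.standard_normal((M, NL)) + 1j * rng.standard_normal((M, NL))
W = rng.standard_normal((NL, NL)) + 1j * rng.standard_normal((NL, NL))

# Left-hand side of the ZF condition with v_l = (h_l^H W G^H)^H
lhs = hk.conj() @ W @ G.conj().T @ G @ W.conj().T @ hl

# Quadratic form in vec(W), as in (S2) (here with U = I, i.e. full W)
vecW = W.flatten(order='F')
Q = np.kron((G.conj().T @ G).T, np.outer(hl, hk.conj()))
rhs = vecW.conj() @ Q @ vecW
assert np.allclose(lhs, rhs)
```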
Using the beamformer expressions, we can write the SINR constraints and the total mean power as \vspace{-5mm} \begin{equation} \label{S3} \dfrac{ab\cdot c_{k,1}}{ab\cdot c_{k,2}-ab\cdot c_{k,1}+b\cdot c_{k,3}+c_{k,4}} \geq \gamma_k, \: \: \forall k, \: \: P = a\cdot d_5+ab\cdot d_6+b\cdot d_7 \end{equation} where \begin{equation} \begin{aligned} c_{k,1}&=\left|\widehat{\textbf{h}}_k^H\textbf{W}_0\widehat{\textbf{G}}^H\textbf{v}_{k,0}\right|^2, \: \: c_{k,2}=\displaystyle\sum_{\ell=1}^K\tr\left(\textbf{D}_k\textbf{W}_0\left(\Gcap^H\textbf{v}_{\ell,0}\textbf{v}_{\ell,0}^H\Gcap+(\textbf{v}_{\ell,0}^H\textbf{v}_{\ell,0})\bm{\Sigma}_1\right)\textbf{W}_0^H\right) \\ c_{k,3}&=\sigma_{\text{RRH}}^2\tr\left(\textbf{D}_k\textbf{W}_0\textbf{W}_0^H\right), \: \: c_{k,4}=\sigma_{\text{MS}}^2, \: \: d_5=\displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\textbf{v}_{k,0} \\ d_6 &= \displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\Gcap\textbf{W}_0^H\textbf{W}_0\Gcap^H\textbf{v}_{k,0} + \tr\left(\textbf{W}_0^H\textbf{W}_0\bm{\Sigma}_1\right)\displaystyle\sum_{k=1}^K \textbf{v}_{k,0}^H\textbf{v}_{k,0}, \: \: d_7 = \sigma_{\text{RRH}}^2\tr\left(\textbf{W}_0^H\textbf{W}_0\right). \end{aligned} \end{equation} Using the SINR constraints in (\ref{S3}), we get \begin{equation} \label{S4} a \geq d_{k,1}+\dfrac{d_{k,2}}{b}, \: \: (1+\gamma_k)c_{k,1}>\gamma_kc_{k,2}, \: \: \forall k \end{equation} where $d_{k,1}=\dfrac{\gamma_kc_{k,3}}{(1+\gamma_k)c_{k,1}-\gamma_kc_{k,2}}, \: d_{k,2}=\dfrac{\gamma_kc_{k,4}}{(1+\gamma_k)c_{k,1}-\gamma_kc_{k,2}}, \: \forall k$. The first condition in (\ref{S4}) provides $K$ inequalities for $a$ and $b$. The second condition should be satisfied to obtain a feasible solution. The problem of minimizing $P$ in (\ref{S3}) under the SINR constraints given by (\ref{S4}) is a two-variable QCQP problem which can be solved directly. The solution steps are explained in Algorithm 3.
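The inner one-dimensional minimization of $P=a\,d_5+ab\,d_6+b\,d_7$ over $b$, after substituting the binding constraint $a=d_{k,1}+d_{k,2}/b$, has a closed-form interior minimum by the arithmetic-geometric mean inequality; the sketch below checks it against a grid search with purely illustrative constants.

```python
import numpy as np

# Illustrative positive constants (hypothetical values, not from the paper):
# they play the roles of d_{k,1}, d_{k,2}, d_5, d_6, d_7
d1, d2, d5, d6, d7 = 0.8, 1.5, 2.0, 0.6, 1.1

def P_of_b(b):
    a = d1 + d2 / b                   # binding SINR constraint a = d1 + d2/b
    return a * d5 + a * b * d6 + b * d7

# Substituting: P = d1*d5 + d2*d6 + d2*d5/b + (d1*d6 + d7)*b, so by AM-GM
b_star = np.sqrt(d2 * d5 / (d1 * d6 + d7))
P_star = d1 * d5 + d2 * d6 + 2 * np.sqrt(d2 * d5 * (d1 * d6 + d7))

assert np.isclose(P_of_b(b_star), P_star)
b_grid = np.linspace(1e-3, 10.0, 200000)
assert P_of_b(b_grid).min() >= P_star - 1e-6   # grid never beats the closed form
```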
\vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 3} (MRC-ZF) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \begin{itemize} \item Find $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ by solving (\ref{S2}) using a QCQP solver. \item Check the feasibility condition given by (\ref{S4}). If it is not satisfied, then terminate. \item For all $k$ evaluate $d_{k,1}, d_{k,2}, d_5, d_6, d_7$ using $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$. \end{itemize} For each $k=1, 2, \ldots, K$ repeat the following steps: \begin{itemize} \item Find the solution interval $[b_1, b_2] \subseteq [0, \infty)$ of $b$ satisfying $d_{k,1}+\dfrac{d_{k,2}}{b} \geq d_{\ell,1}+\dfrac{d_{\ell,2}}{b}, \: \: \forall \ell \neq k$. \item Evaluate the minimum value $P_{k,0}$ of $P = a\cdot d_5+ab\cdot d_6+b\cdot d_7$ for $a=d_{k,1}+\dfrac{d_{k,2}}{b}$ which is given by $P_{k,0}=d_{k,1}d_5+d_{k,2}d_6+2\sqrt{d_{k,2}d_5\left(d_{k,1}d_6+d_7\right)}$. \item Evaluate the values of $P = a\cdot d_5+ab\cdot d_6+b\cdot d_7$ for $a=d_{k,1}+\dfrac{d_{k,2}}{b_1}, \: \: b=b_1$ and $a=d_{k,1}+\dfrac{d_{k,2}}{b_2}, \: \: b=b_2$ as $P_{k,1}$ and $P_{k,2}$. \item Evaluate the global minimum candidate for $k$ as $P_{\text{min},k}=\min (P_{k,0}, P_{k,1}, P_{k,2})$. \end{itemize} Find the solution as $P_{\text{min}}=\min\limits_{k} P_{\text{min},k}$. \vspace{-3mm} \noindent\rule{\textwidth}{0.4pt} Algorithm 3 optimally solves the beamforming design problem defined by the MRC-ZF method. Notice that there is a feasibility condition defined by (\ref{S4}) which has to be satisfied in order to find a suitable beamformer. By design, the algorithm cancels the interference due to undesired user signals. As it uses the channel estimates only, the interference due to the channel mismatch cannot be canceled. The channel estimation error should be small enough to satisfy the feasibility condition.
There is also the condition $NL^2 \geq K^2$ for the matrix equation in (\ref{S2}) to admit a solution. These conditions imply that MRC-ZF can be used only if the channel estimation quality is good enough and the number of users is small enough. \vspace{-5mm} \subsection{SVD-ZF} In the TSM method, the fronthaul beamformers are designed by maximizing the total SNR at the RRHs. We have shown that the corresponding beamformer is found using the SVD of a sum of channel components related to the CP-to-RRH channels. We use this approach to design the fronthaul beamformers, and the access link beamformers are found as in MRC-ZF. Motivated by the SVD and ZF type operations, we call this method SVD-ZF. We first consider the system of equations \begin{equation} \label{S5} \textbf{v}_k = e_k\left(\textbf{G}_0\right), \: \: \forall k, \quad \hcap_k^H\textbf{W}\Gcap^H\textbf{v}_{\ell} = \delta[k-\ell], \: \: \forall k, \ell \end{equation} where $\textbf{G}_0 = \dfrac{1}{\sigma_{\text{RRH}}^2}\displaystyle\sum_{n=1}^N \Gcap_n\Gcap_n^H$. The first condition in (\ref{S5}) maximizes the total SNR at the RRHs and the second condition eliminates the interference. As in MRC-ZF, we use only the channel estimates to design the beamformers for simplicity. Using the transformation $\tvec{\textbf{W}}=\textbf{U}\textbf{w}_0$, we obtain \begin{equation} \label{S6} \hcap_k^H\textbf{W}\Gcap^He_{\ell}\left(\textbf{G}_0\right) = \left(\tvec{[\Gcap^He_{\ell}\left(\textbf{G}_0\right)\hcap_k^H]^T}\right)^T\textbf{U}\textbf{w}_0 \\ =\delta[k-\ell], \: \: \forall k, \ell. \end{equation} (\ref{S6}) is a system of linear equations with $NL^2$ unknowns and $K^2$ equations. For $NL^2 \geq K^2$, we can find a solution using generalized matrix inversion. As in MRC-ZF, we optimize the power allocation to minimize the total power spent. Let $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ be some solutions of (\ref{S5}).
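The generalized-inverse step for the ZF system (\ref{S6}) can be sketched as follows; the sizes and the random system matrix are placeholders, the point being that a consistent minimum-norm solution exists whenever $NL^2 \geq K^2$ and the $K^2$ rows are linearly independent.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 4, 2                       # hypothetical sizes with N*L**2 >= K**2
A = rng.standard_normal((K * K, N * L * L)) \
    + 1j * rng.standard_normal((K * K, N * L * L))   # stands in for the stacked ZF rows
e = np.eye(K).reshape(-1)               # right-hand side delta[k - ell], stacked over (k, ell)

w0 = np.linalg.pinv(A) @ e              # minimum-norm solution of the wide linear system
residual = np.linalg.norm(A @ w0 - e)
print(residual)
```

For $K^2 > NL^2$ the same pseudo-inverse only returns a least-squares fit with a nonzero residual, so the interference is no longer exactly canceled.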
We use $\textbf{v}_k=\sqrt{a}\textbf{v}_{k,0}, \: \forall k$ and $\textbf{W}=\sqrt{b}\textbf{W}_0$ where $a$ and $b$ are two non-negative real numbers. After this point, we can formulate the problem in terms of $a$ and $b$ as in MRC-ZF and find the optimal values following the same procedure. SVD-ZF is summarized in Algorithm 4. \vspace{-3mm} \noindent\rule{\textwidth}{0.8pt} \vspace{-3mm} \noindent\textbf{Algorithm 4} (SVD-ZF) \vspace{-5mm} \\ \noindent\rule{\textwidth}{0.4pt} \begin{itemize} \item Find $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$ by solving (\ref{S6}) using generalized matrix inversion. \item Apply the same procedure used in MRC-ZF to find the solution. \end{itemize} \vspace{-5mm} \noindent\rule{\textwidth}{0.4pt} As in MRC-ZF, this method involves a feasibility condition on $\textbf{W}_0$ and $\{\textbf{v}_{k,0}\}_{k=1}^K$. We also need $NL^2 \geq K^2$ to find a solution of (\ref{S6}). Hence SVD-ZF also requires good channel estimation quality and a relatively small number of users. SVD-ZF is computationally less complex than MRC-ZF as it does not require a QCQP solver. \vspace{-4mm} \section{Numerical Results} In this section we compare the performances of the proposed methods with the performance bound by Monte Carlo simulations. Throughout the simulations, we assume that $\gamma_k = \gamma, \: \forall k$. We use a realistic channel model including path-loss, shadowing and small-scale fading defined in a 3GPP standard \cite{3GPP}. We consider a circular region in which the CP is at the center and the RRHs and MSs are distributed uniformly.\footnote{We choose the configurations where CP-to-RRH, CP-to-MS and RRH-to-MS distances are all at least 50 meters.} In Table I, the model parameters are presented.
\vspace{-4mm} {\renewcommand{\arraystretch}{0.7} \begin{table}[ht] \caption{Model parameters used in simulations} \centering \begin{tabular}{| c | c |} \hline Cell radius & $1$ km \\ \hline Path-loss for Fronthaul Link ($P_{L,1}$) & $P_{L,1}=24.6+39.1\log_{10}d$ where $d$ is in meters \\ \hline Path-loss for Access Link ($P_{L,2}$) & $P_{L,2}=36.8+36.7\log_{10}d$ where $d$ is in meters \\ \hline Antenna gain (CP, RRH, MS) & $(9, 0, 0)$ dBi \\ \hline Noise Figure (RRH, MS) & $(2, 10)$ dB \\ \hline Bandwidth & 10 MHz \\ \hline Noise power spectral density & $-174$ dBm/Hz \\ \hline Small-scale fading model & Rayleigh, $\mathcal{C}\mathcal{N}(\textbf{0},\textbf{I})$ \\ \hline Log-normal shadowing variance (CP, RRH) & $(6, 4)$ dB \\ \hline \end{tabular} \end{table} } \vspace{-4mm} To generate channel estimates and channel estimation errors, we assume that pilot signal powers are adjusted according to the channel amplitudes so that the power ratios of $\mathbb{E}(|\Delta\textbf{G}_n|^2)/|\textbf{G}_n|^2$ and $\mathbb{E}(|\Delta\textbf{h}_{kn}|^2)/|\textbf{h}_{kn}|^2, \: \forall n, k$ are all equal to some known constant $\gamma_{\text{ch}}$. Here $\gamma_{\text{ch}}$ is a measure of channel estimation quality. Using the channel estimates and $\gamma_{\text{ch}}$, one can evaluate $\sigma_{1,n}^2, \: \forall n$ and $\sigma_{2,k,n}^2, \: \forall n, k$ accordingly. In simulations, we observe the effect of parameters $\gamma, K, N, L, M, \gamma_{\text{ch}}$. We know that there is always a non-zero probability of having an infeasible solution. To measure the ratio of feasibility, we define $P_{\text{success}}$ showing the percentage of feasible designs. We run $100$ Monte Carlo trials in each case. To evaluate the $P$ values for a method, we average the results over Monte Carlo trials with feasible solutions. 
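As a consistency check on Table I, the receiver noise power follows directly from the listed bandwidth, noise power spectral density, and noise figure; for example, at an MS:

```python
import math

bandwidth_hz = 10e6          # 10 MHz, from Table I
noise_psd_dbm = -174.0       # dBm/Hz, from Table I
nf_ms_db = 10.0              # MS noise figure, from Table I

# Thermal noise power in dBm: PSD + 10*log10(bandwidth) + noise figure
noise_power_dbm = noise_psd_dbm + 10 * math.log10(bandwidth_hz) + nf_ms_db
print(noise_power_dbm)       # -174 + 70 + 10 = -94 dBm
```

The RRH noise power is obtained the same way with its 2 dB noise figure.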
\vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.32\textwidth]{convergence.eps} \caption{Convergence characteristics of AO and TSM.} \end{figure} \vspace{-6mm} In Fig. 2, we see a typical convergence graph of AO and TSM for $(K, N, L, M)=(4, 4, 4, 8)$, $\gamma=5$ dB and $\gamma_{\text{ch}}=0.01$. We observe that the $P$ values decrease smoothly and both algorithms obtain a solution after a few iterations. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.27\textwidth]{P_vs_gamma.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_gamma.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $\gamma$. $(K, N, L, M)=(4, 4, 4, 8), \: \gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Fig. 3, we observe the effect of the SINR threshold $\gamma$. For all $\gamma$ values, the performance losses compared to the bound are roughly $3$ and $6$ dB for AO and TSM, respectively. MRC-ZF and SVD-ZF perform significantly worse than the convex optimization based methods. We observe that the $P_{\text{success}}$ values of both methods decrease with $\gamma$. Even when $\gamma=0$ dB, the infeasibility ratio is about $30$ percent for both methods. The results imply that even for a relatively low channel estimation error, MRC-ZF and SVD-ZF may fail to solve the joint beamforming design problem with a large probability. We observe that AO can solve the problem with almost 100 percent feasibility, whereas the TSM feasibility ratio is slightly smaller than that of AO. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.27\textwidth]{P_vs_K.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_K.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $K$. $(N, L, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} Fig.
4 shows the performances as the number of MSs $K$ varies. We observe that the performance loss of all methods compared to the bound increases with $K$. This is due to the fact that the bound can only be achieved when the interference due to undesired users is completely eliminated, which becomes harder as $K$ increases. We see that for large $K$ values, the feasibility ratios of MRC-ZF and SVD-ZF become very small, meaning that these methods cannot be used unless the number of users is small. Although it outperforms MRC-ZF and SVD-ZF, TSM performance also degrades for a large number of users. On the other hand, AO successfully designs beamformers with 100 percent feasibility and requires less power for all $K$ values compared to the other three methods. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_N.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_N.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $N$. $(K, L, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_L.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_L.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $L$. $(K, N, M)=(4, 4, 8), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Fig. 5-6, we observe the effects of the number of RRHs $N$ and the number of RRH antennas $L$. The results show that AO has the best performance in all cases. Its feasibility ratio is always 100 percent in these two simulations and the power difference with the bound is generally less than $5$ dB. The difference becomes smaller as $N$ or $L$ increases. As in the previous cases, MRC-ZF performs better than SVD-ZF and worse than TSM.
We also observe that there is a significant difference in the bound values between $N=2$ and $N=8$ and the same fact is true for $L=2$ and $L=8$. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_vs_M.eps}}} \qquad \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_M.eps}}} \caption{$P$ and $P_{\text{success}}$ vs $M$. $(K, N, L)=(4, 4, 4), \: \gamma=5$ dB, $\gamma_{\text{ch}}=0.01$.} \end{figure} \vspace{-7mm} In Fig. 7, we see the effect of the number of CP antennas $M$. The main observation is that the performance enhancement obtained by increasing $M$ is very limited. Adding an extra antenna to CP mainly affects the power spent in fronthaul transmissions. In our channel model, CP-to-RRH channels are better than RRH-to-MS channels in terms of path-loss, antenna gains and receiver characteristics. This is due to the fact that RRHs are stationary and one can place them by optimizing the corresponding fronthaul channel conditions. Therefore, the portion of $P_{\text{CP}}$ in the total power $P$ is small in general and hence the effect of $M$ on the performance is small compared to the effects of $N$ and $L$. 
\vspace{-4mm} {\renewcommand{\arraystretch}{0.7} \begin{table}[H] \caption{$P$ values in dBW for various quadruples of $(K, N, L, M)$ for $\gamma=5$ dB and $\gamma_{\text{ch}}=0.01$} \centering \begin{tabular}{| c | c | c | c | c | c | c | c | c | c |} \hline $K$ & $N$ & $L$ & $M$ & $P$ (AO) & $P$ (TSM) & $P$ (MRC-ZF) & $P$ (SVD-ZF) & $P$ (Bound) & $P$ (AO) $\: - \: P$ (Bound) \\ \hline $2$ & $2$ & $4$ & $4$ & $27.52$ & $28.15$ & $33.3$ & $33.61$ & $24.43$ & $3.11$ \\ \hline $3$ & $2$ & $4$ & $6$ & $27.15$ & $29.42$ & $30.26$ & $31.23$ & $23.82$ & $3.33$ \\ \hline $4$ & $2$ & $4$ & $8$ & $27.5$ & $29.47$ & $32.41$ & $36.49$ & $24.03$ & $3.47$ \\ \hline $3$ & $3$ & $4$ & $4$ & $24.07$ & $27.02$ & $30.67$ & $31.32$ & $20.54$ & $3.53$ \\ \hline $4$ & $4$ & $4$ & $4$ & $25.69$ & $26.23$ & $35.31$ & $36.62$ & $20.48$ & $5.21$ \\ \hline $3$ & $4$ & $3$ & $4$ & $23.73$ & $25.54$ & $30.44$ & $32.83$ & $19.7$ & $4.03$ \\ \hline $2$ & $4$ & $2$ & $4$ & $23.59$ & $25.11$ & $31.4$ & $33.33$ & $21.28$ & $2.31$ \\ \hline \end{tabular} \end{table} } \vspace{-5mm} In Table II, we compare the performances when the ratios $\dfrac{K}{M}, \dfrac{K}{N}, \dfrac{K}{L}$ are fixed. The first three rows show the cases where $\dfrac{K}{M}, N, L$ are fixed; the first, fourth and fifth rows are related to the case where $\dfrac{K}{N}, M, L$ are fixed; and finally the last three rows correspond to the case where $\dfrac{K}{L}, M, N$ are fixed. We observe that for each of the three cases, the performance loss of the best method, AO, compared to the bound is an increasing function of the number of users $K$. This is due to the fact that achieving the bound requires perfect elimination of the interference due to undesired users, which becomes harder as the number of users increases.
\vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.30\textwidth]{P_suc_vs_gamma_ch_1.eps}}} \qquad \subfloat{{\includegraphics[width=0.27\textwidth]{P_suc_vs_gamma_ch_2.eps}}} \caption{$P_{\text{success}}$ vs $\gamma_{\text{ch}}$. $(K, N, L, M)=(4, 4, 4, 8), \: \gamma=5$ dB.} \end{figure} \vspace{-7mm} Fig. 8 presents the feasibility ratios with respect to the channel estimation quality. We observe that if the channel estimation error is large enough, all methods completely fail in the design process. We conclude that the convex optimization based methods are more robust to channel errors than the methods adapted from known beamforming algorithms. \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \includegraphics[width=0.30\textwidth]{run_times.eps} \caption{Average Complexity Comparison.} \end{figure} \vspace{-7mm} Fig. 9 shows the normalized average run-times of all methods. Here we take the average over all previously described simulations. We observe that the complexity is high for the convex optimization based methods. The average run-time of AO is slightly larger than that of TSM. Among all methods considered, SVD-ZF is the least complex one since it directly finds the solution (if feasible) by solving a linear matrix equation without any solver. On the other hand, its performance is unsatisfactory in most cases. In the second part of the simulations, we observe the power allocation of the users, the power sharing between fronthaul and access links, and the effect of different user SINR thresholds. We consider two scenarios where the RRH and MS locations are fixed. In both cases, there is a CP with $4$ antennas, $2$ RRHs each with $4$ antennas, and $4$ MSs. We only consider the AO method to present the results. The first scenario includes various RRH-to-MS distances and the second one considers a symmetric placement. In Fig.
10, we present the RRH and MS placements of the two scenarios. \vspace{-4mm} \begin{figure}[ht] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.31\textwidth]{cp_rrh_ms_locations.eps}}} \qquad \subfloat{{\includegraphics[width=0.31\textwidth]{cp_rrh_ms_locations2.eps}}} \caption{Scenario 1 (left) and Scenario 2 (right) RRH and MS placements.} \end{figure} \vspace{-7mm} To present the power allocation of the users for both fronthaul and access links, we define \begin{equation} \label{Pow_alloc1} \begin{aligned} P_{\text{CP},k}&=\textbf{v}_k^H\textbf{v}_k, \: P_{\text{RRH},k}=\textbf{v}_k^H \left(\widehat{\textbf{G}}\textbf{W}^H\textbf{W}\widehat{\textbf{G}}^H + \tr\left(\textbf{W}^H \textbf{W} \bm{\Sigma}_1\right)\textbf{I}_M\right) \textbf{v}_k, \\ P_{\text{RRH,amp-noise},k}&=\dfrac{1}{K}\sigma_{\text{RRH}}^2\tr(\textbf{W}^H\textbf{W}), \: \forall k \end{aligned} \end{equation} where $P_{\text{CP},k}, P_{\text{RRH},k}, P_{\text{RRH,amp-noise},k}$ are the fronthaul link power, access link power and RRH amplified noise power for the $k$-th user. Notice that we have \vspace{-4mm} \begin{equation} \label{Pow_alloc2} P_{\text{CP}}=\displaystyle\sum_{k=1}^K P_{\text{CP},k}, \: P_{\text{RRH}} = \displaystyle\sum_{k=1}^K \left(P_{\text{RRH},k} + P_{\text{RRH,amp-noise},k}\right). \end{equation} We know that the RRH receiver noise is amplified and forwarded to the users in AF type relaying. The related term is given in (\ref{r_k_2}) as the first part of the noise term. We divide the RRH amplified noise power equally among the users as shown in (\ref{Pow_alloc1}). \vspace{-4mm} \begin{figure}[H] \centering \captionsetup{justification=centering} \subfloat{{\includegraphics[width=0.32\textwidth]{power_allocation1.eps}}} \qquad \subfloat{{\includegraphics[width=0.32\textwidth]{power_allocation2.eps}}} \caption{Power Allocations for Scenario 1 (left) and Scenario 2 (right).} \end{figure} \vspace{-7mm} In the left part of Fig.
11, we observe the power allocation of the users for Scenario 1. We take equal SINR thresholds $\gamma_k=\gamma=5$ dB for all users. Notice that the fronthaul powers are smaller than the access link powers. This is due to the path-loss and antenna gain model that we use. As the CP and RRHs are stationary, we assume that one can optimize their locations so that the corresponding channel conditions are good. We also assume that the CP antenna array design is more flexible than the RRH and MS equipment, and hence we use higher gain antennas for the CP. We also observe that $P_{\text{RRH},4} > P_{\text{RRH},1}>P_{\text{RRH},2} \approx P_{\text{RRH},3}$. This is expected considering the locations of the users. The distance between MS $4$ and the two RRHs is large, and hence it requires the largest power. On the other hand, since MS $2$ and MS $3$ are each close to an RRH, they require the smallest power. The distance from MS $1$ to both RRHs is at an intermediate level, and hence the corresponding power lies between those of the other MSs. As a final remark, we observe that the RRH amplified noise powers are significant, which shows that a well-optimized network design is needed to obtain sufficiently large user SINRs with AF type relaying. We present the power allocation of the users for Scenario 2 in the right part of Fig. 11. In this case, we use a symmetric placement of RRHs and MSs and consider the effect of different user SINR thresholds by taking $\gamma_1=4, \gamma_2=6, \gamma_3=8, \gamma_4=10$ dB. We observe that as the SINR threshold increases, the corresponding user power in both the fronthaul and access links also increases. The operator can adjust the user SINR thresholds according to the priority of the users. The power required to serve a higher-priority user will be larger, as presented in this example scenario. \vspace{-5mm} \section{Conclusions} In this study, we analyzed the joint beamformer design problem in downlink C-RAN with wireless fronthaul.
We considered the case where AF type relaying is used in the RRHs without baseband processing capability. We assumed that the channel coefficients are available with some additive error with known second order statistics. We derived a novel theoretical lower bound for the total power spent under SINR constraints. We proposed two convex optimization based methods and two other methods adapted from known beamforming strategies to observe the tightness of the bound. We have shown that the first two methods perform better, but their complexities are also higher. In general, the performance of the best method is close to the bound and the difference is less than $1$ dB for some cases. The results show the effectiveness of the bound as well as the performances of various solution techniques. For C-RAN systems, there are other beamforming design techniques that are not analyzed in this study but have been studied in the literature. We have found at least one method performing close to the bound, and this is enough to show the tightness of the proposed bound. As a future work, the approach used in this study to derive a performance bound can be adapted to DF and DCF based relaying and also to the full-duplex RRH case. In all simulations, we observed that the SDR based methods always produce rank-1 results. This fact may be proved in a future study. Finally, one can investigate the necessary conditions for equality in the bound to gain insight into the optimal algorithm. \vspace{-3mm} \appendices \section{Achievability of Rate} We use the idea given in \cite{InformationTheory} to show that the rate $\log_2(1+\text{SINR}_k)$ is achievable for the $k$-th user, where $\text{SINR}_k$ is defined by (\ref{SINR_2}). We find a lower bound to the mutual information $I(r_k; s_k)$ between the received signal $r_k$ and the information signal $s_k$.
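For a scalar Gaussian channel the moment-based lower bound derived below is tight, which gives a quick numerical sanity check. The sketch simulates $r=\sqrt{P}\,s+n$ with hypothetical signal and noise powers and evaluates the bound from sample moments; it should approach $\log_2(1+P/N_0)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
P_sig, N0 = 2.0, 1.0                      # hypothetical signal and noise powers
s = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(N0 / 2)
r = np.sqrt(P_sig) * s + w

c = np.mean(np.conj(r) * s)               # sample estimate of E[r* s]
num = abs(c) ** 2
den = np.mean(np.abs(r) ** 2) * np.mean(np.abs(s) ** 2) - num
rate_bound = np.log2(1 + num / den)       # moment-based rate lower bound
print(rate_bound, np.log2(1 + P_sig / N0))
```

Here $|\mathbb{E}[r^*s]|^2 = P$ and $\mathbb{E}[|r|^2]\mathbb{E}[|s|^2] - |\mathbb{E}[r^*s]|^2 = N_0$, so the bound coincides with $\log_2(1+P/N_0)$ up to sampling noise.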
Using the facts that conditioning decreases entropy $h(\cdot)$, that the entropy is maximized by a Gaussian distribution when the variance is fixed, that the entropy is invariant under translation, and that $s_k$ and $r_k$ are zero-mean, we can write \begin{equation} \label{App1} \begin{aligned} I(r_k; s_k)&=h(s_k)-h(s_k | r_k) = h(s_k)-h(s_k-\alpha r_k | r_k) \geq h(s_k)-h(s_k-\alpha r_k) \\ &\geq \log\left(\pi e \mathbb{E}\left[|s_k|^2\right]\right) - \log\left(\pi e \mathbb{E}\left[|s_k-\alpha r_k|^2\right]\right) = \log\left(\dfrac{\mathbb{E}\left[|s_k|^2\right]}{\mathbb{E}\left[|s_k-\alpha r_k|^2\right]}\right). \end{aligned} \end{equation} Here we assume that $s_k$ is complex Gaussian and $\alpha$ is any complex constant. (\ref{App1}) holds for any $\alpha$; specifically, we choose $\alpha=\mathbb{E}\left[r_k^{*}s_k\right]/\mathbb{E}\left[|r_k|^2\right]$ to get \vspace{-5mm} \begin{equation} \label{App2} I(r_k; s_k) \geq \log\left(1 + \dfrac{|\mathbb{E}\left[r_k^{*}s_k\right]|^2}{\mathbb{E}\left[|r_k|^2\right] \cdot \mathbb{E}\left[|s_k|^2\right] - |\mathbb{E}\left[r_k^{*}s_k\right]|^2}\right). \end{equation} \noindent Using the equation of $r_k$ in (\ref{r_k_2}) and the fact that $\mathbb{E}\left[|s_k|^2\right]=1$, we obtain $|\mathbb{E}\left[r_k^{*}s_k\right]|^2 = P_d$ and $\mathbb{E}\left[|r_k|^2\right] = P_d + P_{I,1}+P_{I,2}+P_n$ where $P_d, P_{I,1}, P_{I,2}, P_n$ are defined in (\ref{P_d}). Therefore we conclude that $I(r_k; s_k)$ is at least $\log_2\left(1+\dfrac{P_d}{P_{I,1}+P_{I,2}+P_n}\right) = \log_2(1+\text{SINR}_k)$ bits. \section{Proof of (\ref{B_bound})} Using (\ref{B5}) and (\ref{B7}), we get \begin{equation} \label{B9} x_1 \geq c_1\left[c_2x_2 + (c_3x_5+c_4)\left(x_3+\dfrac{c_2}{d_1}x_3\right)+c_5\right], \: \: y \geq x_2+x_5+(c_3x_5+c_6)\dfrac{x_3}{d_1}.
\end{equation} (\ref{B6}) and (\ref{B9}) yield \begin{equation} \label{B10} (d_1-c_1c_2)x_2 \geq c_1\left[(c_3x_5+c_4)\left(x_3+\dfrac{c_2}{d_1}x_3\right)+c_5\right] \end{equation} and (\ref{B8}) and (\ref{B9}) yield \begin{equation} \label{B11} \left[\dfrac{x_5d_2}{c_1}-(c_3x_5+c_4)\left(1+\dfrac{c_2}{d_1}\right)\right]x_3 \geq c_2x_2+c_5. \end{equation} (\ref{B10}) implies that $d_1>c_1c_2$. By (\ref{B10}), (\ref{B11}) and some simplifications, we obtain \begin{equation} \label{B12} x_3 \geq \dfrac{d_1c_5}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)} \end{equation} and the denominator in (\ref{B12}) must be positive. Using (\ref{B10}) and (\ref{B12}), we get \begin{equation} \label{B13} x_2 \geq \dfrac{d_2c_5}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)}. \end{equation} Using (\ref{B9}), (\ref{B12}) and (\ref{B13}), we find that \begin{equation} \label{B14} y \geq x_5+\dfrac{d_2c_5+c_5(c_3x_5+c_6)}{\left(\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3\right)x_5-c_4(d_1+c_2)}. \end{equation} Define $x=ax_5-b$ where $a=\dfrac{d_1d_2}{c_1}-c_2d_2-c_3d_1-c_2c_3, \: b=c_4(d_1+c_2)$. Since $x$ is the denominator of (\ref{B12}), it is positive. As $x_5$ and $b$ are positive, we conclude that $a$ is also positive. We can write (\ref{B14}) in terms of $x$ as \begin{equation} \label{B15} y \geq \dfrac{1}{a}\left(b+c_3c_5+x+\dfrac{c_3c_5b+c_5(d_2+c_6)a}{x}\right). \end{equation} Finally, using (\ref{B15}) and the arithmetic-geometric mean inequality, we get the desired result in (\ref{B_bound}).
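The final AM-GM step can be verified numerically: for any positive constants, the right-hand side of (\ref{B15}) as a function of $x$ is minimized at $x=\sqrt{c_3c_5b+c_5(d_2+c_6)a}$. A sketch with hypothetical constants:

```python
import numpy as np

# Hypothetical positive constants standing in for a, b, c3, c5, d2, c6 in (B15)
a_, b_, c3, c5, d2, c6 = 1.5, 0.7, 0.4, 1.2, 0.9, 0.3
q = c3 * c5 * b_ + c5 * (d2 + c6) * a_    # coefficient of the 1/x term

def rhs(x):
    return (b_ + c3 * c5 + x + q / x) / a_

# AM-GM: x + q/x >= 2*sqrt(q), with equality at x = sqrt(q)
y_min = (b_ + c3 * c5 + 2 * np.sqrt(q)) / a_
xs = np.linspace(1e-3, 50, 200001)
print(rhs(xs).min(), y_min)               # grid minimum matches the AM-GM value
```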
\section{Introduction} Group IV transition metals, including titanium (Ti), zirconium (Zr), and hafnium (Hf), and their alloys have attracted enormous research and technological interest due to their excellent properties such as high strength-to-weight ratio, high rigidity-to-weight ratio, low thermal neutron absorption cross section and good corrosion resistance \cite{Zhilyaev2009}. Their narrow d-band, located in the midst of a broad sp-band, is the origin of the scientific interest, and the pressure-induced electron transfer from the sp-band to the d-band is the driving force behind the structural and electronic transitions. As members of the group IV alloys, Zr-Ti alloys have wide applications in the aerospace, medical, and nuclear industries. At ambient conditions, pure zirconium and titanium both crystallize in the hexagonal close-packed (hcp) structure ($\alpha$ phase). For zirconium, experiments \cite{HuiXia1990,HuiXia1991} found that it undergoes a crystallographic phase transition from hcp ($\alpha$ phase) to another hexagonal structure ($\omega$ phase) at a pressure of 2-7 GPa, and that it transforms to the body-centered-cubic structure ($\beta$ phase) at a pressure of 30-35 GPa. For Ti, however, the experimental phase transition sequence at room temperature is $\alpha \rightarrow \omega \rightarrow \gamma \rightarrow \delta$; its $\beta$ phase has not been observed up to 216 GPa \cite{Y.Akahama2001}. Experimentally, the ZrTi system is characterized by full solubility of its components \cite{I.O.Bashkin2000,I.O.Bashkin2003}. As in pure zirconium and titanium, three phases ($\alpha$, $\beta$ and $\omega$) are observed in the ZrTi system. At ambient conditions, the alloys crystallize in the hexagonal close-packed (hcp) structure ($\alpha$ phase), and they transform to the body-centered cubic (bcc) $\beta$ phase at high temperature and to a three-atom hexagonal structure ($\omega$ phase) under pressure.
The aim of the present paper is to use first-principles calculations to theoretically investigate the compositional dependence of the structural, elastic and bond properties of the Zr$_{x}$Ti$_{1-x}$ binary alloy system. \section{Methods} In the present work, the first-principles DFT calculations are performed using the projector augmented wave (PAW) method \cite{Blochl1994} as implemented in the Vienna ab initio simulation package (VASP) \cite{Kresse1996}. To describe the exchange-correlation potential, the Perdew-Burke-Ernzerhof (PBE) \cite{Perdew1996} form of the generalized gradient approximation (GGA) is employed. The Zr 4d$^2$5s$^2$5p$^0$ and the Ti 3d$^2$4s$^2$4p$^0$ orbitals are treated as valence electrons. To obtain accurate results, the plane wave cut-off energy is chosen as 400 eV. The Brillouin-zone integrations are performed using $\Gamma$-centered k-point grids of 25 $\times$ 25 $\times$ 25 for $\alpha$-Zr, 25 $\times$ 25 $\times$ 25 for $\alpha$-Ti, and 15 $\times$ 15 $\times$ 15 for Zr$_{x}$Ti$_{1-x}$ in terms of the Monkhorst-Pack scheme \cite{Monkhorst1976}. The geometries are optimized until the Hellmann-Feynman forces are less than 0.01 eV/{\AA}, and the total energy is converged until the difference becomes smaller than $10^{-5}$ eV. Using the wavefunctions obtained from the DFT calculations, the QUAMBO method \cite{Lu2004,Chan2007,Qian2008,Yao2010} is employed to exactly down-fold the occupied states to a representation in a minimal-basis set without losing any electronic structure information. The constructed orbitals are atomic-like and highly localized, and they are well suited to a chemical bonding analysis of the interaction mechanism between Ti and Zr. The concept of the SQS was first proposed by Zunger et al. \cite{Zunger1990} to mimic disordered (random) solutions. There exists a one-to-one correspondence between a given structure and a set of correlation functions, which is the key to SQS methods.
In the case of a substitutional binary alloy, the correlation function $\prod_{k,l}$ for a figure (cluster) $f(k,l)$ with $k$ vertices separated by an $l$th neighbor distance is defined as follows: $$\prod_{k,l}=\frac{1}{N_{k,l}}\sum_{k,l}\sigma_1\sigma_2\cdots\sigma_k \eqno(1)$$ where $\sigma_k$ is a spin-like variable which takes the value of +1 or -1 depending on whether the atomic site is occupied by an $A$ or a $B$ atom. Specifically, for a random alloy $A_{1-x}B_x$, Eq. (1) reduces simply to ${(2x-1)}^k$. The optimum SQS for a given number of atoms is the one that best matches the correlation functions of the random alloy. In the present work, the SQS models were generated using the Monte Carlo algorithm implemented by van de Walle et al. \cite{VandeWalle2013}. Their pair correlation functions $\Pi_{2,l}$ ($l$ up to the 11th nearest neighbor) are shown in Table 1. For Zr$_{4}$Ti$_{12}$, the $\Pi_{2,l}$ match the random ones well up to $l=6$, and up to $l=8$ for Zr$_{8}$Ti$_{8}$. An example (Zr$_{4}$Ti$_{12}$) of the SQSs generated in the present work is given in Fig. 1. In principle, the point group symmetry of the original alloy is broken by the SQS method. As a result, there are 21 elastic constant elements for an SQS model \cite{Tasnadi2012a}. Traditionally, the energy-strain approach \cite{LePage2001} and the stress-strain approach \cite{LePage2002a} are the two ways of calculating single crystal elastic constants from first-principles calculations. In order to obtain all 21 elastic constants with the energy-strain approach, we would need to impose 21 independent deformations on the original structure, which requires extreme computing power. Therefore, the stress-strain approach was adopted in the present work.
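The random-alloy value ${(2x-1)}^k$ quoted above for Eq. (1) can be checked with a short Monte Carlo sketch: for independent pair occupations at concentration $x=0.25$ (as in Zr$_4$Ti$_{12}$), the pair correlation should approach $(2x-1)^2=0.25$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = 0.25                                   # concentration of one species, e.g. Zr4Ti12
n = 2_000_000
# spin-like variables: +1 with probability x, -1 otherwise, on independent sites
s1 = np.where(rng.random(n) < x, 1, -1)
s2 = np.where(rng.random(n) < x, 1, -1)
pair_corr = (s1 * s2).mean()
print(pair_corr, (2 * x - 1) ** 2)         # both close to 0.25
```

The SQS search minimizes the mismatch between a finite cell's correlations and these ideal random-alloy values.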
When a set of small strains $\bm\varepsilon=(\varepsilon_1\ \varepsilon_2\ \varepsilon_3\ \varepsilon_4\ \varepsilon_5\ \varepsilon_6)$ (where $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$ are the normal strains and $\varepsilon_4$, $\varepsilon_5$, and $\varepsilon_6$ are the shear strains in Voigt's notation) is imposed on a crystal, the deformed lattice vectors ($\overline {\bm{R}}$) are obtained by transforming the original ones ($\bm{R}$) as follows: \begin{equation} \overline{\bm{R}}=\bm{R} \left( \begin{array}{ccc} 1+\varepsilon_1 & \varepsilon_6/2 & \varepsilon_5/2\\ \varepsilon_6/2 & 1+\varepsilon_2 & \varepsilon_4/2\\ \varepsilon_5/2 & \varepsilon_4/2 & 1+\varepsilon_3\\ \end{array} \right) \end{equation} As a result, a set of stresses $\bm{t}=(t_1\ t_2\ t_3\ t_4\ t_5\ t_6)$ is determined by first-principles calculations. In the present work, we apply the following six linearly independent sets of strains \cite{Shang2007} \begin{equation} \left( \begin{array}{cccccc} x&0&0&0&0&0\\ 0&x&0&0&0&0\\ 0&0&x&0&0&0\\ 0&0&0&x&0&0\\ 0&0&0&0&x&0\\ 0&0&0&0&0&x\\ \end{array} \right) \end{equation} with $x=\pm{0.007}$. Using a $6\times6$ elastic constants matrix, $\bm{C}$, with components $C_{ij}$ in Voigt's notation, the generalized Hooke's law is expressed as $\bm{t}=\bm{\varepsilon}\bm{C}$. Consequently, the stiffness constants matrix is obtained from $$\bm{C}={\bm{\varepsilon}}^{-1}\bm{t},$$ where ``$-1$'' represents the pseudo-inverse, which can be computed by the singular value decomposition method.
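The stress-strain recovery $\bm{C}={\bm{\varepsilon}}^{-1}\bm{t}$ can be illustrated in a few lines: given a known hexagonal stiffness matrix (hypothetical values, not the computed ones), the stresses produced by the twelve strain states ($x=\pm 0.007$) recover the full matrix through the pseudo-inverse.

```python
import numpy as np

# Hypothetical hcp stiffness matrix (GPa) in Voigt notation, for illustration only
C11, C12, C13, C33, C44 = 160.0, 90.0, 65.0, 180.0, 46.0
C66 = (C11 - C12) / 2
C_true = np.array([[C11, C12, C13, 0, 0, 0],
                   [C12, C11, C13, 0, 0, 0],
                   [C13, C13, C33, 0, 0, 0],
                   [0, 0, 0, C44, 0, 0],
                   [0, 0, 0, 0, C44, 0],
                   [0, 0, 0, 0, 0, C66]])

x = 0.007
eps = np.vstack([x * np.eye(6), -x * np.eye(6)])   # the twelve +/-x strain states
t = eps @ C_true                                   # stresses a DFT run would return
C_fit = np.linalg.pinv(eps) @ t                    # least-squares recovery of C
print(np.allclose(C_fit, C_true))
```

With real DFT stresses the recovery is a least-squares fit, and the small deviations from perfect hcp symmetry noted for the SQS models show up as nonzero off-pattern elements of `C_fit`.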
Finally, we get the macroscopic hcp elastic constants, $\overline{C}_{11},\overline{C}_{12}, \overline{C}_{13},\overline{C}_{33},\overline{C}_{44}$, by averaging \cite{Moakher2006} \begin{equation} \begin{array}{l} \overline{C}_{11} = 3(C_{11} + {C_{22}})/8+C_{12}/4 + {C_{66}}/2 \\ \overline{C}_{12} = ({C_{11}} + {C_{22}})/8+ 3{C_{12}}/4 - {C_{66}}/2 \\ \overline{C}_{13} = ({C_{13}} + {C_{23}})/2 \\ \overline{C}_{33} = {C_{33}} \\ \overline{C}_{44} = ({C_{44}} + {C_{55}})/2 \\ \end{array} \end{equation} \section{Results and discussion} \subsection{Elastic properties} Elastic constants are very important because they measure the resistance of a solid to external stress and determine its mechanical properties. All independent elastic constants of Zr$_{x}$Ti$_{1-x}$ (x=0, 0.25, 0.5, 0.75, 1) are calculated using the strain-stress method in the present work, and the results are summarized in Table 2. Small deviations from a perfect hcp structure are observed in the elastic tensors of the SQS models. These elastic constants decrease as the Zr content increases, except for $C_{12}$, $C_{13}$, and $C_{23}$. According to Eq. (3), the averaged $C_{11}, C_{12}, C_{13}, C_{33}, C_{44}$ are obtained for the hcp crystals. The obtained constants for all compositions meet the requirements of the Born stability criteria \cite{Nye1985} for the hcp system $$C_{11}>0, \: C_{44}>0, \: C_{11}>|C_{12}|, \: (C_{11}+2C_{12})C_{33}>2{C^2_{13}}.$$ The polycrystalline bulk modulus B and shear modulus G are deduced from the Voigt-Reuss-Hill (VRH) approach \cite{Hill1952}. The Young's modulus E and Poisson's ratio ($\upsilon$) are calculated by the following formulas: $$E = 9BG/(3B+G), \quad \upsilon = (3B-2G)/[2(3B+G)].$$ The results and B/G are listed in Table 3. The bulk moduli for all compositions show an excellent agreement with those obtained by fitting to a Birch-Murnaghan equation of state (listed in Table 1), which demonstrates the consistency and reliability of our calculations.
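The polycrystalline averages can be reproduced from any $6\times6$ stiffness matrix using the general Voigt and Reuss expressions followed by the Hill average; the sketch below uses hypothetical hcp constants of the same kind as those in Table 2, not the paper's computed values.

```python
import numpy as np

# Hypothetical averaged hcp constants (GPa), for illustration only
C11, C12, C13, C33, C44 = 160.0, 90.0, 65.0, 180.0, 46.0
C66 = (C11 - C12) / 2
C = np.array([[C11, C12, C13, 0, 0, 0],
              [C12, C11, C13, 0, 0, 0],
              [C13, C13, C33, 0, 0, 0],
              [0, 0, 0, C44, 0, 0],
              [0, 0, 0, 0, C44, 0],
              [0, 0, 0, 0, 0, C66]])
S = np.linalg.inv(C)   # compliance matrix

# Voigt (upper) and Reuss (lower) bounds, then the Hill averages
B_V = C[:3, :3].sum() / 9
G_V = ((C[0, 0] + C[1, 1] + C[2, 2]) - (C[0, 1] + C[0, 2] + C[1, 2])
       + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15
B_R = 1 / S[:3, :3].sum()
G_R = 15 / (4 * (S[0, 0] + S[1, 1] + S[2, 2]) - 4 * (S[0, 1] + S[0, 2] + S[1, 2])
            + 3 * (S[3, 3] + S[4, 4] + S[5, 5]))
B, G = (B_V + B_R) / 2, (G_V + G_R) / 2

E = 9 * B * G / (3 * B + G)
nu = (3 * B - 2 * G) / (2 * (3 * B + G))
print(B, G, E, nu, B / G)
```

For these sample constants $B/G$ comes out well above Pugh's 1.75 threshold, consistent with the ductile behavior discussed next.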
Additionally, the deduced bulk moduli B change smoothly and decrease with increasing amount of Zr, while the Young's modulus E and shear modulus G show the same trend until x=0.75. Empirically, there are two common ways to judge whether a material is ductile or brittle. According to Pugh's suggestion, a higher ratio ($>$ 1.75) of bulk to shear moduli, B/G, indicates ductile behavior \cite{Pugh1954}. The other is Poisson's ratio: the transition from brittleness to ductility occurs when $\upsilon$$\approx$1/3 \cite{Frantsevich1983}. Poisson's ratio, $\upsilon$, and the B/G ratio as a function of Zr content, x, are listed in Table 3. Both criteria confirm the ductile behavior of Zr$_{x}$Ti$_{1-x}$ over the whole composition range. The values of B/G and $\upsilon$ for the ZrTi alloys are higher than those for the pure metals, indicating that the ductility of Zr is enhanced by alloying with Ti. \subsection{Mulliken charge} In order to understand the bonding between atoms, the atomic Mulliken charges of the Zr-Ti binary alloys are investigated, and the results are given in Table 4, which clearly indicates that Ti atoms gain electrons while Zr atoms lose electrons. To investigate the origin of the charge transfer, we further examine the relationship between the charge transfer of an atom and the number of atoms of the other element among its nearest neighbors by fitting the data with a straight line. The results are shown in Fig. 2. The relationship is clearly linear, so we conclude that the charge transfer is mainly determined by the number of atoms of the other element among the nearest neighbors. \section{Conclusion} The structural, elastic, and bonding properties of the Zr$_{x}$Ti$_{1-x}$ alloys have been studied using first-principles calculations. The SQS method is adopted to mimic the random ZrTi system. We find that hcp-structured Zr$_{x}$Ti$_{1-x}$ is a ductile material over the whole composition range.
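The straight-line fit of charge transfer against neighbor count can be reproduced with an ordinary least-squares polynomial fit; the data below are illustrative placeholders, not the values from Table 4 or Fig. 2:

```python
import numpy as np

# Hypothetical charge transfer (in units of e) vs. number of
# unlike nearest neighbors; purely illustrative numbers.
n_unlike = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
transfer = np.array([0.05, 0.10, 0.15, 0.20, 0.25])

# Degree-1 least-squares fit: transfer ~ slope * n_unlike + intercept.
slope, intercept = np.polyfit(n_unlike, transfer, 1)
predicted = slope * n_unlike + intercept
```

A small residual between `predicted` and the data would support the conclusion that the transfer is controlled mainly by the unlike-neighbor count.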
The bulk modulus, B, Young's modulus, E, and shear modulus, G, all decrease with increasing Zr content. Alloying enhances the ductility of the pure metals Zr and Ti. From the Mulliken charge analysis, we conclude that the amount of charge transfer is determined by the number of atoms of the other element among the nearest neighbors, with a linear relationship between them. \newpage \noindent \textbf{Acknowledgments}\\ \noindent This work was supported by the National Science Foundation of China under Grant Nos. 11275229 \& NSAF U1230202, special Funds for Major State Basic Research Project of China (973) under Grant No. 2012CB933702, Hefei Center for Physical Science and Technology under Grant No. 2012FXZY004, Anhui Provincial Natural Science Foundation under Grant No. 1208085QA05, and Director Grants of CASHIPS. Part of the calculations were performed at the Center for Computational Science of CASHIPS, the ScGrid of Supercomputing Center, and the Computer Network Information Center of the Chinese Academy of Sciences. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{intro} Fermi surface nesting is a very popular and important concept in condensed matter physics~\cite{Khomskii_book2010}. The existence of two fragments of the Fermi surface, which can be matched upon translation by a certain reciprocal lattice vector (nesting vector), entails an instability of a Fermi-liquid state. A superstructure or additional order parameter related to the nesting vector is generated due to the instability. The nesting is widely invoked for the analysis of charge density wave (CDW) states~\cite{Gruner_RMP1988_CDW,Monceau_AdvPh2012_CDW}, spin density wave (SDW) states~\cite{Overhauser_PR1962_SDW,Gruner_RMP1994_SDW}, mechanisms of high-$T_c$ superconductivity~\cite{RuvaldsPRB1995_nesting_supercond, GabovichSST2001_SDW-CDW_supercond,TerashimaPNAS2009_nesting_Fe_based}, fluctuating charge/orbital modulation in magnetic oxides~\cite{ChuangScience_SO_fluct}, chromium and its alloys~\cite{shibatani_first1969,ShibataniJPSJ1969_mag_field_chromium, shibatani1970,Rice}, etc. It is important to emphasize that in a real material the nesting may be imperfect, i.e. the Fermi surface fragments can only match approximately. One of the earliest studies of imperfect nesting was performed by Rice~\cite{Rice} in the context of chromium and its alloys (see also the review articles Refs.~\onlinecite{Tugushev_UFN1984_SDW_Cr,Fawcet_RMP1988_SDW_Cr}). The notion of nesting and related concepts were broadly employed in the recent studies of iron-based pnictides~\cite{eremin_chub2010,Chubukov2009,graser2009,vavilov2010, kondo2010,brydon2011,timm2012}. For example, Ref.~\onlinecite{eremin_chub2010} argued that the deviation from the perfect nesting lifts the degeneracy between several competing magnetically ordered states. The influence of the imperfect nesting on the phase coexistence was discussed in Ref.~\onlinecite{vavilov2010}. Many theoretical investigations assume from the outset the homogeneity of the electron state.
This assumption may be violated in systems with imperfect nesting. Indeed, it was demonstrated that the imperfect-nesting mechanism can be responsible for the nanoscale phase separation in quasi-one-dimensional metals~\cite{tokatly1992}, chromium alloys~\cite{WeImperf}, iron-based superconductors~\cite{Sboychakov_PRB2013_PS_pnict}, and doped bilayer graphene~\cite{ourBLGreview, Sboychakov_PRB2013_MIT_AAgraph, Sboychakov_PRB2013_PS_AAgraph}. Several experiments on pnictides~\cite{PSexp1,PSexp2,PSexp3,goko2009,phasep_exp2012,phasep_exp2014, phasep_exp2016} and chalcogenides~\cite{PSexp4,bianconi_phasep2011,bianconi_phasep2015} support the possibility of phase separation (see also the review article~[\onlinecite{Dagotto_review2012}]). In a similar context of imperfect nesting, studies of spin and charge inhomogeneities are currently active in the physics of low-dimensional compounds~\cite{Narayanan_RRL2014,Campi_Nature2015,Chen_PRB2014}. Other types of inhomogeneous states (``stripes'', domain walls, impurity levels) were also discussed in the literature in the framework of analogous models~\cite{zaanen_stripes1989,tokatly1992,Akzyanov2015}. Moreover, it was shown that the possibility of SDW ordering in systems with itinerant charge carriers results in very rich and complicated phase diagrams involving phase-separation regions~\cite{Igoshev_JPCM2015,Igoshev_JMMM2015}. An applied magnetic field $\mathbf{B}$ alters the quasiparticle states, changing the nesting conditions. In the present paper, we explore the physical consequences of the applied magnetic field for weakly-correlated electron systems with imperfect nesting. In a generic situation, the magnetic field enters the Hamiltonian both via the Zeeman term, and via the substitution $\hat{\mathbf{p}} \rightarrow \hat{\mathbf{p}}+(e/c)\mathbf{A}$. The Zeeman term lifts the degeneracy with respect to the spin projection. Both electron and hole Fermi surface sheets become split into two spin-polarized components.
As a result, two different SDW order parameters corresponding to spin projections parallel and antiparallel to the direction of $\mathbf{B}$ can be constructed. When the electron-hole symmetry is absent, that is, when the electron and hole pockets are not identical, the effect of the Zeeman term is especially pronounced. In particular, new antiferromagnetic (AFM) phases, both homogeneous and inhomogeneous, appear in the phase diagram. The boundaries between different phases exhibit strong dependence on $B$. For example, such effects were found in the analysis of the magnetic phase diagram of doped rare-earth borides~\cite{Sluchanko_PRB2015}. We also study separately phenomena caused by the Landau quantization. In the range of high magnetic fields, Landau quantization leads to characteristic oscillations of the SDW order parameters and of the N\'eel temperature as a function of $B$. However, these oscillations are most clearly pronounced in the case of symmetric electron and hole pockets. Otherwise, the nesting and, hence, the SDW ordering would be completely destroyed by the magnetic field before any detectable oscillations would occur. Similar oscillatory effects are well-known in the context of quasi-one-dimensional compounds~\cite{GorkovLebed_JdePhL1984_magn_field_stab1D, Lebed_JETP1985_magn_field_PhDiag,Yakovenko_JETP1987_field_ind_PhTr, Azbel_PRB1989_field_ind_SDW,Pudalov_Springer2008_magn_field_SDW, Lebed_Springer2008_magn_field_SDW}. However, the role of the magnetic field in nesting-related phenomena in the usual three-dimensional materials has received only limited attention~\cite{YamasakiJPSJ1967_mag_field_SDW, ShibataniJPSJ1969_mag_field_chromium}. This paper is organized as follows. In Section~\ref{model}, we formulate the model. Section~\ref{SectLowB} deals with the effects related to the Zeeman term. Phenomena occurring due to the Landau quantization are treated in Section~\ref{SectHighB}. A discussion of the results is given in Section~\ref{disco}.
Some details of the calculations are presented in the Appendix. \section{Model} \label{model} \subsection{Hamiltonian} \label{subsect::hamilt} The model under study is schematically illustrated in Fig.~\ref{band}. It describes two bands: an electronic band ($a$) and a hole band ($b$). The hole Fermi surface coincides with the electron Fermi surface after a translation by a reciprocal lattice vector $\mathbf{Q}_0$. The quasiparticles interact with each other via a short-range repulsive potential. Formally, the Hamiltonian is represented as \begin{equation} \label{ham_summa} \hat{H}=\hat{H}_e+\hat{H}_{\textrm{int}}\,, \end{equation} where $\hat{H}_e$ is the single-electron term, and $\hat{H}_{\textrm{int}}$ corresponds to the interaction between quasiparticles. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Fig1.eps} \caption{(Color online) Band structure of the electron model in an applied magnetic field. The magnetic field lifts the degeneracy of the electron-like ($a$) and hole-like ($b$) bands with respect to the electron spin. The red arrows indicate the interband coupling giving rise to the order parameters. The splitting into Landau levels is not shown. \label{band} } \end{figure} Regarding the single-electron term, we assume a quadratic dispersion for both bands and use the Wigner--Seitz approximation. Specifically, in the electron band, the wave vector $\mathbf{k}$ is confined within a sphere of finite radius centered at zero, and in the hole band, such sphere is centered at $\mathbf{Q}_0$. The kinetic energies of these states are spread between the minimum values (denoted by $\varepsilon^{a,b}_{\textrm{min}}$) and maximum values $\varepsilon^{a,b}_{\textrm{max}}$, see Fig.~\ref{band}. 
Thus, the energy spectra for the electron and hole pockets, measured relative to the Fermi energy $\mu$, have the form ($\hbar = 1$) \begin{eqnarray} \label{spectra} &&\varepsilon^a(\mathbf{k}) = \frac{\mathbf{k}^2}{2m_a} + \varepsilon^{a}_{\textrm{min}}-\mu, \quad \varepsilon^{a}_{\textrm{min}}<\varepsilon^a<\varepsilon^{a}_{\textrm{max}}, \\ &&\varepsilon^b(\mathbf{k}+\mathbf{Q}_0) = -\frac{\mathbf{k}^2}{2m_b}+\varepsilon^{b}_{\textrm{max}}-\mu, \quad \varepsilon^{b}_{\textrm{min}} < \varepsilon^b < \varepsilon^{b}_{\textrm{max}}. \nonumber \end{eqnarray} The nesting conditions mean that for some $\mu=\mu_0$, the Fermi surfaces of the $a$ and $b$ bands coincide after a translation by the vector $\mathbf{Q}_0$, and both Fermi spheres are characterized by the single Fermi momentum $k_F$. Using Eqs.~\eqref{spectra}, we readily obtain \begin{equation} \label{nesting} \!\!k_F^2\!=\!\frac{2m_am_b}{m_a\!+\!m_b}\left(\varepsilon^{b}_{\textrm{max}} \!-\!\varepsilon^{a}_{\textrm{min}}\right),\,\, \mu_0\!=\!\frac{m_b\varepsilon^{b}_{\textrm{max}}\!+\!m_a \varepsilon^{a}_{\textrm{min}}}{m_a\!+\!m_b}. \end{equation} Below, we will measure the momentum of the $b$ band from the nesting vector $\mathbf{Q}_0$, that is, we replace $\varepsilon^b(\mathbf{k}+\mathbf{Q}_0)\to\varepsilon^b(\mathbf{k})$ in Eq.~\eqref{spectra}. For perfect electron--hole symmetry, when $m_a=m_b=m$ and $\varepsilon^{b}_{\textrm{max}}=-\varepsilon^{a}_{\textrm{min}}$, we obtain $\mu_0=0$. 
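Equation \eqref{nesting} follows from requiring $\varepsilon^a=\varepsilon^b=0$ on the common Fermi sphere at $\mu=\mu_0$; the algebra can be checked numerically with arbitrary illustrative band parameters (values below are ours, in units where $\hbar=1$):

```python
# Arbitrary illustrative band parameters; only the algebra is being checked.
m_a, m_b = 1.0, 1.5
e_min_a, e_max_b = -2.0, 3.0

# Eq. (nesting): common Fermi momentum squared and chemical potential mu_0.
kF2 = 2*m_a*m_b/(m_a + m_b) * (e_max_b - e_min_a)
mu0 = (m_b*e_max_b + m_a*e_min_a)/(m_a + m_b)

# Both dispersions of Eq. (spectra) vanish at k = k_F when mu = mu_0.
eps_a = kF2/(2*m_a) + e_min_a - mu0
eps_b = -kF2/(2*m_b) + e_max_b - mu0
```

Both residuals vanish identically, confirming that the two Fermi spheres coincide after the translation by $\mathbf{Q}_0$.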
In an applied uniform dc magnetic field $\mathbf{B}$, the single-electron part of our model can be written as \begin{equation} \label{ham_e} \hat{H}_e=\sum_{\alpha\sigma}\int\!d^3x\,\psi^\dag_{\alpha\sigma} (\mathbf{x})\hat{H}_{\alpha \sigma}\psi_{\alpha\sigma}(\mathbf{x})\,, \end{equation} where [see Eqs.~\eqref{spectra}] \begin{eqnarray} \label{ham_e_ham} \nonumber \hat{H}_{a\sigma} &=& \frac{\left(\hat{\mathbf{p}}+\frac{e}{c}\mathbf{A}\right)^2}{2m_a} + \sigma g_a\omega_a + \varepsilon_{\textrm{min}}^a-\mu, \\ \hat{H}_{b\sigma} &=& -\frac{\left(\hat{\mathbf{p}}+\frac{e}{c}\mathbf{A}\right)^2}{2m_b} + \sigma g_b\omega_b+\varepsilon_{\textrm{max}}^b-\mu. \end{eqnarray} In these equations, $\alpha=a,b$, $\mathbf{\hat{p}}=-i\bm{\nabla}$ is the momentum operator, $\sigma=\pm 1$ is the spin projection, $\omega_\alpha=eB/cm_\alpha$ are the cyclotron frequencies for the electron and hole bands, and $g_{\alpha}$ are the corresponding Land\'e factors. We assume that the magnetic field is directed along the $z$ axis and choose the Landau gauge for the vector potential, $\mathbf{A}=(-By,0,0)$. The second term in Eq.~\eqref{ham_summa} describes the interaction between electrons and holes. For treating the SDW instability, it is sufficient to keep only the interaction between the $a$ and $b$ bands. The neglected intraband contributions can only renormalize the parameters. We also assume that this interaction is a short-range one. Thus, we can write \begin{equation} \label{ham_int} \hat{H}_{\textrm{int}}\! =\! V\sum_{\sigma\sigma'} \int\!{d^3x\, \psi^\dag_{a\sigma}(\mathbf{x}) \psi_{a\sigma}(\mathbf{x}) \psi^\dag_{b\sigma'}(\mathbf{x}) \psi_{b\sigma'}(\mathbf{x}) }\,. \end{equation} The coupling constant $V$ is positive, which corresponds to repulsion. \subsection{Single-electron spectrum in a magnetic field} \label{subsect::non-interact} Let us start with a brief discussion of the properties of the single-electron Hamiltonian. 
When the magnetic field is zero, the single-electron spectrum consists of two bands of free fermions with two-fold spin degeneracy. In a non-zero applied magnetic field ${\bf B}$, the operator $\psi_{\alpha\sigma}(\mathbf{x})$ can be expressed as a series expansion in terms of eigenfunctions of Hamiltonian \eqref{ham_e}, \begin{equation} \label{psi_expand} \psi_{\alpha\sigma}(\mathbf{x}) = \sum_{\mathbf{p}n} {\frac{e^{i(p_xx+p_zz)}}{\sqrt{{\cal V}^{2/3}l_B}}}\, \chi_n\!\! \left(\frac{y-p_xl_B^2}{l_B}\right) \psi_{\mathbf{p}n\alpha\sigma}\,, \end{equation} where $\psi^{\phantom{\dag}}_{\mathbf{p}n\alpha\sigma}$ is the annihilation operator for an electron in band $\alpha$ with 2D momentum $\mathbf{p}=(p_x,p_z)$ and spin projection $\sigma$ at the Landau level $n$, symbol ${\cal V}$ denotes the system volume, $l_B=\sqrt{c/eB}$ is the magnetic length, \begin{equation} \label{hi_n} \chi_n(\xi)=\frac{1}{\sqrt{2^nn!\sqrt{\pi}}}e^{-\xi^2/2}H_n(\xi)\,, \end{equation} and $H_n(\xi)$ is the Hermite polynomial of degree $n$. In this basis, the Hamiltonian can be expressed as \begin{equation} \label{ham_e1} \hat{H}_e = \sum_{\mathbf{p}n\alpha\sigma}\! \varepsilon_{\alpha \sigma}(p_z,n) \psi^\dag_{\mathbf{p}n\alpha\sigma} \psi^{\phantom{\dag}}_{\mathbf{p}n\alpha\sigma}\,, \end{equation} where the single-particle eigenenergies are \begin{eqnarray} \label{E_alpha} \varepsilon^a_{\sigma}(p_z,n) &\!\!=\!&\! \omega_a\!\left(n+\frac{1}{2}+\sigma g_a\right) + \frac{p_z^2}{2m_a}+\varepsilon^{a}_{\textrm{min}}-\mu, \\ \varepsilon^b_{\sigma}(p_z,n) &\!\!=\!&\! -\omega_b\!\left(n+\frac{1}{2}-\sigma g_b\right) -\frac{p_z^2}{2m_b}+\varepsilon^{b}_{\textrm{max}}-\mu. \nonumber \end{eqnarray} The spectrum consists of four bands (see Fig.~\ref{band}) since the Zeeman term (the term, proportional to $\sigma$) lifts the degeneracy with respect to the electron spin. 
\subsection{Energy scales} \label{subsect::scales} The energy spectrum of the model is characterized by two single-particle energy scales. The first is the Fermi energy $\varepsilon_{F\alpha}=k_F^2/2m_\alpha$, and the second is $\omega_{\alpha}$, which is the distance between the Landau levels in band $\alpha$. Furthermore, we assume that $\varepsilon_{Fa}\approx \varepsilon_{Fb}$ and $\omega_{a}\approx\omega_{b}$. The energy scale associated with the interactions will be characterized by the value of a spectral gap $\Delta_0$. The latter parameter is defined as follows. When $\mu\approx\mu_0$, the nesting between the two sheets of the Fermi surface is nearly perfect. It is known that, under such conditions, the interaction between the electron- and hole-like bands opens a gap $\Delta(T,B)$ in the electron spectrum. The value of the gap at zero temperature $T=0$ and zero magnetic field $B=0$ will be denoted as $\Delta_0 = \Delta(0,0)$. Below we consider the case $\Delta_0\ll\varepsilon_{F\alpha}$, which corresponds to a weak electron-hole coupling. We also classify the magnetic field as low if $\omega_{\alpha}\lesssim\Delta_0$, and high if $\omega_{\alpha}\gtrsim\Delta_0$. The Landau quantization is of importance in the high-field range, whereas at low fields, it can be neglected. In the regime of low magnetic fields considered in the next Section~\ref{SectLowB}, we neglect any corrections associated with the small ratio $\omega_{\alpha}/\varepsilon_{F\alpha}$, while for $\omega_{\alpha}\gtrsim\Delta_0$ (this regime is considered in Section~\ref{SectHighB}), we take into account these corrections in the leading order, which turns out to be of the order of $(\omega_{\alpha}/\varepsilon_{F\alpha})^{1/2}$. \section{Electron-hole coupling: Low magnetic field} \label{SectLowB} \subsection{Main definitions} At low magnetic fields, we can neglect the effect of the Landau quantization on the electron spectrum and take into account only the Zeeman splitting.
In this approximation, the single-electron Hamiltonian \eqref{ham_e1} has the form \begin{equation} \label{SEHam} \hat{H}_e=\sum_{\mathbf{k}\alpha\sigma}\!\varepsilon_{\alpha\sigma} (\mathbf{k})\psi^\dag_{\mathbf{k}\alpha\sigma} \psi^{\phantom{\dag}}_{\mathbf{k}\alpha\sigma}\,, \end{equation} where $\psi^\dag_{\mathbf{k}\alpha\sigma}$ and $\psi^{\phantom{\dag}}_{\mathbf{k}\alpha\sigma}$ are the creation and annihilation operators of an electron in band $\alpha$ with (3D) momentum $\mathbf{k}$ and spin projection $\sigma$, while the electron spectra now read \begin{eqnarray} \label{E_alpha-LowB_kF} \nonumber \varepsilon^a_{\sigma}(\mathbf{k}) &=& \frac{k^2-k^2_F}{2m_a}+ \sigma g_a\omega_a-\delta\mu\,, \\ \varepsilon^b_{\sigma}(\mathbf{k}) &=& -\frac{k^2-k^2_F}{2m_b}+ \sigma g_b\omega_b-\delta\mu\,, \end{eqnarray} where $\delta\mu=\mu-\mu_0$. If the applied magnetic field is zero, the commensurate SDW order parameter can be written as \begin{equation} \label{rice} \Delta=\frac{V}{{\cal V}} \sum_{\mathbf{k}} \left\langle \psi^\dag_{\mathbf{k}a\sigma} \psi^{\phantom{\dag}}_{\mathbf{k}b\bar{\sigma}} \right\rangle\,, \end{equation} where $\bar{\sigma}$ means $-\sigma$. This order parameter is degenerate with respect to spin. If ${\bf B} \ne 0$, this degeneracy is lifted and we introduce a two-component order parameter corresponding to the nesting vectors shown by the arrows in Fig.~\ref{band} \begin{equation} \label{order_lowB} \Delta_\uparrow \!=\! \frac{V}{{\cal V}}\! \sum_{\mathbf{k}}\! \left\langle\! \psi^\dag_{\mathbf{k}a\uparrow} \psi^{\phantom{\dag}}_{\mathbf{k}b\downarrow}\! \right\rangle,\,\,\, \Delta_\downarrow \!=\! \frac{V}{{\cal V}}\! \sum_{\mathbf{k}}\! \left\langle\! \psi^\dag_{\mathbf{k}a\downarrow} \psi^{\phantom{\dag}}_{\mathbf{k}b\uparrow}\! \right\rangle\,. 
\end{equation} The mean-field spectrum of the model has the form \begin{equation} \label{Emf_LB} E^\sigma_{1,2}(\mathbf{k}) = \frac{\varepsilon_\sigma^a(\mathbf{k}) + \varepsilon_{-\sigma}^b(\mathbf{k})}{2} \pm \sqrt{\Delta_\sigma^2 +\! \left( \frac{\varepsilon_\sigma^a(\mathbf{k}) - \varepsilon_{-\sigma}^b(\mathbf{k}) }{2} \right)^{\!\!2} }. \end{equation} Using these spectra, we can write the grand potential of the system in the mean-field approximation as a sum of two ``decoupled'' terms $\Omega = \Omega_\uparrow + \Omega_\downarrow$, where ``partial'' grand potentials are equal to \begin{eqnarray} \label{GrPot} \Omega_\sigma = {\cal V}\left[\!\frac{\Delta^2_\sigma}{V}- T\!\!\sum_{s=1,2}\int\!\frac{d^3\mathbf{k}}{(2\pi)^3} \ln{\!\left(1\!+\!e^{-E_s^\sigma(\mathbf{k})/T}\right)}\!\right]\!. \end{eqnarray} The order parameters are found by minimizing $\Omega$ with respect to $\Delta_{\sigma}$. \subsection{SDW order parameters} The case when the electron and hole bands are perfectly symmetric is, of course, the simplest. In such a situation, however, the effect of weak magnetic fields on the electron spectrum is zero, as will become evident below. Thus, we should introduce some electron-hole asymmetry to obtain non-trivial results in the low-field range. Qualitatively, the particular source of the asymmetry is unimportant. Here, we assume for simplicity that $m_a=m_b=m$ (hence, $\omega_a=\omega_b=\omega_H$ and $\varepsilon_{Fa}=\varepsilon_{Fb}=\varepsilon_F$), but $g_a\neq g_b$. It is also assumed that the difference $g_a-g_b$ is of the same order as $g_a$ and $g_b$.
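The two mean-field branches of Eq.~\eqref{Emf_LB} are the eigenvalues of a $2\times2$ block coupling the bands; a minimal sketch (any consistent energy units; the function name is ours):

```python
import numpy as np

def mean_field_bands(eps_a, eps_b, delta):
    """Eigenvalues E_1, E_2 of the 2x2 mean-field block coupling the
    electron branch eps_a and the hole branch eps_b via the gap delta."""
    avg = 0.5 * (eps_a + eps_b)
    root = np.sqrt(delta**2 + (0.5 * (eps_a - eps_b))**2)
    return avg + root, avg - root
```

For `delta = 0` the branches reduce to the bare dispersions, while for symmetric branches `eps_a = -eps_b` they take the familiar BCS-like form $\pm\sqrt{\Delta^2+\xi^2}$.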
We rewrite Eqs.~\eqref{E_alpha-LowB_kF} in the following convenient form \begin{eqnarray} \label{E_alpha-LowB_ksig} \varepsilon^a_{\sigma}(\mathbf{k}) &=& \left(\frac{k^2}{2m}-E_{F\sigma}\right)-\mu_\sigma, \nonumber \\ \varepsilon^b_{-\sigma}(\mathbf{k}) &=& -\left(\frac{k^2}{2m} -E_{F\sigma}\right)-\mu_\sigma, \end{eqnarray} where the following notation is used \begin{eqnarray} \label{notations} g &=& \frac{g_a+g_b}{2},\quad \Delta g = \frac{g_a-g_b}{2}, \nonumber \\ E_{F\sigma} &=& \frac{k_F^2}{2m}-\sigma g\omega_H, \quad \mu_\sigma=\delta\mu-\sigma \Delta g\omega_H. \end{eqnarray} As we stated above, in the low-field range, we neglect corrections of the order of $\omega_H/E_F$, since $\omega_H\ll\Delta_0\ll E_F$. Then, we take into account only terms of the order of  $\omega_H/\Delta_0$. Expanding the spectra in Eqs.~\eqref{E_alpha-LowB_ksig} near the Fermi momentum, we obtain \begin{eqnarray} \label{pmE_PEH_LB} \varepsilon_\sigma^a(\mathbf{k}) &\approx& v_F\delta k\, + \sigma g \omega_H -\mu_\sigma\,, \\ \nonumber \varepsilon_{-\sigma}^b(\mathbf{k}) &\approx& -v_F\delta k\, - \sigma g \omega_H -\mu_\sigma\,, \end{eqnarray} where $\delta k=|\mathbf{k}|-k_F$ and $v_F=k_F/m$. Substituting Eqs.~\eqref{pmE_PEH_LB} in Eqs.~\eqref{Emf_LB} and \eqref{GrPot} and performing integration, we obtain the expression for grand potential \begin{eqnarray} \label{GrPot_LB_EHS} &&\frac{\Omega}{{\cal V}}=2N_F \sum_{\sigma}\left[-\frac{\Delta^2_\sigma}{2}\left(\ln{\frac{\Delta_0} {\Delta_\sigma}}+\frac{1}{2}\right)+\phantom{\int\limits_0^\infty}\right.\\ &&\left.T\!\int\limits_0^\infty\!\! 
d\xi\ln{\left[f_F(\sqrt{\Delta_{\sigma}^2+\xi^2}-\mu_\sigma)f_F (\sqrt{\Delta_{\sigma}^2+\xi^2}+\mu_\sigma)\right]}\right]\!,\nonumber \end{eqnarray} where $f_F(\epsilon)=1/[1+\exp{(\epsilon/T)}]$ is the Fermi function, and $\Delta_0$ is the SDW gap at zero field, temperature, and doping ($\mu=\mu_0$) \begin{equation} \label{BCS} \Delta_0\approx\varepsilon_F\exp{(-1/VN_F)},\quad N_F=\frac{k_F^2}{2\pi^2v_F}. \end{equation} From the minimization conditions $\partial\Omega/\partial\Delta_{\sigma}=0$, we derive equations for the order parameters \begin{equation} \label{Delta_LF-EHS} \!\!\ln{\frac{\Delta_0}{\Delta_\sigma}}\!=\!\!\int\limits_0^\infty\!\!d\xi \frac{f_F(\!\sqrt{\Delta_{\sigma}^2\!+\!\xi^2}\!+\!\mu_\sigma\!)\!+ \!f_F(\!\sqrt{\Delta_{\sigma}^2\!+\!\xi^2}\!-\!\mu_\sigma\!)} {\sqrt{\Delta_{\sigma}^2+\xi^2}}. \end{equation} The electron density is \begin{equation} \label{n} N=\frac{1}{{\cal V}}\sum_{\mathbf{k}s\sigma}{f_F[E^\sigma_s(\mathbf{k})]}\,. \end{equation} The parameter $N_0$ corresponds to the ideal nesting, $\delta\mu=0$, in the absence of the magnetic field, $B=0$. We define the doping level as $X=N-N_0$. The equation for $X$ can be written in the form \begin{equation} \label{n_LF_EHS} \frac{X}{N_F}\!=\!\!\!\sum_\sigma\!\!\int\limits_0^\infty\!\!d\xi\! \left[\!f_F(\!\sqrt{\Delta_{\sigma}^2\!+\!\xi^2}\!-\!\mu_\sigma\!)\! -\!\!f_F(\!\sqrt{\Delta_{\sigma}^2\!+\!\xi^2}\!+\!\mu_\sigma\!)\!\right]. \end{equation} The derivation of this equation is straightforward (the details can be found in Ref.~\onlinecite{WeImperf}). The value of $X$ can also be considered as a shift from the position of ideal nesting. In our terms, ``zero doping'' really means ``perfect nesting''. 
For further calculations, it is convenient to introduce the following dimensionless variables \begin{equation} \label{dimen} x\!=\!\frac{X}{2N_F\Delta_0},\,\, \nu\!=\!\frac{\delta\mu}{\Delta_0},\,\, b\!=\!\frac{\Delta g\omega_H}{\Delta_0},\,\, \delta_\sigma\!=\!\frac{\Delta_\sigma}{\Delta_0}\,. \end{equation} In this notation, we rewrite Eqs.~\eqref{Delta_LF-EHS} and~\eqref{n_LF_EHS} as \begin{eqnarray} \label{dimenEq_LF_EHS} \ln{\frac{1}{\delta_\sigma}}\!\!&=&\!\!\int\limits_0^\infty\!\! \frac{d\xi}{\eta_{\sigma}}\left[f_F(\eta_{\sigma}\!+\!\nu\!-\!\sigma b)\!+\!f_F(\eta_{\sigma}\!-\!\nu\!+\!\sigma b)\right],\\ \!\!x\!&=&\!\!\int\limits_0^\infty\!\!d\xi\sum_\sigma \left[f_F(\eta_{\sigma}\!-\!\nu\!+\!\sigma b)\!-\!f_F(\eta_{\sigma}\!+\!\nu\!-\!\sigma b)\right],\nonumber \end{eqnarray} where $\eta_{\sigma}=\sqrt{\delta_\sigma^2+\xi^2}$. We also introduce the dimensionless grand potential \begin{equation} \label{GP_dim} \varphi=\frac{\pi^2v_F}{k_F^2\Delta_0^2}\frac{\Omega}{{\cal V}}\,. \end{equation} Using notation Eqs.~\eqref{dimen}, we rewrite Eq.~\eqref{GrPot_LB_EHS} in the dimensionless form \begin{eqnarray} \label{GrPot_LB_EHS_dim} \varphi &=& \sum_\sigma\varphi_\sigma =\! \sum_\sigma \left\{ -\frac{\delta_\sigma^2}{2} \left(\ln{\frac{1}{\delta_\sigma}}+\frac{1}{2}\right) + \phantom{\int\limits_0^\infty} \right.\\ && \left. t\int\limits_0^\infty\!\! d\xi \ln{\left[ f_F(\eta_\sigma+\nu-\sigma b) f_F(\eta_\sigma-\nu+\sigma b) \right]} \right\}\,, \nonumber \end{eqnarray} where $t=T/\Delta_0$. We need to consider the system at fixed doping rather than at fixed chemical potential. Such a choice is better suited for describing usual experimental conditions. To work at fixed $x$, we should calculate the system's free energy $f=\varphi+\nu x$. To do this, we solve the system of equations~\eqref{dimenEq_LF_EHS} at a given doping level $x$. Then, we calculate $\varphi$ and $f$ using the obtained values of $\delta_\sigma$ and $\nu$. 
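The self-consistent system \eqref{dimenEq_LF_EHS} at fixed doping can be solved numerically; the sketch below uses SciPy, with a root-finding setup and starting guess that are our own choices rather than anything prescribed by the text (the gaps are parametrized as $\delta_\sigma=e^{w_\sigma}$ to keep them positive during iteration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve
from scipy.special import expit

def fF(e, t):
    """Fermi function 1/(1 + exp(e/t)), written overflow-safe via expit."""
    return expit(-e / t)

def residuals(u, x, b, t):
    """Dimensionless gap and doping equations; u = (w_up, w_dn, nu)."""
    w_up, w_dn, nu = u
    res, occ = [], 0.0
    for sigma, w in ((+1, w_up), (-1, w_dn)):
        d = np.exp(w)
        eta = lambda xi, d=d: np.sqrt(d*d + xi*xi)
        gap, _ = quad(lambda xi: (fF(eta(xi) + nu - sigma*b, t)
                                  + fF(eta(xi) - nu + sigma*b, t)) / eta(xi),
                      0.0, np.inf)
        res.append(-w - gap)          # ln(1/delta_sigma) = gap integral
        dn, _ = quad(lambda xi: fF(eta(xi) - nu + sigma*b, t)
                                - fF(eta(xi) + nu - sigma*b, t),
                     0.0, np.inf)
        occ += dn
    res.append(occ - x)               # doping equation
    return res

# Perfect nesting (x = 0), no Zeeman mismatch (b = 0), low temperature:
w_up, w_dn, nu = fsolve(residuals, [0.0, 0.0, 0.0], args=(0.0, 0.0, 0.1))
```

In this symmetric limit the solver should return $\delta_\uparrow=\delta_\downarrow\approx1$ and $\nu\approx0$, i.e. the zero-field BCS-like gap, with only an exponentially small thermal correction at $t=0.1$.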
In the paramagnetic state, $\delta_\uparrow=\delta_\downarrow=0$, we readily find from Eqs.~\eqref{dimenEq_LF_EHS} and~\eqref{GrPot_LB_EHS_dim} that the chemical potential is proportional to the doping $\nu=x/2$, and \begin{eqnarray} \label{OmegaFPM} \varphi&=&-\nu^2-b^2-\frac{\pi^2t^2}{3}\,,\nonumber\\ f&=&\frac{x^2}{4}-b^2-\frac{\pi^2t^2}{3}\,. \end{eqnarray} The properties of the ordered phases will be discussed below. \subsection{Homogeneous phases at zero temperature} \label{subsect::homogen} First, let us discuss the homogeneous phases allowed by our mean-field scheme. In what follows, we will limit ourselves to the case $T=0$. The task is simplified by the fact that in the mean-field approach, our system becomes ``decoupled'' and consists of two independent subsystems, labeled by the index $\sigma$. The order parameters of these subsystems are mutually independent [see Fig.~\ref{band}, Eqs.~(\ref{order_lowB}) and~(\ref{GrPot})]. For such a situation, the thermodynamic phases of the system are characterized by a pair of order parameters $(\Delta_\uparrow, \Delta_\downarrow)$. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{FigFreeEnergy1.eps}\\ \includegraphics[width=0.95\columnwidth]{FigFreeEnergy2.eps} \caption{(Color online) Dimensionless free energies $f$ for the phases AF1, AF2, and PM (for definition, see the text) versus doping $x$ calculated for $b=0.6$ (upper panel) and $b=1.2$ (lower panel). All other homogeneous phases have larger free energies at any doping level. Thin solid (black) lines show the free energies of the phase-separated states found by the Maxwell construction. \label{FigFreeEnergy} } \end{figure} At zero temperature, we can replace the Fermi functions in the equations above by the Heaviside step function $f_F(\varepsilon)\rightarrow \Theta(-\varepsilon)$. 
After this substitution, the integrations in Eqs.~\eqref{dimenEq_LF_EHS} and~\eqref{GrPot_LB_EHS_dim} are easily performed and we obtain explicitly \begin{eqnarray} \label{T_0_main} \delta_\sigma&=&\sqrt{2|\nu_\sigma|-1}=\sqrt{1-2|x_\sigma|}\,,\nonumber\\ x_{\sigma}&=&-\frac{\partial \varphi_\sigma}{\partial \nu} =\sgn(\nu_\sigma)(1-|\nu_\sigma|)\,,\quad x = \sum_\sigma x_\sigma,\nonumber\\ \varphi_{\sigma}&=&\frac14-|\nu_\sigma|+\frac{\nu_\sigma^2}{2}\,, \end{eqnarray} where $\nu_\sigma=\nu-\sigma b$ is a measure of the de-nesting in subsystem $\sigma$. Equations~(\ref{T_0_main}) are valid when $|\nu_\sigma|>\delta_\sigma$ and $\delta_\sigma\neq0$. This state is metallic with a well-defined Fermi surface and we will refer to it as AF$_\sigma^{\rm met}$. When $|\nu_\sigma|<\delta_\sigma$, we derive from Eqs.~\eqref{dimenEq_LF_EHS} and~\eqref{GrPot_LB_EHS_dim} that $\delta_\sigma=1$, $x_\sigma=0$, and $\varphi_\sigma=-1/4$. This is an insulating state with a gap in the electron spectrum. We will denote it as AF$_\sigma^{\rm ins}$. In the paramagnetic state, $\delta_\sigma=0$ (further referred to as PM$_\sigma$), we obtain $x_\sigma=\nu_\sigma$ and $\varphi_\sigma=-\nu_\sigma^2/2$. The model is symmetric with respect to the sign of doping and the direction of the magnetic field (up to the replacement $\sigma\to-\sigma$). Consequently, we can consider only the case of electron doping, $x\geq0$, and $b\geq0$. Thus, we have nine possible homogeneous phases: (AF$_\uparrow^{\rm ins}$, AF$_\downarrow^{\rm ins}$), (AF$_\uparrow^{\rm met}$, AF$_\downarrow^{\rm met}$), (PM$_\uparrow$, PM$_\downarrow$), (AF$_\sigma^{\rm ins}$, AF$_{-\sigma}^{\rm met}$), (AF$_\sigma^{\rm ins}$, PM$_{-\sigma}$), and (AF$_\sigma^{\rm met}$, PM$_{-\sigma}$), where $\sigma=\uparrow,\,\downarrow$.
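The closed-form branches above can be compared directly; a sketch that selects the lowest-$\varphi_\sigma$ state of a single subsystem at $T=0$ (dimensionless units and branch labels as in the text; the function name is ours):

```python
import numpy as np

def subsystem_ground_state(nu):
    """Return (label, delta, x_sigma, phi_sigma) for one spin subsystem at T=0,
    comparing the PM, gapped AF, and metallic AF branches where they exist."""
    states = {"PM": (0.0, nu, -0.5 * nu**2)}
    if abs(nu) < 1.0:                            # gapped AF solution exists
        states["AF_ins"] = (1.0, 0.0, -0.25)
    if 2.0 * abs(nu) > 1.0:                      # delta = sqrt(2|nu| - 1) real
        d = np.sqrt(2.0 * abs(nu) - 1.0)
        if abs(nu) > d:                          # metallic AF branch condition
            states["AF_met"] = (d, np.sign(nu) * (1.0 - abs(nu)),
                                0.25 - abs(nu) + 0.5 * nu**2)
    label = min(states, key=lambda k: states[k][2])
    return (label,) + states[label]
```

Scanning $\nu_\sigma$ shows the gapped state winning up to $|\nu_\sigma|=1/\sqrt{2}$ and the paramagnet beyond, consistent with the crossings of the free-energy curves in Fig.~\ref{FigFreeEnergy}.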
We compared the free energies of these phases and found that only three of them can correspond to the ground states of the system ($b>0$): AF1=(AF$_\uparrow^{\rm ins}$, AF$_{\downarrow}^{\rm met}$), AF2=(AF$_\uparrow^{\rm ins}$, PM$_{\downarrow}$), and PM=(PM$_\uparrow$, PM$_{\downarrow}$). The plots of free energies of these phases versus doping $x$ are shown in Fig.~\ref{FigFreeEnergy}. The phase diagram in the ($x,b$) plane for the homogeneous phases is shown in the upper panel of Fig.~\ref{FigPhDiag}. Note that the phases AF1, AF2, and PM are metallic if $x \ne 0$: one subsystem ($\sigma = \downarrow$) for AF1 and AF2, and both subsystems for the PM phase have a Fermi surface. \subsection{Phase separation} \label{subsect::phasep} The phase diagram discussed above was calculated neglecting the possibility of phase separation. However, the shape of the $f(x)$ curves implies such a possibility near the transition lines between the homogeneous states [see solid (black) lines in Fig.~\ref{FigFreeEnergy}]. Indeed, the compressibility of the AF1 phase is negative, $\partial^2f/\partial x^2<0$ (see Fig.~\ref{FigFreeEnergy}), in the whole doping range where this phase exists. Thus, the homogeneous phase AF1 is unstable, and the separation into the AF1 phase with $x=0$ and AF2 phase occurs in the system. Let us refer to this phase-separated state as PS1. The range of doping where the phase-separated state is the ground state can be found using the Maxwell construction~\cite{Landau}. The analysis shows that the PS1 phase corresponds to the ground state of the system if $b<b_{c2}=1/\sqrt{2}$ and $0<x<1/\sqrt{2}$. Other regions of the phase diagram, where an inhomogeneous phase is the ground state, appear in the vicinity of the line separating the AF2 and PM states. The corresponding inhomogeneous phase will be referred to as PS2. 
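The Maxwell construction invoked above amounts to replacing $f(x)$ by its lower convex envelope; a generic sketch on a toy double-well curve (not the actual AF1/AF2 free energies of the model):

```python
import numpy as np

def lower_envelope(xs, fs):
    """Lower convex hull of the points (x, f); envelope segments lying below
    the original curve mark the phase-separated doping ranges."""
    pts = sorted(zip(xs, fs))
    hull = []
    for px, pf in pts:
        while len(hull) >= 2:
            (x1, f1), (x2, f2) = hull[-2], hull[-1]
            # pop the middle point if it lies on or above the chord
            if (x2 - x1) * (pf - f1) - (px - x1) * (f2 - f1) <= 0:
                hull.pop()
            else:
                break
        hull.append((px, pf))
    return hull

xs = np.linspace(-1.5, 1.5, 31)
fs = (xs**2 - 1.0)**2                 # toy double-well "free energy"
hull = lower_envelope(xs, fs)
```

For the double well, the envelope keeps only the convex wings and bridges the central hump with a straight chord; the interval under the chord is where a homogeneous state would have negative compressibility and separates into two phases.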
The phase PS2 corresponds to the ground state of the system within the doping range $2b+1/\sqrt{2}<x<2b+\sqrt{2}$, for any value of $b$, and also within the range $2b-\sqrt{2}<x<2b-1/\sqrt{2}$, if $b>b_{c2}$. The resulting phase diagram of the model is shown in the lower panel of Fig.~\ref{FigPhDiag}. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{FigPhDiag.eps}\\ \includegraphics[width=0.95\columnwidth]{FigPhDiagPS.eps} \caption{(Color online) Phase diagram of the electron model at low fields and zero temperature, calculated neglecting the effect of the Landau quantization. In the upper panel, only homogeneous states are shown. The high-$b$ boundary of the AF1 phase is given by the equation $b = (x + \sqrt{4 x^2 - 4x + 2})/2$, while all other boundaries are straight lines. The phase diagram in the lower panel takes into account the possibility of phase separation. Thin dashed curves in the lower panel retrace the phase boundaries from the upper panel. The phase PS1 lies within the boundaries $b < 1/\sqrt{2}$ and $ x<1/\sqrt{2}$. At zero magnetic field the critical doping separating the PS2 and PM phases is equal to $x =\sqrt{2}$. The definitions of homogeneous phases are given in Subsection~\ref{subsect::homogen}. The phase-separated states are defined in Subsection~\ref{subsect::phasep}.\label{FigPhDiag}} \end{figure} \section{Electron--hole coupling: high magnetic field} \label{SectHighB} For higher magnetic field, we predict the existence of oscillations of the SDW order parameter due to the Landau quantization. This phenomenon is similar to the well known de Haas--van Alphen effect, manifesting itself in the oscillations of the magnetic moment in metals~\cite{Lifshitz}. 
To outline the general effects and to avoid excessive mathematical difficulties, we restrict our consideration to the case of ideal nesting at zero magnetic field and ideal electron--hole symmetry, that is, $x=0$ (the grand potential coincides with the free energy), $m_a=m_b=m$, $\Delta g=0$, $\varepsilon^a_{\text{max}}=\varepsilon^b_{\text{max}}\equiv\varepsilon_{\text{max}}$, and $\varepsilon^a_{\text{min}}=\varepsilon^b_{\text{min}}\equiv\varepsilon_{\text{min}}$. Using Eqs.~\eqref{psi_expand} and~\eqref{hi_n}, we rewrite the interaction part of Hamiltonian \eqref{ham_int} in the form \begin{eqnarray} \label{H_int_1} \hat{H}_{\textrm{int}}&=&\sum_{\sigma\sigma'}\sum_{nmn'm'} \sum_{\mathbf{p}\mathbf{p}'\mathbf{q}}{V_{nmn'm'} (\!p_x,p_x\!-\!q_x,p'_x,p'_x\!-\!q_x\!)}\times\nonumber\\ &&\psi^\dag_{\mathbf{p}na\sigma}\psi_{\mathbf{p}'n'a\sigma} \psi^\dag_{\mathbf{p'-\mathbf{q}}m'b\sigma'} \psi_{\mathbf{p-\mathbf{q}}mb\sigma'}\,, \end{eqnarray} where we introduce the matrix elements \begin{eqnarray} \label{vnmpq} &&V_{nmn'm'}(\!p_x,p_x\!-\!q_x,p'_x,p'_x\!-\!q_x\!)=\frac{V}{{\cal V}^{2/3}l_B}\int\limits_{-\infty}^{+\infty}\!d\xi\times\nonumber\\ &&\chi_n(\xi-l_Bp_x)\chi_m[\xi-l_B(p_x-q_x)]\times\nonumber\\ &&\chi_{n'}(\xi-l_Bp'_x)\chi_{m'}[\xi-l_B(p'_x-q_x)]\,. \end{eqnarray} Recall that $\mathbf{p}$, $\mathbf{p}'$, and $\mathbf{q}$ in Eq.~\eqref{H_int_1} are 2D momenta having $x$ and $z$ components. In the mean-field approximation we apply the following replacement in the interaction Hamiltonian \begin{eqnarray} \label{MFbreaking} &&\psi^\dag_{\mathbf{p}na\sigma} \psi^{\phantom{\dag}}_{\mathbf{p}'n'a\sigma} \psi^\dag_{\mathbf{p'-\mathbf{q}}m'b\sigma'} \psi^{\phantom{\dag}}_{\mathbf{p-\mathbf{q}}mb\sigma'}\to\\ &&\delta_{\mathbf{q0}} \left[ \eta_{nm\uparrow}(\mathbf{p}) \eta^*_{n'm'\uparrow}(\mathbf{p'}) + \eta_{nm\downarrow}(\mathbf{p}) \eta^*_{n'm'\downarrow}(\mathbf{p'}) - \phantom{\psi^\dag_{\mathbf{p'}m'b\uparrow}} \right.
\nonumber \\ &&\left( \eta_{nm\uparrow}(\mathbf{p}) \psi^\dag_{\mathbf{p'}m'b\downarrow} \psi^{\phantom{\dag}}_{\mathbf{p'}n'a\uparrow} + \eta_{nm\downarrow}(\mathbf{p}) \psi^\dag_{\mathbf{p'}m'b\uparrow} \psi^{\phantom{\dag}}_{\mathbf{p'}n'a\downarrow} \right) - \nonumber \\ &&\left. \left( \eta^*_{n'm'\uparrow}(\mathbf{p'}) \psi^\dag_{\mathbf{p}na\uparrow} \psi^{\phantom{\dag}}_{\mathbf{p}mb\downarrow} + \eta^*_{n'm'\downarrow}(\mathbf{p'}) \psi^\dag_{\mathbf{p}na\downarrow} \psi^{\phantom{\dag}}_{\mathbf{p}mb\uparrow} \right) \right], \nonumber \end{eqnarray} where we assume that the mean values $\langle\psi^\dag_{\mathbf{p}na\sigma} \psi^{\phantom{\dag}}_{\mathbf{p}'mb\bar{\sigma}}\rangle=0$ if $\mathbf{p}\neq\mathbf{p}'$ and introduce the notation \begin{equation} \label{average_def} \eta_{nm\sigma}(\mathbf{p}) = \langle \psi^\dag_{\mathbf{p}na\sigma} \psi^{\phantom{\dag}}_{\mathbf{p}mb\bar{\sigma}} \rangle = \langle \psi^\dag_{\mathbf{p}mb\bar{\sigma}} \psi^{\phantom{\dag}}_{\mathbf{p}na\sigma} \rangle^*. \end{equation} Substitution \eqref{MFbreaking} makes the total Hamiltonian quadratic in the electron operators. As a result, we are able to calculate the electron spectrum and the grand potential of the system. Minimization of the grand potential with respect to $\eta_{nm\sigma}(\mathbf{p})$ would give an infinite set of integral equations for the functions $\eta_{nm\sigma}(\mathbf{p})$. This procedure can be substantially simplified if we assume that the functions $\eta_{nm\sigma}(\mathbf{p})$ are independent of the momentum $p_x$. In other words, we assume here that the electron--electron interactions do not lift the degeneracy of the Landau levels, Eq.~\eqref{E_alpha}, with respect to the momentum $p_x$. Making these assumptions, we effectively restrict the class of variational mean-field wave functions from which the approximate ground-state wave function is chosen. Without these simplifications, the calculations become practically intractable.
Once this approximation is accepted, we obtain the following relation for the mean-field interaction Hamiltonian \begin{equation} \label{HamIntMf} \hat{H}_{\textrm{int}}^{MF}\!\!=\!\!\sum_{p_x\sigma}\!\left[\!\frac{4\pi {\cal V}^{\scriptstyle\frac13}l_B^2\Delta_{\sigma}^2}{V}\!-\! \sum_{p_zn}\!\left(\!\Delta_{\sigma}\psi^\dag_{\mathbf{p}nb\bar{\sigma}} \psi^{\phantom{\dag}}_{\mathbf{p}na\sigma}\!+h.c.\right) \!\right], \end{equation} where the SDW order parameters $\Delta_{\sigma}$ now have the form \begin{equation} \label{Delta_HF-EHS} \Delta_{\sigma}=\frac{V}{2\pi{\cal V}^{1/3}l_B^2}\sum_{p_zn}\eta_{nn\sigma}(p_z)\,. \end{equation} Thus, similar to the case of low magnetic fields considered in the previous Section, we have two variational parameters to minimize the grand potential. We diagonalize the total mean-field Hamiltonian $\hat{H}_e+\hat{H}_{\textrm{int}}^{MF}$ and derive the expression for the grand potential of the system at zero doping (perfect nesting) \begin{eqnarray} \label{E00} \Omega &=& {\cal V}^{1/3}\! \sum_{p_x,\sigma}\! \left\{ \frac{4\pi l_B^2\Delta_\sigma^2}{V} - \phantom{\frac{\sqrt{\Delta_\sigma^2}}{2}} \right. \\ && \left. 2T\sum_{n}\int\!\!\frac{dp_z}{2\pi} \ln\!\!\left[ 2\cosh\!\left( \frac{ \sqrt{ \Delta_\sigma^2\! +\! \varepsilon_\sigma^2 \left(p_z,n\right) }}{2T} \right) \right] \right\}, \nonumber \end{eqnarray} where \begin{equation} \label{epsilon} \varepsilon_\sigma(p_z,n) = \omega_H\!\!\left(n+\frac{1}{2}\right)+\frac{p_z^2}{2m}-E_{F\sigma}\,. \end{equation} In Eq.~\eqref{E00}, the summation over $n$ and the integration over $p_z$ are taken within the range determined by the inequalities $\varepsilon_{\text{min}} < \varepsilon_{\sigma}(p_z,n) <\varepsilon_{\text{max}}$. The summation over $n$ in Eq.~\eqref{E00} can be replaced by the integration over the 2D momentum $\mathbf{p}=(p_x,\,p_y)$, when the distance between Landau levels is smaller than the SDW band gap ($\omega_H\ll\Delta_0$). 
In this case we have \begin{equation*} n\to\frac{\mathbf{p}^2l_B^2}{2}\,,\;\;\frac{1}{l_B^2} \sum_n\ldots\to\int\!d\!\left(\!\frac{p^2}{2}\!\right)\ldots = \int\frac{dp_xdp_y}{2\pi}\ldots \end{equation*} As a result, Eq.~\eqref{E00} is replaced by Eq.~\eqref{GrPot}, where the integration is performed over 3D momentum. This justifies the assumption made in the previous Section that we can neglect the effect of the Landau level quantization at low fields. Minimization of the potential $\Omega$ gives the equation for the gap \begin{equation} \label{Gap00} \frac{1}{4\pi^2l_B^2}\sum_{n}\int\!dp_z\frac{\tanh\! \left(\!\sqrt{\Delta_\sigma^2\!+\!\varepsilon_\sigma^2 \left(p_z,n\right)}/2T\right)} {\sqrt{\Delta_\sigma^2\!+\!\varepsilon_\sigma^2\left(p_z,n\right)}}=\frac{2}{V}\,. \end{equation} We introduce the density of states \begin{equation} \label{DOS_def} \rho_{B}(E)\!=\!\frac{1}{4\pi^2l_B^2}\sum_n\!\int\!\! dp_z\delta\!\left[E-\omega_H\!\!\left(\!n\!+\!\frac{1}{2}\!\right)-\frac{p_z^2}{2m}\right], \end{equation} and rewrite Eq.~\eqref{Gap00} in the form \begin{equation} \label{Gap0DOS} \!\!\!\int\limits_{-E_{F\sigma}}^{\varepsilon_{\text{max}} -E_{F\sigma}}\!\!\!\!\!\!\!\!\!\!d\varepsilon\,\rho_{B} (\varepsilon+E_{F\sigma})\frac{\tanh\!\left(\!\sqrt{\Delta_\sigma^2\!+ \!\varepsilon^2}/2T\right)} {\sqrt{\Delta_\sigma^2+\varepsilon^2}}=\frac{2}{V}\,. \end{equation} The density of states exhibits equidistant peaks at energies $E=\omega_H(n+1/2)$. This results in an oscillatory dependence of the order parameters $\Delta_\sigma$ on the magnetic field, similar to the de Haas--van Alphen effect~\cite{Lifshitz}. In the limit $\omega_H/\varepsilon_{F}\ll1$, one can calculate the density of states analytically. Details of the calculations are presented in the Appendix, where for $\rho_{B}(E)$ we derive expression \eqref{DOS_fourier}.
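The low-field statement above, that the Landau-level sum reduces to the smooth zero-field result, can be checked numerically: averaging the density of states of Eq.~\eqref{DOS_def} over one cyclotron period reproduces the 3D parabolic-band value. A sketch in units $\hbar=m=1$ (with $1/l_B^2 = m\omega_H$; all function names are ours):

```python
import math

def rho_B_avg(E, omega_h, m=1.0):
    """Average of the Landau-level DOS, Eq. (DOS_def), over one
    cyclotron period [E - omega_h/2, E + omega_h/2].  The p_z
    integral and the energy average are done analytically."""
    a, b = E - omega_h / 2, E + omega_h / 2
    pref = m * math.sqrt(2 * m) / (4 * math.pi ** 2)
    s, n = 0.0, 0
    while omega_h * (n + 0.5) < b:
        wn = omega_h * (n + 0.5)
        s += 2 * (math.sqrt(b - wn) - math.sqrt(max(a - wn, 0.0)))
        n += 1
    return pref * s

def rho_3d(E, m=1.0):
    """Smooth zero-field DOS of a parabolic band (per spin)."""
    return (2 * m) ** 1.5 * math.sqrt(E) / (4 * math.pi ** 2)
```

For $\omega_H/E = 0.01$ the period-averaged Landau DOS matches the smooth value to well below a percent, which is the low-field limit invoked in the text.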
Substituting this expression into Eq.~\eqref{Gap0DOS} at $T=0$, we obtain \begin{eqnarray} \label{GAP_FIN} &&\ln\!\left(\!\frac{\Delta_{\sigma}}{\Delta_0}\!\right) +\sqrt{\frac{\omega_H}{2\varepsilon_F}}\times\\ \nonumber &&\sum_{l=1}^{\infty}\frac{(-1)^l}{\sqrt{l}}\cos\!\left(\!\frac{2\pi lE_{F\sigma}}{\omega_H}-\frac{\pi}{4}\right)K_0\!\left(\!\frac{2\pi l \Delta_{\sigma}}{\omega_H}\!\right)=0\,, \end{eqnarray} where $K_0(z)$ is the Macdonald function of zeroth order. We solve this equation numerically. Since the Macdonald functions decay exponentially at large values of their arguments, the series in Eq.~\eqref{GAP_FIN} converges quickly. \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{Fig4.eps} \end{center} \caption{(Color online) SDW gaps versus magnetic field calculated at different ratios $\Delta_0/\varepsilon_{F}$ and different values of $g$ specified in the plots; $\Delta_\uparrow$ is shown by the (red) solid line with open circles, while $\Delta_\downarrow$ by the (blue) solid line without symbols. \label{Gap} } \end{figure} The calculated parameters $\Delta_\sigma(B)$ for different $g$ are shown in Fig.~\ref{Gap}. We see that both order parameters $\Delta_\sigma(B)$ oscillate when the magnetic field is varied. The amplitudes of the oscillations increase when the ratio $\Delta_0/\varepsilon_F$ grows (that is, the interaction increases). The oscillations of $\Delta_\uparrow$ and $\Delta_\downarrow$ have the same phases, if $g$ is an integer, and different phases otherwise. It is seen in Fig.~\ref{Gap} that the order parameters oscillate about some mean value $\widetilde \Delta_\sigma$, which is quite robust against the growth of $B$. This stability, however, is a consequence of the perfect nesting. The value of $\widetilde \Delta_\sigma$ decreases with $B$ if we take into account either doping or electron--hole asymmetry. When the doping or asymmetry is high, the SDW order disappears before pronounced oscillations arise. 
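A minimal numerical scheme for Eq.~\eqref{GAP_FIN} needs only the Macdonald function and a one-dimensional root search. The sketch below (our own naming; the series truncated after $l_{\max}$ terms, which is safe because $K_0$ decays exponentially) evaluates $K_0$ from its integral representation $K_0(z)=\int_0^\infty e^{-z\cosh t}\,dt$ and solves for $\Delta_\sigma$ by bisection:

```python
import math

def k0(z, n=400, tmax=20.0):
    """Macdonald function K_0(z) from K_0(z) = int_0^inf
    exp(-z cosh t) dt, by the trapezoidal rule."""
    h = tmax / n
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(tmax)))
    for i in range(1, n):
        s += math.exp(-z * math.cosh(i * h))
    return s * h

def gap_lhs(delta, delta0, omega_h, eps_f, e_f, lmax=30):
    """Left-hand side of Eq. (GAP_FIN), truncated at lmax terms."""
    s = math.log(delta / delta0)
    pref = math.sqrt(omega_h / (2 * eps_f))
    for l in range(1, lmax + 1):
        s += (pref * (-1) ** l / math.sqrt(l)
              * math.cos(2 * math.pi * l * e_f / omega_h - math.pi / 4)
              * k0(2 * math.pi * l * delta / omega_h))
    return s

def solve_gap(delta0, omega_h, eps_f, e_f):
    """Bisection: the lhs runs from -inf (delta -> 0) to +inf."""
    lo, hi = 1e-6 * delta0, 3.0 * delta0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gap_lhs(mid, delta0, omega_h, eps_f, e_f) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the limit $\omega_H \ll \Delta_0$ the oscillatory sum is exponentially small and the solver returns $\Delta_\sigma \to \Delta_0$, as it should.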
Note also that the SDW phase is stable at low temperatures since when $T\rightarrow0$ the free energy of the magnetically-ordered phase is lower than that of the PM phase. This can be checked directly. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig5.eps} \end{center} \caption{(Color online) Dependence of the N\'eel temperatures on the magnetic field; $T_{N\uparrow}$ is shown by the (red) dashed line, while $T_{N\downarrow}$ by the (blue) solid one. Parameters are specified in the plot. \label{figTN} } \end{figure} In addition to the behavior of the order parameters, the temperatures of the phase transitions also oscillate as a function of the magnetic field. Note that for $B\neq0$, there are two transition temperatures, $T_{N\sigma}$, where, as usual, $\sigma = \uparrow, \downarrow$. These temperatures can be calculated using Eq.~\eqref{Gap0DOS} by taking the limit $\Delta_\sigma\to0$. This gives the equation \begin{equation} \label{TN0DOS} \int\!d\varepsilon\,\rho_{B}(\varepsilon+E_{F\sigma})\frac{\tanh\! \left(\!\varepsilon/2T_{N\sigma}\right)}{\varepsilon}=\frac{2}{V}\,. \end{equation} Using the density of states \eqref{DOS_fourier}, we obtain \begin{eqnarray} \nonumber\label{TNfin} \ln\frac{T_{N\sigma}}{T_{N0}}&=&\sqrt{\frac{\omega_H}{2\varepsilon_F}} \sum\limits_{l=1}^{\infty}\frac{(-1)^l}{\sqrt{l}}\cos\!\left(\!\frac{2\pi lE_{F\sigma}}{\omega_H}-\frac{\pi}{4}\!\right)\times\\ &&\ln{\!\left[\tanh\left(\frac{\pi^2lT_{N\sigma}}{\omega_H}\right)\right]}\,, \end{eqnarray} where $T_{N0}$ is the N\'{e}el temperature at zero field, which is related to the SDW gap according to the BCS-like formula $T_{N0}\cong0.567\Delta_0$. The results of these calculations are shown in Fig.~\ref{figTN}. \section{Discussion} \label{disco} In this work, we investigated the effect of an applied magnetic field on weakly-correlated electron systems with imperfect nesting. Such a study may be relevant for recent experiments on doped rare-earth borides~\cite{Sluchanko_PRB2015}.
We found that, when the cyclotron frequency $\omega_\alpha$ is comparable to the electron energy gap, $\Delta_0$, the magnetic field effects must be taken into account. The magnetic field enters the model Hamiltonian via two channels: (i) the Zeeman term, and (ii) the orbital (or diamagnetic) contribution. At low field, $\omega_\alpha<\Delta_0$, and not too small Land\'e factors $g_\alpha$, one can neglect the latter contribution and take into account only the Zeeman term. We investigated the combined effects of both terms in the limit of ideal electron--hole symmetry and ideal nesting. Our study demonstrated that in the presence of the Zeeman term, the number of possible homogeneous magnetically-ordered phases significantly increases, compared to the case of $B=0$. In Subsection~\ref{subsect::homogen}, we defined as many as nine possible states with different symmetries. If necessary, this list may be extended by taking into account incommensurate SDW phases~\cite{Rice} and phases with ``stripes''~\cite{zaanen_stripes1989,tokatly1992}. Of this abundance, only two ordered homogeneous phases could serve as a ground state of our model. When inhomogeneous states are taken into consideration, even the zero-temperature phase diagram becomes quite complex. We remind the reader that, theoretically, phase separation is a very robust phenomenon. Its generality goes beyond the weak-coupling nesting instabilities of a Fermi surface: phase separation is found in multi-band Hubbard and Hubbard-like models, where the nesting is not crucial~\cite{bascones2012,dagotto2014,WePRL2005,WePRB2007,We_grA_grE2012}. It is therefore important to account for its possibility both theoretically and experimentally.
However, phase separation is not universal: the simplifications of our approach and the contributions we have intentionally omitted (Coulomb interaction, lattice effects, realistic shape of the Fermi surface, disorder, etc.) can restore the stability of homogeneous states for a given set of model parameters. For example, the long-range Coulomb repulsion, caused by the charge redistribution in the inhomogeneous state, suppresses the phase separation~\cite{DiCastro1,DiCastro2,BianconiArrested}. Therefore, in experiments the inhomogeneous states may occupy a fairly modest part of the phase diagram, as seen, for example, in Refs.~\onlinecite{phasep_exp2012,Narayanan_RRL2014}. The final location of the segregated region on the phase diagram is affected by the Zeeman energy, as our calculations demonstrated. The orbital contribution to the Hamiltonian leads to the Landau quantization of the single-particle orbits. As a result, we have demonstrated that both the order parameters and the N\'eel temperatures oscillate as the magnetic field changes. This behavior is associated with the oscillatory part of the single-particle density of states, which emerges due to the Landau quantization. The same oscillations of the density of states are also the cause of the de Haas--van Alphen effect. Yet another related phenomenon, the so-called field-induced SDW, is known to occur in quasi-one-dimensional materials~\cite{GorkovLebed_JdePhL1984_magn_field_stab1D, Lebed_JETP1985_magn_field_PhDiag,Yakovenko_JETP1987_field_ind_PhTr, Azbel_PRB1989_field_ind_SDW,Pudalov_Springer2008_magn_field_SDW, Lebed_Springer2008_magn_field_SDW}. Pronounced oscillations of both $\Delta_\sigma$ and $T_{N \sigma}$ develop at sufficiently large magnetic fields. This circumstance makes their experimental observation a delicate issue. Indeed, the results of Section~\ref{SectHighB} were obtained under the assumption of perfect electron--hole symmetry.
In a more realistic case, this symmetry is broken, and the magnetic field may cause a transition into a phase with a different order parameter, or destroy the SDW completely before an observable oscillatory trend sets in. We demonstrated that in electron systems with imperfect nesting the applied magnetic field leads to a significant increase in the number of possible ordered states. It also affects the inhomogeneous, phase-separated states of the system. At higher fields, the Landau quantization causes oscillations of the SDW order parameters and of the corresponding N\'eel temperatures. \section*{Acknowledgements} \label{ackno} This work is partially supported by the Russian Foundation for Basic Research (projects 14-02-00276 and 15-02-02128), the Russian Ministry of Education and Science (grant 14.613.21.0019 (RFMEFI61314X0019)), RIKEN iTHES Project, the MURI Center for Dynamic Magneto-Optics via the AFOSR award number FA9550-14-1-0040, the IMPACT program of JST, a Grant-in-Aid for Scientific Research (A), and a grant from the John Templeton Foundation.
\section{Introduction} As Metropolis {\it et al.} showed in 1953\cite{mrt}, Markov random walks can be used to sample the Boltzmann distribution and thereby calculate thermodynamic properties of classical many-body systems. The algorithm they introduced is one of the most important and pervasive numerical algorithms used on computers because it is a general method of sampling arbitrary high-dimensional probability distributions. Since then many extensions have been developed\cite{MCreview}. In addition to the sampling of classical systems, many Quantum Monte Carlo algorithms such as Path Integral Monte Carlo\cite{rmp}, variational Monte Carlo\cite{cck} and Lattice Gauge Monte Carlo use a generalization of the random walk algorithm. In a Markov process, one changes the state of the system $\{s \}$ randomly according to a fixed {\it transition rule}, ${\mbox{$\mathcal P$}}(s\rightarrow s')$, thus generating a random walk through state space, $\{s_0,s_1,s_2 \ldots \}$. The transition probabilities often satisfy the {\it detailed balance} property (a sufficient but not necessary condition). This means that the transition rate from $s$ to $s'$ equals the reverse rate: \begin{equation} \pi (s) {\mbox{$\mathcal P$}}(s\rightarrow s')=\pi (s') {\mbox{$\mathcal P$}}(s'\rightarrow s). \label{db} \end{equation} Here $\pi(s)$ is the desired equilibrium distribution which we take for simplicity to be the classical Boltzmann distribution: $\pi(s) \propto \exp(- V(s)/(k_B T))$ where $T$ is the temperature and $V(s)$ is the energy. If the pair of functions $ \{ \pi (s), {\mbox{$\mathcal P$}}(s\rightarrow s') \} $ satisfy detailed balance and if ${\mbox{$\mathcal P$}}(s\rightarrow s')$ is ergodic, then the random walk will eventually converge to $\pi$. For more details see Refs. [\onlinecite{hh,wk}].
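As a minimal concrete example (our own toy illustration, not from the original work), the following walk samples the Boltzmann distribution of a single harmonic degree of freedom, $V(s)=s^2/2$, using symmetric uniform proposals and the acceptance probability $\min(1,e^{-\beta\,\Delta V})$:

```python
import math
import random

def metropolis_harmonic(steps, beta=1.0, step=1.0, seed=7):
    """Metropolis walk for V(s) = s^2 / 2 at inverse temperature beta.

    Proposals are uniform in [s - step, s + step] (symmetric, so the
    proposal factors cancel); returns the estimate of <s^2>,
    whose exact value is 1/beta.
    """
    rng = random.Random(seed)
    s, acc = 0.0, 0.0
    for _ in range(steps):
        trial = s + rng.uniform(-step, step)
        d_v = (trial * trial - s * s) / 2.0
        if rng.random() < min(1.0, math.exp(-beta * d_v)):
            s = trial          # accept; otherwise keep the old state
        acc += s * s
    return acc / steps
```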
In the particular method introduced by Metropolis one ensures that the transition rule satisfies detailed balance by splitting it into an ``a priori'' {\it sampling distribution} $T(s\rightarrow s')$ (a probability distribution that can be directly sampled such as a uniform distribution about the current position) and an {\it acceptance probability} $a(s\rightarrow s')$ with $0 \le a \le 1$. The overall transition rate is: \begin{equation} {\mbox{$\mathcal P$}}(s\rightarrow s') = T(s\rightarrow s') a(s\rightarrow s'). \end{equation} Metropolis et al. \cite{mrt} made the choice for the acceptance probability: \begin{equation} a_M (s\rightarrow s') = \min \left[ 1 , q (s\rightarrow s') \right], \label{acc} \end{equation} where \begin{equation} q(s\rightarrow s') = \frac{\pi (s') T (s'\rightarrow s) } {\pi (s) T (s\rightarrow s') } = \exp(-(V(s')-V(s))/(k_BT)). \label{qdef} \end{equation} Here we are assuming for the sake of simplicity that $T (s'\rightarrow s) = T (s\rightarrow s')$. The random walk does not simply proceed downhill; thermal fluctuations can drive it uphill. Moves that lower the potential energy are always accepted, but moves that raise the potential energy are often accepted if the energy cost (relative to $k_B T= 1/\beta$) is small. Since asymptotic convergence can be guaranteed, the main issue is whether configuration space is explored thoroughly in a reasonable amount of computer time. What we consider in this article is the common situation where the energy $V(s)$, needed to accept or reject moves, is itself uncertain. This can come about because of two related situations: \begin{itemize} \item The energy may be expressed as an integral: $V(s) = \int dx v(x,s)$. If the integral has many dimensions, one might need to perform the integral with another subsidiary Monte Carlo calculation. \item The energy may be expressed as a finite sum: $ V(s) = \sum_{k=1}^N e_k(s)$ where $N$ is large enough that performing the summation slows the calculation.
It might be desirable for the sake of efficiency to sample only a few terms in the sum. \end{itemize} \subsection{Mixed Quantum-Classical Simulation} First, consider the typical system in condensed matter physics and chemistry, composed of a number of classical nuclei and quantum electrons. In many cases the electrons can be assumed to be in their ground state and to follow the nuclei adiabatically. To perform a simulation of this system, we need to accept or reject the nuclear moves based on the Born-Oppenheimer potential energy $V_{BO} (s)$, defined as the eigenvalue of the electronic Schr\"{o}dinger equation with the nuclei fixed at position $s$. In most applications, this potential is approximated by a semi-empirical potential typically involving sums over pairs of particles. More recently, in the Car-Parrinello molecular dynamics method\cite{cp}, one performs a molecular dynamics simulation of the ions simultaneously with a solution of the electronic quantum wave equation. To be feasible, one uses a mean field approximation to the full many-body Schr\"{o}dinger equation using the local density functional approximation to density functional theory or a variant. Others have proposed coupling a nuclear Monte Carlo random walk to an LDA calculation\cite{Kohn}. Although mean-field methods such as LDA are among the most accurate methods fast enough to be useful for large systems, they also have known deficiencies\cite{mitas}. We would like to use a quantum Monte Carlo (QMC) simulation to calculate $V_{BO}(s)$ in the midst of a classical MC simulation (CMC) \cite{footnote1}. QMC methods, though not yet rigorous because of the fermion sign problem, are the most accurate methods useful for hundreds of electrons. But a QMC simulation will only give an estimate of $V_{BO}(s)$ with some statistical uncertainty. It is very time-consuming to reduce the error to a negligible level. We would like to take into account the statistical error without having to reduce it to zero.
Note that we do not wish the new Monte Carlo procedure to introduce uncontrolled approximations because the goal of coupling the CMC and QMC is a robust, accurate method. We need to control systematic errors. Doll and Freeman \cite{doll}, after studying a simple example, noticed that CMC is robust with respect to noise, but they recommend using small noise levels and small step sizes to minimize the systematic errors. However, this can degrade the overall efficiency. If we can tolerate higher noise levels without introducing systematic errors, the overall computer algorithm will run faster and more challenging physical systems can be investigated, {\it e.g.} more electrons and lower temperatures. \subsection{Long-range potentials} In CMC with a pair potential, to compute the change in energy when particle $k$ is moved to position ${\bf r}_k'$, one needs to compute the sum \begin{equation} \label{sumpot} \Delta V ({\bf r}_k')= \sum_{j=1}^N [v(r_{kj}')-v(r_{kj})]. \end{equation} This is referred to as an order $N^2$ algorithm since the computer effort to move all particles once is proportional to $N^2$. If the interaction has a finite range, neighbor tables\cite{allentild} will reduce this complexity to order $N$. However, charged systems with Coulomb interactions are not amenable to this treatment. Usually the Ewald image method is used to handle the long-range potentials with a complexity \cite{NatCep} of order $N^{3/2}$. The fast multipole method\cite{greengard}, which scales as $N$ for the Coulomb interaction, is not applicable to Monte Carlo since that method computes the total energy or force, whereas in MC we need the change in potential as a single particle is moved. The challenge is to come up with an order $N$ Monte Carlo method for charged systems.
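One simple way to avoid summing all $N$ terms of Eq.~(\ref{sumpot}) is to sample a few of them at random and rescale; the estimate is unbiased, and its variance is exactly the kind of noise whose effect on the acceptance rule must then be compensated. A sketch (illustrative code, not the authors' implementation):

```python
import random

def noisy_sum_estimate(terms, k, rng):
    """Unbiased estimator of sum(terms) from k terms drawn uniformly
    with replacement and rescaled by N/k.  Its variance,
    ~ N^2 var(terms)/k, plays the role of sigma^2 in the noisy
    acceptance problem."""
    n = len(terms)
    return (n / k) * sum(rng.choice(terms) for _ in range(k))
```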
In the Ewald method, the potential is split into a short-range part and a long-range part: \begin{equation} v(r) = v_s(r) +v_l (r).\end{equation} The short-range part has a finite range and can be handled with neighbor tables; the long-range part, usually expanded in a Fourier series in periodic boundary conditions, is bounded and slowly varying. We suggest that it is possible to estimate the value of $v_l (r)$ by sampling either particles at random or terms in its Fourier expansion. The question that arises is how to compensate for the noise of the estimate in $\Delta V$. In both of these examples one could simply ignore the effect of fluctuations in the estimate of $\Delta V(s)$. If the errors are small, then clearly the sampled distribution will be changed only a little. If the acceptance ratio as a function of $\Delta V(s)$ were a linear function there would be no bias, but because it is non-linear, fluctuations will bias the asymptotic distribution. In this paper we will make a conceptually simple generalization of the Metropolis algorithm by adjusting the acceptance ratio formula so that the transition probabilities are unaffected by the fluctuations in the estimate of $\Delta V(s)$. We end up with a completely rigorous formula in the sense that if one averages long enough, one will get the exact distribution, even if the noise level is large. The only assumption is that the individual energy estimates are independently sampled from a normal distribution whose mean value is $\Delta V(s)$. One complication is that the estimates of the variance of $\Delta V(s)$ are also needed. We show how to treat that case as well. Kennedy, Kuti and Bhanot\cite{kk,bhanot} introduced an algorithm with many of the same aims as the present work but for computations in lattice gauge theory. We will describe their method and compare it to the new method later in the paper.
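The size of this bias, and the fact that it can be removed exactly, is easy to see in a two-state toy model (our own illustration). The naive rule $\min(1,e^{-\delta})$ is compared with the penalized rule $\min(1,e^{-\delta-\sigma^2/2})$ derived in the next section; with noise of unit variance, the naive chain visibly over-populates the upper state, while the penalized chain reproduces the Boltzmann weight.

```python
import math
import random

def occupation_upper(steps, eps, sigma, penalty, seed=1):
    """Two-state chain (V_0 = 0, V_1 = eps, k_B T = 1) driven by a
    noisy energy difference delta = Delta + N(0, sigma^2).
    Returns the fraction of time spent in the upper state."""
    rng = random.Random(seed)
    shift = sigma ** 2 / 2 if penalty else 0.0
    s, time_up = 0, 0
    for _ in range(steps):
        delta = (eps if s == 0 else -eps) + rng.gauss(0.0, sigma)
        if rng.random() < min(1.0, math.exp(-delta - shift)):
            s = 1 - s              # accept the flip
        time_up += s
    return time_up / steps
```

For $\epsilon=\sigma=1$ the exact occupation is $e^{-1}/(1+e^{-1})\approx0.269$, while the naive rule converges to roughly $0.33$: the bias is far from negligible even at this modest noise level.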
\section{Detailed balance with uncertainty.} From the two examples discussed above, let us suppose that when a move from $s$ to $s'$ is made, an estimate of the difference in energy is available, which we denote $\delta(s \rightarrow s')$. (We often take units with $k_BT=1$ hereafter.) By $V(s)$ we mean the true potential energy. Let $a(s \rightarrow s')$ be a modified acceptance probability; we assume that it depends only on the estimate $\delta$ of the energy difference. Let $P( \delta ; s \rightarrow s' )d\delta$ be the probability for obtaining a value $\delta$. Then the average acceptance ratio from $s$ to $s'$ is: \begin{equation} A(s \rightarrow s') =\int_{-\infty}^{\infty} d\delta P(\delta; s \rightarrow s' ) a(\delta). \label{defA} \end{equation} The detailed balance equation is: \begin{equation} e^{-V(s)/k_BT}T(s \rightarrow s')A( s \rightarrow s' ) = e^{-V(s')/k_BT}T(s' \rightarrow s)A( s' \rightarrow s ) \end{equation} Defining: \begin{equation} \Delta(s \rightarrow s') = [V(s')-V(s)]/k_BT -\ln[T(s' \rightarrow s)/T(s \rightarrow s')] \end{equation} we can rewrite the detailed balance equation as: \begin{equation} A(s \rightarrow s ') = e^{-\Delta} A( s' \rightarrow s ). \label{MDB} \end{equation} If the process to estimate $\delta$ is symmetric in $s$ and $s'$, then $ P(\delta; s' \rightarrow s) = P(-\delta;s \rightarrow s')$. Then detailed balance requires: \begin{equation} \int_{-\infty}^{\infty} d\delta P(\delta; s \rightarrow s' ) [ a (\delta) - e^{-\Delta} a(-\delta) ] = 0. \label{inteq} \end{equation} In addition, we must have that $0 \leq a(\delta ) \leq 1$ since $a$ is a probability \cite{footnote2}. The difficulty in using these formulas is that during the MC random walk, we do not know either $P(\delta; s \rightarrow s' )$ or $\Delta$. Hence we must find a function $a(\delta)$ which satisfies Eq. (\ref{inteq}) for all $P(\delta)$ and $\Delta $. To make progress, we assume a particular form for $P(\delta; s \rightarrow s' )$.
In many interesting cases, the noise of the energy difference will be normally distributed. In fact, the central limit theorem guarantees that the probability distribution of $\delta$ will approach a normal distribution if the variance of the energy difference exists and one averages long enough. Given that $\langle \delta \rangle = \Delta$, the probability of getting a particular value of $\delta$ is: \begin{equation} P(\delta) = (2 \sigma^2 \pi)^{-1/2} \exp(-(\delta-\Delta)^2/(2\sigma^2)). \label{normal} \end{equation} In this section only, we will assume that we know the value of $\sigma$ and that only $\Delta$ is unknown. We will discuss relaxing this assumption in Sec. \ref{sigmasec}. In the case of a normal distribution with known variance $\sigma^2$ we have found a very simple exact solution to Eq. (\ref{inteq}): \begin{equation} a_P(\delta; \sigma ) = \min(1, \exp(-\delta-\sigma^2/2) ) \label{1psol} \end{equation} The uncertainty in the action just causes a reduction in the acceptance probability by an amount $\exp(-\sigma^2/2)$ for $\delta > -\sigma^2/2$. We refer to the quantity $u= \sigma^2/2$ as the noise {\it penalty}. Clearly, the formula reverts to the usual Metropolis formula when the noise vanishes. To prove Eq. (\ref{1psol}) satisfies Eq. (\ref{MDB}), one does the integrals in Eq. (\ref{defA}) to obtain: \begin{equation} A( \Delta ) = \frac{1}{2} [e^{-\Delta} \text{erfc}(c(\sigma^2/2-\Delta))+ \text{erfc} (c(\sigma^2/2+\Delta))] \end{equation} where $\text{erfc}(z)$ is the complementary error function and $c=1/\sqrt{2\sigma^2}$. Below we apply Eq. (\ref{1psol}) to several simple problems and find that it indeed gives exact answers to within statistical precision. The remainder of the paper concerns considerations of efficiency, a comparison to other methods and the more difficult problem of estimating $\sigma$. \section{Optimality} The chief motivation for studying the effect of noise on a Markov process is for reasons of efficiency.
If computer time were not an issue, we could average long enough to reduce the noise to an insignificant level. In this section we are concerned with the question of how to optimize the acceptance formula and the noise level. \subsection{Acceptance ratio} We first propose a measure of optimality of an acceptance formula and relate that to a linear programming problem. It is clear that Eq.~(\ref{inteq}) can have multiple solutions; its solution set is convex. For example, if $a(\delta)$ is a solution then so is $\lambda a(\delta)$ for $0 < \lambda <1$. Even in the noise-less case, several acceptance formulas have been suggested in the literature\cite{wood,barker}. To choose between various solutions we now discuss the efficiency of the Markov process, namely the computer time needed to calculate a property to a given accuracy. It is a difficult problem\cite{mira} to determine the efficiency of a Markov chain but Peskun\cite{peskun73} has shown that given two acceptance rules, $a_1(x)$ and $a_2(x)$, if $a_1 (\Delta) \ge a_2(\Delta)$ for all $ \Delta \neq 0 $, then every property will be computed with a lower variance using rule 1 versus rule 2. Hence the most efficient simulation will have the maximum value of $\lambda$. Very roughly what Peskun has shown is that it is always better to accept moves, other considerations being equal. We propose to call an acceptance formula {\it optimal} when the average probability of moving is as large as possible. Let $W(\delta)d\delta$ be the probability density of attempting a move with a change in action $\delta$ ($W(\delta) \geq 0$). In our definition an ``optimal'' formula will maximize: \begin{equation} \xi =\int_{-\infty}^{\infty} d \delta W( \delta )\left( a( \delta)- a_M( \delta)\right). \label{optimum} \end{equation} It is likely that the optimal functions are, in large part, independent of $W$ and so we set $W(x)=1$. We subtracted $a_M(x)$, the Metropolis formula, so the integral would be convergent.
Note that for the solution for a normal distribution $a_P(\delta)$ we have: $\xi_P=-\sigma^2/2$. In the noise-less case one can easily show\cite{peskun73} that the Metropolis formula is optimal. Without uncertainty, Eq. (\ref{MDB}) only couples values with the same $|\delta|$: $a(\delta)=e^{-\delta}a(-\delta)$. For each $\delta > 0 $, one needs to maximize: $W(\delta) a(\delta) + W(-\delta) a( -\delta)$. This and the constraint $0 \leq a(x)\leq 1 $ leads to the solution $a(\delta)=1$ if $\delta \leq 0$. We conjecture that the formula Eq. (\ref{1psol}) is nearly optimal; one argument is based on an analysis of the large and small $\delta$ limits, the other is numerical. First, consider moves which are definitely uphill or downhill, $ \delta^2 \gg \sigma^2$. We expect downhill moves will always be accepted for an optimal function, so $A(\Delta)=1$ for $\Delta \ll -\sigma$; this is its maximum value. Then from Eq. (\ref{MDB}), $A( \Delta) =e^{-\Delta}$ for $\Delta \gg \sigma$. Now we must invert Eq. (\ref{defA}). The unique continuous solution is $a(\delta)=\exp(-\delta-\sigma^2/2)$ for $\delta \gg \sigma$. Hence, in the region $|\delta| \gg \sigma$ the solution is optimal in the class of continuous functions \cite{footnote3}. Another approach to finding the optimal solution to Eq. (\ref{inteq}) is numerical. We wish to maximize Eq. (\ref{optimum}) subject to equality constraints and the inequality constraints that $ a(\delta)$ be a probability. This is an infinite-dimensional linear programming (LP) problem, a well-studied problem in optimization theory for which there exist methods to determine the globally optimal solution. To find such a solution, we represent $a(\delta)$ on a finite basis. We used a uniform grid in the range $-y$ to $y$ and assumed that outside the range $a(\delta)$ had the asymptotic form derived above. The discrete version of Eq. (\ref{defA}) is $A_j = \sum_i K_{ij} a_i + c_j$ where $c_j$ represents the contribution coming from $|\delta|> y$ and $K_{ij} = P(\delta_i;\Delta_j)$ for the simplest quadrature. The problem is to find a solution maximizing $\sum_i a_i$ subject to the inequalities: $0 \leq a \leq 1$ and the equalities: \begin{equation}\sum_i [K_{i,j}-e^{-x_j}K_{i,-j}] a_i =e^{-x_j}c_{-j}- c_j.\end{equation} Fig. \ref{LP} shows the LP solution for $\sigma =1$, compared with $a_P(\delta)$. Note that it is not a continuous function, but for the most part consists of regions with $a_i = 1$ alternating with regions with $a_i=0$. The LP solution is a very accurate solution to the problem posed, with errors of less than 10$^{-5}$. The discontinuous nature of the LP solution is to be expected since the solution must lie on the vertices of the feasible region, determined by the equalities and inequalities. To obtain the solution to this difficult ill-conditioned problem, we discretized the values of $\delta$ on a grid with spacing 0.01. However, we only demanded that Eq. (\ref{MDB}) be satisfied on a grid of $\Delta$ values with a spacing of 0.2. This implies that there were 40 times as many degrees of freedom as equality constraints, and thus most variables were free to reach the extreme values of 0 and 1. The optimal LP function has a slightly larger value of $\xi$, roughly $\xi_{LP} \approx -0.45 \sigma^2$ versus the value for $a_P$ of $\xi_P = -0.5 \sigma^2$. As far as we can determine, the LP solutions survive in the limit of vanishing grid spacing and are slightly better than $a_P$. However, given the inconvenience of determining and programming the LP solutions, and the very limited improvement in $\xi$, we see little reason \cite{footnote4} to prefer such solutions. When we added to the objective function a term proportional to $\sum_i (a_i-a_{i-1})^2$ to penalize discontinuities in $a(\delta)$ (this makes it a quadratic programming problem), the solution converged to $a_P(\delta)$.
\subsection{Noise level} \label{noiseopt} Now let us consider how to optimize the noise level $\sigma$. An energy difference with a large noise level can be computed quickly, but because of the penalty in Eq. (\ref{1psol}) it has a low acceptance ratio, reducing the overall efficiency of the simulation. We should pick $\sigma$ to minimize the variance of some property with the total computer time fixed. The computer time can be written as $T = m (n t +t_0)$ where $t$ is the time for an elementary evaluation of a given energy difference, $n$ is the number of evaluations of $\delta$ before an acceptance is tried, $m$ is the total number of steps of the random walk, and $t_0$ is the CPU time in the noise-less part of the code. The error in any property converges as $\epsilon = c (\sigma) m^{-1/2}$ where $c$ is some function of $\sigma$, and the noise level converges as $\sigma = d n^{-1/2}$ where $d$ is some constant. Eliminating the variables $m$ and $n$, we write the MC inefficiency: \begin{equation} \label{effeq} \zeta^{-1} =T \epsilon^2= t_0 c(\sigma)^2 \left[ f\sigma^{-2} +1 \right]. \end{equation} Here $f=d^2t/t_0$, the relative noise parameter, is the CPU time needed to reduce the variance of the energy difference relative to the CPU time used in the noise-less part of the code: for $f \ll 1$ noise is unimportant, while for $f \gg 1$ computation of the noisy energy difference dominates the computer time. To demonstrate how important this optimization step is, we consider a one-dimensional double well with a potential given by: \begin{equation} k_B T V(s) = a_1 s^2 + a_2 s^4 .\end{equation} We picked parameters such that the two minima are at $s=\pm 4$ and the relative height of the central peak is $\pi(0)/\pi(4) = 0.1$, which corresponds to $a_1=-0.288 $ and $a_2=0.009$. We used a uniform transition probability ($T(s\rightarrow s')$) with a maximum move step of 0.5.
This means overcoming the barrier requires multiple steps, typical of an application which has a probability density with several competing minima. To measure the efficiency, we computed the error on the average value of $\langle s^k\rangle$ on Markov chains with $10^7$ steps. We examined values of noise in the range $0 \leq \sigma \leq 6$. We also calculated the density and compared it to the exact values obtained by deterministic integration. Shown in Fig. \ref{accratio} is the acceptance ratio versus $\sigma$. We see that it decreases to zero rapidly at large noise levels. The dotted line ($\propto \exp[-\sigma^2/8]$) is the asymptotic form for large $\sigma$. Fig. \ref{doublewell} shows an example of the density obtained when the noise in the energy was $\sigma=2$. Ignoring the noise leads to a much smoother density than the exact result; using the acceptance formula $a_P (\delta)$ we recover the exact result within statistical errors. Figure \ref{eff} shows the inefficiency (relative to its value when the noise is switched off) versus $\sigma$ and $f$. In general, as the difficulty of reducing the noise (as measured by $f$) increases, the calculation becomes less efficient and the optimal value of $\sigma$ increases. The two panels show the efficiency of computing $\langle s \rangle$ and $\langle s^2 \rangle$; the behavior of the error is quite different for even and odd moments of $s$ because the error in the first moment is sensitive to the rate at which the walk passes over the barrier, while the second is not. The flat behavior of the first moment at large noise levels occurs because the noise actually helps passage over the barrier: for $f>3$ a finite optimal value of $\sigma$ ceases to exist. For this example, we find that $c(\sigma)\propto\exp(\alpha\sigma^2)$ with $\alpha \approx 0.09$ for even moments and $\alpha \approx 0.025$ for odd moments.
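The double-well test is simple to reproduce. The sketch below (plain Python; the potential parameters are those quoted above, while the walk length, seed, and the comparison of $\langle s^2\rangle$ against deterministic integration are illustrative choices, not the exact procedure used for the figures) runs a noisy-Metropolis walk with the known-variance penalty rule.

```python
import math
import random

A1, A2 = -0.288, 0.009          # k_B T V(s) = a1 s^2 + a2 s^4, minima at s = +/-4

def energy(s):
    """Potential energy in units of k_B T."""
    return A1 * s**2 + A2 * s**4

def exact_moment(k, lo=-10.0, hi=10.0, n=200000):
    """<s^k> by deterministic integration of exp(-V) (midpoint rule)."""
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n):
        s = lo + (i + 0.5) * h
        w = math.exp(-energy(s))
        num += s**k * w
        den += w
    return num / den

def noisy_walk(sigma, steps, step_size=0.5, seed=7):
    """Metropolis walk in which every energy difference carries added
    N(0, sigma^2) noise, with moves accepted by the penalty rule
    min(1, exp(-delta - sigma^2/2)).  Returns the running <s^2>."""
    rng = random.Random(seed)
    s = 0.0
    total2 = 0.0
    for _ in range(steps):
        s_new = s + rng.uniform(-step_size, step_size)
        delta = energy(s_new) - energy(s) + rng.gauss(0.0, sigma)
        arg = -delta - sigma**2 / 2.0
        if arg >= 0.0 or rng.random() < math.exp(arg):
            s = s_new
        total2 += s * s
    return total2 / steps
```

Because the penalty exactly restores detailed balance for normal noise of known variance, $\langle s^2\rangle$ from the noisy walk agrees with the deterministic value; dropping the $\sigma^2/2$ term reproduces the smoothed, biased density mentioned above. The fitted error model $c(\sigma)\propto\exp(\alpha\sigma^2)$ quoted above describes the statistical efficiency and plays no role in the walk itself.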
With this assumption the optimal value of the noise level equals: \begin{equation} \sigma^{*2}= (f/2)[ \sqrt{1+2/(f\alpha)}-1].\end{equation} Although this formula is approximate (because of the assumption on $c(\sigma)$), it does give reasonable values for the optimal $\sigma$. As this example demonstrates, it is much more efficient to perform a simulation at large noise levels. One can quickly try very many moves, even if most of them get rejected, instead of just a few for which the energy difference has been accurately computed. However, there are practical problems with using large $\sigma$, as will be discussed next. \section{Uncertain Energy and Variance } \label{sigmasec} Unfortunately there is a serious complication: the variance needed in the noise penalty is also unknown. Both the change in energy and its variance need to be estimated from the data. The variance in general will depend on the particular transition $(s \rightarrow s')$; we cannot assume it is independent of the configuration of the walk. Precise estimates of the variance of the energy difference are even more difficult to obtain than of the energy difference itself, since the variance is a second moment of the noise and will fluctuate more. Fig. \ref{doublewell} shows the effect, for the double well example, of using an estimate of the variance in the penalty formula instead of the true variance. The systematic error arises because the acceptance rate formula is a non-linear function of the variance. We will see that we must add an additional penalty for estimating the variance from the data.
Let us suppose we generate $n$ estimates of the change in action: $\{ y_1, \ldots , y_n\}$ where each $y_k$ is assumed to be an independent normal variate with mean and variance: \begin{equation} \langle y_i \rangle = \Delta\end{equation} \begin{equation} \langle (y_i-\Delta)^2 \rangle = n\sigma^2.\end{equation} Unbiased {\it estimates} of $\Delta$ and $\sigma^2$ are: \begin{equation}\delta = \frac {\sum_{i=1}^n y_i}{n}\end{equation} \begin{equation}\chi^2 = \frac{\sum_{i=1}^n (y_i-\delta)^2}{n(n-1)}. \end{equation} By construction $\langle \delta \rangle =\Delta$ and $\langle \chi^2 \rangle=\sigma^2$. The joint probability distribution function of $\delta$ and $\chi^2$ is the product of a normal distribution for the mean and a chi-squared distribution for the variance: \begin{equation} P(\delta, \chi^2 ; \Delta , \sigma ) = P(\delta-\Delta , \sigma ) P_{n-1} (\chi^2 ; \sigma) \end{equation} where $ P(\delta - \Delta , \sigma ) $ is given in Eq. (\ref{normal}) and \begin{equation} P_{n-1} (\chi^2; \sigma) = c_n \chi^{n-3}e^{-\mu\chi^2/\sigma^2} \end{equation} with $\mu=(n-1)/2$ and \begin{equation} c_n= \frac{(\mu/\sigma^2 )^{\mu}}{\Gamma(\mu)}. \end{equation} The generalization from the previous section is straightforward. The acceptance probability can only depend on the estimators $\delta$ and $\chi^2$. The average acceptance probability is: \begin{equation} A(\Delta,\sigma)= \int_{-\infty}^{\infty} d\delta \int_0^{\infty} d \chi^2 P (\delta,\chi^2; \Delta , \sigma ) a(\delta,\chi^2). \end{equation} Detailed balance requires: \begin{equation} \label{twodeq} A(\Delta,\sigma)= \exp(-\Delta) A(-\Delta,\sigma) \end{equation} for all values of $\Delta$ and $\sigma \geq 0$. We now have two parameters to estimate and average over instead of one, and a two-dimensional homogeneous integral equation for $a(\delta,\chi^2)$.
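A sketch of the two estimators (plain Python; the sample count and test parameters are illustrative). Note the normalization above: each individual estimate $y_i$ has variance $n\sigma^2$, so that the mean $\delta$ has variance $\sigma^2$ and $\langle\chi^2\rangle=\sigma^2$.

```python
import math
import random

def estimators(y):
    """Unbiased estimates (delta, chi^2) of Delta and sigma^2
    from n individual estimates y_1..y_n of the change in action."""
    n = len(y)
    delta = sum(y) / n
    chi2 = sum((v - delta)**2 for v in y) / (n * (n - 1))
    return delta, chi2

def check_unbiased(n=10, sigma=1.0, Delta=0.3, trials=50000, seed=11):
    """Average the estimators over many synthetic data sets;
    each y_i is normal with mean Delta and variance n*sigma^2."""
    rng = random.Random(seed)
    d_sum = c_sum = 0.0
    for _ in range(trials):
        y = [Delta + rng.gauss(0.0, math.sqrt(n) * sigma) for _ in range(n)]
        d, c = estimators(y)
        d_sum += d
        c_sum += c
    return d_sum / trials, c_sum / trials
```

Averaged over many synthetic data sets, $\delta$ and $\chi^2$ come out at $\Delta$ and $\sigma^2$ respectively, to within the statistical error of the check.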
In the limit of enough independent evaluations we recover the one parameter equation, since $ \lim_{n \rightarrow \infty} P_{n-1} (\chi^2) = \delta(\chi^2-\sigma^2)$ and the equations for different $\sigma$'s decouple. \subsubsection*{Asymptotic Solution} We can do the same type of analysis at large $|\Delta|$ as we did when $\sigma$ was known. A move is definitely uphill or downhill if $\delta^2 \gg \chi^2$. Assume there exists a solution with $A(\Delta, \sigma) =1$ for $\Delta \ll -\sigma$. Then $A(\Delta,\sigma)=\exp(-\Delta)$ for $\Delta \gg \sigma$. Assume this solution can be expanded in a power series in $\chi^2$, $a(\delta, \chi^2) = \sum_{k=0}^{\infty} b_{k} \chi^{2k} e^{-\delta} $. Explicitly performing the integrals we obtain: \begin{equation} \exp(-\sigma^2/2) =\sum_k c_n b_k\Gamma(\mu+k)(\sigma^2/\mu)^{\mu+k}.\end{equation} Matching terms in powers of $\sigma^2$ we obtain $b_k$. The expansion can be summed to obtain a Bessel function: \begin{equation} a(\delta, \chi^2) = \Gamma(\mu) e^{-\delta} \left[\frac{2}{\mu\chi^2}\right]^{(\mu-1)/2}J_{\mu-1} (\chi\sqrt{2\mu}). \end{equation} This function is positive for $\chi^2 < n/4$. For larger values of $\chi^2$ either the assumption of $A(\Delta,\sigma)=1$ is wrong or no smooth solution exists. Taking the logarithm of the power series expansion, we obtain a convenient asymptotic form for the penalty in powers of $\eta =\chi^2/n$: \begin{equation} u_B = \frac{\chi^2}{2} +\frac{\chi^4}{4(n+1)} +\frac{\chi^6}{3(n+1)(n+3)} + \ldots \label{bess} \end{equation} The ``Bessel'' acceptance formula is: \begin{equation} a_B(\delta,\chi^2,n) = \min(1,\exp(-\delta -u_B )).\end{equation} The first term, $\chi^2/2$, is the penalty in the case where we know the variance. The error in the error causes an additional penalty equal, in lowest order, to $\chi^4/(4n)$. This asymptotic form should only be used for small values of $\eta$ since the expansion is not convergent for $\eta \geq 1/4$. In Fig. \ref{errors} we show errors in the detailed balance ratio as a function of $\Delta$ and $\sigma$ for $n=128$. The errors are small but increase rapidly as a function of $\sigma$. We find that the maximum relative error in the detailed balance ratio is approximately equal to $0.15 \eta^2$. Good MC work will have the error less than 10$^{-3}$, requiring $\eta < 0.1$. Very accurate MC work with errors of less than 10$^{-4}$ requires a ratio $\eta < 0.02$. This is a limitation on the noise level. As an example, we have calculated the deviation of the energy from its exact value for the double well potential. The results for the relative error in the energy are shown in Figure 6 for several values of $n$ and $\sigma$. As we expect, the error in the energy depends only on $\eta$ and is proportional to $\eta^2$. We also see that the estimates of limits on the noise level given above are correct. There is a dip at $n=64$ for $\eta \approx 0.5$, beyond the region where the Bessel expansion is convergent. Figure \ref{eff} shows the effect on the efficiency of the additional noise penalty. While the effect on the even moments is small, the inefficiency of the first moment increases dramatically for noise levels $\sigma > 2$, perhaps because rejections for large dispersions of the energy differences cause difficulty in crossing the barrier. The efficiency thus becomes more sensitive to $\sigma$. We have not found an exact solution of Eq. (\ref{twodeq}). From numerical searches it is clear that much more accurate solutions than the asymptotic form exist; we have found such piecewise exponential forms. But the Bessel formula is a practical way of achieving detailed balance if one can generate enough independent normally distributed data. \section{Deviations from a normal distribution} We have assumed that $\delta$ is normally distributed. In the case where the noise is independent of position but otherwise completely general, we can perform the asymptotic analysis.
Let us assume that:\begin{equation} A(\Delta)=\int d\delta P(\delta-\Delta) a(\delta) \end{equation} and that $A(\Delta)=1$ for sufficiently negative values of $\Delta$. Then for large values of $\Delta$ the unique continuous solution is: \begin{equation} a(\delta)=\exp(-\delta-u).\end{equation} The penalty $u$ has an expansion in terms of the cumulants of $P(\delta)$: \begin{equation} u=\sum_{n=2,4,\ldots}^{\infty} \kappa_n/n!= \ln(\int_{-\infty}^{\infty}dx P(x)e^{-x} ).\label{cumulant}\end{equation} The odd cumulants vanish because $P(x)=P(-x)$. For the normal distribution this reduces to Eq. (\ref{1psol}) and the penalty form is exact. The contribution of higher order cumulants could be either positive or negative, leading to positive or negative penalties. Eq. (\ref{cumulant}) illustrates a limitation of the penalty method: one cannot allow the energy difference to have a long tail of large values. It is important that the energy difference be bounded, because a penalty can be defined only if $\lim_{x \rightarrow\infty} e^x P(x) =0$, so that the integral will exist. Suppose the energy difference in Eq. (\ref{sumpot}) is a sum of an inverse power of the distances to the other particles, $\Delta = \sum_{j} r_j^{-m}$, and that ${\bf r}$ is sampled uniformly. Then we find (in 3 dimensions) that at large $\delta$: $P(\delta) \propto \delta^{-(1+3/m)}$. For any positive value of $m$ the higher order cumulants and the penalty will not exist, even though the mean and variance of $\delta$ exist under the weaker condition $m <3/2$. We must arrange things so that large deviations of the energy difference from the exact value are non-existent or exponentially rare, perhaps by bounding the energy error. \section{Comparison with other methods} \subsection{Method of Kennedy, Kuti, and Bhanot} Kennedy, Kuti and Bhanot\cite{kk,bhanot} (KKB) have introduced a noisy MC algorithm for lattice gauge theory.
We adapted that method for the present application by using energy differences with respect to an approximate potential, $w(s)$, that can be determined quickly and exactly. Proposed moves are ``pre-rejected'' using $w(s)$ and then the more expensive computation of an estimate of $v(s)$ is done. Let us suppose that the deviation between these potentials can be bounded: $ \max |\delta w(s)- \delta v(s)| \leq \epsilon$ for some $\epsilon$. We determine an unbiased estimate of the ratio $q$ needed in Eq. (\ref{qdef}) by using the power series expansion: \begin{equation} q(s \rightarrow s') = e^{-\delta } =\sum_{n=0}^{\infty}(-\delta )^n/n! \end{equation} where $\delta (s\rightarrow s') = v(s')-w(s')-v(s)+w(s)$. With a predefined probability we sample terms in the power series up to order $n$ and obtain an estimate of $q$; this is a variant of the von Neumann-Ulam method. We finally accept the move with probability \begin{equation} a =(1+q)/(2+\epsilon). \end{equation} For an appropriate choice of parameters, $a$ is in the range $0\leq a \leq 1$ most of the time. The revised KKB method is given by the following pseudo-code, where prn denotes a freshly sampled uniform random number on $[0,1]$: \begin{tabbing} Sample $s'$ from $T(s\rightarrow s')$ \\ If \=($\exp\left[-w(s')+w(s)\right] < \text{prn})$ then \\ \> reject move \\ else \\ \> $q_0=t_0=1$ \\ \> do \=$n=1,\infty$ \\ \> \> $p_n = \min(\gamma/n,1)$\\ \> \> if ($p_n <$ prn) exit loop \\ \> \> sample $x_n= -v(s') + w(s') + v(s) - w(s)$ \\ \> \> $t_n = t_{n-1} x_n/(n p_n) $ \\ \> \> $q_n = q_{n-1} + t_n$ \\ \> end do \\ if $\left[(1+q_n)/(2+\epsilon)>\text{prn} \right]$ then accept move \\ \end{tabbing} In this procedure $\gamma > 0$ is a parameter which controls the number of terms sampled. For $\gamma \leq 1$ the average number of evaluations of $x$ per step is $n_e(e^{\gamma}-1)$, where $n_e$ is the acceptance ratio of the preliminary rejection step. Each sample of $x$ must be uncorrelated with previous samples. As $\epsilon \rightarrow 0$, one recovers the Metropolis algorithm.
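The heart of the pseudo-code is the von Neumann-Ulam series estimator for $q$. A stand-alone sketch (plain Python; the pre-rejection step and the final $(1+q)/(2+\epsilon)$ acceptance test are omitted, and the function sample\_x is a stand-in for drawing one independent estimate $x_n$):

```python
import math
import random

def kkb_q(sample_x, gamma, rng):
    """Stochastically truncated power series: an unbiased estimate of
    q = exp(E[x]) when each call to sample_x returns an independent,
    unbiased estimate of the exponent."""
    q = t = 1.0
    n = 1
    while True:
        p = min(gamma / n, 1.0)
        if rng.random() >= p:         # exit the loop with probability 1 - p_n
            break
        t *= sample_x(rng) / (n * p)  # t_n = t_{n-1} x_n / (n p_n)
        q += t                        # q_n = q_{n-1} + t_n
        n += 1
    return q
```

Averaging many such estimates recovers $e^{\bar x}$, but an individual estimate is not confined to $[0,1]$, which is why the method needs the $\epsilon$ safeguard and the violation bookkeeping discussed next.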
The sampling distribution, $\gamma$, and $\epsilon$ have to be fixed to ensure that $a$ is in the interval $[0,1]$ almost all of the time. Violations for which $a<0$ put a limit on the size of the noise and the size of the sampling step, while $\epsilon$ can be made arbitrarily large to remove violations where $a>1$. This will, however, affect the efficiency. A recent preprint\cite{lin} proposed to solve the problem of violating the constraints on the acceptance probabilities by introducing negative signs into the estimators. We have not explored this possibility. We made a comparison to the penalty method with the double well potential, using $w(s) = a_2 s^4$ as the approximate potential. (It confines the random walk but does not have the central barrier.) For a violation level of $10^{-4}$, the maximum noise was $\sigma=0.4$. This is a much smaller noise level than is optimal in the penalty method. For this noise, a transition step of 0.45 was optimal. To optimize $\gamma$ and $\epsilon$, we first adjusted $\gamma$ until half the desired number of violations occurred for $a<0$. Then we adjusted $\epsilon$ until the total fraction of violations equaled $10^{-4}$. The errors in the first and second moments are given in Table \ref{kkbdat}, along with the parameters used in the KKB algorithm. We find that the KKB method is 2.3 times slower for the first moment and 3.5 times slower for the second moment than the penalty method (run at the same noise level, with the same transition step size, and computing the variance with $n=32$ points). This comparison was done assuming $f$ is sufficiently small that we do not have to take into account the multiple evaluations of the energy differences. Taking that into account would raise the inefficiency of the KKB method by another factor of 2.74, the average number of function evaluations. We also tested the KKB method with $w(s)=v(s)$ ({\it i.e.}, the argument of the exponential was only noise).
The data for this case are also given in Table \ref{kkbdat}. The maximum value of allowable noise was still $\sigma = 0.4$. For $\sigma<0.2$, the average number of function evaluations was less than one, making the method more efficient than the penalty method for a fixed noise level. For the first moment, KKB was 3.4 times more efficient for $\sigma=0.1$ and 1.3 times more efficient for $\sigma=0.2$. However, if we consider optimizing $\sigma$ as in Sec. \ref{noiseopt}, the KKB method is less efficient than the penalty method. To be efficient at large values of $f$, larger values of $\sigma$ must be used, and there the KKB method is less efficient. At small values of $f$, the last term in Eq. (\ref{effeq}) dominates, and the smaller number of function evaluations yields no advantage for the KKB method. The KKB method requires taking enough samples to lower the noise to an acceptable value. In contrast, the penalty method requires taking enough samples to ensure the distribution is normal. Also, for this problem, the penalty method could have an even higher efficiency because it could use larger sampling step sizes (the maximum KKB sampling step size depends on the quality of the approximate function, $w$). The advantage of the KKB method is that it makes no assumptions about the normality of the noise; the disadvantage is that one cannot guarantee that $a$ is in the range $[0,1]$. Knowledge that the noise is normally distributed allows one to use a much more efficient method. \subsection{Reweighting} Another alternative noisy MC method is to combine the stochastic evaluation of an exponential with the reweighting method. One can perform a simulation with $w(s)$, generating a random walk $\{s_i\}$. Then an exact average can be generated by reweighting: \begin{equation} \langle O \rangle = \frac{\sum_i O(s_i) Q_i}{\sum_i Q_i} \end{equation} where $Q_i = \exp( -(v(s_i)-w(s_i))/k_BT)$.
As discussed above, an estimate of $Q_i$ can be generated with the von Neumann-Ulam procedure by stochastically summing the power series expansion of the exponential. In this case we do not care whether the exponential is between 0 and 1; only its variance is important. The difficulty is that the exponent of the weight increases linearly with the size of the system, {\it i.e.} $\langle (v(s)-w(s))^2 \rangle \propto N$. Hence the variance of $\langle O\rangle$ will increase exponentially with the size of the system. This method is only appropriate for small systems, but no assumptions are made about the distribution of $v(s)-w(s)$. The advantage of including the noise in the random walk rather than reweighting the visited states is that one works with energy differences only, and it is possible to make the fluctuations of the differences independent of the size of the system. \section{Conclusion} We have shown that a small modification of the usual random walk method, applying a penalty to the energy difference, can compensate for noise in the computation of the energy difference. If the noise is normally distributed with a known variance, the compensation is exact. If one estimates the variance from $n$ data points, we show that it suffices to have $\chi^2 \le 0.1 n$ and to apply an additional penalty. On a double well potential we found that the optimal noise level is typically $k_B T \leq \sigma \leq 3k_BT.$ The penalty method utilizes the power of Monte Carlo: one can choose the transition rules to obey detailed balance and to optimize convergence, using only well-controlled approximations. We can generalize to other noise distributions by using numerical solutions to the detailed balance equations, as we have shown. We have adapted a method introduced by Kennedy {\it et al.} \cite{kk} but found it to be much slower once the noise level becomes high. We now plan to apply the algorithm to a serious application.
As we have shown, very large gains in efficiency are sometimes possible. However, the problem remains of ensuring that the estimates of the energy differences are statistically independent and normally distributed. Codes used in the calculations reported here are available at http://www.ncsa.uiuc.edu/Apps/CMP/index.html. \section*{Acknowledgments} This work was initiated at the Aspen Center for Physics and has been supported by NSF grant DMR 98-02373 and computational facilities at NCSA. DMC acknowledges useful discussions with J. Kuti, A. Kennedy and A. Mira.
\section{Introduction} \label{sec:introduction} Machine learning tools are not as widely used in psychology as in health and medicine. This is reflected in the fact that the interaction between cognition and emotion is not yet fully understood \cite{Taylor2005313}. Machine learning in medical applications has helped characterize genes and viruses \cite{shipp2002diffuse, guyon2002gene, ye2003predicting, shaik2014machine, yang2015persistence, magar2021potential, shahid2021machine}, evaluate tumors and cancer cells \cite{dreiseitl2001comparison, gletsos2003computer, cruz2006applications, kourou2015machine, salgado2015evaluation, ali2016computational, myszczynska2020applications}, analyze medical images \cite{el2004similarity, muller2004review, salas2009analysis, chaves2009svm, greenspan2016guest, macyszyn2016imaging, willemink2020preparing}, and assess the health status of patients \cite{kononenko2001machine, barakat2010intelligible, o2012using, prasad2016thyroid, richens2020improving, myszczynska2020applications}. Other recent studies in machine learning include \cite{zheng2017multichannel, chen2017brain, hortensius2018perception, jamone2018affordances, cociu2018multimodal, malete2019eegbased, mmereki2021yolov3, mohutsiwa2021eegbased}. A machine learning tool is proposed to mimic the expertise of a psychologist in determining the state of mind and emotion of an individual. In this paper, state of mind and emotion refers to something that is not based on conscious reasoning but on one's feeling or intuition. Thus, normally, when we use it to give our preference or opinion, we start our statement by saying ``I feel~...''. The proposed method presented in this work does not claim any theoretical contribution to machine learning theory and is purely applied research. It investigates the possibility of duplicating the expertise of a psychologist through purely numerical computations, without any formal knowledge of psychology.
It is based on an online questionnaire, with inputs taken from online users and analyzed using machine learning methods. Assessment questionnaires are extensively used in psychology and are analyzed by psychologists. There are several advantages to the proposed approach. Firstly, the proposed tool can possibly replace the expertise required to perform an intelligent psychological evaluation. Secondly, a huge database of questions can be created such that a fresh set of questions can be provided for retakers. Thirdly, questions can be designed to be user-friendly in order to capture a fast response or to avoid a respondent intentionally hiding a truthful answer. And lastly, through the proposed method, it is possible to determine the dimension of the state of mind and emotion by identifying critical questions that greatly influence the final output. It is noted that this dimension can be very difficult to determine through manual evaluation. This method can possibly lead towards a deeper understanding of the state of mind and emotion, without the aid of a psychologist, but through a wealth of questions and their classifications stored in a repository. The author recognizes the fact that questionnaire-based diagnosis cannot be very accurate and precise. Issues on accuracy can occur because respondents can lie, and issues on precision can arise because even the respondent cannot be precise about his or her own feelings \cite{picard2001toward}. (Subsequent references to ``his or her'' or ``he or she'' are omitted for simplification and are replaced by references to the male sex only, to refer to both sexes.) However, the same challenges are faced by questionnaire-based diagnostic examinations, whether numerically or professionally analyzed. There are other advantages of this numerical analysis compared to human-analyzed questions.
Firstly, data errors in creating the model can be compensated by a statistically higher number of consistencies in the majority of the gathered information. Secondly, analysis errors are consistent with the model and can be easily corrected by reconstructing the model. In manually analyzed questions, by contrast, human error can contribute to errors in analysis. And thirdly, updating the model can be fast: erroneous data are removed, newly gathered data are added, and the model is reconstructed. It has long been suggested that machine learning models can provide better classification accuracy than explicit knowledge acquisition techniques \cite{ben1995classification}. Thus, in the past two decades, a considerable amount of research has been done in machine learning, and it has been applied to a wide range of fields of study. However, a more recent study by \cite{bollen2011modeling} showed that an analytic instrument from empirical psychometric research can also prove to be a valid alternative to machine learning to detect public sentiment. In some cases, machine learning tools are used to solve traditional mathematical computations \cite{jamisola2009using}, with results comparable to those of traditional methods. Interestingly, the idea of a gaze sensor that has the ability to detect staring, similar to that of humans, was first discussed in \cite{jamisola2015oflove}. To analyze the state of mind and emotion, there are two approaches used in this paper. The first is via a thorough discussion and analysis of related literature, and the second is via a machine learning model built on an addiction survey. In the first part of the paper, previous studies are classified according to data gathering methods to establish the different modes of collecting information on choices by respondents that are not based on reason or logic.
This will introduce the reader to the wide range of media through which information on the state of mind and emotion is collected, whether the respondent has directly or indirectly provided the information. Then the types of choices are classified to examine their commonalities and differences in order to gain a deeper interpretation and better understanding of such choices. The second part is dedicated to an initial attempt to build a machine learning model of addiction, which is identified as a platform to investigate the state of mind and emotion. It also investigates the dimension of addiction by verifying the independence or interdependence of the responses to the survey questions. From the extensive literature gathered, four data gathering methods are identified, namely: questionnaire-based, data mining, user interface, and camera. Fig.~\ref{fig:diagram} shows a diagram of the machine learning (ML) discussion presented in this paper. Data-gathering methods are shown as blocks on the left-hand side: questionnaire-based (QB), data mining (DM), user interface (UI), and camera (CA). Possible outputs of machine learning analysis are true (1), false (0), or a number range (R) indicating the extent of influence. More recent studies in machine learning include a review of probabilistic machine learning \cite{ghahramani2015probabilistic}, human-in-the-loop \cite{holzinger2016interactive}, a review of recommender systems \cite{adomavicius2015context}, and the computational nature of social systems \cite{hofman2017prediction}. \begin{figure}[!t] \centering \includegraphics[width=0.5\columnwidth]{diagram.eps} \caption{A diagram of the discussion presented in this paper. The center circle represents the machine learning (ML) method used for classification. Inputs discussed are questionnaire-based (QB), data mining (DM), user interface (UI), and camera (CA). The output of classification can be true (1), false (0), or a number range (R) representing an extent of influence.
} \label{fig:diagram} \end{figure} Lastly, the following recent advances involve machine learning, questionnaires, or clinical assessment related to addiction. A 16-scale self-report questionnaire that assesses a range of addictive behaviors \cite{christo2003shorter} uses traditional statistical analysis and does not use machine learning methods for classification. A review paper \cite{huys2016computational} pointed out that computational psychiatry uses machine learning methods to improve disease classification, improve the selection of treatment, or predict the outcome of treatment. A study identified risk factors using feature selection and predicted drinking patterns using cluster analysis \cite{bi2013machine}. It used machine learning to classify clinical data, but did not use online questionnaires or identify independent dimensions of addiction. And big data has been recognized as an unprecedented opportunity to track and analyze behavior \cite{markowetz2014psycho}. \section{Data-Gathering Methods} This study proposes a questionnaire-based data-gathering method in building a machine learning model. To evaluate this method, different modes of data gathering in determining the state of mind and emotion are presented here to show how they are used. These methods may present advantages as well as limitations in output classification. From the gathered literature, four types of data-gathering methods are identified: questionnaire-based, online data mining, user interface, and camera. A quick-glance summary of the methods used is shown in Table~\ref{tab:data-gathering}. In addition, the data-gathering methods presented here may be classified into two types: one with direct user interaction, and another with indirect interaction. Direct user interaction includes user interface and camera. In this case, user response is immediately received while interacting with the machine learning tool. This is normally done in real-time with at least one sensor involved.
On the other hand, indirect user interaction includes questionnaire-based and online data mining, where the user response is saved and analyzed. Normally this is not done in real-time and no sensors are involved. This type of user interaction only requires regular office equipment and is thus cheaper to implement. \subsection{Questionnaire-Based} A questionnaire, also known as a survey, is used to gather information from respondents. This is normally used to assess consumer satisfaction with products and services. One study used choice-based conjoint analysis that built models of consumer preferences over products with answers gathered from questionnaires \cite{chapelle2005machine,maldonado2017embedded, huang2016consumer}. This was a marketing research technique that was used to determine the required features of a new product based on feedback from consumers. Two machine learning tools were used: hierarchical Bayes analysis and Support Vector Machine (SVM). Another questionnaire-based study classified students for an intelligent tutoring system in an adaptive pre-test using a machine learning tool \cite{aimeur2002clarisse}. Students were profiled based on performance measurements and gaming preferences through a questionnaire using Bayesian network and logistic regression models \cite{barata2016early}. In some cases, open-answer questionnaires \cite{yamanishi2002mining} were designed to use rule learning and correspondence analysis to automatically gather useful information. The authors argued that answers to open-ended questions often contain valuable information and provide an important basis for business decisions. This information included characteristics for individual analysis targets and relationships among the targets. A similar approach of information gathering was used in scoring open-ended responses to video prompts designed to assess Math teachers \cite{kersting2014automated} using na\"{i}ve Bayes. 
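To make the na\"{i}ve Bayes approach used in several of these questionnaire studies concrete, the following is a minimal sketch of a multinomial na\"{i}ve Bayes classifier over bag-of-words features; the training responses and labels below are invented for illustration and do not come from any of the cited studies:

```python
import math
from collections import Counter, defaultdict

# Toy open-answer responses labeled by hand (invented data for illustration).
TRAIN = [
    ("the product is easy to use and reliable", "positive"),
    ("great service and helpful support", "positive"),
    ("the device is slow and the battery is poor", "negative"),
    ("poor packaging and unhelpful support", "negative"),
]

def train_nb(samples):
    """Count words per class for a multinomial naive Bayes model."""
    word_counts = defaultdict(Counter)   # class -> word frequencies
    class_counts = Counter()             # class -> number of documents
    vocab = set()
    for text, label in samples:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Laplace smoothing: unseen words get count 0 + 1
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb(TRAIN)
print(classify("helpful and reliable service", *model))  # positive
```

With Laplace smoothing, words never seen during training contribute a small uniform likelihood instead of zeroing out a class, which matters for sparse open-answer data.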
Questionnaires to assess occupational exposure \cite{wheeler2013inside} were used to identify the underlying rules from responses through regression trees and random forests. In one study by \cite{terano1996knowledge}, the gathered data was used to create efficient decision rules for extracting relevant information from noisy questionnaire data. It used both simulated breeding (genetic algorithm) and inductive learning techniques. Simulated breeding was used to get the effective features from the questionnaire data and inductive learning was used to acquire simple decision rules from the data. Through the use of questionnaires before and after deployment, one study \cite{karstoft2015early} predicted post-traumatic stress disorder of Danish soldiers after a deployment. It used a Markov boundary feature selection algorithm and classification used SVM. \begin{table*}[h!] \begin{minipage}{\textwidth} \centering \footnotesize \caption[Caption for LOF]% {Data-Gathering Methods used in Machine Learning Diagnostic Tools \footnote{Abbreviated words meaning: net. -- network, cor. -- correspondence, sim. -- simulated, ind. -- inductive, rand. -- random, dec. -- decision, n.n. -- nearest neighbor, and reg. -- regression.}} \label{tab:data-gathering} \begin{tabular}{llll} Purpose & Technique & Output & Reference \T\B \\ \hline \multicolumn{4}{l}{\T A. Questionnaire Based} \\ \hline \T Conjoint analysis & Bayes, SVM & consumer feedback & \cite{chapelle2005machine,maldonado2017embedded}\\ & & & \cite{huang2016consumer}; {\cite{oztas2021framework}}\\ Student classification & Bayesian net., SVM & categorized abilities & \cite{aimeur2002clarisse,barata2016early}\\ Analyze open answers & rule learning, cor. analysis, na\"{i}ve Bayes & classification rules & \cite{yamanishi2002mining,kersting2014automated}\\ Extract relevant data & sim. breeding, ind. learning, rand. 
forests& decision rules & \cite{wheeler2013inside,terano1996knowledge}\\ Assess traumatic disorder& Markov feature selection, SVM & stress classification & \cite{karstoft2015early}; {\cite{wani2021impact}}\\ \hline \multicolumn{4}{l}{\T B. Online Data Mining} \\ \hline \T Analyze sentiments & na\"{i}ve Bayes, SVM, N-gram, dec. trees & sentiment classification & \cite{Ye20096527,ortigosa2014sentiment}\\ Track behavior online & clustering, neural network, Bayesian & dynamic user profile& \cite{smith2003mltutor,bidel2003statistical}\\ &&&\cite{zhao2014personalized}\\ Identify FAQ or non-FAQ & SVM, na\"{i}ve Bayes & question classification & \cite{razzaghi2016context}; {\cite{damani2020optimized}} \\ Assess review bias & supervised ML, logistic regression & bias assessment & \cite{millard2016machine}; {\cite{didimo2020combining}} \\ Assess performance & PCA, neural network, SVM, dec. trees & credit score, marketing & \cite{koutanaei2015hybrid,moro2014data}\\ \hline \multicolumn{4}{l}{\T C. User Interface}\\ \hline \T Detect human emotion & ID3, $k$-n.n., SVM, Bayesian net., reg. tree & emotion classification & \cite{zacharatos2014automatic,lotfian2016practical}\\ &&& \cite{chun2016determining,rani2006empirical}\\ &&&\cite{chalfoun2006predicting,picard2001toward}\\ Infer user preference & decision trees, HMM & preference classification & \cite{al2016predicting,bajoulvand2017analysis}\\ &&& \cite{chew2016aesthetic,cha2006learning}\\ Feedback to ML systems & rule learning, na\"{i}ve Bayes & suggested features & \cite{stumpf2009interacting}; {\cite{abid2020online}}\\ \hline \multicolumn{4}{l}{\T D. Camera} \\ \hline\T\B Detect real-time emotion & LDA, AdaBoost, SVM, Bayesian net.
& emotion classification & \cite{bartlett2005recognizing,littlewort2006dynamics}\\ &&&\cite{sebe2007authentic,shan2009facial} \\ Video facial expression & neurofuzzy, Markovian, na\"{i}ve Bayes & emotion classification & \cite{ioannou2005emotion,cohen2003facial}\\ 3D facial expression & LDA & 3D facial database &\cite{yin20063d}; {\cite{lin2020orthogonalization}}\\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{Online Data Mining} Online data mining involves the automatic gathering of information from online content. This is normally performed by applications that crawl through the content and gather information based on keywords found. The mined data may carry information about personal sentiments, opinions, or preferences. This data was also used to track users' behavior online and assess a person's response or performance. On sentiment analysis, one study \cite{Ye20096527} considered sentiment classification of online reviews as a class of web-mining techniques that performed an analysis of opinion on travel destinations. The authors gathered information from travel blogs, then used three supervised machine learning techniques, namely, na\"{i}ve Bayes, SVM, and the character-based N-gram model, to come up with sentiment classification. Another study analyzed sentiments on Facebook comments \cite{ortigosa2014sentiment}, and used decision trees, na\"{i}ve Bayes, and SVM to classify them. A user's behavior online was tracked through the user's browsing history in hypertext \cite{smith2003mltutor}. This study involved applying machine learning algorithms to generate personalized adaptation of hypertext systems. Conceptual clustering and inductive machine learning algorithms were used. Predefined user profiles were replaced with a dynamic user profile-building scheme in order to provide individual adaptation.
A homemade access log database was used, together with a number of statistical machine learning models, to compare different classifications or tracking of user navigation patterns for closed world hypermedia \cite{bidel2003statistical}. Neural network and Markovian models were used in dealing with temporal data. Another study exploited the rich user-generated location contents in location-based social networks \cite{zhao2014personalized} to offer tourists the most relevant and personalized local venue recommendations using the Bayesian approach. In searching for online help, users may ask questions that can be frequently asked questions (FAQ) or not. A study distinguished FAQs from non-FAQs \cite{razzaghi2016context} by using machine learning-based parsing and question classification. The authors noted that the identification of specific question features was the key to obtaining an accurate FAQ classifier. The SVM method and na\"{i}ve Bayes were used. Risk-of-bias assessment can be a very critical factor in systematic reviews. One study tackled this issue \cite{millard2016machine} and created three risk-of-bias assessment properties: sequence generation, allocation concealment, and blinding. The approach used supervised machine learning and logistic regression. Online data mining also considered performance assessment in classifying credit scores and telemarketing success. In classifying credit scores, one study \cite{koutanaei2015hybrid} used feature selection algorithms and ensemble classifiers that include principal component analysis, genetic algorithm, artificial neural network, and AdaBoost. A telemarketing study \cite{moro2014data} predicted the success of telemarketing calls for selling bank long-term deposits using logistic regression, decision trees, neural networks, and SVM. \subsection{User Interface} Previous studies in user interface data gathering can be classified into two types: one through user input and another by detection of brain signals.
The first type of user interface allowed the respondent to input his reaction to a stimulus, normally through a screen display, natural language, or physical cues. The second type is through the detection of brain signals, normally through electroencephalogram (EEG), which allowed real-time interaction with the respondent and online machine learning analysis. It can be used to detect human emotion, infer user preference, and serve as a feedback mechanism to machine learning systems. Emotion recognition from body movements \cite{zacharatos2014automatic} used cameras and motion tracker sensors to track body movements. To classify the emotion of the user, the study used Principal Component Analysis (PCA), na\"{i}ve Bayes, and a Markov model. Emotion classification by speech \cite{lotfian2016practical} was studied where a speech emotion retrieval system aimed to detect a subset of data with specific expressive content. The experiment used a speech sensor to collect data and SVM for emotional classification. A patent application \cite{chun2016determining} used many types of sensors to collect data to determine emotion. Data acquisition devices included a camera, a microphone, an accelerometer, a gyroscope, a location sensor, and a temperature sensor to detect ambient temperature. This study output emotion classification using PCA and SVM. A study was performed to detect human emotion from physiological cues using four machine learning methods: $k$-nearest neighbor, regression tree, Bayesian network, and SVM \cite{rani2006empirical}. The respondents interacted with computers, and their emotions were detected by sensors attached to their bodies. Results showed that SVM gave the best classification accuracy even though all the methods performed competitively. The ID3 (Iterative Dichotomiser 3) algorithm is used in machine learning and natural language processing domains.
For the study in \cite{chalfoun2006predicting}, the learner's emotional reaction in a distance learning environment is inferred using the ID3 algorithm. A study by \cite{picard2001toward} used physiological signals to gather data from a single subject over six weeks. A computer-controlled prompting system called ``Sentograph'' showed a set of personally-significant imagery to help elicit eight emotional states, namely, no emotion (neutral), anger, hate, grief, platonic love, romantic love, joy, and reverence. Transforming techniques used sequential floating forward search, Fisher projection, and a hybrid of the two. Classifiers used $k$-nearest-neighbor and maximum a posteriori. One study \cite{wall2015mapping} considered a prediction of emotional perceptive competency and implicit affective preferences. It gathered data through eye-tracking and neurocognitive processes comprising six domains: executive function and attention, language, memory and learning, sensorimotor, visuospatial processing, and social perception. They were used to predict emotion through linear regression, PCA with linear regression, and SVM. The study in \cite{al2016predicting} considered inferring interface design preferences from the user's eye-movement behavior using an eye tracker device. Machine learning processing was done via decision trees, which output user design preferences. Folk music preference was studied in \cite{bajoulvand2017analysis} using EEG signals to collect data from the user. An SVM classifier with a radial basis function (RBF) kernel was used, and the output was a predicted user preference. Another user-preference study \cite{chew2016aesthetic} considered aesthetic preference recognition of 3D shapes. It gathered user information through EEG signals and used SVM and $k$-nearest neighbors to process the information.
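As a concrete sketch of the nearest-neighbor classification used in these preference studies, the following classifies invented two-dimensional feature vectors (standing in for features extracted from EEG signals; the data, labels, and feature choice are hypothetical):

```python
import math
from collections import Counter

# Invented feature vectors (e.g., two EEG band-power features) with
# preference labels; real studies would extract features from raw signals.
TRAIN = [
    ((0.2, 0.9), "like"), ((0.3, 0.8), "like"), ((0.25, 0.85), "like"),
    ((0.8, 0.2), "dislike"), ((0.9, 0.3), "dislike"), ((0.85, 0.25), "dislike"),
]

def knn_predict(x, train, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((0.3, 0.9), TRAIN))  # like
```

An SVM with an RBF kernel, as used in \cite{bajoulvand2017analysis}, would learn a smooth decision boundary instead of voting over raw neighbors, but the input/output convention is the same: a feature vector in, a preference label out.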
A user interface has been devised so different learner preferences can be acquired through interaction with the system \cite{cha2006learning}. Based on this information, user interfaces were customized to accommodate a learner's preference in an intelligent learning environment. User preference was diagnosed using decision tree and hidden Markov model (HMM) approaches. One study used respondents to communicate feedback to machine learning systems \cite{stumpf2009interacting}, with the purpose of improving its accuracy. Users were shown explanations of machine learning predictions and were asked to provide feedback. These include suggestions for re-weighting of features, proposals for new features, feature combinations, relational features, and changes to the learning algorithm. Two learning algorithms were used: the Ripper rule-learning algorithm and the na\"{i}ve Bayes algorithm. The study showed the potential of rich human-computer collaboration via on-the-spot interactions, to share intelligence between user and machine. \subsection{Camera} The last method discussed in the paper for data gathering is through the use of a camera. This method can perform a real-time observation of facial expression, or can be non-real-time through a video recording, which the machine learning method then analyzed to output a judgment. One disadvantage of relying on face or voice to judge a person's emotion is that we may see a person smiling or hear that her voice sounded cheerful, but this does not mean that she was happy \cite{picard2001toward}. But because human emotion is greatly displayed by facial expression, its detection by a camera is extensively studied. To predict negative emotion, one study \cite{hung2016predicting} made use of the mobile phone camera and processed the information using a na\"{i}ve Bayes classifier, decision tree, and SVM. 
A study that used a camera to detect facial expression \cite{bartlett2005recognizing,littlewort2006dynamics} utilized AdaBoost for feature selection prior to classification by SVM or linear discriminant analysis (LDA). Facial expressions in video were analyzed in a study in \cite{sebe2007authentic}. It developed an authentic facial expression database where the subjects showed natural facial expressions based on their emotional state. Then it evaluated machine learning algorithms for emotion detection including Bayesian networks, SVMs, and decision trees. Local binary pattern (LBP) was used for facial expression recognition. The authors used boosted-LBP to extract the most discriminant LBP features, and the results were classified via SVM. It was claimed that the method worked at low resolutions of face images and on compressed low-resolution video sequences captured in real-world environments \cite{shan2009facial}. Extraction of appropriate facial features and identification of the user's emotional state through the use of a neurofuzzy system was studied \cite{ioannou2005emotion}, which can be robust to variations among different people. Facial animation parameters were extracted following the ISO MPEG-4 video standard. The neurofuzzy analysis was performed based on rules from facial animation parameter variations, both in the discrete emotional space and in the 2D continuous activation–evaluation space. The multi-level architecture of a hidden Markov model layer was shown in \cite{cohen2003facial} for automatically segmenting and recognizing human facial expressions from video sequences. Classification of expressions from video used na\"{i}ve Bayes classifiers, and learning the dependencies among different facial motion features used Gaussian tree-augmented na\"{i}ve Bayes classifiers. A 3D facial expression recognition system was shown in \cite{yin20063d}, which developed a 3D facial expression database.
It created prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. An LDA classifier was used to classify the prototypic facial expressions of sixty subjects. From all four types of data-gathering methods, one may say that sensor-based inputs with direct user interaction can be considered more accurate than the indirect, non-sensor-based methods, where user inputs may be subjective. However, sensor-based inputs may be limited to what a sensor can detect. For example, a camera may detect a smiling face, but it does not mean the person is happy. Or if the sensor is not accurate enough to detect the brain signal, it can give an output other than what the user intended. Thus if the user is objective in his inputs, the questionnaire-based or data-mining methods may be more accurate than the sensor-based ones. Until more sophisticated sensors are developed to detect accurately what the person really wants to convey, the questionnaire-based method may, at the current technological state, better capture the user's actual state of mind and emotion. \section{Types of Classifications} In the previous section, we analyzed the methods of data gathering on the state of mind and emotion, including the questionnaire-based method that is proposed in this work. Using the same literature discussed in the previous section, together with a few additional pieces of literature, we propose four ways of classifying the state of mind and emotion once data has been gathered. The proposed four classifications are preference, emotion, grouping, and rules. \subsection{Correlation Among Classifications} In this work, preference is referred to as an individual's intuitive choice given two or more options. It does not include any emotion.
For visualization purposes, preference can be thought of as a ``horizontal'' expression of one's feelings, where the emotional level remains ``flat.'' On the other hand, emotion is not a conscious choice but an individual's reaction to an outside stimulus that affects the person's disposition. Emotion is not based on intuition because intuition involves a mental process without conscious reasoning. It can be thought of as a ``vertical'' expression of feelings with varying intensity. Thus the usual reference to the ``ups and downs'' of emotion. Preference and emotion are direct results from individual responses and are normally referred to as feelings. In other words, preference is a ``non-emotional'' feeling and is intuitive, while emotion is an ``emotional'' feeling and is not intuitive. Therefore feelings involve a mental process (intuition) and a non-mental process (emotion). On the other hand, grouping and rules classification are not direct results from individual responses. Rather, individual responses are further analyzed to output a final judgment. In grouping, classification rules are applied to the individual responses to classify them according to a set grouping. In rules, the individual responses are used to create new rules or modify existing ones, which may later be used to arrive at a final judgment. In terms of the interaction with the respondents, classifications by rules and grouping normally entail indirect interaction, while the emotion and preference classifications normally require direct interaction and are usually performed in real-time. And lastly, in terms of decision outcomes, emotion and preference classification are normally decided by the user. In grouping and rules classification, the decision outcomes are normally decided by an observer. Table~\ref{tab:classification} shows the summary of classifications. \begin{table}[tb!]
\footnotesize \caption{Classifications in Determining\\State of Mind and Emotion} \label{tab:classification} \begin{tabularx}{\columnwidth}{ll} \multicolumn{1}{c}{Purpose} & \multicolumn{1}{c}{References} \\ \hline \T\B A. Classification: Preference & \\ - Consumer product & \cite{toubia2007optimization,chapelle2005machine,maldonado2017embedded}\\ - Travel destination interest & \cite{Ye20096527,zhao2014personalized,li2021machine}\\ - Ranking aesthetic preferences & \cite{Hullermeier20081897,al2016predicting,bajoulvand2017analysis}\\ - Analyze online sentiments & \cite{cha2006learning,stumpf2009interacting,budhi2021using}\\ - Tracking of navigation patterns & \cite{smith2003mltutor,bidel2003statistical,kumar2020progressive}\\ \hline\T\B B. Classification: Emotion & \\ - Detect emotion from physiological cues & \cite{rani2006empirical,picard2001toward,chun2016determining}\\ - Emotion detection from speech & \cite{Devillers2005407,freitag2000machine,agrawal2020speech}\\ - Online facial expression from camera & \cite{bartlett2005recognizing,littlewort2006dynamics,fathima2020review}\\ - Off-line facial expression from video & \cite{sebe2007authentic,cohen2003facial,jin2020diagnosing}\\ \hline\T\B C. Classification: Grouping & \\ - Student abilities prediction from response & \cite{beck2000high,aimeur2002clarisse,lamb2021computational}\\ - Identifying off-task behavior & \cite{cetintas2010automatic,karstoft2015early,abou2020application}\\ - Model formation to predict future actions & \cite{webb2001machine,razzaghi2016context}\\ - Gaming-detection model for tutoring behavior & \cite{walonoski2006detection,barata2016early}\\ \hline\T\B D. 
Classification: Rules & \\ - Open answers to questionnaires & \cite{yamanishi2002mining,kersting2014automated}\\ - Efficient decision rules from noisy data & \cite{terano1996knowledge,wheeler2013inside,rolf2020balancing}\\ - Learning causal relationships, word meanings & \cite{Boose1985495,Tenenbaum2006309,huang2020detecting}\\ - Production rules from independent assessment & \cite{jarvis2004applying,millard2016machine}\\ \hline \end{tabularx} \end{table} \subsection{Classification by Preference} Preference is an option chosen by an individual based on how he feels, but with no emotion attached to the judgment. It is mostly used to identify the liking of a user for a particular person, place, product, or service. Traditionally, this method of gathering preference information from users was used by many companies \cite{toubia2007optimization,chapelle2005machine,maldonado2017embedded,huang2016consumer} to assess their current market share or to estimate the degree of acceptance of a new product introduced to the market. In recent years, user preference posted online has become a new and powerful approach to gathering and analyzing such information. One approach was tracking navigation patterns online and presenting the most likely information of interest to the user based on navigation preference \cite{smith2003mltutor,bidel2003statistical}. It can be used to present the most likely advertisements, interactive interfaces, or locations of places that will be of interest to the user. Another approach was tracking user preference on travel destinations \cite{Ye20096527,zhao2014personalized}, on aesthetics \cite{Hullermeier20081897,al2016predicting,bajoulvand2017analysis,chew2016aesthetic}, or on online sentiments \cite{cha2006learning,ortigosa2014sentiment,stumpf2009interacting}. One can say that gathering such information is monetarily driven by companies providing products and services.
But this can also be very helpful to users who might want to find immediate solutions to urgent needs. Thus nowadays, matching demand to its solution can be quite easily performed by analyzing online user preference. The other advantage of posting preference online is that the online document can become a source of information for other users. For example, other users can add online reviews for a particular travel destination, making the expanded information more exhaustive and useful for potential visitors. This is also true for political sentiments that gather huge support within a short period of time. This has been a vehicle of many social actions within the past decade. Thus online preference can become a powerful tool for users of the same liking. This enables them to bargain for better service or initiate a desired social change. \subsection{Classification by Emotion} Of the four types, emotion classification is the most extensively studied, normally detected through camera or EEG signals, and it enjoys significant interest among researchers. Emotion is an expression of the feelings of an individual with varying intensity according to the degree of feelings conveyed. Emotion can be transmitted and be strongly shared among individuals, as in a mob. It has the ability to overpower all other senses of the individual to assume singularity of purpose. The tone of emotion can be set given an appropriate environment, as in a relaxing environment with soft music and dim lights. Or it can be instantaneously derived by giving the right stimulus, as in the case of anger by striking a sensitive chord or happiness by watching a cute baby. Normally, there are three types of emotional stimuli, namely, visual, auditory, and tactile.
Thus a person can be stimulated visually, as in a movie that is horror, comedy, or sexually explicit; or stimulated by sound, as in vile language, shouting, or relaxing music; or stimulated by touching, as in shaking hands, hugging, or kissing, or strong dislike by a hard grip or punching. As emotion can be stimulated visually, by sound, or by touching, it can also be manifested in the same manner. Thus from previous studies, visual manifestation of emotion through facial expression was identified in real-time using a camera \cite{bartlett2005recognizing,littlewort2006dynamics,ioannou2005emotion,yin20063d} or off-line using recorded video \cite{sebe2007authentic,cohen2003facial,shan2009facial}. Emotion was also identified by sound through speech \cite{Devillers2005407,freitag2000machine,lotfian2016practical}. The last method may not be obvious to a human observer: emotion was detected through the use of sensors attached to the body and identified from bodily cues \cite{rani2006empirical,picard2001toward,zacharatos2014automatic,chun2016determining}. \subsection{Classification by Grouping} In grouping classification, the responses of individuals are inputs to the machine learning algorithm, which outputs the group classification. This is different from classifications of preference and of emotion, where the responses are direct outputs of the classification. Teachers used classification by grouping to assess students on the appropriate level of training to be administered \cite{aimeur2002clarisse,beck2000high,castillo2003adaptive}. This gave them an idea of the optimal strategy to be adopted for each group of students, especially when a considerable disparity was observed from the grouping assessment.
On the other hand, psychologists used group classification to assess mental conditions or capabilities \cite{cetintas2010automatic,karstoft2015early,kersting2014automated} in order to carry out further intervention, or to perform an appropriate level of service. Once the group classification was determined, one only needs to match a predefined intervention that is appropriate for the corresponding group. From the groupings, a model of classification can be formed. This may be a new grouping with new characteristics or an existing grouping with modified characteristics \cite{webb2001machine,razzaghi2016context}. This is different from the rules classification because in this case, the output is groupings and not rules. As the groupings are formed, the model of the groupings may change. Then the characteristics are determined based on the new grouping models, such that the behavior of a group can be predicted. The last method of grouping classification is very much related to the students (or training) classification, but the method of determining the grouping was through gaming \cite{walonoski2006detection,barata2016early}. This may result in a more appropriate grouping for younger students because normally they are more alert during a game interaction, which may help in getting a more accurate response from them. \subsection{Classification by Rules} Lastly, the classification of rules establishes relationships among different user responses in order to influence a decision-making process. It does not necessarily output a final classification but modifies policies or methods that influence the desired output. One such classification was through open answers to questionnaires \cite{yamanishi2002mining,kersting2014automated}, where classification and association rules were defined to characterize targets and establish relationships among them.
Although the answers were open, models can be created from keywords and phrase-level features, and thus the rules that define their relationships can be formed. Rule classification was also used in determining underlying rules to make expert decisions \cite{terano1996knowledge,wheeler2013inside} even with noisy data. Rules were classified in determining causal relationships and word meanings \cite{Tenenbaum2006309,Boose1985495}, in order to understand the idea of what the person is trying to convey. This can be used in spoken language or from documents to develop models for inductive learning and reasoning, or from construct psychology. Classification of rules for tutoring systems and risk-of-bias assessments were studied in \cite{jarvis2004applying,millard2016machine}. In the tutoring system, the purpose was to automate rule generation in the system development, such that production rules were generated from marked examples and background knowledge. In the assessment study, model rules were generated for the properties of sequence generation, allocation concealment, and blinding. The models predicted sentences that contain relevant information, as well as the risk of bias for each research article. This work has proposed four classifications of the state of mind and emotion using the different data-gathering methods shown in the previous section. These classifications enable us to see the different aspects of the state of mind and emotion and allowed its range of forms to be discussed. To verify these different aspects we need to select an experimental platform that enables us to gather data from its range of forms in order to gain a deeper understanding of its nature. In this work, we choose addiction as the experimental platform. What is unique to addiction is that it covers all four classifications discussed in this work.
Preference, which is a choice by feelings that involves a mental process or intuition, is exercised when people have a manageable addiction. It is normally exercised when decisions are not driven by urges or emotion. On the other hand, choices made by emotion are normally made by people with a higher degree of addiction; these choices no longer involve a deliberate mental process. In terms of classification by grouping, the choices made by the respondents are analyzed by psychologists, who produce the groupings. And lastly, classification by rules, which involves the process of looking for new rules or modifying existing ones, applies to addiction because when gathering data via questionnaires, the relationships among the questions are verified and modified accordingly. This can lead towards understanding the dimensions of addiction and can affect its final output classification. Thus addiction indeed covers all four classifications discussed in this section. \section{Machine Learning Classifiers} Two machine learning classifiers are considered: artificial neural network and support vector machine. The number of questions is equal to the dimension of the input space, $n$. For the $i$-th sample, each answer can be true ($1$), false ($0$), or a degree of state (a value in $[10,100]$). Thus for an input $\mathbf{x}_i\in \mathbb{R}^n$, a function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is defined as \begin{equation} y_i = f(\mathbf{x}_i) \end{equation} where $y_i\in\{0,1\}\cup[10,100]$. The function $f$ is numerically derived from an artificial neural network or support vector machine. \subsection{Artificial Neural Network} Artificial neural networks (ANN) have been extensively used in many different machine learning applications. Two widely used types are the feedforward multilayer perceptron and the radial basis function network.
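As a concrete illustration of the input encoding just defined, a respondent's raw answers can be mapped to the numeric vector $\mathbf{x}_i$ as in the sketch below. The helper name and the sample answers are invented for the example and are not part of the survey implementation.

```python
def encode_answers(answers):
    """Map raw questionnaire answers to numeric features following the
    paper's convention: true -> 1, false -> 0, and a "degree of state"
    kept as a value in [10, 100]."""
    mapping = {"yes": 1.0, "no": 0.0}
    encoded = []
    for a in answers:
        if isinstance(a, str):
            encoded.append(mapping[a.lower()])
        else:
            # already a degree of state, a number in [10, 100]
            encoded.append(float(a))
    return encoded

# One hypothetical respondent with n = 5 questions.
x_i = encode_answers(["yes", "no", 50, "yes", 30])
print(x_i)  # [1.0, 0.0, 50.0, 1.0, 30.0]
```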
For the multilayer perceptron, the weight connecting node $i$ of one layer to node $j$ of the next is denoted $w_{ij}$, such that for $n$ nodes feeding a node in layer $j$, $\mathbf{w}_j=[w_{1j},\ldots,w_{ij},\ldots,w_{nj}]\in\mathbb{R}^n$. The output of a single node in layer $j$ for a given input $\mathbf{x}\in\mathbb{R}^n$ can be expressed as \begin{equation} \label{eqn:single-layer} y_j= \sum_{i=1}^n w_{ij} x_i. \end{equation} \noindent For an input layer $i$, hidden layers $j$ and $k$, and a single output node (output layer $l$), we can recursively apply (\ref{eqn:single-layer}) three times to get the input-output relation \begin{equation} y_l = \sum_{k=1}^q~w_{kl}~f \left( \sum_{j=1}^p~w_{jk}~f \left( \sum_{i=1}^n w_{ij} x_i \right) \right) \end{equation} \noindent such that $\mathbf{w}_l\in\mathbb{R}^q$, $\mathbf{w}_k\in\mathbb{R}^p$, $\mathbf{w}_j\in\mathbb{R}^n$, and $y=f(\cdot)$ is called the activation function. For a radial basis function network with a single output, given input $\mathbf{x}$ and number of samples $m$, the following equation applies: \begin{equation} y = \sum_{j=1}^m w_j~\phi(\|\mathbf{x}- \mathbf{x}^{(j)}\|) \end{equation} \noindent where $\phi(\cdot)$ is a radial basis function, $\mathbf{x}^{(j)}$ is its center, and $w_j$ is an unknown coefficient. \subsection{Support Vector Machine Model} Support vector machines (SVM) are derived from statistical learning theory \cite{vapnik1998statistical}. They have two major advantages over other machine learning tools: (1) learning does not get trapped in a local minimum, and (2) the generalization error does not depend on the dimension of the space. Given $m$ samples $(\mathbf{x}_i,y_i)$ where $i= 1,\ldots,m$, for the $i$-th sample input $\mathbf{x}_i \in \mathbb{R}^n$, a scalar offset $b \in \mathbb{R}$, and a weighting vector $\mathbf{w} \in \mathbb{R}^n$, a function $f$ is given as \begin{equation} f(\mathbf{x}_i) = \mathbf{w}\cdot \mathbf{x}_i + b.
\end{equation} A loss function $L$ that is insensitive to a tolerable error $\epsilon$ can be expressed as \begin{equation} L = \frac{1}{2}\|\mathbf{w}\|^2 + \frac{C}{m}\sum_{i=1}^{m} \max\{0,|y_i - f(\mathbf{x}_i)|- \epsilon \} \end{equation} where $C\in \mathbb{R}$ is a regularization constant. This can be expressed as an optimization problem of the form \begin{equation} \begin{split} \min~~~~ & \frac{1}{2} \| \mathbf{w} \|^2 + \frac{C}{m}\sum_{i=1}^{m}(\xi_i + \xi_i^*)\\ \mbox{subject to:} ~~~ & ( \mathbf{w} \cdot \mathbf{x}_i +b )- y_i \leq \epsilon + \xi_i\\ & y_i - ( \mathbf{w} \cdot \mathbf{x}_i + b ) \leq \epsilon + \xi_i^* \\ & \xi_i,\xi_i^*\geq 0 \mbox{~~~for~~} i=1,\ldots,m. \end{split} \end{equation} To test the proposed method, an online addiction questionnaire with 10 questions was created and answers from 292 respondents were analyzed. The dependence/independence of the questions was verified by removing questions one by one and noting the resulting classification accuracy, which can be further developed to determine the dimensions of addiction. \section{Addiction as an Experimental Platform} To test the proposed machine learning tool, addiction is used as an experimental platform because it encompasses the entire range of forms of the state of mind and emotion, especially based on the four classifications stated in the previous section. Depending on the extent of addiction, the person's intuitive response can be consciously or unconsciously made. When one is not addicted to a stimulant, his choices are consciously made and he is in total control of his reaction. For the state of emotion, the emotional reaction is normally not consciously controlled but results from an urge or a bodily reaction that automatically occurs given the right stimulant. That is why some people easily cry at sad movies or laugh at certain types of humor.
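To make the multilayer perceptron equations of the previous section concrete, the sketch below evaluates the two-hidden-layer forward pass $y_l = \sum_k w_{kl}\, f(\sum_j w_{jk}\, f(\sum_i w_{ij} x_i))$ in plain Python, with a sigmoid as the activation $f$. The weights and layer sizes are toy values, not the trained network from the experiments.

```python
import math

def sigmoid(z):
    """Activation function f."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(weights, inputs):
    """For each node j of the next layer: sum_i w[i][j] * x[i]."""
    n_out = len(weights[0])
    return [sum(w_row[j] * x for w_row, x in zip(weights, inputs))
            for j in range(n_out)]

def mlp_forward(x, W_ij, W_jk, W_kl):
    """Two hidden layers, single output node l."""
    h1 = [sigmoid(a) for a in layer(W_ij, x)]    # hidden layer j
    h2 = [sigmoid(a) for a in layer(W_jk, h1)]   # hidden layer k
    return sum(w * h for w, h in zip(W_kl, h2))  # output node l

# Toy example: n = 2 inputs, p = 2 and q = 1 hidden nodes.
y = mlp_forward([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [1.0]], [2.0])
print(round(y, 3))  # 1.548
```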
But when one is addicted to a stimulant, the person's reaction is based on an urge or an uncontrolled bodily reaction, similar to the emotional reaction. The person's choice, in this case, is based on unconscious preference, and his reaction is based on bodily urges. Thus, the study of addiction offers a platform that considers conscious and unconscious decisions, as well as controlled and uncontrolled reactions. Furthermore, the intervention by a psychologist to come up with groupings based on inputs from respondents shows classification by grouping. And finally, classification by rules points to the attempt to identify the dimensions of addiction by characterizing the interdependence of inputs from respondents. \section{Experimental Results} A questionnaire \cite{jamisola2016surveryquest} was designed to gather information regarding respondents' degree of addiction to an activity. The questions were all composed by the author, who has no formal training in psychology, and thus may be considered random questions. This was done in order to mimic the method of gathering random questions from users to be included in the database. These questions do not claim completeness in addressing all the dimensions of addiction but are presented in order to show how any randomly gathered set of questions is processed and analyzed. The analysis is in terms of their interdependence, which may possibly lead towards clustering questions in the database and, furthermore, towards identifying the dimensions of addiction. At the end of the questionnaire the respondent rates himself as `addicted', `not addicted', `manageable', or `don't know'. A total of 292 respondents participated in the survey. Ten questions were asked, with possible answers `yes', `no', and `not really', as well as ranges of numbers to rate the frequency of occurrences or the number of persons involved.
Most of the respondents are students and staff from the author's university, with ages mostly ranging from 20 to 30 years old. Information about the sex of respondents was not gathered because the addiction study in this work is intended to be independent of this information, as well as of other biases like culture, educational attainment, sexual orientation, race, etc. The author envisions millions of responses from all over the world that are normalized against such biases due to the randomness of the respondents. The actual questionnaire and the percentage of responses are shown in Appendix \ref{appendix:questionnaire}. \subsection{Experimental Setup} Two machine learning experiments were performed using Matlab R2017a neural network, and statistics and machine learning toolboxes. The neural network experiment used the `newff' function to create a feed-forward backpropagation network with a hidden layer of 85 neurons and an output layer with four outputs that represent the four classifications. Data division was random, such that of the 292 samples, 204 were used for training, 44 for validation, and 44 for testing. The training algorithm used the Levenberg-Marquardt backpropagation technique, while performance was measured by mean squared error. After the network had been trained, validated, and tested through the `train' command, it was tested again using the `sim' command on all 292 samples, and the output was compared against known target values. The average accuracy was 77\%. The SVM experiment used the `templateSVM' function to create an SVM template that invoked the Gaussian kernel function. Then `fitcecoc' was called to train the classifier using the SVM template that was created. The purpose was to group the responses into the four classifications. The training function used binary learners and a one-versus-one coding design. After the SVM training, 10-fold cross-validation via the `crossval' command was used.
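The one-versus-one coding design mentioned above trains one binary learner per pair of classes and combines them by majority vote. The sketch below illustrates only that voting logic, with a trivial threshold rule standing in for each Gaussian-kernel SVM learner; it is not the Matlab pipeline itself, and the stand-in learners are invented for the example.

```python
from collections import Counter
from itertools import combinations

def one_vs_one_predict(binary_learners, x):
    """Majority vote over one binary classifier per class pair:
    each learner votes for the first class of its pair when it fires."""
    votes = Counter()
    for (class_a, class_b), learner in binary_learners.items():
        votes[class_a if learner(x) else class_b] += 1
    return votes.most_common(1)[0][0]

# The four output classes used in the experiments.
classes = ["addicted", "not addicted", "manageable", "don't know"]

# Stand-in learners: a fixed threshold rule purely for illustration.
learners = {pair: (lambda x: x > 50) for pair in combinations(classes, 2)}

print(one_vs_one_predict(learners, 80))  # addicted
print(one_vs_one_predict(learners, 10))  # don't know
```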
Then the command `resubPredict' was used to predict the classification from all learners. Its results were compared against the target values from the 292 samples. SVM accuracy was measured at 85\%. \subsection{Analysis of Answers by Respondents} In this subsection, we analyze the answers of the respondents based on their own self-assessment of whether or not they are addicted to the activity. Of the 292 respondents, $20\%$ considered themselves addicted to the activity, $23\%$ not addicted, $45\%$ manageable, and $12\%$ did not know their status. The case of ``manageable'' could mean that the person enjoys the activity but is in total control of his decisions and reactions to it, and is thus not addicted. On the other hand, it can also mean that the person is partially addicted and has some control over his decisions and reactions to the activity. Note that an addicted person, as defined above, has totally uncontrolled decisions and reactions. On the frequency of performing the activity, $50\%$ answered ``everyday.'' However, less than half of this number admitted addiction to the activity. This result showed that everyday performance of an activity that one likes to do does not necessarily mean addiction to that activity. This also means that it is possible for a person to enjoy the addictive activity every day, yet still be in control of it. Regarding the urge to do the activity, $59\%$ admitted to feeling the urge, but only one-third of them considered themselves addicted. This percentage is higher than the percentage of everyday activity, which means that those who felt the urge did not necessarily perform the activity every day. Furthermore, feeling the bodily urge to do an addictive activity does not easily overcome conscious actions and decisions. On non-performance of the activity on a regular basis affecting the mood of the respondent, $43\%$ answered ``yes,'' but only half of this percentage admitted addiction.
This is interesting because it means that even without being addicted and while in control of the activity, the person can still be affected in his regular daily work through his moods. It can also mean that the effort to control the urge to do the activity can somehow affect the everyday mood of the person. Solitary performance of an addictive activity, with a response of $50\%$, does not necessarily equate to addiction. A higher percentage felt the urge to do the activity, but this does not necessarily mean that they are going to do the activity alone. Having many other major activities besides the addictive activity can be a possible way of drawing one's focus away from the addictive activity. However, $66\%$ chose the smallest option, ``three more,'' and $19\%$ answered ``many.'' Thus, in this case, an addicted person can have many other activities besides work and study. This can also mean that even a not-addicted person has a limited number of activities besides work and study. Talking to close friends and family every day as a support group can be vital in coping with addiction. Around $45\%$ talked to ``one or two'' and $36\%$ talked to ``three to five.'' Isolation, in this case, does not seem to have a close connection to addiction, unlike performing the activity alone. This could mean that a person can have many friends and seem to lead a normal life, yet be addicted. On asking for professional help to stop the activity, a large percentage, $75\%$, answered ``no.'' This could mean a lack of access to professional help or hesitation to admit the need for help. Distraction from daily work or studies caused by the addictive activity drew $26\%$ answering ``yes.'' This is close to the percentage who admitted addiction to the activity. One may say that, in this case, a distraction from the daily routine caused by the addictive activity is a clear indication of addiction.
(This will be supported by the result in the next section showing this to be a critical question.) Talking about the activity to somebody else as a possible source of support drew $36\%$ answering ``no, I keep it to myself.'' This scenario of isolating oneself is related to the question about the solitary performance of the addictive activity, and to the number of close friends and family that one talks to every day. It is noted that performance of the activity alone has a higher percentage, which means that of all the persons who may be performing the activity alone, a large fraction kept it a secret. A number of things can be noted in order to improve the machine learning results. One is to design questions that tackle independent aspects of the psychological state, which will enable a clearer separation in the classification. Another possible approach is to create subtle support questions to verify consistency in the answers of the respondent, most especially to critical questions. Lastly, indirect questions can be designed so that the respondent cannot deliberately hide the truthful answer. \subsection{Investigating the Dimensions of Addiction} This subsection analyzes the dependence/independence of one question against the rest of the questions based on the output classification accuracy. Using the trained model, each question (response) was removed from the input data, and the accuracy of the output was observed. If the accuracy of the output drastically decreased in the absence of a given question, this means the question was critical and was independent of the rest of the questions. On the other hand, if a given question was removed and the resulting output did not change drastically, it was dependent on at least one other question; in other words, that question did not matter. In this study, a drastic decrease means a $25\%$ reduction from the overall accuracy.
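The removal procedure just described can be sketched generically. In the sketch below, `train_and_score` is a hypothetical stand-in for retraining the classifier and measuring its accuracy; only the column-dropping loop and the $25\%$ criterion come from the text.

```python
def ablation_accuracies(X, y, train_and_score):
    """For each question index q, drop column q from every sample,
    re-evaluate the model, and record the resulting accuracy."""
    n_questions = len(X[0])
    return {q: train_and_score([row[:q] + row[q + 1:] for row in X], y)
            for q in range(n_questions)}

def critical_questions(accuracies, baseline, drop=0.25):
    """Flag questions whose removal cuts accuracy by at least `drop`
    relative to the overall (baseline) accuracy."""
    return [q for q, acc in accuracies.items()
            if acc <= baseline * (1.0 - drop)]

# Toy check: with an overall accuracy of 0.85, a fall to 0.45 is critical
# while 0.80 is not (the threshold is then around 0.64).
print(critical_questions({0: 0.80, 7: 0.45}, baseline=0.85))  # [7]
```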
There were 10 questions that each respondent had to answer, and question ten ($Q_{10}$) was a self-assessment based on the four classifications. Using the model that was created, responses to question one ($Q_1$) up to question nine ($Q_9$) were removed one by one from the input data, and the accuracy of the output was compared against the target values, that is, the responses to $Q_{10}$. The SVM classification model was used in this analysis. From the overall accuracy of $85\%$, an accuracy reduction of $25\%$ is an output accuracy of around $64\%$. Table \ref{tab:resultsanalysis} shows the resulting percentage accuracy of classification when at most two questions were removed. The diagonal elements in the table (in boldface) represent the percentage accuracy when question $Q_i$ was removed. (In the table, $i=1,\ldots,9$.) The encircled values show an accuracy reduction of $25\%$ or more. The percentage accuracy shown in row $Q_i$ is the case when question $Q_i$ was removed first and questions $Q_1$ to $Q_9$ were removed second, one at a time, except $Q_i$. Thus using the convention (row, column) to define the elements in the table, the percentage accuracy in ($Q_1$, $Q_1$) is the case when only question one was removed, and ($Q_1$, $Q_2$) is the case when $Q_1$ was removed first and $Q_2$ was removed second. \begin{table}[tb!] 
\centering \renewcommand{\arraystretch}{1.4} \caption{Percentage Accuracy with at Most Two Questions Removed} \label{tab:resultsanalysis} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} $Q_i$ & $Q_1$ & $Q_2$ & $Q_3$ & $Q_4$ & $Q_5$ & $Q_6$ & $Q_7$ & $Q_8$ & $Q_9$ \\ \hline \T\B $Q_1$ & \textbf{81} & 76 & 80 & 74 & 78 & 66 & 79 & $\circled{49}$ & 70 \\ \hline\T\B $Q_2$ & 77 & \textbf{82} & 74 & $\circled{60}$ & 80 & 67 & 78 & $\circled{58}$ & 78 \\ \hline\T\B $Q_3$ & 80 & 74 & \textbf{82} & 76 & 77 & 74 & 75 & $\circled{59}$ & 73 \\ \hline\T\B $Q_4$ & 70 & $\circled{60}$ & 76 & \textbf{79} & 78 & 74 & 80 & $\circled{61}$ & 72 \\ \hline\T\B $Q_5$ & 80 & 80 & 78 & 79 & \textbf{83} & 76 & 81 & 73 & 74 \\ \hline\T\B $Q_6$ & 66 & 68 & 76 & 75 & 75 & \textbf{78} & 74 & $\circled{57}$ & 66 \\ \hline\T\B $Q_7$ & 79 & 78 & 75 & 79 & 81 & 68 & \textbf{81} & $\circled{55}$ & 81 \\ \hline\T\B $Q_8$ & $\circled{62}$ & 71 & $\circled{59}$ & $\circled{60}$ & 76 & $\circled{58}$ & $\circled{58}$ & $\circled{\mbox{\textbf{45}}}$ & 65 \\ \hline\T\B $Q_9$ & 74 & 77 & 73 & 73 & 74 & 70 & 78 & $\circled{60}$ & \textbf{80} \\ \hline \end{tabular} \end{table} From Table \ref{tab:resultsanalysis}, it can be observed that removal of $Q_8$ drastically lowers the output classification accuracy, with $(Q_8,Q_8)= 45\%$. This drastic decrease in accuracy is generally consistent throughout the elements of the $Q_8$ row and column, except for $(Q_5,Q_8)=73\%$ and $(Q_8,Q_5)=76\%$. This could mean that, for most elements in the $Q_8$ row and column, the accuracy drop after removing $Q_8$ together with one other question is driven mainly by the absence of $Q_8$ alone. The other question did not greatly affect the accuracy results, except $Q_5$. We note question eight below.
\bigskip \noindent \texttt{$Q_8$: Do you think you get distracted in your daily work or studies by thinking about this activity?} \bigskip \noindent This could mean that distraction from the daily activity is generally independent of the rest of the questions in the addiction survey questionnaire. Thus, it can be considered a critical question and can be counted as an independent dimension of addiction. Removal of both $Q_2$ and $Q_4$ resulted in a more drastic decrease in accuracy compared to the removal of $Q_2$ or $Q_4$ alone. This could mean that both $Q_2$ and $Q_4$ belong to one class of critical questions, which is independent of the rest of the questions. \bigskip \noindent \texttt{$Q_2$: Do you feel an urge to do it?} \noindent \texttt{$Q_4$: Do you do this activity alone or with some company?} \bigskip \noindent The relationship can be that the feeling of a strong urge to do an addictive activity is somehow related to doing the activity on one's own accord, that is, being alone. And when the urge is weaker, it is somehow related to performing the activity with more company. Thus, we can say that another independent dimension of addiction involves the urge to perform the activity and the number of persons involved during its performance. A peculiar observation concerns $Q_5$ and $Q_8$, which we note below. Question $Q_5$ is stated in the following. \bigskip \noindent \texttt{$Q_5$: Besides work or studies, how many other main activities you have in a day besides this activity?} \bigskip We note that the removal of both $Q_5$ and $Q_8$ resulted in a higher accuracy compared to the removal of $Q_8$ alone, with $(Q_8,Q_5)=76\%$ and $(Q_5,Q_8)=73\%$. That is, the removal of both $Q_5$ and $Q_8$ resulted in roughly a $10\%$ accuracy reduction from the overall accuracy, whereas removal of $Q_8$ alone reduced the accuracy to $45\%$, well beyond the $25\%$ threshold.
We note further that the removal of $Q_5$ alone produced almost zero accuracy reduction, and in fact yielded the accuracy closest to the overall accuracy. When $Q_8$ was removed, the accuracy was drastically reduced to $45\%$. But when $Q_5$ was removed next, the accuracy drastically increased to $76\%$. This means that $Q_5$ was dependent on $Q_8$ alone, such that when $Q_8$ was removed, $Q_5$ became an independent question and affected the accuracy drastically. Let us now investigate the reverse order of removal. When $Q_5$ was removed, the accuracy did not change much and was at its highest value among all single-question removals. But when $Q_8$ was then removed, the accuracy did not drastically change. This means that the removal of $Q_5$ affected $Q_8$ such that it was not able to drastically change the accuracy as it did when the other questions were removed. Thus, $Q_8$ was dependent on $Q_5$. But initially, we identified $Q_8$ to be a critical question because it drastically changed the accuracy when removed alone. The explanation is that the characteristic of $Q_5$ was very similar to the overall behavior, such that when $Q_8$ was removed alone, this dependence was not obvious. Another observation is the relationship between $Q_8$ and $Q_2$: $(Q_2,Q_8)=59\%$ but $(Q_8,Q_2)=71\%$, that is, the order of removal has an effect on the resulting accuracy. In the first case, the accuracy did not drastically change when $Q_2$ was removed; the drastic change to $59\%$ happened only when $Q_8$ was removed after $Q_2$. This is the same behavior as when $Q_8$ was paired with the rest of the questions, except $Q_5$, which means that the removal of $Q_2$ did not affect the removal of $Q_8$, and therefore $Q_8$ is independent of $Q_2$. In the second case, removal of $Q_8$ resulted in a drastic decrease of accuracy to $45\%$, but when $Q_2$ was removed after $Q_8$, the accuracy drastically increased to $71\%$.
Thus $Q_2$ was affected by the removal of $Q_8$ and is therefore dependent on $Q_8$. But $Q_8$ is not dependent on $Q_2$; the dependence is only in one direction, not both. This explains why the order of removal has an effect on the resulting accuracy. The approach presented above can be used to identify critical questions that drastically change the output accuracy when removed. Critical questions can help identify the number of independent variables in the state of mind and emotion and can help in determining its dimensionality. Identifying critical questions can also help in minimizing the questions asked in the questionnaire, in order to save time for the respondents. This possibility of quantifying the dimensionality of a person's state of mind and emotion by an individual without sufficient background in psychology upholds the advantage of a machine learning tool that can help replace the ``expertise'' required to perform an intelligent evaluation. It is noted that this dimensionality may be very difficult even for an experienced psychologist to discover. Another future direction of the proposed method is the possibility of developing questions with hidden information such that the respondent cannot intentionally cheat on his answers. In addition, questions or choices of answers can be designed to capture a faster response from the respondents, so that the questionnaire is more user-friendly. This way, user-friendliness from the perspective of the respondent can be accommodated without compromising technicality from the psychologist's perspective. Furthermore, this can lead to drastically increasing the number of questions in the database, so that an addicted person can test himself again without answering exactly the same set of questions. This can make the self-assessment more reliable. \section{Survey Questions vs.
SADD Questionnaire} A questionnaire designed by the author to assess a respondent's degree of addiction to an activity, called ``A Survey on Addiction,'' is shown in Appendix \ref{appendix:questionnaire}. This set of questions will be compared against a standard psychological test called the ``Short Alcohol Dependence Data (SADD)'' questionnaire \cite{raistrick1983development}, which is used for self-assessment of alcohol addiction. Secondly, we will analyze the answers of the 292 respondents who participated in the addiction survey. One major difference between the addiction survey questions in this paper and the SADD questions is that the questions in this paper assess addiction before its tangible effects are experienced. They do not tackle the physical effects of addiction like ``shaky hands,'' ``vomiting,'' ``imaginary'' things, etc., which are included in the SADD questions. In the following discussion, we compare the first few questions from SADD against the questions of the survey shown in Appendix \ref{appendix:questionnaire}. Question one of SADD addresses the issue of getting the thought of drinking out of the mind, and this is similar to question eight in the survey, which asked whether thinking about the addictive activity distracts from daily work or studies. Obviously, when something distracts from the daily routine, the thought of it is always in the mind. Question two of SADD talked about misplaced priorities due to alcohol addiction. This is related to question five in this work, which talked about major activities including the addictive activity. But the SADD question is transparent in asking about misplaced priorities. Being transparent in the question can be an advantage in getting a clear answer, or a disadvantage when the respondent tries to withhold an accurate answer. Thus an indirect question might be able to address this issue.
Question three of SADD, where the activities of the respondent revolve around alcohol drinking, is again related to question five, which asked about the major activities of the respondent including the addictive activity. That is, if the addictive activity constitutes a major activity of the respondent, then it greatly influences all the other activities. Question four of SADD considers the frequency of drinking alcohol and is related to question one in this work, which explicitly asks about the frequency of performing the addictive activity. In this question, the two approaches are very closely related. Question five of SADD asked about the desire to satisfy the need for alcohol without considering the quality of the drink and is related to question two of this work, which asked about the urge to perform the addictive activity. In this way, the urge to do the addictive activity creates the possibility of disregarding any discomfort that may be experienced in performing it. Lastly, some SADD questions asked about the awareness of possible consequences of drinking alcohol. This is related to question two of this work, which considered the urge to do the addictive activity without consideration of possible consequences. It can also be related to question seven, which asks about seeking professional help because of the possible consequences of the addictive activity. \section{Conclusion and Future Direction} This paper has shown the possibility of determining the state of mind and emotion of an individual through a questionnaire-based machine learning tool, using an artificial neural network and a support vector machine. Previous classifications and data-gathering methods were presented to determine preference, opinion, emotion, or capability. The proposed method is implemented in analyzing addiction through a survey on addiction with ten questions.
Results analysis demonstrated a proposed method to identify critical questions that can lead to the identification of the dimensions of addiction. An analysis of the survey questions against a standard questionnaire on alcohol addiction is presented. This tool can be used to perform the same method of computation for all applications, varying only in the types of questions asked depending on the individual information to be extracted. The proposed machine learning diagnostic tool may be able to output judgment based on the thousands of inputs collected from users. { The future direction of this research is for a psychologist to assess, compare, and validate the proposed method and its results. In addition, a deeper investigation of the dimensions of addiction via a machine learning model will be performed. Lastly, as the online database of questions and answers grows, it is recommended to use unsupervised machine learning to build the state of mind and emotion model through the correlation of the responses from the respondents. } \section*{Acknowledgment} The author would like to acknowledge the contribution of Mario Saiano, Social Health Educator, Local Health Unit Genovese, Italy, for his inputs in the preparation of this manuscript. \section{The Questionnaire on Addiction Survey} \label{appendix:questionnaire} A survey was designed to assess the addiction of a respondent to an activity. The survey was posted online using Google Forms \cite{jamisola2016surveryquest}. This section shows the instruction, questions, and responses from 292 respondents. \bigskip Instruction: This survey consists of 10 questions. Please be honest in answering. Think of one type of activity that you like, answer the following questions, and judge for yourself at the end of the survey whether you consider yourself addicted to this activity. \begin{table}[tb!]
\centering \caption{A Survey on Addiction} \label{tab:survey-questions} \begin{tabular}{ll} \hline \multicolumn{2}{l} {1. How often do you do this activity?} \T\B \\ $\bullet$ Everyday (50.3\%) & $\bullet$ Twice a day (4.5\%) \\ $\bullet$ Once a week (24\%) & $\bullet$ Twice a week (21.2\%) \\ \hline \multicolumn{2}{l}{2. Do you feel an urge to do it? } \T\B\\ $\bullet$ Yes (59.2\%) & $\bullet$ No (8.6\%) \\ $\bullet$ Not really (32.2\%) & \\ \hline \multicolumn{2}{l}{3. Does it affect your mood if you do not do this activity on} \T\B \\ \multicolumn{2}{l}{ a regular basis?} \\ $\bullet$ Yes (43.2\%) & $\bullet$ No (42.8\%) \\ $\bullet$ Not sure (14\%) & \\ \hline \multicolumn{2}{l}{4. Do you do this activity alone or with some company? } \T\B\\ $\bullet$ Alone (50.3\%) & $\bullet$ Two to three (23.3\%) \\ $\bullet$ More than three (3.1\%) & $\bullet$ Does not matter (23.3\%)\\ \hline \multicolumn{2}{l}{5. Besides work or studies, how many other main activities you have} \T\B\\ \multicolumn{2}{l}{in a day besides this activity?} \\ $\bullet$ Three more (66.1\%) & $\bullet$ Five more (14\%) \\ $\bullet$ 10 more (1.4\%) & $\bullet$ Many (18.5\%) \\ \hline \multicolumn{2}{l}{6. How many very close friends and family do you talk to everyday?} \T\B\\ $\bullet$ One or two (44.5\%) & $\bullet$ Three to five (36\%) \\ $\bullet$ Around 10 (8.2\%) & $\bullet$ Many (11.3\%)\\ \hline \multicolumn{2}{l}{7. Did you attempt to seek professional help to stop this activity?} \T\B \\ $\bullet$ No (74.7\%) & $\bullet$ Yes (6.8\%) \\ $\bullet$ Not really (18.5\%) & \\ \hline \multicolumn{2}{l}{8. Do you think you get distracted in your daily work or studies by} \T\B \\ \multicolumn{2}{l}{ thinking about this activity?}\\ $\bullet$ Yes (25.7\%) & $\bullet$ No (48.3\%) \\ $\bullet$ Manageable (26\%) & \\ \hline \multicolumn{2}{l}{9. Have you talked with others about this activity?
}\T\B \\ \multicolumn{2}{l}{$\bullet$ No, I keep it to myself (36.3\%)} \\ \multicolumn{2}{l}{$\bullet$ Selected few (32.2\%)}\\ \multicolumn{2}{l}{$\bullet$ Close friends and family only (15.4\%)}\\ \multicolumn{2}{l}{$\bullet$ Everybody knows about it (16.1\%)} \\ \hline \multicolumn{2}{l}{10. Rate yourself with regard to this activity} \T\B \\ $\bullet$ Addicted (19.5\%) & $\bullet$ Not addicted (23.3\%) \\ $\bullet$ Manageable (44.9\%) & $\bullet$ I don't know (12.3\%)\\ \hline \end{tabular} \end{table} \footnotesize \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} The next-generation sequencing (NGS) revolution has been contributing to the discovery of novel genes and genetic mechanisms connected to cancer \cite{vogelstein2004cancer}. However, the ever-increasing amount of available genomics data, together with new discoveries, is also posing new challenges to researchers, such as prioritizing cancer genes among all the variants generated by NGS experiments. In fact, while considerable effort has been devoted to predicting novel cancer driver genes, tools capable of automatically measuring the association of genes to cancer based on the currently available scientific literature are still limited. To overcome these limitations, in \cite{piazza2017oncoscore} we proposed OncoScore, a bioinformatics text-mining tool that automatically measures, through dynamically updatable web queries to the biomedical literature, the association of genes to cancer based on gene citations. The output of the tool is a score that measures the strength of the association of any gene to cancer. The latest version of OncoScore is available on GitHub, in the development branch of \href{https://github.com/danro9685/OncoScore}{https://github.com/danro9685/OncoScore}, and on Bioconductor as an R package at \href{http://bioconductor.org/}{bioconductor.org}. This version of the tool allows full customization of the algorithm and can be easily integrated into existing NGS pipelines. Furthermore, we also provide a web interface of the method, which allows easier access for researchers with limited experience in bioinformatics (see the website \href{http://www.galseq.com/oncoscore.html}{http://www.galseq.com/oncoscore.html}). \section{OncoScore analysis} \label{sec:oncoscore_analysis} The OncoScore analysis consists of two parts. One can estimate the oncogenic potential of a set of genes given the literature knowledge available at the time of the analysis, or one can study the trend of such oncogenic potential over time.
See Figure \ref{fig:oncoscore_pipeline} for an overview of the OncoScore pipeline and the Supplementary Material for a detailed description of the software with examples. \begin{figure} \includegraphics[width=0.99\textwidth]{images/oncoscore_pipeline} \caption[OncoScore pipeline \cite{piazza2017oncoscore}.]{The input of the OncoScore pipeline is a set of candidate driver genes, or a chromosomal region and all the genes within it. For these genes, the tool scans the biomedical literature (i.e., PubMed scientific papers) and automatically retrieves citation counts for all the genes. Then, the oncogenic potential of each of the candidate driver genes is assessed. This analysis can also be repeated along a time line in order to analyze the oncogenicity trends of the genes over time.} \label{fig:oncoscore_pipeline} \end{figure} OncoScore provides a set of functions to dynamically perform web queries to the biomedical literature in real time. The user can specify a set of genes and aliases to be considered in the queries. Furthermore, it is also possible to specify a set of dates; in this case the tool will retrieve only the literature published up to those dates, in order to assess the association to cancer of the considered genes at those specific times. Once the queries are performed, it is possible to measure the oncogenic potential of each gene by means of a number representing the strength of its association to cancer. The OncoScore analysis is particularly useful when considering copy number alterations. These genomic rearrangements can involve dozens or even hundreds of genes. To this end, OncoScore provides functions to retrieve the names of the genes in a given portion of a chromosome, together with functions to automatically perform the whole pipeline for all the genes within a chromosomal region, without the need to specify the gene names directly.
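The citation-based scoring idea can be illustrated with a toy computation. The function below is a deliberately simplified stand-in: the formula, the gene labels and the citation counts are all invented for illustration and do not reproduce the actual OncoScore metric, which is defined in \cite{piazza2017oncoscore}.

```python
import math

def toy_score(cancer_citations, total_citations):
    """Toy gene-to-cancer association score (NOT the real OncoScore metric):
    the fraction of a gene's citations that co-occur with cancer terms,
    damped by the logarithm of the total citation count so that genes
    with almost no literature cannot reach high scores."""
    if total_citations == 0:
        return 0.0
    frequency = cancer_citations / total_citations  # in [0, 1]
    support = math.log10(1 + total_citations)       # grows slowly with evidence
    return round(100 * frequency * support / (1 + support), 2)

# Invented counts: (citations co-occurring with cancer terms, total citations).
counts = {"GENE_A": (950, 1000), "GENE_B": (10, 500), "GENE_C": (0, 3)}
for gene in sorted(counts):
    cancer, total = counts[gene]
    print(gene, toy_score(cancer, total))
```

A real pipeline would obtain the two counts for each gene and its aliases from web queries to the biomedical literature, with and without cancer-related keywords, as described above.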
As mentioned above, the tool also provides the opportunity of computing the OncoScore at different user-defined moments in time. This is useful to study the literature trend of specific genes over time. Finally, the tool provides functions to plot the results for the retrieved genes or chromosomal regions. It is possible to plot the OncoScores as bar plots when performing the standard analysis and as line plots when analyzing gene trends over time. \section{Conclusions} \label{sec:conclusions} OncoScore is an open-source tool capable of ranking genes according to their association to cancer, based on the biomedical literature available on PubMed. The tool can perform dynamically updatable web queries to the biomedical literature in real time and measure the oncogenic potential of genes. The output of the analysis is a score that measures the strength of the association of the genes to cancer at the time of the execution. The OncoScore analysis of NGS data, on both variants and chromosomal regions, shows the utility of this tool when performing the crucial task of cancer gene prioritization. \bibliographystyle{unsrt}
\section{Introduction} \noindent Throughout this paper we consider finite (undirected) graphs that allow parallel edges and may have loops. Let $G = (V, E)$ be a graph with vertex set $V$ and edge set $E$. The order, the size and the number of connected components of $G$ are denoted by $n=n(G)$, $m=m(G)$ and $c=c(G)$, respectively. The complete graph, the empty graph, the path and the cycle of order $n$ are denoted by $K_n, E_n, P_n$ and $C_n$, respectively. For $A \subseteq E$, we denote by $G[A]$ the subgraph induced by $A$, by $G/A$ the graph obtained from $G$ by contracting all edges in $A$, by $G-A$ the graph obtained from $G$ by deleting the edges in $A$, and by $G|_A$ the restriction of $G$ to $A$, namely $G|_A = G-(E\backslash A)$. The Tutte polynomial $T(G; x, y)$ of a graph $G = (V,E)$, introduced in \cite{TUTTE}, is a two-variable polynomial which can be recursively defined as: \begin{eqnarray*}\nonumber T(G;x,y)= \begin{cases} 1& \text{if $E = \emptyset$}\\ xT(G/e; x,y) & \text{if $e$ is a bridge}\\ yT(G-e; x,y) & \text{if $e$ is a loop}\\ T(G/e;x,y) + T(G-e;x,y) & \text{if $e$ is neither a loop nor a bridge}. \end{cases} \end{eqnarray*} It is independent of the order in which edges are selected for deletion and contraction in the reduction process to the empty graph. One way of seeing this is through the rank-nullity expansion of the Tutte polynomial. Let $A\subseteq E$. We identify $A$ with the spanning subgraph $(V, A)$ of $G$, i.e. $G|_A$, temporarily for the sake of simplicity. Let $\rho(A)$ denote the rank $n-c(A)$ and let $\gamma(A)$ denote the nullity $|A|-n+c(A)$. Then \begin{eqnarray}\nonumber T(G; x, y) = \sum_{A \subseteq E}(x-1)^{\rho(E)-\rho(A)}(y-1)^{\gamma(A)}. \end{eqnarray} Moreover, the Tutte polynomial has a spanning forest expansion \cite{TUTTE}, i.e. \begin{eqnarray*} T(G; x, y) = \sum_{i,j}t_{ij}x^iy^j, \end{eqnarray*} where $t_{ij}$ is the number of spanning forests of $G$ with internal activity $i$ and external activity $j$.
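The rank-nullity expansion above doubles as a brute-force algorithm: summing over all $2^m$ edge subsets yields $T(G;x,y)$ with no need to choose an order of deletions and contractions. The following sketch (our own illustration; the function names are not from the paper) evaluates the sum for a small multigraph and checks it on the triangle $C_3$, for which the recursion easily gives $T(C_3;x,y)=x^2+x+y$.

```python
from itertools import combinations

def tutte(n, edges, x, y):
    """Evaluate T(G; x, y) via the rank-nullity expansion
        T(G;x,y) = sum over A subset of E of
                   (x-1)^(rho(E)-rho(A)) * (y-1)^(gamma(A)),
    with rho(A) = n - c(A) and gamma(A) = |A| - n + c(A).
    Vertices are 0..n-1; edges is a list of (u, v) pairs; loops and
    parallel edges are allowed."""

    def components(edge_subset):
        """Number of connected components of (V, edge_subset), via union-find."""
        parent = list(range(n))

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a

        for u, v in edge_subset:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
        return len({find(w) for w in range(n)})

    rho_E = n - components(edges)
    total = 0
    for k in range(len(edges) + 1):
        # combinations() picks edges by position, so parallel edges
        # (equal pairs) are treated as distinct, as they should be.
        for subset in combinations(edges, k):
            c = components(subset)
            total += (x - 1) ** (rho_E - (n - c)) * (y - 1) ** (k - n + c)
    return total

# The triangle C_3 has T(C_3; x, y) = x^2 + x + y, so:
triangle = [(0, 1), (1, 2), (2, 0)]
print(tutte(3, triangle, 1, 1))  # 3 spanning trees
print(tutte(3, triangle, 2, 1))  # 7 spanning forests
print(tutte(3, triangle, 1, 2))  # 4 connected spanning subgraphs
```

The evaluations at $(1,1)$, $(2,1)$ and $(1,2)$ count spanning trees, spanning forests and connected spanning subgraphs, respectively, which gives an easy hand check of the implementation.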
The Tutte polynomial contains as special cases the chromatic polynomial $P(G; \lambda)$, which counts proper $\lambda$-colorings of $G$, and the flow polynomial $F(G; \lambda)$, which counts nowhere-zero $AG$-flows of $G$, where $AG$ is a finite Abelian group with $|AG|=\lambda$. Namely, \begin{eqnarray} P(G; \lambda)&=&(-1)^{\rho(E)}\lambda^{c}T(G; 1 - \lambda, 0),\label{chro}\\ F(G; \lambda)&=&(-1)^{\gamma(E)}T(G; 0, 1-\lambda).\label{flow} \end{eqnarray} In \cite{KOOK}, Kook obtained the following convolution formula for the Tutte polynomial, which will be used in Section 2. \begin{eqnarray} T(G;x,y) = \sum_{A\subseteq E}T(G/A; x,0)T(G|_A;0,y).\label{con} \end{eqnarray} It is obvious that $t_{00} = 0$ if $|E|>0$. It is proved that $t_{01} = t_{10}$ if $|E|>1$ \cite{BOLLOBAS}, and this common value is called the $\beta$ invariant. $\beta \neq 0$ implies that the graph under consideration is loopless and 2-connected \cite{BRYLAWSKI}, and the $\beta$ invariant enumerates the bipolar orientations of a rooted 2-connected plane graph \cite{FOR}. For surveys of results and applications of the Tutte polynomial, we refer the reader to \cite{BRYLAWSKI,WELSH,ELLIS}. A basic problem in graph theory is to establish relations between the coefficients of graph polynomials and subgraph structures in the graph; see, for example, \cite{Biggs} for results on the characteristic polynomial and the chromatic polynomial. The purpose of this paper is to establish a relation between several extreme coefficients of the Tutte polynomial and subgraph structures of the graph. Let $G=(V,E)$ be a connected bridgeless and loopless graph. To state our results, we need some additional definitions and notations. A \emph{parallel class} of $G$ is a maximal subset of $E$ whose edges share the same endvertices. A parallel class is called \emph{trivial} if it contains only one edge. It is obvious that parallel classes partition the edge set $E$.
A \emph{series class} of $G$ is a maximal subset $C$ of $E$ such that the removal of any two edges from $C$ will increase the number of connected components of the graph. Let $C\subseteq E$. Then $C$ is a series class if and only if $c(G-C)=|C|$ and $G-C$ is bridgeless. A series class is called \emph{trivial} if it contains only one edge. If a bridgeless and loopless graph $G$ is disconnected, then its series classes are defined to be the union of series classes of connected components of $G$. Series classes also partition the edge set $E$. See \cite{DJIN} for details. Let $k_1,k_2,k_3\geq 1$ be integers. We denote by $\Theta_{k_1,k_2,k_3}$, the graph with two vertices $u$, $v$ connected by three internally disjoint paths of lengths $k_1,k_2$ and $k_3$. If $C\subseteq E$ satisfies: (1) $C=C_1\cup C_2\cup C_3$, $C_{i}\subseteq E$ and $|C_i|=k_i$ $(i=1,2,3)$, (2) $G$ has the structure as shown in Figure 1, and (3) $G-C= G_1\cup G_2\cup\cdots\cup G_{k_1+k_2+k_3-1}$ and each $G_i$ is connected and bridgeless for $i=1,2,\cdots,k_1+k_2+k_3-1$, then we say $C$ is a $\Theta$ class of $G$. The total number of $\Theta$ classes of $G$ is denoted by $\Theta(G)$. Let $k_1,k_2\geq 1$ be integers. We denote by $\infty_{k_1,k_2}$ the graph formed from two cycles $C_{k_1}$ and $C_{k_2}$ by identifying one vertex of $C_{k_1}$ with one vertex of $C_{k_2}$. If $C\subseteq E$ satisfies: (1) $C=C_1\cup C_2$, $C_{i}\subseteq E$ and $|C_i|=k_i$ $(i=1,2)$, (2) $G$ has the structure as shown in Figure 2, and (3) $G-C= G_1\cup G_2\cup\cdots\cup G_{k_1+k_2-1}$ and each $G_i$ is connected and bridgeless for $i=1,2,\cdots,k_1+k_2-1$, then we say $C$ is an $\infty$ class of $G$. The total number of $\infty$ classes of $G$ is denoted by $\infty(G)$. 
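As a quick illustration of these notions (our own example, easily checked from the definitions), consider the dipole $D_3$, i.e. two vertices joined by three parallel edges, so that $n=2$ and $m=3$:

```latex
\begin{itemize}
\item The whole edge set $E$ is the unique parallel class, so
      $p(D_3)=1$ and $p^*(D_3)=1$.
\item Each single edge $e$ is a (trivial) series class, since
      $c(D_3-e)=1=|\{e\}|$ and $D_3-e=D_2$ is bridgeless; hence
      $s(D_3)=3$ and $s^*(D_3)=0$.
\item $E=C_1\cup C_2\cup C_3$ with $|C_i|=1$ is a $\Theta$ class:
      $D_3=\Theta_{1,1,1}$, and $D_3-E$ consists of
      $k_1+k_2+k_3-1=2$ isolated vertices, each connected and
      bridgeless. Thus $\Theta(D_3)=1$.
\item $D_3$ has no $\infty$ class, consistent with the identity
      $\infty(G)=\binom{s(G)}{2}-3\Theta(G)$ used later in the
      proofs: $\infty(D_3)=3-3=0$.
\end{itemize}
```

Since $T(D_3;x,y)=x+y+y^2$, these counts can be compared directly against the coefficient formulas of the next theorems.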
\begin{figure}[htbp] \centering \includegraphics[width=.5\textwidth]{thetaG.eps} \caption{A $\Theta$ class.} \label{fig1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=.5\textwidth]{8G.eps} \caption{An $\infty$ class.} \label{fig2} \end{figure} Now we are in a position to state our main results. \begin{theorem}\label{mainI} Let $G = (V, E)$ be a loopless and bridgeless connected graph. Let $s(G)$ and $s^*(G)$ be the number of series classes and non-trivial series classes of $G$, respectively. Then \begin{itemize} \item[(1)] $t_{0,m-n+1}= 1$, \item[(2)] $t_{0,m-n} = n + s(G) - m -1$, \item[(3)] $t_{0,m-n-1} ={m-n+1 \choose 2} - (m-n)s(G) + {s(G) \choose 2} - \Theta(G)$, \item[(4)] $t_{1,m-n} =s^*(G)$, \textrm{and} \item[(5)] $ t_{1,m-n-1} =-s^*(G)(m-n)+\sum_{\stackrel{A\subseteq E}{A\ \text{is a nontrivial series class} } }s(G-A) + \Theta(G). $ \end{itemize} \end{theorem} \begin{theorem}\label{mainII} Let $G = (V, E)$ be a loopless and bridgeless connected graph. Let $p(G)$ and $p^*(G)$ be the number of parallel classes and non-trivial parallel classes of $G$, respectively. Let $\Delta(\tilde{G})$ be the number of triangles of $\tilde{G}$, the graph obtained from $G$ by replacing each parallel class by a single edge. Then \begin{itemize} \item[(1)] $t_{n-1,0} = 1$, \item[(2)] $t_{n-2,0} = p(G) - n +1$, \item[(3)] $t_{n-3,0} ={n-1 \choose 2} - (n-2)p(G) + {p(G) \choose 2}-\Delta(\tilde{G})$, \item[(4)] $t_{n-2,1} =p^*(G)$, \textrm{and} \item[(5)] $t_{n-3,1} = -p^*(G)(n-2)+\sum_{\stackrel{A\subseteq E}{ A\ \text{is a nontrivial parallel class} } }p(G/A) + \Delta(\tilde{G}).$ \end{itemize} \end{theorem} \begin{remark} $t_{0,m-n+1}, t_{0,m-n}$ and $t_{0,m-n-1}$ can be obtained from coefficients of the flow polynomial $F(G; 1-y)$ and $t_{n-1,0}, t_{n-2,0}$ and $t_{n-3,0}$ can be obtained from coefficients of the chromatic polynomial $P(G;1-x)$. These will be seen from proofs of Theorems 1 and 2 and further clarified in Section 3.
Another four coefficients are completely new as far as we know. \end{remark} \section{Proofs} \noindent To prove Theorems \ref{mainI} and \ref{mainII}, we need two results that express the chromatic and flow polynomials of graphs as characteristic polynomials of certain lattices related to graphs, which were proven by G.~C. Rota in the 1960s \cite{ROTA}. \subsection{M\"{o}bius function, the chromatic and flow polynomials} \noindent Let $P$ be a poset (a finite set partially ordered by the relation $\leq$). The unique \emph{minimum element} and the unique \emph{maximum element} of $P$, if they exist, are denoted by $\widehat{0}=\widehat{0}_P$ and $\widehat{1}=\widehat{1}_P$, respectively. A segment $[x,y]$, for $x,y\in P$, is the set of all elements $z$ between $x$ and $y$, i.e. $\{z \,|\, x\leq z\leq y\}$. Note that the segment $[x,y]$, endowed with the induced order structure, is a poset in its own right and $\widehat{0}_{[x,y]} = x, \widehat{1}_{[x,y]} = y$. An element $y$ \emph{covers} an element $x$ when the segment $[x,y]$ contains exactly two elements. A poset is \emph{locally finite} if every segment is finite. Let $P$ be a locally finite poset. Then the M\"{o}bius function of $P$ is an integer-valued function defined on the Cartesian product $P\times P$ such that \begin{align*} \mu(x,y) =1\ & \mbox{if}\ x = y\\ \mu(x,y) =0\ & \mbox{if}\ x \nleq y\\ \sum_{x\leq z\leq y} \mu(x,z)=0\ & \mbox{if}\ x <y. \end{align*} If $P$ has a $\widehat{0}$, then \begin{eqnarray} \mu(\widehat{0},x) = -\sum_{y<x} \mu(\widehat{0},y)\ \mbox{if}\ x >\widehat{0}. \end{eqnarray} A finite poset $P$ is \emph{ranked (or graded)} if for every $x\in P$ every maximal chain with $x$ as top element has the same length, denoted $rk(x)$. Here the length of a chain with $k$ elements is $k - 1$. If $P$ is ranked, the function $rk$, called the \emph{rank function}, is zero on minimal elements of $P$ and $rk(y) = rk(x) + 1$ if $x, y\in P$ and $y$ covers $x$. Let $P$ be a ranked poset and $t$ be a variable.
Then the characteristic polynomial of $P$ is defined by \begin{eqnarray} q(P;t) = \sum_{x\in P}\mu(\widehat{0},x)t^{rk(\widehat{1})-rk(x)}. \end{eqnarray} Let $G = (V, E)$ be a loopless graph. A \emph{bond} of $G$ is a spanning subgraph $H \subseteq G$ such that each connected component of $H$ is a vertex-induced subgraph of $G$. Then the set $L(G)$ consisting of all bonds of $G$ forms a graded lattice ordered by the refinement relation on the set of partitions of $V$; that is, for $K, H\in L(G)$, $K \leq H$ means that $\{V(K_1), \cdots, V(K_s)\}$ is finer than $\{V(H_1), \cdots, V(H_t)\}$, where $K_1, \cdots, K_s$ are the connected components of $K$ and $H_1, \cdots, H_t$ are the connected components of $H$. Moreover, $rk(H) = \rho(H)$ for $H\in L(G)$ and $P(G; \lambda ) = \lambda^{c(G)}q(L(G); \lambda )$. \begin{theorem}[\cite{ROTA}] Let $G$ be a loopless graph. Then \label{thA} \begin{align} P(G; \lambda )= \sum_{H\in L(G)}\mu(E_n,H)\lambda^{c(H)}.\label{ty} \end{align} \end{theorem} \begin{remark} Note that when $G$ contains parallel edges, Theorem \ref{thA} is still valid if we take $\tilde{G}$ in place of $G$ in the right-hand side of (\ref{ty}). \end{remark} Let $G = (V,E)$ be a bridgeless graph. The set $L'(G)$ consisting of all spanning subgraphs of $G$ without bridges also forms a graded lattice, with the partial order defined by $H_1 \leq H_2$ if $E(H_1) \supseteq E(H_2)$. Moreover, $rk(H) = \gamma(G) - \gamma(H)$ for $H\in L'(G)$. \begin{theorem} Let $G$ be a bridgeless graph. Then \label{thB} \begin{align} F(G; \lambda) = \sum_{H\in L'(G)}\mu(G,H)\lambda^{\gamma(H)}. \end{align} \end{theorem} \noindent\textbf{Proof.} Let $H \in L'(G)$ and let $\overrightarrow{G}$ be an arbitrary orientation of $G$. Suppose $N_{=}(H)$ is the function counting $AG$-flows of $\overrightarrow{G}$ such that $\mathbf{0}$ is assigned exactly on the edges of $E\backslash E(H)$, and $N_{\geq}(H) =\sum_{H'\geq H}N_=(H')$ is the function counting $AG$-flows of $\overrightarrow{G}$ such that $\mathbf{0}$ is assigned at least on the edges of $E\backslash E(H)$.
Note that $N_=(G) = F(G; \lambda)$. By the M\"{o}bius Inversion Theorem \cite{BENDER}, \[ N_{=}(G) = \sum_{H\in L'(G)}\mu(G,H)N_{\geq}(H). \] It is not difficult to see that $N_{\geq}(H) = \lambda^{\gamma(H)}$, which completes the proof. \hfill\cvd \vskip0.2cm We write $\omega_1 = 1-x$ and $\omega_2 = 1-y$. Keep in mind that $\widehat{0}$ will be the graph $G$ itself when $L'(G)$ is concerned, while $\widehat{0}$ will be the empty graph of order $n(G)$ when $L(G)$ is concerned. By inserting Eqs. (\ref{chro}) and (\ref{flow}) into Eq. (\ref{con}), we obtain \begin{eqnarray} &&T(G;x,y)=\sum_{A\subseteq E}T(G/A; x,0)T(G|_A;0,y)\nonumber\\ &=& \sum_{A\subseteq E}(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)}P(G/A;\omega_1)F(G|_A;\omega_2). \label{eq-sum} \end{eqnarray} Thus, in the proofs of Theorems \ref{mainI} and \ref{mainII} we only consider those $A$'s in the RHS of (\ref{eq-sum}) for which $G/A$ is loopless and $G|_A$ is bridgeless; otherwise, $P(G/A;\omega_1)=0$ or $F(G|_A;\omega_2)=0$. The following notations will be used in the next two subsections. Let $D_k$ be the dipole graph of size $k$, i.e. two distinct vertices connected by $k$ parallel edges. Let $P_{k_1, k_2}$~($k_i\geq 1$ for each $i=1,2$) be the \emph{multi-path} with $3$ vertices $v_1,v_2,v_{3}$ connected by $k_i$ parallel edges between $v_i$ and $v_{i+1}$ for $i=1,2$. Let $C_{k_1, k_2, k_3}$ ($k_i \geq 1$ for each $i=1,2,3$) be the \emph{multi-cycle} with $3$ vertices $v_1, v_2, v_{3}$ connected by $k_i$ parallel edges between $v_i$ and $v_{i+1}$ for $i=1,2$ and $k_3$ parallel edges between $v_3$ and $v_1$. \subsection{Proof for Theorem \ref{mainI}} \noindent Note that the degrees in $y$ in Theorem \ref{mainI} are $m-n+1$, $m-n$ and $m-n-1$. We shall consider $A$'s with $G|_A$ close to $G$. \noindent\textbf{Case 1} $|A| = m$. In this case, $G/A=K_1$ and $G|_A=G$.
The corresponding contribution of $A$ to the summation of (\ref{eq-sum}) is \begin{eqnarray} (-1)^{m-n+1}[\omega_2^{m-n+1} + \sum_{\stackrel {H\in L'(G)}{rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n}+\sum_{\stackrel {H\in L'(G)}{ rk(H)=2}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots]. \label{eq-A1} \end{eqnarray} \noindent\textbf{Case 2} $|A| = m-1$. In this case, $G/A$ will contain a loop. \noindent\textbf{Case 3} $|A| = m-2$. In this case, $E\backslash A$ is exactly a series class of $G$ with cardinality $2$. Moreover, $G/A=D_2$ and hence $P(G/A; \omega_1)= -x(1-x)$. It is clear that $c(G/A) = 1$ and $c(G|_A) = c(G-(E\backslash A)) = 2$. Hence, \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{m-n+1}(1-x)^{-1}. \] Thus, the contribution of $A$ can be written as \begin{align} (-1)^{m-n}x[\omega_2^{m-n} + \sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots ] \label{eq-A2} \end{align} since $\deg(F(G|_A; \lambda))= \gamma(G|_A) = m - n$. \noindent\textbf{Case 4} $|A| = m-3$. There are two subcases. \noindent\textbf{(a)} $c(G|_A) = c(G - E\backslash A) = 3$. Then $E\backslash A$ is exactly a series class with cardinality 3. Moreover, $G/A = C_3$. Then $n(G/A) =3$, $c(G/A) = 1$, $c(G|_A) = 3$ and $P(G/A; \omega_1)= x(1-x)(1+x)$. So \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{m-n}(1-x)^{-1}. \] Thus, the contribution of $A$ can be written as \begin{align} (-1)^{m-n}x(1+x)[\omega_2^{m-n} + \sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots ] \label{eq-A4} \end{align} since $\deg(F(G|_A; \lambda))= \gamma(G|_A) = m - n$. \noindent\textbf{(b)} $c(G|_A) = c(G - E\backslash A) = 2$. Then $E\backslash A$ is a $\Theta$ class. Moreover, $G/A = \Theta_{1,1,1}$. Note that $n(G/A) =2$, $c(G/A) = 1$, $c(G|_A) = 2$ and $P(G/A; \omega_1)= -x(1-x)$. So \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{m-n}(1-x)^{-1}.
\] Thus, the contribution of $A$ can be written as \begin{align} (-1)^{m-n-1}x[\omega_2^{m-n-1} +\cdots ] \label{eq-A3} \end{align} since $\deg(F(G|_A; \lambda))= \gamma(G|_A) = m - n-1$. \noindent\textbf{Case 5} $|A| = m- k$ ($k\geq 4$). Since the degree in $y$ considered in Theorem \ref{mainI} is at least $m-n-1$, only the following two subcases need to be considered. \noindent\textbf{(a)} $c(G - E\backslash A) = k$. This means that $E\backslash A$ is exactly a series class with cardinality $k$. Note that $G/A=C_k$. Then the contribution of $A$ can be written as \begin{align} &(-1)^{k-1+m-k-n+ k}\omega_1^{-1}P(C_k;\omega_1)[\omega_2^{m - n}+ \sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots]\nonumber\\ =&(-1)^{m-n+k-1}(1-x)^{-1}[(-1)^k(-x) + (-x)^k][\omega_2^{m-n}+\sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots] \nonumber\\ =&(-1)^{m-n}[x+\cdots+x^{k-1}][\omega_2^{m-n}+\sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_2^{m-n-1}+\cdots] \nonumber\\ =& xy^{m-n} - [(m-n)+ \sum_{\stackrel {H\in L'(G|_A)}{ rk(H)=1}}\mu(\widehat{0},H)]xy^{m-n-1}+\cdots.\label{eq-A5} \end{align} \noindent\textbf{(b)} $c(G - E\backslash A) = k-1$. Then $E\backslash A$ is either a $\Theta$ class or an $\infty$ class. \noindent\textbf{(1)} For the first case, $G/A$ will be the theta graph $\Theta_{k_1,k_2,k_3}$ with $k_1, k_2, k_3\geq 1$ and $k_1 +k_2 +k_3 = k$. Then we know from \cite{BH} that \begin{align*} P(G/A;\omega_1) =&\frac{\operatorname*{\prod}\limits_{i=1}^3[(\omega_1-1)^{k_i+1}+(-1)^{k_i+1}(\omega_1-1)]}{[\omega_1(\omega_1-1)]^2}\nonumber\\ &+\frac{\operatorname*{\prod}\limits_{i=1}^3[(\omega_1-1)^{k_i}+(-1)^{k_i}(\omega_1-1)]}{\omega_1^2}.
\end{align*} If $k_1, k_2, k_3\geq 2$, the chromatic polynomial of $G/A$ can be written as \begin{align*} P(G/A;\omega_1)=&(-1)^kx(1-x)(1+\cdots+x^{k_1-1})(1+\cdots+x^{k_2-1})(1+\cdots+x^{k_3-1})\nonumber\\ &+(-1)^{k+1}x^3(1-x)(1+\cdots+x^{k_1-2})(1+\cdots+x^{k_2-2})(1+\cdots+x^{k_3-2}). \end{align*} Then the contribution of $A$ to the summation of (\ref{eq-sum}) is \begin{align*} &(-1)^{(k-1)-1+m-k-n+(k-1)}\omega_1^{-1}P(G/A;\omega_1)[\omega_2^{m-n-1}+\cdots]\nonumber\\ =&(-1)^{m-n-1}x(1+\cdots+x^{k_1-1})(1+\cdots+x^{k_2-1})(1+\cdots+x^{k_3-1})[\omega_2^{m-n-1}+\cdots]\nonumber\\ &+(-1)^{m-n}x^3(1+\cdots+x^{k_1-2})(1+\cdots+x^{k_2-2})(1+\cdots+x^{k_3-2})[\omega_2^{m-n-1}+\cdots]. \end{align*} If some $k_i=1$, then \begin{align*} P(G/A;\omega_1)=(-1)^kx(1-x)(1+\cdots+x^{k_1-1})(1+\cdots+x^{k_2-1})(1+\cdots+x^{k_3-1}). \end{align*} The contribution of $A$ is \begin{align*} &(-1)^{(k-1)-1+m-k-n+(k-1)}\omega_1^{-1}P(G/A;\omega_1)[\omega_2^{m-n-1}+\cdots]\nonumber\\ =&(-1)^{m-n-1}x[1+\cdots+x^{k_1-1}][1+\cdots+x^{k_2-1}][1+\cdots+x^{k_3-1}][\omega_2^{m - n-1}+\cdots]. \end{align*} It can be seen that no matter what the values of $k_1$, $k_2$, $k_3$ are, the contribution of $A$ is the same, i.e. \begin{align} (-1)^{m-n-1}x[1+\cdots+x^{k_1-1}][1+\cdots+x^{k_2-1}][1+\cdots+x^{k_3-1}][\omega_2^{m - n-1}+\cdots]\label{eq-A7}. \end{align} Note that Eq. (\ref{eq-A7}) coincides with Eq. (\ref{eq-A3}). \noindent\textbf{(2)} For the second case, $G/A$ will be $\infty_{k_1,k_2}$ with $k_1,k_2\geq 1$, $k_1 +k_2 = k$. If $k_1$ or $k_2$ is 1, then $G/A$ contains a loop. So we only consider $k_1,k_2\geq 2$.
Then the contribution of $A$ can be written as \begin{align*} &(-1)^{(k-1)-1+m-k-n+(k-1)}\omega_1^{-1}\frac{P(C_{k_1};\omega_1)P(C_{k_2};\omega_1)}{\omega_1}[\omega_2^{m-n-1}+\cdots]\nonumber \\ =&(-1)^{m-n-1}x^2[1+\cdots+x^{k_1-2}][1+\cdots+x^{k_2-2}][\omega_2^{m - n-1}+\cdots]. \end{align*} Now we combine all the above cases to obtain Theorem \ref{mainI}. Note that we only need to consider Case 1 to determine $t_{0,m-n+1},t_{0,m-n}$ and $t_{0,m-n-1}$ since Case 2 has no contribution and the contributions of Cases 3 to 5 and their subcases and subsubcases all include the variable $x$. To determine $t_{1,m-n}$, we only need to consider Case 3, Case 4(a) and Case 5(a). To determine $t_{1,m-n-1}$, we need to consider the first type: Case 3, Case 4(a) and Case 5(a), and the second type: Case 4(b) and Case 5(b)(1). Note that Case 5(b)(2) includes $x^2$ and has no contribution. Now let's insert (\ref{eq-A1})-(\ref{eq-A7}) into (\ref{eq-sum}). We obtain: \begin{align*} &T(G; x, y)\nonumber\\ =&y^{m-n+1} + (-1)[(m-n+1) + \sum_{\stackrel {H\in L'(G)}{rk(H)=1}}\mu(\widehat{0},H)]y^{m-n} \\ & + [{m-n+1 \choose 2} + (m-n)\sum_{\stackrel {H\in L'(G)}{rk(H)=1}}\mu(\widehat{0},H) + \sum_{\stackrel {H\in L'(G)}{rk(H)=2}}\mu(\widehat{0},H)]y^{m-n-1}\\ & + \cdots \\ &+ \sum_{\stackrel {A \subseteq E }{E\backslash A\ \text{is a nontrivial series class}}} xy^{m-n}\\ &+ \Big\{\sum_{\stackrel {A \subseteq E }{E\backslash A\ \text{is a nontrivial series class}}}[-(m-n)-\sum_{\stackrel {H\in L'(G|_A)}{rk(H)=1}}\mu(\widehat{0},H)]+ \Theta(G)\Big\}xy^{m-n-1}\\ & + \cdots \end{align*} Let $H=G-A \in L'(G)$. Then $rk(H) =1$ $\Longleftrightarrow$ $\gamma(H)=m-n$ $\Longleftrightarrow$ $|A| = c(G -A)$. Thus \[\sum_{\stackrel {H\in L'(G)}{rk(H)=1}} \mu(\widehat{0}, H)=\sum_{\stackrel {A\subseteq E, |A| = c(G-A)} {G - A\ \text{is bridgeless}}} (-1)= \sum_{A\ \text{is a series class}} (-1)=-s(G).
\] \begin{figure}[htbp] \centering \includegraphics[width=.5\textwidth]{rkH2.eps} \caption{$H'=G_1\cup\cdots\cup G_{k_1-1}\cup G_{k_1}\cup\cdots\cup G_{k_1+k_2-1}$.} \label{fig3} \end{figure} Similarly, let $H'=G-A' \in L'(G)$. Then $rk(H') =2$ $\Longleftrightarrow$ $|A'| = c(G -A') +1$ $\Longleftrightarrow$ $A'-A$ is a series class of $G-A$ for some series class $A$ of $G$ with $A\subseteq A'$ $\Longleftrightarrow$ $G$ has the structure shown in Figure 3 $\Longleftrightarrow$ $A'$ is either a $\Theta$ class or an $\infty$ class. If $A'$ is a $\Theta$ class, then $\mu(\widehat{0},H')=2$, and if $A'$ is an $\infty$ class, then $\mu(\widehat{0},H')=1$. Note that $\infty(G)=\binom{s(G)}{2}-3\Theta(G)$. Thus \begin{align*} \sum\limits_{\scriptstyle H\in L'(G) \atop \scriptstyle rk(H)=2}\mu(\widehat{0},H)=1\times\left[\binom{s(G)}{2}-3\Theta(G)\right]+2\times\Theta(G)=\binom{s(G)}{2}-\Theta(G). \end{align*} This completes the proof of Theorem \ref{mainI}. \hfill\cvd \vskip0.2cm \subsection{Proof for Theorem \ref{mainII}} \noindent We only give a sketch of the proof of Theorem \ref{mainII}. \noindent\textbf{Case 1} $A = \emptyset$. In this case, $G/A =G$ and $G|_A =E_n$. The contribution of $A$ to the summation (\ref{eq-sum}) is \begin{eqnarray} (-1)^{n-1}[\omega_1^{n-1} + \sum_{\stackrel {H\in L(G)}{rk(H)=1}}\mu(\widehat{0},H)\omega_1^{n-2}+\sum_{\stackrel {H\in L(G)}{ rk(H)=2}}\mu(\widehat{0},H)\omega_1^{n-3}+\cdots].\label{ccc} \label{eq-B1} \end{eqnarray} \noindent\textbf{Case 2} $|A| = 1$. $G|_A$ will have a bridge. \noindent\textbf{Case 3} $|A| = 2$. $A$ is exactly a parallel class of cardinality 2, $G|_A =D_2\cup E_{n-2}$ and \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{n-1}\omega_1^{-1}. \] It follows that the contribution of $A$ can be written as \begin{align} (-1)^{n-1}(-y)[\omega_1^{n-2} + \sum_{\stackrel {H\in L(G/A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_1^{n-3}+\cdots ]. \label{eq-B2} \end{align} \noindent\textbf{Case 4} $|A| = 3$. There are only two subcases to be considered.
\noindent\textbf{(a)} $A$ is a parallel class. In this case $n(G/A) =n-1$, $c(G/A) = 1$, $c(G|_A) = n-1$. Hence \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{n}\omega_1^{-1}. \] Thus, the contribution of $A$ can be written as \begin{align} (-1)^{n}(y^2+y)[\omega_1^{n-2} + \sum_{\stackrel {H\in L(G/A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_1^{n-3}+\cdots ] \label{eq-B3} \end{align} since $\deg(P(G/A;\lambda))= n-1$ and $F(G|_A; \omega_2) = (1-\omega_2) + (1-\omega_2)^2 = y^2 +y$. \noindent\textbf{(b)} Each edge of $A$ is a trivial parallel class and $G[A]=C_3$. In this case, $n(G/A) =n-2$, $c(G/A) = 1$, $c(G|_A) = n-2$. Thus, \[(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)} = (-1)^{n}\omega_1^{-1}. \] Since $\deg(P(G/A; \lambda))= n-2$ and $F(G|_A; \omega_2) = -y$, the contribution of $A$ can be written as \begin{align} (-1)^{n}(-y)[\omega_1^{n-3} + \cdots ]. \label{eq-B4} \end{align} \noindent\textbf{Case 5} $|A| = k$ ($k\geq 4$). Since the degree of $x$ considered in Theorem \ref{mainII} is at least $n-3$, we only need to discuss the following subcases. \noindent\textbf{(a)} $A$ is a parallel class. Then the contribution of $A$ is \begin{align} &(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}\omega_1^{-c(G/A)}F(G|_A;\omega_2)P(G/A; \omega_1)\nonumber\\ =&(-1)^{(n-1)-1+k-n+(n-1)}F(D_k;\omega_2)[\omega_1^{n-2}+\sum_{\stackrel {H\in L(G/A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_1^{n-3}+\cdots]\nonumber\\ =&(-1)^{n+k-1}[(-1)^{k-1}\sum_{i=1}^{k-1}y^{i}][\omega_1^{n-2}+\sum_{\stackrel {H\in L(G/A)}{ rk(H)=1}}\mu(\widehat{0},H)\omega_1^{n-3}+\cdots]\nonumber\\ =& yx^{n-2} - [(n-2)+ \sum_{\stackrel {H\in L(G/A)}{ rk(H)=1}}\mu(\widehat{0},H)] yx^{n-3} +\cdots.\label{eq-B5} \end{align} \noindent\textbf{(b)} $G[A]=P_{k_1, k_2}$ or $D_{k_1}\cup D_{k_2}$ ($k_i\geq 2, k_1+k_2 = k$) and $G/A$ has no loops. Since the lowest degree in $y$ of the polynomial $F(G|_A;\omega_2)$ is 2, we need not consider such $A$'s.
\noindent\textbf{(c)} $G[A]=C_{k_1,k_2,k_3}$ ($k_i\geq 1, k_1+k_2+k_3=k$) and $G/A$ has no loops. Then the contribution of $A$ is \begin{align} &(-1)^{n(G/A)-c(G/A)+|A|-n+c(G|_A)}F(C_{k_1, k_2, k_3}; \omega_2)\omega_1^{-1}[\omega_1^{n-2}+\cdots]\nonumber\\ =&(-1)^{n-k+1}[(-1)^{k}(y + 3y^2 + \cdots)][\omega_1^{n-3}+\cdots]\nonumber\\ =& yx^{n-3} +\cdots\label{eq-B7} \end{align} since \begin{align*} &F(C_{k_1, k_2, k_3}; \omega_2) \\ =& (-1)^{k_1}\Big[\frac{(1-\omega_2)^{k_1}-1}{\omega_2}F(D_{k_2+k_3};\omega_2) + F(P_{k_2, k_3};\omega_2)\Big]\\ =& (-1)^{k_1}[(-1)\sum_{i=0}^{k_1-1}y^i(-1)^{k_2+k_3-1}\sum_{i=1}^{k_2+k_3-1}y^{i} + (-1)^{k_2+k_3}\sum_{i=1}^{k_2-1}y^{i}\sum_{i=1}^{k_3-1}y^{i}]\\ =& (-1)^{k}(y + 3y^2 + \cdots ). \\ \end{align*} Now let's insert (\ref{eq-B1})-(\ref{eq-B7}) into (\ref{eq-sum}); we obtain \begin{align*} &T(G; x, y)\nonumber\\ =&x^{n-1} + (-1)[(n-1) + \sum_{\stackrel {H\in L(G)}{rk(H)=1}}\mu(\widehat{0},H)]x^{n-2} \\ & + [{n-1 \choose 2} + (n-2)\sum_{\stackrel {H\in L(G)}{rk(H)=1}}\mu(\widehat{0},H) + \sum_{\stackrel {H\in L(G)}{rk(H)=2}}\mu(\widehat{0},H)]x^{n-3}\\ &+ \cdots \\ &+ \sum_{\stackrel {A \subseteq E }{A\ \text{is a nontrivial parallel class}}} yx^{n-2}\\ & + \Big\{\sum_{\stackrel {A \subseteq E }{A\ \text{is a nontrivial parallel class}}}[-(n-2)-\sum_{\stackrel {H\in L(G/A)}{rk(H)=1}}\mu(\widehat{0},H)]+ \sum_{\stackrel {A \subseteq E,G[A]=C_{k_1,k_2,k_3}} {G/A\ \text{is loopless}}}1 \Big\}yx^{n-3}\\ & + \cdots. \end{align*} Clearly, \begin{eqnarray*} \sum_{\stackrel {H\in L(G)}{rk(H)=1}} \mu(\widehat{0}, H)&=&-p(G),\\ \sum_{\stackrel {A \subseteq E,G[A]=C_{k_1,k_2,k_3}} {G/A\ \text{is loopless}}}1&=&\Delta(\tilde{G}). \end{eqnarray*} Recall that $\tilde{G}$ is the graph obtained from $G$ by replacing each parallel class by a single edge. Note that $rk(H)=2$ means $c(H)=n-2$, i.e. $H=P_3\cup E_{n-3}$ (with $\tilde{G}[V(P_3)]\neq C_3$), $P_2\cup P_2\cup E_{n-4}$ or $C_3\cup E_{n-3}$.
The former two have M\"{o}bius function value 1 and the third one has M\"{o}bius function value 2. Then \begin{eqnarray*} \sum_{\stackrel {H\in L(G)}{rk(H)=2}} \mu(\widehat{0}, H)=1\times[{p(G) \choose 2}-3\Delta(\tilde{G})]+2\times\Delta(\tilde{G})={p(G) \choose 2}-\Delta(\tilde{G}). \end{eqnarray*} This completes the proof of Theorem \ref{mainII}. \hfill\cvd \section{Discussions} \noindent In this section, we first discuss the duality of Theorems 1 and 2. Then we deduce the results on extreme coefficients of the Jones polynomials of alternating links \cite{Lin} and graphs \cite{DJIN}. Finally, we also deduce extreme coefficients of the chromatic and flow polynomials. \subsection{Duality} \noindent It is well known that if $G$ is a plane graph and $G^*$ is the dual graph of $G$, then \begin{eqnarray} T(G^*;x,y)=T(G;y,x). \end{eqnarray} Let $t^*_{i,j}$ be the coefficient of $x^iy^j$ in the Tutte polynomial $T(G^*;x,y)$, so that $t^*_{i,j}=t_{j,i}$. Loops and bridges, as well as deletion and contraction, interchange under taking the dual of a plane graph. The dual of a bridgeless and loopless connected plane graph is still a bridgeless and loopless connected plane graph. Let $G$ be a bridgeless and loopless connected plane graph and $G^*$ be the dual of $G$. Let $n^*$ and $m^*$ be the order and size of $G^*$, respectively. Then \begin{eqnarray*} m^*&=&m,\\ n^*&=&m-n+2,\\ s(G^*)&=&p(G),\\ p(G^*)&=&s(G),\\ s^*(G^*)&=&p^*(G),\\ p^*(G^*)&=&s^*(G),\\ \Theta(G^*)&=&\Delta(\tilde{G}),\\ \Delta(\tilde{G^*})&=&\Theta(G). \end{eqnarray*} Theorems \ref{mainI} and \ref{mainII} are dual to each other in the case that $G$ is a bridgeless and loopless connected plane graph. Now we check this as follows. \begin{itemize} \item[(1)] \begin{eqnarray*} t^*_{0,n-1}=t^*_{0,m^*-n^*+1}=1=t_{n-1,0}. \end{eqnarray*} \item[(2)] \begin{eqnarray*} t^*_{0,n-2}&=&t^*_{0,m^*-n^*}\\ &=&n^*+s(G^*)-m^*-1\\ &=&(m-n+2)+p(G)-m-1\\ &=&p(G)-n+1\\ &=&t_{n-2,0}.
\end{eqnarray*} \item[(3)] \begin{eqnarray*} t^*_{0,n-3}&=&t^*_{0,m^*-n^*-1}\\ &=&{m^*-n^*+1 \choose 2} - (m^*-n^*)s(G^*) + {s(G^*) \choose 2} - \Theta(G^*)\\ &=&{n-1 \choose 2} - (n-2)p(G) + {p(G) \choose 2} - \Delta(\tilde{G})\\ &=&t_{n-3,0}. \end{eqnarray*} \item[(4)] \begin{eqnarray*} t^*_{1,n-2}&=&t^*_{1,m^*-n^*}\\ &=&s^*(G^*)\\ &=&p^*(G)\\ &=&t_{n-2,1}. \end{eqnarray*} \item[(5)] \begin{eqnarray*} t^*_{1,n-3}&=&t^*_{1,m^*-n^*-1}\\ &=&-s^*(G^*)(m^*-n^*)+\sum_{\stackrel{A^*\subseteq E^*}{A^*\ \text{is a nontrivial series class} } }s(G^*-{A^*}) + \Theta(G^*)\\ &=& -p^*(G)(n-2)+\sum_{\stackrel{A\subseteq E}{ A\ \text{is a nontrivial parallel class} } }p(G/A) + \Delta(\tilde{G})\\ &=&t_{n-3,1}. \end{eqnarray*} \end{itemize} In addition, Theorems 1 and 2 may be generalized to the Tutte polynomials of matroids and in that setting the duality may be more obvious. \subsection{Jones polynomial} \noindent In \cite{DJIN}, Dong and Jin introduced the Jones polynomial of graphs. In the case of plane graphs, it (up to a factor) reduces to the Jones polynomial of the alternating link constructed from the plane graph via medial construction \cite{Th,BOLLOBAS}. We denote by $J_G(t)$ the Jones polynomial of $G$. Then \begin{eqnarray} J_G(t)=(-1)^{n-1}t^{m-n+1}T(-t,-t^{-1}). \end{eqnarray} Based on the work of Dasbach and Lin \cite{Lin}, Dong and Jin \cite{DJIN} further obtain: \begin{theorem}[\cite{DJIN}]\label{cor} Let $G=(V,E)$ be a connected bridgeless and loopless graph with order $n$ and size $m$. Then \[J_G(t)=b_0+b_1t+b_2t^2+...+b_{m-2}t^{m-2}+b_{m-1}t^{m-1}+b_{m}t^{m},\] where $(-1)^{m-i}b_i$ is a non-negative integer for $i=0,1,2,\cdots,m$ and in particular, \begin{eqnarray*} b_0&=&(-1)^{m},\\ b_1&=&(-1)^{m}[m-n+1-s(G)],\\ b_{m-2}&=&\binom{p(G)-n+2}{2}+p^*(G)-\Delta(\tilde{G}),\\ b_{m-1}&=&n-1-p(G),\\ b_{m}&=&1. 
\end{eqnarray*} \end{theorem} We can deduce Theorem \ref{cor} by using Theorem \ref{mainI} and Theorem \ref{mainII} and taking $x=-t,y=-t^{-1}$, and further obtain: \begin{corollary} \begin{eqnarray*} b_2=(-1)^{m}\left[\binom{s(G)-m+n}{2}+s^*(G)-\Theta(G)\right]. \end{eqnarray*} \end{corollary} \noindent\textbf{Proof.} \begin{eqnarray*} b_2&=&(-1)^{m}[t_{0,m-n-1}+t_{1,m-n}]\\ &=&(-1)^{m}\left[{m-n+1 \choose 2} - (m-n)s(G) + {s(G) \choose 2} - \Theta(G)+s^*(G)\right]\\ &=&(-1)^{m}\left[\binom{s(G)-m+n}{2}+s^*(G)- \Theta(G)\right]. \end{eqnarray*} \hfill\cvd In \cite{Kau}, Kauffman generalized the Tutte polynomials from graphs to signed graphs, which includes the Jones polynomial of both alternating and non-alternating links. It is worth studying extreme coefficients of the signed Tutte polynomial. \subsection{Chromatic and flow coefficients} \noindent \begin{theorem}[\cite{Read,Mer}] Let $G$ be a loopless and bridgeless connected graph of order $n$. Let $P(G;\lambda)=a_0\lambda^n+a_1\lambda^{n-1}+\cdots+a_{n-1}\lambda$. Then \begin{eqnarray*} a_0&=&1,\\ a_1&=&-p(G),\\ a_2&=&\binom{p(G)}{2}-\Delta(\tilde{G}). \end{eqnarray*} \end{theorem} By Eq. (\ref{ccc}), one can obtain $t_{n-1,0}, t_{n-2,0}$ and $t_{n-3,0}$ and vice versa. Theorem \ref{fm} may be known but we have not found it in the literature. By Eq. (\ref{eq-A1}), one can obtain it from $t_{0,m-n+1}, t_{0,m-n}$ and $t_{0,m-n-1}$. \begin{theorem}\label{fm} Let $G$ be a loopless and bridgeless connected graph of order $n$ and size $m$. Let $F(G;\lambda)=c_0\lambda^{m-n+1}+c_1\lambda^{m-n}+\cdots+c_{m-n+1}$. Then \begin{eqnarray*} c_0&=&1,\\ c_1&=&-s(G),\\ c_2&=&\binom{s(G)}{2}-\Theta(G). \end{eqnarray*} \end{theorem} \section*{Acknowledgements} \noindent This work is supported by NSFC (No. 11271307,11671336) and President's Funds of Xiamen University (No. 20720160011). We thank the anonymous referee and A/P Fengming Dong for some helpful comments. \section*{References}
\section{Introduction} \label{sec:intro} In recent years, Deep Neural Networks (DNNs) have been successfully applied to Automatic Speech Recognition (ASR) for many well-resourced languages including Mandarin and English \cite{amodei2016deep, xiong2016achieving}. However, only a small fraction of languages have clean labeled speech corpora. As a result, there is an increasing interest in building speech recognition systems for low-resource languages. To address this issue, researchers have successfully exploited multilingual speech recognition models by taking advantage of labeled corpora in other languages \cite{huang2013cross, heigold2013multilingual}. Multilingual speech recognition enables acoustic models to share parameters across multiple languages, so that low-resource acoustic models can benefit from rich resources. While prior work on low-resource multilingual recognition has proposed various acoustic models, it tends to combine several low-resource corpora without paying attention to the characteristics of the corpora themselves. One common training approach is to first pretrain a multilingual model by combining all training corpora; the pretrained model is then fine-tuned on the target corpus \cite{dalmia2018sequence}. During the training process, each corpus in the training set is treated equally and sampled uniformly. We argue, however, that this approach does not take into account the characteristics of each corpus, and therefore fails to take advantage of the relations between corpora. For example, a conversational corpus might be more beneficial to another conversational corpus than an audiobook corpus would be. In this work, we propose an effective sampling strategy (Corpus Relatedness Sampling) to take advantage of relations among corpora. First, we introduce the corpus-level embedding, which can be used to compute the similarity between corpora. The embedding can be estimated by being jointly trained with the acoustic model.
Next, we compute the similarity between each corpus and the target corpus; this similarity is then used to optimize the model with respect to the target corpus. During the training process, we start by uniformly sampling from each corpus; the sampling distribution is then gradually updated so that more related corpora are sampled more frequently. Eventually, only the target corpus is sampled from the training set, since the target corpus is the most related corpus to itself. While our approach differs from the pretrained model and the fine-tuned model, we can prove that those models are special cases of our sampling strategy. To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora, achieving a 1.6\% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows: \begin{enumerate} \item We propose a corpus-level embedding which can capture the language and domain information of each corpus. \item We introduce the Corpus Relatedness Sampling strategy to train multilingual models. It outperforms the pretrained model and fine-tuned model on all of our test corpora. \end{enumerate} \section{Related Work} Multilingual speech recognition has explored various models to share parameters across languages in different ways. For example, parameters can be shared by using posterior features from other languages \cite{stolcke2006cross}, applying the same GMM components across different HMM states \cite{burget2010multilingual}, training shared hidden layers in DNNs \cite{huang2013cross, heigold2013multilingual} or LSTM \cite{dalmia2018sequence}, or using language-independent bottleneck features \cite{vesely2012language, dalmia2018domain}.
Some models only share their hidden layers, but use separate output layers to predict their phones \cite{huang2013cross, heigold2013multilingual}. Other models have only one shared output layer to predict the universal phone set shared by all languages \cite{schultz2001language, tong2017investigation, vu2013multilingual}. While those works proposed the multilingual models in different ways, few of them have explicitly exploited the relatedness across various languages and corpora. In contrast, our work computes the relatedness between different corpora using the embedding representations and exploits them efficiently. The embedding representations have been heavily used in multiple fields. In particular, embeddings of multiple granularities have been explored in many NLP tasks. To name a few, character embedding \cite{kim2016character}, subword embedding \cite{bojanowski2017enriching}, sentence embedding \cite{kiros2015skip} and document embedding \cite{le2014distributed}. However, there are few works exploring the corpus level embeddings. The main reason is that the number of corpora involved in most experiments is usually limited and it is not useful to compute corpus embeddings. The only exception is the multitask learning where many tasks and corpora are combined together. For instance, the language level (corpus level) embedding can be generated along with the model in machine translation \cite{johnson2017google} and speech recognition \cite{li2018multi}. However, those embeddings are only used as an auxiliary feature to the model, few works continue to exploit those embeddings themselves. Another important aspect of our work is that we focused on the sampling strategy for speech recognition. While most of the previous speech works mainly emphasized the acoustic modeling side, there are also some attempts focusing on the sampling strategies. 
For instance, curriculum learning trains the acoustic model by starting from easy training samples and gradually adapting it to more difficult ones \cite{amodei2016deep, braun2017curriculum}. Active learning aims to minimize the human cost of collecting transcribed speech data \cite{riccardi2005active}. Furthermore, sampling strategies can also be helpful to speed up the training process \cite{cui2015multilingual}. However, the goal of most strategies is to improve the acoustic model by modifying the sampling distribution within a single speech corpus for a single language. In contrast, our approach aims to optimize the multilingual acoustic model by modifying distributions across all the training corpora. \section{Approach} In this section, we describe our approach to computing corpus embeddings and our Corpus Relatedness Sampling strategy. \begin{table*}[t] \begin{center} \caption{The collection of training corpora used in the experiment. Both the baseline model and the proposed model are trained and tested with 16 corpora across 10 languages.
We assign a corpus ID to each corpus after its corpus name so that we can distinguish the corpora sharing the same language.} \label{tab:corpus} \begin{tabular}{c c c c | c c c c} \toprule {\bf Language} & {\bf Corpus Name} & {\bf Domain} & {\bf Utterance} & {\bf Language } & {\bf Corpus Name } & {\bf Domain} & {\bf Utterance}\\ \midrule English & TED (ted) \cite{rousseau2012ted} & broadcast & 100,000 & Mandarin & Hkust (hk) \cite{liu2006hkust} & telephone & 100,000 \\ English & Switchboard (swbd) \cite{godfrey1992switchboard} & telephone & 100,000 & Mandarin & SLR18 (s18) \cite{THCHS30_2015} & read & 13,388 \\ English & Librispeech (libri) \cite{panayotov2015librispeech} & read & 100,000 & Mandarin & LDC98S73 (hub) & broadcast & 35,999 \\ English & Fisher (fisher) \cite{cieri2004fisher} & telephone & 100,000 & Mandarin & SLR47 (s47) \cite{primewords_201801} & read & 50,384 \\ \midrule Amharic & LDC2014S06 (babel) & telephone & 41,403 & Bengali & LDC2016S08 (babel) & telephone & 60,663 \\ Dutch & voxforge (vox) & read & 8,492 & German & voxforge (vox) & read & 41,146 \\ Spanish & LDC98S74 (hub) & broadcast & 31,615 & Swahili & LDC2017S05 (babel) & telephone & 44,502 \\ Turkish & LDC2012S06 (hub) \cite{saraclar2012turkish} & broadcast & 97,427 & Zulu & LDC2017S19 (babel) & telephone & 60,835 \\ \bottomrule \end{tabular} \end{center} \end{table*} \subsection{Corpus Embedding} Suppose that $\mathcal{C}_t$ is the target low-resource corpus; we are interested in optimizing the acoustic model with a much larger set of training corpora $\mathcal{S} = \{ \mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_n \}$, where $n$ is the number of corpora and $\mathcal{C}_t \in \mathcal{S}$. Each corpus $\mathcal{C}_i$ is a collection of $(\mathbf{x}, \mathbf{y})$ pairs, where $\mathbf{x}$ denotes the input features and $\mathbf{y}$ the corresponding target.
\begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{arch.png} \caption{The acoustic model to optimize corpus embeddings.} \label{fig:arch} \end{figure} Our purpose here is to compute the embedding $\mathbf{e}_i$ for each corpus $\mathcal{C}_i$ where $\mathbf{e}_i$ is expected to encode information about its corpus $\mathcal{C}_i$. Those embeddings can be jointly trained with the standard multilingual model \cite{dalmia2018sequence}. First, the embedding matrix $E$ for all corpora is initialized, the $i$-th row of $E$ is corresponding to the embedding $\mathbf{e}_i$ of the corpus $\mathcal{C}_i$. Next, during the training phase, $\mathbf{e}_i$ can be used to bias the input feature $\mathbf{x}$ as follows. \begin{align} \mathbf{h} = \mathrm{Encoder}(\mathbf{x}+\mathbf{e}_i; W, E) \end{align} where $(\mathbf{x}, \mathbf{y}) \in \mathcal{C}_i$ is an utterance sampled randomly from $\mathcal{S}$, $\mathbf{h}$ is its hidden features, $W$ is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure.\ref{fig:arch}. Next, we apply the language specific softmax to compute logits $\mathbf{l}$ and optimize them with the CTC objective \cite{graves2006connectionist}. The embedding matrix $E$ can be optimized together with the model during the training process. \subsection{Corpus Relatedness Sampling} With the embedding $\mathbf{e}_i$ of each corpus $\mathcal{C}_i$, we can compute the similarity score between any two corpora using the cosine similarity. \begin{align} score(\mathcal{C}_i, \mathcal{C}_j) = \frac{\mathbf{e}_i \cdot \mathbf{e}_j}{ | \mathbf{e}_i | |\mathbf{e}_j|} \end{align} As the similarity reflects the relatedness between corpora in the training set, we would like to sample the training set based on this similarity: those corpora which have a higher similarity with the target corpus $\mathcal{C}_t$ should be sampled more frequently. 
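The two ingredients above (biasing the encoder input with a corpus embedding, and scoring corpora against the target by cosine similarity) can be illustrated with a minimal NumPy sketch. The embedding values here are made up for the example; in the paper the embedding matrix $E$ is learned jointly with the CTC acoustic model.

```python
import numpy as np

# Hypothetical embedding matrix E: one row e_i per corpus (values are illustrative only).
E = np.array([[1.0, 0.0, 0.0, 0.0],   # corpus 0 (the target C_t)
              [0.8, 0.6, 0.0, 0.0],   # corpus 1
              [0.0, 0.0, 1.0, 0.0]])  # corpus 2

def bias_input(x, i):
    """Add the corpus embedding e_i to every frame of an utterance x of shape (T, dim)."""
    return x + E[i]

def similarity(i, t):
    """Cosine similarity score(C_i, C_t) between corpora i and t."""
    return E[i] @ E[t] / (np.linalg.norm(E[i]) * np.linalg.norm(E[t]))

x = np.zeros((5, 4))                  # a dummy 5-frame utterance
h_in = bias_input(x, 1)               # every frame is shifted by e_1 before the encoder
scores = [similarity(i, 0) for i in range(3)]
# the target corpus always has similarity 1.0 with itself
```

Note that $score(\mathcal{C}_t, \mathcal{C}_t) = 1.0$ by construction, which is what later lets the temperature-controlled sampling collapse onto the target corpus.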
Therefore, we assume those similarity scores to be the sampling logits and they should be normalized with softmax. \begin{align} Pr(\mathcal{C}_i) = \frac{\exp\big(T \cdot score(\mathcal{C}_i, \mathcal{C}_t)\big)}{\sum_j{\exp(T \cdot score(\mathcal{C}_j, \mathcal{C}_t))}} \end{align} where $Pr(\mathcal{C}_i)$ is the probability to sample $\mathcal{C}_i$ from $\mathcal{S}$, and $T$ is the temperature to normalize the distribution during the training phase. We argue that different temperatures could create different training conditions. The model with a lower temperature tends to sample each corpus equally like uniform sampling. In contrast, a higher temperature means that the sampling distribution should be biased toward the target corpus like the fine-tuning. Next, we prove that both the pretrained model and the fine-tuned model can be realized with specific temperatures. In the case of the pretrained model, each corpus should be sampled equally. This can be implemented by setting $T$ to be $0$. \begin{align} Pr(\mathcal{C}_i) = \lim_{T \to 0} \frac{\exp\big(T \cdot score(\mathcal{C}_i, \mathcal{C}_t)\big)}{\sum_j{\exp(T \cdot score(\mathcal{C}_j, \mathcal{C}_t))}} = \frac{1}{n} \end{align} On the other hand, the fine-tuned model should only consider samples from the target corpus $\mathcal{C}_t$ , while ignoring all other corpora. We argue that this condition can be approximated by setting $T$ to a very large number. 
As $score(\mathcal{C}_t, \mathcal{C}_t) = 1.0$ and $score(\mathcal{C}_i, \mathcal{C}_t) < 1.0$ if $i \neq t$, we can prove the statement as follows: \begin{equation} \begin{split} Pr(\mathcal{C}_i) & = \lim_{T \to \infty} \frac{\exp\big(T \cdot score(\mathcal{C}_i, \mathcal{C}_t)\big)}{\sum_j{\exp(T \cdot score(\mathcal{C}_j, \mathcal{C}_t))}} \\ & = \begin{cases} 1.0 & \text{if}\ \mathcal{C}_i = \mathcal{C}_t \\ 0.0 & \text{if}\ \mathcal{C}_i \neq \mathcal{C}_t \end{cases} \end{split} \end{equation} While both the pretrained model and the fine-tuned model are special cases of our approach, our approach is more flexible: it can sample from related corpora by interpolating between those two extreme temperatures. In practice, we would like to start with a low temperature to sample broadly in the early training phase. Then we gradually increase the temperature so that it can focus more on the related corpora. Eventually, the temperature becomes high enough that the model is automatically fine-tuned on the target corpus. Specifically, in our experiment, we start training with a very low temperature $T_0$, and increase its value every epoch $k$ as follows. \begin{align} T_{k+1} = a T_{k} \end{align} where $T_k$ is the temperature of epoch $k$ and $a$ is a hyperparameter to control the growth rate of the temperature. \section{Experiments} To demonstrate that our sampling approach could improve the multilingual model, we conduct experiments on 16 corpora to compare our approach with the pretrained model and fine-tuned model. \subsection{Datasets} We first describe our corpus collection. Table.\ref{tab:corpus} lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpora, we selected 4 English corpora and 4 Mandarin corpora in addition to the low-resource language corpora.
As the target of this experiment is low-resource speech recognition, we randomly select at most 100,000 utterances from each corpus. All corpora are available from LDC, voxforge, openSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are \textit{telephone}, \textit{read} and \textit{broadcast}. \subsection{Experiment Settings} We use EESEN \cite{miao2015eesen} for the acoustic modeling and epitran \cite{mortensen2018epitran} as the g2p tool in this work. Every utterance in the corpora is first resampled to 8000 Hz, and then we extract 40-dimensional MFCC features from each audio file. We use a recent multilingual CTC model as our acoustic architecture \cite{dalmia2018sequence}: a 6-layer bidirectional LSTM model with 320 cells in each layer. We use this architecture for both the baseline models and the proposed model. Our baseline model is the fine-tuned model: we first pretrain a model by uniformly sampling from all corpora. After the loss converges, we fine-tune the model on each of our target corpora. To compare it with our sampling approach, we first train an acoustic model to compute the embeddings of all corpora; the embeddings are then used to estimate the similarities as described in the previous section. The initial temperature $T_0$ is set to 0.01, and the growth rate is $1.5$. We evaluate all models using the phone error rate (PER) instead of the word error rate (WER). The reason is that we mainly focus on the acoustic model in this experiment. Additionally, some corpora (e.g., Dutch voxforge) in this experiment contain very little text, so it is difficult to create a reasonable language model without augmenting the text using other corpora, which is beyond the scope of this work.
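To make the sampling procedure concrete, the following sketch computes $Pr(\mathcal{C}_i)$ at a given temperature and applies the geometric schedule $T_{k+1} = a T_k$ with the settings above ($T_0 = 0.01$, $a = 1.5$). The similarity scores are illustrative placeholders, not values from the paper.

```python
import numpy as np

def crs_probabilities(scores, T):
    """Pr(C_i) = softmax over T * score(C_i, C_t)."""
    logits = T * np.asarray(scores, dtype=float)
    logits -= logits.max()            # numerical stability; cancels out in the softmax
    p = np.exp(logits)
    return p / p.sum()

scores = [1.0, 0.7, 0.3]              # score(C_t, C_t) = 1.0 for the target itself

p_uniform = crs_probabilities(scores, T=0.0)     # T = 0: uniform sampling (pretraining)
p_focused = crs_probabilities(scores, T=1000.0)  # large T: (almost) only the target (fine-tuning)

# geometric temperature schedule T_{k+1} = a * T_k
T, a = 0.01, 1.5
schedule = []
for epoch in range(20):
    schedule.append(T)
    T *= a
```

With these settings the temperature first exceeds 1 after about a dozen epochs, so sampling drifts from near-uniform toward the target corpus over the course of training.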
\subsection{Results} \begin{table}[h] \begin{center} \caption{Phone error rate (\%PER) of the pretrained model, the baseline model (fine-tuned model) and our CRS (Corpus Relatedness Sampling) approach on all 16 corpora} \begin{tabular}{l c c c} \toprule {\bf Corpus} & {\bf Pretrain PER} & {\bf Base PER} & {\bf CRS PER} \\ \midrule English (ted) & 19.2 & 11.6 & 10.3 \\ English (swbd) & 24.9 & 15.3 & 14.1 \\ English (libri) & 12.1 & 6.5 & 5.4 \\ English (fisher) & 34.7 & 23.5 & 22.5 \\ Mandarin (hk) & 32.6 & 16.1 & 14.5 \\ Mandarin (s18) & 8.5 & 6.6 & 5.6 \\ Mandarin (hub) & 10.2 & 5.1 & 4.4 \\ Mandarin (s47) & 13.0 & 10.9 & 9.2 \\ \midrule Amharic (babel) & 45.4 & 40.7 & 36.9 \\ Bengali (babel) & 47.0 & 41.9 & 40.0 \\ Dutch (vox) & 27.3 & 21.6 & 18.2 \\ German (vox) & 23.1 & 12.9 & 10.3 \\ Swahili (babel) & 17.7 & 16.1 & 14.5 \\ Spanish (hub) & 14.0 & 8.4 & 7.5 \\ Turkish (hub) & 49.2 & 45.6 & 44.9 \\ Zulu (babel) & 47.8 & 38.5 & 37.9 \\ \midrule Average & 26.7 & 20.1 & \bf{18.5} \\ \bottomrule \end{tabular} \label{table:result} \end{center} \end{table} Table.\ref{table:result} shows the results of our evaluation. We compare our approach with the baselines on all corpora. The left-most column of Table.\ref{table:result} shows the corpus used for each experiment; the remaining columns give the phone error rates of the pretrained model, the fine-tuned model and our proposed model. First, we can easily confirm that the fine-tuned model outperforms the pretrained model on all corpora. For instance, the fine-tuned model outperforms the pretrained model by 4.7\% on the Amharic corpus. The result is reasonable, as the pretrained model is optimized with the entire training set, while the fine-tuned model is further adapted to the target corpus. Next, the table shows that our Corpus Relatedness Sampling approach achieves better results than the fine-tuned model on all test corpora.
For instance, the phone error rate is improved from 40.7\% to 36.9\% on Amharic and from 41.9\% to 40.0\% on Bengali. On average, our approach outperforms the fine-tuned model by 1.6\% phone error rate. The results demonstrate that our sampling approach is more effective at optimizing the acoustic model on the target corpus. We also trained baseline models by appending corpus embeddings to the input features, but the proposed model outperforms those baselines by similar margins. One interesting trend we observed in the table is that the improvements differ across the target corpora. For instance, the improvement on the Dutch corpus is 3.4\%, whereas the improvement on the Zulu dataset is a relatively small 0.6\%. We believe the difference in improvements can be explained by the size of each corpus. The Dutch corpus is very small, as shown in Table.\ref{tab:corpus}, so the fine-tuned model is prone to overfitting to it very quickly. In contrast, a larger corpus is less likely to be overfit. Compared with the fine-tuned model, our approach optimizes the model by gradually changing the temperature, without quick overfitting. This mechanism can be interpreted as a built-in regularization. As a result, our model achieves much better performance on small corpora by preventing overfitting. \begin{table}[h] \begin{center} \caption{The similarity between training corpora.
For each target corpus, we show its most related corpus and its second most related corpus.} \begin{tabular}{l | l | l} \toprule {\bf Target} & {\bf 1st Related Corpus} & {\bf 2nd Related Corpus}\\ \midrule English (ted) & English (libri) & Turkish (hub) \\ English (swbd) & English (fisher) & Mandarin (hk) \\ English (libri) & English (ted) & German (vox) \\ English (fisher) & English (swbd) & Mandarin (hk) \\ Mandarin (hk) & Bengali (babel) & English (swbd) \\ Mandarin (s18) & Mandarin (s47) & Mandarin (hub) \\ Mandarin (s47) & Mandarin (s18) & Dutch (vox) \\ Mandarin (hub) & Spanish (hub) & Turkish (hub) \\ \midrule Amharic (babel) & Bengali (babel) & Swahili (babel) \\ Bengali (babel) & Mandarin (hk) & Amharic (babel) \\ Dutch (vox) & Mandarin (s47) & Mandarin (s18) \\ German (vox) & English (libri) & Mandarin (s47) \\ Swahili (babel) & Zulu (babel) & Amharic (babel) \\ Spanish (hub) & Mandarin (hub) & Turkish (hub) \\ Turkish (hub) & English (libri) & Mandarin (hub) \\ Zulu (babel) & Swahili (babel) & Amharic (babel) \\ \bottomrule \end{tabular} \label{table:relation} \end{center} \end{table} To understand how our corpus embeddings contribute to our approach, we rank those embeddings and show the top-2 similar corpora for each corpus in Table.\ref{table:relation}. We note that the target corpus itself is removed from the rankings because it is the most related corpus to itself. The results of the top half show very clearly that our embeddings can capture the language-level information: for most English and Mandarin corpora, the most related corpus is another English or Mandarin corpus. Additionally, the bottom half of the table indicates that our embeddings are able to capture domain-level information as well. For instance, the top 2 related corpora for Amharic are Bengali and Swahili. According to Table.\ref{tab:corpus}, all three corpora belong to the \textit{telephone} domain.
In addition, Dutch is a \textit{read} corpus, its top 2 related corpora are also from the same domain. This also explains why the 1st related corpus of Mandarin (hk) is Bengali: because both of them are from the same \textit{telephone} domain. \begin{figure} \centering \begin{tikzpicture} \begin{axis}[% scatter/classes={% telephone={mark=square*,blue},% record={mark=triangle*,red},% broadcast={mark=o,draw=black}}, xmin=-100,xmax=125,ymin=-100,ymax=125] \addplot[scatter,only marks,% scatter src=explicit symbolic]% table[meta=label] { x y label -30.31571388244629 -32.897972106933594 telephone 2.3990633487701416 31.22985076904297 record 11.002802848815918 -73.944580078125 telephone 51.9562873840332 -47.479148864746094 record -24.818572998046875 -9.482719421386719 telephone -9.081963539123535 -54.546974182128906 telephone -21.057504653930664 51.35031509399414 record -55.622982025146484 -38.70508575439453 telephone 4.542168617248535 60.26102828979492 broadcast 77.91976165771484 7.000478744506836 record 68.43834686279297 -18.6563777923584 record -49.72146224975586 -10.975892066955566 telephone 52.388553619384766 12.837336540222168 record 1.0258090496063232 3.9491748809814453 record -3.7222859859466553 -27.329965591430664 record 39.575313568115234 -20.297710418701172 telephone -22.801538467407227 14.41296100616455 telephone -78.33529663085938 -25.326885223388672 telephone 14.079936027526855 -15.207659721374512 record 19.753564834594727 -44.717838287353516 telephone 42.81153869628906 -71.23609924316406 telephone 48.969696044921875 72.3545913696289 broadcast 83.62925720214844 38.655426025390625 record -49.41233825683594 21.66193199157715 telephone 26.44234848022461 7.403919219970703 record -73.8827896118164 6.017458438873291 telephone -17.529891967773438 -77.6971664428711 telephone -39.04698181152344 -57.32247543334961 telephone 29.086034774780273 52.09471893310547 record 32.02229309082031 29.5660457611084 record 57.19882583618164 47.49128723144531 broadcast 
26.665611267089844 94.22811126708984 record 4.1577959060668945 85.670166015625 record -22.751380920410156 83.13924407958984 broadcast -53.254608154296875 55.57710647583008 telephone }; \legend{telephone, read, broadcast} \end{axis} \end{tikzpicture} \caption{Domain plot of 36 corpora, the corpus embeddings are reduced to 2 dimensions by t-SNE} \label{fig:domain} \end{figure} To further investigate the domain information contained in the corpus embeddings, we train the corpus embeddings with an even larger corpora collection (36 corpora) and plot all of them in Figure.\ref{fig:domain}. To create the plot, the dimension of each corpus embedding is reduced to 2 with t-SNE \cite{maaten2008visualizing}. The figure demonstrates clearly that our corpus embeddings are capable of capturing the domain information: all corpora with the same domain are clustered together. This result also means that our approach improves the model by sampling more frequently from the corpora of the same speech domain. \section{Conclusion} \label{sec:conclusion} In this work, we propose an approach to compute corpus-level embeddings. We also introduce Corpus Relatedness Sampling approach to train multilingual speech recognition models based on those corpus embeddings. Our experiment shows that our approach outperforms the fine-tuned multilingual models in all 16 test corpora by 1.6 phone error rate on average. Additionally, we demonstrate that our corpus embeddings can capture both language and domain information of each corpus. \section{Acknowledgements} This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Coalescing binary neutron stars are among the most promising sources of gravitational waves for detection by interferometers such as LIGO and VIRGO \cite{LIGO92,thorne92}. Recent studies \cite{how-many} suggest that binary inspiral due to gravitational radiation reaction, and the eventual coalescence of the component stars, may be detectable by these instruments at a rate of several per year. The inspiral phase comprises the last several thousand binary orbits and covers the frequency range $f\sim 10$--$1000 {\rm Hz}$, where the broad-band interferometers are most sensitive. During this stage, the separation of the stars is much larger than their radii and the gravitational radiation can be calculated quite accurately using post-Newtonian expansions in the point-mass limit \cite{post-Newt}. It is expected that the inspiral waveform will reveal the masses and spins of the neutron stars, as well as the orbital parameters of the binary systems \cite{thorne92,cutler93,CF94}. When the binary separation is comparable to the neutron star radius, hydrodynamic effects become dominant and coalescence takes place within a few orbits. The coalescence regime probably lies at or beyond the upper end of the frequency range accessible to broad-band detectors, but it may be observed using specially designed narrow band interferometers \cite{narrow} or resonant detectors \cite{resonant}. Such observations may yield valuable information about neutron star radii, and thereby the nuclear equation of state \cite{cutler93,KLT94,lindblom92}. Three-dimensional numerical simulations are needed to calculate the detailed hydrodynamical evolution of the system during coalescence. 
Rather than dwell on the uncertain details of the physics of neutron-star interiors, most studies of this problem have opted simply to model the neutron stars as polytropes with equation of state \begin{equation} P = K \rho^{\Gamma} = K \rho^{1 + 1/n}, \label{polytrope} \end{equation} where $K$ is a constant that measures the specific entropy of the material and $n$ is the polytropic index. A choice of $n = 1\ (\Gamma = 2)$ mimics a fairly stiff nuclear equation of state. Shibata, Nakamura, \& Oohara \cite{SNO,ON} have studied the behavior of binaries with both synchronously rotating and non-rotating stars, using an Eulerian code with gravitational radiation reaction included. Rasio \& Shapiro \cite{RS92,RS94} have simulated the coalescence of synchronously rotating neutron-star binaries using the Lagrangian smooth particle hydrodynamics (SPH) method with purely Newtonian gravity. Recently, Davies et al. \cite{MBD93} have carried out SPH simulations of the inspiral and coalescence of nonsynchronously rotating neutron stars, focusing on the thermodynamics and nuclear physics of the coalescence, with particular application to gamma ray bursts. All of these studies use the quadrupole formula to calculate the gravitational radiation emitted. Stars in a synchronous binary rotate in the same sense as their orbital motion, with spin angular velocity equal to the orbital angular velocity, as seen from a non-rotating frame. In most close binary systems (for example, those with normal main-sequence components) viscosity acts to spin up initially non-rotating stars, causing them to come into a state of synchronous rotation in a relatively short period of time \cite{zahn}. However, realistic neutron star viscosities are expected to be quite small, and recent work suggests \cite{KBC} that the timescale for synchronization of neutron star binaries is generally much longer than the timescale for orbital decay and inspiral due to the emission of gravitational waves. 
Thus neutron star binaries are generally {\it not} expected to become synchronous as they evolve toward coalescence. As a complement to full 3-D hydrodynamical simulations, Lai, Rasio, \& Shapiro \cite{LRS,LRS93} have used quasi-equilibrium methods to focus on the last 10 or so orbits before the surfaces of the neutron stars come into contact. During this time, as tidal effects grow, the neutron stars are modeled as triaxial ellipsoids inspiraling on a sequence of quasi-static circular orbits. Using an approximate energy variational method these authors have modeled both synchronous and non-synchronous binaries. They find that, for sufficiently incompressible polytropes ($n < 1.2$), the system undergoes a dynamical instability which can significantly accelerate the secular orbital inspiral driven by radiation reaction. (This instability is driven by Newtonian hydrodynamics; see \cite{KWW93} for the case of an unstable plunge driven by strong spacetime curvature.) They have calculated the evolution of binaries as they approach the stability limit and the orbital decay changes from secular to dynamical in character, and have investigated the resulting gravitational wave emission \cite{LRS}. Their results provide an important component in understanding the behavior of the full 3-D hydrodynamical models. We have carried out 3-D simulations of the coalescence of non-rotating neutron stars using SPH, with particular application to the resulting gravitational wave energy spectrum $dE/df$. Our initial conditions consist of identical spherical polytropes of mass $M$ and radius $R$ on circular orbits with separations sufficiently large that tidal effects are negligible. The stars thus start out effectively in the point-mass regime. The gravitational field is purely Newtonian, with gravitational radiation calculated using the quadrupole approximation. 
To cause the stars to spiral in, we mimic the effects of gravitational radiation reaction by introducing a frictional term into the equations of motion to remove orbital energy at the rate given by the equivalent point-mass inspiral. As the neutron stars get closer together the tidal distortions grow and eventually dominate, and coalescence quickly follows. The resulting gravitational waveforms match smoothly onto the point-mass inspiral waveforms, facilitating analysis in the frequency domain. We focus on examining the effects of changing $R$ and the polytropic index $n$ on the gravitational wave energy spectrum $dE/df$. This paper is organized as follows. In Sec.~\ref{num-tech} we present a brief description of the numerical techniques we used to do the simulations. Sec.~\ref{grav-wave} discusses the calculation of the gravitational wave quantities, including the spectrum $dE/df$. The use of a frictional term in the equations of motion to model the inspiral by gravitational radiation reaction is discussed in Sec.~\ref{fric-term}, and the initial conditions are given in Sec.~\ref{init-cond}. The results of binary inspiral and coalescence for a standard run with $M = 1.4 {\rm M}_{\odot}$, $R = 10 {\rm km}$, and polytropic index $n = 1$ ($\Gamma = 2$) are given in Sec.~\ref{inspiral}, with the frequency analysis and the spectrum $dE/df$ presented in Sec.~\ref{freq}. Sec.~\ref{new-param} presents the results of varying the neutron star radius $R$ and the polytropic index $n$. Finally, Sec.~\ref{summary} contains a summary of our results. \section{Numerical Techniques} \label{num-tech} Lagrangian methods such as SPH \cite{SPH} are especially attractive for modeling neutron-star coalescence since the computational resources can be concentrated where the mass is located instead of being spread over a grid that is mostly empty. We have used the implementation of SPH by Hernquist \& Katz \cite{HK} known as TREESPH. 
In this code, the fluid is discretized into particles of finite extent described by a smoothing kernel. The use of variable smoothing lengths and individual particle timesteps makes the program adaptive in both space and time. Gravitational forces in TREESPH are calculated using a hierarchical tree method \cite{tree} optimized for vector computers. In this method, the particles are first organized into a nested hierarchy of cells, and the mass multipole moments of each cell up to a fixed order, usually quadrupole, are calculated. To compute the gravitational acceleration, each particle interacts with different levels of the hierarchy in different ways. The force due to neighboring particles is computed by directly summing the two-body interactions. The influence of more distant particles is taken into account by including the multipole expansions of the cells which satisfy the accuracy criterion at the location of each particle. In general, the number of terms in the multipole expansions is small compared to the number of particles in the corresponding cells. This leads to a significant gain in efficiency and allows the use of larger numbers of particles than would be possible with methods that simply sum over all possible pairs of particles. TREESPH uses artificial viscosity to handle the shocks that develop when stars collide and coalesce. The code contains three choices for the artificial viscosity; we have chosen to use the version modified by the curl of the velocity field. This prescription reduces the amount of artificial viscosity used in the presence of curl, and has proved to be superior to the other options in tests of head-on collisions of neutron stars \cite{CM} and global rotational instability \cite{bars}. 
Since this has already been discussed in the literature, we remark only that the artificial viscosity consists of two terms, one that is linear in the particle velocities (with user-specified coefficient $\alpha$) and another that is quadratic in the velocities (with coefficient $\beta$), and refer the interested reader to references \cite{HK} and \cite{CM} [see especially their equations (3), (5), and (6)] for details. As has been noted above, the neutron stars are not expected to be synchronously rotating due to their very small physical viscosity. However in computer simulations numerical viscosity, either present in the method itself or added explicitly as artificial viscosity, can have a similar effect and cause the stars to spin up. We have monitored this effect in our simulations and have found it to be small; see Sec.~\ref{inspiral} below. \section{Calculation of Gravitational Radiation} \label{grav-wave} The gravitational radiation in our simulations is calculated in the quadrupole approximation, which is valid for nearly Newtonian sources \cite{MTW}. The gravitational waveforms are the transverse-traceless (TT) components of the metric perturbation $h_{ij}$, \begin{equation} h_{ij}^{TT} = {\frac{G}{c^4}}\, {\frac{2}{r}} {\skew6\ddot{I\mkern-6.8mu\raise0.3ex\hbox{-}}}_{ij}\,^{TT}, \label{hij-TT} \end{equation} where \begin{equation} {{I\mkern-6.8mu\raise0.3ex\hbox{-}}}_{ij} = \int\rho \,(x_i x_j - {\textstyle{\frac{1}{3}}} \delta_{ij} r^2) \:d^3 r \label{Iij} \end{equation} is the reduced ({\it i.e.} traceless) quadrupole moment of the source and a dot indicates a time derivative $d/dt$. Here spatial indices $i,j=1,2,3$ and the distance to the source $r=(x^2 + y^2 + z^2)^{1/2}$. In an orthonormal spherical coordinate system $(r,\theta,\phi)$ with the center of mass of the source located at the origin, the TT part of ${I\mkern-6.8mu\raise0.3ex\hbox{-}}_{ij}$ has only four non-vanishing components. 
Expressed in terms of Cartesian components these are ({\it cf.} \cite{KSTC}) \begin{eqnarray} {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{\theta\theta}&=& ({I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xx}\cos^2\phi+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{yy}\sin^2\phi+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xy}\sin 2\phi)\cos^2\theta \nonumber\\&&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{zz}\sin^2\theta - ({I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xz}\cos\phi+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{yz}\sin\phi)\sin 2\theta, \nonumber\\ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{\phi\phi}&=& {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xx}\sin^2\phi+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{yy}\cos^2\phi -{I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xy}\sin 2\phi, \label{Ispher}\\ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{\theta\phi}&=& {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{\phi\theta} \nonumber\\ &=& \textstyle{\frac{1}{2}}({I\mkern-6.8mu\raise0.3ex\hbox{-}}_{yy}- {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xx})\cos\theta\sin 2\phi\nonumber\\&& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xy}\cos\theta\cos 2\phi + ({I\mkern-6.8mu\raise0.3ex\hbox{-}}_{xz}\sin\phi - {I\mkern-6.8mu\raise0.3ex\hbox{-}}_{yz}\cos\phi)\sin\theta.\nonumber \end{eqnarray} The wave amplitudes for the two polarizations are then given by \begin{eqnarray} h_+&=&\frac{G}{c^4}\frac{1}{r}( {\skew6\ddot{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{\theta\theta}- {\skew6\ddot{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{\phi\phi}), \label{hplus}\\ h_\times &=&\frac{G}{c^4}\frac{2}{r} {\skew6\ddot{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{\theta\phi}.\label{hcross} \end{eqnarray} For an observer located on the axis at $\theta=0, \phi=0$ these reduce to \begin{eqnarray} h_{+{\rm ,axis}}&=&\frac{G}{c^4}\frac{1}{r}( {\skew6\ddot{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{xx}- {\skew6\ddot{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{yy}), \label{hplus-axis}\\ h_{\times{\rm ,axis}} &=&\frac{G}{c^4}\frac{2}{r} {\skew6\ddot{ 
{I\mkern-6.8mu\raise0.3ex\hbox{-}}}}_{xy}.\label{hcross-axis} \end{eqnarray} The angle-averaged waveforms are defined by \cite{KSTC} \jot 5pt \begin{eqnarray} \langle h_+^2 \rangle & = & \frac{1}{4\pi}\int h_+^2 \, d\Omega \nonumber \\ \langle h_{\times}^2 \rangle & = & \frac{1}{4\pi} \int h_{\times}^2 \, d\Omega , \label{angle-avg} \end{eqnarray} which gives \jot 5pt \begin{eqnarray} \frac{c^8}{G^2} r^2 \langle h_+^2 \rangle & = & {\textstyle{\frac{4}{15}}} ({I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx}- {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{zz})^2 + {\textstyle{\frac{4}{15}}} ({I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy}- {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{zz})^2 + {\textstyle{\frac{1}{10}}} ({I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx}- {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy})^2 + \nonumber \\ & & {\textstyle{\frac{14}{15}}} ({I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xy})^2 + {\textstyle{\frac{4}{15}}}({I\mkern-6.8mu\raise0.3ex\hbox{-}} ^{(2)}_{xz})^2 + {\textstyle{\frac{4}{15}}}({I\mkern-6.8mu\raise0.3ex\hbox{-}}^ {(2)}_{yz})^2, \label{h+hx} \\ \frac{c^8}{G^2} r^2 \langle h_{\times}^2 \rangle & = & {\textstyle{\frac{1}{6}}} ({I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx}- {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy})^2 + {\textstyle{\frac{2}{3}}}({I\mkern-6.8mu\raise0.3ex\hbox{-}} ^{(2)}_{xy})^2 + {\textstyle{\frac{4}{3}}}({I\mkern-6.8mu\raise0.3ex\hbox{-}} ^{(2)}_{xz})^2 + {\textstyle{\frac{4}{3}}}({I\mkern-6.8mu\raise0.3ex\hbox{-}} ^{(2)}_{yz})^2 . \nonumber \end{eqnarray} (Note that equation~(\ref{h+hx}) corrects some typographical errors in equation (3.12) of \cite{KSTC}.) 
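As a concrete check of these expressions, equations~(\ref{Iij}), (\ref{hplus-axis}), and~(\ref{hcross-axis}) can be evaluated directly for two point masses on a circular orbit, for which the analytic on-axis amplitude is $4\mu a^2\Omega^2/r$ (with $\mu$ the reduced mass). The short Python sketch below is our own illustration, not part of TREESPH; it works in units with $G=c=1$ and forms the second time derivative by centered finite differences:

```python
import numpy as np

def reduced_quadrupole(masses, positions):
    """I-bar_ij = sum_k m_k (x_i x_j - delta_ij r^2 / 3), cf. equation (Iij)."""
    I = np.zeros((3, 3))
    for m, x in zip(masses, positions):
        I += m * (np.outer(x, x) - np.eye(3) * np.dot(x, x) / 3.0)
    return I

def binary_positions(t, a, omega):
    """Two equal masses on a circular orbit of separation a in the x-y plane."""
    x1 = np.array([0.5 * a * np.cos(omega * t), 0.5 * a * np.sin(omega * t), 0.0])
    return [x1, -x1]

def h_axis(t, m, a, omega, r, dt=1e-4):
    """On-axis h_+ and h_x (theta = phi = 0) via centered second differences,
    in units with G = c = 1."""
    masses = [m, m]
    Im = reduced_quadrupole(masses, binary_positions(t - dt, a, omega))
    I0 = reduced_quadrupole(masses, binary_positions(t, a, omega))
    Ip = reduced_quadrupole(masses, binary_positions(t + dt, a, omega))
    Idd = (Ip - 2.0 * I0 + Im) / dt**2
    h_plus = (Idd[0, 0] - Idd[1, 1]) / r       # equation (hplus-axis)
    h_cross = 2.0 * Idd[0, 1] / r              # equation (hcross-axis)
    return h_plus, h_cross
```

For equal masses $m$ one finds $h_{+,{\rm axis}} = -(4\mu a^2\Omega^2/r)\cos 2\Omega t$ and $h_{\times,{\rm axis}} = -(4\mu a^2\Omega^2/r)\sin 2\Omega t$, which the finite-difference result reproduces to high accuracy.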
The standard definition of gravitational-wave luminosity is \begin{equation} L = \frac{dE}{dt} = {\frac15}\frac{G}{c^5} \left\langle \left\langle {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{ij} {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{ij} \right \rangle \right \rangle , \label{lum} \end{equation} where there is an implied sum on $i$ and $j$, the superscript $(3)$ indicates the third time derivative, and the double angle brackets indicate an average over several wave periods. Since such averaging is not well-defined during coalescence, we simply display the unaveraged quantity $(G/5c^5) \textstyle{ {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{ij} {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{ij}}$ in the plots below. The energy emitted as gravitational radiation is \begin{equation} \Delta E = \int L\; dt . \label{delta-E} \end{equation} The angular momentum lost to gravitational radiation is \begin{equation} \frac{dJ_i}{dt} = {\frac{2}{5}}\frac{G}{c^5} \epsilon_{ijk} \left\langle \left\langle {I\mkern-6.8mu\raise0.3ex\hbox{-}} ^{(2)}_{jm} {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{km} \right \rangle \right \rangle\ , \label{dJ/dt} \end{equation} where $\epsilon_{ijk}$ is the alternating tensor. The total angular momentum carried away by the waves is \begin{equation} \Delta J_i = \int (dJ_i/dt)dt. \label{delta-J} \end{equation} Again, we plot the unaveraged quantity $(2G/5c^5) {\textstyle\epsilon_{ijk} {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{jm} {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{km}}$ for $dJ_i/dt$. The energy emitted in gravitational waves per unit frequency interval $dE/df$ is given by Thorne \cite{thorne87} in the form \begin{equation} \frac{dE}{df} = \frac{c^3}{G} \frac{\pi}{2} (4 \pi r^2) f^2 \langle |\tilde h_+ (f)|^2 + |\tilde h_{\times}(f)|^2 \rangle, \label{dE/df} \end{equation} where $r$ is the distance to the source and the angle brackets denote an average over all source angles. 
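The normalization of equation~(\ref{lum}) can likewise be checked against the familiar point-mass result: for a circular binary the unaveraged quantity is constant in time and equals $(32/5)(G^4/c^5)\,\mu^2 {\cal M}^3/a^5$ ({\it cf.} equation~(\ref{Lpt}) below). A minimal Python sketch, in units with $G=c=1$ (our own illustration; the function names are not part of TREESPH):

```python
import numpy as np

def Ibar(t, m, a, omega):
    """Reduced quadrupole moment (equation Iij) of an equal-mass circular
    binary in the x-y plane, units with G = c = 1."""
    x = np.array([0.5 * a * np.cos(omega * t), 0.5 * a * np.sin(omega * t), 0.0])
    I = np.zeros((3, 3))
    for pos in (x, -x):
        I += m * (np.outer(pos, pos) - np.eye(3) * np.dot(pos, pos) / 3.0)
    return I

def luminosity(t, m, a, omega, dt=1e-3):
    """Unaveraged L = (1/5) I3_ij I3_ij, with the third time derivative
    formed from a centered finite-difference stencil."""
    I3 = (Ibar(t + 2*dt, m, a, omega) - 2.0 * Ibar(t + dt, m, a, omega)
          + 2.0 * Ibar(t - dt, m, a, omega) - Ibar(t - 2*dt, m, a, omega)) / (2.0 * dt**3)
    return np.sum(I3 * I3) / 5.0

# Keplerian circular orbit for two masses m: omega^2 = 2m / a^3
m, a = 1.0, 10.0
omega = np.sqrt(2.0 * m / a**3)
mu, Mtot = 0.5 * m, 2.0 * m
L_pointmass = (32.0 / 5.0) * mu**2 * Mtot**3 / a**5
```

Because the third derivatives of the oscillatory components have constant summed square, the numerical $L$ is time-independent and agrees with the analytic value to well under a percent.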
We define the Fourier transform $\tilde h(f)$ of any function $h(t)$ by \begin{equation} \tilde h(f) \equiv \int_{- \infty}^{+ \infty} h(t) e^{2\pi ift} dt \label{h-FFT} \end{equation} and \begin{equation} h(t) \equiv \int_{- \infty}^{+ \infty} \tilde h(f) e^{-2\pi ift} df. \label{h-inv-transf} \end{equation} To calculate the angle-averaged quantity $\langle |\tilde h_+|^2 + |\tilde h_{\times}|^2 \rangle$ we first take the Fourier transforms of equations~(\ref{hplus}) and~(\ref{hcross}), to obtain \begin{eqnarray} \frac{c^4}{G} r \tilde h_+&=& {\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}}^{(2)}_{\theta\theta}- {\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}}^{(2)}_{\phi\phi} , \label{hplusft}\\ \frac{c^4}{G} r \tilde h_\times &=& 2 {\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}}^{(2)}_{\theta\phi}. \label{hcrossft} \end{eqnarray} The Fourier transforms $\tilde h_+$ and $\tilde h_{\times}$ have the same angular dependence, given by~(\ref{Ispher}), as $h_+$ and $h_{\times}$, respectively. The angle averaging \jot 5pt \begin{eqnarray} \langle | \tilde h_+|^2 \rangle & = & \frac{1}{4\pi}\int|\tilde h_+|^2 \, d\Omega \nonumber \\ \langle | \tilde h_{\times}|^2 \rangle & = & \frac{1}{4\pi} \int| \tilde h_{\times}|^2 \, d\Omega . 
\label{angle-avg-ft} \end{eqnarray} gives expressions analogous to~(\ref{h+hx}): \jot 5pt \begin{eqnarray} \frac{c^8}{G^2} r^2 \langle | \tilde h_+|^2 \rangle & = & {\textstyle{\frac{4}{15}}} | \tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx}- \tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{zz}|^2 + {\textstyle{\frac{4}{15}}} |\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy}- \tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{zz}|^2 + {\textstyle{\frac{1}{10}}}|\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx}- \tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy}|^2 + \nonumber \\ & & {\textstyle{\frac{14}{15}}} |\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xy}|^2 + {\textstyle{\frac{4}{15}}}|\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xz}|^2 + {\textstyle{\frac{4}{15}}}|\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yz}|^2, \label{h+hx-ft} \\ \frac{c^8}{G^2} r^2 \langle | \tilde h_{\times}|^2 \rangle & = & {\textstyle{\frac{1}{6}}} |\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xx} -\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yy}|^2 + {\textstyle{\frac{2}{3}}} |\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xy}|^2 + {\textstyle{\frac{4}{3}}}|\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{xz}|^2 + {\textstyle{\frac{4}{3}}}|\tilde {I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(2)}_{yz}|^2 . \nonumber \end{eqnarray} We then have \begin{equation} \langle |\tilde h_+ (f)|^2 + |\tilde h_{\times}(f)|^2 \rangle = \langle |\tilde h_+ (f)|^2 \rangle + \langle |\tilde h_{\times}(f)|^2 \rangle , \label{ang-avg} \end{equation} which may be substituted into expression~(\ref{dE/df}) for $dE/df$. We use the techniques of \cite{CM} to calculate the quadrupole moment and its derivatives. In particular, ${\skew6\dot{I\mkern-6.8mu\raise0.3ex\hbox{-}}}$ and ${\skew6\ddot{I\mkern-6.8mu\raise0.3ex\hbox{-}}}$ are obtained using particle positions, velocities, and accelerations already present in the code to produce very smooth waveforms. 
This yields expressions similar to those of Finn and Evans \cite{FE}. However, ${I\mkern-6.8mu\raise0.3ex\hbox{-}}^{(3)}_{ij}$ requires the derivative of the particle accelerations, which is taken numerically and introduces some numerical noise into $L$ and $dJ_i/dt$. This noise can be removed by smoothing; see \cite{CM} for further discussion. We have applied this smoothing in producing all graphs of $L$ and $dJ_i/dt$ in this paper. \section{Modeling Inspiral by Gravitational Radiation Reaction} \label{fric-term} Widely separated binary neutron stars (that is, with separation $a \gg R$) spiral together due to the effects of energy loss by gravitational radiation reaction. Once the two stars are close enough for tidal distortions to be significant, these effects dominate and rapid inspiral and coalescence ensue. In our calculations we initially place the neutron stars on (nearly) circular orbits with wide enough separation that tidal distortions are negligible and the stars are effectively in the point-mass limit. Since the gravitational field is purely Newtonian and does not take radiation reaction losses into account, we must explicitly include these losses to cause inspiral until purely hydrodynamical effects take over. To accomplish this, we add a frictional term to the particle acceleration equations to remove orbital energy at a rate given by the point-mass inspiral expression (see \cite{MBD93} for a similar approach). The gravitational wave luminosity for point-mass inspiral on circular orbits is \cite{MTW,ST} \begin{equation} L_{pm} = \left . \frac{dE}{dt} \right |_{\rm pm} = \frac{32}{5} \frac{G^4}{c^5} \frac{\mu^2 {\cal M}^3}{a^5} , \label{Lpt} \end{equation} where $\mu = M_1 M_2/(M_1 + M_2)$ is the reduced mass, ${\cal M} = M_1 + M_2$ is the total mass of the system, and the subscript ``pm'' refers to point-mass inspiral. For equal mass stars with $M_1 = M_2 \equiv M$ and separation $a = \xi R$, we find \begin{equation} \left . 
\frac{dE}{dt} \right |_{\rm pm} = \frac{64}{5} \frac{c^5}{G} \frac{1}{\xi^5} \left ( \frac{GM}{c^2R} \right )^5. \label{dE-pm} \end{equation} We then assume that this energy change is due to a frictional force $\vec f$ that is applied at the center of mass of each star, so that each point in the star feels the same frictional deceleration. Dividing the loss equally between the two stars gives \begin{equation} \vec f \cdot \vec V = {\frac{1}{2}} \left . \frac{dE}{dt} \right|_{\rm pm} , \end{equation} where $\vec V$ is the center of mass velocity of the star. Since $\vec f$ acts in the direction opposite to $\vec V$ this gives an acceleration \begin{equation} \vec a = \frac{\vec f}{M} = -\frac{1}{2M} \left . \frac{dE}{dt} \right|_{\rm pm} \frac{\vec V}{|\vec V|^2}. \end{equation} This term is added to the acceleration of every particle, so that each particle in either star experiences the same frictional deceleration. The net effect is that the centers of mass of the stars follow trajectories that approximate point-mass inspiral. This frictional term is applied until tidal effects dominate, leading to more rapid inspiral and coalescence; see Sec.~\ref{inspiral} and Fig.~\ref{a-fric-off} below. (Operationally, our assignment of a particle to a ``star'' is based simply on which body it happened to belong to initially. Since the frictional term is turned off before coalescence occurs, the question of what to do after the stars have merged does not arise.) The dynamics of polytropes in purely Newtonian gravity is scale free in the sense that, for a given polytropic index $n$, the results of a calculation can be scaled for any values of the mass $M$ and radius $R$. Inspiral by gravitational radiation reaction introduces the dimensionless parameter $GM/Rc^2$, as is explicitly evident in equations~(\ref{dE-pm}) and~(\ref{vx}) for our frictional model of inspiral. For neutron stars, $GM/Rc^2$ is determined by the nuclear equation of state. 
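The frictional prescription above is straightforward to express in code. The sketch below is our own schematic version (not the TREESPH implementation): it evaluates equation~(\ref{dE-pm}) at the current scaled separation $\xi = a/R$ and returns the common deceleration applied to every particle of one star.

```python
import numpy as np

def pm_luminosity(M, R, xi, G=1.0, c=1.0):
    """Point-mass inspiral luminosity for equal masses M at separation a = xi R,
    equation (dE-pm): (64/5)(c^5/G)(1/xi^5)(GM/c^2 R)^5."""
    return (64.0 / 5.0) * (c**5 / G) * (G * M / (c**2 * R))**5 / xi**5

def frictional_acceleration(V_cm, M, R, xi, G=1.0, c=1.0):
    """Common deceleration added to every particle of one star:
    a = -(1/2M) (dE/dt)_pm V / |V|^2, opposing the center-of-mass velocity V."""
    dEdt = pm_luminosity(M, R, xi, G, c)
    return -0.5 * (dEdt / M) * V_cm / np.dot(V_cm, V_cm)
```

By construction $M\,\vec a\cdot\vec V = -\frac{1}{2}(dE/dt)_{\rm pm}$ for each star, so the two stars together drain orbital energy at exactly the point-mass rate, and applying the same acceleration to every particle decelerates each star as a rigid body.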
In the calculations below, we will vary both $R$ (and hence $GM/Rc^2$) and the polytropic index $n$. \section{Initial Conditions} \label{init-cond} Since our neutron stars start out widely separated with negligible tidal interaction, they are modeled initially as spherical polytropes. Because the timescale for tidal effects to develop is much greater than the dynamical time $t_D$ for an individual star, where \begin{equation} t_D = \left ( \frac{R^3}{GM} \right ) ^{1/2} , \label{tD} \end{equation} we start with stable, ``cold'' polytropes produced by the method discussed in \cite{CM}. The stars are then placed on a circular orbit with separation $a_0 = \xi_0 R$ in the center of mass frame of the system in the $x-y$ plane. Locating the centers of mass of the individual stars at $(x,y)$ positions $(\pm a_0/2, 0)$ initially, the stars are then given the equivalent point-mass velocities for a circular orbit $V_y = \pm (GM/2a_0)^{1/2}$. To ensure that the stars start out on the correct point-mass inspiral trajectories, we also give them an initial inward radial velocity $V_x$ as follows. For point-mass inspiral the separation $a(t)$ is given by \cite{MTW} \begin{equation} a(t) = a_0 \left ( 1- \frac{t}{\tau_0} \right )^{1/4} , \label{a(t)} \end{equation} where $a_0$ is the separation at the initial time $t=0$ and \begin{equation} \tau_0 = \frac{5}{256} \frac{c^5}{G^3} \frac{a_0^4}{\mu {\cal M}^2} \label{tau0} \end{equation} is the inspiral time, {\it i.e.} the time needed to reach separation $a=0$. For equal mass stars, the initial inward velocity is thus \begin{equation} V_x = \left . \frac{dr}{dt} \right |_{t=0} = \left . \frac{1}{2} \frac{da}{dt} \right |_{t=0} = - \frac{1}{8} \frac{a_0}{\tau_0} . \label{vx-1} \end{equation} Since the stars have initial separation $a_0 = \xi_0 R$ this gives \begin{equation} V_x = - \frac{64}{5} \frac{c}{\xi_0^3} \left ( \frac{GM}{c^2R} \right )^3. 
\label{vx} \end{equation} The use of the correct initial inspiral trajectory allows us to match our gravitational waveforms smoothly to the equivalent point-mass waveforms. This is important when analyzing the signals in the frequency domain, as discussed in Sec.~\ref{freq} below. \section{Binary Inspiral and Coalescence} \label{inspiral} We take the case $M = 1.4 {\rm M}_{\odot}$ and $R = 10 {\rm km}$ (so $GM/Rc^2 = 0.21$), with polytropic index $n = 1$ and initial separation $a_0 = 4R$ as our standard model, which we refer to as Run 1. The parameters of this model and the other two models introduced in Sec.~\ref{new-param} below are presented in Table~\ref{models-param}. The results of the simulations are summarized in Table~\ref{models-results}. Time is measured in units of the dynamical time $t_D$ given in equation~(\ref{tD}); for Run 1, $t_D = 7.3 \times 10^{-5} {\rm s}$. The evolution of this system for the case of $N = 4096$ particles per star is shown in Fig.~\ref{std-movie}. Each frame shows the projection of all particles onto the $x-y$ plane. As the stars spiral together their tidal bulges grow. By $t \sim 100 t_D$ the stars have come into contact, after which they rapidly merge and coalesce into a rotating bar-like structure. Note that the merger is a fairly gentle process and, in contrast to head-on collisions, does not generate strong shocks \cite{RS94,CM,GS}. Spiral arms form as mass is shed from the ends of the bar. Gravitational torques cause angular momentum to be transported outward and lost to the spiral arms. The arms expand and eventually form a disk around the central object. By $t = 200 t_D$ the system is roughly axisymmetric. Contour plots at $t = 200 t_D$ reveal more details of the system. In Fig.~\ref{contour}(a), which shows a cut along the $x-y$ equatorial plane, we see that the core is essentially axisymmetric out to cylindrical radius $\varpi \sim 2R$, where $\varpi = (x^2 + y^2)^{1/2}$. 
As the spiral arms wind up, expand, and merge, the disk grows increasingly axisymmetric. In the process the arms expand supersonically, producing shock heating that causes the disk to puff up. This can be seen in Fig.~\ref{contour}(b), which shows density contours (two per decade in density) for a cut along the meridional $x-z$ plane. The angular velocity $\Omega(\varpi)$ of the particles is shown as a function of cylindrical radius $\varpi$ in Fig.~\ref{omega-std}. For our choice of parameters, $\Omega = 1\ (t_D^{-1})$ corresponds to a spin period $T_{\rm spin} = 0.46 {\rm ms}$. At $t = 150 t_D$ the object is in the final stage of its gravitational wave ``ring down'' ({\it cf.} Fig.~\ref{std-waves} below). Fig.~\ref{omega-std}$a$ shows that the central core $\varpi \lesssim 2R$ is still differentially rotating at this time (with $\Omega\sim\varpi^{-0.4}$). The disk $\varpi \gtrsim 2R$ is also differentially rotating, with $\Omega\sim\varpi^{-2}$. By $t = 200 t_D$ the central object has less differential rotation and is more nearly rigidly rotating, with $\Omega \sim 0.65 t_D^{-1}$, giving a spin period $T_{\rm spin} \sim 0.71 {\rm ms}$. The disk is differentially rotating with $\Omega \sim \varpi^{-1.7}$. (Recall that Keplerian motion has $\Omega \sim \varpi^{-1.5}$.) The mass $m$ contained within cylindrical radius $\varpi$ is plotted in Fig.~\ref{std-m(varpi)}, showing that $\sim 6\%$ of the mass has been shed to the disk $\varpi \gtrsim 2R$. Between the times $t = 150 t_D$ and $t = 200 t_D$, some of the matter in the disk is redistributed out to larger radii. The specific spin angular momentum $j(\varpi)$ within cylindrical radius $\varpi$ is shown in Fig.~\ref{std-j(varpi)}. About $27 \%$ of the angular momentum has been shed to the disk, with continued outward transport of angular momentum within the disk taking place between $t=150 t_D$ and $t=200 t_D$. 
The gravitational waveforms $r h_+$ and $r h_{\times}$ for an observer on the axis at $\theta = 0$ and $\phi = 0$ are shown for this run in Figs.~\ref{std-waves} (a) and (b), where the solid lines give the code waveforms and the dashed lines the point-mass results. For the first couple of orbits after the start of the run ($T_{\rm orbit} = 2 T_{\rm GW}$) the code waveforms match the point-mass predictions. As the tidal bulges grow, the stars spiral in faster than they would on point-mass trajectories, leading to an increase in the frequency and amplitude of the gravitational waveforms ({\it cf.} \cite{LRS}). The gravitational wave amplitudes reach a maximum during the merger of the two stars at $t \sim 105 - 110 t_D$, then decrease as the stars coalesce and the spiral arms expand and form the disk. The peak waveform amplitude $(c^2/GM)rh_+\sim 0.4$ corresponds to a value $h\sim 1.4 \times 10^{-21}$ for a source at distance $r = 20\,{\rm Mpc}$ (the approximate distance to the Virgo Cluster). By $t\sim 180 t_D$ the gravitational waves have shut off and the system is essentially axisymmetric. Fig.~\ref{std-gw} shows (a) the gravitational wave luminosity $L/L_0$ (where $L_0 = c^5/G$), (b) the energy $\Delta E/Mc^2$ emitted as gravitational radiation, (c) $dJ_z/dt$ for the gravitational radiation, and (d) the total angular momentum $\Delta J_z/J$ (where $J$ is the initial total angular momentum of the system) carried by the waves. In all the gravitational-wave quantities, the code results (solid lines) initially track the point-mass case (dashed lines) very well, departing significantly from the point-mass predictions somewhat before the onset of dynamical instability. The maximum luminosity is $1.65\times10^{-4}L_0$. This may be compared with the value of $5.3 \times 10^{-4} L_0$ found by \cite{CM} for the case of a head-on collision with $GM/Rc^2 = 0.21$; for off-axis collisions on parabolic orbits, the maximum obtained by those authors was $1.0 \times 10^{-3} L_0$. 
The total energy radiated away after the luminosity departs from the point-mass result by more than 10\% is $0.032 Mc^2$. Again this can be compared with $0.0025 Mc^2$ for a head-on collision and a maximum of $0.016 Mc^2$ for off-axis collisions obtained by \cite{CM}. Although the collisions can achieve a higher gravitational wave luminosity, they radiate less energy in the form of gravitational waves overall because they take place on shorter timescales than the inspiraling binaries. How sensitive are these results to the resolution of the calculation? To answer this question we ran the same standard model with different numbers of particles per star. Fig.~\ref{std-waves-compare} shows a comparison of the waveform $r h_+$ for the cases $N = 1024$, $2048$, and $4096$ particles per star. It is clear that the differences in the waveforms are small. We will see in Sec.~\ref{freq} below that there are only slight differences in the frequency domain as well. In numerical simulations viscosity, whether implicit in the numerical method or added explicitly as artificial viscosity, can cause problems by artificially spinning up the stars \cite{MBD93}. To monitor this effect in our simulations we calculated the spin angular momentum of each star about its center of mass and compared this with the results expected for a synchronously rotating star (the expected result in the limit of large viscosity). In general, we have found that these non-physical viscous effects always remain small in our simulations. For example, we ran a test case consisting of initially non-spinning stars each composed of $N=1024$ particles on a circular orbit of constant separation $a = 4 R$, with artificial viscosity coefficients $\alpha = \beta = 0.3$. After 100 $t_D$ ($\sim 2.8$ orbits), the stars had spin angular momenta $< 2.3\%$ of the synchronous value. We conclude that numerical and artificial viscosities play negligible roles in spinning up the stars in our simulations. 
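The peak strain quoted above for Run 1 follows from the dimensionless amplitude by a short unit conversion, $h = 0.4\,(GM/c^2)/r$. The lines below (standard physical constants; our own check, independent of the simulation code) give $h \approx 1.3$--$1.4 \times 10^{-21}$ at $20\,{\rm Mpc}$:

```python
# Standard physical constants (SI)
G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m s^-1
Msun = 1.989e30       # kg
Mpc  = 3.086e22       # m

M    = 1.4 * Msun     # mass of one neutron star
peak = 0.4            # peak dimensionless amplitude (c^2/GM) r h_+ from the run
r    = 20.0 * Mpc     # approximate distance to the Virgo Cluster

h = peak * (G * M / c**2) / r   # physical strain at the detector
```

Here $GM/c^2 \simeq 2.1\,{\rm km}$ for a $1.4\,{\rm M}_\odot$ star, so the conversion is simply $0.4 \times 2.1\,{\rm km}$ divided by $20\,{\rm Mpc}$.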
For inspiraling stars, torquing due to the gravitationally-induced tidal bulges will cause a physical spin-up of the stars \cite{RS94}. This is shown for the case $N=4096$ particles per star in Fig.~\ref{spin-4096}, which plots the spin angular momentum of one star (normalized by the spin of a synchronously rotating star at that orbital separation) as a function of time. We see that the spin angular momentum of the star remains small until contact occurs at $t \sim 100 t_D$; after this it increases sharply, reaching nearly 70\% of the synchronous value at $t = 105 t_D$. (Each ``star'' is composed of the particles that belonged to it initially, with the orbital separation of the stars given by the distance between the two centers of mass.) Comparison with Fig.~\ref{std-movie} confirms that this effect is due to the tidal torquing that occurs when the stars develop large tidal bulges, come into contact, and merge. Once the stars are close enough for this gravitationally-induced tidal torquing to be significant, Newtonian gravitational effects operating on a dynamical timescale dominate the secular radiation reaction effects, leading to more rapid inspiral, merger, and coalescence \cite{LRS}. We should turn off the gravitational friction term at some time after the Newtonian tidal torquing takes over and before the merger occurs, since during the merger the concept of equivalent point-mass trajectories is meaningless. We have experimented with turning off the gravitational friction term at different times and present the results for our standard run with $N=1024$ particles per star in Fig.~\ref{a-fric-off}, which shows the center of mass separation of the two stars as a function of time. Here, the solid line shows the result of running with the gravitational friction term left on, and the short dashed lines show the results of turning this term off at $(1)$ $t=70 t_D$, $(2)$ $t=85 t_D$, and $(3)$ $t=100 t_D$. 
The long-dashed line shows the equivalent point-mass result. In cases $(1)$ and $(2)$, the stars go into nearly circular orbits (with eccentricities appropriate to the inspiral radial velocity at that separation) once the frictional term is turned off. However, the trajectory in case $(3)$ is very similar to the result when the frictional term is left on, indicating that the Newtonian tidal effects are dominant by this point. The center of mass separation of the two stars in this case is $\sim 2.5 R$ at $t=100 t_D$, and then rapidly decreases. This result is in good agreement with the prediction of a dynamical stability limit at $a=2.49 R$ by Lai, Rasio, and Shapiro \cite{LRS}. On the basis of these tests, we have turned off the gravitational friction term at $t = 100 t_D$ for our standard run, and all the other plots in this paper for this run were done with this choice. For each of the runs reported below with different values of the physical parameters, we have carried out such experiments to determine the optimal time to turn off the gravitational friction term, since the time at which the Newtonian tidal effects dominate differs in each case ({\it cf.} \cite{LRS}). Finally, we have experimented with the values of the artificial viscosity coefficients $\alpha$ and $\beta$. For all runs we used the values $\alpha = \beta = 0.3$ during the inspiral phase. However, since shocks occur during the merger, coalescence, and the expansion of the spiral arms, we ran some tests with different amounts of artificial viscosity during these regimes. Fig.~\ref{av-test} shows the waveform $r h_+$ for our standard run with $N=1024$ particles per star during this phase for three cases: solid line, $\alpha = \beta = 0.3$; short-dashed line, $\alpha = 0.3$, $\beta = 1$; and long-dashed line, $\alpha = 1$, $\beta = 2$. Not surprisingly, the amplitude of the waveform is damped as $\alpha$ and $\beta$ are increased. 
The low viscosity case conserves energy to $\sim 2\%$ during the period $100-200$ $t_D$ (after the frictional term is turned off), indicating that the evolution of the system is not dominated by strong shocks. Overall, the differences in energy conservation for the three cases are not significant. Therefore, since the low viscosity case produces the least damping of the waveform, we chose to use the values $\alpha = \beta = 0.3$ in all of our runs. \section{Analysis in the Frequency Domain} \label{freq} Broad-band detectors such as LIGO and VIRGO should be able to measure the gravitational waveforms of inspiraling neutron star binaries in the frequency range $f \sim 10 - 1000 {\rm Hz}$. Comparison of these signals with waveform templates derived from post-Newtonian analysis is expected to yield the neutron-star mass $M$ \cite{cutler93,CF94}. It is important to develop techniques to measure the neutron-star radius $R$ since this information, coupled with $M$, can constrain the equation of state for nuclear matter \cite{lindblom92}. The actual merger and coalescence stages are driven primarily by hydrodynamics and are expected to depend on both $R$ and the equation of state, here parametrized by the polytropic index $n$. For most neutron-star binaries, this will take place at frequencies $f > 1000 {\rm Hz}$. In this regime, shot noise limits the sensitivity of the broad-band interferometers and so these signals may not be detectable by them \cite{LIGO92,thorne92}. However, a set of specially designed narrow band interferometers \cite{narrow} or resonant detectors \cite{resonant} may be able to provide information about this high frequency region \cite{KLT94}. The merger and coalescence of the neutron stars takes place within several orbits following initial contact, after which the gravitational radiation shuts off fairly rapidly as the system settles into a roughly axisymmetric final state \cite{cutler93}. 
This rapid shutoff of gravitational waves is expected to produce a sharp cutoff in the spectrum $dE/df$. Since the frequency of the radiation calculated in the point-mass approximation at separation $a$ scales as $\sim a^{-3/2} \sim R^{-3/2}$, a set of narrow-band detectors that can locate the cutoff frequency where the energy spectrum $dE/df$ drops sharply may in principle determine the neutron-star radius $R$ \cite{cutler93,KLT94,thorne-pc93}. We have calculated the spectrum $dE/df$ for our simulations using equation~(\ref{dE/df}). For point-mass inspiral, $dE/df \sim f^{-1/3}$ \cite{thorne87}, where the decrease in energy with frequency reflects the fact that the binary spends fewer cycles near a given frequency $f$ as it spirals in. To see any cutoff frequency in our data, we need a reasonably long region of point-mass inspiral in the frequency domain. Although our runs do start out in the point-mass regime, the binaries undergo dynamical instability and rapid merger within just a few orbits. To compensate for this we match our waveforms $h_+$ and $h_{\times}$ onto point-mass waveforms extending back to much larger separations and hence lower frequencies. The energy spectrum $dE/df$ for Run 1 with $N=4096$ particles per star is shown in Fig.~\ref{dE/df-std}. The solid line shows the spectrum for the extended waveform including point-mass inspiral, and the short-dashed line shows the spectrum of the simulation data only. Note that the two curves match closely. The separation at which the data were matched corresponds to frequency $\sim 770 {\rm Hz}$, which is well within the inspiral regime $dE/df\sim f^{-1/3}$. Fig.~\ref{dE/df-std} shows that the match is smooth, and does not affect the merger and coalescence region of the spectrum. We have also examined the effect of using different numbers of particles on $dE/df$. The result is shown in Fig.~\ref{dE/df-std-1024}, which plots the spectra for Run 1 with $N=1024$ and $4096$ particles per star.
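The $f^{-1/3}$ inspiral law quoted above follows from Newtonian energetics and Kepler's law; the following two-line derivation is the standard argument, included here only for orientation:

```latex
% Orbital energy of two point masses M at separation a, and the
% gravitational-wave frequency (twice the Keplerian orbital frequency):
\[
  E_{\rm orb} = -\frac{GM^{2}}{2a} , \qquad
  f = \frac{1}{\pi}\sqrt{\frac{2GM}{a^{3}}}
  \quad\Longrightarrow\quad a \propto f^{-2/3} .
\]
% The energy radiated up to frequency f is
% \Delta E \propto a^{-1} \propto f^{2/3}, so
\[
  \frac{dE}{df} \propto f^{-1/3} .
\]
```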
The use of a smaller number of particles makes only a slight difference to $dE/df$. Examination of Figs \ref{dE/df-std} and \ref{dE/df-std-1024} reveals several interesting features. Starting in the point-mass regime, as $f$ increases, $dE/df$ first drops below the point-mass inspiral value, reaching a local minimum at $f\sim1500{\rm Hz}$. We identify this initial dip with the onset of dynamical instability. For the parameters of Run 1, Lai, Rasio, and Shapiro \cite{LRS} found that dynamical instability occurs at separation $a = 2.49 R$; for point-mass inspiral, the frequency at this separation is $f_{\rm dyn} = 1566 {\rm Hz}$. The instability causes the spectrum $dE/df$ to drop below the point-mass value, since the stars fall together faster than they would had they remained on strictly point-mass trajectories. For reference, Fig.~\ref{dE/df-std} also shows the frequency $f_{\rm contact} = (1/2\pi)(GM/R^3)^{1/2} \sim 2200 {\rm Hz}$, which is twice the orbital frequency (in the point-mass limit) at separation $2R$. This initial fall-off in the spectrum is rather slight. At higher frequencies, $dE/df$ increases above the point-mass result, reaching a fairly broad maximum at $f_{\rm peak}\sim2500 {\rm Hz}$, roughly the frequency of the waves in Fig.~\ref{std-waves} near $t\sim 125 t_D$ (the approximate time at which the gravitational waves shut off). To further demonstrate that this feature is associated with the late-time behavior of the merged system, we have calculated the spectrum $dE/df$ for the cases in which the waveforms $rh_+$ and $rh_{\times}$ (including the point-mass inspiral) were truncated at $t=120 t_D$ and $t=150 t_D$. The results are shown in Fig.~\ref{FFT-trunc}, where the solid line shows the spectrum for the complete waveforms and the dashed lines show the spectra for the truncated ones. 
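Both characteristic frequencies quoted here follow from Kepler's law for a point-mass binary with the Run 1 parameters ($M=1.4\,{\rm M}_{\odot}$, $R=10$ km). A quick numerical cross-check (SI constants; this script is ours and not part of the original analysis):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1.4 * 1.989e30     # neutron-star mass, kg
R = 1.0e4              # neutron-star radius, m (10 km)

def f_gw(a):
    """GW frequency = twice the orbital frequency of a point-mass
    binary with total mass 2M at separation a."""
    return (1.0 / math.pi) * math.sqrt(2.0 * G * M / a**3)

f_dyn = f_gw(2.49 * R)     # instability separation a = 2.49 R
f_contact = f_gw(2.0 * R)  # contact; equals (1/2 pi) sqrt(GM/R^3)

print(round(f_dyn))        # ~1560 Hz, close to the quoted 1566 Hz
print(round(f_contact))    # ~2170 Hz, i.e. f_contact ~ 2200 Hz
```

The small differences from the quoted values come only from the physical constants used.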
We see that this peak forms between $t = 120 t_D$ and $t=150 t_D$, and therefore associate it with the transient, rotating bar-like structure formed immediately following coalescence; {\it cf.} Fig.~\ref{std-movie}. The angular speed of this structure is $\sim 0.65 t_D^{-1}$ (see Fig.~\ref{omega-std}), which corresponds to gravitational radiation with frequencies near $\sim 2800 {\rm Hz}$. The conclusion that $f_{\rm peak}$ is associated with a bar is strengthened by Run 3 below, in which the bar survives for a much longer time and the peak is correspondingly stronger. Beyond $f_{\rm peak}$, the spectrum drops sharply, eventually rising again to a secondary maximum at $f_{\rm sec}\sim3200 {\rm Hz}$, too high to be associated with the bar. Fig.~\ref{FFT-trunc} shows that this peak also appears between $t = 120 t_D$ and $t=150 t_D$. We attribute this broad high-frequency feature to transient oscillations induced in the coalescing stars during the merger process---the result of low-order p-modes with frequencies somewhat higher than the Kepler frequency in the merging object (see, e.g., \cite{cox80}). The three frequencies $f_{\rm dyn}$, $f_{\rm peak}$ and $f_{\rm sec}$ serve as a useful means of characterizing our runs. They are indicated on Fig.~\ref{dE/df-std} and are presented in more detail in Table \ref{freq-results} below. \section{The Effects of Changing the Neutron Star Radius and Equation of State} \label{new-param} The energy spectrum $dE/df$ shows rich structure in the frequency range $f \sim 1000 - 3000 {\rm Hz}$ in which the merger and coalescence of the neutron stars take place. To understand how observations of $dE/df$ might provide information on the neutron star radius and equation of state, we must investigate the effects of changing $R$ and the polytropic index $n$. In this section we present the results of two runs which begin to explore this parameter space. We will continue this study in future papers.
Run 2 is the same as Run 1 except that the initial neutron star radius is $R = 15 {\rm km}$. Taking $M = 1.4 {\rm M}_{\odot}$, this gives $GM/Rc^2 = 0.14$; see Table~\ref{models-param}. This model was run with $N = 1024$ particles per star. The gross features of the evolution of this model are similar to those found in Run 1. The stars first come into contact at $t \sim 250 t_D$. By the end of the run, the core $\varpi\lesssim 2R$ is essentially axisymmetric and has $92 \%$ of the mass and $65 \%$ of the angular momentum. The disk extends out to $\sim 10 R$. The gravitational waveforms $r h_+$ and $r h_{\times}$ are shown in Figs.~\ref{run9-waves} (a) and (b) for an observer on the axis at $\theta = 0,\phi = 0$. Fig.~\ref{run9-gw} shows (a) the gravitational wave luminosity $L/L_0$ and (b) the energy $\Delta E/Mc^2$ emitted as gravitational radiation. As in Fig.~\ref{std-gw}, the time-dependence of the angular momentum carried away by the waves is quite similar to that of the energy, and is not presented here. In these figures, the solid lines give the code waveforms and the dashed lines the point-mass results. Some interesting properties of this model are summarized in Table~\ref{models-results}. The energy spectrum $dE/df$ for Run 2 is shown in Fig.~\ref{dE/df-run9}. Again, we matched the code data to point-mass inspiral waveforms for analysis in the frequency domain. For Run 2 the match occurs at frequency $\sim 420 {\rm Hz}$. Given the parameters of this run, dynamical instability is expected to occur at separation $a = 2.49R$ \cite{LRS}; the point-mass inspiral frequency at this separation is $f_{\rm dyn} = 852 {\rm Hz}$. Fig.~\ref{dE/df-run9} shows that, as in Run 1, the spectrum drops below the point-mass inspiral result near $f_{\rm dyn}$.
The spectrum does not then rise above the point-mass result at $f_{\rm peak}\sim 1500 {\rm Hz}$ as in Run 1; however, it does drop sharply just beyond $f_{\rm peak}$, rising again to a secondary peak at $f_{\rm sec}\sim 1750 {\rm Hz}$. See Table \ref{freq-results}. We estimate the frequency of the waves in Fig.~\ref{run9-waves} at the time when the gravitational radiation shuts off (around $t\sim 270 t_D$) to be $\sim 1300 {\rm Hz}$---that is, close to $f_{\rm peak}$. The orbital angular velocity near the end of the run is $\Omega\sim 0.6 t_D^{-1}$ or $4.5\times 10^3$ rad/s; any residual non-axisymmetric material rotating at this speed would yield gravitational waves at frequency $\sim 1500 {\rm Hz}$. Again, we interpret the secondary peak as the result of high-frequency oscillations in the merging system. The absence of a strong peak at $f_{\rm peak}$ and the weaker maximum at $f_{\rm sec}$ are the result of weaker tidal forces at the point of dynamical instability, leading to a less pronounced and shorter-lived bar. Since the frequency of the gravitational radiation for point-mass inspiral is $\sim a^{-3/2} \sim R^{-3/2}$, we expect the features in the spectrum for Run 2 to occur at lower frequencies than in Run 1, roughly in the ratio $f_{\rm Run1}/f_{\rm Run2} \sim (R_{\rm Run 1}/R_{\rm Run 2})^{-3/2} \sim 1.8$. Our numerical simulations do indeed show this behavior. For example, the ratio of the frequencies at which the first peak occurs is $\sim 2500 {\rm Hz} / 1500 {\rm Hz} \sim 1.7$. The ratio of the frequencies at which the secondary peak occurs is $\sim 3200 {\rm Hz} / 1750 {\rm Hz} \sim 1.8$. Run 3 is the same as Run 1 except that we use polytropic index $n=0.5$ ($\Gamma = 3$). This model was run with $N = 1024$ particles per star, with initial separation $a_0 = 4.5 R$. The stars first make contact at $t \sim 167 t_D$.
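The $R^{-3/2}$ scaling estimate can be reproduced in a couple of lines (the values are taken from the text; the snippet is ours):

```python
# Point-mass GW frequency scales as f ~ R^{-3/2}, so for
# R_run1 = 10 km and R_run2 = 15 km the predicted ratio is:
predicted = (10.0 / 15.0) ** (-1.5)   # ~1.84

# Ratios of the measured spectral features (Run 1 / Run 2):
ratio_peak = 2500.0 / 1500.0          # ~1.7
ratio_sec = 3200.0 / 1750.0           # ~1.8
print(predicted, ratio_peak, ratio_sec)
```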
By the end of the run, the core $\varpi \lesssim 2R$ has $93\%$ of the mass and $67\%$ of the angular momentum; the disk extends out to $\sim 50 R$. The gravitational waveforms $r h_+$ and $r h_{\times}$ are shown in Fig.~\ref{run10-waves} for an observer on the axis at $\theta = 0, \phi = 0$. Fig.~\ref{run10-gw} shows (a) the gravitational wave luminosity $L$ and (b) the energy $\Delta E$ emitted as gravitational radiation. Again, the solid lines give the code waveforms and the dashed lines the point-mass results. Table~\ref{models-results} summarizes some features of this run. However, unlike the previous cases, the core of the merged object is slightly non-axisymmetric, as shown in Fig.~\ref{contour-compare}. The effect of this rotating, bar-like core can be seen in the gravitational waveforms $r h_+$ and $r h_{\times}$ in Fig.~\ref{run10-waves}. At late times the angular velocity of the core is $\Omega \sim 0.5 t_D^{-1}$, corresponding to a gravitational wave frequency $f \sim 2200 {\rm Hz}$. This agrees with the wave frequency calculated from Fig.~\ref{run10-waves} at $t \sim 290 t_D$, confirming that the radiation at late times is due to the rotating core. Rasio \& Shapiro \cite{RS94} also found that the coalescence of a synchronized binary with $n = 0.5$ resulted in a rotating bar-like core. The energy spectrum $dE/df$ for Run 3 is shown in Fig.~\ref{dE/df-run10}. Here, the match to point-mass inspiral waveforms occurs at frequency $\sim 640 {\rm Hz}$. Dynamical instability is expected to occur at separation $a = 2.76R$ \cite{LRS}, which gives $f_{\rm dyn} = 1342 {\rm Hz}$. Again we see that the spectrum drops below the point-mass inspiral result near $f_{\rm dyn}$. The spectrum then rises to a sharp peak at $f_{\rm peak}\sim 2200 {\rm Hz}$, drops sharply, then rises again to a secondary peak at $f_{\rm sec}\sim 2600 {\rm Hz}$. In this model the gravitational radiation due to the rotating bar is in the frequency range of the first sharp peak. 
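The correspondence between the core's angular velocity and the late-time wave frequency is a unit conversion through the dynamical time $t_D=(R^{3}/GM)^{1/2}$: a bar rotating at angular velocity $\Omega$ radiates at $f=\Omega/\pi$, twice its rotation frequency. A quick check with SI constants (ours, not from the paper's code):

```python
import math

G = 6.674e-11
M = 1.4 * 1.989e30        # neutron-star mass, kg
R = 1.0e4                 # radius, m; Runs 1 and 3 both use R = 10 km
t_D = math.sqrt(R**3 / (G * M))   # dynamical time, ~7.3e-5 s

omega = 0.5 / t_D         # late-time core angular velocity in Run 3
f_core = omega / math.pi  # GW frequency of the rotating core
print(round(f_core))      # ~2170 Hz, consistent with the quoted f ~ 2200 Hz
```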
See Table \ref{freq-results}. Lai, Rasio, and Shapiro \cite{LRS} found that the onset of dynamical instability occurs at separation $a = 2.49 R$ for the parameters of Run 1 and at $a = 2.76 R$ for the parameters of Run 3. From this we estimate that the ratio of frequencies at which the various spectral features occur should be $f_{\rm Run 1}/f_{\rm Run 3} \sim 1.2$. Our simulations approximate this behavior. For example, the ratio of the frequencies at which the sharp drop occurs is $\sim 2500 {\rm Hz}/2200 {\rm Hz} \sim 1.1$. The ratio of the frequencies at which the secondary peak occurs is $\sim 3200 {\rm Hz}/2600 {\rm Hz} \sim 1.2$. \section{Summary and Discussion} \label{summary} We have carried out SPH simulations of the merger and coalescence of identical non-rotating neutron stars modeled as polytropes. The stars start out in the point-mass regime and spiral together due to the effects of gravitational radiation reaction. Once the stars come into contact, they rapidly merge and coalesce. Spiral arms form as mass is shed from the ends of the central rotating bar-like structure. Angular momentum is transported outward by gravitational torques and lost to the spiral arms. The arms expand supersonically and merge, forming a shock-heated axisymmetric disk. The central rotating core becomes axisymmetric for $n=1$, with the gravitational radiation shutting off rapidly after coalescence. For the stiffer equation of state $n=0.5$, the rotating core is slightly non-axisymmetric and considerably longer-lived, and the gravitational waves decrease more slowly in amplitude. It is instructive to compare our results with other, related work. Davies et al. (1993) recently carried out SPH calculations very similar to ours with $n \sim 0.71$ ($\Gamma = 2.4$). Their results for non-rotating stars are similar to ours. Rasio \& Shapiro \cite{RS92,RS94} have performed SPH simulations of synchronously rotating systems. 
They found that polytropes with $n=1$ produce an axisymmetric core, and those with $n=0.5$ yield a non-axisymmetric core, in agreement with our results. However, for their synchronously rotating models, the amplitude of the gravitational radiation drops off more rapidly after the merger than in our models. This effect was also seen by Shibata, Nakamura, and Oohara \cite{SNO} and may be due to the fact that the synchronously rotating stars are not spinning relative to one another when they merge, leading to less ``ringing'' of the resulting remnant. We have also calculated the energy emitted in gravitational waves per unit frequency interval $dE/df$. We find that the spectrum gradually drops below the point-mass inspiral value near the frequency at which the dynamical instability sets in; the instability causes the stars to spiral together faster than they would on point-mass trajectories. The spectrum then drops sharply near the frequency at which the waves from the main coalescence burst fall off. Finally, the spectrum rises again to a secondary peak at larger frequencies, the result of oscillations that occur during the merger. The frequencies at which these features in the spectrum occur, as well as their amplitudes, depend on both the neutron star radius $R$ and the equation of state specified by the polytropic index $n$. Our standard model, Run 1, has $R=10 {\rm km}$ and $n=1$. When we change just the radius in Run 2 to $R = 15 {\rm km}$, the spectral features occur at frequencies that are lower by a factor $\sim 1.8$ and the ``gentler'' merger leads to a much lower amplitude in the energy spectrum, both near $f_{\rm peak}$ and in the secondary maximum. When instead we change just the polytropic index in Run 3 to $n = 0.5$, the features occur at frequencies that are a factor $\sim 1.2$ lower. The stiffer equation of state results in a longer-lived bar and a substantially stronger peak amplitude.
Measurement of the three frequencies $f_{\rm dyn}$, $f_{\rm peak}$ and $f_{\rm sec}$, along with the amplitudes of the spectrum there (relative to the point-mass result), thus may be used to obtain direct information about the physical state of the merging neutron stars. While the details of the peaks depend somewhat on the resolution of our simulations, the general results described here do not. The gravitational waveforms and the spectrum $dE/df$ contain much information about the hydrodynamical merger and coalescence of binary neutron stars. Our results show that the characteristic frequencies depend on both the neutron star radii and the polytropic equation of state. We intend to expand our study to include the effects of both spin and non-equal masses, as well as gravitational radiation reaction. In particular, radiation reaction can be expected to affect the evolution of the rotating bar in Run 3, leading to changes in the spectrum $dE/df$. We will present the results of these studies in future papers. \acknowledgments We thank K. Thorne for pointing out the importance of the energy spectrum $dE/df$ and encouraging this work. We also thank M. Davies, D. Kennefick, D. Laurence, and K. Thorne for interesting and helpful conversations, and L. Hernquist for supplying a copy of TREESPH. This work was supported in part by NSF grants PHY-9208914 and AST-9308005, and by NASA grant NAGW-2559. The numerical simulations were run at the Pittsburgh Supercomputing Center.
\section{Introduction} $F$-manifolds were introduced in \cite{HM} as a unifying geometric scheme that encompasses several areas of modern Mathematics, ranging from the theory of Frobenius manifolds to special solutions of the oriented associativity equations (\cite{LoMa2}), from quantum $K$-theory (\cite{lee}) to differential-graded deformation theory (\cite{mer}). An $F$-manifold $M$ is a smooth (or analytic) manifold equipped with a commutative and associative product $\circ : TM \times TM \rightarrow TM$ on sections of the tangent bundle $TM$, such that $\circ$ is $C(M)$-bilinear ($C(M)$ is the ring of smooth or analytic functions on $M$) and such that \begin{equation}\label{HM}P_{X\circ Y}(Z, W)=X\circ P_{Y}(Z,W)+Y\circ P_X(Z, W),\end{equation} where $P_X(Z,W):=[X, Z\circ W]-[X,Z]\circ W-Z\circ[X, W].$ The condition \eqref{HM} is usually called the Hertling-Manin condition and it implies that the deviation of the structure $(TM, \circ, [\cdot, \cdot])$ from that of a Poisson algebra on $(TM, \circ)$ is not arbitrary. Usually $M$ is also required to be equipped with a distinguished vector field $e$, called unity or identity, such that for every vector field $X$, $X\circ e=X$. Since the operation $\circ$ is $C(M)$-bilinear and commutative, it can be identified with a tensor field $c: S^2(TM)\rightarrow TM.$ Once $c$ is locally written in a coordinate system as $c^i_{jk}:=\langle c(\partial_j, \partial_k),dx^i\rangle$, then the commutativity, the associativity and the Hertling-Manin condition \eqref{HM} translate respectively as \begin{eqnarray*} &c^i_{jk}=c^i_{kj},\\ &c^i_{jl}c^l_{km}=c^i_{kl}c^l_{jm},\\ &c^s_{im}\partial_s c^k_{jl}+c^k_{sl}\partial_j c^s_{im}-c^s_{jl}\partial_s c^k_{im}-c^k_{sm}\partial_i c^s_{jl}+c^k_{js}\partial_l c^s_{im}-c^k_{si}\partial_m c^s_{jl}=0.
\end{eqnarray*} An $F$-manifold $(M, \circ, e)$ is called \emph{semisimple} if locally $(TM, \circ)$ is isomorphic to $C(M)^n$ (where $n$ is the dimension of the manifold $M$) with componentwise multiplication. This means that locally there exists a distinguished coordinate system such that, if $X$ and $Y$ are vector fields given in components as $X=(X^1, \dots, X^n),$ $Y=(Y^1, \dots, Y^n)$, then $(X\circ Y)^i=X^i Y^i$. This is equivalent to saying that $c^i_{jk}=\delta^i_j \delta^i_k$ in this distinguished coordinate system (these are called {\em canonical coordinates} for $\circ$ whenever they exist). We will denote canonical coordinates with $\{u^1, \dots, u^n\}$. If $\circ$ is semisimple, then the identity vector field $e$ is given by $e=\sum_{i}\frac{\partial}{\partial u^i}$ in canonical coordinates for $\circ.$ A few years later, Manin introduced $F$-manifolds with compatible flat structure (\cite{manin}), which we call flat $F$-manifolds for simplicity. In particular, he proved that many constructions related to Frobenius manifolds, such as Dubrovin's duality, do not require the presence of a (pseudo)-metric satisfying the condition $g(X\circ Y, Z)=g(X, Y\circ Z)$ for all vector fields $ X,Y,Z$ (such metrics are said to be \emph{invariant}). \begin{defi}[\cite{manin}]\label{def111}A flat $F$-manifold $(M, \circ, \nabla, e)$ with identity is a manifold $M$ equipped with the following data: \begin{enumerate} \item a commutative associative product $\circ : TM \times TM \rightarrow TM$ on sections of the tangent bundle $TM$, \item a distinguished vector field $e$ such that $X\circ e=X$ for every vector field $X$, \item a flat torsionless affine connection $\nabla$, such that $\left(\nabla_X c\right)\left(Y,Z\right)=\left(\nabla_Y c\right)\left(X,Z\right)$ for all vector fields $X$, $Y$, and $Z$, \item $\nabla e=0$ (flat identity). \end{enumerate} A semisimple flat $F$-manifold is defined analogously, with the requirement that the operation $\circ$ is semisimple.
\end{defi} Observe that in Definition \ref{def111} there is no mention of the Hertling-Manin condition \eqref{HM}, since the symmetry condition on $\nabla c$ forces \eqref{HM} to be automatically satisfied (see \cite{hert} for a proof). In any coordinate system, this condition reads $\nabla_i c^k_{lj}=\nabla_l c^k_{ij}.$ Let us also mention that the condition $\nabla e=0$ is the least important: in many cases it is possible to modify the connection $\nabla$, preserving the other properties, in such a way as to fulfill the condition $\nabla e=0$ even when it does not hold for the original connection (see for instance the example of $\vee$-systems below). The role played by flat $F$-manifolds in the study of integrable systems has been investigated in \cite{LPR,LP}. Further generalizations of these structures that are well suited to the setting of integrable dispersionless PDEs have been proposed in \cite{ALimrn, AL, L2014}. In this paper, following similar ideas, we introduce and study what we call {\em semisimple multi-flat $F$-manifolds}. They are a natural generalization of semisimple bi-flat $F$-manifolds (see \cite{AL, L2014}) and they are deeply related to the notion of eventual identities and duality introduced in \cite{manin}. In order to define multi-flat $F$-manifolds we need to recall a few facts about eventual identities: \begin{defi}[\cite{manin}] A vector field $E$ on an $F$-manifold is called an \emph{eventual identity} if it is invertible with respect to the product $\circ$, and if the bilinear product $*$ defined via \begin{equation}\label{nm} X *Y := X \circ Y \circ E^{-1},\qquad \text{ for all } X, Y \text{ vector fields} \end{equation} defines a new $F$-manifold structure on $M$. If $E$ satisfies the additional condition $[e,E]=e$ then it is called an \emph{Euler vector field}. \end{defi} By definition, an eventual identity is the unity of the associated product $*$.
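As a concrete illustration, a product that is diagonal in some coordinate system with non-constant coefficients, $(X\circ Y)^i=g_i(u^i)X^iY^i$ (the form taken by a dual product \eqref{nm} in canonical coordinates), can be checked symbolically against the Hertling-Manin condition \eqref{HM}, computing $P_X$ directly from its definition. The choice $g_i=1/u^i$ and the polynomial test vector fields below are ours; the sketch uses sympy:

```python
import sympy as sp

n = 2
u = sp.symbols('u1 u2')
g = [1 / u[0], 1 / u[1]]               # diagonal coefficients g_i(u^i)

def prod(X, Y):                        # (X o Y)^i = g_i X^i Y^i
    return [g[i] * X[i] * Y[i] for i in range(n)]

def bracket(X, Y):                     # Lie bracket [X, Y] of vector fields
    return [sum(X[j] * sp.diff(Y[i], u[j]) - Y[j] * sp.diff(X[i], u[j])
                for j in range(n)) for i in range(n)]

def P(X, Z, W):                        # P_X(Z,W) = [X, ZoW] - [X,Z]oW - Zo[X,W]
    t1 = bracket(X, prod(Z, W))
    t2 = prod(bracket(X, Z), W)
    t3 = prod(Z, bracket(X, W))
    return [t1[i] - t2[i] - t3[i] for i in range(n)]

# Arbitrary polynomial test vector fields (our choice)
X = [u[0]**2, u[0] * u[1]]
Y = [u[1], u[0] + u[1]**2]
Z = [u[0], u[1]]
W = [u[1]**2, u[0]]

lhs = P(prod(X, Y), Z, W)
rhs = [prod(X, P(Y, Z, W))[i] + prod(Y, P(X, Z, W))[i] for i in range(n)]
residual = [sp.cancel(lhs[i] - rhs[i]) for i in range(n)]
print(residual)   # [0, 0]: the Hertling-Manin condition holds
```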
A useful criterion to detect eventual identities is the following: \begin{thm}[\cite{DS}] An invertible vector field $E$ is an eventual identity for the $F$-manifold $(M, \circ, e)$ if and only if \begin{equation}\label{DScond} {\rm Lie}_E(\circ)(X,Y)=[e,E]\circ X\circ Y,\qquad\forall X,Y \text{ vector fields. } \end{equation} \end{thm} In the semisimple case, it is actually easier to characterize eventual identities. We have indeed the following theorem. \begin{thm}[\cite{ALimrn}]\label{theoremDSNijenhuis} Let $(M, \circ, e)$ be a semisimple $F$-manifold, let $E$ be an invertible vector field, and assume that the eigenvalues of the endomorphism of the tangent bundle $V=E\,\circ$ are distinct. Then condition \eqref{DScond} is equivalent to the vanishing of the Nijenhuis torsion of $V$. \end{thm} In other words, in canonical coordinates for $\circ$, eventual identities are vector fields of the form $$E=\sum_{i=1}^n E^i(u^i)\frac{\partial}{\partial u^i},$$ and the product $*$ has associated structure constants $c^{*i}_{jk}$ (again in canonical coordinates for $\circ$) given by: \begin{equation}\label{dualp} c^{*i}_{jk}=\frac{1}{E^i(u^i)}\delta^i_j\delta^i_k. \end{equation} We now have all the ingredients to define (semisimple) multi-flat $F$-manifolds. \begin{defi}\label{multiflatdefi} Let $(M, \nabla, \circ, e)$ be a (semisimple) flat $F$-manifold with unity $e$. A \emph{multi-flat (semisimple)} $F$-manifold $(M,\nabla^{(l)},\circ, e, E,\ l=0,\dots,N-1)$ anchored at $(M, \nabla, \circ, e)$ is a manifold $M$ endowed with $N$ flat torsionless affine connections $\nabla^{(0)}:=\nabla,\, \nabla^{(1)},\dots,\nabla^{(N-1)}$, a commutative associative (semisimple) product $\circ$ on sections of the tangent bundle $TM$, and an invertible vector field $E$ satisfying the following conditions: \begin{enumerate} \item $E$ is an Euler vector field (in the semisimple case we assume that the eigenvalues of $L:=E\circ$ are canonical coordinates for $\circ$).
\item Given $E_{(l)}:=E^{\circ l}=E\circ E\circ \dots \circ E$ ($l$ times), $l=0, \dots, N-1$ (by definition, $E_{(0)}=e$, $E_{(1)}=E$), we require $\nabla^{(l)}E_{(l)}=0.$ \item Given $E_{(l)}$ and the related commutative, associative product $\circ_{(l)}$ (defined as $X\circ_{(l)}Y:=X\circ Y \circ E^{-1}_{(l)},$ so that $\circ_{(0)}=\circ$ and $\circ_{(1)}=*$), we require that the connection $\nabla^{(l)}$ is compatible with $\circ_{(l)}$. In other words, we require that \begin{equation}\label{scc} \left(\nabla^{(l)}_X c_{(l)}\right)\left(Y,Z\right)=\left(\nabla^{(l)}_Y c_{(l)}\right)\left(X,Z\right), \end{equation} for all vector fields $X$, $Y$, and $Z$, and for all $l=0, \dots, N-1$. \item The connections $\nabla^{(l)}$, $l=0, \dots, N-1$, are almost hydrodynamically equivalent (see \cite{AL}), i.e. \begin{equation}\label{almostcomp} (d_{\nabla^{(l)}}-d_{\nabla^{(l')}})(X\,\circ_{(l)})=0, \end{equation} for every vector field $X$ and for every pair $l, l'$; here $d_{\nabla^{(l)}}$ is the exterior covariant derivative constructed from the connection $\nabla^{(l)}$. \end{enumerate} \end{defi} \begin{rmk} The last condition must be checked only for $l=0$. Indeed, due to the invertibility of the operator $E_{(l)}^{-1}\circ$, the condition \eqref{almostcomp} is equivalent to the condition $$ (d_{\nabla^{(l)}}-d_{\nabla^{(l')}})(X\,\circ)=0,\qquad\forall X, $$ which clearly follows from the condition $$ (d_{\nabla}-d_{\nabla^{(l)}})(X\,\circ)=0,\qquad\forall X. $$ \end{rmk} \begin{rmk} $M$ in Definition \ref{multiflatdefi} is a real or complex $n$-manifold. In the latter case $TM$ is intended as the holomorphic tangent bundle and all the geometric data are supposed to be holomorphic. \end{rmk} \begin{rmk} The powers of the Euler vector field are eventual identities. This follows from the fact that eventual identities form a subgroup of the group of invertible vector fields on an $F$-manifold \cite{DS}.
The above definition can be easily generalized by substituting the powers of the Euler vector field with general eventual identities. \end{rmk} \begin{rmk} In the semisimple case, the definition $E_{(l)}:=E^{\circ l}$ implies that in canonical coordinates for $\circ$ the products $\circ_{(l)}$, $l=1, \dots, N-1$, have associated tensor representatives $(c_{(l)})^i_{jk}=\frac{1}{E^i_{(l)}(u^i)}\delta^i_j\delta^i_k$ with $E^i_{(l)}(u^i)=(u^i)^l$. Furthermore, the condition that the connections are almost hydrodynamically equivalent reduces in canonical coordinates to (see \cite{AL}): \begin{equation} \Gamma^{i}_{ij}=\Gamma^{(1)i}_{ij}=\dots=\Gamma^{(N-1)i}_{ij}. \end{equation} \end{rmk} In the first part of the paper we will study semisimple $F$-manifolds endowed with $N$ flat structures. In principle $N$ might be arbitrary; however, we will see that the coexistence of more than $3$ flat structures is in general impossible. The case of two structures has been studied in detail in \cite{AL,L2014}. It turns out that three-dimensional bi-flat $F$-manifolds are parametrized by solutions of the Painlev\'e VI equation. In this paper we will find an alternative parametrization, in terms of the solutions of a system of $6$ ODEs admitting $5$ first integrals. We will also study in detail the case of tri-flat $F$-manifolds in the 3-component case. For more components, due to the appearance of some functional parameters, the situation becomes more involved. We will find a class of solutions parametrized by hypergeometric functions. In the second part of the paper we will consider the non-semisimple case. First we will study regular non-semisimple bi-flat $F$-manifolds, leveraging the recent results obtained in \cite{DH} unveiling a deep relation between regular bi-flat $F$-manifolds in dimension three on one side, and the full Painlev\'e equations P$_{VI}$, P$_{V}$ and P$_{IV}$ on the other.
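The semisimple picture of the preceding remarks can be tested symbolically: in canonical coordinates $V_{(l)}=E_{(l)}\,\circ$ is the diagonal operator with entries $(u^i)^l$, and by Theorem \ref{theoremDSNijenhuis} its Nijenhuis torsion must vanish, confirming that the powers of the Euler vector field are eventual identities. A sympy sketch (ours; the component formula $N^i_{jk}=V^s_j\partial_s V^i_k-V^s_k\partial_s V^i_j-V^i_s(\partial_j V^s_k-\partial_k V^s_j)$ is the standard one):

```python
import sympy as sp

n = 3
u = sp.symbols('u1 u2 u3')
l = sp.Symbol('l', positive=True)
# Diagonal operator V^i_j = (u^i)^l delta^i_j, i.e. V = E_{(l)} o
# in canonical coordinates for the Euler vector field E = sum u^i d_i.
V = [[u[i]**l if i == j else sp.Integer(0) for j in range(n)] for i in range(n)]

def nijenhuis(V):
    """Nijenhuis torsion N^i_{jk} of a (1,1) tensor field V (components)."""
    return [[[sp.simplify(sum(
        V[s][j] * sp.diff(V[i][k], u[s])
        - V[s][k] * sp.diff(V[i][j], u[s])
        - V[i][s] * (sp.diff(V[s][k], u[j]) - sp.diff(V[s][j], u[k]))
        for s in range(n)))
        for k in range(n)] for j in range(n)] for i in range(n)]

N = nijenhuis(V)
print(all(N[i][j][k] == 0
          for i in range(n) for j in range(n) for k in range(n)))  # True
```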
More precisely, regular bi-flat $F$-manifolds are characterized by the Jordan normal form of the operator $L=E\circ$. For three-dimensional manifolds, this gives rise to three cases, corresponding to $L_1, L_2$ and $L_3$ given by: $$L_1:=\left( \begin{array}{ccc} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{array}\right), \quad \quad L_2:=\left( \begin{array}{ccc} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_3 \end{array}\right), \quad \quad L_3:=\left( \begin{array}{ccc} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 1\\ 0 & 0 & \lambda_1 \end{array}\right),$$ (here $\lambda_i$ with different indices are assumed to be distinct). Regular bi-flat $F$-manifolds in dimension three whose endomorphism $L$ has the form $L_1$ are actually semisimple and, as recalled above, are locally parameterized by solutions of the full Painlev\'e VI. We will focus our attention on three-dimensional regular bi-flat $F$-manifolds whose operator $L$ has the form $L_2$ or $L_3$, and we will show that in the former case they are locally parameterized by solutions of the full P$_{V}$, while in the latter case they are locally parameterized by solutions of the full P$_{IV}.$ This highlights a striking parallelism between confluences of Painlev\'e equations and collisions of eigenvalues of the endomorphism $L$ (preserving regularity), a fact which in our opinion deserves further investigation. It would be definitely interesting to extend this correspondence beyond the regular case. Unfortunately, for the non-regular case there are at the moment no structural results similar to those developed in \cite{DH}. Let us remark that, to the best of our knowledge, this is the first time in which other Painlev\'e transcendents, besides Painlev\'e VI, appear in the analysis of geometric structures related to integrability or topological field theory.
Our work provides a clear indication that the other Painlev\'e equations might appear not only in the classification of non-semisimple bi-flat $F$-manifolds beyond the regular case treated here, but also in the analysis of non-semisimple Frobenius manifolds. We also point out that the approach championed in \cite{AL} and \cite{L2014} is based on the study of a generalized Darboux-Egorov system and cannot be applied to the non-semisimple case, while the methodology developed here does not require the semisimplicity of the product: its key ingredients are a geometric version of Tsarev's conditions of integrability, paired with a commutativity condition between the covariant derivative of the associated connections and the Lie derivative with respect to a set of eventual identities defining a subalgebra of the centerless Virasoro algebra. Finally, in the second part of the paper we show the remarkable phenomenon that, while in the semisimple case there are in general obstructions to the existence of multi-flat $F$-manifolds, in the regular non-semisimple case it is possible to construct multi-flat $F$-manifolds with an {\em arbitrary} (countable) number of compatible flat connections (all the powers of the Euler vector field). This fact is in striking contrast to the semisimple situation, where the number of simultaneous compatible flat structures is severely limited. This is the first example of an $F$-manifold equipped with an infinite collection of non-trivial compatible flat structures. The paper is structured as follows. In Section \ref{structuresec} we discuss the relations between geometric structures appearing in the study of $F$-manifolds and integrable dispersionless PDEs. We introduce Tsarev's conditions, which will be essential to determine multi-flat $F$-structures once they are coupled with the necessary conditions determined in Section \ref{multiflatnesssec}.
In Section \ref{examplesflatbiflatsec} we discuss some examples of flat and bi-flat $F$-manifolds. In particular, we show that the theory of Lauricella structures recently developed in \cite{CHL, Looijenga} supports non-trivial products in the sense of Manin. These structures are related to the flat and bi-flat structures of the generalized $\epsilon$-system \cite{LP,AL,L2014}. In Section \ref{multiflatnesssec} we provide necessary conditions for the existence of multi-flat structures. In Section 5 we discuss the semisimple case, proving that $N$-flat structures with $N>3$ cannot exist in general. In Section \ref{biflatsec} we couple the necessary conditions for the existence of multi-flat structures found in the previous Section with Tsarev's conditions. This allows us to study in detail bi-flat $F$-manifolds in dimension $2$ and $3$; in particular, we find that bi-flat $F$-manifolds in dimension $3$ are parametrized by the solutions of a nonlinear non-autonomous system of first order quadratic ODEs, possessing $5$ independent integrals of motion. Moreover, we construct a one-parameter family of maps, each of which associates to a given solution of this system of ODEs a solution of the Painlev{\'e} VI equation. In Section \ref{triflatsec} we analyze tri-flat $F$-manifolds, construct a system of ODEs that parametrizes them and find some special solutions of this system given by hypergeometric functions. In Section \ref{structuresec2} we study non-semisimple regular bi-flat $F$-manifolds in dimension three according to the form of $L$. We provide a local model for these manifolds in the ``canonical coordinates'' provided by \cite{DH} and show that the geometric data are controlled by two systems of ODEs depending on the form of $L$. We give a detailed proof that these systems reduce in one case to the full P$_{V}$ and in the other to the full P$_{IV}.$ Although the proof of the reduction is completely elementary, it is computationally highly non-trivial.
In Section 9 we construct examples of non-semisimple regular tri-flat and multi-flat $F$-manifolds in dimension three (under the assumption that $L=E\circ$ has only one Jordan block). These examples show that the existence of multi-flat structures in the non-semisimple case is unexpectedly much richer than in the semisimple case. In particular we show that in this situation it is possible to construct multi-flat $F$-manifolds with an arbitrary number of compatible flat structures. This happens essentially because once these $F$-manifolds are equipped with a quadri-flat structure, they are automatically equipped with infinitely many. \section{Flat $F$-manifolds and Integrable dispersionless PDEs}\label{structuresec} In this section, we survey the relationships between $F$-manifolds, flat $F$-manifolds and other geometric structures on one hand, and the theory of integrable dispersionless PDEs on the other. We also introduce Tsarev's conditions, which play a key role in determining multi-flat $F$-structures. According to Tsarev's theory \cite{ts1,ts2}, integrable quasilinear systems of PDEs of the form \begin{equation}\label{shs} u^i_t=v^i(u)u^i_x,\qquad i=1,...,n \end{equation} are defined by a set of functions $\Gamma^i_{ij}$ ($i\ne j$) satisfying the conditions (called {\em Tsarev's conditions}) \begin{eqnarray} \label{rt1} \partial_j\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^i_{ik}-\Gamma^i_{ik}\Gamma^k_{kj} -\Gamma^i_{ij}\Gamma^j_{jk}=0, \quad \mbox{if $i\ne k\ne j\ne i$}. \end{eqnarray} Once the conditions \eqref{rt1} are satisfied, the solutions of the system \begin{equation}\label{sym} \partial_j v^i=\Gamma^i_{ij}(v^j-v^i) \end{equation} define a set (depending on functional parameters) of commuting flows of the form \eqref{shs}.
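Tsarev's conditions \eqref{rt1} can be checked mechanically on explicit families. As an illustration (the numerical data below are an arbitrary choice of ours, and the family $\Gamma^i_{ij}=\epsilon_j/(u^i-u^j)$ is the one underlying the generalized $\epsilon$-system discussed in Section \ref{examplesflatbiflatsec}), the following sketch verifies \eqref{rt1} in exact rational arithmetic; since each $\Gamma^i_{ik}$ depends only on $u^i$ and $u^k$, the derivative term $\partial_j\Gamma^i_{ik}$ vanishes for the pairwise distinct index triples involved.

```python
from fractions import Fraction as F
from itertools import permutations

# Sample data: arbitrary pairwise distinct u^i and weights eps_i
# (purely for illustration).
u = [F(1), F(2), F(5), F(7)]
eps = [F(1, 2), F(1, 3), F(1, 5), F(1, 7)]

def gamma(i, j):
    # Gamma^i_{ij} = eps_j / (u^i - u^j), i != j.
    return eps[j] / (u[i] - u[j])

def tsarev_lhs(i, j, k):
    # Left-hand side of (rt1) for pairwise distinct i, j, k.  Here
    # Gamma^i_{ik} depends only on u^i and u^k, so d_j Gamma^i_{ik} = 0
    # and only the quadratic part remains.
    return (gamma(i, j) * gamma(i, k)
            - gamma(i, k) * gamma(k, j)
            - gamma(i, j) * gamma(j, k))

# Exact check over all pairwise distinct index triples.
assert all(tsarev_lhs(i, j, k) == 0 for i, j, k in permutations(range(4), 3))
```

The cancellation is an exact rational identity, independent of the chosen values of $u^i$ and $\epsilon_i$.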
From \eqref{rt1} it follows that the solutions of \eqref{sym} satisfy the conditions \begin{equation} \label{sh} \partial_j\left(\frac{\partial_k v^i}{v^i-v^k}\right)= \partial_k\left(\frac{\partial_j v^i}{v^i-v^j}\right)\hspace{1 cm}\forall i\ne j\ne k\ne i. \end{equation} Conversely, given $v^i$ satisfying \eqref{sh} and using \eqref{sym} as definition of $\Gamma^i_{ij}$, the compatibility conditions \eqref{rt1} are automatically satisfied. Quasilinear systems satisfying conditions \eqref{sh} are called \emph{semi-Hamiltonian} \cite{ts1,ts2} or \emph{rich} \cite{serre1,serre2}. Sevennec \cite{sevennec} later found a nice characterization of semi-Hamiltonian systems: he showed that they coincide with diagonalizable systems of conservation laws. As the notation suggests, the functions $\Gamma^i_{ij}$ can be identified with (part of) the coefficients of a symmetric connection $\nabla$. The reconstruction of $\nabla$ can be done in essentially two non-equivalent ways. In the first case, we call the connection $\nabla$ a {\em Hamiltonian connection}. In this case, $\nabla$ is the Levi-Civita connection of a diagonal metric $g$: \begin{equation}\label{metric} \partial_j\ln{\sqrt{g_{ii}}}=\Gamma^i_{ij},\qquad j\ne i. \end{equation} Given a diagonal metric $g$ for which the functions $\Gamma^i_{ij}$ satisfy the above conditions, all the remaining Christoffel symbols are uniquely defined through the classical Levi-Civita formula. However, as it is easy to check, the general solution of \eqref{metric} depends on $n$ arbitrary functions of a single variable: if $g_{ii}$ is a solution then $\varphi_i(u^i)g_{ii}$ is still a solution. The connections defined by \eqref{metric} have been introduced by Dubrovin and Novikov in \cite{DN}. We call them Hamiltonian connections since they are related to the Hamiltonian formalism. For instance, in the flat case (i.e.
when $\nabla$ is flat), the differential operator \begin{equation} P^{ij}:=g^{ii}\delta^i_j\partial_x-g^{il}\Gamma^j_{lk}u^k_x \end{equation} defines a local Hamiltonian operator for the flows \eqref{shs} defined by the solutions of \eqref{metric}. The non-flat case is more involved: the Hamiltonian operators are non-local and the non-local tail is related to the quadratic expansion of the Riemann tensor in terms of solutions of the system \eqref{sym}: $$R^{ij}_{ij}=\sum_{\alpha}\epsilon_{\alpha}w^i_{\alpha}w^j_{\alpha}.$$ The existence of this quadratic expansion is a non-trivial property. It was conjectured by Ferapontov \cite{F} that all solutions of the system \eqref{metric} possess such a property. Ferapontov's conjecture has been checked for reductions of dKP and 2d Toda in \cite{GLR} and \cite{CLR}. The other way to reconstruct a torsionless affine connection $\nabla$ having $\Gamma^i_{ij}$ as a subset of its Christoffel symbols in a distinguished coordinate system was devised in \cite{LP}. This leads to the notion of natural connections and of $F$-manifolds with compatible connection and flat unity \cite{LPR}. An {\em $F$-manifold with compatible connection and flat unity} is a semisimple $F$-manifold $(M, \circ, e)$ equipped with a torsionless connection $\nabla$ (not necessarily flat) such that the following requirements hold: \begin{eqnarray*} &&Z\circ R(W,Y)(X)+W \circ R(Y,Z)(X)+Y \circ R(Z,W)(X)=0,\\ &&\left(\nabla_X c\right)\left(Y,Z\right)=\left(\nabla_Y c\right)\left(X,Z\right),\\ &&\nabla e=0, \end{eqnarray*} where $R$ in the first condition above is the Riemann tensor and $X, Y, Z, W$ are arbitrary vector fields. Connections satisfying these conditions are called {\em natural connections}.
In the distinguished coordinate system given by the canonical coordinates of $\circ$, the first and second requirements imply Tsarev's condition \eqref{rt1} for $\Gamma^i_{ij}$ (\cite{LPR}), while the second and third ones provide additional conditions that completely specify all the other Christoffel symbols. Indeed, in canonical coordinates for $\circ$, given $\Gamma^i_{ij}$, the last two requirements above for $\nabla$ are equivalent to \begin{equation}\label{naturalc} \begin{split} \Gamma^{i}_{jk}&:=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{i}_{jj}&:=-\Gamma^{i}_{ij},\qquad i\ne j,\\ \Gamma^{i}_{ii}&:=-\sum_{l\ne i}\Gamma^{i}_{li}. \end{split} \end{equation} Let us remark that the condition $\nabla e=0$ in the definition of natural connection $\nabla$ is the least essential and can be dropped; as we have remarked in the Introduction it is not too restrictive, at least in some examples, since deforming the connection $\nabla$ with the product $\circ$ one can obtain $\nabla e=0$ for the deformed connection, while preserving the other two conditions. In this framework the quasilinear system \eqref{shs} can be written as \begin{equation}\label{shs2} u_t=X\circ u_x \end{equation} and the system \eqref{sym} reads \begin{equation}\label{sym2} c^i_{jl}\nabla_k X^l=c^i_{kl}\nabla_j X^l. \end{equation} In this setting the characteristic velocities $v^i$ are thought of as the components of the vector field $X$ in canonical coordinates. Since the Riemann invariants are identified with the canonical coordinates, given a semi-Hamiltonian system, the associated natural connection is defined up to a reparameterization of the Riemann invariants, that is up to the choice of $n$ arbitrary functions of a single variable. As in the case of Hamiltonian connections, the most interesting case is when the connection $\nabla$ is flat (we have discussed these manifolds in the Introduction; they are the flat $F$-manifolds introduced by Manin in \cite{manin}).
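The completion rules \eqref{naturalc} indeed force $\nabla e=0$: in canonical coordinates $e=\sum_j\partial_j$ has constant components, so $(\nabla_{\partial_i}e)^k=\sum_j\Gamma^k_{ij}$, and this sum vanishes identically. A minimal sketch (the off-diagonal symbols $\Gamma^i_{ij}$ are generated at random, purely for illustration):

```python
from fractions import Fraction as F
import random

random.seed(0)
n = 4
# Arbitrary off-diagonal data Gamma^i_{ij} (i != j), as in Tsarev's setup.
G = {(i, j): F(random.randint(1, 9), random.randint(1, 9))
     for i in range(n) for j in range(n) if i != j}

def christoffel(k, i, j):
    # Full symbols Gamma^k_{ij} reconstructed via (naturalc).
    if i != j and j != k and k != i:
        return F(0)
    if i == j != k:                      # Gamma^k_{jj} = -Gamma^k_{kj}
        return -G[(k, j)]
    if i == j == k:                      # Gamma^i_{ii} = -sum_{l!=i} Gamma^i_{li}
        return -sum(G[(i, l)] for l in range(n) if l != i)
    return G[(i, j)] if i == k else G[(j, i)]   # Gamma^i_{ij} = Gamma^i_{ji}

def nabla_e(i, k):
    # k-th component of nabla_{partial_i} e, with e = (1,...,1).
    return sum(christoffel(k, i, j) for j in range(n))

assert all(nabla_e(i, k) == 0 for i in range(n) for k in range(n))
```

The check holds for any choice of the off-diagonal data, since the cancellations follow directly from the three rules in \eqref{naturalc}.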
In this case, a countable set of solutions of the system \eqref{sym} can be obtained from a frame of flat vector fields $(X_{(1,0)},...,X_{(n,0)})$ via the following recursive relations (here $d_{\nabla}$ is the exterior covariant derivative) \begin{equation}\label{rr} d_{\nabla}X_{(p,\alpha+1)}=X_{(p,\alpha)}\circ. \end{equation} In flat coordinates the flows of the principal hierarchy are systems of conservation laws. Due to \eqref{rr}, the current associated to the ``time'' $t_{(p,\alpha)}$ is given by the vector field $X_{(p,\alpha+1)}$. In general the two ways we just described to reconstruct a torsionless connection $\nabla$ starting from the functions $\Gamma^i_{ij}$ satisfying \eqref{rt1} are inequivalent: Hamiltonian connections are not natural connections and natural connections are not Hamiltonian. Indeed, combining the conditions $\nabla g=0$ and $\nabla_i c^k_{lj}=\nabla_l c^k_{ij}$, one obtains $\partial_j g_{ii}=\partial_i g_{jj}$, which implies that in order to have a connection which is both Hamiltonian and natural, the metric must be potential in canonical coordinates (Egorov case). This is for instance the case of semisimple Frobenius manifolds. In many examples (including Frobenius manifolds), besides the recursive relation \eqref{rr} there exists an additional one, which we called the twisted Lenard-Magri chain (see \cite{ALimrn}) \begin{equation}\label{rr2} d_{\nabla^{(1)}}(e\circ X_{(p,\alpha+1)})=d_{\nabla^{(2)}}(E\circ X_{(p,\alpha)}). \end{equation} It is based on the existence of an additional flat structure and on an eventual identity $E$. This leads naturally to define the class of bi-flat $F$-manifolds that was extensively studied in \cite{AL,L2014}. \begin{rmk} In the semisimple case, removing the condition $\nabla e=0$ in the definition of natural connections one has the freedom to choose the Christoffel symbols $\Gamma^i_{ii}$ \cite{LP}.
The same freedom can also be described in terms of the special family of connections \cite{DS2} $$\tilde\nabla_X Y=\nabla_X Y+V\circ X\circ Y.$$ This is a family of connections satisfying the symmetry condition (1.5) (the product is not assumed to be semisimple). As in the semisimple case, the condition $\tilde\nabla e=0$ uniquely fixes the vector field $V$. \end{rmk} \section{Examples of flat and bi-flat $F$-manifolds}\label{examplesflatbiflatsec} In this section we present some examples of flat and bi-flat $F$-manifolds. Despite their variety, these examples are all related to integrable systems. \subsection{$\vee$-systems}\label{veesystem} $\vee$-systems were introduced by A. Veselov in \cite{Ve} to construct new solutions of generalized WDVV equations, starting from a special set of covectors. We want to point out that it is always possible to construct a flat $F$-manifold starting from a $\vee$-system. First we recall the notion of $\vee$-system (see \cite{Ve}). Let $V$ be a finite dimensional vector space and let $V^*$ be its dual. Let $\mathcal{V}$ be a finite set of non-collinear covectors $\alpha\in V^*$ with the property that they span $V^*$. This means that the symmetric bilinear form defined by $G^{\mathcal{V}}:=\sum_{\alpha\in \mathcal{V}} \alpha\otimes \alpha$ is non-degenerate. The non-degeneracy of $G^{\mathcal{V}}$ is equivalent to requiring that the map $\phi_{\mathcal{V}}: V\rightarrow V^*$ defined by the formula $$(\phi_{\mathcal{V}}(u))(v):=G^{\mathcal{V}}(u,v), \quad u, v\in V,$$ is invertible.
In this context, for each covector $\alpha$ it is possible to define the vector $\check \alpha\in V$ as \begin{equation}\label{checkalpha} \check{\alpha}:=\phi^{-1}_{\mathcal{V}}(\alpha), \quad \alpha\in V^*, \end{equation} or, equivalently, as the unique vector in $V$ such that \begin{equation}\label{checkalpha2}\alpha=G^{\mathcal{V}}(\cdot, \check{\alpha}).\end{equation} The finite spanning set $\mathcal{V}\subset V^*$ is called a $\vee$-system if for each two-dimensional plane $\Pi\subset V^*$ one has \begin{equation}\label{veeequation} \sum_{\beta\in \Pi\cap \mathcal{V}}\beta(\check{\alpha})\check{\beta}=\lambda \check{\alpha}, \end{equation} for each $\alpha \in \Pi\cap \mathcal{V}$ and for some $\lambda$, which may depend on $\Pi$ and $\alpha$. In this case, the (contravariant) metric is given by $$\check{G}=\sum_{\check{\alpha}\in\mathcal{V}}\check{\alpha}\otimes\check{\alpha},$$ while the product $\circ$ is defined by the following formula: \begin{equation}\label{provsys}(X\circ Y)_u=\sum_{\alpha\in \mathcal{V}}\frac{\alpha(X) \alpha(Y)\, \check{\alpha}}{\alpha(u)}.\end{equation} The product \eqref{provsys} is clearly commutative and the $\vee$-conditions guarantee that it is also associative. The flat connection $\nabla$ is the Levi-Civita connection associated to the standard flat metric obtained by inverting $\check{G}$, and it satisfies $\nabla_l c^i_{jk}=\nabla_j c^i_{lk}$. Indeed, in flat coordinates we have $$\partial_l c^i_{jk}=-\sum_{\alpha \in \mathcal{V}}\frac{\alpha_j \alpha_k \alpha_l \check{\alpha}^i}{(\alpha(u))^2},$$ which is symmetric in $l,j,k$, so the condition holds. Therefore, in order to have a flat $F$-manifold we need to check that $\nabla e=0$, where $e$ is the unit vector field of \eqref{provsys}.
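Before doing so, a small numerical sanity check of these formulas may be useful. In two dimensions the only two-dimensional plane is $V^*$ itself and the $\vee$-conditions \eqref{veeequation} hold automatically (with $\lambda=1$), so any spanning set of pairwise non-collinear covectors should yield a commutative and associative product. The sketch below (with an arbitrarily chosen set of three covectors, our own example) verifies this, together with the fact that $\sum_\alpha \check\alpha\otimes\alpha$ is the identity endomorphism, in exact rational arithmetic:

```python
from fractions import Fraction as F

# Hypothetical planar vee-system: pairwise non-collinear spanning covectors.
covectors = [(F(1), F(0)), (F(0), F(1)), (F(1), F(1))]
n = 2

# G^V_{ij} = sum_alpha alpha_i alpha_j, and its inverse (2x2, by hand).
Gm = [[sum(a[i] * a[j] for a in covectors) for j in range(n)] for i in range(n)]
det = Gm[0][0] * Gm[1][1] - Gm[0][1] * Gm[1][0]
Ginv = [[Gm[1][1] / det, -Gm[0][1] / det], [-Gm[1][0] / det, Gm[0][0] / det]]

def check(a):
    # check(alpha) = G^{-1} alpha, the vector dual to the covector alpha.
    return tuple(sum(Ginv[i][j] * a[j] for j in range(n)) for i in range(n))

def pair(a, X):
    return sum(a[i] * X[i] for i in range(n))

def prod(X, Y, u):
    # (X o Y)_u from formula (provsys).
    out = [F(0)] * n
    for a in covectors:
        w = pair(a, X) * pair(a, Y) / pair(a, u)
        for i in range(n):
            out[i] += w * check(a)[i]
    return tuple(out)

# sum_alpha check(alpha) (x) alpha is the identity endomorphism.
assert all(sum(check(a)[i] * a[j] for a in covectors) == (1 if i == j else 0)
           for i in range(n) for j in range(n))

u = (F(1), F(2))                 # a point where alpha(u) != 0 for all alpha
X, Y, Z = (F(2), F(3)), (F(5), F(-1)), (F(1), F(4))
assert prod(X, Y, u) == prod(Y, X, u)                          # commutativity
assert prod(prod(X, Y, u), Z, u) == prod(X, prod(Y, Z, u), u)  # associativity
```

In higher dimensions the $\vee$-conditions are genuinely restrictive, and associativity would fail for a generic choice of covectors.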
It is immediate to see that the unity for the product \eqref{provsys} is given by the vector field $e_u:=(u^1,\dots, u^n)$ since for every vector field $X$ $$(X\circ e)_u:=\sum_{\alpha\in \mathcal{V}}\frac{\alpha(X) \alpha(u)\, \check{\alpha}}{\alpha(u)}=X_u,$$ due to the fact that $\sum_{\alpha\in \mathcal{V}}\alpha \otimes \check{\alpha}$ is equal to the identity endomorphism. The unity $e$ does not satisfy in general the condition $\nabla e=0$, but it is always possible to modify the Christoffel symbols of the Euclidean structure with the structure constants $c^i_{lp}$ so that for the new connection $\tilde \nabla$ one has $\tilde \nabla e=0$. Indeed, for any vector field $V=v^i \partial_i$ we get $$\tilde\nabla_V e=\left(v^i u^j \tilde\Gamma^k_{ij}+v^i \frac{\partial u^k}{\partial u^i}\right)\partial_k.$$ We have $\tilde\nabla_V e=0$ iff $v^i u^j\tilde\Gamma^k_{ij}+v^k=0$ for any choice of the vector field $V$. This gives the condition $u^j \tilde\Gamma^k_{ij}=-\delta^k_i$. The last condition says that $e$ behaves like the unity for the product with structure constants $-\tilde \Gamma^k_{ij}$, so it is natural to choose $\tilde \Gamma^k_{ij}=-c^k_{ij}$. To get a flat $F$-manifold, we also need to check that $\tilde \nabla_l c^i_{jk}=\tilde\nabla_j c^i_{lk}$ and that the modified connection $\tilde\nabla$ is still flat. Both properties follow from the associativity of the product and from the condition $\nabla_l c^i_{jk}=\nabla_j c^i_{lk}$. Let us point out that $\vee$-systems are also related to purely non-local Hamiltonian structures (see \cite{AL2014JMP}). \subsection{Semisimple Frobenius manifolds}\label{frobsubsection} As we mentioned before, Frobenius manifolds have a compatible flat structure which is the Levi-Civita connection of an invariant metric $g$. They also possess a second flat metric, called the intersection form, that we denote by $\tilde{g}$.
In canonical coordinates for a product $\circ$ compatible with $g$, the two metrics are related by the simple formula $$\tilde{g}^{ii}=u^ig^{ii}, \forall i.$$ This implies that the Christoffel symbols $\Gamma^i_{ij}$ (with $i\ne j$) are the same. Moreover the Levi-Civita connection of the intersection form is compatible with the dual product $*$ whose structure constants are given by $$c^{*i}_{jk}=\frac{1}{u^i}\delta^i_j\delta^i_k.$$ In general the unity of the dual product is not flat with respect to the Levi-Civita connection of the intersection form. However it is always possible, modifying it in a suitable way (in particular modifying the Christoffel symbols $\Gamma^i_{ii}$), to get a second flat connection which also satisfies this further property. We will discuss this point in more detail later. \subsection{Lauricella connections, $W(A_n)$ $\vee$-systems and Lauricella bi-flat $F$-manifolds} An example of bi-flat $F$-manifold, which in general cannot be recast in the framework of Frobenius manifolds, is provided by the generalized $\epsilon$-system. In this case, the Christoffel symbols that determine the connection are given by: \begin{equation}\label{nablaepsilon} \Gamma^{i}_{ij}=\frac{\epsilon_j}{u^i-u^j}\qquad i\ne j, \end{equation} and in the coordinate system $\{u^1,\dots, u^n\}$ the structure constants of the product $\circ$ have the form $c^i_{jk}=\delta^i_j \delta^i_k.$ This case was treated in detail in \cite{LP} and \cite{L2014} and it is related to the Euler-Poisson-Darboux system \begin{equation}\label{ddl} dd_L k=dk\wedge da, \end{equation} where $L$ is an endomorphism of the tangent bundle given by $L={\rm diag}(u^1,\dots,u^n)$, $a$ is a function given by $a=\sum_{i=1}^n\epsilon_i u^i$ and $d_Lf(X)=(LX)(f)=df(LX)$, for every vector field $X$. Indeed one can write the solutions of the system \eqref{sym2} as $X^i=-\frac{\partial_i k}{\epsilon_i}$ where $k$ is a solution of \eqref{ddl}.
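As a quick consistency check of this correspondence, one can verify directly that the linear vector field $X^i=u^i-\sum_k\epsilon_k u^k$ (the field $X_{(1,1)}$ generating the generalized $\epsilon$-system recalled below) solves the linear system \eqref{sym} for the symbols \eqref{nablaepsilon}; since $X$ is linear in $u$, a unit forward difference computes its partial derivatives exactly. A sketch with arbitrarily chosen data:

```python
from fractions import Fraction as F

# Arbitrary sample point and weights (our choice, for illustration only).
u = [F(3), F(4), F(6)]
eps = [F(1, 2), F(1, 3), F(1, 5)]
n = 3

def gamma(i, j, pt):
    # Gamma^i_{ij} = eps_j / (u^i - u^j) from (nablaepsilon).
    return eps[j] / (pt[i] - pt[j])

def X(i, pt):
    # Candidate solution of (sym): X^i = u^i - sum_k eps_k u^k.
    return pt[i] - sum(eps[k] * pt[k] for k in range(n))

def dX(i, j, pt):
    # Exact partial d X^i / d u^j: X is linear, so a unit increment suffices.
    shifted = list(pt)
    shifted[j] += 1
    return X(i, shifted) - X(i, pt)

# Check d_j X^i = Gamma^i_{ij} (X^j - X^i) for all j != i.
assert all(dX(i, j, u) == gamma(i, j, u) * (X(j, u) - X(i, u))
           for i in range(n) for j in range(n) if i != j)
```

Both sides reduce to $-\epsilon_j$, so the identity holds at every point with pairwise distinct coordinates.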
The vector fields $X^i_{(p,\alpha)}=-\frac{\partial_i k_{(p,\alpha)}}{\epsilon_i}$ which define the principal hierarchy correspond to special solutions of \eqref{ddl}: the flat vector fields $X_{(p,0)}$ correspond to a set $(k_{(1,0)}=-a,k_{(2,0)},...,k_{(n,0)})$ of flat coordinates for the connection \eqref{nablaepsilon} with $\epsilon_i\to-\epsilon_i$ and, up to inessential constant factors and apart from some resonant cases, the vector fields $X_{(p,\alpha)}$ ($\alpha\ge 1$) correspond to the solutions of \eqref{ddl} defined recursively by $dk_{(p,\alpha+1)}=d_Lk_{(p,\alpha)}-k_{(p,\alpha)}da$. For instance, it is easy to check that the vector field $X_{(1,1)}$ has components $X^i_{(1,1)}=u^i-a$. The corresponding flow $$u^i_{t_{(1,1)}}=\left(u^i-\sum_{k=1}^n\epsilon_k u^k\right)u^i_x,\qquad i=1,...,n,$$ is called \emph{the generalized $\epsilon$-system} \cite{Pavlovhydro}. This example is related to the theory of Lauricella functions \cite{Lauricella} and Lauricella manifolds \cite{CHL,Looijenga}. Here the coordinates $u^1, \dots, u^n$ are intended as complex coordinates. We begin by recalling the definition of Lauricella functions.
Consider $n$ real numbers $\epsilon:=(\epsilon_1, \dots, \epsilon_n)$ in the interval $(0,1)$, called the weight system, and let $|\epsilon|:=\sum_{i=1}^n \epsilon_i$ be the total weight of $\epsilon.$ Let $\mathcal{H}:=\cup_{1\leq i<j\leq n}H_{ij}$ where $H_{ij}:=\{u\in \mathbb{C}^n | u^i=u^j\}.$ The value of the {\em Lauricella function} of weight $\epsilon$ at the point $u:=(u^1, \dots, u^n)\in \mathbb{C}^{n}\setminus \mathcal{H}$ is given by $$\int_{\gamma_u}\eta_u=\int_{\gamma_u}(u^1-\zeta)^{-\epsilon_1}\dots (u^n-\zeta)^{-\epsilon_n}d\zeta.$$ Here $\gamma_u$ is an oriented piecewise differentiable arc such that the end points of $\gamma_u$ lie in $\{u^1, \dots, u^n\}$ (but such that $\gamma_u$ does not meet this set elsewhere) and a determination of the multivalued differential $\eta_u$ is fixed (in general Lauricella functions are multivalued). Moreover the choice of the arc $\gamma_u$ and the choice of the determination of $\eta_u$ should depend continuously on $u$ (see \cite{Looijenga} for details). To show that Lauricella functions provide (almost all) flat homogeneous coordinates for the natural connection associated to the generalized $\epsilon$-system described above, we first recall the following \begin{prop}\label{lauricella}\cite{Looijenga} Let $L^{\epsilon}_u$ be the complex vector space of germs of holomorphic Lauricella functions at $u\in \mathbb{C}^{n}\setminus \mathcal{H}$ with fixed weight system $\epsilon$. Then $\mathrm{dim}_{\mathbb{C}}(L^{\epsilon}_u)=n-1$ and $L^{\epsilon}_u$ contains the constant functions iff $|\epsilon|=1$. Moreover for any $f\in L^{\epsilon}_u$ the following hold \begin{enumerate} \item $e(f)=0$, where $e=\sum_{i=1}^n \frac{\partial}{\partial u^i}.$ \item $f$ is homogeneous of degree $(1-|\epsilon|)$.
\item $f$ satisfies the system of differential equations \begin{equation}\label{lauricellaeq1} \frac{\partial^2 f}{\partial u^i \partial u^j}=\frac{1}{u^i -u^j}\left(\epsilon_j \frac{\partial f}{\partial u^i}-\epsilon_i \frac{\partial f}{\partial u^j}\right), \quad 1\leq i<j\leq n. \end{equation} \end{enumerate} \end{prop} Notice that the above system \eqref{lauricellaeq1} coincides with the Euler-Poisson-Darboux system \eqref{ddl} with $\epsilon_i\to-\epsilon_i$. Combining Proposition \ref{lauricella} with the results from Section 5.1 of \cite{L2014}, we obtain the following Corollary relating the generalized $\epsilon$-system with Lauricella functions: \begin{cor} Consider the generalized $\epsilon$-system in $n$ dimensions, $\epsilon=(\epsilon_1, \dots, \epsilon_n)$, and suppose that $0<\epsilon_i<1$ for all $i=1, \dots, n$ and that $|\epsilon|:=\sum_{i=1}^n \epsilon_i \neq 1$. Then any basis of Lauricella functions $\{f_l\}_{l=1}^{n-1}$ in $L^{\epsilon}_u$, $u\in \mathbb{C}^{n}\setminus\mathcal{H}$, gives rise to $n-1$ of the $n$ flat coordinates of the natural connection associated to the generalized $\epsilon$-system. \end{cor} \emph{Proof} Let $f$ be any element in a basis of Lauricella functions $\{f_l\}_{l=1}^{n-1}$ in $L^{\epsilon}_u$. Introducing the notation $\theta_i=\frac{\partial f}{\partial u^i}$, equation \eqref{lauricellaeq1} can be written as ($\partial_i:=\frac{\partial }{\partial u^i}$): $$\partial_i \theta_j=\frac{1}{u^j-u^i}\left( \epsilon_i \theta_j-\epsilon_j\theta_i\right), \quad i=1, \dots n, \; i\neq j,$$ which is the first of the set of equations that characterize flat $1$-forms for the natural connection associated to the generalized $\epsilon$-system (see the first set of equations in formula $5.4$ in \cite{L2014}).
Notice also that the constraint $e(f)=0$ (the first point of Proposition \ref{lauricella}) immediately implies $\sum_{i=1}^n \partial_i \theta_j=0$ for all $j=1, \dots n$, which constitutes the other equation in formula $5.4$ of \cite{L2014} characterizing flat $1$-forms for the natural connection. Therefore any basis of Lauricella functions gives rise to $n-1$ flat coordinates, provided these functions are not constant. This is the case since $|\epsilon|\neq 1$ by assumption and therefore by Proposition \ref{lauricella} any basis of $L^{\epsilon}_u$ does not contain constant functions. The natural connection has $n$ flat coordinates: the missing flat coordinate is the function $a=\sum_{i=1}^n\epsilon_i u^i$. \begin{flushright} $\Box$ \end{flushright} Therefore, under suitable assumptions on the weights $\epsilon_i$, the Lauricella functions provide $n-1$ of the $n$ flat homogeneous coordinates for the generalized $\epsilon$-system. Finally we comment on the relation between the natural connection of the generalized $\epsilon$-system and the so-called Lauricella connection. We first recall the notion of Lauricella connection. Consider the free smooth diagonal action $\psi: \mathbb{C}\times \mathbb{C}^n \rightarrow \mathbb{C}^n$ given by $\psi(\lambda, u)=(u^1+\lambda, \dots, u^n+\lambda)$. Call $V$ the quotient of $\mathbb{C}^n$ by this action and $\pi: \mathbb{C}^n \rightarrow V$ the corresponding quotient map. Denote by $e_1, \dots, e_n$ the standard basis of $\mathbb{C}^n$, which we identify with the global frame $\partial_1, \dots, \partial_n$ for its tangent bundle. Call $\mathcal{V}\mathbb{C}^n$ the line sub-bundle of $T\mathbb{C}^n$ given by $\mathrm{Ker}(\pi_*)$; this is just the vertical distribution, and notice that it is spanned by $e=\sum_{i=1}^n \partial_i$. Given now positive real numbers $\epsilon_1, \dots, \epsilon_n$, we define an inner product on $\mathbb{C}^n$ by $\langle e_i, e_j\rangle=\epsilon_i \delta_{ij}$.
Using this inner product, it is possible to identify $V$ with the orthogonal complement of the main diagonal, i.e. with $Z_0:=\{(u^1, \dots, u^n)| \sum_{i=1}^n \epsilon_i u^i=0\}$, and construct a global decomposition $T\mathbb{C}^n=\mathcal{V}\mathbb{C}^n\oplus \mathcal{C}$, where the sub-bundle $\mathcal{C}$ is orthogonal to $\mathcal{V}\mathbb{C}^n$ and it is spanned by the vector fields $\epsilon_j \partial_i-\epsilon_i \partial_j.$ All the integral leaves of $\mathcal{C}$ are just given by translations of the hyperplane $Z_0,$ namely they are $Z_c:=\{(u^1, \dots, u^n)| \sum_{i=1}^n \epsilon_i u^i=c\}$ as $c$ varies in $\mathbb{C}$. To each hyperplane $H_{ij}$ with $1\leq i<j\leq n$ in $\mathbb{C}^n$ there is associated a unique meromorphic differential with divisor $-H_{ij}$ and residue $1$ along $H_{ij}$, $\omega_{H_{ij}}:=\frac{d\phi_{H_{ij}}}{\phi_{H_{ij}}}$, where $\phi_{H_{ij}}$ is a linear equation for $H_{ij}.$ As $\phi_{H_{ij}}$ we can choose $u^i-u^j$. In this respect, Arnol'd has proved (see \cite{Arnold}) that the forms $\omega_{H_{ij}}$ are the generators of the cohomology ring of the colored braid groups (essentially the cohomology ring of the space of ordered subsets of $n$ different points of the plane $\mathbb{C}$). Using the inner product introduced above, the orthogonal complement $H_{ij}^{\perp}$ of the hyperplane $H_{ij}$ is the line spanned by the vector $\epsilon_j e_i-\epsilon_i e_j$, since if $v=\sum_k v^k e_k\in H_{ij}$, then $\langle v, \epsilon_j e_i-\epsilon_i e_j\rangle=\epsilon_j \epsilon_i(v^i-v^j)=0$. Consider the rank $1$ endomorphism $\rho_{H_{ij}}$ of $\mathbb{C}^n$ with kernel $H_{ij}$ and range given by $H_{ij}^{\perp}$. Any such endomorphism is self-adjoint with respect to the inner product $\langle \cdot, \cdot \rangle$, as it is immediate to check. We can fix a normalization for $\rho_{H_{ij}}$ by imposing that $\epsilon_j e_i-\epsilon_i e_j$ is an eigenvector with eigenvalue $\epsilon_i+\epsilon_j$.
This simply means that $\rho_{H_{ij}}$ has the form $u\mapsto (u^i-u^j)(\epsilon_j e_i-\epsilon_i e_j)$. Obviously we can also think of $\rho_{H_{ij}}$ as vector-field valued, simply interpreting $\epsilon_j e_i-\epsilon_i e_j$ as $\epsilon_j \partial_i-\epsilon_i \partial_j$, which is what we do below discussing the connection $\nabla$. Moreover, one can view $\rho_{H_{ij}}$ as an endomorphism of the tangent bundle of $\mathbb{C}^n$ rather than an endomorphism of $\mathbb{C}^n$, in which case we write $\rho_{H_{ij}}=(du^i-du^j)\otimes (\epsilon_j \partial_i-\epsilon_i \partial_j).$ Observe that $\rho_{H_{ij}}$ also induces an endomorphism $\rho^V_{H_{ij}}$ on $V$, due to the fact that $\rho_{H_{ij}}$ is translation invariant and to the fact that its range lies in $Z_0$ (or, viewing it as a vector-field-valued map, its range lies in $\mathcal{C}$). Similarly, since $\omega_{H_{ij}}$ is invariant under the action of $\psi$ extended to the tangent bundle, the corresponding form $\omega^V_{H_{ij}}$ on $V$ is well defined. These details, although cumbersome, will be important to show that the Lauricella connection is a reduction of the natural connection of the generalized $\epsilon$-system in a sense detailed below. \begin{thm}\cite{CHL}\label{CHLthm1} Let $\nabla^{0,V}$ be the standard, translation invariant flat connection on the tangent bundle of $V$. Then the connection \begin{equation}\label{lauricellaconn} \nabla^V:=\nabla^{0, V}-\sum_{1\leq i<j\leq n} \omega^V_{H_{ij}} \otimes\rho^V_{H_{ij}}, \end{equation} is called the Lauricella connection and it is flat. Furthermore if $0<\epsilon_i <1$ for all $i=1, \dots, n$, then the multivalued holomorphic Lauricella functions defined above are translation invariant, so they define multivalued holomorphic functions on $\tilde{V}$ (still called Lauricella functions) and their differentials are flat for the Lauricella connection.
\end{thm} Let us remark that Theorem \ref{CHLthm1} holds under slightly more general assumptions (see \cite{CHL} Section 2.3), but we recall it here in this form since we are interested in comparing the Lauricella connection with the natural connection of the generalized $\epsilon$-system. First we have the following straightforward characterization of the natural connection of the generalized $\epsilon$-system: \begin{lemma} Let $\nabla^0$ be the standard flat translation invariant connection of $\mathbb{C}^n$. Then the natural connection of the generalized $\epsilon$-system, intended as a connection on the holomorphic tangent bundle, coincides with the connection $$\nabla:=\nabla^0-\sum_{1\leq m<l\leq n}\omega_{H_{ml}}\otimes \rho_{H_{ml}},$$ on $\mathbb{C}^n\setminus {\mathcal{H}}$. \end{lemma} \emph{Proof} As above we identify the standard basis $\{e_1, \dots, e_n\}$ of $\mathbb{C}^n$ with the global frame $\{\partial_1, \dots, \partial_n\}$ of its tangent bundle. Since $$\nabla_{\partial_i} \partial_j=\Gamma^k_{ij}\partial_k=-\sum_{1\leq m<l\leq n}\omega_{H_{ml}}(\partial_i)\rho_{H_{ml}}(\partial_j),$$ it is immediately clear that $\Gamma^k_{ij}=0$ for $i\neq j\neq k\neq i,$ due to the fact that the range of $\rho_{H_{ml}}$ is spanned by $\epsilon_l \partial_m-\epsilon_m \partial_l$ and that $\rho_{H_{ml}}(\partial_j)=0$ for $m\neq j$, $l\neq j$. In general we have: $$\nabla_{\partial_i}\partial_j=-\sum_{1\leq m<l\leq n}\frac{d(u^m-u^l)(\partial_i)}{u^m-u^l} (du^m-du^l)(\partial_j)(\epsilon_l \partial_m -\epsilon_m \partial_l).$$ For $i<j$ we obtain $$\nabla_{\partial_i}\partial_j=\frac{1}{u^i-u^j}\left( \epsilon_j \partial_i-\epsilon_i \partial_j\right),$$ so that $\Gamma^i_{ij}=\frac{\epsilon_j}{u^i-u^j}$, $\Gamma^j_{ji}=\frac{\epsilon_i}{u^j-u^i}$ and analogously for $i>j$.
Finally $$\nabla_{\partial_i}\partial_i=\Gamma^k_{ii}\partial_k=-\sum_{l>i}\frac{1}{u^i-u^l}(\epsilon_l \partial_i-\epsilon_i \partial_l)-\sum_{m<i}\frac{1}{u^m-u^i}(\epsilon_i \partial_m -\epsilon_m \partial_i)$$ $$=\sum_{l\neq i}\frac{\epsilon_i}{u^i-u^l}\partial_l-\sum_{l\neq i}\frac{\epsilon_l}{u^i-u^l}\partial_i,$$ from which we deduce that $\Gamma^i_{ii}=-\sum_{l\neq i}\frac{\epsilon_l}{u^i-u^l}=-\sum_{i\neq l}\Gamma^i_{il}$ and for $k\neq i$ that $\Gamma^k_{ii}=\frac{\epsilon_i}{u^i-u^k}=-\Gamma^k_{ki}.$ \begin{flushright} $\Box$ \end{flushright} The following Proposition clarifies the relation between the natural connection of the generalized $\epsilon$-system and the Lauricella connection: \begin{prop} Let $X, Y$ be vector fields on $V$ and denote by $X^{\mathcal{C}}$ and $Y^{\mathcal{C}}$ the unique vector fields on $\mathbb{C}^n$ such that $X^{\mathcal{C}}\subset \mathcal{C}$, $Y^{\mathcal{C}}\subset \mathcal{C}$ and such that $\pi_*(X^{\mathcal{C}}_u)=X_{\pi(u)}$ and $\pi_*(Y^{\mathcal{C}}_u)=Y_{\pi(u)}$. Then, with the notation introduced previously, we have $$\left(\nabla^V_XY\right)^{\mathcal{C}}=\nabla_{X^{\mathcal{C}}}Y^{\mathcal{C}}.$$ \end{prop} \emph{Proof} Since $\nabla_{X^{\mathcal{C}}}Y^{\mathcal{C}}\subset \mathcal{C}$ we have to prove that \begin{equation} \nabla^V_XY=\pi_{*}\left(\nabla_{X^{\mathcal{C}}}Y^{\mathcal{C}}\right). \end{equation} It is enough to prove the claim using constant vector fields $X, Y$, in which case $X^{\mathcal{C}}$ and $Y^{\mathcal{C}}$ are also constant and the connections $\nabla^{0, V}$ and $\nabla^0$ do not play any role. By definition the function $\omega_{H_{ij}}(X^{\mathcal{C}})$ on $\mathbb{C}^n$ defines a function on $V$ which coincides with $\omega^V_{H_{ij}}(X)$ and $\rho^V_{H_{ij}}(Y)=\pi_*\rho_{H_{ij}}(Y^{\mathcal{C}})$.
This implies $$\omega^V_{H_{ij}}(X) \rho^V_{H_{ij}}(Y)=\pi_*\left(\omega_{H_{ij}}(X^{\mathcal{C}}) \rho_{H_{ij}}(Y^{\mathcal{C}})\right).$$ \begin{flushright} $\Box$ \end{flushright} Observe that in the case of $\vee$-systems one obtains a one-parameter family of flat connections in which the deformed Christoffel symbols are obtained via the product structure (see the discussion in Section \ref{veesystem} of this paper and \cite{AL2014JMP} for many more details). This leads one to ask whether also in the case of the Lauricella systems there is a product structure, and whether one can also obtain a one-parameter family of flat (Lauricella) connections. The answer to the latter question is positive and immediate. Indeed, due to Proposition 2.3 in \cite{CHL}, in particular points (i) and (iv), the endomorphisms $\rho_{H_{ij}}$ can be rescaled by a common factor $\lambda \in \mathbb{C}^*$ without affecting the flatness of \eqref{lauricellaconn}. This means that $\nabla^{\lambda}:=\nabla^0-\lambda \sum_{1\leq i<j\leq n} \omega_{H_{ij}} \otimes\rho_{H_{ij}}$ is flat for any $\lambda$. The answer to the former question is also positive, but we can actually provide two (in general) non-equivalent answers. Indeed, one way is to interpret the term $\sum_{1\leq m<l\leq n}\omega_{H_{ml}}\otimes \rho_{H_{ml}}$ in $\nabla$ as a deformation of $\nabla^0$ obtained using the structure constants of a non-trivial product. This leads to interpreting this term as a $\vee$-system.
To do this, recall that the metric $\langle \cdot, \cdot \rangle=\text{diag}(\epsilon_1, \dots, \epsilon_n)$ is diagonal in the coordinates $u^1, \dots, u^n$, and consider the vector fields $\check{\alpha}_{ij}:=\frac{1}{\sqrt{\epsilon_i \epsilon_j}}\left( \epsilon_j \partial_i-\epsilon_i \partial_j \right)=\sqrt{\epsilon_i \epsilon_j}\left(\frac{\partial_i}{\epsilon_i}-\frac{\partial_j}{\epsilon_j}\right).$ Now define forms $\alpha_{ij}:=\langle \check{\alpha}_{ij}, \cdot\rangle=\sqrt{\epsilon_i \epsilon_j}\left( du^i-du^j\right).$ With these definitions, it is immediate to check that the term that deforms $\nabla^0$ can be written as: $$\sum_{1\leq i<j\leq n}\omega_{H_{ij}}(X_u) \rho_{H_{ij}}(Y_u)=\sum_{1\leq i<j\leq n}\frac{\alpha_{ij}(X_u)\alpha_{ij}(Y_u)}{\alpha_{ij}(e_u)}\check{\alpha}_{ij},$$ where $X_u, Y_u\in T_u\mathbb{C}^n$ and $e_u=(u^1, \dots, u^n)$. Indeed, with these notations, $\omega_{H_{ij}, u}=\frac{\alpha_{ij}}{\alpha_{ij}(e_u)}$ and $\rho_{H_{ij}}=\alpha_{ij}\otimes \check{\alpha}_{ij}.$ The fact that the covectors $\alpha_{ij}$ form a $\vee$-system follows from the flatness of the connection $$(\nabla_X Y)_u:=(\nabla^0_X Y)_u-\sum_{i,j} \frac{\alpha_{ij}(X_u) \alpha_{ij}(Y_u)\check{\alpha}_{ij}}{\alpha_{ij}(e_u)}.$$ This multiparametric family of $\vee$-systems appeared in \cite{CV}. Restricting it to the hyperplane $Z_0$ one gets a new family of $\vee$-systems that, for $\epsilon_i=1$, corresponds to the almost Frobenius structure for the Coxeter group $W(A_{n-1})$ \cite{Ddual}. The other possible product structure one can introduce is obtained by imposing that the standard basis of $\mathbb{C}^n$ is a basis of idempotents for a commutative associative product. This means that the standard basis for $\mathbb{C}^n$ provides canonical coordinates for a semisimple product. In a slightly different language this was observed for the first time in \cite{LP} (see also \cite{L2014}).
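The rewriting of the deformation term through the covectors $\alpha_{ij}$ can also be checked symbolically. The following sketch (an editorial addition, not part of the original text, using Python/sympy) verifies it for the pair $(i,j)=(1,2)$ in $n=3$; by symmetry only the $i$-th and $j$-th components are ever involved, so the same computation goes through for any pair.

```python
import sympy as sp

# Editorial check (not in the original text): for (i, j) = (1, 2) and n = 3,
# omega_{H_ij}(X) rho_{H_ij}(Y) equals alpha_ij(X) alpha_ij(Y) / alpha_ij(e_u) * check_alpha_ij.
u1, u2, u3 = sp.symbols('u1 u2 u3')
e1, e2 = sp.symbols('epsilon1 epsilon2', positive=True)
X = sp.Matrix(sp.symbols('X1 X2 X3'))
Y = sp.Matrix(sp.symbols('Y1 Y2 Y3'))

# Left-hand side: omega_{H_12}(X) rho_{H_12}(Y), with rho_{H_12}(Y) = (Y^1-Y^2)(eps2 d1 - eps1 d2)
lhs = (X[0] - X[1]) / (u1 - u2) * (Y[0] - Y[1]) * sp.Matrix([e2, -e1, 0])

# Right-hand side built from alpha_12 = sqrt(eps1 eps2)(du^1 - du^2)
s = sp.sqrt(e1 * e2)
alpha = lambda V: s * (V[0] - V[1])        # alpha_12 applied to a tangent vector
alpha_e = s * (u1 - u2)                    # alpha_12(e_u), with e_u = (u^1, u^2, u^3)
check_alpha = sp.Matrix([e2, -e1, 0]) / s  # the vector field check_alpha_12
rhs = alpha(X) * alpha(Y) / alpha_e * check_alpha

assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
print("the deformation term matches its vee-system form")
```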
In other terms, depending on how one interprets the vector fields $\partial_i$, either as flat vector fields or as idempotents, one obtains two different examples of flat $F$-manifolds. Remarkably, in the second case there is also a dual product defined by the eventual identity $E=\sum_i u^i\frac{\partial}{\partial u^i}$ (see \cite{AL,L2014} for details). Due to the previous discussion we will call this structure on $\mathbb{C}^n\setminus {\mathcal{H}}$ the \emph{Lauricella bi-flat structure}. \begin{rmk} The generalized $\epsilon$-system is a special example of a class of integrable quasilinear systems of PDEs associated to solutions of the Euler-Poisson-Darboux system \cite{Pavlovhydro,LM,L2006}. In \cite{KKS} it was observed that this class is related to the $g$-phase Whitham equations by a sequence of Levy transformations generated by suitable Lauricella functions. \end{rmk} \begin{rmk} In the generalized $\epsilon$-system, the Christoffel symbols $\Gamma^{i}_{ij}$ depend only on the difference $u^i-u^j$. Under this assumption, Tsarev's condition \eqref{rt1} reduces to the following algebraic system \begin{eqnarray} \label{rt1red} \Gamma^i_{ij}\Gamma^i_{ik}=\Gamma^i_{ik}\Gamma^k_{kj}+\Gamma^i_{ij}\Gamma^j_{jk},\,\,\,\mbox{if $i\ne k\ne j\ne i$}, \end{eqnarray} or \begin{eqnarray} \label{rt1redbis} \frac{\Gamma^k_{kj}}{\Gamma^i_{ij}}+\frac{\Gamma^j_{jk}}{\Gamma^i_{ik}}=1\,\,\,\mbox{if $i\ne k\ne j\ne i$}.
\end{eqnarray} \end{rmk} \section{Flatness conditions}\label{multiflatnesssec} \subsection{Eventual identities and flatness conditions} Given a {\it semisimple} $F$-manifold with an eventual identity $E$ we want to characterize \emph{flat} symmetric connections $\nabla$ compatible with the eventual identity $E$, i.e.\ satisfying the following requirements \begin{eqnarray*} &&\left(\nabla_X c^*\right)\left(Y,Z\right)=\left(\nabla_Y c^*\right)\left(X,Z\right)\\ &&\nabla E=0, \end{eqnarray*} where $c^*$ is the $(1,2)$-tensor field associated to the dual product $*$. Without loss of generality, we can analyze the situation in the canonical coordinates for the dual product $*$ induced by the eventual identity $E$; in this way we can consider the case $E=e$. In this case, in canonical coordinates, the Christoffel symbols of the connection are uniquely specified once the coefficients $\Gamma^i_{ij}$ are given through the formula \eqref{naturalc}. It is possible to provide an intrinsic characterization of the flatness condition, which is given in Theorem \ref{flatnessintrinsic} below. Before stating this result and proving it, we elucidate a general fact: \begin{lemma}\label{lemmaintrinsic2} Let $M$ be a smooth manifold equipped with a $C^{\infty}(M)$-bilinear product $\circ$ on sections of its tangent bundle $\circ: TM\times TM \rightarrow TM$ and with a torsionless affine connection $\nabla$. Suppose $\circ$ is equipped with a unit vector field $e$ and that $\nabla$ and $\circ$ satisfy the following condition: \begin{equation}\label{Rc+Rc+Rc2} Z\circ R(W,Y)(X)+W \circ R(Y,Z)(X)+Y \circ R(Z,W)(X)=0, \end{equation} where $R(X,Y):=\nabla_X\nabla_Y-\nabla_Y\nabla_X -\nabla_{[X,Y]}$ for all vector fields $X, Y, Z, W$. Then $\nabla$ is flat if and only if $R(e, W)=0$ for all vector fields $W$. \end{lemma} \emph{Proof} If $\nabla$ is flat, certainly $R(e, W)=0$ for all vector fields $W$. Conversely, suppose $R(e,W)=0$ for all vector fields.
Then substituting $Z:=e$ in \eqref{Rc+Rc+Rc2} we get immediately $e\circ R(W, Y)(X)=0$, i.e. $R(W, Y)(X)=0$ for all vector fields $W, Y, X$, and we are done.\begin{flushright} $\Box$ \end{flushright} Observe that the condition \eqref{Rc+Rc+Rc2} appearing in the previous Lemma is exactly one of the conditions that define an $F$-manifold with compatible connection. However, there is no need for the product $\circ$ to be commutative, associative or semisimple, nor do the other conditions defining an $F$-manifold with compatible connection need to be satisfied. \begin{thm}\label{flatnessintrinsic} A semisimple $F$-manifold with compatible connection $\nabla$ (see Section 2) and flat unity $e$ is flat if and only if the operator ${\rm Lie}_e$ and the covariant derivative $\nabla$ satisfy the following condition: \begin{equation}\label{flatness2} {\rm Lie}_e (\nabla_X T)-\nabla_X ({\rm Lie}_e T)-\nabla_{[e, X]}T=0, \end{equation} for any vector field $X$ and for any tensor field $T$. \end{thm} \emph{Proof} Since the unity $e$ is assumed to be a flat vector field, we have that ${\rm Lie}_e=\nabla_e$ and therefore the condition \eqref{flatness2} is equivalent to $R(e,X)(T)=0$. Now by Lemma \ref{lemmaintrinsic2} we know that $\nabla$ is flat if and only if $R(e,X)=0$ for all vector fields $X$. \begin{flushright} $\Box$ \end{flushright} Using the fact that $X$ is an arbitrary vector field, we have the following Lemma \begin{lemma}\label{flatness3lemma} Condition \eqref{flatness2} is equivalent to \begin{equation}\label{flatness3}{\rm Lie}_e(\nabla T)-\nabla ({\rm Lie}_eT)=0,\end{equation} for any tensor field $T$. \end{lemma} \emph{Proof} Observe that we can write $\nabla_X T=(\nabla T)(X)=C(\nabla T\otimes X)$ for any vector field $X$, where $C$ is the contraction.
Therefore, using the property that ${\rm Lie}_e$ commutes with contractions and satisfies the Leibniz rule with respect to the tensor product, we have $$[{\rm Lie}_e(\nabla T)-\nabla ({\rm Lie}_eT)](X)={\rm Lie}_e((\nabla T)(X))-\nabla T({\rm Lie}_e X)-\nabla_X({\rm Lie}_e T)=$$ $$={\rm Lie}_e(\nabla_X T)-\nabla T([e, X])-\nabla_X({\rm Lie}_e T)={\rm Lie}_e(\nabla_X T)-\nabla_X({\rm Lie}_e T)-\nabla_{[e,X]}T.$$ \begin{flushright} $\Box$ \end{flushright} Observe that in the proof of Theorem \ref{flatnessintrinsic} no use has been made of two conditions, namely the semisimplicity of $\circ$ and the symmetry of $\nabla c$. The only hypotheses that were used are the presence of a flat identity $e$ for the product $\circ$ and condition \eqref{Rc+Rc+Rc2} for the torsionless connection $\nabla$. \begin{rmk}Let us observe that relation \eqref{flatness3} is reminiscent of the commutation relation between the Lie derivative with respect to $e$ and the differential $d_P$ associated to a Poisson structure $P$, when $P$ is part of an {\em exact} pencil of Poisson structures $Q-\lambda P$ (for more details see \cite{ALinher} and \cite{FL}). \end{rmk} \begin{rmk} The flatness of $e$ and the condition \eqref{flatness3}, in general, do not imply the flatness of $\nabla$. Indeed the condition \eqref{flatness3} written for an arbitrary vector field $T$ reads $$(\nabla_j\nabla_l e^i-R^i_{jkl}e^k)T^l=0.$$ \end{rmk} \section{The semisimple case} \subsection{Multi-flatness conditions in the semisimple case} We apply now the flatness criterion discussed in the previous Section to study multi-flat structures in the semisimple case. As a consequence of the previous result we have the following \begin{thm}\label{corollaryflatness} A semisimple $F$-manifold with compatible connection $\nabla$ (see Section 2) and flat unity $e$ is flat if and only if $e(\Gamma^i_{ij})=0$ for all $i\neq j$, where $\Gamma^i_{ij}$ are the Christoffel symbols of $\nabla$ in the canonical coordinates of $\circ$.
\end{thm} \emph{Proof} Under the current hypotheses, $\nabla$ is flat if and only if \eqref{flatness3} holds for an arbitrary tensor field $T$. However, notice that \eqref{flatness3} is automatically satisfied when $T$ is a function since covariant and Lie derivatives coincide on functions. Moreover, the operators ${\rm Lie}_e$ and $\nabla_{i}$ commute with contractions and satisfy the Leibniz rule with respect to tensor products. This easily implies that \eqref{flatness3} holds for an arbitrary tensor field $T$ if and only if it holds for an arbitrary {\em vector} field $T.$ Writing the left-hand side of \eqref{flatness3} in canonical coordinates of $\circ$, for $T$ an arbitrary vector field, we get $$e\left(\partial_j T^i+\Gamma^i_{jk}T^k\right)-\partial_j(e(T^i))-\Gamma^i_{jk}e(T^k)=e(\Gamma^i_{jk})T^k,$$ since $e$ commutes with $\partial_j$ in canonical coordinates. Therefore \eqref{flatness3} is fulfilled if and only if $e(\Gamma^i_{jk})=0$, due to the arbitrariness of $T$. On the other hand, for a natural connection in canonical coordinates one already has $\Gamma^i_{jk}=0$ for $i\neq j\neq k\neq i$, while all the other non-vanishing components are expressed as linear combinations with constant coefficients of $\Gamma^i_{ij}$, $i\neq j$ (see formula \eqref{naturalc}). \begin{flushright} $\Box$ \end{flushright} It is clear from the proof of the previous Corollary that the flatness of $\nabla$ is equivalent to $e(\Gamma^i_{jk})=0$ in canonical coordinates for $\circ$ {\em without} assuming the symmetry of $\nabla c$, so without using all the defining conditions for an $F$-manifold with compatible connection. This is because the symmetry of $\nabla c$ is the condition that forces $\Gamma^i_{jk}=0$ for $i\neq j\neq k\neq i$ in canonical coordinates for $\circ$. We have been stating results for $F$-manifolds with compatible connections and not under weaker assumptions, since for our purposes we are interested in using Tsarev's condition \eqref{rt1}.
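As a concrete illustration of the criterion (an editorial check, not part of the original text): for the generalized $\epsilon$-system one has $\Gamma^i_{ij}=\frac{\epsilon_j}{u^i-u^j}$, which depends only on the difference $u^i-u^j$, so $e(\Gamma^i_{ij})=0$ and the natural connection is flat. A symbolic verification in Python/sympy:

```python
import sympy as sp

# Editorial check (not in the original text): for the generalized epsilon-system,
# Gamma^i_{ij} = epsilon_j/(u^i - u^j), the unity e = d_1 + ... + d_n annihilates
# every Gamma^i_{ij} (indices below are 0-based).
n = 3
u = sp.symbols('u1:4')
eps = sp.symbols('epsilon1:4')

def Gamma(i, j):  # Gamma^i_{ij}, i != j
    return eps[j] / (u[i] - u[j])

for i in range(n):
    for j in range(n):
        if i != j:
            e_of_Gamma = sum(sp.diff(Gamma(i, j), u[k]) for k in range(n))
            assert sp.simplify(e_of_Gamma) == 0
print("e(Gamma^i_{ij}) = 0 for all i != j")
```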
Now it turns out that Tsarev's condition is equivalent to $Z\circ R(W,Y)(X)+W \circ R(Y,Z)(X)+Y \circ R(Z,W)(X)=0$ {\em and} the symmetry of $\nabla c$, both expressed in the distinguished coordinate system given by the canonical coordinates for $\circ.$ It follows also from Corollary \ref{corollaryflatness} that, in canonical coordinates, all Christoffel symbols of $\nabla$ depend only on the differences $(u^i-u^j)$ of canonical coordinates. Obviously, the flatness criterion provided by relation \eqref{flatness3} and its equivalent forms can be applied to the case of connections associated to general eventual identities. It is easy to check that a symmetric connection $\nabla$ compatible with the dual product defined by $E$ and satisfying the condition $\nabla E=0$ has Christoffel symbols (in canonical coordinates for $\circ$) of the form: \begin{equation}\label{dualgamma} \begin{split} \Gamma^{i}_{jk}&:=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{i}_{jj}&:=-\frac{E^i}{E^j}\Gamma^{i}_{ij},\qquad i\ne j,\\ \Gamma^{i}_{ii}&:=-\sum_{l\ne i}\frac{E^l}{E^i}\Gamma^{i}_{li}-\frac{\partial_i E^i}{E^i}. \end{split} \end{equation} Given an eventual identity $E$ with associated dual product $*$, it is useful to have relations characterizing the flatness of the connection given by \eqref{dualgamma} in the canonical coordinates for $\circ$. In this case $E$ no longer reduces to $e$. This characterization is provided by the following: \begin{thm}\label{flatnessgeneralth} Suppose that the functions $\Gamma_{ij}^i$ satisfy Tsarev's conditions \eqref{rt1}; then in canonical coordinates for $\circ$, the symmetric connection \eqref{dualgamma} is flat if and only if $$E(\Gamma^i_{ij})=-(\partial_j E^j)\Gamma^i_{ij}, \qquad i\ne j.$$ \end{thm} \emph{Proof}. We use the invariant condition \eqref{flatness3}, expressed in canonical coordinates for $\circ$, where we choose as $T$ a vector field.
In this case we get \begin{eqnarray*} [{\rm Lie}_E (\nabla T)]^i_j-[\nabla ({\rm Lie}_E T)]^i_j=0. \end{eqnarray*} Expanding this we get for $i\neq j$: \begin{eqnarray*} &&(E(\Gamma^i_{ij})+\Gamma^i_{ij}\partial_j E^j)T^i+(E(\Gamma^i_{jj})-\Gamma^i_{jj}\partial_i E^i+\Gamma^i_{jj}\partial_j E^j+\Gamma^i_{jj}\partial_jE^j)T^j=0,\\ \end{eqnarray*} while for $i=j$ we obtain: \begin{eqnarray*} &&(E(\Gamma^i_{ii})+\partial^2_iE^i +\Gamma^i_{ii}\partial_iE^i)T^i+\sum_{l\ne i}(E(\Gamma^i_{il})+\Gamma^i_{il}\partial_lE^l)T^l=0. \end{eqnarray*} Thus, for the condition \eqref{flatness3} to be fulfilled, the following constraints have to be satisfied: \begin{eqnarray*} E(\Gamma^i_{ij})&=&-\Gamma^i_{ij}\partial_j E^j,\\ E(\Gamma^i_{jj})&=&\Gamma^i_{jj}\partial_i E^i-\Gamma^i_{jj}\partial_j E^j-\Gamma^i_{jj}\partial_jE^j,\\ E(\Gamma^i_{ii})&=&-\partial^2_iE^i -\Gamma^i_{ii}\partial_iE^i. \end{eqnarray*} The first condition is the statement of the Theorem. The second and third ones follow using the first one, the defining relations of the natural connection and the obvious identities $E(E^i)=E^i\partial_i E^i, \, E(\partial_i E^i)=E^i\partial_i^2 E^i$: \begin{eqnarray*} E(\Gamma^i_{jj})&=&E\left(-\frac{E^i}{E^j}\Gamma^i_{ij}\right)=-\frac{E(E^i)}{E^j}\Gamma^i_{ij}+\frac{E^i}{E^j}\Gamma^i_{ij}\partial_j E^j+\frac{E^i}{(E^j)^2}\Gamma^i_{ij}E(E^j)=\\ &=&\Gamma^i_{jj}\partial_i E^i-\Gamma^i_{jj}\partial_j E^j-\Gamma^i_{jj}\partial_jE^j,\\ E(\Gamma^i_{ii})&=&E\left(-\sum_{l\ne i}\frac{E^l}{E^i}\Gamma^{i}_{il}-\frac{\partial_i E^i}{E^i}\right)= \sum_{l\neq i}\frac{E^l \Gamma^i_{il}\,\partial_i E^i}{E^i}-\frac{E(\partial_i E^i)}{E^i}+\frac{(\partial_i E^i)^2}{E^i}=\\ & =&-\partial^2_iE^i -\Gamma^i_{ii}\partial_iE^i.
\end{eqnarray*} \begin{flushright} $\Box$ \end{flushright} \begin{rmk} In the case of semisimple Frobenius manifolds, in canonical coordinates $\{u^1, \dots, u^n\}$ for $\circ$ we have that $E^i=u^i$ for all $i=1,\dots, n$, and $\tilde{g}^{ii}=u^ig^{ii}$ with $g_{ii}=\partial_i\varphi$. This implies that \begin{eqnarray*} \Gamma^i_{jk}&=&\tilde\Gamma^i_{jk}=0,\qquad\forall\; i\ne j\ne k\ne i,\\ \tilde\Gamma^i_{ji}&=&\frac{1}{2}\tilde{g}^{ii}(\partial_j\tilde{g}_{ii})=\Gamma^i_{ji},\qquad\forall\; i\ne j,\\ \tilde\Gamma^i_{jj}&=&\frac{1}{2}\tilde{g}^{ii}(-\partial_i\tilde{g}_{jj})=\frac{u^i}{2u^j}g^{ii}(-\partial_ig_{jj})=\frac{u^i}{2u^j}g^{ii}(-\partial_jg_{ii}) =-\frac{u^i}{u^j}\Gamma^i_{ij}=-\frac{u^i}{u^j}\tilde\Gamma^i_{ij},\qquad\forall\; i\ne j.\\ \end{eqnarray*} Moreover, due to the homogeneity property, it is easy to check that $E(\Gamma^i_{ji})=-\Gamma^i_{ji}$. Due to the previous theorem this means that the connection compatible with the dual product is flat. However this connection does not necessarily coincide with the Levi-Civita connection of the intersection form $\tilde{g}$ since, in general, $$\tilde\Gamma^{i}_{ii}\ne-\sum_{l\ne i}\frac{u^l}{u^i}\tilde\Gamma^{i}_{li}-\frac{1}{u^i}.$$ \end{rmk} \subsection{Non-existence of semisimple $F$-manifolds with more than $3$ compatible connections} We are going to apply Theorem \ref{flatnessgeneralth} to study the existence of multi-flat structures on $F$-manifolds. Recall that, by definition, given an $N$-multi-flat (semisimple) $F$-manifold, the $N$ connections $\nabla^{(l)}$, $l=0, \dots, N-1$ share the same Christoffel symbols $\Gamma^i_{ij}$, $i\neq j$ (say in the canonical coordinates for $\circ=\circ_{(0)}$), while the remaining ones are determined according to the formulas \eqref{dualgamma}, where $E$ is the corresponding eventual identity $E_{(l)}$.
Therefore, given the $E_{(l)}$, $l=0,\dots, N-1$, it is possible to reconstruct $N$-multi-flat connections only if the system for $\Gamma^i_{ij}$ ($j$ is fixed): \begin{equation}\label{orsysbis} E_{(l)}(\Gamma^i_{ij})+(\partial_j E^j_{(l)})\Gamma^i_{ij}=0, \quad l=0, \dots, N-1 \end{equation} admits non-trivial solutions $\Gamma^i_{ij}$ for all $i\neq j$. Indeed, \eqref{orsysbis} is just the flatness condition of Theorem \ref{flatnessgeneralth}. It is possible to reduce the non-homogeneous system \eqref{orsysbis} to a homogeneous one. To do this we introduce a fictitious additional variable $u^{n+1}$ and assume that $\Gamma^i_{ij}$ is defined implicitly via $\phi(u^1, \dots , u^n, u^{n+1})=c$ where $c$ is a constant and $u^{n+1}=\Gamma^i_{ij}(u^1, \dots, u^n)$. In this case the system \eqref{orsysbis} becomes \begin{equation}\label{orsystris} \hat{E}_{(l)}(\phi):=E_{(l)}(\phi)-(\partial_j E^j_{(l)})u^{n+1}\partial_{n+1}\phi=0, \quad l=0, \dots, N-1. \end{equation} In this way, determining $\phi$ can be interpreted as the problem of finding invariant functions for the distribution $\Delta$ generated by the vector fields $\{\hat{E}_{(l)}\}_{l=0, \dots, N-1}.$ Therefore we are interested in characterizing the integrable distributions generated by the extended vector fields $\hat{E}_{(l)}$, $l=0, \dots, N-1$, where, by definition of multi-flat $F$-manifold, the vector fields are $E_{(l)}:=(u^1)^{l}\partial_1+...+(u^n)^{l}\partial_n$, $l=0, \dots, N-1.$ \begin{thm}\label{distri1} Let $\Delta_{(i_1, \dots, i_k)}$ be the distribution spanned by the vector fields $\hat{E}_{(i_1)},\dots, \hat{E}_{(i_k)}$ in the $n+1$-dimensional space with coordinates $(u^1, \dots, u^n, u^{n+1}).$ Then: \begin{enumerate} \item The distributions $\Delta_{(1,m)}$ with $m\in \mathbb{Z}\setminus\{1\}$ are integrable and these are the only integrable distributions of rank $2$ among $\Delta_{(i_1, i_2)}.$ \item $\Delta_{(0,1,2)}$ is integrable. \item $\Delta_{(0,1,2,3)}$ is not integrable.
Furthermore, at the points where $u^i\ne u^k$ ($i\ne k,i,k=1,...,n$) and $u^{n+1}\ne 0$ it is totally non-holonomic, that is the minimal integrable distribution $\bar{\Delta}$ containing $\Delta_{(0,1,2,3)}$ has dimension $n+1$. \item More generally, $\Delta_{(i_1, \dots, i_k)}$, with $i_1<i_2<\dots<i_k$, is not integrable for $ 4\leq k \leq n$. \end{enumerate} \end{thm} \emph{Proof}. We have \begin{equation} [\hat{E}_{(l)},\hat{E}_{(m)}]^i=\left\{ \begin{array}{ll} (m-l)(u^i)^{l+m-1} & \textrm{if $i=1,...,n$} \\ -(m-l)(m+l-1)(u^j)^{m+l-2}u^{n+1} & \textrm{if $i=n+1$} \end{array}\right. \end{equation} that is $$[\hat{E}_{(l)},\hat{E}_{(m)}]=(m-l)\hat{E}_{(m+l-1)}, \quad l\neq m,$$ $$[\hat{E}_{(l)}, \hat{E}_{(m)}]=0, \quad l=m.$$ Since $[\hat{E}_{(m)},\hat{E}_{(1)}]=(1-m)\hat{E}_{(m)}$, the distribution $\Delta_{(m,1)}$ is integrable. Moreover, any other distribution of rank $2$, $\Delta_{(i_1, i_2)}$ is not integrable since $[\hat{E}_{(i_1)}, \hat{E}_{(i_2)}]=(i_2-i_1)\hat{E}_{(i_1+i_2-1)},$ and $i_1+i_2-1=i_1$ or $i_1+i_2-1=i_2$ implies either $i_2=1$ or $i_1=1.$ Since $\hat{E}_{(0)}$, $\hat{E}_{(1)}$ and $\hat{E}_{(2)}$ satisfy the commutation relations of $sl(2,\mathbb{C})$: $[\hat{E}_{(0)},\hat{E}_{(1)}]=\hat{E}_{(0)},$ $[\hat{E}_{(0)},\hat{E}_{(2)}]=2\hat{E}_{(1)}$ and $[\hat{E}_{(1)},\hat{E}_{(2)}]=\hat{E}_{(2)}$, we have that also the distribution $\Delta_{(0,1,2)}$ is integrable. With regard to the fourth point, consider $\Delta_{(i_1,\dots, i_k)}$, with $i_1<\dots<i_k$ and $4\leq k\leq n.$ If the two indices $i_{1}, i_{2}$ are both strictly negative or if $i_1<0$ and $i_2=0$, then $[\hat{E}_{(i_1)}, \hat{E}_{(i_2)}]\notin\Delta_{(i_1,\dots, i_k)}$, due to the commutation relations. Thus we can assume $i_1\geq 0$; since the indices are strictly increasing and $k\geq 4$, the indices $i_{k-1}, i_k$ are then strictly greater than $1$. Therefore again we have $[\hat{E}_{(i_{k-1})}, \hat{E}_{(i_k)}]=(i_k-i_{k-1})\hat{E}_{(i_k+i_{k-1}-1)}\notin \Delta_{(i_1,\dots, i_k)}$, since $i_k+i_{k-1}-1>i_k$.
Finally, it remains to prove the third point. By the fourth point, the distribution $\Delta_{(0,1,2,3)}$ is not integrable. Before determining the minimal integrable distribution containing $\Delta_{(0,1,2,3)}$ we recall a few definitions and a fundamental result. Given a collection of vector fields $\{\hat{E}_{(l)}\}_{l\in L}$, their Lie hull is the collection of all vector fields of the form $\{\hat{E}_{(l)}, [\hat{E}_{(l)}, \hat{E}_{(m)}], [\hat{E}_{(n)},[\hat{E}_{(l)}, \hat{E}_{(m)}]], \dots\}$ generated by the iterated Lie brackets. The minimal integrable distribution containing $\Delta_{(i_1, \dots, i_k)}$ is the minimal integrable distribution containing the Lie hull of the vector fields $\{\hat{E}_{(i_1)}, \dots \hat{E}_{(i_k)}\}.$ The distribution $\Delta_{(i_1, \dots, i_k)}$ (or equivalently the associated collection of vector fields) is called bracket generating if its Lie hull spans the whole tangent bundle in an open set. In this case the minimal integrable distribution containing $\Delta_{(i_1, \dots, i_k)}$ has integral leaf equal to the entire $n+1$-dimensional space (this is the Chow-Rashevsky Theorem, see \cite{C,R}). We apply this result to compute the minimal integrable distribution containing $\Delta_{(0,1,2,3)}$ and we show that it is the $n+1$-dimensional space. In order to compute the minimal integrable distribution containing $\Delta_{(0,1,2,3)}$, we consider the sub-bundle of the tangent bundle spanned by $\hat{E}_{(0)},\hat{E}_{(1)},\hat{E}_{(2)},\hat{E}_{(3)},\hat{E}_{(m)}=\frac{1}{m-2}[\hat{E}_{(2)},\hat{E}_{(m-1)}],\,m=4,5,...,n$.
To show that its rank is $n+1$, it is sufficient to show that the determinant of the matrix $A$ does not vanish on an open set, where $$A:=\begin{pmatrix} 1 & \dots & 1 & 0\\ u^1 & \dots & u^n & -u^{n+1} \\ \vdots & \ddots & \vdots &\vdots\\ (u^1)^{n} & \dots & (u^n)^{n} & -n(u^j)^{n-1}u^{n+1} \end{pmatrix}.$$ The matrix $A$ can be written as $$A= \begin{pmatrix} 1 & \dots & 1 & -u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}1\\ u^1 & \dots & u^n &-u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}(u^{n+1}) \\ \vdots & \ddots & \vdots &\vdots\\ (u^1)^{n} & \dots & (u^n)^{n} &-u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}(u^{n+1})^{n} \end{pmatrix}. $$ Expanding the determinant of $A$ along the last column, we get $$\det(A)=\sum_{k=0}^{n} A_k\left(-u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}(u^{n+1})^{k}\right),$$ where $A_k$ are the corresponding minors. Since the $A_k$'s do not depend on $u^{n+1}$ we can factor the derivative operator in front of the expansion and get: $$\det(A)=-u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}\left(\sum_{k=0}^{n} A_k(u^{n+1})^{k} \right)=-u^{n+1}\frac{\partial}{\partial u^{n+1}}_{|u^{n+1}=u^j}\det(V_{0,\dots, n}),$$ where $V_{0,\dots, n}$ is the Vandermonde matrix. By the form of the Vandermonde determinant, it is clear that $\det(A)\neq 0$ in the open subset $\Omega:=\{(u^1, \dots, u^n , u^{n+1})\; | \;u^i\ne u^k (i\ne k,i,k=1,...,n), u^{n+1}\ne 0\}.$ \begin{flushright} $\Box$ \end{flushright} \begin{rmk} Notice that the extended vector fields $Z_{(l)}:=\hat{E}_{(l+1)}$ satisfy the commutation relation $$[Z_{(l)},Z_{(m)}]=[\hat{E}_{(l+1)},\hat{E}_{(m+1)}]=(m-l)\hat{E}_{(m+l+1)}=(m-l)Z_{(m+l)},$$ of the centerless Virasoro algebra. \end{rmk} Theorem \ref{distri1} shows that in general multi-flat $F$-structures with more than three distinct products cannot exist.
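The commutation relations underlying the proof of Theorem \ref{distri1} can be verified symbolically; the following sketch (an editorial addition, not part of the original text, using Python/sympy) checks $[\hat{E}_{(l)},\hat{E}_{(m)}]=(m-l)\hat{E}_{(m+l-1)}$ for $n=3$, $j=1$ and $(l,m)=(2,3)$:

```python
import sympy as sp

# Editorial check (not in the original text) of the commutation relations
# [E_l, E_m] = (m - l) E_{l+m-1} for the extended vector fields
# E_l = sum_i (u^i)^l d_i - l (u^j)^{l-1} u^{n+1} d_{n+1}, with n = 3 and j = 1.
n = 3
x = sp.Matrix(sp.symbols('u1:5'))  # u^1, ..., u^4 = u^{n+1}

def E_hat(l):
    comps = [x[i]**l for i in range(n)]
    comps.append(-l * x[0]**(l - 1) * x[n])  # here j = 1 (0-based index 0)
    return sp.Matrix(comps)

def bracket(V, W):  # Lie bracket of vector fields: [V, W]^i = V(W^i) - W(V^i)
    return W.jacobian(x) * V - V.jacobian(x) * W

l, m = 2, 3
residual = bracket(E_hat(l), E_hat(m)) - (m - l) * E_hat(l + m - 1)
assert sp.simplify(residual) == sp.zeros(n + 1, 1)
print("[E_2, E_3] = E_4")
```

The same `bracket` helper also confirms the $sl(2,\mathbb{C})$ relations among $\hat{E}_{(0)},\hat{E}_{(1)},\hat{E}_{(2)}$ used in the proof.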
Indeed, if a distribution $\Delta_{(i_1, \dots, i_k)}$ is totally non-holonomic, then the only solutions of the system \eqref{orsystris} are given by functions $\phi$ that are constant everywhere and therefore they give rise to trivial Christoffel symbols $\Gamma^i_{ij}$. Below we point out that, by contrast, multi-Hamiltonian structures are allowed also in the case $N>3$. \subsection{Multi-flat structures vs multi-Hamiltonian structures}\label{multiflatvsHamiltonian} Unlike the multi-flat case we have been analyzing, it is possible to have multi-Hamiltonian structures encompassing more than three structures. An example is given by the following $n+1$ metrics introduced in \cite{FP}: \begin{equation} g_{ii}^{(\alpha)}=\frac{\prod_{k\ne i}(u^k-u^i)}{(u^i)^{\alpha}},\qquad\alpha=0,...,n. \end{equation} They are flat and thus their inverses define $n+1$ Hamiltonian structures of hydrodynamic type which turn out to be mutually compatible. The corresponding Levi-Civita connections are defined by \begin{equation} \begin{split} \Gamma^i_{ij}&=\partial_j\ln{\sqrt{g^{(\alpha)}_{ii}}}=-\frac{1}{2}\frac{1}{u^i-u^j},\\ \Gamma^{i}_{jk}&=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{i}_{jj}&=-\frac{1}{2}\frac{\partial_ig^{(\alpha)}_{jj}}{g^{(\alpha)}_{ii}},\\ \Gamma^{i}_{ii}&=\partial_i\ln{\sqrt{g^{(\alpha)}_{ii}}}. \end{split} \end{equation} Observe that the Christoffel symbols $\Gamma^i_{ij}$ coincide with those of the $\epsilon$-system in the case $\epsilon=-\frac{1}{2}$. It is also immediate to check directly that \begin{eqnarray*} E_{(0)}(\Gamma^{i}_{ij})&=&[\partial_1+\partial_2+\dots+\partial_n]\Gamma^{i}_{ij}=0,\\ E_{(1)}(\Gamma^{i}_{ij})&=&[u^1\partial_1+u^2\partial_2+\dots+u^n\partial_n]\Gamma^{i}_{ij}=- \Gamma^{i}_{ij}, \end{eqnarray*} while $$E_{(k)}(\Gamma^{i}_{ij})=[(u^1)^{k}\partial_1+(u^2)^k\partial_2+\dots+(u^n)^{k}\partial_n]\Gamma^{i}_{ij}\ne-k(u^j)^{k-1} \Gamma^{i}_{ij}$$ for $k\ge 2$.
This means that among the natural connections \begin{equation} \begin{split} \Gamma^i_{ij}&=-\frac{1}{2}\frac{1}{u^i-u^j},\\ \Gamma^{i}_{jk}&=0, \qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{i}_{jj}&=\frac{1}{2}\frac{E_{(k)}^i}{E_{(k)}^j}\frac{1}{u^i-u^j}, \qquad i\ne j,\\ \Gamma^{i}_{ii}&=\frac{1}{2}\sum_{l\ne i}\frac{E_{(k)}^l}{E_{(k)}^i}\frac{1}{u^i-u^l}-\frac{\partial_i E_{(k)}^i}{E_{(k)}^i}, \end{split} \end{equation} only those compatible with the eventual identities $E_{(0)}$ and $E_{(1)}$ are flat. \subsection{A comment on more general eventual identities} Here we drop the assumption $ E^k_{(m)}=(u^k)^m$ and we consider the case of two vector fields $\hat{E}_{(l)}, \hat{E}_{(m)}$ and see when the distribution they span is integrable. We will see that we obtain again the same results we proved assuming $E^k_{(m)}=(u^k)^m.$ To this end we compute \begin{eqnarray}\label{comgen1} &&[\hat{E}_{(l)}, \hat{E}_{(m)}]^k=E^k_{(l)}\partial_k E^k_{(m)}-E^k_{(m)}\partial_k E^k_{(l)}, \quad k=1, \dots n\\\label{comgen2} &&[\hat{E}_{(l)}, \hat{E}_{(m)}]^{n+1}=-\left(E^j_{(l)}\partial^2_j E^j_{(m)}-E^j_{(m)}\partial^2_j E^j_{(l)} \right) u^{n+1}. \end{eqnarray} In the open set where $E^i_{(l)}(u^i)\neq 0$, $i=1, \dots n$, we can introduce a change of variables of the form $\frac{d\tilde u^i}{d u^i}=\frac{1}{E^i_{(l)}(u^i)}$, $i=1, \dots n$, so that in the new coordinates $\tilde u^i$ the vector field $E_{(l)}$ becomes ${\tilde E}^i_{(l)}=\frac{d\tilde u^i}{d u^i}E^i_{(l)}=1$ for each $i=1,\dots n$. Since the vector field commutators above do not depend on the coordinate system, we can use them also in the coordinate system $\tilde u^i$ (we drop the tilde in $\tilde E_{(l)}$ and so on to simplify the notation). So without loss of generality on an open set, we can assume $E_{(l)}=e=(1, \dots, 1)$ while $E_{(m)}=E=(E^1(u^1),\dots, E^n(u^n))$ is arbitrary.
Substituting this information in the computation of vector field commutators above, we get $[\hat{E}_{(0)}, \hat{E}_{(m)}]^i=\partial_i E^i$, $i=1, \dots n$ and $[\hat{E}_{(0)}, \hat{E}_{(m)}]^{n+1}=-\partial_j^2E^ju^{n+1}.$ Imposing that the distribution $\Delta$ spanned by $\hat{E}_{(0)}, \hat{E}_{(m)}$ is integrable, i.e. $[\hat{E}_{(0)}, \hat{E}_{(m)}]=\alpha \hat{E}_{(0)}+\beta \hat{E}_{(m)}$ for some functions $\alpha(u^1, \dots, u^n, u^{n+1}), \beta(u^1, \dots, u^n, u^{n+1})$ we get: \begin{eqnarray}\label{biflatsystem} &&\partial_i E^i=\alpha +\beta E^i, \quad i=1, \dots, n\\ \label{biflatsystem2} &&\partial_j^2 E^j=\beta\partial_j E^j, \quad \text{ for fixed }j. \end{eqnarray} \begin{lemma} Suppose that $\partial_j E^j\ne 0$; then the functions $\alpha$ and $\beta$ in the system above are necessarily constants. \end{lemma} \emph{Proof} From \eqref{biflatsystem2} it follows immediately that $\beta=\beta(u^j)$. Now we turn to \eqref{biflatsystem} with $i=j$ so we have $$\partial_j E^j=\alpha(u^1, \dots u^{n+1}) +\beta(u^j) E^j.$$ Taking the derivative with respect to $u^k$, $k\neq j$ we get that $\alpha=\alpha(u^j)$ only. Finally we look at \eqref{biflatsystem} with $i\neq j$. For this we have $$ \partial_i E^i(u^i)=\alpha(u^j) +\beta(u^j) E^i(u^i).$$ If $\beta=0$, then $\alpha(u^j)$ has to be a constant $k$ and this gives $E^i=ku^i$ (up to additive constants, which can be absorbed by a translation, since translations preserve $e$) for all $i=1, \dots n$. If $\beta\neq 0$, then taking again the derivative with respect to $u^i$ we get $$\partial_i^2 E^i(u^i)=\beta(u^j)\partial_i E^i(u^i).$$ Assuming $\partial_i E^i(u^i)\neq 0$ this gives that $\beta$ has to be a constant $k$. Going back to $\partial_i E^i(u^i)=\alpha(u^j) +k E^i(u^i)$ and taking the derivative with respect to $u^j$ shows that $\alpha$ is also a constant. \begin{flushright} $\Box$ \end{flushright} Now we classify eventual identities $\{e, E\}$ related to bi-flat structures. \begin{thm} Consider a bi-flat structure $\{e, E\}$.
Then $E$ is either given by $\alpha (u^1, \dots, u^n)$ with $\alpha$ a non-zero constant or $E=(c_1 e^{\beta u^1}-\frac{\alpha}{\beta}, \dots, c_n e^{\beta u^n}-\frac{\alpha}{\beta}).$ \end{thm} \emph{Proof} Since $\alpha$ and $\beta$ are constants, equation \eqref{biflatsystem2} is just a consequence of equation \eqref{biflatsystem} for $i=j$. Therefore it is enough to classify all the solutions of \eqref{biflatsystem}. If $\beta=0$ and $\alpha\neq 0$, then $E^i=\alpha u^i$, $i=1, \dots n$, while if $\beta\neq 0$ then $E^i=c_i e^{\beta u^i}-\frac{\alpha}{\beta}$, $i=1, \dots, n$ (where the constants $c_i$ in general might be chosen to depend on $i$). \begin{flushright} $\Box$ \end{flushright} Without loss of generality we can assume $\alpha=1$ in the first case and $\alpha=0$ in the second case (the distribution does not change). Moreover, assuming $c_i \ne 0,\,\, \forall i$, we can reduce the second case to the first one with a simple change of coordinates: $\tilde{u}^i=-\frac{1}{\beta c_i}e^{-\beta u^i}$. \section{Bi-flat $F$-manifolds}\label{biflatsec} Bi-flat $F$-manifolds were introduced in \cite{AL} and further studied and classified in \cite{L2014}. In this Section we present the classification in dimensions two and three, using Tsarev's conditions instead of a generalized Darboux-Egorov system. Due to the results of the previous section, semisimple bi-flat $F$-manifolds are parametrized by the solutions of the system \begin{eqnarray} \label{BF1} &&\partial_k\Gamma^i_{ij}=-\Gamma^i_{ij}\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^j_{jk} +\Gamma^i_{ik}\Gamma^k_{kj}, \quad i\ne k\ne j\ne i,\\ \label{BF2} &&E_{(0)}(\Gamma^i_{ij})=0,\qquad i\ne j\\ \label{BF3} &&E_{(1)}(\Gamma^i_{ij})=-\Gamma^i_{ij},\qquad i\ne j \end{eqnarray} where $E_{(0)}=\sum_{i=1}^n\partial_i$ and $E_{(1)}=\sum_{i=1}^nu^i\partial_i$. It is possible to prove (see Appendix 1 for details) that the above system is compatible and thus its general solution depends on $n(n-1)$ arbitrary constants.
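The symmetry conditions \eqref{BF2} and \eqref{BF3} are easy to check symbolically on explicit solutions; for instance, for $\Gamma^i_{ij}=\frac{\epsilon_j}{u^i-u^j}$, the two-component solution presented in the next subsection (a sympy sketch, purely as a sanity check):

```python
import sympy as sp

u1, u2, e1, e2 = sp.symbols('u1 u2 epsilon1 epsilon2')
G12 = e2/(u1 - u2)   # Gamma^1_12
G21 = e1/(u2 - u1)   # Gamma^2_21

def E0(f):  # E_(0) = d_1 + d_2
    return sp.diff(f, u1) + sp.diff(f, u2)

def E1(f):  # E_(1) = u^1 d_1 + u^2 d_2
    return u1*sp.diff(f, u1) + u2*sp.diff(f, u2)

# conditions (BF2) and (BF3): E_(0)(Gamma) = 0 and E_(1)(Gamma) = -Gamma
print([sp.simplify(E0(G)) for G in (G12, G21)])      # [0, 0]
print([sp.simplify(E1(G) + G) for G in (G12, G21)])  # [0, 0]
```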
\subsection{Two dimensional bi-flat $F$-manifolds} For $n=2$ Tsarev's conditions \eqref{BF1} are empty. The general solution of the remaining conditions \eqref{BF2} and \eqref{BF3} depends on two arbitrary constants $\epsilon_1$ and $\epsilon_2$. It coincides with the two-component generalized $\epsilon$-system (see \cite{L2014}): $$\Gamma^{i}_{ij}=\frac{\epsilon_j}{u^i-u^j},\qquad i\ne j.$$ \subsection{Three-dimensional bi-flat $F$-manifolds } Three-dimensional bi-flat $F$-manifolds are parametrized by solutions of the Painlev\'e VI equation \cite{AL,L2014}. This result has been obtained by reducing a generalized version of the Darboux-Egorov system for the rotation coefficients $\beta_{ij}$ to a system of ODEs equivalent to the sigma form of Painlev\'e VI. Given a solution of Painlev\'e VI, the natural connection is defined as \begin{equation}\label{gammabeta} \Gamma^i_{ij}=\frac{H_j}{H_i}\beta_{ij}, \end{equation} where $\beta_{ij}$ is the corresponding solution of the generalized Darboux-Egorov system and the functions $H_i$ are the Lam\'e coefficients satisfying the further conditions $e(H_i)=0$ and $E(H_i)=d_iH_i$ (see \cite{AL,L2014} for details). In this Section, we follow a different approach, based on the study of the system (\ref{BF1},\ref{BF2},\ref{BF3}). In particular we show that this system is equivalent to a system of six first order ODEs admitting $4$ independent first integrals. Moreover we provide an explicit relation between the solutions of this system and the solutions of the generic Painlev\'e VI equation. The values of the $4$ parameters of the Painlev\'e VI equation are related to the values of the first integrals of the system.
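The first step below relies on the fact that $z=\frac{u^2-u^3}{u^1-u^2}$ is annihilated by both $E_{(0)}$ and $E_{(1)}$, while $\frac{1}{u^i-u^j}$ carries the required homogeneity. The resulting ansatz can be verified symbolically, with a generic smooth function $F$ standing for any of the functions $F_{ij}$ appearing below (a sympy sketch, not part of the derivation):

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
z = (u2 - u3)/(u1 - u2)
F = sp.Function('F')
Gamma = F(z)/(u1 - u2)  # the ansatz for, e.g., Gamma^1_12

def E0(f):  # E_(0) = d_1 + d_2 + d_3
    return sum(sp.diff(f, v) for v in (u1, u2, u3))

def E1(f):  # E_(1) = u^1 d_1 + u^2 d_2 + u^3 d_3
    return sum(v*sp.diff(f, v) for v in (u1, u2, u3))

print(sp.simplify(E0(z)), sp.simplify(E1(z)))                 # 0 0
print(sp.simplify(E0(Gamma)), sp.simplify(E1(Gamma) + Gamma)) # 0 0
```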
As a first step we have to solve the system \begin{eqnarray*} E_{(0)}(\Gamma^{i}_{ij})&=&[\partial_1+\partial_2+\partial_3]\Gamma^{i}_{ij}=0,\\ E_{(1)}(\Gamma^{i}_{ij})&=&[u^1\partial_1+u^2\partial_2+u^3\partial_3]\Gamma^{i}_{ij}=- \Gamma^{i}_{ij}, \end{eqnarray*} the solutions of which are given by \begin{eqnarray*} &&\Gamma^1_{12}=\frac{F_{12}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^1-u^2},\qquad\Gamma^1_{13}=\frac{F_{13}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^1-u^3},\qquad\Gamma^2_{21}=\frac{F_{21}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^2-u^1},\\ &&\Gamma^2_{23}=\frac{F_{23}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^2-u^3},\qquad\Gamma^3_{31}=\frac{F_{31}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^3-u^1},\qquad\Gamma^3_{32}=\frac{F_{32}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^3-u^2}. \end{eqnarray*} where $F_{ij}$, $i\neq j$ are arbitrary smooth functions. Imposing Tsarev's conditions and introducing the auxiliary variable $z=\frac{u^2-u^3}{u^1-u^2},$ we obtain the system \begin{equation}\label{mainsys} \begin{split} \frac{dF_{12}}{dz}&=-\frac{(F_{12}(z)F_{23}(z)-F_{12}(z)F_{13}(z))z-F_{12}(z)F_{23}(z)+F_{32}(z)F_{13}(z)}{z(z-1)},\\ \frac{dF_{21}}{dz}&=\frac{(F_{21}(z)F_{23}(z)-F_{21}(z)F_{13}(z))z+F_{23}(z)F_{31}(z)-F_{23}(z)F_{21}(z)}{z(z-1)},\\ \frac{dF_{13}}{dz}&=\frac{(F_{12}(z)F_{23}(z)-F_{12}(z)F_{13}(z))z-F_{12}(z)F_{23}(z)+F_{32}(z)F_{13}(z)}{z(z-1)},\\ \frac{dF_{31}}{dz}&=-\frac{(-F_{31}(z)F_{12}(z)+F_{21}(z)F_{32}(z))z+F_{31}(z)F_{32}(z)-F_{21}(z)F_{32}(z)}{z(z-1)},\\ \frac{dF_{23}}{dz}&=-\frac{(F_{21}(z)F_{23}(z)-F_{21}(z)F_{13}(z))z+F_{23}(z)F_{31}(z)-F_{23}(z)F_{21}(z)}{z(z-1)},\\ \frac{dF_{32}}{dz}&=\frac{(-F_{31}(z)F_{12}(z)+F_{21}(z)F_{32}(z))z+F_{31}(z)F_{32}(z)-F_{21}(z)F_{32}(z)}{z(z-1)} \end{split} \end{equation} or, in the more compact form \begin{equation}\label{mainsys2} \begin{split} \frac{d\ln F_{12}}{dz}&=-\frac{1}{z}F_{23}(z)+\frac{1}{z-1}F_{13}(z)-\frac{1}{z(z-1)}\frac{F_{13}(z)F_{32}(z)F_{21}(z)}{F_{12}(z)F_{21}(z)},\\ \frac{d\ln 
F_{21}}{dz}&=\frac{1}{z}F_{23}(z)-\frac{1}{z-1}F_{13}(z)+\frac{1}{z(z-1)}\frac{F_{12}(z)F_{23}(z)F_{31}(z)}{F_{12}(z)F_{21}(z)},\\ \frac{d\ln F_{13}}{dz}&=-\frac{1}{z-1}F_{12}(z)+\frac{1}{z(z-1)}F_{32}(z)+\frac{1}{z}\frac{F_{12}(z)F_{23}(z)F_{21}(z)}{F_{13}(z)F_{31}(z)},\\ \frac{d\ln F_{31}}{dz}&=\frac{1}{z-1}F_{12}(z)-\frac{1}{z(z-1)}F_{32}(z)-\frac{1}{z}\frac{F_{21}(z)F_{13}(z)F_{32}(z)}{F_{13}(z)F_{31}(z)},\\ \frac{d\ln F_{23}}{dz}&=-\frac{1}{z}F_{21}(z)-\frac{1}{z(z-1)}F_{31}(z)+\frac{1}{z-1}\frac{F_{32}(z)F_{13}(z)F_{21}(z)}{F_{23}(z)F_{32}(z)},\\ \frac{d\ln F_{32}}{dz}&=\frac{1}{z}F_{21}(z)+\frac{1}{z(z-1)}F_{31}(z)-\frac{1}{z-1}\frac{F_{12}(z)F_{23}(z)F_{31}(z)}{F_{23}(z)F_{32}(z)}.\\ \end{split} \end{equation} It is straightforward to check that the above system admits three linear first integrals \begin{eqnarray} \label{I1} I_1&=&F_{12}+F_{13},\\ \label{I2} I_2&=&F_{23}+F_{21},\\ \label{I3} I_3&=&F_{31}+F_{32}, \end{eqnarray} and one quadratic first integral \begin{equation}\label{I4} I_4=F_{31}F_{13}+F_{12}F_{21}+F_{23}F_{32}. \end{equation} We consider also the cubic first integral \begin{equation}\label{I5} I_5=-I_3I_4+I_1I_2I_3=F_{21}F_{13}F_{32}+F_{12}F_{23}F_{31}+(I_2-I_3)F_{13}F_{31}+(I_1-I_3)F_{23}F_{32}, \end{equation} where $I_1,I_2,I_3$ are given by \eqref{I1}, \eqref{I2} and \eqref{I3} respectively. On the affine subspace $S$ defined by $I_1=d_1,I_2=d_2,I_3=d_3$ we can reduce the original system of six first order ODEs to a system of three first order ODEs in the variables $F_{12}(z),F_{23}(z)$ and $F_{31}(z)$: \begin{eqnarray*} \frac{dF_{12}}{dz}&=&-\frac{(F_{12}F_{23}-F_{12}(d_1-F_{12}))z-F_{12}F_{23}+(d_3-F_{31})(d_1-F_{12})}{z(z-1)},\\ \frac{dF_{31}}{dz}&=&-\frac{(-F_{31}F_{12}+(d_2-F_{23})(d_3-F_{31}))z+F_{31}(d_3-F_{31})-(d_2-F_{23})(d_3-F_{31})}{z(z-1)},\\ \frac{dF_{23}}{dz}&=&-\frac{((d_2-F_{23})F_{23}-(d_2-F_{23})(d_1-F_{12}))z+F_{23}F_{31}-F_{23}(d_2-F_{23})}{z(z-1)}. 
\end{eqnarray*} On the subspace $S$ the functions $I_4$ and $I_5$ become dependent: $$(I_4)_{|S}=F_{31}(d_1-F_{12})+F_{12}(d_2-F_{23})+F_{23}(d_3-F_{31}),\,(I_5)_{|S}=-d_3(I_4)_{|S}+d_1d_2d_3.$$ Using this first integral we can further reduce the above system to a system of two non-autonomous first order ODEs. To prove the relation between the system \eqref{mainsys} and the Painlev\'e VI transcendents we observe that from \eqref{gammabeta} it follows immediately \begin{equation}\label{mainID} \frac{F_{ij}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^i-u^j}\frac{F_{ji}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^j-u^i}=\Gamma^i_{ij}\Gamma^j_{ji}=\beta_{ij}\beta_{ji}=\frac{\tilde{F}_{ij}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^i-u^j}\frac{\tilde{F}_{ji}\left(\frac{u^2-u^3}{u^1-u^2}\right)}{u^j-u^i}. \end{equation} The last identity is due to the fact that the rotation coefficients satisfy the conditions $$E_{(0)}\beta_{ij}=0,\qquad E_{(1)}\beta_{ij}=-\beta_{ij}.$$ The remaining condition $$\partial_k\beta_{ij}=\beta_{ik}\beta_{kj}$$ becomes a system of ODEs for the unknown functions $\tilde{F}_{ij}(z)$. This system of ODEs admits quadratic and cubic first integrals that are very similar to the first integrals $I_4$ and $I_5$ of the system \eqref{mainsys}. Up to a sign, the only difference is that in the present case the quantities $I_1,I_2,I_3$ appearing in $I_5$ are not fixed constants but first integrals, while in \cite{L2014} they coincide with the degrees of homogeneity $d_1,d_2,d_3$ of the Lam\'e coefficients. Taking into account the results of \cite{L2014} and the identity \eqref{mainID}, it is clear that a function $f(z)$ satisfying the condition $f'=F_{12}F_{21}$ must be a solution of the Painlev\'e VI equation.
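The conservation of $I_1,\dots,I_4$ along \eqref{mainsys}, and the algebraic identity defining $I_5$, can be confirmed by a direct symbolic computation (a Python/sympy sketch, independent of the argument above):

```python
import sympy as sp

z, F12, F21, F13, F31, F23, F32 = sp.symbols('z F12 F21 F13 F31 F23 F32')

# numerators of (mainsys); each right-hand side is the entry below over z(z-1)
N = (F12*F23 - F12*F13)*z - F12*F23 + F32*F13
M = (F21*F23 - F21*F13)*z + F23*F31 - F23*F21
P = (-F31*F12 + F21*F32)*z + F31*F32 - F21*F32
flow = {F12: -N, F13: N, F21: M, F23: -M, F31: -P, F32: P}

def dot(I):
    # derivative of I along the flow, up to the overall factor 1/(z(z-1))
    return sp.expand(sum(sp.diff(I, X)*V for X, V in flow.items()))

I1, I2, I3 = F12 + F13, F23 + F21, F31 + F32
I4 = F31*F13 + F12*F21 + F23*F32
print([dot(I) for I in (I1, I2, I3, I4)])  # [0, 0, 0, 0]

# the cubic first integral (I5) as an algebraic identity
I5 = F21*F13*F32 + F12*F23*F31 + (I2 - I3)*F13*F31 + (I1 - I3)*F23*F32
print(sp.expand(-I3*I4 + I1*I2*I3 - I5))   # 0
```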
Actually, it turns out that the correspondence between solutions of the system \eqref{mainsys} and solutions of the Painlev\'e VI equation is given in terms of purely {\it algebraic} operations, as is highlighted by the following theorem (see also the Appendix 2): \begin{thm}\label{thmreductionsigma} Let $(F_{12}(z),F_{21}(z),F_{13}(z),F_{31}(z),F_{23}(z),F_{32}(z))$ be a solution of the system \eqref{mainsys}; then the function $f(z)=F_{23}F_{32}+zF_{12}F_{21}-\frac{q_1}{2}$ is a solution of the equation \begin{equation}\label{PVImod} \begin{split} [z(z-1)f'']^2=&[q_2-(d_2-d_3)g_2-(d_1-d_3)g_1]^2-4f'g_1g_2, \end{split} \end{equation} where $g_1=f-zf'+\frac{q_1}{2}$ and $g_2=(z-1)f'-f+\frac{q_1}{2}$ and the parameters $d_1,d_2,d_3,q_1,q_2$ coincide with the values of the first integrals $I_1,I_2,I_3,I_4,I_5$ on the given solution of \eqref{mainsys}. Furthermore, equation \eqref{PVImod} can be reduced to the sigma form of the generic Painlev\'e VI equation. \end{thm} \emph{Proof} Let $(F_{12}(z),F_{21}(z),F_{13}(z),F_{31}(z),F_{23}(z),F_{32}(z))$ be a solution of the system \eqref{mainsys} and $d_1,d_2,d_3,q_1,q_2$ the corresponding values of the first integrals $I_1,I_2,I_3,I_4,I_5$. In analogy with \cite{AL,L2014} we introduce the function $f(z)=F_{23}F_{32}+zF_{12}F_{21}-\frac{q_1}{2}$ satisfying $f'=F_{12}F_{21}$. Indeed \begin{eqnarray*} \frac{d}{dz}\left(F_{12}F_{21}\right)&=&\frac{F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}}{z(z-1)},\\ \frac{d}{dz}\left(F_{23}F_{32}\right)&=&-\frac{F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}}{z-1},\\ \frac{d}{dz}\left(F_{13}F_{31}\right)&=&\frac{F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}}{z}. \end{eqnarray*} Summarizing we have \begin{eqnarray*} F_{12}F_{21}&=&f',\\ F_{23}F_{32}&=&g_1:=f-zf'+\frac{q_1}{2}.
\end{eqnarray*} Taking into account that $$F_{31}F_{13}+F_{12}F_{21}+F_{23}F_{32}=q_1,$$ we obtain $$F_{31}F_{13}=g_2:=(z-1)f'-f+\frac{q_1}{2}.$$ Using these relations we get \begin{equation*} \begin{split} [z(z-1)f'']^2=&[F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}]^2=\\ &[q_2-(d_2-d_3)F_{13}F_{31}-(d_1-d_3)F_{23}F_{32}]^2-4F_{23}F_{31}F_{12}F_{13}F_{32}F_{21}=\\ &[q_2-(d_2-d_3)g_2-(d_1-d_3)g_1]^2-4f'g_1g_2. \end{split} \end{equation*} Up to an inessential sign the above equation coincides with the equation $(4.3)$ appearing in \cite{L2014} and, as a consequence, it is equivalent to the sigma form of the generic Painlev\'e VI equation (see \cite{L2014} for details). \begin{flushright} $\Box$ \end{flushright} This proves that each solution of \eqref{mainsys} determines a specific Painlev\'e VI equation (namely it fixes its parameters) and it identifies a unique solution of the corresponding Painlev\'e VI equation itself. Moreover the correspondence is clearly algebraic. The converse statement is also true and it will be proved in the Appendix 2. \section{Tri-flat $F$-manifolds}\label{triflatsec} In this Section we provide a complete classification of tri-flat $F$-manifolds in dimension two and a partial classification in dimension three. We first briefly discuss the relation between tri-flat $F$-manifolds and tri-Hamiltonian Frobenius manifolds, introduced and studied in \cite{Ro}. \subsection{Tri-flat $F$ manifolds and the augmented Darboux-Egorov system} \emph{Tri-Hamiltonian} Frobenius manifolds exist only in even dimensions \cite{Ro}, while we will see that general tri-flat structures exist also in odd dimensions. Notice that (semisimple) Frobenius manifolds are special examples of bi-flat $F$-manifolds as was pointed out in Section \ref{frobsubsection}, but tri-Hamiltonian Frobenius manifolds, in general, do not constitute a special subclass of tri-flat $F$-manifolds. To see this, we proceed as follows. 
First of all, tri-Hamiltonian Frobenius manifolds are related to solutions of the following system (see \cite{Ro}) $$E_{(0)}(\beta_{ij})=0,\qquad E_{(1)}(\beta_{ij})=-\beta_{ij},\qquad E_{(2)}(\beta_{ij})=-(u^i+u^j)\beta_{ij},$$ and the last equation in general is not compatible with $E_{(2)}(\Gamma^{i}_{ij})=-2u^j \Gamma^{i}_{ij}$. We have the following theorem that elucidates the relationship between tri-flat $F$-manifolds and the augmented Darboux-Egorov system for the rotation coefficients $\beta_{ij}$ and for the Lam\'e coefficients $H_i$. \begin{thm} Let $\beta_{ij},\,i\ne j$ be a solution of the system \begin{eqnarray} \label{ED1} &&\partial_k\beta_{ij}=\beta_{ik}\beta_{kj},\qquad k\ne i\ne j\ne k\\ \label{ED2} &&E_{(0)}(\beta_{ij})=0,\\ \label{ED3} &&E_{(1)}(\beta_{ij})=-\beta_{ij},\\ \label{ED4} &&E_{(2)}(\beta_{ij})=[2d_iu^i-2(d_j+1) u^j+c_i-c_j]\beta_{ij}, \end{eqnarray} (where $d_1,...,d_n$, $c_1,...,c_n$ are constants) and let $(H_1,...,H_n)$ be a solution of the system \begin{eqnarray} \label{L1} &&\partial_j H_i=\beta_{ij}H_j,\qquad i\ne j\\ \label{L2} &&E_{(0)}(H_i)=0,\\ \label{L3} &&E_{(1)}(H_i)=d_iH_i,\\ \label{L4} &&E_{(2)}(H_i)=(2d_iu^i+c_i)H_i.
\end{eqnarray} Then \begin{itemize} \item the connection $\nabla^{(0)}$ defined by \begin{equation}\label{naturalc1} \begin{split} \Gamma^{(0)i}_{jk}&:=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{(0)i}_{jj}&:=-\Gamma^{(0)i}_{ij},\qquad i\ne j,\\ \Gamma^{(0)i}_{ij}&:=\frac{H_j}{H_i}\beta_{ij},\qquad i\ne j,\\ \Gamma^{(0)i}_{ii}&:=-\sum_{l\ne i}\Gamma^{(0)i}_{li}, \end{split} \end{equation} \item the connection $\nabla^{(1)}$ defined by \begin{equation}\label{dualnabla} \begin{split} \Gamma^{(1)i}_{jk}&:=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{(1)i}_{jj}&:=-\frac{u^i}{u^j}\Gamma^{(1)i}_{ij},\qquad i\ne j,\\ \Gamma^{(1)i}_{ij}&:=\frac{H_j}{H_i}\beta_{ij},\qquad i\ne j,\\ \Gamma^{(1)i}_{ii}&:=-\sum_{l\ne i}\frac{u^l}{u^i}\Gamma^{(1)i}_{li}-\frac{1}{u^i}, \end{split} \end{equation} \item the connection $\nabla^{(2)}$ defined by \begin{equation}\label{dualnablaplus} \begin{split} \Gamma^{(2)i}_{jk}&:=0,\qquad\forall i\ne j\ne k \ne i,\\ \Gamma^{(2)i}_{jj}&:=-\frac{(u^i)^2}{(u^j)^2}\Gamma^{(2)i}_{ij},\qquad i\ne j,\\ \Gamma^{(2)i}_{ij}&:=\frac{H_j}{H_i}\beta_{ij},\qquad i\ne j,\\ \Gamma^{(2)i}_{ii}&:=-\sum_{l\ne i}\frac{(u^l)^2}{(u^i)^2}\Gamma^{(2)i}_{li}-\frac{2}{u^i}, \end{split} \end{equation} \item the vector fields $E_{(0)},E_{(1)},E_{(2)}$ and the corresponding products $\circ_{(0)}, \circ_{(1)}, \circ_{(2)}$, \end{itemize} define a semisimple tri-flat $F$-manifold. Moreover any semisimple tri-flat $F$-manifold can be obtained in this way. \end{thm} \noindent \emph{Proof}. Given a solution of the system (\ref{ED1},\ref{ED2},\ref{ED3},\ref{ED4},\ref{L1},\ref{L2},\ref{L3},\ref{L4}), proving that the above formulas define a semisimple tri-flat $F$-manifold is an elementary, straightforward computation. The converse statement was partially proved in \cite{L2014} (the part involving $E_{(0)}$ and $E_{(1)}$). The part involving $E_{(2)}$ can be proved in a similar way.
First we observe that $E_{(2)}(\Gamma^{(k)i}_{ij})=-2u^j \Gamma^{(k)i}_{ij}$ for $k=0,1,2$, for $i\neq j$, because we are starting from a tri-flat $F$-manifold. Since $\Gamma^{(k)i}_{ij}=\beta_{ij}\frac{H_j}{H_i}$, we can rewrite $\Gamma^{(k)i}_{ij}=\partial_j \ln(H_i)$ if $\partial_j H_i=\beta_{ij}H_j$. Now we obtain $$\partial_j\left(E_{(2)}(\ln{H_i})\right)=E_{(2)}\left(\partial_j\ln{H_i}\right)+2u^j\partial_j\ln{H_i}=E_{(2)}(\Gamma^{(k)i}_{ij})+2u^j \Gamma^{(k)i}_{ij}=0,\qquad\forall j\ne i.$$ This is equivalent to $\frac{\partial_j E_{(2)}(H_i)}{E_{(2)}(H_i)}=\frac{\partial_j H_i}{H_i}$, whose solution is $E_{(2)}(H_i)=f_i(u^i)H_i.$ To check that equation \eqref{L4} is satisfied, we need to compute the coefficients $f_i(u^i)$. To do so we determine their derivatives \begin{eqnarray*} \partial_i f_i&=&\partial_i\left(\frac{E_{(2)}(H_i)}{H_i}\right)=\frac{E_{(2)}(\partial_i H_i)+2u^i\partial_i H_i}{H_i}-\frac{E_{(2)}(H_i)\partial_i H_i}{H_i^2}=\\ &&\frac{E_{(2)}\left(-\sum_{l\ne i}\partial_l H_i\right)-2u^i\sum_{l\ne i}\partial_l H_i+f_i\sum_{l\ne i}\partial_l H_i}{H_i},\\ \end{eqnarray*} where we have used equation \eqref{L2}, which is equivalent to $\partial_i H_i=-\sum_{l\neq i}\partial_l H_i$ (we are allowed to use it since by the results of \cite{L2014} we already know the converse for $E_{(0)}$ and $E_{(1)}$). Using $E_{(2)}(-\partial_l H_i)=2u^l \partial_l H_i-\partial_l E_{(2)}(H_i)$ and the fact that $\partial_l E_{(2)}(H_i)=f_i\partial_l H_i$, the last expression becomes \begin{eqnarray*} &&\frac{2\sum_{l\ne i}u^l\partial_l H_i-2u^i\sum_{l\ne i}\partial_l H_i}{H_i}=\\ &&\frac{2\sum_{l=1}^nu^l\partial_l H_i-2u^i\sum_{l=1}^n\partial_l H_i}{H_i}=2d_i,\\ \end{eqnarray*} by equations \eqref{L2} and \eqref{L3}. This means that the Lam\'e coefficients $H_i$ satisfy the condition \begin{equation}\label{LC} E_{(2)}(H_i)=(2d_iu^i+c_i)H_i, \end{equation} where $d_i=\frac{E_{(1)}(H_i)}{H_i}$ and $c_i$ are constants.
\begin{flushright} $\Box$ \end{flushright} Comparing \eqref{ED4} with Romano's condition (see \cite{Ro}) $$E_{(2)}(\beta_{ij})=-(u^i+u^j)\beta_{ij},$$ we observe that they coincide iff $d_i=d_j=-\frac{1}{2}$ and $c_i=c_j$. \subsection{Three-dimensional tri-flat $F$-manifolds } Let us consider the case corresponding to the subalgebra generated by $Z_{(-1)},Z_{(0)},Z_{(1)}$ (or, which is equivalent, the subalgebra generated by $\hat{E}_{(0)}, \hat{E}_{(1)}, \hat{E}_{(2)}$). First of all we have to solve the systems (for $j=1,2,3$) \begin{eqnarray*} E_{(0)}(\Gamma^{i}_{ij})&=&[\partial_1+\partial_2+\partial_3]\Gamma^{i}_{ij}=0,\\ E_{(1)}(\Gamma^{i}_{ij})&=&[u^1\partial_1+u^2\partial_2+u^3\partial_3]\Gamma^{i}_{ij}=- \Gamma^{i}_{ij},\\ E_{(2)}(\Gamma^{i}_{ij})&=&[(u^1)^{2}\partial_1+(u^2)^2\partial_2+(u^3)^{2}\partial_3]\Gamma^{i}_{ij}=-2u^j \Gamma^{i}_{ij}. \end{eqnarray*} The general solution is given by \begin{eqnarray*} &&\Gamma^1_{12}=\frac{C_{12}(u^3-u^1)}{(u^2-u^1)(u^2-u^3)},\,\Gamma^1_{13}=\frac{C_{13}(u^1-u^2)}{(u^3-u^1)(u^3-u^2)},\,\Gamma^2_{21}=\frac{C_{21}(u^2-u^3)}{(u^1-u^3)(u^1-u^2)},\\ &&\Gamma^2_{23}=\frac{C_{23}(u^1-u^2)}{(u^3-u^1)(u^3-u^2)},\,\Gamma^3_{31}=\frac{C_{31}(u^2-u^3)}{(u^1-u^3)(u^1-u^2)},\,\Gamma^3_{32}=\frac{C_{32}(u^3-u^1)}{(u^2-u^1)(u^2-u^3)}, \end{eqnarray*} where $C_{12},C_{21},C_{13},C_{31},C_{23},C_{32}$ are arbitrary constants. Imposing Tsarev's condition we obtain immediately the following constraints $$C_{13}=-C_{12},\qquad C_{23}=-C_{21},\qquad C_{32}=-C_{31},\qquad C_{12}+C_{23}+C_{31}=1.$$ \begin{rmk} In some cases it might be convenient to work in canonical coordinates for the dual product rectifying the Euler vector field. 
In this case the generators of the $sl(2, \mathbb{C})$ algebra have the exponential form \begin{equation}\label{distri2eq}\hat{E}_{(l)}:=e^{lu^1}\partial_1+\dots+e^{lu^n}\partial_n-le^{lu^j}u^{n+1}\partial_{n+1},\quad l=-1,0,1.\end{equation} All the formulas obtained in this paper can be immediately rephrased in this dual framework. For instance, the Christoffel symbols defining three dimensional tri-flat $F$-manifolds have the following form \begin{eqnarray*} &&\Gamma^1_{12}=\frac{C_{12}(e^{u^3}-e^{u^1})e^{u^2}}{(e^{u^2}-e^{u^3})(e^{u^2}-e^{u^1})},\, \Gamma^1_{13}=-\frac{C_{13}(e^{u^1}-e^{u^2})e^{u^3}}{(e^{u^3}-e^{u^1})(e^{u^3}-e^{u^2})},\,\\ &&\Gamma^2_{21}=-\frac{C_{21}(e^{u^2}-e^{u^3})e^{u^1}}{(e^{u^1}-e^{u^3})(e^{u^1}-e^{u^2})},\,\,\Gamma^2_{23}=\frac{C_{23}(e^{u^1}-e^{u^2})e^{u^3}}{(e^{u^3}-e^{u^1})(e^{u^3}-e^{u^2})},\\ &&\Gamma^3_{31}=\frac{C_{31}(e^{u^2}-e^{u^3})e^{u^1}}{(e^{u^1}-e^{u^3})(e^{u^1}-e^{u^2})},\,\Gamma^3_{32}=-\frac{C_{32}(e^{u^3}-e^{u^1})e^{u^2}}{(e^{u^2}-e^{u^3})(e^{u^2}-e^{u^1})}, \end{eqnarray*} with $$C_{13}=-C_{12},\qquad C_{21}=-C_{23},\qquad C_{32}=-C_{31},\qquad C_{12}+C_{23}+C_{31}=1.$$ \end{rmk} \subsection{Four-dimensional tri-flat $F$-manifolds} The study of tri-flat $F$-manifolds in higher dimensions is much more complicated due to the appearance of functional parameters. For instance, four-dimensional tri-flat $F$-manifolds are related to the solutions of the system (with $j=1,2,3,4$) \begin{eqnarray*} E_{(0)}(\Gamma^{i}_{ij})&=&[\partial_1+\partial_2+\partial_3+\partial_4]\Gamma^{i}_{ij}=0,\\ E_{(1)}(\Gamma^{i}_{ij})&=&[u^1\partial_1+u^2\partial_2+u^3\partial_3+u^4\partial_4]\Gamma^{i}_{ij}=- \Gamma^{i}_{ij},\\ E_{(2)}(\Gamma^{i}_{ij})&=&[(u^1)^{2}\partial_1+(u^2)^2\partial_2+(u^3)^{2}\partial_3+(u^4)^{2}\partial_4]\Gamma^{i}_{ij}=-2u^j \Gamma^{i}_{ij}, \end{eqnarray*} satisfying Tsarev's conditions \eqref{rt1}.
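The first reduction step below uses the fact that the cross-ratio $z=\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}$ is a joint invariant of $E_{(0)}$, $E_{(1)}$ and $E_{(2)}$, so that the symmetry conditions only constrain the dependence of the $F_{ij}$ on $z$. A quick symbolic check (a sympy sketch, purely for illustration):

```python
import sympy as sp

u = sp.symbols('u1:5')  # u1, u2, u3, u4
z = (u[0] - u[1])*(u[2] - u[3])/((u[1] - u[2])*(u[0] - u[3]))

def E(m, f):
    # E_(m) = sum_i (u^i)^m d/du^i
    return sum(v**m*sp.diff(f, v) for v in u)

print([sp.simplify(E(m, z)) for m in (0, 1, 2)])  # [0, 0, 0]
```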
After the first step we obtain \begin{eqnarray*} \Gamma^i_{i1}&=& F_{i1}\left(\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}\right)\frac{u^3-u^2}{(u^1-u^3)(u^1-u^2)},\quad i=2,3,4,\\ \Gamma^i_{i2}&=& F_{i2}\left(\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}\right)\frac{u^3-u^1}{(u^2-u^3)(u^2-u^1)},\quad i=1,3,4,\\ \Gamma^i_{i3}&=& F_{i3}\left(\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}\right)\frac{u^2-u^1}{(u^3-u^1)(u^3-u^2)},\quad i=1,2,4,\\ \Gamma^i_{i4}&=& F_{i4}\left(\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}\right)\frac{u^1-u^3}{(u^4-u^1)(u^4-u^3)},\quad i=1,2,3. \end{eqnarray*} The second step seems very difficult. We have to solve a system of 24 equations (Tsarev's conditions) for the 12 unknown functions $F_{ij}$. This system can be written as a system of ODEs (\emph{two for each unknown function}) in the variable $z=\frac{(u^1-u^2)(u^3-u^4)}{(u^2-u^3)(u^1-u^4)}$ for the unknown functions $F_{ij}(z):$ \begin{eqnarray*} &&\frac{dF_{12}}{dz}=-\frac{-F_{12}F_{13}+F_{12}F_{23}+F_{32}F_{13}+F_{12}}{z-1}=-\frac{-F_{42}F_{14}+F_{12}F_{14}-F_{12}F_{24}}{z},\\ &&\frac{dF_{13}}{dz}=\frac{F_{12}F_{23}-F_{12}F_{13}+F_{32}F_{13}-F_{13}}{z}=\frac{-F_{14}F_{13}+F_{14}F_{43}+F_{34}F_{13}}{z},\\ &&\frac{dF_{14}}{dz}=-\frac{-F_{42}F_{14}+F_{12}F_{14}-F_{12}F_{24}}{z}=-\frac{(F_{34}F_{13}+F_{14}F_{43}-F_{14}F_{13})z+F_{14}}{z(z-1)},\\ &&\frac{dF_{21}}{dz}=-\frac{F_{23}F_{21}-F_{13}F_{21}-F_{23}F_{31}+F_{21}}{z-1}=-\frac{-F_{24}F_{21}+F_{24}F_{41}+F_{14}F_{21}}{z},\\ &&\frac{dF_{23}}{dz}=-\frac{-F_{13}F_{21}+F_{23}F_{21}-F_{23}F_{31}-F_{23}}{(z-1)z}=\frac{F_{23}F_{34}-F_{23}F_{24}+F_{43}F_{24}}{z},\\ &&\frac{dF_{24}}{dz}=\frac{F_{14}F_{21}-F_{24}F_{21}+F_{24}F_{41}-F_{24}z}{(z-1)z}=-\frac{z(F_{34}F_{23}-F_{24}F_{23}+F_{24}F_{43})+F_{24}}{(z-1)z},\\ &&\frac{dF_{31}}{dz}=-\frac{-F_{31}F_{14}+F_{31}F_{34}-F_{41}F_{34}}{z}=\frac{F_{31}F_{12}+F_{21}F_{32}-F_{31}F_{32}+F_{31}}{z},\\ 
&&\frac{dF_{32}}{dz}=\frac{F_{31}F_{12}+F_{21}F_{32}-F_{31}F_{32}-F_{32}}{(z-1)z}=\frac{F_{34}F_{42}-F_{34}F_{32}+F_{24}F_{32}}{z},\\ &&\frac{dF_{34}}{dz}=-\frac{F_{31}F_{34}-F_{41}F_{34}-F_{31}F_{14}+F_{34}z}{(z-1)z}=\frac{F_{34}F_{42}-F_{34}F_{32}+F_{24}F_{32}}{z},\\ &&\frac{dF_{41}}{dz}=\frac{F_{41}F_{12}+F_{21}F_{42}-F_{41}F_{42}+F_{41}}{z}=-\frac{F_{31}F_{43}+F_{41}F_{13}-F_{41}F_{43}-F_{41}}{z-1},\\ &&\frac{dF_{42}}{dz}=\frac{F_{41}F_{12}+F_{21}F_{42}-F_{41}F_{42}-F_{42}}{(z-1)z}=-\frac{F_{42}F_{23}-F_{42}F_{43}+F_{32}F_{43}+F_{42}}{z-1},\\ &&\frac{dF_{43}}{dz}=\frac{F_{31}F_{43}-F_{41}F_{43}+F_{41}F_{13}+F_{43}}{(z-1)z}=\frac{F_{42}F_{23}-F_{42}F_{43}+F_{32}F_{43}-F_{43}}{z}. \end{eqnarray*} Comparing the right-hand sides of the above equations we obtain some constraints on the functions $F_{ij}$. We have the following relations \begin{eqnarray*} (z-1)\frac{dF_{12}(z)}{dz}+F_{12}(z)&=&-z\frac{dF_{13}(z)}{dz}-F_{13}(z),\\ \frac{dF_{12}(z)}{dz}&=&\frac{dF_{14}(z)}{dz},\\ -(z-1)\frac{dF_{14}(z)}{dz}-\frac{F_{14}(z)}{z}&=&z\frac{dF_{13}(z)}{dz},\\ z(z-1)\frac{dF_{23}(z)}{dz}-F_{23}(z)&=&-(z-1)\frac{dF_{21}(z)}{dz}+F_{21}(z),\\ z\frac{dF_{21}(z)}{dz}&=&z(z-1)\frac{dF_{24}(z)}{dz}+zF_{24}(z),\\ -(z-1)\frac{dF_{24}(z)}{dz}-\frac{F_{24}}{z}&=&z\frac{dF_{23}(z)}{dz},\\ z(z-1)\frac{dF_{34}(z)}{dz}+zF_{34}(z)&=&z\frac{dF_{31}(z)}{dz},\\ z(z-1)\frac{dF_{32}(z)}{dz}+F_{32}(z)&=&z\frac{dF_{31}(z)}{dz}-F_{31}(z),\\ \frac{dF_{32}(z)}{dz}&=&\frac{dF_{34}(z)}{dz},\\ z(z-1)\frac{dF_{42}(z)}{dz}+F_{42}(z)&=&z\frac{dF_{41}(z)}{dz}-F_{41}(z),\\ z(z-1)\frac{dF_{43}(z)}{dz}-F_{43}(z)&=&-(z-1)\frac{dF_{41}(z)}{dz}+F_{41}(z),\\ z\frac{dF_{43}(z)}{dz}+F_{43}(z)&=&-(z-1)\frac{dF_{42}(z)}{dz}-F_{42}(z),\\ \end{eqnarray*} which imply \begin{eqnarray*} F_{14}(z)-F_{12}(z)&=&C_1,\\ zF_{13}(z)+(z-1)F_{12}(z)&=&C_1,\\ F_{32}(z)-F_{34}(z)&=&C_2,\\ (z-1) F_{34}(z)-F_{31}(z)&=&C_2,\\ -zF_{43}(z)-(z-1)F_{42}(z)&=&C_3,\\ \frac{F_{41}(z)}{z}-\frac{(z-1)}{z}F_{42}(z)&=&C_3,\\
\frac{zF_{23}(z)}{z-1}+\frac{F_{21}(z)}{z-1}&=&C_7,\\ (z-1)F_{24}(z)-F_{21}(z)&=&C_7. \end{eqnarray*} Since for each unknown we have two equations, we still have to impose that these equations coincide. In general this seems a very complicated task. However, assuming $C_1=0$ we obtain the following additional constraints \begin{eqnarray*} C_7 &=& C_2+C_3-2,\\ F_{42}(z)&=&\frac{(1 -C_3)z+F_{34}(z) (z-1)-C_2}{z-1},\\ F_{21}(z)&=&F_{34}(z) (z-1)+1-C_2,\\ F_{34}(z)&=&C_3+F_{12}(z)-1. \end{eqnarray*} After this, all the equations of the original system reduce to the first order equation \begin{equation}\label{finalEQ} \frac{dF_{12}(z)}{dz}=-\frac{F_{12}(z)[(F_{12}(z)+C_3-1)(1-z)+C_2]}{z(z-1)} \end{equation} whose general solution is given by \begin{equation} F_{12}(z) =\frac{C_9z^{C_2}(z-1)^{-C_2}}{C_8C_9z^{C_9}+{}_2F_1\left(C_2, C_9; 1+C_9; \frac{1}{z}\right)} \end{equation} where $C_9=1-C_3$ and $C_8$ is an additional integration constant. \section{Non-semisimple regular bi-flat $F$ manifolds in dimension three and Painlev\'e equations}\label{structuresec2} We consider now non-semisimple multi-flat $F$-manifolds. According to the results of Section 4, the flatness of $\nabla_{(l)}$ is equivalent to the following pair of conditions: \begin{itemize} \item $[{\rm Lie}_{E_{(l)}}, \nabla_{(l)}](T)=0, $ for any tensor field $T$. \item For all vector fields $X, Y, Z, W$ we have $$Z \circ_{(l)} R_{(l)}(W, Y )(X) +W \circ_{(l)} R_{(l)}(Y,Z)(X) + Y \circ_{(l)} R_{(l)}(Z,W)(X) = 0,$$ where $R_{(l)}$ is the Riemann operator associated to the torsionless connection $\nabla_{(l)}$. \end{itemize} In this Section we are interested in $F$-manifolds that are not semisimple, but that still satisfy a regularity condition. In order to deal with the non-semisimple regular case we will use a result of David and Hertling (see \cite{DH}) about the existence of local ``canonical coordinates'' for non-semisimple regular $F$-manifolds with an Euler vector field.
Let us summarize the main results of their work which are relevant for our situation. \begin{defi}[\cite{DH}] An $F$-manifold $(M, \circ, e, E)$ where $E$ is an Euler vector field is called \emph{regular} if for each $p\in M$ the endomorphism $L_p:=E_p \circ : T_pM \rightarrow T_pM$ has exactly one Jordan block for each distinct eigenvalue. \end{defi} Here is the result from \cite{DH} which is relevant for our analysis: \begin{thm}[\cite{DH}]\label{DavidHertlingth} Let $(M, \circ, e, E)$ be a regular $F$-manifold of dimension greater than or equal to $2$ with an Euler vector field $E$ of weight one. Furthermore assume that locally around a point $p\in M$, the operator $L$ has only one eigenvalue. Then there exists locally around $p$ a distinguished system of coordinates $\{u^1, \dots, u^n\}$ (a sort of ``generalized canonical coordinates'' for $\circ$) such that \begin{eqnarray} \label{canonical1} e&=&\partial_{u^1},\\ \label{canonical2} c^k_{ij}&=&\delta^k_{i+j-1},\\ \label{canonical3} E&=&u^1\partial_{u^1}+\dots+u^n\partial_{u^n}. \end{eqnarray} \end{thm} (Here we have performed a shift of the variables $u^1$ and $u^2$ compared to the coordinate system identified in \cite{DH} to obtain simpler formulas. In particular the operator $L_p$ has Jordan normal form with one Jordan block and all eigenvalues equal to $a$ at the point $p$ with coordinates $u^1=a, u^2=1, u^3=\dots=u^n=0$.) Let us point out that if the endomorphisms $L_p:=E_p\circ$ consist of different Jordan blocks with distinct eigenvalues, then the results of \cite{DH} can be readily extended using Hertling's Decomposition Lemma (see \cite{H}). However, in the case in which there are multiple Jordan blocks with the same eigenvalues no results are available to the best of our knowledge. In the three dimensional case, assuming regularity, one has only three possibilities.
One is the semisimple case (in which $L$ has the form $L_1$); one is the case with one Jordan block and all eigenvalues equal (this corresponds to $L=L_3$), and this is the situation we analyze in detail in the first part of the next section. In the third case (corresponding to $L=L_2$) there is a non-trivial $2\times 2$ Jordan block with one eigenvalue and a second distinct eigenvalue. This last case is analyzed in detail in the second part of the next section. \subsection{The case of one single eigenvalue and one Jordan block} In this section, we use canonical coordinates for a regular non-semisimple bi-flat $F$-manifold in dimension three to show that locally these structures are parameterized by solutions of a three-parameter second order ODE that contains the full Painlev\'e IV for a special choice of one of these parameters. \begin{thm}\label{thm1} Let $(M, \nabla_1, \nabla_2, \circ, *, e, E)$ be a regular bi-flat $F$-manifold in dimension three such that $L_p$ has three equal eigenvalues. Then there exist local coordinates $\{u^1, u^2, u^3\}$ such that \begin{enumerate} \item $e, E, \circ$ are given by \eqref{canonical1}, \eqref{canonical2}, \eqref{canonical3}.
\item The Christoffel symbols $\Gamma_{jk}^{(1)i}$ for $\nabla_1$ are given by: $$\Gamma_{23}^{(1)1}=\Gamma_{32}^{(1)1}=\Gamma_{33}^{(1)2}=\frac{F_1\left(\frac{u^3}{u^2} \right)}{u^2}, \; \Gamma_{32}^{(1)3}=\Gamma_{23}^{(1)3}=\frac{F_2\left(\frac{u^3}{u^2} \right)}{u^2},\; \Gamma_{32}^{(1)2}=\Gamma_{23}^{(1)2}=\frac{F_3\left(\frac{u^3}{u^2} \right)}{u^2},$$ $$\Gamma_{22}^{(1)1}=\frac{F_4\left(\frac{u^3}{u^2} \right)}{u^2}, \;\Gamma_{22}^{(1)2}=\frac{F_5\left(\frac{u^3}{u^2} \right)}{u^2}, \; \Gamma_{22}^{(1)3}=\frac{F_6\left(\frac{u^3}{u^2} \right)}{u^2}, \; \Gamma_{33}^{(1)3}=\frac{F_3\left(\frac{u^3}{u^2} \right)-F_4\left(\frac{u^3}{u^2} \right)}{u^2},$$ where the functions $F_1, \dots, F_6$ satisfy the system \begin{eqnarray} \label{F1} \frac{d F_1}{dz}&=&0,\\ \label{F2} \frac{d F_2}{dz}&=&2F_4F_3z+2F_2F_1z-2F_5F_1z+F_6F_1-F_2F_3+F_4-F_3,\\ \label{F3} \frac{d F_3}{dz}&=&-F_4F_3-F_2F_1+F_5F_1-F_1,\\ \label{F4} \frac{d F_4}{dz}&=&F_4F_3+F_2F_1-F_5F_1-F_1,\\ \label{F5} \frac{d F_5}{dz}&=&F_4F_3z+F_2F_1z-F_5F_1z-F_6F_1+F_2F_3+F_1z-F_3,\\ \label{F6} \frac{d F_6}{dz}&=&-2F_4F_3z^2-2F_2F_1z^2+2F_5F_1z^2-F_6F_1z+F_2F_3z+\\ \nonumber && F_4F_6-F_4z+F_2^2-F_2F_5+F_3z-F_2. \end{eqnarray} in the variable $z=\frac{u^3}{u^2}$ while the other symbols are identically zero. \item The Christoffel symbols $\Gamma_{jk}^{(2)i}$ for $\nabla_2$ are uniquely determined by the Christoffel symbols of $\nabla_1$ via the following formulas: \begin{eqnarray*} \Gamma_{11}^{(2)1}&=&\frac{1}{(u^1)^3}\left[- (u^1)^2+\Gamma_{22}^{(1)1} u^1 (u^2)^2+ \Gamma_{32}^{(1)1}(2u^1 u^2 u^3- (u^2)^3)\right],\\ \Gamma_{11}^{(2)2}&=&\frac{1}{(u^1)^3}\left[\Gamma_{22}^{(1)2}u^1 (u^2)^2+ \Gamma_{32}^{(1)2} (2u^1 u^2 u^3-(u^2)^3)+u^2 u^1+\Gamma_{32}^{(1)1}( u^1 (u^3)^2-(u^2)^2 u^3)\right],\\ \Gamma_{11}^{(2)3}&=&\frac{1}{(u^1)^3}\left[\Gamma_{22}^{(1)3} u^1 (u^2)^2+\Gamma_{32}^{(1)3}(2 u^1 u^2 u^3-(u^2)^3) +\Gamma_{32}^{(1)2}(u^1 (u^3)^2-(u^2)^2u^3)+ u^1 u^3\right.\\ &&\left. 
-(u^2)^2+\Gamma_{22}^{(1)1} ((u^2)^2 u^3-(u^3)^2u^1)\right],\\ \Gamma_{21}^{(2)1}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)1}u^1 u^2+\Gamma_{32}^{(1)1}( u^1 u^3-(u^2)^2)\right],\\ \end{eqnarray*} \begin{eqnarray*} \Gamma_{21}^{(2)2}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)2} u^1 u^2+\Gamma_{32}^{(1)2}( u^1 u^3-(u^2)^2)+ u^1-\Gamma_{32}^{(1)1} u^2 u^3\right],\\ \Gamma_{21}^{(2)3}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)3} u^1 u^2+\Gamma_{32}^{(1)3}( u^1 u^3-(u^2)^2 )+(\Gamma_{22}^{(1)1}-\Gamma_{32}^{(1)2}) u^2 u^3- u^2\right],\\ \Gamma_{31}^{(2)1}& =& -\frac{u^2}{u^1}\Gamma_{32}^{(1)1},\\ \Gamma_{31}^{(2)2}&=&-\frac{1}{u^1}\left[\Gamma_{32}^{(1)2} u^2+\Gamma_{32}^{(1)1} u^3\right],\\ \Gamma_{31}^{(2)3}&=& \frac{1}{u^1}\left[-\Gamma_{32}^{(1)3}u^2+(\Gamma_{22}^{(1)1}-\Gamma_{32}^{(1)2})u^3-1\right],\\ \Gamma_{22}^{(2)1} &=& -\frac{\Gamma_{32}^{(1)1}u^2-\Gamma_{22}^{(1)1}u^1}{u^1},\\ \Gamma_{22}^{(2)2} &=& \frac{\Gamma_{22}^{(1)2}u^1-\Gamma_{32}^{(1)1}u^3-\Gamma_{32}^{(1)2}u^2}{u^1},\\ \Gamma_{22}^{(2)3}&=&\frac{\Gamma_{22}^{(1)3}u^1-\Gamma_{32}^{(1)3}u^2+u^3\Gamma_{22}^{(1)1}-\Gamma_{32}^{(1)2}u^3-1}{u^1},\\ \Gamma_{23}^{(2)1} &=& \Gamma_{32}^{(1)1}, \; \; \Gamma_{23}^{(2)2}= \Gamma_{32}^{(1)2},\; \; \Gamma_{23}^{(2)3}= \Gamma_{32}^{(1)3},\\ \Gamma_{32}^{(2)1} &=& \Gamma_{32}^{(1)1},\; \; \Gamma_{32}^{(2)2}=\Gamma_{32}^{(1)2},\; \; \Gamma_{32}^{(2)3} = \Gamma_{32}^{(1)3}, \\ \Gamma_{33}^{(2)1}& =& 0,\; \; \Gamma_{33}^{(2)2} = \Gamma_{32}^{(1)1},\; \; \Gamma_{33}^{(2)3}= -\Gamma_{22}^{(1)1}+\Gamma_{32}^{(1)2}. \end{eqnarray*} \item The dual product $*$ is obtained via formula \eqref{nm} using $\circ$ and $E$. \end{enumerate} \end{thm} \noindent \emph{ Proof}. By the David--Hertling result there exist local coordinates such that $e, E, \circ$ are given by \eqref{canonical1}, \eqref{canonical2}, \eqref{canonical3}.
To determine the Christoffel symbols $\Gamma_{ij}^{(1)k}$ for the torsionless connection $\nabla_1$ in these coordinates we start by imposing the following conditions: \begin{itemize} \item compatibility with $\circ$: $$\Gamma_{ml}^{(1)i}c_{jk}^m-\Gamma_{lk}^{(1)m}c_{jm}^i-\Gamma_{mj}^{(1)i}c_{lk}^m+\Gamma_{jk}^{(1)m}c_{lm}^i=0, \; 1\leq l,j,k \leq 3,$$ \item symmetry of the connection: $$\Gamma_{ij}^{(1)k}=\Gamma_{ji}^{(1)k},$$ \item flatness of unity: $$\nabla e=0 \iff \Gamma_{1j}^{(1)i}=0.$$ \end{itemize} This provides a system of algebraic equations for $\Gamma^{(1)k}_{ij}$. These symbols are in general functions of $u^1, u^2, u^3$. Imposing the commutativity of $\nabla_1$ and $\rm{Lie}_e$, coming from the flatness of $\nabla_1$ (see Remark \ref{rmk1}), we obtain that the symbols $\Gamma^{(1)k}_{ij}$ do not depend on $u^1$. Now we link the Christoffel symbols of $\nabla_{(2)}$ to the Christoffel symbols of $\nabla_{(1)}$: imposing that the two connections are almost hydrodynamically equivalent, i.e. $d_{\nabla_{(1)}}\left( X \circ\right)=d_{\nabla_{(2)}}\left( X \circ \right)$, we obtain the following constraints on $\Gamma_{ij}^{(2)k}$:
$$\Gamma_{2 2}^{(2) 1} = \Gamma_{22}^{(1)1}(u^2, u^3)+\Gamma_{31}^{(2) 1}(u^2, u^3),$$ $$\Gamma_{2 2}^{(2) 2} = \Gamma_{22}^{(1)2}(u^2, u^3)+\Gamma_{31}^{(2) 2}(u^2,u^3),$$ $$ \Gamma_{22}^{(2)3} = \Gamma_{2 2}^{(1) 3}(u^2, u^3)+\Gamma_{31}^{(2) 3}(u^2,u^3), $$ $$\Gamma_{23}^{(2)1} = \Gamma_{32}^{(1)1}(u^2, u^3), $$ $$\Gamma_{23}^{(2)2}= \Gamma_{32}^{(1)2}(u^2, u^3),$$ $$\Gamma_{23}^{(2)3}= \Gamma_{32}^{(1)3}(u^2, u^3),$$ $$ \Gamma_{32}^{(2)1} = \Gamma_{32}^{(1)1}(u^2, u^3),$$ $$\Gamma_{32}^{(2)2}=\Gamma_{32}^{(1)2}(u^2, u^3),$$ $$\Gamma_{32}^{(2)3} = \Gamma_{32}^{(1)3}(u^2, u^3), $$ $$\Gamma_{33}^{(2)1}=0,$$ $$\Gamma_{33}^{(2)2} = \Gamma_{32}^{(1)1}(u^2, u^3),$$ $$\Gamma_{33}^{(2)3}= -\Gamma_{22}^{(1)1}(u^2, u^3)+\Gamma_{32}^{(1)2}(u^2, u^3).$$ Imposing that $\nabla_{(2)}E=0$ and using the constraints obtained so far, we are able to express uniquely all the Christoffel symbols of $\nabla_{(2)}$ in terms of the Christoffel symbols of $\nabla_{(1)}$. Indeed we get the further constraints: \begin{eqnarray*} \Gamma_{11}^{(2)1}&=&\frac{1}{(u^1)^3}\left[- (u^1)^2+\Gamma_{22}^{(1)1} u^1 (u^2)^2+ \Gamma_{32}^{(1)1}(2u^1 u^2 u^3- (u^2)^3)\right],\\ \Gamma_{11}^{(2)2}&=&\frac{1}{(u^1)^3}\left[\Gamma_{22}^{(1)2}u^1 (u^2)^2+ \Gamma_{32}^{(1)2} (2u^1 u^2 u^3-(u^2)^3)+u^2 u^1+\Gamma_{32}^{(1)1}( u^1 (u^3)^2-(u^2)^2 u^3)\right],\\ \Gamma_{11}^{(2)3}&=&\frac{1}{(u^1)^3}\left[\Gamma_{22}^{(1)3} u^1 (u^2)^2+\Gamma_{32}^{(1)3}(2 u^1 u^2 u^3-(u^2)^3) +\Gamma_{32}^{(1)2}(u^1 (u^3)^2-(u^2)^2u^3)+ u^1 u^3\right.\\ &&\left.
-(u^2)^2+\Gamma_{22}^{(1)1} ((u^2)^2 u^3-(u^3)^2u^1)\right],\\ \Gamma_{21}^{(2)1}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)1}u^1 u^2+\Gamma_{32}^{(1)1}( u^1 u^3-(u^2)^2)\right],\\ \Gamma_{21}^{(2)2}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)2} u^1 u^2+\Gamma_{32}^{(1)2}(u^1 u^3-(u^2)^2)+ u^1-\Gamma_{32}^{(1)1} u^2 u^3\right],\\ \end{eqnarray*} \begin{eqnarray*} \Gamma_{21}^{(2)3}&=&-\frac{1}{(u^1)^2}\left[\Gamma_{22}^{(1)3} u^1 u^2+\Gamma_{32}^{(1)3}( u^1 u^3-(u^2)^2 )+(\Gamma_{22}^{(1)1}-\Gamma_{32}^{(1)2}) u^2 u^3- u^2\right],\\ \Gamma_{31}^{(2)1}& =& -\frac{u^2}{u^1}\Gamma_{32}^{(1)1},\\ \Gamma_{31}^{(2)2}&=&-\frac{1}{u^1}\left[\Gamma_{32}^{(1)2} u^2+\Gamma_{32}^{(1)1} u^3\right],\\ \Gamma_{31}^{(2)3}&=& \frac{1}{u^1}\left[-\Gamma_{32}^{(1)3}u^2+(\Gamma_{22}^{(1)1}-\Gamma_{32}^{(1)2})u^3-1\right]. \end{eqnarray*} Now we use the expression of the Euler vector field in the canonical coordinates and impose the commutativity of $\nabla_{(2)}$ with $\rm{Lie}_E$, coming from the flatness of $\nabla_{(2)}$. Let $T$ be a generic vector field; then we impose $$\rm{Lie}_E\nabla_{(2)j} T^i-\nabla_{(2)j} (\rm{Lie}_E T^i)=0,$$ that is $$E(\partial_j T^i+\Gamma^{(2)i}_{jk}T^k)-(\nabla_{(2)j} T^l)\partial_l E^i+(\nabla_{(2)l} T^i)\partial_j E^l-\partial_j(\rm{Lie}_ET^i)-\Gamma^{(2)i}_{jk}\rm{Lie}_ET^k=0.$$ Expanding further we obtain the following system of PDEs for $\Gamma^{(2)i}_{jk}(u^2, u^3)$: $$E^m\partial_m \Gamma^{(2)i}_{jk}-\Gamma^{(2)m}_{jk}\partial_m E^i+\Gamma^{(2)i}_{mk}\partial_j E^m+\Gamma^{(2)i}_{jm}\partial_k E^m+\partial_j\partial_kE^i=0.$$ Since $\Gamma^{(2)i}_{jk}$ are expressed uniquely in terms of $\Gamma^{(1)i}_{jk}$, the previous system of PDEs reduces to a system for the unknowns $\Gamma^{(1)i}_{jk}$.
In particular, we observe that for $[j,k,i]=[3,2,2]$ we get the PDE: $$ u^2(\partial_{u^2}\Gamma_{32}^{(1)2})+\partial_{u^3}\Gamma_{32}^{(1)2}u^3+\Gamma_{32}^{(1)2}=0, $$ for $[j,k,i]=[2,3,3]$ we obtain the PDE: $$ u^2(\partial_{u^2}\Gamma_{32}^{(1)3})+\partial_{u^3}\Gamma_{32}^{(1)3}u^3+\Gamma_{32}^{(1)3}=0, $$ and finally for $[j,k,i]=[3,1,1]$ we get the PDE: $$ u^2(\partial_{u^2}\Gamma_{32}^{(1)1})+\partial_{u^3}\Gamma_{32}^{(1)1}u^3+\Gamma_{32}^{(1)1}=0. $$ The general solutions of these PDEs can be obtained directly with the method of characteristics yielding $$\Gamma_{32}^{(1)1}=F_1\left(\frac{u^3}{u^2} \right)\frac{1}{u^2}, \; \Gamma_{32}^{(1)3}=F_2\left(\frac{u^3}{u^2} \right)\frac{1}{u^2},\; \Gamma_{32}^{(1)2}=F_3\left(\frac{u^3}{u^2} \right)\frac{1}{u^2}.$$ Substituting these solutions in the remaining equations, we obtain for $[j,k,i]=[2,2,2]$, for $[j,k,i]=[2,2,1]$ and for $[j,k,i]=[2,2,3]$ identical PDEs for $\Gamma_{22}^{(1)2}$, $\Gamma_{22}^{(1)1}$ and $\Gamma_{22}^{(1)3}$. These yield the solutions: $$\Gamma_{22}^{(1)1}=F_4\left(\frac{u^3}{u^2} \right)\frac{1}{u^2}, \;\Gamma_{22}^{(1)2}=F_5\left(\frac{u^3}{u^2} \right)\frac{1}{u^2}, \; \Gamma_{22}^{(1)3}=F_6\left(\frac{u^3}{u^2} \right)\frac{1}{u^2}.$$ Imposing the zero curvature conditions for $\nabla_{(1)}$, we obtain the system of equations (\ref{F1},\,\ref{F2},\,\ref{F3},\,\ref{F4},\,\ref{F5},\,\ref{F6}) for the unknown functions $F_i$ in the variable $z=\frac{u^3}{u^2}$. To conclude we observe that it is easy to check by straightforward computations that the remaining conditions (namely the flatness of $\nabla_2$ and the compatibility of $\nabla_2$ with $*$) are automatically satisfied once the functions $F_i$ are chosen among the solutions of the system (\ref{F1},\,\ref{F2},\,\ref{F3},\,\ref{F4},\,\ref{F5},\,\ref{F6}). \begin{flushright} $\Box$ \end{flushright} The system (\ref{F1},\,\ref{F2},\,\ref{F3},\,\ref{F4},\,\ref{F5},\,\ref{F6}) reduces to the full Painlev\'e IV family of equations.
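As a side remark, the scaling form of the solutions produced above by the method of characteristics is easily verified with a computer algebra system. The following \texttt{sympy} sketch (an illustration, not part of the text; \texttt{F} is a placeholder for an arbitrary smooth function) checks that $\Gamma=F(u^3/u^2)/u^2$ solves $u^2\partial_{u^2}\Gamma+u^3\partial_{u^3}\Gamma+\Gamma=0$.

```python
import sympy as sp

u2, u3 = sp.symbols('u2 u3', positive=True)
F = sp.Function('F')  # placeholder for an arbitrary smooth function of z = u3/u2

# Candidate scaling solution produced by the method of characteristics
Gamma = F(u3 / u2) / u2

# Left-hand side of  u^2 dGamma/du^2 + u^3 dGamma/du^3 + Gamma = 0
lhs = sp.simplify(u2 * sp.diff(Gamma, u2) + u3 * sp.diff(Gamma, u3) + Gamma)
print(lhs)  # expected: 0
```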
\begin{thm} Regular bi-flat $F$-manifolds in dimension three such that $L_p$ has three equal eigenvalues and one Jordan block are, generically, locally parameterized by solutions of the full Painlev\'e IV equation. \end{thm} \emph{ Proof}. It is straightforward to check that the system of ordinary differential equations given by (\ref{F1},\,\ref{F2},\,\ref{F3},\,\ref{F4},\,\ref{F5},\,\ref{F6}) admits the following integrals of motion: $$I_1=F_1,\; I_2=2F_1z+F_3+F_4, \; I_3=-2F_3z+F_4z-F_2-F_5.$$ Using these first integrals, the system can be reduced to a system of three ODEs given by: \begin{eqnarray*} \frac{d F_4}{dz}&=&4I_1^2z^2-2I_1I_2z+F_4I_1z+I_2F_4-I_3I_1-F_4^2-2I_1F_5-I_1,\\ \frac{d F_5}{dz}&=&-4I_1^2z^3+6I_2I_1z^2-9F_4I_1z^2-2I_2^2z+6I_2F_4z+I_3I_1z-4F_4^2z-I_2I_3\\ &&-I_2F_5+I_3F_4+F_4F_5-I_1F_6+3I_1z-I_2+F_4,\\ \frac{d F_6}{dz}&=& -4I_2I_1z^3+12F_4I_1z^3+2I_2^2z^2-9I_2F_4z^2-4I_3I_1z^2+8F_4^2z^2\\ &&-6I_1F_5z^2+3I_2I_3z+5I_2F_5z-5I_3F_4z-8F_4F_5z-I_1F_6z-6I_1z^2\\ &&+3I_2z+I_3^2+3I_3F_5+F_6F_4-5F_4z+2F_5^2+I_3+F_5. \end{eqnarray*} We further reduce this system to a second order ODE in the following way. First we express $F_5(z)$ in terms of $F_4(z)$ and its first derivative using the first equation, obtaining (here and hereafter we assume $I_1\neq 0$): $$F_5=\frac{1}{2I_1}\left(4I_1^2z^2-2I_2I_1z+F_4I_1z+I_2F_4-I_3I_1-F_4^2-\frac{dF_4}{dz}-I_1\right).$$ We substitute this in the second equation and solve for $F_6$: \begin{eqnarray*} F_6& =& -\frac{1}{2I_1^2} \left(8I_1^3z^3-8I_1^2I_2z^2+14I_1^2F_4z^2+2I_1I_2^2z-9I_1F_4I_2z-2I_1^2I_3z+7I_1F_4^2z+F_4I_2^2\right.\\ &&\left.+I_1I_2I_3-2F_4^2I_2-I_1F_4I_3+\frac{dF_4}{dz}zI_1+F_4^3+2I_1^2z-I_1I_2-F_4\frac{dF_4}{dz}-\frac{d^2F_4}{dz^2}\right). \end{eqnarray*} Substituting these expressions for $F_5$ and $F_6$ in terms of $F_4$ and its derivatives in the last ODE of the system above, we obtain a third order nonlinear ODE for $F_4$.
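The integrals of motion used in the reduction above can be verified symbolically. The following \texttt{sympy} sketch (an illustrative check, not part of the proof) confirms that the total $z$-derivatives of $I_1$, $I_2$ and $I_3$ vanish along solutions of (\ref{F1})--(\ref{F6}).

```python
import sympy as sp

z, F1, F2, F3, F4, F5, F6 = sp.symbols('z F1 F2 F3 F4 F5 F6')

# Right-hand sides of the system (F1)-(F6)
dF = {
    F1: sp.Integer(0),
    F2: 2*F4*F3*z + 2*F2*F1*z - 2*F5*F1*z + F6*F1 - F2*F3 + F4 - F3,
    F3: -F4*F3 - F2*F1 + F5*F1 - F1,
    F4: F4*F3 + F2*F1 - F5*F1 - F1,
    F5: F4*F3*z + F2*F1*z - F5*F1*z - F6*F1 + F2*F3 + F1*z - F3,
    F6: (-2*F4*F3*z**2 - 2*F2*F1*z**2 + 2*F5*F1*z**2 - F6*F1*z
         + F2*F3*z + F4*F6 - F4*z + F2**2 - F2*F5 + F3*z - F2),
}

def total_derivative(expr):
    # d/dz along solutions: explicit z-derivative plus the chain rule
    return sp.diff(expr, z) + sum(sp.diff(expr, f) * rhs for f, rhs in dF.items())

I1 = F1
I2 = 2*F1*z + F3 + F4
I3 = -2*F3*z + F4*z - F2 - F5

residues = [sp.expand(total_derivative(I)) for I in (I1, I2, I3)]
print(residues)  # expected: [0, 0, 0]
```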
Multiplying this third order ODE by $(-I_1z+I_2-F_4)$ and by $2I_1^2$, it is possible to recognize that it is a total derivative with respect to $z$ of an expression involving the second derivative of $F_4$. Integrating this expression one obtains the nonlinear second order ODE: \begin{eqnarray*} 0&=&8I_2^2I_1^2z^2-I_3I_1^3z^2-10I_2I_1^3z^3+13I_1^3z^3F_4-2I_2^3I_1z-I_3F_4^2I_1+8F_4^3I_1z-2I_2zI_1^2\\ &&-2I_1F_4I_2+2F_4zI_1^2+\frac{7}{2}I_2^2F_4^2+\frac{31}{2}F_4^2I_1^2z^2+2I_2I_3I_1^2z-23I_1^2I_2z^2F_4 +11I_1I_2^2zF_4\\ &&-17I_2F_4^2I_1z-2I_1^2I_3zF_4+2I_1I_2I_3F_4+\frac{3}{2}F_4^4+\frac{1}{2}\left(\frac{dF_4}{dz}\right)^2+C_1\\ &&+I_1\frac{dF_4}{dz}+I_1^3z^2+I_1F_4^2+(-I_1z+I_2-F_4)\frac{d^2F_4}{dz^2}+4I_1^4z^4-I_2^3F_4-4I_2F_4^3, \end{eqnarray*} where $C_1$ is the constant of integration. Now we show that this ODE can be reduced to a three-parameter ODE that contains the full Painlev\'e IV equation for a special value of one of the parameters. First we do a change of variables of the form $F_4(z)=f(z)-I_1z+I_2$ in order to obtain a term of the form $f(z)\frac{d^2 f}{dz^2}$ which is the term that appears in Painlev\'e IV. Doing this we obtain the following ODE: \begin{eqnarray*} 0&=&\frac{3}{2}f(z)^4+\left(2zI_1+2I_2\right)f(z)^3+\left(-I_3I_1+\frac{1}{2}I_1^2z^2+I_1zI_2+I_1+\frac{1}{2}I_2^2\right)f(z)^2\\ &&-f(z)\frac{d^2f(z)}{dz^2}+I_3I_1I_2^2-\frac{1}{2}I_1^2-I_1I_2^2+\frac{1}{2}\left(\frac{df(z)}{dz}\right)^2+C_1. \end{eqnarray*} Since in Painlev\'e IV the coefficient in front of $f(z)^3$ is $4z$, we introduce the affine transformation $z=w-\frac{I_2}{I_1}$ (assuming $I_1\neq 0$) and we call $g(w)=f\left(w-\frac{I_2}{I_1}\right)$. In this way, the previous ODE becomes: \begin{eqnarray*} g(w)\frac{d^2 g(w)}{dw^2}&=&\frac{3}{2}g(w)^4+2I_1wg(w)^3+\left(-I_3I_1+I_1+\frac{1}{2}I_1^2w^2\right)g(w)^2\\ &&+\frac{1}{2}\left(\frac{dg(w)}{dw}\right)^2+I_3I_1I_2^2-I_1I_2^2-\frac{1}{2}I_1^2+C_1.
\end{eqnarray*} The previous ODE depends on the following combinations of constants: $I_1$, $I_1-I_3I_1$, and $I_3I_1I_2^2-I_1I_2^2-\frac{1}{2}I_1^2+C_1$. So calling $A=I_1, B=I_1-I_3I_1, C=I_3I_1I_2^2-I_1I_2^2-\frac{1}{2}I_1^2+C_1$ we can rewrite it as \begin{eqnarray*} g(w)\frac{d^2 g(w)}{dw^2}&=&\frac{3}{2}g(w)^4+2Awg(w)^3+\left(B+\frac{1}{2}A^2w^2\right)g(w)^2\\ &&+\frac{1}{2}\left(\frac{dg(w)}{dw}\right)^2+C. \end{eqnarray*} The solutions of this ODE parameterize locally bi-flat regular $F$-manifolds in dimension three. Now we prove that this equation is equivalent to the full Painlev\'e IV family, using a suitable double scaling of the independent and dependent variables. We introduce the variable $t=\alpha w$ and rescale the dependent variable, setting $g(w)=\beta y(t)$. Then $$\frac{d g(w)}{dw}=\beta\frac{d y(t)}{dt}\frac{dt}{dw}=\alpha\beta\frac{dy(t)}{dt}.$$ Performing both transformations the previous ODE gets rescaled to \begin{eqnarray*} \beta^2\alpha^2 y(t)\frac{d^2 y(t)}{dt^2}&=&\frac{3}{2}\beta^4 y(t)^4+2\frac{A}{\alpha}t\beta^3 y(t)^3+\left(B+\frac{1}{2}\frac{A^2}{\alpha^2}t^2\right)\beta^2y(t)^2\\ &&+\frac{1}{2}\beta^2\alpha^2\left(\frac{dy(t)}{dt}\right)^2+C. \end{eqnarray*} To reduce this to a constant multiple $\gamma$ of the full Painlev\'e IV family, we look for a nontrivial solution of the following algebraic system: $$\beta^2\alpha^2 =\gamma, \;\; \beta^4=\gamma, \; \; \frac{A}{\alpha}\beta^3=2\gamma, \; \; \frac{1}{2}\frac{A^2}{\alpha^2}\beta^2=2\gamma.$$ A nontrivial solution is given by $\alpha=\beta=\sqrt{\frac{A}{2}}, \; \gamma=\frac{A^2}{4}.$ With this choice the ODE becomes: \begin{eqnarray*} \frac{A^2}{4} y(t)\frac{d^2 y(t)}{dt^2}&=&\frac{3}{2}\frac{A^2}{4} y(t)^4+A^2t y(t)^3+\left(B\frac{A}{2}+\frac{1}{2}A^2t^2\right)y(t)^2\\ &&+\frac{1}{2}\frac{A^2}{4}\left(\frac{dy(t)}{dt}\right)^2+C.
\end{eqnarray*} If $A=I_1\neq 0$, then dividing both sides by $\frac{A^2}{4}$ we obtain \begin{eqnarray*} y(t)\frac{d^2 y(t)}{dt^2}&=&\frac{3}{2} y(t)^4+4t y(t)^3+\left(2\frac{B}{A}+2t^2\right)y(t)^2\\ &&+\frac{1}{2}\left(\frac{dy(t)}{dt}\right)^2+\frac{4C}{A^2}. \end{eqnarray*} Introducing the constants $c=\frac{4C}{A^2}$ and $b=-\frac{B}{A}$ we obtain \begin{eqnarray*} y(t)\frac{d^2 y(t)}{dt^2}&=&\frac{3}{2} y(t)^4+4t y(t)^3+2\left(t^2-b\right)y(t)^2\\ &&+\frac{1}{2}\left(\frac{dy(t)}{dt}\right)^2+c, \end{eqnarray*} which is indeed the full Painlev\'e IV family. \begin{flushright} $\Box$ \end{flushright} \begin{rmk} In the proof of the previous Theorem we have assumed that $I_1\neq 0$, hence the genericity statement. If $I_1=0$ then the system (\ref{F1},\,\ref{F2},\,\ref{F3},\,\ref{F4},\,\ref{F5},\,\ref{F6}) reduces to a system of ODEs that can be integrated explicitly. In particular, using the integrals of motion $I_1=0$, $I_2$ and $I_3$, the system obtained by reduction and involving only $F_4$, $F_5$ and $F_6$ is lower triangular. \end{rmk} \subsection{The case of two distinct eigenvalues and two Jordan blocks} In this subsection we analyze the case in which the operator $L_p$ has two distinct eigenvalues, one eigenvalue with algebraic multiplicity two (and nontrivial $2\times2$ Jordan block), while the other eigenvalue is simple. In this case, we use Hertling's Decomposition Lemma (Theorem 2.11 from \cite{H}) to obtain the following result. \begin{thm} Let $(M, \nabla_1, \nabla_2, \circ, *, e, E)$ be a non-semisimple regular bi-flat $F$-manifold in dimension three such that $L_p$ has exactly two distinct eigenvalues and two Jordan blocks.
Then there exist local coordinates $\{u^1, u^2, u^3\}$ such that \begin{enumerate} \item $e, E, \circ $ are given by \begin{eqnarray} e&=&\partial_{u^1}+\partial_{u^3}\\ E&=&u^1\partial_{u^1}+u^2\partial_{u^2}+u^3\partial_{u^3}\\ c^i_{jk}&=&\delta^i_{j+k-1} \;\text{ if }\; 1\leq i,j,k\leq 2\\ c^3_{33}&=&1\\ c^i_{jk}&=& 0 \; \;\text{ in all other cases } \end{eqnarray} \item The Christoffel symbols $\Gamma_{jk}^{(1)i}$ for $\nabla_1$ are given by: $$\Gamma_{13}^{(1)3}=\frac{F_4\left(\frac{u^3-u^1}{u^2}\right)}{u^2},\; \; \Gamma_{22}^{(1)1}= \frac{F_3\left(\frac{u^3-u^1}{u^2}\right)}{u^2},\;\; \Gamma_{22}^{(1)2}=\frac{F_6\left(\frac{u^3-u^1}{u^2}\right)}{u^2},$$ $$ \Gamma_{23}^{(1)3}=\frac{F_1\left(\frac{u^3-u^1}{u^2}\right)}{u^2},\;\; \Gamma_{31}^{(1)1}=\frac{F_2\left(\frac{u^3-u^1}{u^2}\right)}{u^2},\;\; \Gamma_{31}^{(1)2}=\frac{F_5\left(\frac{u^3-u^1}{u^2}\right)}{u^2},$$ $$\Gamma_{11}^{(1)1}=-\Gamma_{31}^{(1)1},\;\; \Gamma_{11}^{(1)2}=-\Gamma_{31}^{(1)2},\;\; \Gamma_{11}^{(1)3}=-\Gamma_{13}^{(1)3},\;\; \Gamma_{12}^{(1)2}=-\Gamma_{31}^{(1)1},\;\; \Gamma_{12}^{(1)3}=-\Gamma_{23}^{(1)3},\;\; $$ $$ \Gamma_{21}^{(1)2}=-\Gamma_{31}^{(1)1},\;\; \Gamma_{21}^{(1)3}=-\Gamma_{23}^{(1)3},\;\; \Gamma_{23}^{(1)2}=\Gamma_{31}^{(1)1},\;\;$$ $$ \Gamma_{33}^{(1)1}=-\Gamma_{31}^{(1)1},\;\; \Gamma_{33}^{(1)2}=-\Gamma_{31}^{(1)2},\;\; \Gamma_{33}^{(1)3}=-\Gamma_{13}^{(1)3},$$ where the functions $F_1, \dots, F_6$ satisfy the system \begin{eqnarray} \label{F1bis} \frac{dF_1}{dz}& =& -\frac{F_3F_4-F_1^2+F_1F_6+F_1}{z},\\ \label{F2bis} \frac{d F_2}{dz}&=& \frac{F_3F_5-F_2F_1-F_2}{z},\\ \label{F3bis} \frac{d F_3}{dz}&=&0,\\ \label{F4bis} \frac{d F_4}{dz}&=&-\frac{F_3F_4-F_1^2+F_1F_6+F_4z+F_1}{z^2},\\ \label{F5bis} \frac{d F_5}{dz}&=&\frac{-F_5F_1 z+F_5F_6z+F_2F_4z+F_3 F_5-F_5z-F_2F_1-F_2}{z^2},\\ \label{F6bis} \frac{d F_6}{dz}&=&-2F_3F_5+2F_2F_1.
\end{eqnarray} in the variable $z=\frac{u^3-u^1}{u^2}$ while the other symbols not obtainable from the above list using the symmetry of the connection are identically zero. \item The Christoffel symbols $\Gamma_{jk}^{(2)i}$ for $\nabla_2$ are uniquely determined by the Christoffel symbols of $\nabla_1$ via the following formulas: $$ \Gamma_{11}^{(2)1} = \frac{\Gamma_{22}^{(1)1}(u^2)^2-\Gamma_{31}^{(1)1}u^3u^1-u^1}{(u^1)^2},\; \; \Gamma_{11}^{(2)2} = \frac{\Gamma_{31}^{(1)1}u^3u^2-\Gamma_{31}^{(1)2}u^3u^1+\Gamma_{22}^{(1)2}(u^2)^2+u^2}{(u^1)^2},$$ $$ \Gamma_{11}^{(2)3}= \frac{\Gamma_{23}^{(1)3}u^2u^3-\Gamma_{13}^{(1)3}u^1u^3}{(u^1)^2},\; \; \Gamma_{12}^{(2)1}= -\frac{\Gamma_{22}^{(1)1}u^2}{u^1},\;\; \Gamma_{12}^{(2)2}= -\frac{\Gamma_{31}^{(1)1}u^3+\Gamma_{22}^{(1)2}u^2+1}{u^1},$$ $$ \Gamma_{12}^{(2)3}= -\frac{\Gamma_{23}^{(1)3}u^3}{u^1},\;\; \Gamma_{13}^{(2)1}= \Gamma_{31}^{(1)1},\;\; \Gamma_{13}^{(2)2}= \Gamma_{31}^{(1)2},\;\; \Gamma_{13}^{(2)3}= \Gamma_{13}^{(1)3},$$ $$ \Gamma_{22}^{(2)1}=\Gamma_{22}^{(1)1},\;\; \Gamma_{22}^{(2)2}=\Gamma_{22}^{(1)2},\;\; \Gamma_{23}^{(2)2}= \Gamma_{31}^{(1)1},\;\; \Gamma_{23}^{(2)3}=\Gamma_{23}^{(1)3},\;\; \Gamma_{33}^{(2)1}=-\frac{\Gamma_{31}^{(1)1} u^1}{u^3},$$ $$ \Gamma_{33}^{(2)2}= -\frac{\Gamma_{31}^{(1)2}u^1+\Gamma_{32}^{(1)2}u^2}{u^3},\;\; \Gamma_{33}^{(2)3}= -\frac{\Gamma_{23}^{(1)3}u^2+\Gamma_{13}^{(1)3}u^1+1}{u^3}, $$ while the other symbols not obtainable from the above list using the symmetry of the connection vanish identically. \item The dual product $*$ is obtained via formula \eqref{nm} using $\circ$ and $E$. \end{enumerate} \end{thm} \emph{ Proof}. The first point of the Theorem is a direct consequence of the results of \cite{DH} and of Hertling's Decomposition Lemma (Theorem 2.11 from \cite{H}).
Imposing that $\nabla_{(1)}$ is torsionless, that it is compatible with $\circ$, and that it satisfies $\nabla_{(1)}e=0$, we obtain the following constraints on $\Gamma_{ij}^{(1)k}$: $$ \Gamma_{11}^{(1)1} = -\Gamma_{31}^{(1)1},\;\; \Gamma_{11}^{(1)2} = -\Gamma_{31}^{(1)2},\; \; \Gamma_{11}^{(1)3}= -\Gamma_{13}^{(1)3},\;\; \Gamma_{12}^{(1)1} = 0,\;\; \Gamma_{12}^{(1)2} = -\Gamma_{31}^{(1)1},$$ $$\Gamma_{33}^{(1)1} = -\Gamma_{31}^{(1)1}, \;\; \Gamma_{33}^{(1)2} = -\Gamma_{31}^{(1)2},\;\; \Gamma_{33}^{(1)3} = -\Gamma_{13}^{(1)3}, $$ $$ \Gamma_{12}^{(1)3} = -\Gamma_{23}^{(1)3}, \; \; \Gamma_{21}^{(1)1} = 0,\;\; \Gamma_{21}^{(1)3}= -\Gamma_{23}^{(1)3}, $$ $$ \Gamma_{22}^{(1)3}= 0,\;\; \Gamma_{23}^{(1)1} = 0,\;\; \Gamma_{23}^{(1)2} = \Gamma_{31}^{(1)1},$$ together with the trivial constraints $\Gamma_{ij}^{(1)k}=\Gamma_{ji}^{(1)k}$. Then we impose a series of constraints on the dual connection $\nabla_{(2)}$ that are sufficient to determine uniquely the Christoffel symbols $\Gamma_{ij}^{(2)k}$ in terms of the Christoffel symbols $\Gamma_{ij}^{(1)k}$. These constraints are the requirement that $\nabla_{(2)}$ is almost hydrodynamically equivalent to $\nabla_{(1)}$, i.e. $(d_{\nabla_1}-d_{\nabla_2})(X\,\circ)=0$, and $\nabla_{(2)}E=0$. These constraints give the formulas in the third point, which express $\Gamma_{ij}^{(2)k}$ in terms of $\Gamma_{ij}^{(1)k}$. Once we have expressed the Christoffel symbols of $\nabla_{(2)}$ in terms of the Christoffel symbols of $\nabla_{(1)}$, we obtain a system of PDEs in $\Gamma_{ij}^{(1)k}$ by imposing the commutativity of $\nabla_{(2)}$ with ${\rm Lie}_E$ and the commutativity of ${\rm Lie}_e$ with $\nabla_{(1)}$. The latter system in particular implies that $\Gamma_{ij}^{(1)k}(u^1, u^2, u^3)$ can be expressed as functions of two variables, as $\Gamma_{ij}^{(1)k}(u^2, u^3-u^1)$.
Following a procedure similar to the one described in the proof of Theorem \ref{thm1}, we can solve the two systems and we find that (here $z=\frac{u^3-u^1}{u^2}$): $$\Gamma_{13}^{(1)3} = \frac{F_4(z)}{u^2},\;\; \Gamma_{22}^{(1)1} = \frac{F_3(z)}{u^2}, \;\;\Gamma_{22}^{(1)2} = \frac{F_6(z)}{u^2}, $$ $$\Gamma_{23}^{(1)3} = \frac{F_1(z)}{u^2},\;\; \Gamma_{31}^{(1)1} = \frac{F_2(z)}{u^2},\;\;\Gamma_{31}^{(1)2}= \frac{F_5(z)}{u^2}, \;\;\Gamma_{32}^{(1)2} = \frac{F_7(z)}{u^2},$$ for arbitrary smooth functions $F_i(z)$. At this point, we impose that $\nabla_{(2)}$ is a torsionless connection and this gives the only constraint $\Gamma_{31}^{(1)1}=\Gamma_{32}^{(1)2}$ or equivalently $F_2(z)=F_7(z)$. Imposing the zero curvature conditions for $\nabla_{(1)}$, we obtain the system of equations (\ref{F1bis},\ref{F2bis},\ref{F3bis},\ref{F4bis},\ref{F5bis},\ref{F6bis}). To conclude we observe that it is easy to check by straightforward computations that the remaining conditions (namely the flatness of $\nabla_2$ and the compatibility of $\nabla_2$ with $*$) are automatically satisfied once the functions $F_i$ are chosen among the solutions of the system (\ref{F1bis}, \ref{F2bis}, \ref{F3bis}, \ref{F4bis}, \ref{F5bis}, \ref{F6bis}). \begin{flushright} $\Box$ \end{flushright} Now we prove that the system (\ref{F1bis}, \,\ref{F2bis}, \,\ref{F3bis}, \,\ref{F4bis}, \,\ref{F5bis}, \,\ref{F6bis}) can be reduced to the Painlev\'e V equation. \begin{thm} Regular bi-flat $F$-manifolds in dimension three such that $L_p$ has two distinct eigenvalues and two Jordan blocks are, generically, locally parameterized by solutions of the full Painlev\'e V equation. \end{thm} \emph{Proof}. Since $\frac{d F_3}{dz}=0$ we set $F_3=I_1$. It is straightforward to check via direct computation that the system above has two additional integrals of motion, namely $F_1-F_4z=I_2$ and $F_6+2F_2=I_3$, where $I_2$ and $I_3$ denote the corresponding values of the integrals of motion.
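For instance, that $F_1-F_4z$ is conserved follows directly from \eqref{F1bis} and \eqref{F4bis}; a minimal \texttt{sympy} sketch (an illustration, not part of the proof):

```python
import sympy as sp

z, F1, F2, F3, F4, F5, F6 = sp.symbols('z F1 F2 F3 F4 F5 F6')

# Right-hand sides of (F1bis) and (F4bis)
dF1 = -(F3*F4 - F1**2 + F1*F6 + F1) / z
dF4 = -(F3*F4 - F1**2 + F1*F6 + F4*z + F1) / z**2

# Total z-derivative of I2 = F1 - F4*z along solutions of the system
I2 = F1 - F4*z
residue = sp.simplify(dF1 - (dF4*z + F4))
print(residue)  # expected: 0
```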
Using these three integrals of motion in the system above we reduce it to the following three ODEs: \begin{equation} \label{eqaux1} -F_4F_2z+I_1F_5-I_2F_2-\frac{dF_2}{dz}z-F_2=0, \end{equation} \begin{equation} \label{eqaux2} -F_4F_5z-2F_2F_5z-I_2F_5+I_3F_5+F_4F_2-\frac{dF_5}{dz}z-F_5+\frac{dF_2}{dz}=0, \end{equation} \begin{equation} \label{eqaux3} F_4^2z^2+2F_4F_2z^2+2I_2F_4z+2I_2F_2z-I_3F_4z-F_4I_1+I_2^2-I_2I_3-\left(\frac{dF_4}{dz}z+F_4\right)z-F_4z-I_2=0. \end{equation} We solve for $F_5$ in \eqref{eqaux1} and we substitute in \eqref{eqaux2} to obtain a second order ODE in $F_2$ and $F_4$, which we call $(\ast)$. We solve for $F_2$ in the third equation \eqref{eqaux3} and we substitute in $(\ast)$, thus obtaining a complicated nonlinear \emph{third order} ODE involving only $F_4$, given by (we have renamed $F_4$ as $F$): $$ -z^8\left(\frac{dF}{dz}\right)^3+z\left(4Fz^6+7I_2z^5\right)\left(\frac{dF}{dz}\right)^2+z\left(2Fz^7+2z^6I_2\right)\left(\frac{dF}{dz}\right)\left(\frac{d^2F}{dz^2}\right)+$$ $$z(3F^4z^7+12I_2F^3z^6-2I_3F^3z^6-2I_1F^3z^5+18I_2^2F^2z^5-6I_2I_3F^2z^5-2F^3z^6$$ $$-6I_1I_2F^2z^4+12I_2^3Fz^4-6I_2^2I_3Fz^4-6I_2F^2z^5-6I_1I_2^2Fz^3+3I_2^4z^3-2I_2^3I_3z^3$$ $$-6I_2^2Fz^4-3F^2z^5-2I_1I_2^3z^2-2I_2^3z^3-10I_2Fz^4 +I_1^2I_2^2z-10I_2^2z^3)\left(\frac{dF}{dz}\right)$$ $$+z\left(-5F^2z^6-12I_2Fz^5-7I_2^2z^4\right)\left(\frac{d^2F}{dz^2}\right)+z\left(-F^2z^7-2I_2Fz^6-I_2^2z^5\right)\left(\frac{d^3F}{dz^3}\right)$$ $$+4F^5z^7+z(17I_2z^5-3I_3z^5-I_1z^4-3z^5)F^4$$ $$+z\left(28I_2^2z^4-10I_2I_3z^4-2I_1I_2z^3-I_1I_3z^3-10I_2z^4-I_1^2z^2-I_1z^3\right)F^3$$ $$+z\left(22I_2^3z^3-12I_2^2I_3z^3-3I_1I_2I_3z^2-12I_2^2z^3-3I_1^2I_2z-3I_1I_2z^2-I_2z^3\right)F^2$$ $$+z\left(8I_2^4z^2-6I_2^3I_3z^2+2I_1I_2^3z-3I_1I_2^2I_3z-6I_2^3z^2-I_1^2I_2^2-3I_1I_2^2z-2I_2^2z^2\right)F$$ $$+z\left(I_2^5z-I_2^4I_3z+I_1I_2^4-I_1I_2^3I_3-I_2^4z-I_1I_2^3\right)=0.$$ This ODE can be reduced to a second order ODE since it admits a nontrivial integrating factor $\mu$ given by
$$\mu=\frac{-Fz^2-I_2z+I_1}{z^3(Fz+I_2)^3}.$$ The resulting second order nonlinear ODE is given by: $$ \left(-F^3z^8-3I_2F^2z^7+I_1F^2z^6-3I_2^2Fz^6+2I_1I_2Fz^5-I_2^3z^5+z^4I_1I_2^2\right)\left(\frac{d^2F}{dz^2}\right)$$ $$+\left(F^2z^8+I_2^2z^6+2I_2Fz^7-\frac{1}{2}I_1Fz^6-\frac{1}{2}I_2z^5I_1\right)\left(\frac{dF}{dz}\right)^2+$$ $$+\left(-z^7F^3-5z^6I_2F^2+3z^5I_1F^2-7z^5I_2^2F+7z^4I_1I_2F-3I_2^3z^4+4z^3I_1I_2^2\right)\left(\frac{dF}{dz}\right)+$$ $$+F^6z^8+\left(-z^7-I_3z^7+6I_2z^7-\frac{5}{2}I_1z^6\right)F^5+$$ $$+\left(-5I_2I_3z^6-\frac{25}{2}I_2z^5I_1+2I_1I_3z^5+15I_2^2z^6-5I_2z^6+2I_1^2z^4+2I_1z^5\right)F^4$$ $$+\left(6I_1I_2I_3z^4-I_1^2I_3z^3-10I_2^2I_3z^5-\frac{45}{2}z^4I_1I_2^2+8I_1^2I_2z^3+\right.$$ $$\left.+6I_1I_2 z^4+Cz^4-I_2z^5-\frac{1}{2}I_1^3z^2-I_1^2z^3+20I_2^3z^5-10I_2^2z^5\right)F^3$$ $$ +\left(6I_1I_2^2I_3z^3-3I_1^2I_2I_3z^2-I_1I_2z^3+6z^3I_1I_2^2-\frac{3}{2}I_1^3I_2z-3I_1^2I_2 z^2+\right.$$ $$\left.+3I_2Cz^3-10I_2^3I_3z^4-\frac{35}{2}I_1I_2^3z^3+11I_1^2I_2^2z^2+15I_2^4z^4-10I_2^3z^4-2 I_2^2z^4\right)F^2+$$ $$+\left(2I_1I_2^3I_3z^2-3I_1^2I_2^2I_3z-5I_1I_2^4z^2+6I_1^2I_2^3z+2I_1I_2^3z^2-3I_1^2I_2^2 z-\frac{5}{2}I_1I_2^2z^2\right.$$ $$\left.+3I_2^2Cz^2-5I_2^4I_3z^3-I_2^3z^3-I_1^3I_2^2+6I_2^5z^3-5I_2^4 z^3\right)F$$ $$+I_2^6z^2+I_1^2I_2^4-I_2^5z^2-I_1^2I_2^3-I_2^5I_3z^2-I_1^2I_2^3I_3+I_2^3Cz-\frac{3}{2}I_1I_2^3z=0.$$ Now we show that this equation can be reduced to Painlev\'e V through a series of nonlinear transformations.
First we consider the transformation $F\mapsto I_1\frac{F}{z^2}-\frac{I_2}{z}$, in this way the second order ODE above becomes (we have multiplied it by $z^3$): $$\frac{I_1^3F(-2I_1F^2z^4+2I_1z^4F)}{2z}\left(\frac{d^2F}{dz^2}\right) +\frac{I_1^3F(2I_1z^4F-I_1z^4)}{2z}\left(\frac{dF}{dz}\right)^2$$ $$+\frac{I_1^3F(-2I_1F^2z^3+2I_1Fz^3)}{2z}\left(\frac{dF}{dz}\right)$$ $$ +\frac{I_1^3F}{2z}(2I_1^3F^5-2I_1^2I_3F^4z-5I_1^3F^4+4I_1^2I_3F^3z-2I_1^2F^4z+5I_1I_2^2F^2z^2-4I_1I_2I_3F^2z^2+z^2I_1I_2^2$$ $$+4I_1^3F^3-2I_1^2I_3F^2z+4I_1^2F^3z-2I_1I_2^2Fz^2-4I_1I_2F^2z^2-I_1^3F^2-2I_1^2F^2z-4I_1F^2z^2+2F^2Cz^2) = 0.$$ Then we introduce the transformation $F\mapsto \frac{1}{1-F^{-1}}$ and the constant $\alpha=4I_1^4I_2^2-4I_1^4I_2I_3-4I_1^4I_2-4I_1^4+2I_1^3C$ and express $C$ in terms of $\alpha$ and in terms of the other constants. In this way the equation becomes (after factoring out common factors): $$(2I_1^4F^2z^2-2I_1^4z^2F)\left(\frac{d^2F}{dz^2}\right) +(-3I_1^4z^2F+I_1^4z^2)\left(\frac{dF}{dz}\right)^2+$$ $$+(2I_1^4zF^2-2I_1^4zF)\left(\frac{dF}{dz}\right) +\alpha F^5-3F^4\alpha$$ $$+\left(-2I_1^5\frac{I_3}{z}+\frac{I_1^6}{z^2}+I_1^4I_2^2+3\alpha-2\frac{I_1^5}{z}\right)F^3+\left(2\frac{I_1^5I_3}{z}+\frac{I_1^6}{z^2}-3I_1^4I_2^2-\alpha+2\frac{I_1^5}{z}\right)F^2$$ $$+3I_1^4FI_2^2-I_1^4I_2^2=0.$$ Finally we introduce a transformation of the independent variable, setting $z=\frac{1}{s}$ and defining $G(s)=F(z)=F\left(\frac{1}{s}\right).$ Using this transformation the second order ODE above becomes $$ \left(2I_1^4G^2s^2-2I_1^4Gs^2\right)\left(\frac{d^2G}{ds^2}\right)+\left(-3I_1^4Gs^2+I_1^4s^2\right)\left(\frac{dG}{ds}\right)^2+\left(2I_1^4G^2s-2I_1^4Gs\right)\left(\frac{dG}{ds}\right)+ $$ $$+\alpha G^5-3G^4\alpha+(I_1^6s^2-2I_1^5I_3s-2I_1^5s+I_1^4I_2^2+3\alpha)G^3+$$ \begin{equation}\label{painleveV1} +(I_1^6s^2+2I_1^5I_3s+2I_1^5s-3I_1^4I_2^2-\alpha)G^2+3I_1^4I_2^2G-I_1^4I_2^2=0.\end{equation} Now we show that this is indeed the Painlev\' e V equation. 
Recall that the Painlev\'e V equation is given by $$\frac{d^2y}{dx^2}=\left(\frac{1}{2y}+\frac{1}{y-1}\right)\left(\frac{dy}{dx}\right)^2-\frac{\frac{dy}{dx}}{x}+\frac{(y-1)^2\left(a y+\frac{b}{y}\right)}{x^2}+\frac{gy}{x}+\frac{dy(y+1)}{y-1}$$ where $a, b, g, d$ are parameters. Taking the common denominator and multiplying by $2y(y-1)x^2$, the Painlev\'e V equation becomes $$(2y^2x^2-2yx^2)\left(\frac{d^2y}{dx^2}\right)+(-3yx^2+x^2)\left(\frac{dy}{dx}\right)^2+(2y^2x-2yx)\left(\frac{dy}{dx}\right)-2y^5a$$ \begin{equation}\label{painleveV2}+6y^4a-(2dx^2+2gx+6a+2b)y^3+(2gx-2dx^2+2a+6b)y^2-6yb+2b=0 \end{equation} In order to compare \eqref{painleveV1} with \eqref{painleveV2}, we divide \eqref{painleveV1} by $I_1^4$ (assuming $I_1\neq 0$, see the Remark after the proof) and obtain: $$ \left(2G^2s^2-2Gs^2\right)\left(\frac{d^2G}{ds^2}\right)+\left(-3Gs^2+s^2\right)\left(\frac{dG}{ds}\right)^2+\left(2G^2s-2Gs\right)\left(\frac{dG}{ds}\right)+ $$ $$+\frac{\alpha}{I_1^4} G^5-G^4\frac{3\alpha}{I_1^4}+\left(I_1^2s^2-2I_1I_3s-2I_1s+I_2^2+\frac{3\alpha}{I_1^4}\right)G^3+$$ \begin{equation}\label{painleveV3} +\left(I_1^2s^2+2I_1I_3s+2I_1s-3I_2^2-\frac{\alpha}{I_1^4}\right)G^2+3I_2^2G-I_2^2=0.\end{equation} Comparing \eqref{painleveV3} with \eqref{painleveV2} we get the following correspondence among parameters: $$2b=-I_2^2,\; \;-2a=\frac{\alpha}{I_1^4}, \;\; -2d=I_1^2, \;\; g=I_1I_3+I_1.$$ These relations can be easily inverted, determining $I_1, I_2, I_3, \alpha$ in terms of the parameters $a,b, d, g$. Thus we have obtained the full Painlev\'e V. \begin{flushright} $\Box$ \end{flushright} \begin{rmk} In the proof of the previous Theorem we have assumed that $I_1\neq 0$, hence the genericity statement. If $I_1=0$ then the system (\ref{F1bis},\,\ref{F2bis},\,\ref{F3bis},\,\ref{F4bis},\,\ref{F5bis},\,\ref{F6bis}) reduces to a system of ODEs that can be integrated explicitly.
\end{rmk} \begin{rmk} In the two-dimensional case there is only one regular non-semisimple model for a bi-flat $F$-manifold (the operator $L$ has necessarily two equal eigenvalues and one Jordan block). The computations become much easier and one can easily show that there exist local coordinates $\{u^1, u^2\}$ such that \begin{enumerate} \item The Christoffel symbols $\Gamma_{jk}^{(1)i}$ for $\nabla_1$ are given by: $$\Gamma_{22}^{(1)1}=\frac{C_1}{u^2},\qquad \Gamma_{22}^{(1)2}=\frac{C_2}{u^2},$$ while the other symbols are identically zero. \item The Christoffel symbols $\Gamma_{jk}^{(2)i}$ for $\nabla_2$ are uniquely determined by the Christoffel symbols of $\nabla_1$ via the following formulas: \begin{eqnarray*} \Gamma_{11}^{(2)1}&=&\frac{\Gamma_{22}^{(1)1}(u^2)^2-2u^1}{(u^1)^2},\\ \Gamma_{11}^{(2)2}&=&\frac{\Gamma_{22}^{(1)2}(u^2)^2+2u^2}{(u^1)^2},\\ \Gamma_{12}^{(2)1}&=&-\frac{u^2}{u^1}\Gamma_{22}^{(1)1},\\ \Gamma_{12}^{(2)2}&=&-\frac{u^2}{u^1}\Gamma_{22}^{(1)2}-\frac{2}{u^1},\\ \Gamma_{22}^{(2)1}&=&\Gamma_{22}^{(1)1},\\ \Gamma_{22}^{(2)2}&=&\Gamma_{22}^{(1)2}.\\ \end{eqnarray*} \end{enumerate} \end{rmk} \subsection{Regular case and confluences of Painlev\'e equations} In this Section, we have shown that there exists an intimate relationship between regular bi-flat $F$-manifolds in dimension three on one hand and Painlev\'e transcendents on the other. Our analysis leads us to conclude that regular bi-flat $F$-manifolds in dimension three are characterized by continuous and discrete moduli. The discrete moduli are provided by the Jordan normal form of the operator $L$, which in turn determines which of the Painlev\'e equations control the continuous moduli. Furthermore, the well-known confluence of the Painlev\'e equations is associated to a corresponding degeneration of the form of the operator $L$ characterizing regular three-dimensional bi-flat $F$-manifolds.
In this way, confluences of the Painlev\'e equations are mirrored in the collision of eigenvalues and the creation of non-trivial Jordan blocks according to the following diagram: \begin{equation*} \xymatrix{ P_{VI} \ar@{<-->}[dd] \ar[rrrrr]^{\text{confluence }}&&&&&P_{V}\ar@{<-->}[dd] \ar[rrrrr]^{\text{confluence }} &&&&&P_{IV}\ar@{<-->}[dd]\\ &&&&& &&&&& \\ L_1\ar[rrrrr]^{\text{degeneration of distinct eigenvalues }}_{\text{preserving regularity}} &&&&& L_2 \ar[rrrrr]^{\text{degeneration of distinct eigenvalues }}_{\text{preserving regularity}}&&&&& L_3} \end{equation*} As an open problem, let us mention that it would be interesting to extend this correspondence to include the remaining Painlev\'e transcendents on one side and possibly non-regular bi-flat $F$-manifolds on the other. \section{Multi-flat $F$-manifolds in the regular non-semisimple case} \subsection{Tri-flat $F$-manifolds} Contrary to the semisimple situation, in this Section we are going to show that in the regular non-semisimple case there do exist tri- and multi-flat $F$-manifolds. For simplicity we focus our attention on the case in which the Jordan normal form of the operator $L$ consists of a single Jordan block. In particular, the next two theorems show that regular tri-flat and multi-flat $F$-manifolds in dimension three such that $L_p$ has three equal eigenvalues do exist and are locally represented as follows. \begin{thm}\label{thmtriflatregular} Let $(M, \nabla_1, \nabla_2, \nabla_3, \circ_1, \circ_2, \circ_3, E_1:=e, E_2:=E, E_3:=E^2=E\circ_1 E)$ be a regular tri-flat $F$-manifold in dimension three such that $L_p$ has three equal eigenvalues. Then there exist local coordinates $\{u^1, u^2, u^3\}$ such that \begin{enumerate} \item $E_1:=e, E_2:=E, \circ_1=\circ$ are given by \eqref{canonical1}, \eqref{canonical2}, \eqref{canonical3}.
\item The Christoffel symbols $\Gamma_{jk}^{(1)i}$ for $\nabla_1$ are given by: $$\Gamma_{23}^{(1)1}=\Gamma_{32}^{(1)1}=\Gamma_{33}^{(1)2}=\frac{f_1}{u^2}, \; \Gamma_{32}^{(1)3}=\Gamma_{23}^{(1)3}=\frac{F_2\left(\frac{u^3}{u^2} \right)}{u^2},\; \Gamma_{32}^{(1)2}=\Gamma_{23}^{(1)2}=\frac{f_3}{u^2},$$ $$\Gamma_{22}^{(1)1}=\frac{F_4\left(\frac{u^3}{u^2} \right)}{u^2}, \;\Gamma_{22}^{(1)2}=\frac{F_5\left(\frac{u^3}{u^2} \right)}{u^2}, \; \Gamma_{22}^{(1)3}=\frac{F_6\left(\frac{u^3}{u^2} \right)}{u^2}, \; \Gamma_{33}^{(1)3}=\frac{f_3-F_4\left(\frac{u^3}{u^2} \right)}{u^2},$$ where $f_1$ and $f_3$ are constants and the functions $F_2, F_4, F_5, F_6$ are given by \begin{eqnarray} \label{F2triflatregular} F_2(z)&=&-f_1z^2-1,\\ \label{F4triflatregular} F_4(z)&=&-2f_1z,\\ \label{F5triflatregular} F_5(z)&=&-f_1z^2-2f_3z,\\ \label{F6triflatregular} F_6(z)&=&-f_3z^2+2z \end{eqnarray} in the variable $z=\frac{u^3}{u^2}$ while the other symbols are identically zero. \item The Christoffel symbols $\Gamma_{jk}^{(2)i}$ for $\nabla_2$ and the Christoffel symbols $\Gamma_{jk}^{(3)i}$ for $\nabla_3$ are uniquely determined by the Christoffel symbols of $\nabla_1$ via the procedure explained in the proof of the theorem. In particular, $\Gamma_{jk}^{(2)i}$ can be expressed in terms of $\Gamma_{jk}^{(1)i}$ via the same formulas appearing in Theorem \ref{thm1}. \item The product $\circ_2$ is obtained via formula \eqref{nm} using $\circ_1:=\circ$ and $E$ (and analogously for $\circ_3$). \end{enumerate} \end{thm} \noindent \emph{ Proof}. The first part of the proof is the same as the proof given for Theorem \ref{thm1}. To determine the Christoffel symbols $\Gamma_{ij}^{(1)k}$ for the torsionless connection $\nabla_1$, the same conditions appearing in the proof of Theorem \ref{thm1} are imposed resulting in a system of algebraic equations for $\Gamma^{(1)k}_{ij}$. These symbols are in general functions of $u^1, u^2, u^3$. 
Furthermore, imposing the commutativity of $\nabla_1$ and $\rm{Lie}_e$ we get that $\Gamma^{(1)k}_{ij}$ do not depend on $u^1$. The Christoffel symbols for the connection $\nabla_{(2)}$ are determined in terms of the Christoffel symbols for the connection $\nabla_{(1)}$ exactly like in the proof of Theorem \ref{thm1}. Furthermore, imposing the commutativity of $\nabla_{(2)}$ with $\rm{Lie}_E$, and using the fact that $\Gamma^{(2)i}_{jk}$ are expressed uniquely in terms of $\Gamma^{(1)i}_{jk}$, we obtain a system of PDEs for the unknowns $\Gamma^{(1)i}_{jk}$, which can be solved exactly in the same way presented in the proof of Theorem \ref{thm1} (indeed the Christoffel symbols $\Gamma_{jk}^{(1)i}$ are expressed in the same way in terms of the functions $F_1, \dots, F_6$ at this stage of the proof). Now we introduce the third connection $\nabla_{(3)}$ and we impose that it is almost hydrodynamically equivalent to $\nabla_{(1)}$ (and consequently to $\nabla_{(2)}$) and that $\nabla_{(3)} E_3=0$, where $E_{3}:=E^2=E\circ E$. In the David-Hertling coordinates $E^2$ has components $((u^1)^2, 2u^2u^1, 2u^3u^1+(u^2)^2)$. This again is enough to determine uniquely all the Christoffel symbols of $\nabla_{(3)}$ in terms of the Christoffel symbols of $\nabla_{(1)}$. However, at this point of the proof the Christoffel symbols for $\nabla_{(1)}$ are given in terms of functions $F_1, \dots, F_6$ of $z=\frac{u^3}{u^2}$. Therefore if we impose the commutativity of $\rm{Lie}_{E^2}$ with $\nabla_{(3)}$ (coming as always from the flatness of $\nabla_{(3)}$) we obtain a very simple system of ODEs in the functions $F_1, \dots, F_6$. In this case, the system forces $F_1$ and $F_3$ to be constants, while the other equations can be easily integrated. The additional constants appearing in the integration process are determined in such a way that $\nabla_{(1)}$ is flat. In this way we get the formulas for $\Gamma_{jk}^{(1)i}$ appearing in the statement of the theorem.
Once the constants are chosen in this way, $\nabla_{(2)}$ and $\nabla_{(3)}$ turn out to be automatically flat and moreover the compatibility of each connection with the corresponding product is also fulfilled, as a straightforward calculation readily shows. \begin{flushright} $\Box$ \end{flushright} \subsection{An example with infinitely many compatible flat structures} With similar computations it is possible to add further connections and try to construct $F$-manifolds with four or more compatible flat connections. A very remarkable phenomenon is the following: once a quadri-flat $F$-manifold has been constructed, no new conditions arise if one tries to equip it with further flat compatible connections. In other words, regular quadri-flat $F$-manifolds in dimension three with operator $L$ consisting of a single Jordan block are automatically ``infinitely''-flat $F$-manifolds. \begin{thm}\label{thm2quadriflatregular} The data \begin{eqnarray*} c^k_{ij}&=&\delta^k_{i+j-1},\\ E_{(0)}&=&e=\partial_{u^1},\\ \label{powersE} E_{(l+1)}&=&E^l=(u^1)^l\partial_{u^1}+lu^2(u^1)^{l-1}\partial_{u^2}+\left(lu^3(u^1)^{l-1}+\frac{1}{2}(l^2-l)(u^2)^2(u^1)^{l-2}\right)\partial_{u^3} \end{eqnarray*} and \begin{eqnarray*} &&\Gamma_{11}^{(l+1)1}=-\frac{l}{u^1},\;\;\Gamma_{11}^{(l+1)2}=\frac{lu^2(la^2 +la+a+2)}{(a+2)(u^1)^2}\\ &&\Gamma_{11}^{(l+1)3}=\frac{l((2la^2+2la+a+2)u^1u^3-(la^2+2la+a+2)(u^2)^2+(lab+2lb)u^1u^2 )}{(a+2)(u^1)^3}\\ &&\Gamma_{12}^{(l+1)1}=\Gamma_{21}^{(l+1)1}=0,\;\;\Gamma_{12}^{(l+1)2}=\Gamma_{21}^{(l+1)2}=-\frac{l(a^2+2a+2)}{(u^1)(a+2)},\;\;\Gamma_{23}^{(l+1)3}=\Gamma_{32}^{(l+1)3}=\frac{a}{u^2}\\ &&\Gamma_{12}^{(l+1)3}=\Gamma_{21}^{(l+1)3}=\frac{l((la^2+a^2+2la+4a+4)(u^2)^2-2a^2u^1u^3 -(2ab+4b)u^1u^2)}{2u^2(a+2)(u^1)^2},\\ &&\Gamma_{13}^{(l+1)1}=\Gamma_{31}^{(l+1)1}=\Gamma_{13}^{(l+1)2}=\Gamma_{31}^{(l+1)2}=\Gamma_{22}^{(l+1)1}=0,\;\;\Gamma_{13}^{(l+1)3}=\Gamma_{31}^{(l+1)3}=-\frac{l(a+1)}{u^1},\\
&&\Gamma_{22}^{(l+1)3}=-\frac{((la^2+3la+2l)(u^2)^2-(ab-2b)u^1u^2+2au^1u^3)}{(a+2)u^1(u^2)^2},\;\;\Gamma_{22}^{(l+1)2}=\frac{a(a+1)}{u^2(a+2)}\\ &&\Gamma_{23}^{(l+1)1}=\Gamma_{32}^{(l+1)1}=\Gamma_{23}^{(l+1)2}=\Gamma_{32}^{(l+1)2}= \Gamma_{33}^{(l+1)1}=\Gamma_{33}^{(l+1)2}=\Gamma_{33}^{(l+1)3}= 0, \end{eqnarray*} locally define a regular three-dimensional multi-flat $F$-manifold $(M, \nabla_{(l)}, \circ_l, E_{(l)},\,l=1,2...)$ for any value of the constants $a$ and $b$. \end{thm} \emph{Proof}. The proof develops along the same lines as that of Theorem \ref{thmtriflatregular}, so here we just highlight the main differences. The first steps, including the determination of $\Gamma_{jk}^{(3)i}$ in terms of $\Gamma_{jk}^{(1)i}$, are the same. Imposing, as in the proof of Theorem \ref{thmtriflatregular}, the commutativity of $\rm{Lie}_{E_3}$ with $\nabla_{(3)}$, we obtain a simple system of ODEs in $F_i$, $i=1,\dots,6$, from which we deduce that $F_1$ and $F_3$ have to be constants. The ODEs are integrated, but this time the constants are left free at this stage of the proof. Instead we introduce the fourth connection $\nabla_{(4)}$ and as usual we impose that it is almost hydrodynamically equivalent to $\nabla_{(1)}$ and that $\nabla_{(4)} E_4=0$. This is enough to express $\nabla_{(4)}$ in terms of $\nabla_{(1)}$. Furthermore we impose the commutativity of $\rm{Lie}_{E_4}$ with $\nabla_{(4)}$, which forces $f_1=f_3=0$. Some of the remaining constants appearing from the integration of the system of ODEs are fixed by imposing that $\nabla_{(l)}$, $l=1,2,3,4$, are flat. Once this is done, the compatibility of each connection with the corresponding product is automatically satisfied and can be checked via a straightforward computation.
At this point proceeding in a similar way one can construct one connection for each power of the Euler vector field (it is easy to prove by induction that the components of these vector fields are given by the formula \eqref{powersE}) without obtaining additional constraints. \begin{flushright} $\Box$ \end{flushright} \begin{rmk} Instead of considering the special eventual identities given by powers of the Euler vector field one can try to repeat the above construction considering arbitrary eventual identities. It is easy to check that these are given by $$G_1(u^1)\partial_{u^1}+G_2(u^1, u^2)\partial_{u^2}+\left(-u^3G'_1+2u^3\frac{\partial G_2}{\partial u^2}+G_3(u^1, u^2)\right)\partial_{u^3}$$ where $G_1(u^1),G_2(u^1, u^2),G_3(u^1, u^2)$ are arbitrary functions. It turns out (after long computations) that the previous construction works only for the subset of the eventual identities corresponding to the choice \begin{eqnarray*} &&G_1(u^1)=f(u^1),\,\,G_2(u^1, u^2)=f'u^2,\,\,G_3(u^1, u^2)=\frac{(u^2)^2f''}{2} \end{eqnarray*} where $f$ is an arbitrary function of $u^1$. In particular, the powers of $E$ are obtained by setting $f=(u^1)^l$. 
For arbitrary $f$ the formulas for the associated Christoffel symbols are \begin{eqnarray*} &&\Gamma_{11}^{(l+1)1}=-\frac{f'}{f},\;\;\Gamma_{11}^{(l+1)2}=-\frac{u^2((-a^2-2a-2)(f')^2+(a+2)ff'')}{f^2(a+2)}\\ &&\Gamma_{11}^{(l+1)3}=\frac{(2a^2+6a+4)(u^2)^2(f')^3+(-4a^2u^3-2abu^2-6au^3-4bu^2-4u^3)f(f')^2}{2f^3(a+2)}+\\ &&\qquad\qquad\frac{ (-2a^2-7a-6)(u^2)^2ff'f''+(2a+4)u^3f^2f''+(a+2)(u^2)^2f^2f''')}{2f^3(a+2)}\\ &&\Gamma_{12}^{(l+1)1}=\Gamma_{21}^{(l+1)1}=0,\;\;\Gamma_{12}^{(l+1)2}=\Gamma_{21}^{(l+1)2}=-\frac{(a^2+2a+2)f'}{(a+2)f},\;\;\Gamma_{23}^{(l+1)3}=\Gamma_{32}^{(l+1)3}=\frac{a}{u^2}\\ &&\Gamma_{12}^{(l+1)3}=\Gamma_{21}^{(l+1)3}=\frac{(a+2)^2(u^2)^2ff''-(2a^2+6a+4)(u^2)^2(f')^2+(2a^2u^3+2abu^2+4bu^2)ff'}{2u^2(a+2)f^2},\\ &&\Gamma_{13}^{(l+1)1}=\Gamma_{31}^{(l+1)1}=\Gamma_{13}^{(l+1)2}=\Gamma_{31}^{(l+1)2}=\Gamma_{22}^{(l+1)1}=0,\;\;\Gamma_{13}^{(l+1)3}=\Gamma_{31}^{(l+1)3}=-\frac{(a+1)f'}{f},\\ &&\Gamma_{22}^{(l+1)3}=\frac{(-a^2-3a-2)(u^2)^2f'+(abu^2-2au^3+2bu^2)f}{(a+2)(u^2)^2f},\;\;\Gamma_{22}^{(l+1)2}=\frac{a(a+1)}{u^2(a+2)}\\ &&\Gamma_{23}^{(l+1)1}=\Gamma_{32}^{(l+1)1}=\Gamma_{23}^{(l+1)2}=\Gamma_{32}^{(l+1)2}= \Gamma_{33}^{(l+1)1}=\Gamma_{33}^{(l+1)2}=\Gamma_{33}^{(l+1)3}= 0. \end{eqnarray*} \end{rmk} \section{Appendix 1} Let us consider the system of first order partial differential equations \begin{eqnarray} \label{bif1} &&\partial_k\Gamma^i_{ij}=-\Gamma^i_{ij}\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^j_{jk} +\Gamma^i_{ik}\Gamma^k_{kj}, \quad i\ne k\ne j\ne i,\\ \label{bif2} &&e(\Gamma^i_{ij})=0,\qquad i\ne j\\ \label{bif3} &&E(\Gamma^i_{ij})=-\Gamma^i_{ij},\qquad i\ne j \end{eqnarray} for the $n(n-1)$ unknown functions $\Gamma^i_{ij}$ ($i\ne j$). In this Appendix we will prove the following theorem: \begin{thm} The system (\ref{bif1},\ref{bif2},\ref{bif3}) is complete, that is all the compatibility conditions $$ \partial_l\partial_k\Gamma^i_{ij}-\partial_k\partial_l\Gamma^i_{ij}=0, \qquad\forall k,l=1,...,n. $$ are satisfied. 
\end{thm} \proof First of all it is easy to check that \begin{equation}\label{comp1} \partial_l\partial_k\Gamma^i_{ij}-\partial_k\partial_l\Gamma^i_{ij}=0 \end{equation} for distinct indices $i,j,k,l$. Indeed, expanding the left hand side of \eqref{comp1} one gets $$(\Gamma^i_{il}\Gamma^l_{lk}+\Gamma^i_{ik}\Gamma^k_{kl}-\Gamma^i_{ik}\Gamma^i_{il})\Gamma^k_{kj}+(\Gamma^k_{kl}\Gamma^l_{lj}+\Gamma^k_{kj}\Gamma^j_{jl}-\Gamma^k_{kj}\Gamma^k_{kl})\Gamma^i_{ik}$$ $$+(\Gamma^i_{il}\Gamma^l_{lj}+\Gamma^i_{ij}\Gamma^j_{jl}-\Gamma^i_{ij}\Gamma^i_{il})\Gamma^j_{jk}+(\Gamma^j_{jl}\Gamma^l_{lk}+\Gamma^j_{jk}\Gamma^k_{kl}-\Gamma^j_{jk}\Gamma^j_{jl})\Gamma^i_{ij}$$ $$-(\Gamma^i_{il}\Gamma^l_{lj}+\Gamma^i_{ij}\Gamma^j_{jl}-\Gamma^i_{ij}\Gamma^i_{il})\Gamma^i_{ik}-(\Gamma^i_{il}\Gamma^l_{lk}+\Gamma^i_{ik}\Gamma^k_{kl}-\Gamma^i_{ik}\Gamma^i_{il})\Gamma^i_{ij}$$ $$-(\Gamma^i_{ik}\Gamma^k_{kl}+\Gamma^i_{il}\Gamma^l_{lk}-\Gamma^i_{il}\Gamma^i_{ik})\Gamma^l_{lj}-(\Gamma^l_{lk}\Gamma^k_{kj}+\Gamma^l_{lj}\Gamma^j_{jk}-\Gamma^l_{lj}\Gamma^l_{lk})\Gamma^i_{il}$$ $$-(\Gamma^i_{ik}\Gamma^k_{kj}+\Gamma^i_{ij}\Gamma^j_{jk}-\Gamma^i_{ij}\Gamma^i_{ik})\Gamma^j_{jl}-(\Gamma^j_{jk}\Gamma^k_{kl}+\Gamma^j_{jl}\Gamma^l_{lk}-\Gamma^j_{jl}\Gamma^j_{jk})\Gamma^i_{ij}$$ $$+(\Gamma^i_{ik}\Gamma^k_{kj}+\Gamma^i_{ij}\Gamma^j_{jk}-\Gamma^i_{ij}\Gamma^i_{ik})\Gamma^i_{il}+(\Gamma^i_{ik}\Gamma^k_{kl}+\Gamma^i_{il}\Gamma^l_{lk}-\Gamma^i_{il}\Gamma^i_{ik})\Gamma^i_{ij}=0.$$ In order to prove that \begin{eqnarray*} &&\partial_i\partial_k\Gamma^i_{ij}-\partial_k\partial_i\Gamma^i_{ij}=0\\ &&\partial_j\partial_k\Gamma^i_{ij}-\partial_k\partial_j\Gamma^i_{ij}=0 \end{eqnarray*} for $k\ne i,j$ we observe that from \eqref{bif2} and \eqref{bif3} it follows that \begin{eqnarray*} \partial_i\Gamma^i_{ij}&=&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^l-u^j)\partial_l\Gamma^i_{ij}+\Gamma^i_{ij}\right),\\ \partial_j\Gamma^i_{ij}&=&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^i-u^l)\partial_l\Gamma^i_{ij}-\Gamma^i_{ij}\right).\\ 
\end{eqnarray*} Using the above identities and writing $(u^i-u^j)\partial_i$ and $(u^i-u^j)\partial_j$ as \begin{eqnarray*} (u^i-u^j)\partial_i&=&E-u^je-\sum_{l\ne i}(u^l-u^j)\partial_l\\ (u^i-u^j)\partial_j&=&-E+u^ie-\sum_{l\ne j}(u^i-u^l)\partial_l \end{eqnarray*} respectively, we obtain \begin{eqnarray*} (\partial_k\partial_i-\partial_i\partial_k)\Gamma^i_{ij}&=&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^l-u^j)\partial_k\partial_l\Gamma^i_{ij}+2\partial_k\Gamma^i_{ij}\right)-\partial_i\partial_k\Gamma^i_{ij}=\\ &&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^l-u^j)\partial_k\partial_l\Gamma^i_{ij}+2\partial_k\Gamma^i_{ij}+(u^i-u^j)\partial_i\partial_k\Gamma^i_{ij}\right)=\\ &&\frac{1}{u^j-u^i}\left[E(\partial_k\Gamma^i_{ij})-u^je(\partial_k\Gamma^i_{ij})+2\partial_k\Gamma^i_{ij}\right],\\ (\partial_k\partial_j-\partial_j\partial_k)\Gamma^i_{ij}&=&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^i-u^l)\partial_k\partial_l\Gamma^i_{ij}-2\partial_k\Gamma^i_{ij}\right)-\partial_j\partial_k\Gamma^i_{ij}=\\ &&\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^i-u^l)\partial_k\partial_l\Gamma^i_{ij}-2\partial_k\Gamma^i_{ij}+(u^i-u^j)\partial_j\partial_k\Gamma^i_{ij}\right)=\\ &&\frac{1}{u^j-u^i}\left[-E(\partial_k\Gamma^i_{ij})+u^ie(\partial_k\Gamma^i_{ij})-2\partial_k\Gamma^i_{ij}\right], \end{eqnarray*} where we have used the identity \eqref{comp1}. The result follows from the identities \begin{eqnarray*} E(\partial_k\Gamma^i_{ij})&=&E(-\Gamma^i_{ij}\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^j_{jk}+\Gamma^i_{ik}\Gamma^k_{kj})\\ &=&-2(-\Gamma^i_{ij}\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^j_{jk}+\Gamma^i_{ik}\Gamma^k_{kj})\\ &=&-2\partial_k\Gamma^i_{ij} \end{eqnarray*} and \begin{equation}\label{edgamma} e(\partial_k\Gamma^i_{ij})=e(-\Gamma^i_{ij}\Gamma^i_{ik}+\Gamma^i_{ij}\Gamma^j_{jk}+\Gamma^i_{ik}\Gamma^k_{kj})=0.
\end{equation} To conclude, we have to prove that $$\partial_i\partial_j\Gamma^i_{ij}-\partial_j\partial_i\Gamma^i_{ij}=0.$$ Writing $\partial_i$ as $$\partial_i=e-\sum_{l\ne i}\partial_l$$ and using \eqref{comp1} and \eqref{bif2} we obtain the equivalent condition $$\partial_i(e(\Gamma^i_{ij}))-e(\partial_i\Gamma^i_{ij})=-e(\partial_i\Gamma^i_{ij})=0.$$ Taking into account that $e(u^j-u^i)=e(u^l-u^j)=0$ we obtain $$-e(\partial_i\Gamma^i_{ij})=-\frac{1}{u^j-u^i}\left(\sum_{l\ne i,j}(u^l-u^j)e(\partial_l\Gamma^i_{ij})+e(\Gamma^i_{ij})\right).$$ The result follows from the identities \eqref{edgamma} and \eqref{bif2}. \endproof \section{Appendix 2} In this Appendix we show how to reconstruct a solution of the system \eqref{mainsys} starting from a solution of the Painlev\'e VI equation. More precisely, we will construct solutions of the system \begin{equation}\label{mainsysA} \begin{split} \frac{dF_{12}}{dz}&=-\frac{(F_{12}(z)F_{23}(z)-F_{12}(z)F_{13}(z))z-F_{12}(z)F_{23}(z)+F_{32}(z)F_{13}(z)}{z(z-1)},\\ \frac{dF_{21}}{dz}&=\frac{(F_{21}(z)F_{23}(z)-F_{21}(z)F_{13}(z))z+F_{23}(z)F_{31}(z)-F_{23}(z)F_{21}(z)}{z(z-1)},\\ \frac{dF_{13}}{dz}&=\frac{(F_{12}(z)F_{23}(z)-F_{12}(z)F_{13}(z))z-F_{12}(z)F_{23}(z)+F_{32}(z)F_{13}(z)}{z(z-1)},\\ \frac{dF_{31}}{dz}&=-\frac{(-F_{31}(z)F_{12}(z)+F_{21}(z)F_{32}(z))z+F_{31}(z)F_{32}(z)-F_{21}(z)F_{32}(z)}{z(z-1)},\\ \frac{dF_{23}}{dz}&=-\frac{(F_{21}(z)F_{23}(z)-F_{21}(z)F_{13}(z))z+F_{23}(z)F_{31}(z)-F_{23}(z)F_{21}(z)}{z(z-1)},\\ \frac{dF_{32}}{dz}&=\frac{(-F_{31}(z)F_{12}(z)+F_{21}(z)F_{32}(z))z+F_{31}(z)F_{32}(z)-F_{21}(z)F_{32}(z)}{z(z-1)} \end{split} \end{equation} starting from solutions of the equation \begin{equation}\label{sigmaA} \begin{split} [z(z-1)f'']^2=&[q_2-(d_2-d_3)g_2-(d_1-d_3)g_1]^2-4f'g_1g_2.
\end{split} \end{equation} (where $g_1=f-zf'+\frac{q_1}{2}$ and $ g_2=(z-1)f'-f+\frac{q_1}{2}$) which is related to the Painlev\'e VI equation by the elementary transformation $$f=-\phi(z)-\frac{1}{4}(d_{13}-d_{23})^2z+\frac{1}{4}d_{13}(d_{13}-d_{23}).$$ Given a specific instance of equation \eqref{sigmaA} and a solution $f(z)$, define $d_1$ as a root of the cubic polynomial $$\lambda^3-(2d_{13}-d_{23})\lambda^2+(d_{13}^2-d_{13}d_{23}-q_1)\lambda+q_1d_{13}-q_2$$ and $d_2$ and $d_3$ as $$d_2=d_1-d_{13}+d_{23},\quad d_3=d_1-d_{13}.$$ In this way the parameters $d_1,d_2,d_3,q_1,q_2$ satisfy the identity $$q_2=-d_3q_1+d_1d_2d_3,$$ since the values of the parameters are related to the values of the first integrals $I_i$, which satisfy a similar identity. Notice that the constants $d_1,d_2,d_3,q_2$ are determined up to a sign, since the equation \eqref{sigmaA} is invariant under the simultaneous substitution $$d_1\to -d_1,\quad d_2\to -d_2,\quad d_3\to -d_3,\quad q_2\to -q_2.$$ Therefore, once a root of the cubic polynomial above has been chosen, the only indeterminacy left is in the choice of simultaneous signs for $d_1, d_2, d_3, q_2.$ Given $d_1,d_2,d_3$, $q_1$ and $f(z)$ one can reconstruct the solution $(F_{12}(z),F_{21}(z),F_{13}(z),F_{31}(z),F_{23}(z),F_{32}(z))$ of the system \eqref{mainsys} solving the algebraic system \begin{eqnarray*} &&F_{12}+F_{13}=\pm d_1,\quad F_{23}+F_{21}=\pm d_2,\quad F_{31}+F_{32}=\pm d_3,\\ &&F_{12}F_{21}=f',\qquad F_{23}F_{32}=g_1,\qquad F_{13}F_{31}=g_2. \end{eqnarray*} The solution is \begin{equation}\label{F_ij} \begin{split} &F_{12} =\pm \frac{\mu f'}{\mu d_2-g_1},\qquad F_{21} =\pm\left( d_2-\frac{g_1}{\mu}\right),\qquad F_{13} =\pm\left( d_1- \frac{\mu f'}{\mu d_2-g_1}\right)\\ & F_{31} =\pm\left( -\mu+d_3\right),\qquad F_{23} = \pm\frac{g_1}{\mu},\qquad F_{32} =\pm\mu \end{split} \end{equation} where $\mu$ satisfies \begin{equation}\label{mu} (f'-d_1d_2)\mu^2+(d_1d_2d_3+d_1g_1-d_2g_2-d_3f')\mu-d_1d_3g_1+g_1g_2=0.
\end{equation} Now, by hypothesis, the function $f$ is a solution of the equation $$[z(z-1)f'']^2=[q_2-d_{23}g_2-d_{13}g_1]^2-4f'g_1g_2,$$ where $g_1=f-zf'+\frac{q_1}{2}$ and $g_2=(z-1)f'-f+\frac{q_1}{2}$. Defining the constants $d_1,d_2,d_3$ and the functions $F_{ij}$ as above we obtain $$[z(z-1)f'']^2=[q_2-(d_2-d_3)F_{13}F_{31}-(d_1-d_3)F_{23}F_{32}]^2 -4F_{23}F_{31}F_{12}F_{13}F_{32}F_{21}.$$ Moreover, by construction we obtain the system that the functions $F_{ij}$ have to satisfy: \begin{equation}\label{sysFij}\begin{split} q_1=F_{31}F_{13}+F_{12}F_{21}+F_{23}F_{32},\qquad q_2=-d_3q_1+d_1d_2d_3,\\ d_1=\pm(F_{12}+F_{13}),\quad d_2=\pm(F_{21}+F_{23}),\quad d_3=\pm(F_{31}+F_{32}). \end{split} \end{equation} Using these identities we obtain $$[z(z-1)f'']^2=[F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}]^2.$$ Since the functions $F_{ij}$ are defined up to a sign, due to the form of system \eqref{sysFij}, in a neighborhood of a point $z_0\neq 0, 1$ such that $f''(z_0)\ne 0$ we can always choose the simultaneous sign of the $F_{ij}$ in such a way that the following relation holds: $$f''=\frac{F_{23}F_{31}F_{12}-F_{13}F_{32}F_{21}}{z(z-1)}.$$ In this way there is no freedom in the definition of the functions $F_{ij}$ even if we do not know {\it a priori} the right choice of the sign. Taking into account the definition of the functions $g_1$ and $g_2$ we obtain the system \begin{eqnarray*} &&(F_{12}F_{21})'=f'',\quad(F_{13}F_{31})'=(z-1)f'',\quad(F_{23}F_{32})'=-zf'',\\ &&(F_{12}+F_{13})'=0,\quad (F_{21}+F_{23})'=0,\quad (F_{31}+F_{32})'=0. \end{eqnarray*} It is easy to check that this system is equivalent to the system \eqref{mainsysA}, just written in a different coordinate system, provided that the Jacobian determinant does not vanish, and that the Jacobian vanishes precisely when $f''$ does. This means that the case where $f$ is a linear function of $z$ must be treated separately.
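The reconstruction \eqref{F_ij}--\eqref{mu} can be sanity-checked numerically: for generic data $d_1,d_2,d_3$, $f'$, $g_1$, $g_2$, any root $\mu$ of \eqref{mu} yields quantities satisfying the algebraic system above. A standalone numerical sketch (variable names are ours; the plus sign is chosen throughout, and the data are picked so that the discriminant of \eqref{mu} is positive):

```python
import math

def reconstruct_F(d1, d2, d3, fp, g1, g2):
    """Build F_ij from formula (F_ij); fp stands for f', and mu is one
    root of the quadratic (mu).  Plus signs are chosen throughout."""
    A = fp - d1 * d2
    B = d1 * d2 * d3 + d1 * g1 - d2 * g2 - d3 * fp
    C = -d1 * d3 * g1 + g1 * g2
    mu = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)  # a root of (mu)
    F12 = mu * fp / (mu * d2 - g1)
    F21 = d2 - g1 / mu
    F13 = d1 - F12
    F31 = d3 - mu
    F23 = g1 / mu
    F32 = mu
    return F12, F21, F13, F31, F23, F32, mu

# generic numerical data
d1, d2, d3, fp, g1, g2 = 1.0, 2.0, -1.5, 0.3, 0.8, -0.4
F12, F21, F13, F31, F23, F32, mu = reconstruct_F(d1, d2, d3, fp, g1, g2)

# the algebraic system: sums ...
assert abs(F12 + F13 - d1) < 1e-10
assert abs(F23 + F21 - d2) < 1e-10
assert abs(F31 + F32 - d3) < 1e-10
# ... and products (the last one is exactly where the quadratic (mu) enters)
assert abs(F12 * F21 - fp) < 1e-10
assert abs(F23 * F32 - g1) < 1e-10
assert abs(F13 * F31 - g2) < 1e-10
```

The sums and the first two products hold identically by construction; the check $F_{13}F_{31}=g_2$ is equivalent to $\mu$ being a root of \eqref{mu}.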
Given linear solutions of \eqref{sigmaA}, the existence of corresponding solutions of \eqref{mainsysA} is not automatically guaranteed. Moreover, it turns out that there are some exceptional linear solutions for which the polynomial \eqref{mu} vanishes identically. In this case, families of solutions of the system \eqref{mainsysA} correspond to the same solution of \eqref{sigmaA}. For instance, the linear solution associated with three-dimensional tri-flat $F$-manifolds, namely $$f = -C_{12}C_{23}-C_{23}^2+C_{23}+zC_{12}C_{23}+\frac{1}{2}(C_{12}^2+C_{12}C_{23}+C_{23}^2-C_{12}-C_{23}),$$ is related to the following one-parameter family of solutions of the system \eqref{mainsysA}: \begin{eqnarray*} &&F_{12}=\frac{C_{12}C_{23}}{F_{21}},\,F_{13}=-\frac{C_{12}C_{23}-C_{12}F_{21}}{F_{21}},\,F_{23}=-F_{21}+C_{23}\\ &&F_{31}=\frac{F_{21}(-1+C_{12}+C_{23})}{-F_{21}+C_{23}},\,F_{32}=-\frac{C_{12}C_{23}+C_{23}^2-C_{23}}{-F_{21}+C_{23}} \end{eqnarray*} where \begin{footnotesize} \begin{eqnarray*} &&F_{21}=\frac{C(C_{12}-2)(C_{23}z+C_{12}-1){\rm hypergeom}([-C_{12}+1,C_{31}], [2-C_{12}],\frac{z}{z-1})}{(z-1)\left(C(C_{12}-2){\rm hypergeom}([-C_{12}+1, C_{31}], [2-C_{12}],\frac{z}{z-1})+\left(\frac{z}{z-1}\right)^{C_{12}-1}\right)}+\\ &&\frac{\left(C(C_{12}-1)(-C_{31}){\rm hypergeom}([2-C_{12}, 1+C_{31}], [3-C_{12}], \frac{z}{z-1})+C_{23}\left(\frac{z}{z-1}\right)^{C_{12}-1}(z-1)\right)z}{(z-1)^2\left(C(C_{12}-2){\rm hypergeom}([-C_{12}+1, C_{31}], [2-C_{12}],\frac{z}{z-1})+\left(\frac{z}{z-1}\right)^{C_{12}-1}\right)} \end{eqnarray*} \end{footnotesize} with $C_{31}=1-C_{12}-C_{23}$. \section*{Acknowledgements} We thank Davide Guzzetti for a useful discussion and Boris Dubrovin for very helpful comments. P.L. is partially supported by the Italian MIUR Research Project \emph{Teorie geometriche e analitiche dei sistemi Hamiltoniani in dimensioni finite e infinite} and by GNFM Progetto Giovani 2014 \emph{Aspetti geometrici e analitici dei sistemi integrabili}. The visit of A.A.
at the University of Milano-Bicocca has been supported by GNFM. A.A. thanks the \emph{Dipartimento di Matematica e Applicazioni of the University of Milano-Bicocca} for the kind hospitality while part of this work was done.
\section{Introduction and background information} \label{Introduction} \noindent Nilpotent and locally nilpotent subgroups have proven to be very efficient means to study groups. In particular, the Fitting subgroup and the Frattini subgroup help in the study of finite groups, while the Plotkin-Hirsch radical allows one to consider infinite groups (see \cite{Fitting Gruppen endlicher Ordnung, Gaschuetz Frattini, Plotking - radical, Plotking - Radical groups, Plotking - Generalized soluble and nilpotent groups, Uber lokal-nilpotente Gruppen, RobinsonFinitenessConditions} and the literature cited therein). This trend is naturally inherited by varieties of groups: nilpotent varieties are in some sense simpler varieties (they are irreducible, have finite base rank, their finitely generated groups are factors of finite powers of relatively free groups, etc.), and investigation of nilpotent subvarieties of a variety is an approach to the study of general varieties. In the current note we use sequences of nilpotent subvarieties to study varieties generated by non-nilpotent wreath products of groups (here we assume Cartesian wreath products, although the analogs of the statements below are also true for direct wreath products). Let us introduce the general context in which we examine varieties generated by wreath products. One of the most efficient methods to study product varieties $\U\V$ is finding some groups $A$ and $B$ such that $\U = \var A$, $\V = \var B$, and the wreath product $A \Wr B$ generates $\U\V$, that is, when the equality \begin{equation} \tag{$*$} \label{EQUATION_main} \var{A \Wr B} = \var{A} \var{B} \end{equation} holds for the given $A$ and $B$. Indeed, the product $\U \V$ consists of all possible extensions of all groups $A \in \U$ by all groups $B \in \V$.
If \eqref{EQUATION_main} holds for some fixed groups $A$ and $B$, generating the varieties $\U$ and $\V$, then one can restrict to consideration of $\var{A \Wr B}$, which is easier to study than exploring all the extensions inside $\U \V$. Examples of application of this approach are numerous: for the earliest results and references see Chapter 2 of Hanna Neumann's monograph~\cite{HannaNeumann} and the work of G.~Baumslag, R.G.~Burns, G.~Higman, C.~Haughton, B.H.~Neumann, H.~Neumann, P.M.~Neumann~\cite{Burns65, B+3N, Some_remarks_on_varieties}, etc. This motivated our systematic study of equality \eqref{EQUATION_main} for as wide classes of groups as possible. In \cite{AwrB_paper}--\cite{Metabelien} we gave a complete classification of all cases when \eqref{EQUATION_main} holds for {\it abelian} groups $A$ and $B$, and in \cite{wreath products algebra i logika} and \cite{shmel'kin criterion} we fully classified the cases when $A$ and $B$ are any {\it finite} groups. In the current note we consider the case when $A$ and $B$ are any $p$-groups of finite exponents, $A$ is nilpotent and $B$ is abelian. This is part of a larger, only partially published research project on the cases when \eqref{EQUATION_main} holds for nilpotent and abelian groups with some conditions (see also~\cite{classification theorem}). The main theorem of this note is: \begin{Theorem} \label{Theorem wr p-groups} Let $A$ be a non-trivial nilpotent $p$-group of finite exponent, and $B$ be a non-trivial abelian group of finite exponent $p^v$. Then the wreath product $A \Wr B$ generates the variety $\var{A} \var{B}$, that is, the variety $\var{A} \A_{p^v}$, if and only if the group $B$ contains a subgroup isomorphic to the direct product $C_{p^v}^\infty$ of countably many copies of the cyclic group $C_{p^v}$ of order $p^v$.
\end{Theorem} Since $B$ is a non-trivial group of finite exponent, $v >0$ and, by the Pr\"ufer-Kulikov theorem~\cite{Robinson, Kargapolov Merzlyakov}, $B$ is a direct product of copies of some finite cyclic subgroups of prime-power orders. The theorem above states that in this direct product the cyclic factors of order $p^v$ must be present at least {\it countably many} times, whereas the number of direct summands of orders $p^{v-1}\!, p^{v-2}\!, \ldots, p$ is of no importance. Below we use, without definitions, the basic notions of the theory of varieties of groups, such as varieties, relatively free groups, discriminating groups, etc. All the necessary definitions and background information can be found in Hanna Neumann's monograph~\cite{HannaNeumann}. Following the conventional notation, we denote by ${\sf Q}\X$, ${\sf S}\X$, ${\sf C}\X$ and ${\sf D}\X$ the classes of all homomorphic images, subgroups, Cartesian products, and direct products of finitely many groups of $\X$, respectively. By Birkhoff's Theorem~\cite{BirkhoffQSC,HannaNeumann}, for any class of groups $\X$ the variety $\var{\X}$ generated by it can be obtained from $\X$ by three operations: $\var{\X}={\sf QSC}\,\X$. For information on wreath products we refer to~\cite{HannaNeumann, Kargapolov Merzlyakov}. For the given classes of groups $\X$ and $\Y$ we denote $\X \Wr \Y = \{ X\Wr Y \,|\, X\in \X, Y\in \Y\}$. The specific notions related to $K_p$-series and to nilpotent wreath products can be found in \cite{Liebeck_Nilpotency_classes, Meldrum nilpotent, Shield nilpotent a, Shield nilpotent b, Marconi nilpotent} and in Chapter 4 of J.D.P.~Meldrum's monograph \cite{Meldrum book}. While this work was in progress, we discussed the topic with Prof. A.Yu.~Ol'shanskii, who suggested ideas for an alternative (very different from our approach) proof of Theorem~\ref{Theorem wr p-groups} using the arguments of his work \cite{Olshanskii Neumanns-Shmel'kin}.
In~\cite{classification theorem} we present the outlines of both proofs side by side. \section{The $K_p$-series and the proof for Theorem~\ref{Theorem wr p-groups}} \label{k_p series} In spite of the fact that the soluble wreath products are ``many'' (the wreath product of any two soluble groups is soluble), the nilpotent wreath products are ``fewer'': as proved by G.~Baumslag in 1959, a Cartesian or direct wreath product of non-trivial groups $A$ and $B$ is nilpotent if and only if, for a prime $p$, the group $A$ is a nilpotent $p$-group of finite exponent and $B$ is a finite $p$-group~\cite{Baumslag nilp wr}. Even with such an easy-to-use criterion to detect if the given wreath product is nilpotent, it turned out to be a much harder task, which took almost two decades, to explicitly compute its nilpotency class in the general case. H.~Liebeck started with the cases of wreath products of abelian groups~\cite{Liebeck_Nilpotency_classes}, and the final general formula was found in D.~Shield's work~\cite{Shield nilpotent a, Shield nilpotent b} in 1977. Later the proof was much shortened by R.~Marconi~\cite{Marconi nilpotent}. In order to write down the formula we need the notion of $K_p$-series. For the given group $G$ and the prime number $p$ the $K_p$-series $K_{i,p}(G)$ of $G$ is defined for $i=1, 2, \ldots$ by: \begin{equation} \label{EQUATION K_p} K_{i,p}(G) = \!\!\!\!\!\!\!\!\!\!\! \prod_{\text{$r, j$ with $r p^j \ge i$}} \!\!\!\!\!\!\!\!\!\! \gamma_r(G)^{p^j}, \end{equation} where $\gamma_r(G)$ is the $r$'th term of the lower central series of $G$. In particular, $K_{1,p}(G) = G$ holds for any $G$. From the definition it is clear that the $K_p$-series is a descending series, although it may not be strictly descending: some of its neighboring terms may coincide. If $G$ is abelian, then in \eqref{EQUATION K_p} only the powers $\gamma_1(G)^{p^j}=G^{p^j}$ of the first term $\gamma_1(G) = G$ need be considered.
\begin{Example} If $G=C_{p^3} \times C_{p} \times C_{p}$ with $p = 5$, then it is easy to calculate that: $$ K_{1,p}(G) = G; \quad K_{2,p}(G) = \cdots = K_{5,p}(G) \cong C_{p^2}; $$ $$ K_{6,p}(G) = \cdots = K_{25,p}(G) \cong C_{p}; \quad K_{26,p}(G) \cong \{1\}. $$ \end{Example} If $G$ is some finite $p$-group, using the $K_p$-series one may introduce the following additional parameters: let $d$ be the maximal integer such that $K_{d,p}(G) \not= \{1\}$. Then for each $s=1,\ldots, d$ define $e(s)$ by $$ p^{e(s)} = |K_{s,p} / K_{s+1,p}|, $$ and set $a$ and $b$ by the rules: $$ a = 1 + (p-1) \sum_{s=1}^d \left(s \cdot e(s)\vphantom{a^b}\right), \quad b = (p-1)d. $$ The above integer $d$ does exist, and our notations are correct: since a finite $p$-group is nilpotent, its $K_p$-series will eventually reach the trivial subgroup. To keep the notations simpler, the initial group $G$ is not included in the notations of $d$, $e(s)$, $a$ and $b$, but from the context it will always be clear which group $G$ is being considered. \vskip3mm Let $A$ be a nilpotent $p$-group of exponent $p^u$ and of nilpotency class $c$, and let $B$ be an abelian group of exponent $p^v$, with $u,v >0$. Assume all the parameters $d$, $e(s)$, $a$, $b$ are defined specifically for the group $B$. Then by Shield's formula \cite{Shield nilpotent b} (see also Theorem 2.4 in \cite{Meldrum book}) the nilpotency class of the wreath product $A \Wr B$ is the maximum \begin{equation} \label{EQUATION A wr B class} \max_{h = 1, \ldots, \, c} \{ a \, h + (s(h)-1)b \}, \end{equation} where $s(h)$ is defined as follows: $p^{s(h)}$ is the exponent of the $h$'th term $\gamma_h(A)$ of the lower central series of $A$.
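For an abelian group $B$, all of these quantities can be computed mechanically from the list of exponents of its cyclic factors. A minimal computational sketch (the helper names are ours, not from the cited works; $B$ is encoded by the exponents $k$ of its factors $C_{p^k}$, and $A$ by the list $s(1),\dots,s(c)$):

```python
def log_order_Kip(factors, p, i):
    """log_p |K_{i,p}(B)| for abelian B: K_{i,p}(B) = B^{p^j} with the
    smallest j such that p^j >= i.  `factors` lists the exponents k of
    the cyclic factors C_{p^k} of B."""
    j = 0
    while p ** j < i:
        j += 1
    return sum(max(k - j, 0) for k in factors)

def shield_class(s_list, factors, p):
    """Nilpotency class of A Wr B by Shield's formula, for abelian B.
    s_list[h-1] = s(h), i.e. p^{s(h)} is the exponent of gamma_h(A)."""
    v = max(factors)
    d = p ** (v - 1)                 # last i with K_{i,p}(B) nontrivial
    e = {s: log_order_Kip(factors, p, s) - log_order_Kip(factors, p, s + 1)
         for s in range(1, d + 1)}
    a = 1 + (p - 1) * sum(s * es for s, es in e.items())
    b = (p - 1) * d
    cls = max(a * h + (s_list[h - 1] - 1) * b
              for h in range(1, len(s_list) + 1))
    return cls, d, a, b

# the running example: p = 5, B = C_{p^3} x C_p x C_p, A = C_{p^2} x C_p
cls, d, a, b = shield_class([2], [3, 1, 1], 5)
assert (d, a, b) == (25, 133, 100)
assert cls == 233
```

The asserted values match the hand computation carried out for these two groups: $d=25$, $a=133$, $b=100$ and nilpotency class $233$.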
\begin{Example} If (again for $p = 5$) $B$ is the group $C_{p^3} \times C_{p} \times C_{p}$ mentioned in the previous example, and if $A$ is, say, the group $C_{p^2} \times C_{p}$, then $d = 25$; $e(1) = 3$, $e(2) = e(3) = e(4) = 0$, $e(5) = 1$, $e(6) = \cdots = e(24) = 0$, $e(25) = 1$; $a = 133$; $b = 100$; $c = 1$ and $s(1) = 2$. Thus, the nilpotency class of the wreath product $A \Wr B = (C_{p^2} \times C_{p}) \Wr (C_{p^3} \times C_{p} \times C_{p})$ in this case is equal to $$ a \cdot 1 + (s(1) - 1)b = 133 + (2-1) \cdot 100 = 233. $$ \end{Example} \vskip3mm In order to prove Theorem~\ref{Theorem wr p-groups} we will apply Shield's formula to two auxiliary groups. To construct the first group denote by $\beta$ the cardinality of $B$ and by $A^\beta$ the Cartesian product of $\beta$ copies of $A$. For a given fixed positive integer $l$ and for an integer $t \ge l$ introduce the group $Z(l,t)$ as the direct product: \begin{equation} \label{EQUATION A Wr Z definition} Z(l,t) =\underbrace{C_{p^v} \times \cdots \times C_{p^v}}_l \,\, \times \,\, \underbrace{C_{p^{v-1}} \times \cdots \times C_{p^{v-1}}}_{t-l}. \end{equation} \begin{Lemma} \label{LEMMA bound for A beta Wr Z} Assume $A$, $B$ and $\beta$ are defined as above and $l$ is any positive integer. Also, assume the exponent of $\gamma_c(A)$ is $p^\alpha$ ($\alpha \not= 0$, since the class of $A$ is $c$). Then there is a positive integer $t_0$ such that for all $t > t_0$ the nilpotency class of the wreath product $A^\beta \Wr Z(l,t)$ is equal to \begin{equation} \label{EQUATION bound 1} c + c\,t(p-1)\left( \frac{1 - p^{v-1}}{1-p} + \frac{l}{t} \cdot p^{v-1} \right) + (\alpha-1)(p-1)p^{v-1}. \end{equation} \end{Lemma} \begin{proof} Denote $Z = Z(l,t)$ and notice that $A^\beta \Wr Z$ is nilpotent by Baumslag's theorem~\cite{Baumslag nilp wr}.
Let us compute the $K_{p}$-series for $Z$ and, to keep the notation simpler, not include in the formulas the underbraces of \eqref{EQUATION A Wr Z definition} with $l$ and $t-l$. For $i=1$ we have $K_{1,p}(Z) = Z$. For $i=2, \ldots, p$ we get: $$ K_{i,p}(Z) = Z^p =C_{p^{v-1}} \times \cdots \times C_{p^{v-1}} \,\, \times \,\,\, C_{p^{v-2}} \times \cdots \times C_{p^{v-2}}. $$ For $i=p^k+1, \ldots, p^{k+1}$ we have: $$ K_{i,p}(Z) = Z^{p^{k+1}} =C_{p^{v-(k+1)}} \times \cdots \times C_{p^{v-(k+1)}} \,\, \times \,\,\, C_{p^{v-(k+2)}} \times \cdots \times C_{p^{v-(k+2)}}. $$ In particular, for $i=p^{v-3}+1, \ldots, p^{v-2}$ we get: $$ K_{i,p}(Z) = Z^{p^{v-2}} =C_{p^2} \times \cdots \times C_{p^2} \,\, \times \,\,\, C_{p} \times \cdots \times C_{p}, $$ for $i=p^{v-2}+1, \ldots, p^{v-1}$ we get: $$ K_{i,p}(Z) = Z^{p^{v-1}} =C_{p} \times \cdots \times C_{p} \quad \text{(just $l$ factors)}, $$ and, finally, for $i=p^{v-1}+1$ the series terminates at $K_{i,p}(Z) = \1$. Therefore, $d = p^{v-1}$ and all the parameters $e(i)$ are zero except the following ones: $$ e(1) = e(p) = \cdots = e(p^{v-2}) = t \quad \text{and} \quad e(p^{v-1}) = l. $$ Thus: $$ a = 1 + (p-1)(t + p t + \cdots + p^{v-2} t + p^{v-1} l) = 1 + t(p-1)\left( \frac{1 - p^{v-1}}{1-p} + \frac{l}{t} \cdot p^{v-1} \right) $$ and $$ b = (p-1)d=(p-1)p^{v-1}. $$ To deal with the parameters $s(h)$, $h = 1, \ldots, c$, for $A^\beta$ notice that $\gamma_h(A^\beta)$ is a subgroup of the Cartesian power $\left(\gamma_h(A)\right)^\beta$ (they may not be equal if $\beta$ is infinite) and, on the other hand, $\gamma_h(A^\beta)$ contains elements whose orders equal the exponent of $\left(\gamma_h(A)\right)^\beta$. Therefore, the exponents of $\gamma_h(A^\beta)$ and $\gamma_h(A)$ are equal for all $h=1,\ldots,c$, and the parameters $s(h)$ are the same for both $\gamma_h(A^\beta)$ and $\gamma_h(A)$.
By Shield's formula the nilpotency class of $A^\beta \Wr Z$ is the maximum of the values \begin{equation} \label{EQUATION two summands} a \, h + (s(h)-1)b, \quad h = 1, \ldots, c. \end{equation} Although a larger $h$ gives a larger first summand $a h$, it may turn out that for some $h < c$ the exponent of $\gamma_h(A)$ is so much larger than the exponent of $\gamma_c(A)$ that for the given $t$ the highest value of \eqref{EQUATION two summands} is achieved not for $h = c$ (examples are easy to build). However, the second summand in \eqref{EQUATION two summands} can take just $c$ distinct values, none of which depends on $t$, whereas the first summand includes $a$, which grows monotonically and without bound as $t$ grows. Thus, even with a fixed $l$ there is a large enough $t_0$ such that the maximal value of \eqref{EQUATION two summands} is achieved with $h = c$ for all $t > t_0$. To finish the proof just recall that we denoted the exponent of $\gamma_c(A)$ by $p^\alpha$. \end{proof} \vskip2mm To introduce our second group we need a finitely generated (and, in fact, finite, since it lies in a locally finite variety) subgroup $\tilde A$ of $A$ such that the exponents of the terms $\gamma_h(\tilde A)$ and $\gamma_h(A)$ are equal for each $h = 1, \ldots, c$. Clearly, the nilpotency classes of $\tilde A$ and of $A$ will then be equal. Notice that each term $\gamma_h(A)$ contains an element $a_h$ whose order is equal to the exponent $p^{s(h)}$ of $\gamma_h(A)$. This is possible, since $A$ is a $p$-group of finite exponent. Since $a_h$ is an element of the verbal subgroup $\gamma_h(A)$ for the word $\gamma_h(x_1,\ldots, x_h)$, there are finitely many elements $a_{h,1}, \ldots, a_{h,r_h}\in A$ such that $a_h \in \gamma_h \left(\langle a_{h,1}, \ldots, a_{h,r_h}\rangle \right)$.
Collecting these finitely many generators for all $h$, we get the group $$ \tilde A = \langle a_{h,1}, \ldots, a_{h,r_h} \,|\, h = 1, \ldots, c \, \rangle, $$ which does have the property we need. Assume $\tilde A$ is a $z$-generator group and denote by $Y(z,t)$ the product: \begin{equation} \label{EQUATION A Wr Y definition} Y(z,t) =\underbrace{C_{p^v} \times \cdots \times C_{p^v}}_{t-z}. \end{equation} Then we have the following value for the nilpotency class of the wreath product $\tilde A \Wr Y(z,t)$: \begin{Lemma} \label{LEMMA bound for tilde A Wr Z} Assume $A$, $\tilde A$, $z$ and $\alpha$ are defined as above. Then there is a positive integer $t_1$ such that for all $t > t_1$ the nilpotency class of the wreath product $\tilde A \Wr Y(z,t)$ is equal to \begin{equation} \label{EQUATION bound 2} c + c(t-z)(p-1) \frac{1 - p^{v}}{1-p} + (\alpha-1)(p-1)p^{v-1}. \end{equation} \end{Lemma} \begin{proof} Denote $Y = Y(z,t)$ and notice that $\tilde A \Wr Y$ is nilpotent by Baumslag's theorem. Let us compute the $K_{p}$-series for $Y$ by the same routine procedure as in the previous proof. For $i=1$ we have $K_{1,p}(Y) = Y$. For $i=2, \ldots, p$ we have: $$ K_{i,p}(Y) = Y^p =C_{p^{v-1}} \times \cdots \times C_{p^{v-1}}. $$ For $i=p^k+1, \ldots, p^{k+1}$ we have: $$ K_{i,p}(Y) = Y^{p^{k+1}} =C_{p^{v-(k+1)}} \times \cdots \times C_{p^{v-(k+1)}}. $$ In particular, for $i=p^{v-3}+1, \ldots, p^{v-2}$ we get: $$ K_{i,p}(Y) = Y^{p^{v-2}} =C_{p^2} \times \cdots \times C_{p^2}, $$ for $i=p^{v-2}+1, \ldots, p^{v-1}$ we get: $$ K_{i,p}(Y) = Y^{p^{v-1}} =C_{p} \times \cdots \times C_{p}, $$ and, finally, for $i=p^{v-1}+1$ we get $K_{i,p}(Y) = \1$. Again, $d = p^{v-1}$ and the only non-zero parameters $e(i)$ are: $$ e(1) = e(p) = \cdots = e(p^{v-1}) = t-z. $$ Thus: $$ a = 1 + (p-1)\left( (t-z) + p (t-z) + \cdots + p^{v-1} (t-z) \right) = 1 + (t-z)(p-1) \frac{1 - p^{v}}{1-p} $$ and $$ b = (p-1)d=(p-1)p^{v-1}.
$$ Here the parameters $s(h)$, $h = 1, \ldots, c$, are the same for $\tilde A$ and $A$, so the nilpotency class of $\tilde A \Wr Y$ is, as in the previous proof, the maximum of the values \begin{equation} \label{EQUATION two summands second} a \, h + (s(h)-1)b, \quad h = 1, \ldots, c, \end{equation} where $p^{s(h)}$ is the exponent of $\gamma_h(A)$. This exponent for some $h < c$ may be so much larger than the exponent of $\gamma_c(A)$ that for the given $t$ the highest value of \eqref{EQUATION two summands second} is achieved not for $h = c$. However, there is a large enough $t_1$ such that the maximal value of \eqref{EQUATION two summands second} is achieved with $h = c$ for all $t > t_1$. Thus, we can assume $h=c$ in formula \eqref{EQUATION two summands second} with $s(c) = \alpha$. \end{proof} \begin{Remark} Clearly, it would be possible to compute the exact values of the limits $t_0$ and $t_1$ in Lemma~\ref{LEMMA bound for A beta Wr Z} and Lemma~\ref{LEMMA bound for tilde A Wr Z}. However, that would bring nothing but lengthy calculations, since the exact values of $t_0$ and $t_1$ are not relevant for the rest. \end{Remark} \vskip2mm Before we proceed to the proof of Theorem~\ref{Theorem wr p-groups} let us state two technical lemmas, in which we group a few facts that either are known in the literature or were proved by us earlier (see Proposition 22.11 and Proposition 22.13 in \cite{HannaNeumann}, Lemma 1.1 and Lemma 1.2 in \cite{AwrB_paper} and also \cite{ShmelkinOnCrossVarieties}). The proofs can be found in \cite{AwrB_paper}, and we state these lemmas here without arguments: \begin{Lemma} \label{X*WrY_belongs_var} For arbitrary classes $\X$ and $\Y$ of groups and for arbitrary groups $X^*$ and $Y$, where either $X^*\in {\sf Q}\X$, or $X^*\in {\sf S}\X$, or $X^*\in {\sf C}\X$, and where $Y\in \Y$, the group $X^* \Wr Y$ belongs to the variety $\var{\X \Wr \Y}$.
\end{Lemma} \begin{Lemma} \label{XWrY*_belongs_var} For arbitrary classes $\X$ and $\Y$ of groups and for arbitrary groups $X$ and $Y^*$, where $X\in\X$ and where $Y^*\in {\sf S}\Y$, the group $X \Wr Y^*$ belongs to the variety $\var{\X \Wr \Y}$. Moreover, if $\X$ is a class of abelian groups, then for each $Y^*\in {\sf Q}\Y$ the group $X \Wrr Y^*$ also belongs to $\var{\X \Wr \Y}$. \end{Lemma} Now we can prove the main statement: \begin{proof}[Proof of Theorem~\ref{Theorem wr p-groups}] That the condition of the theorem is sufficient is easy to deduce from the discriminating properties of $C_{p^v}^\infty$ (see \cite{B+3N} or Corollary 17.44 in \cite{HannaNeumann}). Since $B$ and $C_{p^v}^\infty$ generate the same variety $\A_{p^v}$, and since $C_{p^v}^\infty$ is isomorphic to a subgroup of $B$, by \cite[17.44]{HannaNeumann} $B$ also discriminates $\A_{p^v}$. It remains to apply Baumslag's theorem: since $B$ discriminates $\var{B} = \A_{p^v}$, the wreath product $A \Wr B$ discriminates and, thus, generates the product $\var{A} \A_{p^v}$ (see \cite{B+3N} or the statements 22.42, 22.43, 22.44 in \cite{HannaNeumann}). Turning to the proof of necessity of the condition, suppose the group $B$ contains no subgroup isomorphic to $C_{p^v}^\infty$. By the Pr\"ufer-Kulikov theorem~\cite{Robinson, Kargapolov Merzlyakov} $B$ is a direct product of some (possibly infinitely many) finite cyclic subgroups, the orders of which all are powers of $p$. Since $B$ is of exponent $p^v$, all these orders are bounded by $p^v$, and there is at least one factor isomorphic to $C_{p^v}$. By assumption, there are only finitely many, say $l$, such factors, and collecting them together we get $B = B_1 \times B_2$, where $$ \text{$B_1 = C_{p^v} \times \cdots \times C_{p^v}$ \quad ($l$ factors),} $$ and where $B_2$ is a direct product of cycles of orders not higher than $p^{v-1}$. Take an arbitrary $t$-generator group $G$ in the variety $\varr{A \Wr B}$.
By \cite[16.31]{HannaNeumann} $G$ is in the variety generated by all the $t$-generator subgroups of $A \Wr B$. Assume $T$ is one of such $t$-generator subgroups and denote by $H$ its intersection with the base subgroup $A^B$ of $A \Wr B$. Then $$ T / H \cong (T A^B) / A^B \le (A \Wr B)/ A^B \cong B $$ and, thus, $T$ is an extension of $H$ by an at most $t$-generator subgroup $B'$ of $B= B_1 \times B_2$. By the Kaloujnine-Krasner theorem \cite{KaloujnineKrasner} the group $T$ is embeddable into $H \Wr B'$ (see also~\cite{Ol'shanskii Kaluzhnin - Krasner}). The group $B'$ is a direct product of at most $t$ cycles, of which at most $l$ cycles are of order $p^v$, and the rest are of strictly lower orders. So $B'$ is isomorphic to a subgroup of $Z(l,t)$ for a suitable $t$. Since $H$ is a subgroup of $A^\beta$, we can apply Lemma~\ref{X*WrY_belongs_var} and Lemma~\ref{XWrY*_belongs_var} to get that $$ H \Wr B' \in \var{A^\beta \Wr Z(l,t)}. $$ According to Lemma~\ref{LEMMA bound for A beta Wr Z} the nilpotency classes of $H \Wr B'$ and of $T$ are bounded by formula \eqref{EQUATION bound 1} for all $t > t_0$. \vskip3mm Our proof will be complete if we exhibit a $t$-generator group in $\var{A} \var{B}$ with nilpotency class higher than \eqref{EQUATION bound 1}, at least for some $t$. The group $\tilde A \Wr Y(z,t)$ of Lemma~\ref{LEMMA bound for tilde A Wr Z} is $t$-generator, because $\tilde A$ is a $z$-generator group. For sufficiently large $t > t_1$ the nilpotency class of this group is given by formula \eqref{EQUATION bound 2}. To compare the values of \eqref{EQUATION bound 1} and \eqref{EQUATION bound 2} notice that they both consist of three summands, of which the first and the third are the same in both formulas. Let us compare the second summands in \eqref{EQUATION bound 1} and in \eqref{EQUATION bound 2}.
After we eliminate the common multiplier $p-1$ in both of them, we have: \begin{equation} \label{EQUATION compare 1} c\,t\left( \frac{1 - p^{v-1}}{1-p} + \frac{l}{t} \cdot p^{v-1} \right) = c\,t \, \frac{1 - p^{v-1}}{1-p} + c \, l \, p^{v-1} \end{equation} and \begin{equation*} \label{EQUATION compare 2} c(t-z) \frac{1 - p^{v}}{1-p} = c(t-z) \left( \frac{1 - p^{v-1}}{1-p} + p^{v-1} \right) \end{equation*} \begin{equation} \label{EQUATION compare 3} = c\,t \frac{1 - p^{v-1}}{1-p} + c\,t \, p^{v-1} - cz \frac{1 - p^{v-1}}{1-p} - c z \, p^{v-1}. \end{equation} The summand $c\,t \, \frac{1 - p^{v-1}}{1-p}$ is the same in \eqref{EQUATION compare 1} and \eqref{EQUATION compare 3}, so we can eliminate it as well, and just compare the remaining expressions: \begin{equation} \label{EQUATION compare 4} c \, l \, p^{v-1} \quad \text{and} \quad c\,t \, p^{v-1} - cz \frac{1 - p^{v-1}}{1-p} - c z \, p^{v-1}. \end{equation} Since $c, l$ and $v$ are fixed, the left-hand side of \eqref{EQUATION compare 4} is a positive constant. Since $z$ also is fixed, the second and third summands on the right-hand side of \eqref{EQUATION compare 4} are negative constants, which make the sum smaller. But, whatever these negative constants are, the remaining summand $c\,t \, p^{v-1}$ in \eqref{EQUATION compare 4} grows without bound as $t$ grows. So for sufficiently large $t^*$ the value of \eqref{EQUATION compare 3} is larger than that of \eqref{EQUATION compare 1} (if necessary, we may also take $t^* > t_0, \, t_1$). Thus, the nilpotency class of the $t$-generator group $\tilde A \Wr Y(z,t)$ from the variety $\var{A} \var{B}$ is higher than the maximum of the nilpotency classes of the $t$-generator groups in $\var{A \Wr B}$ for all $t > t^*$. So $\tilde A \Wr Y(z,t)$ does not belong to $\var{A \Wr B}$, and the proof of the theorem is completed.
\end{proof} It would not be hard to compute the exact value of $t^*$ in the proof above. We omit it to avoid routine calculations. \begin{Remark} The reader may compare the proofs in this note with the proofs in Section 4 in~\cite{AwrB_paper} or Section 6 in~\cite{Metabelien}, where we considered a similar problem for wreath products of {\it abelian} $p$-groups. There we used the specially defined functions $\lambda (A, B, t)$, and for bounds on nilpotency classes of wreath products of abelian groups we applied Liebeck's formula~\cite{Liebeck_Nilpotency_classes}. As one may notice, in \cite{AwrB_paper, Metabelien} we had a far simpler situation than the one discussed in Lemma~\ref{LEMMA bound for A beta Wr Z} and Lemma~\ref{LEMMA bound for tilde A Wr Z}. \end{Remark} Turning to examples of usage of Theorem~\ref{Theorem wr p-groups}, notice that Example 4.6 in~\cite{AwrB_paper} and Example 6.9 in~\cite{Metabelien} already are illustrations of Theorem~\ref{Theorem wr p-groups}, since they consider wreath products of abelian $p$-groups of finite exponents. As an example with a nilpotent, non-abelian passive group we may consider: \begin{Example} The dihedral group $D_4$ is of nilpotency class $2$. Its order is $8$ and its exponent is $4=2^2$. L.G.~Kov{\'a}cs in \cite{Kovacs dihedral} has computed the variety it generates: $\var {D_4}= \A_2^2 \cap \Ni_2$. That for any finite $2$-group $B$ the wreath product $D_4 \Wr B$ does not generate the product $\var{D_4} \var {B} = (\A_2^2 \cap \Ni_2 ) \var {B}$ is clear from the fact that $D_4 \Wrr B$ is a nilpotent group, whereas no product variety may be nilpotent (if both factors are non-trivial). Now take $B$ to be an infinite abelian group of exponent, say, $2^v$. By Theorem~\ref{Theorem wr p-groups} $$ \var{D_4 \Wr B} = (\A_2^2 \cap \Ni_2 ) \var {B} = (\A_2^2 \cap \Ni_2 ) \A_{2^v} $$ holds if and only if $B$ contains a subgroup isomorphic to $C_{2^v}^\infty$.
In particular, if $$ B = \underbrace{C_{2^v} \times \cdots \times C_{2^v}}_{n} \,\, \times \,\, \underbrace{C_{2^{v-1}} \times \cdots \times C_{2^{v-1}} \times \cdots}_{\infty}, $$ then $D_4 \Wr B$ does not generate $(\A_2^2 \cap \Ni_2 ) \A_{2^v}$. Moreover, it will not generate it even if we add to $B$ the ``large'' direct factor $$ \underbrace{C_{2^{v-2}} \times \cdots \times C_{2^{v-2}} \times \cdots}_{\infty} \,\, \times \cdots \times \,\, \underbrace{C_{2} \times \cdots \times C_{2} \times \cdots}_{\infty} . $$ \end{Example} The quaternion group $Q_8$ of order eight generates the same variety as $D_4$ (see~\cite{HannaNeumann}), and it also is nilpotent of class $2$. So a similar example can be constructed for this group as well. \vskip10mm
\section{Introduction} \label{} Impacts of network structure on dynamical processes have attracted special attention in recent years; examples include epidemic spreading on networks [1-3], the response of complex networks to stimuli [4,5] and the synchronization of dynamical systems on networks [6-9]. In this paper, we consider collective motions on complex networks, analogous to phonons in regular lattices. By means of the random matrix theory (RMT), we find that the nearest neighbor level spacing (NNLS) distributions for spectra of complex networks generated with the WS small-world, Erdos-Renyi and growing random network models can be described with the Brody distribution in a unified way. This unified description can be used as a new measurement to characterize complex networks. The results tell us that the topological structure of a complex network can induce a special kind of collective motion. Under environmental perturbations, the collective motion modes can transition between each other abruptly. This kind of sensitivity to outside perturbations is called collective chaos in this paper. It is found that the properties of the collective chaos are determined only by the structures of the networks. Without the aid of the dynamical model presented in references [4,5], we show for the first time that the dynamics on complex networks can be in collective order, soft chaotic or even hard chaotic states. \section{The Method} \label{} Wigner, Dyson, Mehta and others developed the Random Matrix Theory (RMT) to understand the energy levels of complex nuclei and other kinds of complex quantum systems [10-13]. In the recent literature, the spectral density function and time series analysis methods are used to capture properties of complex networks [14-19]. One of the most important concepts in RMT is the nearest neighbor level spacing (NNLS) distribution.
A general picture emerging from experiments and theories is that if the classical motion of a dynamical system is regular, the NNLS distribution of the corresponding quantum system behaves according to a Poisson distribution. If the corresponding classical motion is chaotic, the NNLS distribution behaves in accordance with the Wigner--Dyson ensembles. This is the content of the famous Bohigas conjecture [20,21]. Hence, the NNLS distribution of a quantum system can tell us the dynamical properties of the corresponding classical system. Consider an undirected network of $N$ coupled identical oscillators. The Hamiltonian reads, \begin{equation} \label{eq1} H = \sum\limits_{n = 1}^N {h_0 (x_n ,p_n )} + \frac{1}{2} \cdot \sum\limits_{m \ne n}^N {A_{mn} \cdot V(x_m ,x_n )} , \end{equation} \noindent where $h_0 (x_n ,p_n )$ is the Hamiltonian of the $n$'th oscillator, $V(x_m ,x_n )$ the coupling potential between the $m$'th and the $n$'th oscillators, and $A$ the adjacency matrix of the network. The Hamiltonian of the corresponding quantum system can be represented as, \begin{equation} \label{eq2} \hat {H} = \sum\limits_{n = 1}^N {\hat {h}_0 (x_n ,p_n )} + \frac{1}{2} \cdot \sum\limits_{m \ne n}^N {A_{mn} \cdot \hat {V}(x_m ,x_n )} \quad . \end{equation} Assuming the site energy of each oscillator is $\varepsilon _0 $ and the eigenfunction is $\varphi _0 $, we have $\hat {h}_0 (x_n ,p_n )\varphi _0 (x_n ) = \varepsilon _0 \varphi _0 (x_n )$.
The matrix elements of $\hat {H}$ read, \begin{equation} \label{eq3} \begin{array}{l} H_{mn} \\ = \left\langle {\varphi _0 (x_m )} \right|\hat {h}_0 (x_m ,p_m )\left| {\varphi _0 (x_n )} \right\rangle + A_{mn} \cdot \left\langle {\varphi _0 (x_m )} \right|\hat {V}(x_m ,x_n )\left| {\varphi _0 (x_n )} \right\rangle \\ = \varepsilon _0 \cdot \delta _{mn} + A_{mn} \cdot V_{mn} \\ \end{array} \end{equation} The pattern of the spectrum of $\hat {H}$ does not depend on the values of $\varepsilon _0 $ and $V_{mn} $. Assigning $\varepsilon _0 = 0$ and $V_{mn} = 1$, we have $H = A$. In this way, the spectrum of the adjacency matrix $A$ can be used to calculate the NNLS distribution of the quantum system. To make our discussion as self-contained as possible, we briefly review the procedure to obtain the NNLS distribution from the spectrum of $A$, denoted $\left\{ {E_i \left| {i = 1,2,3, \cdots ,N} \right.} \right\}$, where $N$ is the total number of energy levels. To ensure that the distances between the energy levels are expressed in units of the local mean energy level spacing, we should first map the energy levels $E_i $ to new variables called ``unfolded energy levels'' $\xi _i $. This procedure is called unfolding, which is generally a non-trivial task [22]. Define the cumulative density function as, \begin{equation} \label{eq4} G(E) \equiv N\int_{ - \infty }^E {g(s)ds} , \end{equation} \noindent where $g(s)$ is the density function of the initial energy level spectrum. We have, \begin{equation} \label{eq5} G(E) = k \quad \text{for} \quad E_k \le E < E_{k+1}.
\end{equation} Preprocess the spectrum, $\left\{ {E_k \left| {k = 1,2,3, \cdots ,N} \right.} \right\}$, and the corresponding cumulative density function, $\left\{ {G(E_k )\left| {k = 1,2,3, \cdots N} \right.} \right\}$, so that for each of them the mean is set to zero and the variance equals $1$, i.e., \begin{equation} \label{eq6} \lambda _k = \frac{E_k - \frac{1}{N} \sum\nolimits_{m = 1}^N {E_m } }{\left[ {\sum\nolimits_{j = 1}^N {\left( {E_j - \frac{1}{N}\sum\nolimits_{m = 1}^N {E_m } } \right)^2}}\right]^{1/2}}, \end{equation} \begin{equation} \label{eq7} F(\lambda _k ) = \frac{G(E_k ) - \frac{1}{N}\sum\nolimits_{m = 1}^N {G(E_m )} }{\left[ {\sum\nolimits_{j = 1}^N {\left( {G(E_j ) - \frac{1}{N}\sum\nolimits_{m = 1}^N {G(E_m )} } \right)^2}} \right]^{1/2}}. \end{equation} Dividing the cumulative function $F(\lambda )$ into two components, i.e., the smooth term $F_{av} (\lambda )$ and the fluctuation term $F_f (\lambda )$, the unfolded energy levels can be obtained as, \begin{equation} \label{eq8} \xi _k = F_{av} (\lambda _k ). \end{equation} Because we do not have enough information on the cumulative density function at present, a polynomial is employed to describe the relation between $\xi _k $ and $\lambda _k $, as follows, \begin{equation} \label{eq9} \xi _k = \sum\limits_{l = 0}^L {c_l \cdot (\lambda _k )^l} = F_{av} (\lambda _k ). \end{equation} It is found that any value of $L > 9$ leads to a considerably good fit. To guarantee that the fitting results are exact enough, we set $L = 17$.
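The unfolding recipe of Eqs.~(6)-(9) can be condensed into a short NumPy sketch. This is our own helper, not code from the paper; \texttt{numpy.polynomial.Polynomial.fit} is used instead of a raw least-squares fit because it rescales the domain internally and keeps the degree-17 fit numerically stable.

```python
import numpy as np

def unfold(energies, L=17):
    """Unfold a spectrum: standardize the levels and the cumulative
    staircase G(E_k) = k to zero mean and unit sum of squares
    (Eqs. 6-7), fit the smooth part F_av with a degree-L polynomial
    (Eq. 9), and return the unfolded levels xi_k = F_av(lambda_k)."""
    E = np.sort(np.asarray(energies, dtype=float))
    G = np.arange(1.0, len(E) + 1.0)
    lam = (E - E.mean()) / np.sqrt(np.sum((E - E.mean()) ** 2))
    F = (G - G.mean()) / np.sqrt(np.sum((G - G.mean()) ** 2))
    F_av = np.polynomial.Polynomial.fit(lam, F, deg=L)
    return F_av(lam)
```

The residuals $F(\lambda_k) - F_{av}(\lambda_k)$ are the fluctuation term $F_f$ discarded by the unfolding.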
Defining the nearest neighbor level spacing (NNLS) as, \begin{equation} \label{eq10} \left\{ {s_i = w \cdot \frac{(\xi _{i + 1} - \xi _i )}{\sigma _\xi }} \right\}\left| {i = 1,2,3, \cdots (N - 1)} \right., \end{equation} \noindent the Brody distribution of the NNLS reads, \begin{equation} \label{eq11} P(s) = \frac{\beta }{\eta }\left( {\frac{s}{\eta }} \right)^{\beta - 1}\exp \left[ { - \left( {\frac{s}{\eta }} \right)^\beta } \right], \end{equation} \noindent which is also called the Weibull distribution in the research field of life data analysis [23]. In the definition of the NNLS, $w$ is a factor that keeps the values of the NNLS in a conventional range so as to get a reliable fitting result, and $\sigma _\xi = \sqrt {\frac{\sum\nolimits_{i = 1}^{N - 1} {\xi _i ^2} }{N - 1}} $. Introducing the function $Q(s) = \int_0^s {P(t)dt} $, some trivial computation leads to [23], \begin{equation} \label{eq12} \ln R(s) \equiv \ln \left[ {\ln \left( {\frac{1}{1 - Q(s)}} \right)} \right] = \beta \ln s - \beta \ln \eta , \end{equation} \noindent based upon which we can get reliable values of the parameters $\beta $ and $\eta $. To obtain the function $Q(s)$, we should divide the interval where the NNLS is distributed into many bins. The size of a bin can be chosen as a fraction of the square root of the variance of the NNLS, which reads, $\varepsilon = \frac{1}{R}\sqrt {\frac{\sum\nolimits_{i = 1}^{N - 1} {s_i ^2} }{N - 1}} $. If $R$ is unreasonably small, $Q(s)$ cannot capture the exact features of the actual probability distribution function (PDF), while a much larger $R$ will induce strong fluctuations. The value of the parameter $R$ is set to 20 in this paper, because the fitting results are stable in a considerably wide range around this value. A Brody distribution reveals that the corresponding classical complex system is in a soft chaotic state [10]. For the two extreme cases of $\beta = 1$ and $\beta = 2$, the Brody distribution reduces to the Poisson and the Wigner distributions, respectively.
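The linearized relation (12) turns the estimation of $(\beta, \eta)$ into a straight-line fit. Here is a hedged sketch of that recipe, with the bin size $\varepsilon$ and $R=20$ chosen as in the text; the function name \texttt{fit\_brody} and the use of an empirical CDF for $Q(s)$ are our own choices.

```python
import numpy as np

def fit_brody(spacings, R=20):
    """Estimate (beta, eta) from the linearized relation of Eq. (12):
    ln ln(1/(1 - Q(s))) = beta*ln(s) - beta*ln(eta),
    with the empirical CDF Q evaluated on bin edges of size
    eps = sqrt(mean(s^2))/R, as in the text."""
    s = np.sort(np.asarray(spacings, dtype=float))
    eps = np.sqrt(np.mean(s ** 2)) / R
    edges = np.arange(eps, s.max() + eps, eps)
    Q = np.searchsorted(s, edges, side="right") / len(s)
    keep = (Q > 0.0) & (Q < 1.0)       # ln ln(1/(1-Q)) must be defined
    x = np.log(edges[keep])
    y = np.log(np.log(1.0 / (1.0 - Q[keep])))
    beta, intercept = np.polyfit(x, y, 1)
    eta = np.exp(-intercept / beta)    # intercept = -beta * ln(eta)
    return float(beta), float(eta)
```

On synthetic Weibull-distributed spacings the fit recovers the shape and scale parameters to within a few percent.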
The corresponding classical complex systems are then in the ordered and the hard chaotic states, respectively. The quantum system considered in this paper can initially be in a quantum state $\left| {E_n } \right\rangle $, corresponding to the eigenvalue $E_n $. Under a weak environmental perturbation, the state will display completely different behaviors [24-26]. If the system is in an ordered state, the transition probability from the initial state to a new state $\left| {E_m } \right\rangle $ decreases rapidly with the increase of $\left| {E_n - E_m } \right|$, and transitions occur mainly between the initial state and its neighboring states. If the system is in a chaotic state, transitions between all the states in the chaotic regime to which the initial state belongs can occur with almost the same probabilities. In the classical dynamics, the corresponding states are the collective motion modes, just like phonons in regular lattices. Under perturbations the state of a chaotic system can transition abruptly between the collective modes in the same chaotic regime, while the state of an ordered system can transition only between the initial mode and its neighboring states. Consequently, the chaotic state of the $N$ identical oscillators is a kind of collective behavior rather than an individual property of each oscillator. It is called collective chaos in this paper. The above discussion tells us that this collective chaos depends only on the structure of the undirected network. \section{Results} \label{} Given $N$ nodes, an Erdos-Renyi network can be constructed just by connecting each pair of nodes with a probability $p_{ER}$ [27,28]. It is demonstrated that there exists a critical point $p_c = \frac{1}{N}$. For $p_{ER} < p_c $, the adjacency matrix can be reduced into many small sub-matrices, the couplings between the energy levels will be very weak and the NNLS will obey a Poisson form.
For $p_{ER} \ge p_c $, the fraction of the nodes forming the largest sub-graph grows rapidly. The couplings between the energy levels become stronger and stronger, and the NNLS should obey a Brody or even a Wigner form. Simulation results presented in Fig.1 to Fig.3 are consistent with this theoretical prediction. \begin{figure} \scalebox{1}[1]{\includegraphics{final1.eps}} \caption{\label{fig:epsart} The cumulative density function of the spectra of four Erdos-Renyi networks. The circles are the actual values, while the solid lines are fitting results with a 17th-order polynomial function. } \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final2.eps}} \caption{\label{fig:epsart} Determination of the values of the parameters $(\beta,\eta)$ for the four Erdos-Renyi networks by means of the relation presented in Eq.(12). In the main region of interest the NNLS distributions obey a Brody form almost exactly. For $p_{ER}<p_c=\frac{1}{N}$, we have $\beta=0.931 \sim 1.0$, i.e., the distribution obeys a Poisson form. For $p_{ER}>p_c$, the distributions obey a Brody form very near the Wigner one. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final3.eps}} \caption{\label{fig:epsart} The NNLS distributions for the four Erdos-Renyi networks. In the main regions of interest, the theoretical results fit the actual ones very well. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final4.eps}} \caption{\label{fig:epsart} The cumulative density function of the spectra of four selected WS small-world networks.
The circles are the actual values, while the solid lines are fitting results with a 17th-order polynomial function.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final5.eps}} \caption{\label{fig:epsart} Determination of the values of the parameters $(\beta,\eta)$ for the four selected WS small-world networks by means of the relation presented in Eq.(12). In the main region of interest the NNLS distributions obey a Brody form almost exactly. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final6.eps}} \caption{\label{fig:epsart} The NNLS distributions for the four selected WS small-world networks. In the main regions of interest, the theoretical results fit the actual ones very well. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final7.eps}} \caption{\label{fig:epsart} The parameters $(\beta,\eta)$ for all the WS small-world networks constructed in this paper. In the short region $p_r \in [0.0,0.1]$, the value of $\beta$ increases from $1.0$ to $1.75$, i.e., the NNLS distribution evolves from a Poisson to a near-Wigner form. In the other region $p_r \in [0.1,1]$, the networks behave almost the same. Comparison tells us that the Erdos-Renyi networks with $p_{ER}=\frac{2J}{N}(J \ge 1)$ are similar to the WS small-world network with $p_r=1$. The errors of the parameters are less than $0.01$.} \end{figure} Secondly, we consider the one-dimensional lattice small-world model designed by Watts and Strogatz (the WS small-world model) [29]. Take a one-dimensional lattice of nodes with periodic boundary conditions, and join each node with its $k$ right-handed nearest neighbors. Go through each edge in turn and, with probability $p_r $, rewire one end of this edge to a new node chosen randomly. During the rewiring procedure double edges and self-edges are forbidden.
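The construction just described can be sketched as follows; this is a minimal illustration with our own naming and seeding conventions, not the authors' code. The eigenvalues of the returned adjacency matrix (e.g. from \texttt{np.linalg.eigvalsh}) are what the unfolding procedure of the Method section consumes.

```python
import numpy as np

def ws_adjacency(N, k, p_r, seed=0):
    """WS small-world graph: ring of N nodes, each joined to its k
    right-hand neighbors; each edge is then rewired with probability
    p_r, forbidding double edges and self-edges, so the total number
    of edges N*k is preserved."""
    rng = np.random.default_rng(seed)
    edges = {tuple(sorted((i, (i + j) % N)))
             for i in range(N) for j in range(1, k + 1)}
    for u, v in sorted(edges):          # go through each edge in turn
        if rng.random() < p_r:
            edges.discard((u, v))
            while True:                 # redraw until the edge is legal
                w = int(rng.integers(N))
                e = tuple(sorted((u, w)))
                if w != u and e not in edges:
                    edges.add(e)
                    break
    A = np.zeros((N, N), dtype=int)
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A
```

For $k < N/2$ the initial ring contributes exactly $Nk$ distinct edges, and each rewiring step removes one edge and adds one, so the edge count is conserved.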
Numerical simulations by Watts and Strogatz show that this rewiring process allows the small-world model to interpolate between a regular lattice and a random graph with the constraint that the minimum degree of each node is fixed [29]. The parameter $k$ is chosen to be 2, and $N$ is 3000. Fig.4 to Fig.6 present some typical results for different values of the rewiring probability. In the main region of interest, the NNLS distribution can be described by a Brody distribution almost exactly. Fig.7 presents the values of the parameters $\beta$ and $\eta$ versus the rewiring probability $p_r$. In a short region $p_r \in [0.0,0.1]$, the value of $\beta$ increases from $1$ to $1.75$, i.e., the NNLS distribution evolves from a Poisson to a near-Wigner form. In the other region $p_r \in [0.1,1.0]$ the networks behave almost identically. Comparison tells us that the Erdos-Renyi networks with $p_{ER} = \frac{2J}{N}(J \ge 1)$ are similar to the WS small-world network with $p_r = 1$. \begin{figure} \scalebox{1}[1]{\includegraphics{final8.eps}} \caption{\label{fig:epsart} The cumulative density function of the spectra of four selected GRN networks. The circles are the actual values, while the solid lines are fits with a 17th-order polynomial.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final9.eps}} \caption{\label{fig:epsart} Determination of the parameters $(\beta,\eta)$ for the four selected GRN networks by means of the relation presented in Eq.~(12). In the main region of interest the NNLS distributions obey a Brody form almost exactly. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final10.eps}} \caption{\label{fig:epsart} The NNLS distributions for the four selected GRN networks. In the main regions of interest, the theoretical results fit the actual ones very well.
The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \begin{figure} \scalebox{1}[1]{\includegraphics{final11.eps}} \caption{\label{fig:epsart} The parameters $(\beta,\eta)$ for all the GRN networks constructed in this paper. In a large region $\theta \in [0.0,0.8]$, the value of $\beta$ oscillates around $0.68$, i.e., the NNLS distributions deviate significantly from the Poisson form in a way opposite to that of WS small-world networks. In the other region $\theta \in [0.8,1]$, the value of $\beta$ decreases rapidly to $\sim 0.5$. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} The third model considered is the growing random network (GRN) model [30]. Starting from several connected seeds, at each step a new node is added and a link to one of the earlier nodes is created. The connection kernel $A_k$, defined as the probability that a newly introduced node links to a preexisting node with degree $k$, determines the structure of this network. A group of GRN networks determined by a special kind of kernel, $A_k \propto k^\theta (0 \le \theta \le 1)$, is considered in the present paper. For this kind of network, the degree distribution decreases as a stretched exponential in $k$. Setting $\theta = 1$ we obtain a scale-free network. Fig.8 to Fig.10 present some typical results for GRN networks. Fig.11 shows that in a wide range of $0 \le \theta \le 0.8$, the value of the parameter $\beta$ oscillates basically around $0.68$, i.e., the NNLS distributions deviate significantly from the Poisson form in a way opposite to that of WS small-world networks. In the other region $\theta \in [0.8,1.0]$ the value of $\beta$ decreases rapidly to $\sim 0.50$. The values of the parameter $\eta$ are also presented. Fig.12 shows the relation between $\beta$ and $\eta$. Each point represents a complex network. The results for the three kinds of networks are all illustrated.
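The GRN growth rule described above can be sketched as follows (a minimal sketch; the function name and the choice of a triangle of three connected seed nodes are our assumptions, since the seed is not specified):

```python
import random

def grow_grn(N, theta, rng):
    """Grow a GRN network: each new node attaches to one earlier node,
    chosen with probability proportional to k**theta, where k is the
    current degree of the candidate node (kernel A_k of the text)."""
    edges = [(0, 1), (1, 2), (0, 2)]   # assumed seed: a triangle
    deg = [2, 2, 2]
    for new in range(3, N):
        weights = [d ** theta for d in deg]
        r = rng.random() * sum(weights)
        acc, target = 0.0, len(deg) - 1
        for j, w in enumerate(weights):
            acc += w
            if r < acc:
                target = j
                break
        edges.append((new, target))
        deg[target] += 1
        deg.append(1)
    return edges, deg
```

With $\theta = 0$ attachment is uniform, while $\theta = 1$ reduces to linear preferential attachment, the scale-free limit mentioned above.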
The WS small-world networks and the GRN networks are clearly separated by the Poisson form, i.e., $\beta = 1$. The Erdos-Renyi networks with $p_{ER} \le p_c = \textstyle{1 \over N}$ obey a near-Poisson distribution, while those with $p_{ER} > p_c = \textstyle{1 \over N}$ are similar to the almost completely random WS small-world networks ($p_r \sim 1.0$). \begin{figure} \scalebox{1}[1]{\includegraphics{final12.eps}} \caption{\label{fig:epsart} The relation between the two parameters $(\beta,\eta)$. Each point corresponds to a complex network. The results for the three kinds of networks are all illustrated. The WS small-world networks and the GRN networks are clearly separated by the Poisson form, i.e., $\beta=1$. The Erdos-Renyi networks with $p_{ER} <p_c=\frac{1}{N}$ obey a near-Poisson distribution, while those with $p_{ER} \ge p_c$ are similar to the almost completely random WS small-world networks ($p_r \sim 1$). The position of a network in this scheme may provide some useful information. The errors of the parameters $(\beta,\eta)$ are less than $0.01$.} \end{figure} \section{Summary} \label{} In summary, based upon RMT we investigate the NNLS distributions for the ER, the WS small-world and the GRN networks. The Brody form can describe all these distributions in a unified way. The NNLS distributions of the quantum systems of the network of $N$ coupled identical oscillators tell us that the corresponding classical dynamics on the Erdos-Renyi networks with $p_{ER} < p_c = \frac{1}{N}$ are in a state of collective order, while those on the Erdos-Renyi networks with $p_{ER} > p_c = \frac{1}{N}$ are in a state of collective chaos. On WS small-world networks, the classical dynamics evolves from collective order to collective chaos rapidly in the region $p_r \in [0.0,0.1]$, and then remains chaotic up to $p_r = 1.0$.
For the GRN model, in contrast to the WS small-world networks, the classical dynamics are in special states that deviate significantly from order in the opposite way. These dynamical characteristics are determined only by the structures of the considered complex networks. In a very recent paper [31], the authors point out that for some biological networks the NNLS distributions obey the Wigner form. The dynamics on these networks should be in a state of collective chaos, and the removal of nodes may change this dynamical characteristic from collective chaos to collective order. Therefore, constructing a mini network with selected key nodes should be considered carefully in discussing the collective dynamics on a complex network. Without the aid of the simplified model of dynamics presented in [3,4] we obtain the dynamical characteristics on complex networks. The NNLS distribution can capture directly the relation between the structure of a complex network and the dynamics on it. It should be pointed out that the collective chaos induced by the structures of complex networks is completely different from the individual chaotic states of the oscillators on the networks. For a network with a regular structure, the classical dynamical processes on it should display collective order, even if the coupled identical oscillators on the nodes may be in chaotic states. On the contrary, for a network with a complex structure (e.g., a WS small-world structure), the classical dynamical processes on it should display collective chaos, even if the coupled identical oscillators on the nodes may be in ordered states such as harmonic oscillations. Each network can be characterized by a pair of values $(\beta,\eta)$. The position of a complex network in the $\beta$ versus $\eta$ scheme may provide useful information for the classification of real-world complex networks.
\section{Acknowledgements} \label{} This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 70471033, 10472116 and 70271070. One of the authors (H. Yang) would like to thank Prof. Y. Zhuo and Prof. J. Gu of the China Institute of Atomic Energy and Prof. S. Yan of Beijing Normal University for stimulating discussions.
\section{\label{Introduction}Introduction} The demand for efficient optical detectors is constantly growing due to rapid developments in telecommunication, light imaging, detection and ranging (LIDAR) systems and other military and research fields \cite{Tosi, CAMPBELL2008221, bertone2007avalanche, Mitra2006, Williams2017, Nada2020, Pasquinelli2020}. Photodetectors are increasingly being incorporated in photonic integrated circuits for Internet of Things and 5G communications \cite{li20185g,Chowdhury5G,Liu_III_V}. These applications require higher sensitivity than traditional \textit{p-i-n} photodiodes can provide \cite{apd_recent}. Avalanche photodiodes (APDs) are often deployed instead due to their higher sensitivity, enabled by their intrinsic gain mechanism. However, the stochastic nature of the impact ionization process in APDs adds an excess noise factor $F(M)= \langle m^2\rangle/\langle m\rangle^2 = kM+(1-k)(2-1/M)$ to the shot noise current, $\langle i_{shot}^{2}\rangle =2qIM^{2} F(M) \Delta f$ \cite{mcintyre1966multiplication,Teich1986,Teich1990}. Here, $q$ is the electron charge, $I$ is the total photocurrent plus dark current, $m$ is the avalanche gain per primary electron, $M = \langle m\rangle$ is the average multiplication gain and $\Delta f$ is the bandwidth. A low value of $k$, the ratio of the hole ionization coefficient $\beta$ to the electron ionization coefficient $\alpha$, is desirable for designing low-noise n-type APDs: for pure electron injection, a hole ionization rate significantly lower than the electron ionization rate leads to reduced shot noise. If impact ionization is initiated by pure hole injection, $k$ in the equation is replaced by $1/k$.
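As a quick illustration of how strongly $k$ controls the noise penalty, McIntyre's excess noise factor above can be evaluated directly (a minimal sketch; the function name is ours):

```python
def excess_noise(M, k):
    """McIntyre excess noise factor F(M) = k*M + (1 - k)*(2 - 1/M)
    for electron-initiated multiplication (for hole injection use 1/k)."""
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)

# At a gain of M = 20, a low-k material pays a much smaller noise penalty:
F_low, F_high = excess_noise(20.0, 0.05), excess_noise(20.0, 0.45)
```

For $M=20$ this gives $F \approx 2.9$ at $k=0.05$ versus $F \approx 10.1$ at $k=0.45$, which is why materials with small $k$ are sought for the multiplication region.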
Recently, several III-V digital alloys, i.e., short-period superlattices with binary components stacked alternately in a periodic manner, were found to exhibit extremely low noise currents and a high gain-bandwidth product in the short-infrared wavelength spectrum \cite{InAlAs_expt,AlInAsSb_expt,AlAsSb_expt}. Characterization of InAlAs, AlInAsSb and AlAsSb digital alloy APDs has shown very small values of $k$, \cite{InAlAs_expt,AlInAsSb_expt,AlAsSb_expt} whereas other digital alloys, like InGaAs and AlGaAs, demonstrate much higher $k$ values \cite{InGaAs_expt,AlGaAs_expt}. Based on previous full-band Monte Carlo simulations, \cite{AlInAsSb_MC, InAlAs_MC,InAlAs_MC2} the low $k$ has been attributed to the presence of superlattice minigaps inside the valence band of the material bandstructure, along with an enhanced effective mass arising from the lower bandwidth available to the holes. Such valence band minigaps often co-exist with similar (but not symmetrical) minigaps in the conduction band. However, electrons in the conduction band typically have very low effective mass, which allows quantum tunneling and enhanced phonon scattering to circumvent minigaps in the conduction band. Furthermore, certain digital alloys showing minigaps do not exhibit low noise, and the reason behind that has not yet been addressed. Our postulate is that a combination of valence band minigaps, a large separation between the light-hole and split-off bands, and a correspondingly enhanced hole effective mass tends to limit the hole ionization coefficient. A comprehensive analysis is clearly necessary to understand carrier impact ionization in these materials. In this paper, we employ a fully atomistic, Environment-Dependent Tight Binding (EDTB) model, \cite{TanETB} calibrated to Density Functional Theory (DFT) bandstructures as well as wavefunctions, to compute the bandstructures of several III-V digital alloys.
Using a full three-dimensional quantum kinetic Non-Equilibrium Green's Function (NEGF) formalism with the EDTB Hamiltonian as input, we compute the ballistic transmission across these digital alloys, which accounts for intraband quantum tunneling across minigaps and the light-hole/split-off band offset. Additionally, a full-band Boltzmann transport solver is employed to determine the energy-resolved carrier occupation probability under the influence of an electric field, in order to study the effect of optical phonon scattering in these short-period superlattices. The calculations are performed using computational resources at the University of Virginia and XSEDE \cite{xsede}. Using these transport formalisms, we elucidate the impact of minigap sizes, light-hole/split-off band offset and effective masses on carrier transport in the valence band. Our simulations demonstrate that the squashing of subbands into tighter bandwidths, such as arises from minigap formation, or the engineering of a large light-hole/split-off band offset, leads to the suppression of transport of one carrier type, by resisting quantum tunneling or phonon-assisted thermal jumps. For InAlAs, the improved performance is primarily due to the minigaps generated by the digital alloy periodicity and the corresponding enhanced effective mass. For AlInAsSb and AlAsSb, the improvement arises from a combination of minigaps, large effective mass and LH/SO offset. The LH/SO offsets in these two alloys arise from the strong spin-orbit coupling due to the Sb atoms, a characteristic which is also observed in their random alloy counterparts that exhibit low noise. A quantitative comparison of the measured gains of the various alloys is presented in the last two columns of Table IV. The unique superlattice structure of the digital alloys opens the possibility of designing new low-noise alloy combinations for detection in other frequency ranges.
Ideally, it is easier and cheaper to first study computationally the suitability of the alloys for achieving low noise before actually fabricating them. For this purpose, we need a set of design criteria for judging alloy performance using theoretically calculated parameters. Based on our simulations, we propose five simple inequalities that can be used to judge the suitability of digital alloys for use in low-noise APDs. We judge the aptness of five existing digital alloys: InAlAs, InGaAs, AlGaAs, AlInAsSb and AlAsSb. We observe that the inequalities provide a good benchmark for gauging the applicability of digital alloys for use in low-noise APDs. \section{\label{Simulation}Simulation Method} \subsection{Environment Dependent Tight Binding and Band Unfolding for atomistic description}\label{bandstructure_sec} \begin{figure}[b] \includegraphics[width=0.45\textwidth]{structure.pdf} \caption{\label{structure} (a) Digital alloy structure (b) typical structure of an APD } \end{figure} In order to understand the influence of minigap filtering in digital alloy structures, an accurate band structure over the entire Brillouin zone is required. The periodic structure of the InAlAs digital alloy is shown in Fig.~\ref{structure}(a) and Fig.~\ref{structure}(b) shows the typical structure of a \textit{p-i-n} APD. We have developed an Environment-Dependent Tight Binding (EDTB) model to accurately calculate the band structure of alloys \cite{TanETB,TanSi}. Traditional tight binding models are calibrated directly to bulk bandstructures near their high symmetry points and not to the underlying chemical orbital basis sets \cite{TanSi}. These models are not easily transferable to significantly strained surfaces and interfaces where the environment has a significant impact on the material chemistry. In other words, the tight binding parameters work directly with the eigenvalues (E-k) and not with the full eigenvectors.
While the crystallographic point group symmetry is enforced by the angular transformations of the orbitals, the radial components of the Bloch wavefunctions, which determine bonding and tunneling properties, are left uncalibrated. Previously, in order to incorporate the accuracy of the radial components, an Extended H\"uckel theory \cite{huckel_cnt,huckel_silicon} was used that incorporated explicit Wannier basis sets created from non-orthogonal atomic orbitals fitted to Density Functional Theory for the bulk Hamiltonian. The fitted basis sets were transferable to other environments by simply recomputing the orbital matrix elements to which the bonding terms were assumed to be proportional. As an alternative, the EDTB model employs conventional orthogonal Wannier-like basis sets. The tight binding parameters of this model are generated by fitting to both hybrid functional (HSE06) \cite{heyd2003hybrid} band structures and orbital-resolved wave functions. Our tight binding model can incorporate strain- and interface-induced changes in the environment by tracking changes in the neighboring atomic coordinates, bond lengths and bond angles. The onsite elements of each atom have contributions from all its neighboring atoms. The fitting targets include unstrained and strained bulk III-V materials as well as select alloys. We have shown in the past that our tight binding model is capable of matching the hybrid functional band structures for bulk, strained layers and superlattices \cite{TanETB,AhmedTFET}. The band structures of the alloys contain a massive number of spaghetti-like bands due to the large supercell of the system, which translates to a small Brillouin zone with closely separated minibands and minigaps. In order to transform the complicated band structure into something tractable, we employ the technique of band unfolding \cite{tan_unfolding,boykin_unfolding1,boykin_unfolding2}.
This method involves projecting the eigenvalues back to the extended Brillouin zone of the primitive unit cell of either component, with weights set by decomposing individual eigenfunctions into multiple Bloch wavefunctions with different wave vectors in the Brillouin zone of the original primitive unit cell. The supercell eigenvector $\mid \vec{K}m \rangle$ is expressible in terms of a linear combination of primitive eigenvectors $\mid \vec{k_i}n \rangle$. The eigenstate $E_p$ of an atom with wave vector $k$ can be expressed as a linear combination of atomic-orbital wavefunctions. The supercell electron wavefunction $| \psi_{m\vec{K}}^{SC} \rangle$ can be written as a linear combination of electron wavefunctions in the primitive cell as \cite{InAlAs_expt} \begin{equation} | \psi_{m\vec{K}}^{SC} \rangle = \sum_n a\left(\vec{k_i},n;\vec{K},m\right) |\psi_{n\vec{k_i}}^{PC} \rangle \end{equation} \begin{equation} \vec{k_i} \in \{ \vec{\tilde{k_i}}\}\nonumber \end{equation} where $| \psi_{n\vec{k_i}}^{PC} \rangle$ is the electron wavefunction for the wave vector $\vec{k_i}$ in the $n$th band of the primitive cell. Here, $\vec{\bm{K}}$ and $\vec{\bm{k}}$ denote the reciprocal vectors in the supercell and the primitive cell, respectively. The folding vector $\vec{\bm{G}}_{\vec{\bm{k}}\rightarrow \vec{\bm{K}}}$ contains the projection relationship and is expressed as \begin{equation} \vec{\bm{K}}=\vec{\bm{k}}-\vec{\bm{G}}_{\vec{\bm{k}}\rightarrow \vec{\bm{K}}} ~. \end{equation} The projection of the supercell wavefunction $| \psi_{m\vec{K}}^{SC} \rangle$ onto the primitive cell wavefunction $| \psi_{n\vec{k_i}}^{PC} \rangle$ is given as \begin{equation} P_{m \vec{K}}=\sum_n \mid \langle \psi_{m \vec{K}}^{SC} | \psi_{n \vec{k_i}}^{PC}\rangle \mid^2 ~. \end{equation} Plotting these projection coefficients gives a cleaner picture of the band evolution from the individual primitive components to the superlattice bands.
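As a toy illustration of the projection weight $P_{m\vec{K}}$ above (our own minimal example, not the EDTB machinery): fold a uniform 1D tight-binding chain into a two-site supercell; each supercell eigenstate then carries unit weight at exactly one unfolded primitive wave vector.

```python
import numpy as np

def unfold_weights(K, t=1.0, a=1.0):
    """Two-site supercell of a uniform chain (hopping t, site spacing a).
    Returns the supercell energies at K and, for each band, the spectral
    weights at the unfolded primitive wave vectors k in {K, K + pi/a}."""
    phase = np.exp(-2j * K * a)
    H = np.array([[0.0, t * (1 + phase)],
                  [t * (1 + np.conj(phase)), 0.0]])
    E, V = np.linalg.eigh(H)
    weights = []
    for m in range(2):
        row = []
        for k in (K, K + np.pi / a):
            # primitive-cell Bloch state sampled on the two supercell sites
            bloch = np.exp(1j * k * np.array([0.0, a])) / np.sqrt(2.0)
            row.append(abs(np.vdot(bloch, V[:, m])) ** 2)
        weights.append(row)
    return E, weights
```

For a uniform chain the unfolding is exact: each band recovers the primitive dispersion $E(k)=2t\cos(ka)$ with weight $1$ at a single $k$ and $0$ at the other, whereas a genuine superlattice potential would spread the weight over several $k$ points.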
\subsection{Non-Equilibrium Green's Function Method for coherent transmission} \label{NEGF} Under the influence of a large electric field it is possible for carriers to move across minigaps by means of quantum tunneling. Such transport involves a sum of complex transmissions, limited by wavefunction symmetry, between several minibands. We make use of the Non-Equilibrium Green's Function formalism to compute the ballistic transmission and study the influence of minigaps on quantum tunneling in digital alloys. The digital alloys we are interested in studying are translationally invariant in the plane perpendicular to the growth direction and have finite non-periodic hopping in the transport (growth) direction. Thus, we need a device Hamiltonian $H$ whose basis is Fourier transformed into $k$-space in the perpendicular $x-y$ plane but is in real space in the $z$ growth direction, i.e., $H\left(r_z,k_x,k_y\right)$. Conventionally, this can be done with a DFT Hamiltonian in real space, $H\left(r_z,r_x,r_y\right)$, which is Fourier transformed along the transverse axes to get $H\left(r_z,k_x,k_y\right)$. However, DFT Hamiltonians are complex and sometimes do not match the bulk material bandstructure. Thus, it is simpler to utilize a tight binding Hamiltonian whose $E-\vec{k}$s are calibrated to the bulk bandstructure, and inverse transform along the growth direction. The matrix elements of the 3D EDTB Hamiltonian are given in the basis of symmetrically orthogonalized atomic orbitals $\left|nb\textbf{R}\right>$. Here $\textbf{R}$ denotes the position of the atom, $n$ is the orbital type ($s,p,d$ or $s^*$) and $b$ denotes the type of atom (cation or anion). The Hamiltonian can also be represented in the $k$-space basis $\left|nb\textbf{k}\right>$ by Fourier transforming the elements of the real-space Hamiltonian. The 3D Hamiltonian is then converted into a quasi-1D Hamiltonian \cite{stovneng1993multiband}.
The Hamiltonian elements can be represented in the basis $\left|nbj\textbf{k}_{||}\right>$ with ``parallel'' momentum $\textbf{k}_{||}=(k_x,k_y)$ and ``perpendicular'' position $x_j=ja_L/4$ as parameters. For a zinc-blende crystal, the distance between nearest-neighbour planes is one-fourth the lattice constant $a_L$. The 3D Hamiltonian is converted to the quasi-1D one by means of a partial Fourier transform \cite{stovneng1993multiband,stickler2013theory}: \begin{equation} \left|nbj\textbf{k}_{||}\right>=L_{BZ}^{-1/2}\int dk_z e^{-ik_z ja_L/4} \left|nb\textbf{k}\right> ~. \end{equation} Here $L_{BZ} =8\pi/a_L$ is the length of the one-dimensional (1D) Brillouin zone over which the $k_z$ integral is taken. The quasi-1D Hamiltonian is position dependent in the growth direction. Thus, we are able to utilize the accurate bandstructure capability of the EDTB. In the presence of contacts, the time-independent open boundary Schr\"odinger equation reads \begin{equation} (EI-H-\Sigma_1-\Sigma_2)\Psi = S_1+S_2 \end{equation} where $E$ represents energy, $I$ denotes the identity matrix and $\Sigma_{1,2}$ are the self-energies for the left and right contact respectively, describing electron outflow, while $S_{1,2}$ are the inflow wavefunctions. The solution to this equation is $\Psi = G(S_1+S_2)$, where the Green's function \cite{datta2000nanoscale} \begin{equation} G(E)=\left[EI-H-\Sigma_1-\Sigma_2\right]^{-1} ~. \end{equation} Here $H$ includes the applied potential, added to the onsite 1D elements. Assuming the contacts are held in local equilibria with bias-separated quasi-Fermi levels $\mu_{1,2}$, we can write the bilinear thermal average $\langle S_iS^\dagger_i \rangle = \Gamma_if(E-\mu_i)$, where $f$ is the Fermi-Dirac distribution and $\Gamma_{1,2} = i(\Sigma_{1,2}-\Sigma_{1,2}^\dagger)$ are the broadening matrices of the two contacts.
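To make the construction of $G(E)$, $\Sigma_{1,2}$ and $\Gamma_{1,2}$ concrete, here is a minimal single-band 1D sketch of the coherent transmission $T(E)=Tr[\Gamma_1 G \Gamma_2 G^\dagger]$, using the textbook analytic surface Green's function of a semi-infinite chain for the leads rather than the quasi-1D EDTB Hamiltonian of the text (all names and parameters are ours):

```python
import numpy as np

def transmission(E, eps, t=1.0, eta=1e-8):
    """Coherent transmission through a 1D nearest-neighbour chain with
    on-site energies `eps` and hopping t, attached to two identical
    semi-infinite leads (same hopping t)."""
    N = len(eps)
    z = E + 1j * eta
    # retarded surface Green's function of a semi-infinite chain:
    # t^2 g^2 - z g + 1 = 0, branch chosen so that Im(g) <= 0
    sq = np.sqrt(z * z - 4.0 * t * t)
    g = (z - sq) / (2.0 * t * t)
    if g.imag > 0.0:
        g = (z + sq) / (2.0 * t * t)
    Sigma1 = np.zeros((N, N), complex)
    Sigma2 = np.zeros((N, N), complex)
    Sigma1[0, 0] = Sigma2[-1, -1] = t * t * g   # contact self-energies
    H = np.diag(np.asarray(eps, dtype=complex))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = t
    G = np.linalg.inv(z * np.eye(N) - H - Sigma1 - Sigma2)
    Gamma1 = 1j * (Sigma1 - Sigma1.conj().T)    # broadening matrices
    Gamma2 = 1j * (Sigma2 - Sigma2.conj().T)
    return float(np.real(np.trace(Gamma1 @ G @ Gamma2 @ G.conj().T)))
```

For a perfect chain ($\epsilon_i = 0$) this returns $T \approx 1$ for energies inside the band $|E| < 2t$, while an on-site barrier, the 1D analogue of a minigap-inducing potential, suppresses the transmission.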
The equal time current $I = q(d/dt + d/dt^\prime)Tr\langle \Psi^\dagger(t)\Psi(t^\prime)\rangle|_{t=t^\prime}$ then takes the Landauer form $I = (q/h)\int dET(f_1-f_2)$, where the coherent transmission between the two contacts is set by the Fisher-Lee formula \begin{equation} T(E)=Tr\left[\Gamma_1 G \Gamma_2 G^\dagger \right] \end{equation} where $Tr$ represents the trace operator. The energy resolved net current density from layer $m$ to layer $m+1$ is expressed as \cite{stovneng1993multiband}: \begin{eqnarray} J_{m,m+1}(E) &=& -\frac{iq}{h} \int \frac{d^2\textbf{k}_{||}}{(2\pi)^2} Tr[G^{n,p}_{m+1,m}H_{m,m+1} \\ & & -G^{n,p}_{m,m+1}H_{m+1,m}] \nonumber \end{eqnarray} where $G^{n} = \langle \psi^\dagger\psi\rangle$ and $G^p = \langle \psi\psi^\dagger\rangle$ represent the electron ($n$) and hole ($p$) densities respectively, and $H_{m,m+1}$ is the tight binding hopping element between layers $m$ and $m+1$ along the transport/growth direction. \subsection{Boltzmann Transport Model for incoherent scattering} \label{BTE} The NEGF approach is particularly suited to ballistic transport where coherent quantum effects dominate. Incoherent scattering requires a self-consistent Born approximation, which is computationally quite involved. We need a practical treatment of scattering. Under an external electric field, the carrier distributions in digital alloys no longer follow a local Fermi distribution, but redistribute over real and momentum space. To understand the carrier distribution under an electric field in digital alloys, we employed the multi-band Boltzmann equation.
\begin{eqnarray} \vec{v}\cdot \nabla_{\textbf{r}} f_n + \vec{F}\cdot \nabla_{\textbf{k}} f_n & =& \sum_{m, \vec{p}'} S\left(\vec{p}',\vec{p}\right)f_m\left(\vec{p}'\right) \left[1-f_n\left(\vec{p}\right)\right] \\ & & - \sum_{m,\vec{p}'} S\left(\vec{p},\vec{p}'\right)f_n\left(\vec{p}\right) \left[1-f_m\left(\vec{p}'\right)\right]\nonumber \end{eqnarray} Here, $f = f(\textbf{r},\textbf{k})$ is the carrier distribution, $n$ and $m$ are band indices, $\vec{p}$ and $\vec{p}'$ are the momenta of the carriers, and $S\left(\vec{p}',\vec{p}\right)$ is the scattering rate. The left hand side of this equation alone describes the ballistic trajectory in the phase space of carriers under an electric field. The right hand side of the equation corresponds to the scattering processes including intra-band and inter-band scattering. In a homogeneous system where the electric field is constant, the distribution function is independent of position, $\nabla_{\textbf{r}} f = 0$, and the equation is reduced to \begin{eqnarray}\label{bte_kspace} \vec{F}\cdot \nabla_{\textbf{k}} f_n & =& \sum_{m,\vec{p}'} S\left(\vec{p}',\vec{p}\right)f_m\left(\vec{p}'\right) \left[1-f_n\left(\vec{p}\right)\right] \\ & & - \sum_{m,\vec{p}'} S\left(\vec{p},\vec{p}'\right)f_n\left(\vec{p}\right) \left[1-f_m\left(\vec{p}'\right)\right]\nonumber ~. \end{eqnarray} For APDs, it is critical to consider optical phonon scattering, which is the dominant process besides tunneling that allows carriers to overcome the minigaps arising in the band structures of digital alloys. The optical phonon has a non-trivial energy of $\hbar \omega_{opt}$ that can be absorbed or emitted by carriers. The scattering rate $S\left(\vec{p}',\vec{p}\right)$ has the form set by Fermi's Golden Rule \begin{equation} S\left(\vec{p}',\vec{p}\right) = \frac{2\pi}{\hbar} \left|H_{\vec{p}, \vec{p}'}\right|^2 \delta_{\vec{p}',\vec{p}\pm \vec{\beta}} \delta \left(E(\vec{p}')-E(\vec{p})\pm \hbar \omega_{opt}\right) ~.
\end{equation} $E(\vec{p})$ and $E(\vec{p}')$ are the band structures of the digital alloy calculated by the tight binding model. $H_{\vec{p}, \vec{p}'}$ can be calculated by evaluating the electron-phonon coupling matrix elements explicitly. In this work, we extract a constant effective scattering strength $H_{\vec{p}, \vec{p}'}$ from the experimental mobility $\mu$. The scattering lifetime $\tau$, which is $1/S\left(\vec{p}',\vec{p}\right)$, can be extracted from the mobility using $\mu=q\tau/m^*$. Due to the lack of experimental mobilities of the digital alloys, we considered the average of the binary constituent room-temperature mobilities for extracting the lifetime. A simple average is used since the binary constituents in the periods of most of the digital alloys considered here are equally divided. In using room-temperature values, the underlying assumption is that the dominant scattering mechanism here is phonon scattering due to the large phonon population. Ionized impurity scattering is considered to be much lower due to digital alloys having clean interfaces \cite{AlInAsSb_expt}. It is then possible to extract $H_{\vec{p}, \vec{p}'}$ from the scattering lifetime. To get the equilibrium solution, we solve Eq.~\ref{bte_kspace} self-consistently, starting from an initial distribution $f = \delta_{\vec{k},0}$. A detailed model of carrier transport in APDs also requires a NEGF treatment of impact ionization self-energies and a Blanter-B\"uttiker approach to extract shot noise, but we leave that to future work. Our focus here is on conductive near-ballistic transport, and the role of quantum tunneling and perturbative phonon scattering in circumventing this. \section{\label{results}Results and Discussion} \begin{figure}[t] \includegraphics[width=0.48\textwidth]{illustration2.pdf} \caption{\label{fig:impact_ionization_mechnism} Impact ionization process in a normal (random alloy) APD and a superlattice APD.
In both APDs, it is easier for electrons to gain energy and reach the impact ionization threshold (c). In normal APDs (a), holes find it harder than electrons to gain high energy because of thermalization. The hole energy is reduced by thermalization due to various scattering processes, as shown in (d). In superlattice APDs (b), the existence of minigaps makes it harder for holes to reach higher energies. The minigaps act as barriers that prevent holes from moving to the lower valence bands. In the plots, the y-axis $E$ is the total energy (kinetic + potential), meaning that in between inelastic scattering events the particles travel horizontally. } \end{figure} There are three common ways to achieve low noise and a high gain-bandwidth product - selecting a semiconductor with favorable impact ionization coefficients, scaling the multiplication region to exploit the non-local aspect of impact ionization, and impact ionization engineering using appropriately designed heterojunctions \cite{apd_recent}. Typically, the lower hole impact ionization coefficient in semiconductors is due to stronger scattering in the valence bands, as depicted in Fig.~\ref{fig:impact_ionization_mechnism}(a). Previously, the lowest noise with favorable impact ionization characteristics was realized with Si in the visible and near-infrared range, \cite{lee1964ionization,conradi1972distribution,grant1973electron,kaneda1976model} and InAs \cite{marshall2008electron,marshall2011high,sun2014record,sun2012high,ker2012inas} and HgCdTe \cite{beck2001mwir,beck2004hgcdte} in the mid-infrared spectrum. In comparison, InGaAs/InAlAs \cite{InGaAs_random,InAlAs_random} random alloy APDs, which are the highest performance telecommunications APDs, exhibit significantly higher noise than Si, HgCdTe or InAs. In the recent past, digital alloy InAlAs APDs have demonstrated lower noise compared to their random alloy counterpart \cite{InAlAs_expt}.
This came as a surprise, since low excess noise requires the suppression of one carrier type - the opposite of the ballistic flow expected in an ordered structure. Initially, the low value of $k$ in InAlAs was attributed to the presence of minigaps \cite{InAlAs_MC2}. However, minigaps were also observed in InGaAs digital alloy APDs, which have higher excess noise \cite{ahmed2018apd,InGaAs_expt}. A clearer understanding of the minigap physics was therefore needed, and hence a comprehensive study was required. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{f_vs_gain.pdf} \caption{\label{fig:expt k values} Experimentally measured excess noise vs. multiplication gain of InGaAs, AlGaAs, InAlAs, AlInAsSb and AlAsSb digital alloys \cite{InAlAs_expt,InGaAs_expt,AlGaAs_expt,AlInAsSb_expt,AlAsSb_expt}. The dotted lines for the corresponding $k$ values are plotted using McIntyre's formula \cite{mcintyre1966multiplication}.} \end{figure} Our recent results suggest that well defined minigaps introduced in the valence band of digital alloys suppress the density of high energy holes and thereby greatly reduce the impact ionization, as shown in Fig.~\ref{fig:impact_ionization_mechnism}(b). In a regular low-noise electron-injected APD, the electron ionization coefficient is much higher than the hole ionization coefficient. Thus, electrons can easily climb to higher kinetic energies in the conduction band, as depicted in Fig.~\ref{fig:impact_ionization_mechnism}(c), and participate in the impact ionization process by gaining the impact ionization threshold energy. On the other hand, holes lose energy through various inelastic scattering processes (Fig.~\ref{fig:impact_ionization_mechnism}(d)), collectively known as thermalization. Thermalization prevents holes from reaching their secondary impact ionization threshold.
In superlattice APDs, minigaps provide an additional filtering mechanism that prevents holes from reaching the threshold energy required to initiate secondary impact ionization. \begin{figure}[b] \includegraphics[width=0.49\textwidth]{material_unitcells.pdf} \caption{\label{fig:material_structure} Lattice structures of (a) InGaAs, (b) AlGaAs, (c) InAlAs, (d) AlInAsSb and (e) AlAsSb digital alloys considered in this paper.} \end{figure} The effect of minigaps is shown in Fig.~\ref{fig:impact_ionization_mechnism}(e). However, not all digital alloy APDs exhibit low noise. The excess noise $F(M)$ vs. multiplication gain characteristics of experimental InGaAs, AlGaAs, InAlAs, AlInAsSb and AlAsSb digital alloy APDs are shown in Fig.~\ref{fig:expt k values} \cite{InAlAs_expt,InGaAs_expt,AlGaAs_expt,AlInAsSb_expt,AlAsSb_expt}. InGaAs APDs have the highest excess noise while AlAsSb has the lowest. The dotted lines represent the theoretical $F(M)$ vs. $M$ calculated using the well-known McIntyre formula \cite{mcintyre1966multiplication}, introduced in the first paragraph of this paper. In order to understand the underlying physics in these digital alloys, an in-depth analysis of the material bandstructure and its effect on the carrier transport is required. We calculate the atomistic DFT-calibrated EDTB bandstructure of these materials and unfold their bands using the techniques described in Section \ref{bandstructure_sec}. In Fig.~\ref{fig:material_structure}, we show the periods of the different digital alloys considered: (a) 6ML InGaAs, (b) 6ML AlGaAs, (c) 6ML InAlAs, (d) 10ML Al$_{0.7}$In$_{0.3}$AsSb and (e) 5ML AlAsSb. Here, 6ML InGaAs includes 3ML InAs and 3ML GaAs, 6ML AlGaAs has 3ML AlAs and 3ML GaAs, and 6ML InAlAs has 3ML InAs and 3ML AlAs. 10ML Al$_{0.7}$In$_{0.3}$AsSb consists of 3ML AlSb, 1ML AlAs, 3ML AlAs and 3ML InAs in its period. AlAsSb has 4ML AlSb and 1ML AlAs.
The unfolded bandstructures of these alloys are shown in Fig. \ref{fig:bandstructure}. We observe that minigaps exist in at least one of the valence bands (heavy-hole, light-hole or split-off) for all the material combinations. The InAlAs valence band structure is magnified in Fig. \ref{fig:InAlAs_zoomed}. The minigap between the LH and SO bands is indicated in the figure. Additionally, the large separation between the LH and SO bands at the $\Gamma$ point is highlighted. \begin{figure*}[t] \includegraphics[width=1\textwidth]{bandstructure_all_3.pdf} \caption{\label{fig:bandstructure} Unfolded bandstructure of (a) 6ML InGaAs (b) 6ML AlGaAs (c) 6ML InAlAs (d) 10ML AlInAsSb (e) 5ML AlAsSb. The minigaps of the InGaAs, InAlAs, AlInAsSb and AlAsSb real bandstructures are shown in the insets.} \end{figure*} \begin{figure}[b] \includegraphics[width=0.47\textwidth]{InAlAs_bandstructure_zoomed.pdf} \caption{\label{fig:InAlAs_zoomed} A magnified picture of the InAlAs valence band shows the minigap closest to the valence band edge. The split between the LH and SO bands at the $\Gamma$ point is also highlighted. } \end{figure} \begin{table}[b] \centering \begin{center} \begin{tabular}{ |m{1.3cm}|m{0.85cm} m{0.85cm} m{0.85cm} m{0.85cm} m{0.85cm} m{0.85cm} m{0.85cm}| } \hline \textbf{Material} & $E_G$ (eV) & $\Delta E_{b}$ (eV) & $\Delta E_{m}$ (eV) & \textbf{HH} $m^*$ & \textbf{LH} $m^*$ & \textbf{SO} $m^*$ & $\Delta E_{LS}$ (eV)\\ \hline InGaAs & 0.63 & 0.34 & 0.03 & 0.31 & 0.13 & 0.045 & 0.35 \\ \hline AlGaAs &1.94 & 1.03 & 0.34 & 0.45 & 0.31 & 0.12 & 0.33 \\ \hline InAlAs &1.23 & 0.30 & 0.12 & 0.5 & 0.4 & 0.1 & 0.31\\ \hline AlInAsSb & 1.19 & 0.33 & 0.06 & 0.42 & 0.38 & 0.08 & 0.48\\ \hline AlAsSb & 1.6 & 0.56 & 0.1 & 0.45 & 0.3 & 0.13 & 0.54 \\ \hline \end{tabular} \end{center} \caption{Material parameters of the different digital alloys simulated in this paper.} \label{table:1} \end{table} The role of the minigaps on hole localization is not identical across different alloys.
In fact, the presence of minigaps in the material bandstructure is not sufficient to realize low noise in APDs. Taking a closer look at the bandstructures, we observe that the positions in energy of the minigaps with respect to the valence band edge differ from one material to another. Additionally, the minigap sizes of the different alloys vary in magnitude. A complementary effect of the minigap size is the flattening of the energy bands, \textit{i.e.}, a large minigap size results in flatter bands around the gap. This in turn results in an increased effective mass which tends to inhibit carrier transport. Table \ref{table:1} lists the energy location of the minigap with respect to the valence band edge $\Delta E_{b}$, the minigap size $\Delta E_m$, the heavy-hole (HH), light-hole (LH) and split-off (SO) band effective masses and the energy difference between the LH and SO bands $\Delta E_{LS}$ at the $\Gamma$ point for the digital alloys studied. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{minigap_trans.pdf} \caption{\label{fig:minigap_current_density} Small minigaps in the valence band, as shown in (a), create a small tunneling barrier which can be overcome by holes with low mass. The spectral current density for InGaAs, which has a small minigap and small LH effective mass, is shown in (b). The current spectrum for InGaAs in the Fermi window is continuous. The creation of a large tunneling barrier by a larger minigap is shown in (c). This barrier prevents hole transmission. InAlAs has a larger minigap and LH $m^{*}$. Regions of low current density are observed within the Fermi window in the InAlAs spectral current density in (d). The large minigap in InAlAs results in reduced transmission as shown in the $T(E)$ vs. $E$ plot of (e).
The simulations for (b), (d) and (e) were conducted under a bias of $V=0.25V$.} \end{figure} \begin{figure}[t] \includegraphics[width=0.47\textwidth]{NEGF_transmission_all_2.pdf} \caption{\label{fig:negf_transmission_all} Transmission $T(E)$ vs. energy $E$ for all the digital alloys at $V=0.25V$ in (a) and $V=0.5V$ in (b). A $21\times 21$ grid for transverse wavevectors is used. } \end{figure} We can see in the table that there are significant variations in minigap size and position between different materials. At first glance, there seems to be no direct correlation between these variations and the excess noise, prompting additional transport analysis. Under a high electric field, a carrier must gain at least the threshold energy, $E_{TH}$, in order to impact ionize. Typically, $E_{TH}$ is assumed to be approximately 1.5 times the material bandgap, $E_G$. Thus, in the presence of minigaps, electrons/holes must bypass these gaps by some transport mechanism in order to gain energy equivalent to $E_{TH}$. The two major such transport mechanisms are quantum mechanical tunneling and optical phonon scattering. Our transport study must incorporate these two mechanisms to understand the effectiveness of minigaps in reducing the APD excess noise. We employ the NEGF formalism described in Section \ref{NEGF} to compute the ballistic transmission in the valence band as a function of energy, $T(E)$, dominated by tunneling processes. The effect of different minigap sizes is highlighted in Fig. \ref{fig:minigap_current_density}. For our simulation, we set the quasi-Fermi level of the left contact at $qV$ below the valence band edge and the quasi-Fermi level of the right contact another $qV$ below that. This is done in order to only observe the intraband tunneling inside the valence band, which is responsible for overcoming minigaps under ballistic conditions. In Fig.
\ref{fig:minigap_current_density} (a), we demonstrate that a small minigap in the valence band creates a small tunneling barrier for the holes. A hole with a small enough effective mass will be able to tunnel across this barrier and render it ineffective. That is the case for InGaAs, which has an LH effective mass of $0.13 m_{0}$ and $\Delta E_{m}=0.03eV$. The spectral current density for InGaAs under a bias $V=0.25V$ is shown in Fig. \ref{fig:minigap_current_density} (b). We observe that the current spectrum in the valence band is continuous in the Fermi energy window and there is no drop in transmission due to the minigap. For a large minigap, the holes encounter a larger tunneling barrier, as shown in Fig. \ref{fig:minigap_current_density} (c), preventing them from gaining the threshold energy $E_{TH}$ for secondary impact ionization. This case is operational in InAlAs digital alloys, as shown in the spectral density plot in Fig. \ref{fig:minigap_current_density} (d). InAlAs has a minigap size of $0.12eV$ and an LH effective mass of $0.4m_{0}$. Within the Fermi window we see that there are regions with extremely low current due to the low tunneling probability across the minigap. This is further demonstrated by the $T(E)$ vs. $E$ plot in Fig. \ref{fig:minigap_current_density} (e). Here, it is observed that there are regions of low transmission for InAlAs whereas the InGaAs transmission is continuous. This signifies that the minigaps in the InAlAs valence band are large enough to prevent holes from gaining kinetic energy, resulting in a low hole ionization coefficient. In order to investigate the role of minigaps in the remaining digital alloys, we look at the transmission vs. energy plots for all the alloys. The $T(E)$ vs. $E$ characteristics for the five digital alloys are shown in Fig. \ref{fig:negf_transmission_all} for two bias conditions, (a) $V=0.25V$ and (b) $V=0.5V$.
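The full EDTB NEGF calculation is far beyond a short snippet, but the qualitative effect of a minigap on ballistic transmission can be illustrated with a single-band, one-dimensional tight-binding NEGF toy model: a raised-onsite barrier region (standing in for a minigap) embedded in an otherwise uniform chain suppresses $T(E)$ at energies below the barrier top. All parameters here are illustrative assumptions, not fitted to any of the alloys.

```python
import numpy as np

def transmission(E, onsite, t=1.0):
    """Ballistic T(E) of a 1D tight-binding chain (hopping t, site energies
    `onsite`) coupled to two semi-infinite ideal leads, via the Caroli
    formula T = Gamma_L * Gamma_R * |G_{1N}|^2."""
    N = len(onsite)
    # Retarded surface self-energy of a semi-infinite 1D lead (valid |E| < 2t).
    sigma = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / 2.0
    H = np.diag(np.asarray(onsite, dtype=complex))
    H += np.diag([t] * (N - 1), 1) + np.diag([t] * (N - 1), -1)
    H[0, 0] += sigma          # attach left lead
    H[-1, -1] += sigma        # attach right lead
    G = np.linalg.inv(E * np.eye(N) - H)
    gamma = np.sqrt(max(4 * t**2 - E**2, 0.0))   # broadening i(Sigma - Sigma^dag)
    return float(gamma * gamma * abs(G[0, -1])**2)

if __name__ == "__main__":
    clear = [0.0] * 20                            # uniform chain
    barrier = [0.0] * 6 + [3.0] * 8 + [0.0] * 6   # "minigap" barrier region
    print(transmission(0.0, clear))    # ~1: ideal band, perfect transmission
    print(transmission(0.0, barrier))  # many orders of magnitude smaller
```

The uniform chain transmits perfectly inside the band, while the barrier region forces evanescent decay, mirroring the low-transmission windows seen for InAlAs.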
We use a $21\times21$ grid for the transverse wavevectors ($k_{x},k_{y}$) within the first Brillouin zone. For this simulation, the structure length for InGaAs, AlGaAs, InAlAs and AlAsSb is taken to be two periods. For AlInAsSb we consider one period length. This allows us to keep the structure lengths as close as possible. We consider channel lengths of $3.48nm$ for InGaAs, $3.42nm$ for AlGaAs, $3.54nm$ for InAlAs, $3.06nm$ for AlInAsSb and $3.08nm$ for AlAsSb. The channel sizes chosen are small compared to actual device lengths in order to keep the computation tractable. For both bias conditions in Fig. \ref{fig:negf_transmission_all} we see that there are energy ranges for InAlAs, AlInAsSb and AlAsSb in which the transmission probability drops drastically. This low tunneling probability can be attributed to two factors. The first factor is the presence of a sizeable minigap in all directions in the material bandstructure. The other contributing factor is the separation between the LH and SO bands. This factor is partly responsible for the low transmission regions in AlInAsSb and AlAsSb, whose minigap sizes (from Table \ref{table:1}) are smaller than that of InAlAs but which also demonstrate lower excess noise. InGaAs and AlGaAs do not show any large drop in transmission for either bias. This characteristic implies that either the minigap size is too small to affect the carrier transport, as in InGaAs, or there is no minigap at all, as in AlGaAs. \begin{figure}[b] \includegraphics[width=0.47\textwidth]{30ML_jdensity.eps} \caption{\label{fig:current_spec_all} Energy resolved current spectral density in the valence band for (a) InGaAs, (b) AlGaAs, (c) InAlAs, (d) AlInAsSb and (e) AlAsSb. The bias for the simulation is set to $V=0.25V$ and the total period length is 30 monolayers.} \end{figure} For further confirmation of these observations, we compute the spectral current density for the case of a constant total period length for all the structures.
The period size of each unit cell stays the same but the number of unit cells is increased to make the total period length the same for all alloys. We consider the case with a total period of 30MLs and a voltage bias of $0.25V$. The current spectral density plots for the five digital alloys using a $15\times15$ transverse wavevector grid are shown in Fig.~\ref{fig:current_spec_all}. A smaller number of grid points is used here to save computation time. In the figure, a very small minigap is observed for InGaAs within the Fermi window and a continuous spectrum is seen for AlGaAs. Regions of low transmission/current are observed for InAlAs, AlInAsSb and AlAsSb. These observations are consistent with our previous calculations. We can thus infer that, at least under fully coherent transport including tunneling, holes will not be able to gain sufficient kinetic energy to achieve impact ionization. \begin{figure}[b] \includegraphics[width=0.45\textwidth]{vb_probability.pdf} \caption{\label{fig:bte} Carrier occupation probability vs. energy for the valence band in the presence of optical phonon scattering, computed using the BTE simulation. InAlAs, AlInAsSb and AlAsSb have lower occupation probability compared to InGaAs and AlGaAs. This prevents holes from gaining the ionization threshold energy.} \end{figure} \begin{table}[b] \begin{center} \begin{tabular}{ |c| c c| } \hline \textbf{Material} & $\mu _h$ ($ cm^{2}/Vs $) & $E_{opt}$ ($meV$)\\ \hline InAs & 500 & 30 \\ \hline AlAs & 200 & 50 \\ \hline GaAs & 400 & 35\\ \hline AlSb & 400 & 42\\ \hline \end{tabular} \end{center} \caption{Hole mobilities and optical phonon energies of the binary compounds that form the digital alloys \cite{piprek2013semiconductor,shur1996handbook}.} \label{table:2} \end{table} Besides tunneling, it is also possible for carriers to jump across energy gaps through inelastic scattering. In APDs, the dominant scattering mechanism is intervalley optical phonon scattering.
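Before turning to the BTE results, a quick consistency check on this pathway: averaging the binary optical phonon energies of Table~\ref{table:2} (a simple equal-weight average, taken here as an assumption for the alloy value) and comparing against the minigap sizes of Table~\ref{table:1} shows that only in InGaAs can a single optical phonon bridge the minigap.

```python
# Equal-weight averages of the binary optical phonon energies (Table 2, eV),
# compared against the minigap sizes Delta E_m (Table 1). A hole can hop
# across the minigap in one phonon event only if E_opt exceeds Delta E_m.
E_OPT_BINARY = {"InAs": 0.030, "AlAs": 0.050, "GaAs": 0.035, "AlSb": 0.042}
ALLOYS = {  # binary constituents and minigap size Delta E_m (eV)
    "InGaAs":   (("InAs", "GaAs"), 0.03),
    "InAlAs":   (("InAs", "AlAs"), 0.12),
    "AlInAsSb": (("AlSb", "AlAs", "InAs"), 0.06),
    "AlAsSb":   (("AlSb", "AlAs"), 0.10),
}

def avg_e_opt(parts):
    return sum(E_OPT_BINARY[p] for p in parts) / len(parts)

for name, (parts, dE_m) in ALLOYS.items():
    print(f"{name}: E_opt ~ {avg_e_opt(parts)*1e3:.1f} meV, "
          f"phonon bridges minigap: {avg_e_opt(parts) > dE_m}")
```

Only InGaAs prints True, consistent with its continuous current spectrum and high $k$; AlGaAs is omitted since no effective minigap is assigned to it in the discussion below.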
Using the BTE model described in Section \ref{BTE}, we study the effect of phonon scattering in digital alloys. The carrier mobilities and optical phonon energies of the binary constituents of the alloys used in the BTE simulations are listed in Table~\ref{table:2}. An effective scattering strength $H_{\vec{p}, \vec{p}'}$ is obtained from the mobility values as described in Section \ref{BTE}. For our BTE simulations, we use the heavy-hole effective masses outlined in Table~\ref{table:1}. We compute the carrier occupation probability in the valence band under a high electric field of $1 MV/cm$, by solving the three-dimensional Boltzmann equation with the entire set of tight binding energy bands within the Brillouin zone of the digital alloy. The optical phonon energy and mobility of each alloy are taken to be the averages of the binary constituent optical phonon energies and mobilities. The energy resolved carrier occupation probability for all the alloys is shown in Fig.~\ref{fig:bte}. The valence band plot in Fig.~\ref{fig:bte} shows that the occupation probability for InAlAs, AlInAsSb and AlAsSb is lower than that of the other two alloys at high energies. The optical phonon energies of these alloys are not sufficiently large to overcome their minigaps, and thus the minigaps prevent holes from ramping their kinetic energies up to $E_{TH}$. The top few valence bands of InGaAs are shown on the left side of Fig.~\ref{fig:3D carrier}(a) and the valence band carrier density distribution is projected onto the bottom. The bands are inverted for a better view. For clearer understanding, the InGaAs carrier density distribution contour is also shown on the right. The valence band carrier distributions for the other alloys are shown in Fig.~\ref{fig:3D carrier}(b) AlGaAs, (c) InAlAs, (d) AlInAsSb and (e) AlAsSb. By studying the contours of each material, we observe that the densities for InAlAs, AlInAsSb and AlAsSb are more localized compared to that of AlGaAs and InGaAs.
This is once again consistent with the lower hole impact ionization of InAlAs, AlInAsSb and AlAsSb. \begin{figure} \includegraphics[width=0.45\textwidth]{bte_contour.pdf} \caption{\label{fig:3D carrier} Carrier density distribution for (a) InGaAs, (b) AlGaAs, (c) InAlAs, (d) AlInAsSb and (e) AlAsSb. } \end{figure} For InGaAs and AlGaAs, the bandwidths are large enough to allow both holes and electrons to reach $E_{TH}$ easily. The resulting values of $k$ for these materials are quite high. Correspondingly, these two alloys have higher excess noise. In contrast, in InAlAs, AlInAsSb and AlAsSb, it is easy for electrons to reach the threshold energy, but the holes are confined close to the valence band edge. This results in asymmetric ionization coefficients which give a low $k$, leading in turn to low excess noise. Armed with these results, we attempt to paint a clearer picture of how the minigaps and band splitting can reduce the excess noise in APDs. Specifically, we propose a set of empirical inequalities that can be used to judge the excess noise performance of a digital alloy. \section{Empirical Inequalities} Based on the experimental results and our theoretical calculations, five inequalities are proposed that use as inputs only material parameters, like effective mass and minigap size, obtained from our material bandstructures. In this paper, the transport is in the $[001]$ direction. Since the minigaps considered lie in the LH band, we use the unfolded LH effective mass value in the $\Gamma-[001]$ direction for the inequalities. The masses are obtained using the relationship $\hbar^2 k^2/2m^*=E(1+\alpha E)$ where $\alpha=[(1-m^*/m_0)^2]/E_G$ \cite{FAWCETT19701963}. In reality, the effective masses are complicated tensors that cannot be included in these empirical inequalities but are captured by the NEGF simulations described in Section~\ref{NEGF}. A digital alloy material should favor low noise if it satisfies the majority of these inequalities.
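The nonparabolic relation above can be inverted in closed form to recover $E(k)$ along a band; a small sketch (SI constants; the $k$ value and the InAlAs-like LH parameters are illustrative assumptions, not the actual unfolded-band fit):

```python
import math

HBAR, Q, M0 = 1.0546e-34, 1.602e-19, 9.109e-31  # J*s, C, kg

def kane_energy(k, m_rel, E_G):
    """Solve hbar^2 k^2 / (2 m*) = E (1 + alpha E) for E (in eV),
    with the nonparabolicity alpha = (1 - m*/m0)^2 / E_G."""
    alpha = (1.0 - m_rel)**2 / E_G                 # 1/eV
    rhs = (HBAR * k)**2 / (2 * m_rel * M0) / Q     # parabolic energy, eV
    return (-1.0 + math.sqrt(1.0 + 4.0 * alpha * rhs)) / (2.0 * alpha)

if __name__ == "__main__":
    k = 5e8                                # 0.5 nm^-1, illustrative
    E_np = kane_energy(k, 0.4, 1.23)       # InAlAs-like LH mass and gap
    E_par = (HBAR * k)**2 / (2 * 0.4 * M0) / Q
    print(E_np, E_par)                     # nonparabolicity lowers E at fixed k
```

The nonparabolic correction lowers $E$ at fixed $k$, i.e., it effectively heavies the band away from $\Gamma$.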
The four main inequalities are: \begin{equation} {\rm{Inequality~(1)~~~~~}}\Delta E_b/E_{TH}\ll 1 \nonumber \end{equation} \begin{equation} {\rm{Inequality~(2)~~~~~}}E_{opt}/\Delta E_m\ll 1 \nonumber \end{equation} \begin{equation} {\rm{Inequality~(3)~~~~~}}\exp\left(-\frac{4\sqrt{2m_l}\Delta E_m^{3/2}}{3q\hbar F} \right)\ll 1 \nonumber \end{equation} \begin{equation} {\rm{Inequality~(4)~~~~~}}\exp\left(-\frac{4\sqrt{2m_l}\Delta E_{LS}^{3/2}}{3q\hbar F} \right)\ll 1 \nonumber \end{equation} Here, $\Delta E_b$ represents the energy difference between the VB maximum and the first minigap edge in the VB, $E_{opt}$ is the optical phonon energy and $\Delta E_m$ gives the size of the minigap. The longitudinal effective mass of the band in which the minigap exists is represented by $m_l$. $\Delta E_{LS}$ signifies the energy difference between the LH and SO bands at the $\Gamma$ point. A pictorial view of the different energy differences and inequalities mentioned above is shown in Fig. \ref{fig:inqualities}. \begin{figure}[b] \includegraphics[width=0.49\textwidth]{inequalities.pdf} \caption{\label{fig:inqualities} Criteria for designing low noise digital alloy APDs. Inequality~(1) states that the bandwidth to the first minigap must be well below the ionization threshold energy. Inequality~(2) asserts that the optical phonon energy has to be less than the minigap size. The tunneling probability for holes to jump across the minigap or from the light-hole band to the split-off band must be low. These are described by Inequality~(3) and Inequality~(4).} \end{figure} The first inequality, Inequality~(1), states that the energy bandwidth $\Delta E_b$ must be much smaller than the ionization threshold energy $E_{TH}$. This means a carrier cannot gain sufficient kinetic energy to impact ionize before reaching the minigap. When a carrier reaches a minigap it faces a barrier (Fig.~\ref{fig:inqualities}), which it can overcome by phonon scattering or quantum tunneling.
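The left-hand sides of Inequalities~(3) and (4) are Fowler-Nordheim-type triangular-barrier factors and can be evaluated directly from the Table~\ref{table:1} parameters; a short numerical sketch (SI constants, $F=1$~MV/cm):

```python
import math

HBAR = 1.0546e-34   # J*s
Q    = 1.602e-19    # C
M0   = 9.109e-31    # kg

def fn_factor(m_rel, dE_eV, F=1e8):
    """exp(-4*sqrt(2*m)*dE^{3/2}/(3*q*hbar*F)): the tunneling factor of
    Inequalities (3)/(4) for a triangular barrier of height dE (eV),
    longitudinal mass m_rel*m0, and field F (V/m)."""
    m, dE = m_rel * M0, dE_eV * Q
    return math.exp(-4 * math.sqrt(2 * m) * dE**1.5 / (3 * Q * HBAR * F))

if __name__ == "__main__":
    # Inequality (3): minigap barrier, LH mass and Delta E_m from Table 1.
    print("InGaAs :", fn_factor(0.13, 0.03))  # ~0.88, minigap transparent
    print("InAlAs :", fn_factor(0.40, 0.12))  # ~0.17, minigap blocks holes
    # Inequality (4): LH-to-SO barrier, Delta E_LS from Table 1.
    print("InGaAs LH-SO :", fn_factor(0.13, 0.35))  # ~6e-3
```

These reproduce the corresponding entries of Table~\ref{table:4} to within the rounding of the physical constants.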
Inequality~(2) sets the condition for phonon scattering across the minigap. If the $E_{opt}$ of the material is less than $\Delta E_m$, then phonon scattering of the carriers across the minigap is inhibited because carriers cannot gain sufficient energy to jump across the gap. It is possible for the carrier to still overcome the minigap by tunneling, and the condition for that is given in Inequality~(3), in terms of the tunneling probability across the minigap under the influence of an electric field. To compute the tunneling probability we consider a triangular barrier in the minigap region and use the well-known Fowler-Nordheim equation. Together, Inequalities~(2) and (3) quantify the effectiveness of the minigap in limiting hole ionization in digital alloys. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{LH_SO_inequality.pdf} \caption{\label{fig:LH_SO inequality} Effect of spin-orbit coupling on LH/SO separation. (a) Weak coupling results in small $\Delta E_{LS}$ and (b) strong coupling results in large $\Delta E_{LS}$.} \end{figure} Electron-injected digital alloy APDs can in fact achieve low noise even in the absence of minigaps, for instance in a material with a large separation $\Delta E_{LS}$ between the LH and SO bands, like AlAsSb. Holes within the HH/LH bands are confined near the valence band edge by thermalization (hole-phonon scattering) due to the heavy effective masses in these bands, preventing them from reaching the ionization threshold energy within the band. An alternate pathway to ionization involves the split-off band. Since the split-off band has a low effective mass, holes require much smaller momentum to reach higher energies in this band, so that holes entering this band from the HH/LH bands can quickly gain their ionization threshold energy. The separation between the HH/LH and SO bands is controlled by spin-orbit coupling, as shown in Fig.~\ref{fig:LH_SO inequality}.
Strong spin-orbit coupling due to the inclusion of heavy elements, like antimony or bismuth, can increase the separation $\Delta E_{LS}$, as shown in Fig.~\ref{fig:LH_SO inequality}(b). When $\Delta E_{LS}$ is large it becomes very difficult for holes to reach the threshold energy. Inequality~(4) is accordingly important for APDs in which electron impact ionization is the dominant process, and is a measure of hole tunneling from the light-hole to the split-off band. An inherent fifth inequality, satisfied by these five alloys, is \begin{equation} E_{SC}<E_{TH} \label{ineq5} \end{equation} $E_{SC}$ is the energy gained by a hole between successive phonon scattering events, expressed as $E_{SC}=qF\lambda_{mfp}$. The \textit{z}-directed mean free path is $\lambda_{mfp}=v_{sat}\tau_{SC}/2$, where $v_{sat}$ is the saturation velocity and $\tau_{SC}$ is the scattering lifetime. $E_{SC}$ values of the five alloys at electric fields of $100kV/cm$ and $500kV/cm$ are given in Table~\ref{table:3}. We extract $\tau_{SC}$ for an alloy by assuming a virtual crystal approximation of the component binary alloy scattering times. The $\tau_{SC}$ values for InAs, GaAs, AlAs and AlSb are $0.08ps$, $0.09ps$, $0.08ps$ and $0.11ps$, respectively \cite{yang2020materials}. A similar average is done for the alloy saturation velocities. Due to the unavailability of an AlSb $v_{sat}$ value, the InAs $v_{sat}$ is used for AlInAsSb and the AlAs $v_{sat}$ for AlAsSb. The InAs, GaAs and AlAs $v_{sat}$ values used are $5\times10^4 m/s$, $9\times10^4 m/s$ and $8\times10^4 m/s$, respectively \cite{Palankovski}.
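The $E_{SC}$ arithmetic can be sketched in a few lines (alloy $\tau_{SC}$ as equal-weight averages of the binaries, with the $v_{sat}$ substitutions noted above); note that in eV the product $qF\lambda_{mfp}$ is numerically just $F\lambda_{mfp}$ with $F$ in V/m:

```python
# E_SC = q F lambda_mfp (numerically F * lambda in eV), with the mean free
# path lambda_mfp = v_sat * tau_SC / 2. Alloy tau_SC is taken as the
# equal-weight average of the binary values quoted in the text.
TAU = {"InAs": 0.08e-12, "GaAs": 0.09e-12, "AlAs": 0.08e-12, "AlSb": 0.11e-12}  # s
VSAT = {"InAs": 5e4, "GaAs": 9e4, "AlAs": 8e4}  # m/s (no AlSb value available)

def e_sc(binaries, vsat, F):
    tau = sum(TAU[b] for b in binaries) / len(binaries)
    lam = vsat * tau / 2.0   # z-directed mean free path (m)
    return F * lam           # energy gained between scattering events (eV)

for F in (1e7, 5e7):         # 100 kV/cm and 500 kV/cm, in V/m
    print(f"F = {F:.0e} V/m")
    print("  InGaAs:", e_sc(("InAs", "GaAs"), (VSAT["InAs"] + VSAT["GaAs"]) / 2, F))
    print("  AlGaAs:", e_sc(("AlAs", "GaAs"), (VSAT["AlAs"] + VSAT["GaAs"]) / 2, F))
    print("  AlAsSb:", e_sc(("AlSb", "AlAs"), VSAT["AlAs"], F))  # AlAs v_sat stands in
```

These values reproduce the InGaAs, AlGaAs and AlAsSb rows of Table~\ref{table:3}; the quaternary AlInAsSb entry depends on the exact composition weighting and is not reproduced here.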
\begin{table}[b] \begin{center} \begin{tabular}{|m{1.5cm}|m{2cm}|m{2cm}|m{1.5cm}|} \hline Material & $E_{SC}$ (eV) & $E_{SC}$ (eV) & $E_{TH}$ (eV)\\ & at $100kV/cm$ & at $500kV/cm$ &\\ \hline InGaAs & 0.029 & 0.149 & 0.95 \\ \hline AlGaAs & 0.036 & 0.181 & 3.91 \\ \hline InAlAs & 0.028 & 0.138 & 1.85 \\ \hline AlInAsSb & 0.024 & 0.119 & 1.79 \\ \hline AlAsSb & 0.038 & 0.19 & 2.4 \\ \hline \end{tabular} \end{center} \caption{$E_{SC}$ values at $F=100kV/cm$ and $F=500kV/cm$, and $E_{TH}$ of the five alloys. For a material with equal conduction and valence band effective masses, considering parabolic bands, the threshold energy $E_{TH}=1.5 E_G$ \cite{ridley2013quantum}. The same assumption is made here for the fifth inequality as this is standard practice in the APD literature.} \label{table:3} \end{table} \begin{table*}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Material} & \textbf{Inequality 1} & \textbf{Inequality 2} & \textbf{Inequality 3} & \textbf{Inequality 4} & \textbf{$k$ (digital alloy)} & \textbf{$k$ (random alloy)} \\ \hline InGaAs & \cellcolor{teagreen} 0.38 & \cellcolor{cinnabar} 1.08 & \cellcolor{cinnabar} 0.88 & \cellcolor{cinnabar} 0.006 & \cellcolor{cinnabar} 0.3\cite{InGaAs_expt} & \cellcolor{cinnabar} 0.5\cite{InGaAs_expt} \\ \hline AlGaAs & \cellcolor{bittersweet} 1 & \cellcolor{bittersweet} $\infty$ & \cellcolor{bittersweet} 1 & \cellcolor{bittersweet} $7.2 \times 10^{-4}$ & \cellcolor{bittersweet} 0.1\cite{AlGaAs_expt} & \cellcolor{bittersweet} 0.2\cite{AlGaAs_expt} \\ \hline InAlAs & \cellcolor{ufogreen} 0.16 & \cellcolor{ufogreen} 0.33 & \cellcolor{ufogreen} 0.17 & \cellcolor{teagreen} $5.6 \times 10^{-4}$ & \cellcolor{teagreen} 0.05\cite{InAlAs_expt} & \cellcolor{bittersweet} 0.2\cite{InAlAs_expt} \\ \hline AlInAsSb & \cellcolor{ufogreen} 0.17 & \cellcolor{teagreen} 0.59 & \cellcolor{teagreen} 0.53 & \cellcolor{turquoisegreen} $7.9 \times 10^{-7}$ & \cellcolor{turquoisegreen} 0.01\cite{AlInAsSb_expt} & \cellcolor{ufogreen}
0.018\cite{AlInAsSb_expt2} \\ \hline AlAsSb & \cellcolor{turquoisegreen} 0.22 & \cellcolor{turquoisegreen} 0.45 & \cellcolor{turquoisegreen} 0.3 & \cellcolor{ufogreen} $3.4 \times 10^{-7}$ & \cellcolor{ufogreen} 0.005\cite{AlAsSb_expt} & \cellcolor{turquoisegreen} 0.05\cite{AlAsSb_expt2} \\ \hline \end{tabular} \end{center} \caption{Suitability of digital alloys for attaining low noise is judged using the proposed inequalities. Here, the color green means beneficial for low noise and red indicates it is detrimental. The impact of each inequality in determining the experimentally measured ionization coefficient ratio $k$ of the material is depicted by the color shades. A darker shade indicates that the inequality has a greater impact on the value of $k$. The experimental random alloy $k$'s of the five alloys are given in column 6.} \label{table:4} \end{table*} In order to validate these inequalities as design criteria, we apply them to the set of digital alloys mentioned in this paper. We consider a high electric field of $1MV/cm$ for Inequalities~(3) and (4). The values of the left sides of the inequalities for the five alloys, InGaAs, AlGaAs, InAlAs, AlInAsSb and AlAsSb, are given in the first four columns of Table \ref{table:4}, while the measured $k$ is provided as reference in column 5. The table cells are colored green or red. Green cells aid in noise suppression (left sides of the inequalities are relatively small) while red is detrimental to reducing noise (left sides larger and corresponding inequalities not satisfied). Additionally, the color intensities highlight the strength of that inequality (how far the left side is from equality with the right side). A lighter shade represents a smaller impact while a darker shade means that condition has a greater effect on the impact ionization noise. For example, in the case of InGaAs, Inequality~(1) is shaded light green which means it does not affect noise performance significantly.
However, the remaining inequalities for InGaAs are shaded dark red, indicating their key role in the high noise and hence high $k$ of InGaAs. The inequalities for AlGaAs, which has a slightly lower $k$, have a lighter shade of red. There are no minigaps for AlGaAs in the light-hole band. There is a minigap in the SO band of AlGaAs, but it is very deep in the valence band and there are other available states at that energy. Thus, holes can gain sufficient momentum to jump to other bands and bypass the minigap. So, we consider $\Delta E_m=0$ for it. We would accordingly expect AlGaAs to have a higher noise. However, since the LH effective mass of AlGaAs is greater than that of InGaAs, it has lower hole impact ionization and thus lower noise compared to InGaAs. The remaining alloys have significantly lower noise compared to these two. The boxes for InAlAs, AlInAsSb and AlAsSb are all green. This means these three alloys are quite favorable for attaining low excess noise performance. InAlAs has a minigap size $\Delta E_m=0.12 eV$ which is larger than its optical phonon energy. It also has a large LH effective mass which suppresses quantum tunneling across the minigap, as well as an LH-SO separation $\Delta E_{LS}$ comparable to that of AlGaAs and InGaAs. AlInAsSb has a low value for Inequality~(1), so that box is shaded dark green. However, for Inequalities~(2) and (3) the values for AlInAsSb are higher than those of InAlAs and are thus shaded in a lighter color. AlInAsSb has a larger LH-SO separation than InAlAs and hence its Inequality~(4) has a darker shade. In AlAsSb, the values for Inequalities~(1)-(3) have medium shades as they lie between the maximum and minimum values in each of these columns for the corresponding inequalities. However, AlAsSb has a large $\Delta E_{LS}=0.54eV$, so its Inequality~(4) is shaded dark green. Based on the inequality values alone it would seem that InAlAs should have the lowest noise since it has the darkest shades.
However, looking at the Inequality~(4) values for these three materials we can infer that the LH-SO separation plays a critical role in reducing noise. Here, AlAsSb has the lowest $k=0.005$ and also the largest $\Delta E_{LS}$. On the contrary, InAlAs has the highest $k=0.05$ and the smallest $\Delta E_{LS}$. Finally, Inequality~(5), discussed in the context of split-off states (Eq.~\ref{ineq5}), is trivially satisfied by all five studied alloys. While important, it is thus not tabulated here, as it does not alter the status quo. In short, the values of the inequalities in Table~\ref{table:4} give a fairly good understanding of the excess noise performance of the set of digital alloys considered in this paper. They can potentially serve as empirical design criteria for judging new digital alloys in consideration as potential material candidates for digital alloy superlattice APDs. \section{Conclusion} In this paper, we have studied the digital alloy valence band carrier transport using NEGF and BTE formalisms. Based on our simulation results, we explain how minigaps and the LH/SO offset impede hole impact ionization in APDs and improve their excess noise performance. When these gaps/offsets are sufficiently large they cannot be bridged by quantum tunneling or phonon scattering processes. Furthermore, we propose five inequalities as empirical design criteria for digital alloys with low noise performance capabilities. Material parameters calculated computationally are used as inputs for these criteria. We validate these criteria by explaining the excess noise performance of several experimentally fabricated digital alloy APDs. The design criteria can be used to computationally design new digital alloy structures and benchmark them before fabrication. \section*{Acknowledgment} This work was funded by National Science Foundation grant NSF 1936016. The authors thank Dr. John P. David of the University of Sheffield and Dr. Seth R.
Bank of University of Texas-Austin for important discussions and insights. The calculations are done using the computational resources from High-Performance Computing systems at the University of Virginia (Rivanna) and the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction} Many intractable computational problems on graphs admit tractable algorithms when applied to trees or forests. \e{Tree decomposition} extracts a tree structure from a graph by grouping nodes into \e{bags} (where each bag is treated as a single node). The corresponding operation on hypergraphs is that of a \e{generalized hypertree decomposition}~\cite{DBLP:journals/jair/GottlobGS05}, in which the bags are associated with both nodes and hyperedges. A generalized hypertree decomposition entails a tree decomposition of the \e{primal} graph (which has the same set of nodes, and an edge between every two nodes that co-occur in a hyperedge) and an assignment of hyperedge labels (edge covers) to the tree nodes~\cite{DBLP:conf/wg/GottlobGMSS05}. Tree decomposition and generalized hypertree decomposition have a plethora of applications, including join optimization in databases~\cite{DBLP:conf/sigmod/TuR15,DBLP:journals/jair/GottlobGS05}, containment of database queries, constraint-satisfaction problems~\cite{DBLP:journals/jcss/KolaitisV00}, computation of Nash equilibria in games~\cite{DBLP:journals/jair/GottlobGS05}, analysis of probabilistic graphical models~\cite{LauSpi-JRS88}, and weighted model counting~\cite{DBLP:conf/sum/KenigG15}. Past research has focused on obtaining a ``good'' tree decomposition for the given graph, where goodness is typically measured by means of the \e{width}: the maximal cardinality of a bag (minus one). Nevertheless, finding a tree decomposition of minimal width is NP-hard~\cite{arnborg1987complexity}. Moreover, in various applications the measure of goodness is different from (though related to) the width~\cite{DBLP:conf/sum/KenigG15,DBLP:conf/wg/GottlobGMSS05}. In this paper, we explore the task of \e{enumerating} all (or a subset of) the tree decompositions of a graph. Such algorithms have been proposed in the past for small graphs (representing database queries), without complexity guarantees~\cite{DBLP:conf/sigmod/TuR15}.
Our main result is an enumeration algorithm that runs in incremental polynomial time. We first need to define which tree decompositions should be enumerated. For example, if we take a graph that is already a tree, we do not wish to enumerate the tree decompositions that group nodes for no reason; in fact, the tree itself is the only reasonable decomposition in this case. Therefore, we consider only tree decompositions that cannot be ``improved'' by removing or splitting a bag. Such a tree decomposition, which we formally define in Section~\ref{sec:preliminaries}, is said to be \e{proper}. We first show that such tree decompositions are in a bijective correspondence with the \e{minimal triangulations} of the graph at hand. A \e{triangulation} is a set of edges that is added to the graph to make it chordal, and it is \e{minimal} if no strict subset of it is a triangulation of the graph. So, our task is reduced to that of enumerating all of the minimal triangulations of a graph. Our main contribution is an algorithm for enumerating all the minimal triangulations of an input graph in incremental polynomial time. Our approach is as follows. Parra and Scheffler~\cite{DBLP:journals/dam/ParraS97} have shown that there is a one-to-one correspondence between the minimal triangulations and the maximal sets of non-crossing minimal separators. (The precise definitions are in Section~\ref{sec:preliminaries}.) So, enumerating the minimal triangulations boils down to enumerating these maximal sets, which can be thought of as the maximal independent sets of the graph $\mathcal{G}$ that represents crossings among minimal separators. It is well known that all the maximal independent sets of a graph can be enumerated with polynomial delay~\cite{DBLP:journals/ipl/JohnsonP88,DBLP:journals/jcss/CohenKS08}. However, this is insufficient for us, since the graph $\mathcal{G}$ is not given as input, and in fact, can have an exponential number of nodes (in the size of the original given graph). 
Therefore, we cannot construct this graph ahead of time if we are to establish incremental polynomial time. Instead, we use a result by Berry et al.~\cite{conf/wg/BerryBC99}, showing that the minimal separators of a graph can be enumerated with polynomial delay. We devise an algorithm that enumerates the maximal independent sets of a graph by assuming that nodes are given by a polynomial-delay iterator, and by assuming some other complexity bounds, which we prove to hold in the case of minimal separators. The rest of the paper is organized as follows. In Section~\ref{sec:preliminaries} we give preliminary definitions and notation, recall basic results from the literature, and provide some initial insights. In Section~\ref{sec:SGR} we define the notion of a \e{succinct graph representation} (where nodes are given via an iterator), and give an algorithm for enumerating the maximal independent sets of such a graph. In Section~\ref{sec:enumMT} we prove that the graph of minimal separators satisfies the complexity assumptions needed for the enumeration algorithm, and thereby establish our main result. Finally, in Section~\ref{sec:GenExtInd}, we give a generic algorithm for extending a set of edges to a minimal triangulation. \section{Preliminaries}\label{sec:preliminaries} \subsection{Graphs and Cliques} The graphs in this work are undirected. For a graph $g$, the set of nodes is denoted by $\mathsf{V}(g)$, and the set of edges (where an edge is a pair $\set{u,v}$ of distinct nodes) is denoted by $\mathsf{E}(g)$. If $U\subseteq\mathsf{V}(g)$, then $g_{|U}$ denotes the subgraph of $g$ \e{induced} by $U$; hence, $\mathsf{V}(g_{|U})=U$ and $\mathsf{E}(g_{|U})=\set{\set{u,v}\in\mathsf{E}(g)\mid \set{u,v}\subseteq U}$. Given a subset $S$ of $\mathsf{V}(g)$, we denote by $g\setminus S$ the graph obtained from $g$ by removing all the nodes in $S$ (along with their incident edges), that is, the graph $g_{|\mathsf{V}(g)\setminus S}$. 
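To make the notation concrete, the following minimal sketch (our own illustrative code, not part of the formal development; graphs are represented as dictionaries mapping each node to its set of neighbors) implements the induced-subgraph and node-removal operations just defined:

```python
# A graph g is a dict mapping each node to the set of its neighbors.
# induced(g, U) computes the induced subgraph g|_U, and remove(g, S)
# computes g \ S.

def induced(g, U):
    """The subgraph of g induced by the node set U."""
    U = set(U)
    return {v: g[v] & U for v in U}

def remove(g, S):
    """The graph obtained from g by deleting the nodes in S."""
    return induced(g, set(g) - set(S))
```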
The \e{neighborhood} of a node $v$ of $g$, denoted $N_g(v)$, is the set $\set{u\mid\set{u,v} \in \mathsf{E}(g)}$. The \e{neighborhood} of a set $U$ of nodes of $g$, denoted $N_g(U)$, is the set $\cup_{v\in U}N_g(v)\setminus U$; in words, the neighborhood of $U$ consists of every node that is a neighbor of some node in $U$, and is not itself in $U$. Let $g$ be a graph. Let $U$ be a set of nodes of $g$. We say that $U$ is a \e{clique} (\e{of $g$}) if every two nodes of $U$ are connected by an edge. We say that $U$ is a \e{maximal clique} if $U$ is a clique and $U$ is not strictly contained in any other clique. We denote by $\mathrm{\textsc{MaxClq}}(g)$ the set of all the maximal cliques of $g$. The operation of \e{saturating} $U$ (\e{in $g$}) is that of connecting every non-adjacent pair of nodes in $U$ by a new edge. Hence, if $h$ is obtained from $g$ by saturating $U$, then $U$ is a clique of $h$. \subsection{Minimal Separators} Let $g$ be a graph, and let $S$ be a subset of $\mathsf{V}(g)$. Let $u$ and $v$ be two nodes of $g$. We say that $S$ is a \e{$(u,v)$-separator} of $g$ if $u$ and $v$ belong to distinct connected components of $g\setminus S$. We say that $S$ is a \e{minimal} $(u,v)$-separator of $g$ if no strict subset of $S$ is a $(u,v)$-separator. We say that $S$ is a \e{minimal separator} of $g$ if there are two nodes $u$ and $v$ such that $S$ is a minimal $(u,v)$-separator. In each of these forms of a separator, we may omit ``of $g$'' if $g$ is clear from the context. We denote by $\mathrm{\textsc{MinSep}}(g)$ the set of all the minimal separators of $g$. We mention that the number of minimal separators (i.e., $|\mathrm{\textsc{MinSep}}(g)|$) may be exponential in the number of nodes (i.e., $|\mathsf{V}(g)|$). Let $g$ be a graph, and let $S$ and $T$ be two minimal separators of $g$. We say that $S$ \e{crosses} $T$, denoted $S\mathbin{\natural}_g T$, if there are nodes $v,u\in T$ such that $S$ is a $(v,u)$-separator. 
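The crossing relation can be tested directly from the definition: $S$ crosses $T$ exactly when the nodes of $T$ meet at least two connected components of $g\setminus S$. A self-contained sketch (our own illustrative code, with graphs again represented as dictionaries from nodes to neighbor sets):

```python
# Illustrative code (ours, not from the paper): deciding whether a
# separator S crosses a set T.

def components(g, removed):
    """Connected components of g after deleting the node set `removed`."""
    remaining = set(g) - set(removed)
    comps = []
    while remaining:
        stack = [remaining.pop()]
        comp = set(stack)
        while stack:
            v = stack.pop()
            for u in g[v]:
                if u in remaining:
                    remaining.discard(u)
                    comp.add(u)
                    stack.append(u)
        comps.append(comp)
    return comps

def crosses(g, S, T):
    """True iff S is a (u,v)-separator for some two nodes u, v of T,
    i.e., T meets at least two connected components of g minus S."""
    return sum(1 for c in components(g, S) if c & set(T)) >= 2
```

On the cycle over nodes $1,2,3,4$, for instance, the minimal separators $\{1,3\}$ and $\{2,4\}$ cross each other, illustrating the symmetry of the relation.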
If $g$ is clear from the context, we may omit it and write simply $S\mathbin{\natural} T$. It is known that $\mathbin{\natural}$ is a symmetric relation: if $S$ crosses $T$ then $T$ crosses $S$~\cite{DBLP:journals/dam/ParraS97,DBLP:journals/tcs/KloksKS97}. Hence, if $S\mathbin{\natural} T$ then we may also say that $S$ and $T$ are \e{crossing}. When $S$ and $T$ are non-crossing, we also say that they are \e{parallel}. \subsection{Chordality and Triangulation} Let $g$ be a graph. For a cycle $c$ in $g$, a \e{chord} of $c$ is an edge $e\in\mathsf{E}(g)$ that connects two nodes that are non-adjacent in $c$. We say that $g$ is \e{chordal} if every cycle of $g$ of length greater than three has a chord. Dirac~\cite{Dirac1961} has shown a characterization of chordal graphs by means of their minimal separators. \begin{citedtheorem}{Dirac~\cite{Dirac1961}} \label{thm:Dirac} A graph $g$ is chordal if and only if every minimal separator of $g$ is a clique. \end{citedtheorem} Rose~\cite{Rose1970597} has shown that a chordal graph $g$ has fewer minimal separators than nodes (that is, if $g$ is chordal then $|\mathrm{\textsc{MinSep}}(g)|<|\mathsf{V}(g)|$). Moreover, it is known that we can find all of these minimal separators in linear time. \begin{citedtheorem}{Kumar and Madhavan~\cite{Kumar1998155}} There is an algorithm that, given a chordal graph $g$, computes $\mathrm{\textsc{MinSep}}(g)$ in $O(|\mathsf{V}(g)|+|\mathsf{E}(g)|)$ time. \end{citedtheorem} A \e{triangulation} of a graph $g$ is a graph $h$ such that $\mathsf{V}(g)=\mathsf{V}(h)$, $\mathsf{E}(g)\subseteq\mathsf{E}(h)$, and $h$ is chordal. A \e{minimal triangulation} of $g$ is a triangulation $h$ of $g$ with the following property: for every graph $h'$ with $\mathsf{V}(g)=\mathsf{V}(h')$, if $\mathsf{E}(g)\subseteq\mathsf{E}(h')\subsetneq\mathsf{E}(h)$ then $h'$ is non-chordal (or in other words, $h'$ is not a triangulation of $g$). 
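Both chordality and minimality of a triangulation can be tested on small examples. The sketch below (our own illustrative code) checks chordality by maximum cardinality search, using the fact, due to Tarjan and Yannakakis, that a graph is chordal if and only if the reversal of such a search order is a perfect elimination ordering; it then tests minimality via the standard fact that a triangulation is minimal if and only if no single fill edge can be removed while keeping the graph chordal:

```python
def is_chordal(g):
    """Chordality test: compute a maximum cardinality search (MCS) order;
    g is chordal iff, for every node, its MCS-earlier neighbors form a
    clique (i.e., the reversed MCS order is a perfect elimination order)."""
    weight = {v: 0 for v in g}
    unnumbered = set(g)
    pos = {}
    while unnumbered:
        v = max(unnumbered, key=lambda u: weight[u])
        pos[v] = len(pos)
        unnumbered.discard(v)
        for u in g[v]:
            if u in unnumbered:
                weight[u] += 1
    for v in g:
        earlier = [u for u in g[v] if pos[u] < pos[v]]
        if any(b not in g[a]
               for i, a in enumerate(earlier) for b in earlier[i + 1:]):
            return False
    return True

def is_minimal_triangulation(g, h):
    """Assuming h is a supergraph of g on the same nodes: h is a minimal
    triangulation of g iff h is chordal and removing any single fill edge
    destroys chordality."""
    if not is_chordal(h):
        return False
    fill = [(u, v) for u in h for v in h[u] if u < v and v not in g[u]]
    for u, v in fill:
        h2 = {w: set(nbrs) for w, nbrs in h.items()}
        h2[u].discard(v)
        h2[v].discard(u)
        if is_chordal(h2):
            return False
    return True
```

For example, the four-cycle plus one chord is a minimal triangulation of the four-cycle, while the complete graph on the same four nodes is a (non-minimal) triangulation of it.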
In particular, if $g$ is already chordal then $g$ is the only minimal triangulation of itself. We denote by $\mathrm{\textsc{MinTri}}(g)$ the set of all the minimal triangulations of $g$. \subsection{Tree Decomposition} Let $g$ be a graph. A \e{tree decomposition} $d$ of $g$ is a pair $(t,\beta)$, where $t$ is a tree and $\beta:\mathsf{V}(t)\rightarrow2^{\mathsf{V}(g)}$ is a function that maps every node of $t$ into a set of nodes of $g$, so that all of the following hold. \begin{itemize} \item Nodes are covered: for every node $u\in\mathsf{V}(g)$ there is a node $v\in\mathsf{V}(t)$ such that $u\in\beta(v)$. \item Edges are covered: for every edge $e\in\mathsf{E}(g)$ there is a node $v\in\mathsf{V}(t)$ such that $e\subseteq\beta(v)$. \item For all $u,v,w\in\mathsf{V}(t)$, if $v$ is on the path between $u$ and $w$, then $\beta(v)$ contains $\beta(u)\cap\beta(w)$. This property is termed the \e{junction-tree} property or the \e{running-intersection} property. \end{itemize} Let $g$ be a graph, and let $d=(t,\beta)$ be a tree decomposition of $g$. For a node $v$ of $t$, the set $\beta(v)$ is called a \e{bag} of $d$. We denote by $\mathrm{bags}(d)$ the set $\set{\beta(v)\mid v\in\mathsf{V}(t)}$. We denote by $\algname{saturate}(g,d)$ the graph obtained from $g$ by saturating (i.e., adding an edge between every pair of nodes in) every bag of $d$. Let $d_1$ and $d_2$ be two tree decompositions of a graph $g$. We say that $d_1$ and $d_2$ are \e{bag equivalent}, denoted $d_1\mathrel{\equiv_{\mathsf{b}}} d_2$, if $\mathrm{bags}(d_1)=\mathrm{bags}(d_2)$. We denote by $d_1\sqsubseteq d_2$ the fact that for every bag $b_1\in\mathrm{bags}(d_1)$ there exists a bag $b_2\in\mathrm{bags}(d_2)$ such that $b_1\subseteq b_2$. Let $g$ be a graph, and let $d$ and $d'$ be tree decompositions of $g$. We say that $d'$ \e{strictly subsumes} $d$ if $d'\sqsubseteq d$ and $\mathrm{bags}(d)\not\subseteq\mathrm{bags}(d')$. 
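The three defining conditions of a tree decomposition are straightforward to verify mechanically. An illustrative checker (our own code, not part of the paper's development; the tree \texttt{t} is an adjacency dictionary over tree nodes, and \texttt{beta} maps every tree node to its bag):

```python
def tree_path(t, u, w):
    """The unique path from u to w in the tree t (as a list of tree nodes)."""
    parent = {u: None}
    stack = [u]
    while stack:
        v = stack.pop()
        for x in t[v]:
            if x not in parent:
                parent[x] = v
                stack.append(x)
    path = [w]
    while path[-1] != u:
        path.append(parent[path[-1]])
    return path

def is_tree_decomposition(g, t, beta):
    # Condition 1: every node of g appears in some bag.
    if set(g) - set().union(*beta.values()):
        return False
    # Condition 2: every edge of g is contained in some bag.
    for u in g:
        for v in g[u]:
            if not any({u, v} <= beta[x] for x in t):
                return False
    # Condition 3 (running intersection): bags on the path from u to w
    # all contain beta[u] & beta[w].
    for u in t:
        for w in t:
            for v in tree_path(t, u, w):
                if not beta[u] & beta[w] <= beta[v]:
                    return False
    return True
```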
A tree decomposition is \e{proper} if it is not strictly subsumed by any tree decomposition, and it is \e{improper} otherwise. Figure~\ref{fig:decompositions} shows examples of proper and improper tree decompositions. \subsection{Enumeration} An \e{enumeration problem} $\mathsf{P}$ is a collection of pairs $(x,Y)$ where $x$ is an \e{input} and $Y$ is a finite set of \e{answers} for $x$, denoted by $\mathsf{P}(x)$. A \e{solver} for an enumeration problem $\mathsf{P}$ is an algorithm that, when given an input $x$, produces a sequence of answers such that every answer in $\mathsf{P}(x)$ is printed precisely once. Johnson, Yannakakis, and Papadimitriou~\cite{DBLP:journals/ipl/JohnsonP88} introduced several different notions of \e{efficiency} for enumeration algorithms, and we recall these now. Let $\mathsf{P}$ be an enumeration problem, and let $A$ be a solver for $\mathsf{P}$. We say that $A$ runs in: \begin{itemize} \item \e{polynomial total time} if the total execution time of $A$ is polynomial in $(|x|+|\mathsf{P}(x)|)$; \item \e{polynomial delay} if the time between every two consecutive answers produced is polynomial in $|x|$; \item \e{incremental polynomial time} if, after generating $N$ answers, the time to generate the $(N+1)$st answer is polynomial in $(|x|+N)$. \end{itemize} \subsection{Preliminary Insights} In this section we give some preliminary insights on our definitions so far. \begin{figure}[t] \centering \input{tdexamples.pspdftex} \caption{A graph $g$ and tree decompositions $d_1$, $d_2$ and $d_3$ of $g$. The decomposition $d_1$ is proper, but $d_2$ and $d_3$ are strictly subsumed by $d_1$, and hence, improper.} \label{fig:decompositions} \end{figure} The following proposition is folklore, but we give a proof for completeness. \begin{proposition}\label{proposition:cliquesInBags} Let $d$ be a tree decomposition of a graph $g$. Every clique of $g$ is contained in some bag of $d$. 
\end{proposition} \begin{qproof} Denote $d=(t,\beta)$ and let $C$ be a clique of $g$. Every node $v$ in $C$ defines a subtree of $t$, namely the subtree induced by the tree nodes whose bags contain $v$ (this set of tree nodes is connected due to the running-intersection property). Since $d$ covers the edges of $g$, every two nodes in $C$ must share some bag in $d$, and hence, their subtrees must share a vertex. Heggernes~\cite{Heggernes:2006} shows that every collection of subtrees of a tree satisfies the \e{Helly property}: if every two subtrees share a vertex, then there exists a vertex that is shared by all the subtrees. In particular, there exists a vertex of $t$ common to all of these subtrees, and its bag contains $C$. \end{qproof} The following proposition states that in a proper tree decomposition, there is no containment among bags. \begin{proposition} \label{proposition:antichain} If $d$ is a proper tree decomposition of a graph $g$, then $\mathrm{bags}(d)$ is an antichain w.r.t.~set inclusion. \end{proposition} \begin{qproof} We need to show that a proper tree decomposition cannot have two bags with one contained in the other. Assume, by way of contradiction, that $d$ is a tree decomposition of $g$ with two bags $B,C \in \mathrm{bags}(d)$ where $B \subseteq C$. Let $A$ be the second bag on the path from $B$ to $C$ (that is, the neighbor of $B$ on that path). Since $d$ is a tree decomposition and $A$ is on the path from $B$ to $C$, we get that $B=B \cap C \subseteq A$. Define $d'$ to be obtained from $d$ by removing $B$ and connecting $A$ to all other neighbors of $B$, as illustrated in Figure~\ref{fig:antichain}. We will show that $d'$ is a tree decomposition of $g$. The first two properties of a tree decomposition still hold because $A$ contains $B$. Consider the path between two bags $\alpha$ and $\beta$ of $d'$. If the path between them is the same as in $d$, the third property still holds. If it changed, then the path used to go through $B$, and the only new bag that may appear in this path is $A$. 
In this case, $\alpha \cap \beta \subseteq B \subseteq A$, and the third property holds as well. We have found a tree decomposition $d'$ of $g$ that strictly subsumes $d$, hence $d$ is improper; a contradiction. \end{qproof} Jordan~\cite{Jordan} shows the following characterization of chordal graphs by means of tree decompositions. \begin{citedtheorem}{Jordan~\cite{Jordan}}\label{theorem:Jordan} A graph $g$ is chordal if and only if it has a tree decomposition where all the bags are cliques of $g$. \end{citedtheorem} \begin{figure}[t] \centering \input{antichain.pspdftex} \caption{Obtaining a strictly subsuming tree decomposition $d'$ given a tree decomposition $d$ with $B \subseteq C$.} \label{fig:antichain} \end{figure} From Theorem~\ref{theorem:Jordan}, the following proposition easily follows. \begin{proposition}\label{proposition:sat} If $d$ is a tree decomposition of a graph $g$, then $\algname{saturate}(g,d)$ is a triangulation of $g$. \end{proposition} \begin{qproof} It is straightforward to show that $d$ is a tree decomposition of $\algname{saturate}(g,d)$. Hence, since every bag of $d$ is a clique of $\algname{saturate}(g,d)$, it follows from Theorem~\ref{theorem:Jordan} that $\algname{saturate}(g,d)$ is chordal. \end{qproof} The next proposition states that a chordal graph $g$ has a single proper tree decomposition, up to the equivalence $\mathrel{\equiv_{\mathsf{b}}}$, with the set of bags being precisely the set of maximal cliques. \begin{proposition}\label{prop:chordal-one-td} If $d$ is a proper tree decomposition of a chordal graph $g$, then $\mathrm{bags}(d)=\mathrm{\textsc{MaxClq}}(g)$. \end{proposition} \begin{qproof} According to Proposition~\ref{proposition:cliquesInBags}, every clique of $g$ is contained in some bag of $d$, and according to Theorem~\ref{theorem:Jordan}, $g$ has some tree decomposition, say $d'$, where all the bags are cliques of $g$. So we have that $d'\sqsubseteq d$. 
If $\mathrm{bags}(d)\not\subseteq\mathrm{bags}(d')$, then $d'$ strictly subsumes $d$, in contradiction to the fact that $d$ is proper. Hence $\mathrm{bags}(d)\subseteq\mathrm{bags}(d')$, meaning that the bags of $d$ are cliques of $g$. It thus follows that every maximal clique is a bag of $d$, or in notation, $\mathrm{\textsc{MaxClq}}(g)\subseteq\mathrm{bags}(d)$. Finally, Proposition~\ref{proposition:antichain} states that the bags of $d$ are an antichain w.r.t.~set inclusion, and hence, $\mathrm{bags}(d)\subseteq\mathrm{\textsc{MaxClq}}(g)$. We conclude that $\mathrm{bags}(d)=\mathrm{\textsc{MaxClq}}(g)$, as claimed. \end{qproof} The next theorem relates proper tree decompositions to minimal triangulations, and reduces the enumeration of the former into the enumeration of the latter. \begin{theorem}\label{thm:proper-mint} Let $g$ be a graph. There is a bijection $M$ between $\mathrm{\textsc{MinTri}}(g)$ and the equivalence classes of $\mathrel{\equiv_{\mathsf{b}}}$ over the proper tree decompositions of $g$. Moreover, given a minimal triangulation $h$ of $g$, the proper tree decompositions in the class $M(h)$ can be enumerated with polynomial delay. \end{theorem} \begin{qproof} Based on Proposition~\ref{prop:chordal-one-td}, we define $M$ to be the function that maps every $h\in \mathrm{\textsc{MinTri}}(g)$ to the equivalence class of the proper tree decompositions of $h$. We now prove that $M$ is as desired. \paragraph{$M$ has the right range.} Let $h$ be a minimal triangulation of $g$, and let $d$ be a tree decomposition in $M(h)$. Then $d$ is a proper tree decomposition of $h$, and therefore $d$ is a tree decomposition of $g$, and we need to show that $d$ is a \e{proper} tree decomposition of $g$. According to Proposition~\ref{prop:chordal-one-td}, we have $\mathrm{bags}(d)=\mathrm{\textsc{MaxClq}}(h)$, and therefore, $\algname{saturate}(g,d)=h$. Assume, by way of contradiction, that $d$ is improper. 
Then $d$ is strictly subsumed by some tree decomposition $d'$ of $g$, meaning that $d'\sqsubseteq d$ and $\mathrm{bags}(d)\not\subseteq\mathrm{bags}(d')$. Let $h'$ be the graph $\algname{saturate}(g,d')$. From Proposition~\ref{proposition:sat} it follows that $h'$ is a triangulation of $g$. From $d'\sqsubseteq d$ and the fact that every bag of $d$ is a clique of $h$, we conclude that $\mathsf{E}(h')\subseteq\mathsf{E}(h)$. Since $h$ is a minimal triangulation, we get that $\mathsf{E}(h')=\mathsf{E}(h)$, and therefore $h=h'$. This means that both $d$ and $d'$ are tree decompositions of $h$, and $d$ is strictly subsumed by $d'$, which contradicts the fact that $d$ is a proper tree decomposition of $h$. \paragraph{$M$ is injective.} Let $h_1$ and $h_2$ be two minimal triangulations such that $h_1 \neq h_2$. We need to show that $M(h_1)\neq M(h_2)$. Without loss of generality, assume that the edge $\set{u,v}$ is in $h_1$ but not in $h_2$. The nodes $u$ and $v$ are part of some maximal clique of $h_1$, so they share a bag in the decompositions of $M(h_1)$. But $u$ and $v$ are not jointly contained in any clique of $h_2$, so they do not share any bag in the decompositions of $M(h_2)$. Therefore, $M(h_1) \neq M(h_2)$, as claimed. 
\paragraph{$M$ is surjective.} Given a proper tree decomposition $d$ of $g$, we need to show that there exists a minimal triangulation $h$ of $g$ such that $d\in M(h)$. Consider the graph $h=\algname{saturate}(g,d)$. We will show that $h$ is a minimal triangulation, and that $d$ belongs to $M(h)$. We first show that $h$ is a minimal triangulation of $g$. According to Proposition~\ref{proposition:sat}, $h$ is a triangulation of $g$. Assume, by way of contradiction, that $h$ is not minimal. Then there exists a minimal triangulation $h'$ of $g$ that is obtained from $h$ by removing some edges; denote one of these edges by $e$. Consider a tree decomposition $d'\in M(h')$. The bag of $d$ that contains both endpoints of $e$ is a clique of $h$ but not of $h'$, whereas every bag of $d'$ is a clique of $h'$; therefore $\mathrm{bags}(d)\not\subseteq\mathrm{bags}(d')$. Also note that since $\mathsf{E}(h') \subseteq \mathsf{E}(h)$, every maximal clique of $h'$ is contained in some maximal clique of $h$, and therefore $d'\sqsubseteq d$. Then $d'$ strictly subsumes $d$, in contradiction to the fact that $d$ is proper. Finally, we need to show that $\mathrm{bags}(d) = \mathrm{\textsc{MaxClq}}(h)$. But this follows immediately from the observation that $d$ is a proper tree decomposition of $h$, and then applying Proposition~\ref{prop:chordal-one-td}. \paragraph{Enumerating proper tree decompositions.} Jordan~\cite{Jordan} shows that, given a chordal graph $h$, a tree over the bags that represent the maximal cliques of $h$ is a tree decomposition if and only if it is a maximum spanning tree, where the weight of an edge between two bags is the size of their intersection. Hence, our enumeration problem is reduced to enumerating all maximum spanning trees, which can be done with polynomial delay~\cite{DBLP:journals/ijcm/YamadaKW10}. Since Gavril~\cite{GAVR} has shown that the number of maximal cliques of a chordal graph is at most its number of nodes, we have a polynomial-delay algorithm for enumerating the tree decompositions. This concludes the proof. \end{qproof} \section{Enumerating Maximal Independent Sets on Succinct Graph Representations}\label{sec:SGR} Our algorithm for enumerating minimal triangulations is done within an abstract framework of \e{succinct graph representations}, where a graph may be exponentially larger than its representation, and we have access to the nodes and edges through efficient algorithms. 
Formally, a \e{Succinct Graph Representation} (\e{SGR}) is a triple $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$, where: \begin{itemize} \item $\mathcal{G}$ is a function that maps strings $x$, called \e{instances}, to graphs $\mathcal{G}(x)$; \item $A_{\mathsf{V}}$ is an enumeration algorithm that, given an instance $x$, enumerates the nodes of $\mathcal{G}(x)$; \item $A_{\mathsf{E}}$ is an algorithm that, given an instance $x$ and two nodes $v$ and $u$ of $\mathcal{G}(x)$, determines whether $v$ and $u$ are connected by an edge in $\mathcal{G}(x)$. \end{itemize} An SGR $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ is said to be \e{polynomial} if: \e{(a)} $A_{\mathsf{V}}$ enumerates with polynomial delay, \e{and (b)} $A_{\mathsf{E}}$ terminates in polynomial time; here, both polynomials are with respect to $|x|$. Observe that in a polynomial SGR, the (representation) size of every node $v$ of $\mathcal{G}(x)$ is polynomial in that of $x$ (since writing $v$ is within the polynomial delay). \begin{example}\label{example:msep} The \e{separator graph} of a graph $g$ is the graph that has the set $\mathrm{\textsc{MinSep}}(g)$ of minimal separators as its node set, and an edge between every two minimal separators that are crossing (i.e., $S,T\in\mathrm{\textsc{MinSep}}(g)$ such that $S\mathbin{\natural} T$)~\cite{DBLP:journals/dam/ParraS97}. Throughout this paper we denote by $\mathsf{MSep}$ the SGR $(\mathcal{G}^{\mathsf{ms}},A_{\mathsf{V}}^{\mathsf{ms}},A_{\mathsf{E}}^{\mathsf{ms}})$, where: \begin{itemize} \item $\mathcal{G}^{\mathsf{ms}}$ is a function that maps the representation of a graph $g$ to its separator graph $\mathcal{G}^{\mathsf{ms}}(g)$. \item $A_{\mathsf{V}}^{\mathsf{ms}}$ is an enumeration algorithm that, given a graph $g$, enumerates its set $\mathrm{\textsc{MinSep}}(g)$ of minimal separators. 
We can use here the algorithm of Berry et al.~\cite{conf/wg/BerryBC99} that enumerates $\mathrm{\textsc{MinSep}}(g)$ with polynomial delay. Specifically, the time between two consecutive minimal separators is $O(n^3)$, where $n$ is the number of nodes in $g$. \item $A_{\mathsf{E}}^{\mathsf{ms}}$ is an algorithm that, given a graph $g$ and two minimal separators $S$ and $T$, determines in polynomial time whether $S\mathbin{\natural} T$ (e.g., by removing $S$ and testing whether $T$ is split across multiple connected components). \end{itemize} In particular, $\mathsf{MSep}$ is a polynomial SGR. \qed\end{example} A polynomial SGR $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ is said to have a \e{polynomial expansion of independent sets} if both of the following hold. \begin{enumerate} \item There is a polynomial $p$ such that for all representations $x$ and independent sets $I$ of $\mathcal{G}(x)$ it holds that $|I|\leq p(|x|)$. \item There is a polynomial-time algorithm that, given $x$ and an independent set $I$ of $\mathcal{G}(x)$, either determines that $I$ is maximal or returns a node $v\notin I$ such that $I\cup\set{v}$ is independent. \end{enumerate} \subsection{Enumerating Maximal Independent Sets in SGRs}\label{sec:enumSGR} In this section we prove the following result. \begin{theorem}\label{thm:sgr-inc-ptime} Let $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ be a polynomial SGR. If $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ has a polynomial expansion of independent sets then there is an algorithm that, given a representation $x$, enumerates the maximal independent sets of $\mathcal{G}(x)$ in incremental polynomial time. \end{theorem} \input{alg.tex} The proof is via the algorithm \algname{EnumMaxIndependent} that is depicted in Figure~\ref{alg:EnumMaxIndependent}. This algorithm is an adaptation of the algorithm for computing full disjunctions in databases~\cite{DBLP:conf/vldb/CohenFKKS06,DBLP:journals/jcss/CohenKS08}. 
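Under item 2 of the definition above, extending an arbitrary independent set into a maximal one (which is what the procedure $\algname{ExtendInd}$ used below does) amounts to a simple greedy loop. A sketch (our own illustrative code; \texttt{expand} stands for the assumed polynomial-time expansion algorithm of item 2):

```python
# Sketch (ours) of the greedy extension enabled by a polynomial expansion
# of independent sets. `expand(x, I)` stands for the assumed algorithm:
# it returns None if I is already maximal, and otherwise some node v
# outside I such that I + {v} is still independent.

def extend_to_maximal(expand, x, I):
    """Extend the independent set I of G(x) into a maximal one."""
    I = set(I)
    while True:
        v = expand(x, I)
        if v is None:
            return I
        I.add(v)
```

Since every independent set has at most $p(|x|)$ nodes, the loop makes at most polynomially many calls to \texttt{expand}, so the extension runs in polynomial time.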
\subsubsection{Algorithm description.} The algorithm maintains two collections, $\mathcal{Q}$ and $\mathcal{O}$, for storing intermediate results (which are maximal independent sets). The set $\mathcal{O}$ stores the results that have already been printed, and $\mathcal{Q}$ stores those that are to be printed. Both collections feature logarithmic-time membership-testing and element-removal operations. In addition, the algorithm maintains a collection $\mathcal{V}$ of nodes of $\mathcal{G}(x)$. The collection $\mathcal{Q}$ is initialized with a single result, which is an arbitrary maximal independent set. This result is obtained through the procedure $\algname{ExtendInd}(x,I)$ that extends a given independent set $I$ into a maximal one. (Note that this procedure can be implemented in polynomial time when the SGR has a polynomial expansion of independent sets.) The sets $\mathcal{O}$ and $\mathcal{V}$ are initialized empty. The algorithm accesses the nodes of $\mathcal{G}(x)$ through an iterator object that is obtained by executing $A_{\mathsf{V}}(x)$, and features two polynomial-time operations: \begin{itemize} \item Boolean $\algname{hasNext}()$ that determines whether there are additional nodes to enumerate. \item $\algname{next()}$ that returns the next node in the iteration. \end{itemize} The algorithm applies the iteration of line~5 until $\mathcal{Q}$ becomes empty, and then it terminates. In every iteration, the algorithm pops an element from $\mathcal{Q}$, prints it, and stores it in $\mathcal{O}$ (lines~6--8). The algorithm then iterates through the nodes in $\mathcal{V}$, and for each node $v$ it applies (lines~10--13) what we call \e{extension of $J$ in the direction of $v$}: it generates the set $J_v$ that consists of $v$ and all the nodes in $J$ that are non-neighbors of $v$, and extends $J_v$ into an arbitrary maximal independent set $K$, again using $\algname{ExtendInd}(x,J_v)$. 
If $K$ is in neither $\mathcal{Q}$ nor $\mathcal{O}$, then $K$ is added to $\mathcal{Q}$. Finally, the algorithm tests whether it is the case that $\mathcal{Q}$ is empty and the node iterator has additional nodes to process (line~14). While this is the case, the algorithm repeats the following procedure (lines~16--21): generate the next node, add it to $\mathcal{V}$, and extend every result in $\mathcal{O}$ in the direction of every node in $\mathcal{V}$. \subsubsection{Correctness.} We now prove the correctness of the algorithm. \begin{lemma}\label{lemma:sgr-enum-correct} Let $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ be an SGR, and let $x$ be an instance. The algorithm \algname{EnumMaxIndependent}$(x)$ enumerates the maximal independent sets of $\mathcal{G}(x)$. \end{lemma} \begin{qproof} Lines~\ref{algline:checkExistBeg}-\ref{algline:checkExistEnd} of the algorithm guarantee that only maximal independent sets are enumerated and that each set is printed only once. We now prove that every maximal independent set of $\mathcal{G}(x)$ is printed. When the algorithm terminates, $\mathcal{Q}=\emptyset$. Therefore, in the previous iteration, the loop in line~\ref{algline:hasNextLoop} could only have terminated due to $\mathrm{iterator}.\algname{hasNext}()$ returning false. Therefore, upon termination, we have $\mathcal{V} = \mathsf{V}(\mathcal{G}(x))$. Suppose, by way of contradiction, that there is some maximal independent set $H'$ that is not printed by the algorithm. Let $H$ be a maximal independent set of $\mathcal{G}(x)$, among all the printed ones, that maximizes the intersection $H_m=H \cap H'$. Such a set, $H$, must exist because the algorithm prints at least one maximal independent set. Since $H$ is printed and $H'$ is not, we have $H \neq H'$; moreover, since $H'$ is maximal, $H' \not\subseteq H$, and hence there is some node $w \in H' \setminus H$. At this point we have established that both the node $w$ was generated and the set $H$ was printed before the algorithm terminated. We divide into two cases as follows. 
\begin{enumerate} \item The set $H$ was printed before the node $w$ was generated. When $w$ is generated (in line \ref{algline:generateNode}), we have $H \in \mathcal{O}$. During this iteration, the set $H_w=\{w\} \cup \{u \in H\mid \neg A_{\mathsf{E}}(x,w,u)\}$ will be generated and expanded to a maximal independent set $K \supseteq H_w$. \item The node $w$ was generated before the set $H$ was printed. Let us look at the time $H$ is printed and inserted into $\mathcal{O}$. Since at this point $w \in \mathcal{V}$, during the iteration of line \ref{algline:nodeLoop1} the set $H_w=\{w\} \cup \{u \in H\mid \neg A_{\mathsf{E}}(x,w,u)\}$ will be generated and expanded to a maximal independent set $K \supseteq H_w$. \end{enumerate} So we have established that before the algorithm terminates, the set $H_w=\{w\} \cup \{u \in H\mid \neg A_{\mathsf{E}}(x,w,u)\}$ will be generated and expanded to a maximal independent set $K \supseteq H_w$. Furthermore, $H_m \cup \{w\} \subseteq H_w \subseteq K$ (because $H_m \subseteq H'$, and $H'$, being independent and containing $w$, contains no neighbor of $w$). According to the algorithm, one of the following options must hold: (1) $K$ is inserted into $\mathcal{Q}$, (2) $K$ is already in $\mathcal{Q}$, or (3) $K$ was in $\mathcal{Q}$ in the past and is now in $\mathcal{O}$. Since the algorithm prints every maximal independent set that is inserted into $\mathcal{Q}$, the set $K$ is eventually printed; as $K \cap H' \supseteq H_m \cup \{w\}$ is strictly larger than $H_m$, this contradicts the choice of $H$ and, hence, the existence of $H'$. 
\end{lemma} \begin{qproof} We prove the claim by induction on the number of times line~\ref{algline:hasNextLoop} is executed. We denote by $\mathcal{V}_j$ ($j \geq 1$) the set $\mathcal{V}$ when line~\ref{algline:hasNextLoop} is reached for the $j$th time. Clearly, $\mathcal{V}_1=\emptyset$, so the claim holds. Assume that the claim holds for $\mathcal{V}_j$; we prove it for $\mathcal{V}_{j+1}$. If $\mathcal{V}_{j+1} = \mathcal{V}_j$, then by the induction hypothesis the claim holds. Otherwise, $\mathcal{V}_{j+1}=\mathcal{V}_j \cup \{w\}$ where $w \notin \mathcal{V}_j$. If $\mathcal{V}_{j+1} \subseteq \mathcal{U}$, then we are done. Otherwise, we have that $w \notin \mathcal{U}$ at the time the loop in line~\ref{algline:nodeLoop} was executed for the $j$th time. Let us examine the first iteration of the loop in line~\ref{algline:printedLoop} that is executed with vertex $v=w$. (Such an iteration must occur after at most $|\mathcal{V}_j|$ iterations of line~\ref{algline:nodeLoop}.) By this time, either a maximal independent set containing the vertex $w$ has been generated and inserted into $\mathcal{Q}$, in which case $w \in \mathcal{U}$; or no maximal independent set in $\mathcal{O} \cup \mathcal{Q}$ contains the node $w$, in which case the set (containing $w$) generated in line~\ref{algline:genMIS} will be added to $\mathcal{Q}$, and thus $w \in \mathcal{U}$. Therefore, the next time line~\ref{algline:hasNextLoop} is reached, we have $\mathcal{V}_{j+1}=\mathcal{V}_j \cup \{w\} \subseteq \mathcal{U}$, as required. \end{qproof} \begin{lemma}\label{lemma:sgr-enum-time} Suppose that $(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ is an SGR with a polynomial expansion of independent sets, such that $|I| \leq p(x)$ for every instance $x$ and independent set $I$ of $\mathcal{G}(x)$.
\algname{EnumMaxIndependent}$(x)$ computes the maximal independent sets of $\mathcal{G}(x)$ with an $O(p(x)^2 N^3 (s(x) + \log N)+Np(x)a(x))$ delay, where $N$ is the number of sets already generated, $s(x)$ is a bound on the running time of \algname{ExtendInd}$(x,I)$, and $a(x)$ is the delay of the enumeration algorithm $A_{\mathsf{V}}$. \end{lemma} \begin{qproof} Since $\mathcal{G}$ has a polynomial expansion of independent sets, we have that $|\mathcal{U}| \leq p(x)N$. Furthermore, by Lemma~\ref{lem:inclusion}, we have that $\mathcal{V} \subseteq \mathcal{U}$ each time line~\ref{algline:hasNextLoop} is reached. Therefore, the loop in line~\ref{algline:nodeLoop} can run at most $|\mathcal{U}|+1$ times, that is, $O(p(x)N)$ times. Hence, the code block of the internal loop in line~\ref{algline:printedLoop} will be executed a total of $O(p(x)N^2)$ times. After this iteration, one of the following can occur: (1) $\mathcal{Q} \neq \emptyset$, in which case a maximal independent set will be printed; (2) $\mathrm{iterator}.\algname{hasNext}()=\mathrm{false}$, in which case the algorithm will terminate after printing the contents of $\mathcal{Q}$; or (3) $\mathcal{Q}=\emptyset$ and $\mathrm{iterator}.\algname{hasNext}()=\mathrm{true}$, in which case let $w$ denote the node generated in line~\ref{algline:generateNode}. In options (1) and (2), we arrive at a delay of~$O(p(x)N^2(s(x)+\log N)+a(x))$. We now analyze the runtime for option (3). There are two cases. If $w \notin \mathcal{U}$, then a new maximal independent set containing $w$ will be generated and inserted into $\mathcal{Q}$ (as in case~1 above). Otherwise, the loop of line~\ref{algline:nodeLoop} may be executed without generating a maximal independent set. The number of such iterations is bounded by the number of vertices that are part of some already generated maximal independent set but have not yet been returned by the iterator, that is, by $|\mathcal{U} \setminus \mathcal{V}|=O(Np(x))$.
Once these vertices are generated, a node $w \notin \mathcal{U}$ must be generated or the algorithm will terminate. Overall, before the next maximal independent set is generated, at most $O(|\mathcal{U} \setminus \mathcal{V}|)=O(Np(x))$ vertices are returned by the iterator in time $O(Np(x)a(x))$. Also, the number of times the code block in the loop of line~\ref{algline:printedLoop} is executed is $O(N^3p(x)^2)$. Summarizing the above three cases, we have that the algorithm will either generate a new maximal independent set or terminate in time $O(p(x)^2 N^3 (s(x) + \log N)+Np(x)a(x))$. \end{qproof} From Lemmas~\ref{lemma:sgr-enum-correct} and~\ref{lemma:sgr-enum-time} we establish Theorem~\ref{thm:sgr-inc-ptime}. \section{Enumerating Minimal Triangulations}\label{sec:enumMT} Recall the SGR $\mathsf{MSep}$ of Example~\ref{example:msep}. In this section, we will use known results to reduce the problem of enumerating the minimal triangulations of a graph to the problem of enumerating the maximal independent sets for $\mathsf{MSep}$. We will further show that $\mathsf{MSep}$ has polynomial expansion of independent sets. Consequently, we will apply Theorem~\ref{thm:sgr-inc-ptime} to conclude that the minimal triangulations can be enumerated in incremental polynomial time. \subsection{Reduction to Enumerating Maximal Sets of Pairwise-Parallel Minimal Separators} We will use the following notation. Let $g$ be a graph. We denote by $\mathrm{\textsc{ClqMinSep}}(g)$ the set of minimal separators $S$ of $g$, such that $S$ is a clique of $g$. Let $\varphi$ be a subset of $\mathrm{\textsc{MinSep}}(g)$. We denote by $g_{[\varphi]}$ the graph that results from saturating the minimal separators in $\varphi$. Parra and Scheffler~\cite{DBLP:journals/dam/ParraS97} have shown the following connection between minimal triangulations and maximal sets of pairwise-parallel minimal separators (that is, every two are non-crossing). 
\begin{citedtheorem}{Parra and Scheffler~\cite{DBLP:journals/dam/ParraS97}}\label{thm:ParraS97} Let $g$ be a graph. \begin{enumerate} \item Let $\varphi=\{S_1,...,S_k\}$ be a maximal set of pairwise parallel minimal separators of $g$. Then $h=g_{[\varphi]}$ is a minimal triangulation of $g$, and $\mathrm{\textsc{MinSep}}(h)=\varphi$. \item Let $h$ be a minimal triangulation of $g$. Then $\mathrm{\textsc{MinSep}}(h)$ is a maximal set of pairwise parallel minimal separators in $g$, and $h=g_{[\mathrm{\textsc{MinSep}}(h)]}$. \end{enumerate} \end{citedtheorem} We conclude the following corollary, which gives the desired reduction. Recall that the graph $\mathcal{G}^{\mathsf{ms}}(g)$ is defined in Example~\ref{example:msep}. \begin{corollary}\label{cor:triang-to-minsep} Given a graph $g$, there is a polynomial-time-computable bijection between $\mathrm{\textsc{MinTri}}(g)$ and the maximal independent sets of $\mathcal{G}^{\mathsf{ms}}(g)$. \end{corollary} \subsection{Polynomial Expansion of Independent Sets} It is left to prove that the SGR $\mathsf{MSep}$ has polynomial expansion of independent sets. (The definition is in Section~\ref{sec:SGR}.) Theorem~\ref{thm:ParraS97}, combined with a result by Rose~\cite{Rose1970597}, gives the first of the two conditions. \begin{corollary}\label{cor:MISsize} Let $g$ be a graph. If $I$ is a (maximal) independent set in $\mathcal{G}^{\mathsf{ms}}(g)$, then $|I|<|\mathsf{V}(g)|$. \end{corollary} \begin{qproof} Since every independent set is contained in a maximal one, it suffices to consider a maximal $I$; that is, $I$ is a maximal set of pairwise parallel minimal separators of $g$. Then by Theorem~\ref{thm:ParraS97}, $h=g_{[I]}$ is a minimal triangulation of $g$, and $\mathrm{\textsc{MinSep}}(h)=I$. The graph $h$ is chordal, hence from Rose~\cite{Rose1970597} we get that $|\mathrm{\textsc{MinSep}}(h)|<|\mathsf{V}(h)|=|\mathsf{V}(g)|$. \end{qproof} We now turn to proving the second condition. We will do so by describing a general procedure for extending a set of pairwise-parallel minimal separators of a graph $g$ to a maximal such set.
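Before turning to the procedure itself, it may help to see the two elementary operations of this section in code: the crossing test (the edge relation of $\mathcal{G}^{\mathsf{ms}}(g)$) and the saturation operator $g_{[\varphi]}$. The following Python sketch is purely illustrative; it is not the implementation used in the paper, it assumes graphs are given as adjacency-set dictionaries, and the helper names (\texttt{components}, \texttt{crossing}, \texttt{saturated}) are our own:

```python
from itertools import combinations


def components(adj, removed):
    """Connected components of the graph after deleting `removed` vertices."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps


def crossing(adj, s1, s2):
    """True iff s1 and s2 cross: s2 - s1 meets two components of g - s1."""
    hit = [c for c in components(adj, s1) if c & (s2 - s1)]
    return len(hit) > 1


def saturate(adj, sep):
    """Saturate `sep`: add the edges that turn it into a clique (in place)."""
    for u, v in combinations(sep, 2):
        adj[u].add(v)
        adj[v].add(u)


def saturated(adj, phi):
    """g_[phi]: a copy of the graph with every separator in phi saturated."""
    out = {v: set(nbrs) for v, nbrs in adj.items()}
    for sep in phi:
        saturate(out, sep)
    return out
```

For example, on the $4$-cycle $a{-}b{-}c{-}d$ the minimal separators $\{a,c\}$ and $\{b,d\}$ cross, whereas on a path every two minimal separators are parallel.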
Algorithm~\ref{alg:BBExtend}, \algname{ExtendInd}, can apply any known polynomial-time triangulation heuristic, referred to as $\algname{Triangulate}$, as a black box. It uses the following procedures as subroutines. \begin{itemize} \item $\algname{Saturate}(g,S)$ receives a graph $g$ and a set $S \subseteq \mathsf{V}(g)$ of vertices, and saturates $S$ (i.e., modifies $g$ such that $S$ becomes a clique). \item $\algname{Triangulate}(g)$ receives a graph $g$ and returns a triangulation $g'$ of $g$. We assume that this procedure runs in time that is a polynomial function of $(|\mathsf{V}(g)|+|\mathsf{E}(g)|)$. \item $\algname{MinTriSandwich}(g,g')$ receives a graph $g$ and a triangulation $g'$ of $g$, and returns a \e{minimal} triangulation of $g$. We note that, using one of the known algorithms~\cite{Dahlhaus1997,DBLP:journals/siammax/Peyton01,Blair2001125}, this procedure runs in time that is polynomial in the size of the graph. \item $\algname{ExtractMinSeps}(h)$ receives a chordal graph $h$ and returns its set of minimal separators. Using the algorithm of Kumar~\cite{Kumar1998155}, the execution time of this procedure is linear in the size of $h$. \end{itemize} Algorithm~\ref{alg:BBExtend} receives as input a graph $g$ and a set $\varphi$ of pairwise-parallel minimal separators. It then proceeds by saturating the separators in $\varphi$, resulting in $g_{[\varphi]}$. At this stage, it passes $g_{[\varphi]}$ to the triangulation heuristic $\algname{Triangulate}$. We note that $\algname{Triangulate}$ does not have to result in a minimal triangulation. It can involve, for example, a procedure which constructs a tree decomposition, from which a triangulation can be extracted (Proposition~\ref{proposition:sat}). Transforming a non-minimal triangulation to one that is minimal, by removing \e{fill} edges, is called the \e{minimal triangulation sandwich problem}~\cite{Heggernes2006297}.
Various polynomial-time algorithms exist for this problem~\cite{Dahlhaus1997,DBLP:journals/siammax/Peyton01}, and these were reported to perform well in practice~\cite{Blair2001125}. So, at this stage we have a minimal triangulation $g_t$ of $g_{[\varphi]}$. Theorem~\ref{thm:HeggernesSaturation} (Part~\ref{item:Heggernes4th}) shows that $g_t$ is also a minimal triangulation of $g$. Lemma~\ref{lemma:Extension} shows that the set of minimal separators of $g_t$ contains $\varphi$. Finally, we can apply the algorithm of Kumar~\cite{Kumar1998155} to extract the minimal separators of the (chordal) graph $g_t$, in linear time. {\def5.5in{2.5in} \begin{algseries}{t}{An algorithm for extending a set $\varphi$ of pairwise-parallel minimal separators \label{alg:BBExtend}} \begin{insidealg}{ExtendInd}{$g$,$\varphi$} \STATE $g_t \mathbin{{:}{=}} \algname{Triangulate}(g_{[\varphi]})$ \label{algline:triHrs} \STATE $h \mathbin{{:}{=}} \algname{MinTriSandwich}(g_{[\varphi]},g_t)$ \label{algline:sandwich} \STATE \textbf{return} $\algname{ExtractMinSeps}(h)$ \label{algline:ExtractSeps} \end{insidealg} \end{algseries}} \subsubsection{Correctness.} To prove correctness of the algorithm, we need the following result by Heggernes~\cite{Heggernes2006297}. \begin{citedtheorem}{Heggernes~\cite{Heggernes2006297}}\label{thm:HeggernesSaturation} Given a graph $g$, let $\varphi$ be an arbitrary set of pairwise non-crossing minimal separators of $g$. Obtain a graph $g_{[\varphi]}$ by saturating each separator in $\varphi$. \begin{enumerate} \item \label{item:Heggernes1st} $\varphi \subseteq \mathrm{\textsc{ClqMinSep}}(g_{[\varphi]})$, that is, $\varphi$ consists of clique minimal separators of $g_{[\varphi]}$. \item \label{item:Heggernes2nd} $\mathrm{\textsc{ClqMinSep}}(g) \subseteq \mathrm{\textsc{MinSep}}(g_{[\varphi]})$; that is, every clique minimal separator of $g$ is a (clique) minimal separator of $g_{[\varphi]}$.
\item \label{item:Heggernes4th} Every minimal triangulation of $g_{[\varphi]}$ is a minimal triangulation of $g$. \end{enumerate} \end{citedtheorem} Lemma~\ref{lemma:Extension} below builds on Theorems~\ref{thm:ParraS97} and~\ref{thm:HeggernesSaturation}. \begin{lemma}\label{lemma:Extension} Let $g$ be a graph, and $\varphi$ a set of pairwise-parallel minimal separators of $g$. Let $g_t$ be a minimal triangulation of $g_{[\varphi]}$. Then $\varphi \subseteq \mathrm{\textsc{MinSep}}(g_t)$. \end{lemma} \begin{proof} By Part~\ref{item:Heggernes1st} of Theorem~\ref{thm:HeggernesSaturation} we have that $\varphi \subseteq \mathrm{\textsc{ClqMinSep}}(g_{[\varphi]})$. Since $g_t$ is a minimal triangulation of $g_{[\varphi]}$, by Theorem~\ref{thm:ParraS97}, $g_t$ is the result of saturating a maximal set, say $\varphi'$, of pairwise-parallel minimal separators of $g_{[\varphi]}$. Therefore, applying Part~\ref{item:Heggernes2nd} of Theorem~\ref{thm:HeggernesSaturation} to $g_{[\varphi]}$ and $\varphi'$, we have $\mathrm{\textsc{ClqMinSep}}(g_{[\varphi]}) \subseteq \mathrm{\textsc{MinSep}}(g_t)$. This implies that $\varphi \subseteq \mathrm{\textsc{MinSep}}(g_t)$, as claimed. \end{proof} We then conclude the correctness of the algorithm. \begin{lemma}\label{lemma:BBCorrectness} Let $\varphi$ be a set of pairwise-parallel minimal separators of a graph $g$. Algorithm~\ref{alg:BBExtend} returns a maximal set $I$ of pairwise non-crossing minimal separators of $g$ such that $\varphi \subseteq I$. Furthermore, the algorithm terminates in polynomial time. \end{lemma} \begin{proof} Assuming the correctness of the procedures $\algname{Triangulate}$ and $\algname{MinTriSandwich}$, the graph $g_t$ is a minimal triangulation of $g_{[\varphi]}$. By Part~\ref{item:Heggernes4th} of Theorem~\ref{thm:HeggernesSaturation}, we have that $g_t$ is a minimal triangulation of $g$. Consequently, from Theorem~\ref{thm:ParraS97} we get that $\mathrm{\textsc{MinSep}}(g_t)=I$ is a maximal set of non-crossing minimal separators of $g$.
By Lemma~\ref{lemma:Extension} it holds that $\varphi \subseteq \mathrm{\textsc{MinSep}}(g_t)$, making $I$ an extension of $\varphi$. All of the procedures in Algorithm~\ref{alg:BBExtend} run in time that is polynomial in the size of the graph, making the algorithm polynomial as well. \end{proof} From Corollary~\ref{cor:MISsize} and Lemma~\ref{lemma:BBCorrectness} we get the main result of this part. \begin{theorem}\label{thm:poly-exp} The SGR $\mathsf{MSep}$ has a polynomial expansion of independent sets. \end{theorem} \subsection{Main Result} From Theorems~\ref{thm:poly-exp} and~\ref{thm:sgr-inc-ptime} we conclude that it is possible to enumerate the maximal independent sets of $\mathsf{MSep}$ in incremental polynomial time. Applying the bijections of Corollary~\ref{cor:triang-to-minsep} and Theorem~\ref{thm:proper-mint}, we get the main result of this paper. \begin{theorem}\label{thm:main} There are algorithms that, given a graph $g$, enumerate in incremental polynomial time: \begin{enumerate} \item The minimal triangulations of $g$. \item The proper tree decompositions of $g$. \end{enumerate} \end{theorem} \eat{ \begin{theorem} $\mathsf{MSep}=(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ is polynomial. \end{theorem} \begin{qproof} The algorithm $A_{\mathsf{V}}$ that enumerates the minimal separators of $\mathcal{G}(g)$ runs in polynomial delay, within time $O(n^3)$ between consecutive minimal separators. Since the string representation of a graph $g$ is at least as large as the cardinality of its vertex set, the first requirement is met. By the definition of separator graph, testing for the existence of an edge $(u,v) \in \sigma_g$ is equivalent to testing whether $u \mathbin{\natural} v$. Algorithm $A_{\mathsf{E}}$ will run a depth first search over the graph $g \setminus u$ ($u$ represents a minimal vertex separator), beginning from some vertex in the separator $v$.
The algorithm returns true if not all the members of $v$ are part of the resulting DFS tree (belong to the same connected component). This can be done in linear time. \end{qproof} } {\def5.5in{4in} \begin{algseries}{t}{An algorithm that decomposes a connected graph using a set of pairwise non-crossing minimal separators, $\varphi$ \label{alg:Decompose}}% \begin{insidealg}{Decompose}{$g$,$\varphi$} \STATE $\mathcal{Q}\mathbin{{:}{=}}\emptyset$ \STATE $\mathcal{O}\mathbin{{:}{=}}\emptyset$ \STATE $\algname{Seps}(g) \mathbin{{:}{=}} \varphi$ \COMMENT{The separators that are contained in $g$} \STATE $\mathcal{Q}\mathbin{{:}{=}} \{g\}$ \WHILE{$\mathcal{Q} \neq \emptyset$} \STATE $c \mathbin{{:}{=}} \mathcal{Q}.\algname{pop}()$ \IF{$\algname{Seps}(c) = \emptyset$} \STATE $\mathcal{O} \gets \mathcal{O} \cup \{c\}$ \ELSE \STATE Select a separator $S \in \algname{Seps}(c)$ \STATE $\algname{saturate}(c,S)$ \STATE $\mathcal{C}(S) \mathbin{{:}{=}} \algname{getComponents}(c,S)$ \label{algline:decomposeGetComps} \FORALL{$c' \in \mathcal{C}(S)$} \STATE $\algname{Seps}(c') \mathbin{{:}{=}} \{S' \in \algname{Seps}(c) \setminus \{S\} : S' \subseteq \mathsf{V}(c')\}$ \ENDFOR \STATE $\mathcal{Q} \gets \mathcal{Q} \cup \mathcal{C}(S)$ \ENDIF \ENDWHILE \STATE return $\mathcal{O}$ \end{insidealg} \end{algseries}} {\def5.5in{5.5in} \begin{algseries}{t}{An algorithm that finds a maximal set, $I$, of pairwise non-crossing minimal separators of a graph $g$ that contains the input set $\varphi$ \label{alg:ExtendInd}} \begin{insidealg}{ExtendInd}{$g$,$\varphi$} \STATE $\mathcal{Q}\mathbin{{:}{=}}\emptyset$ \STATE $I \mathbin{{:}{=}} \varphi$ \IF{$\varphi \neq \emptyset$} \STATE $\mathcal{Q} \mathbin{{:}{=}} \algname{Decompose}(g,\varphi)$ \COMMENT{Call procedure \ref{alg:Decompose}} \ELSE \STATE $\mathcal{Q}\mathbin{{:}{=}} \{g\}$ \ENDIF \WHILE{$\mathcal{Q}\neq\emptyset$} \label{algline:componentsLoop} \STATE $c\mathbin{{:}{=}}\mathcal{Q}.\algname{pop}()$ \IF{$!\algname{isClique}(c)$} \STATE
$(u,v) \mathbin{{:}{=}}$ a pair such that $u,v \in \mathsf{V}(c)$ and $(u,v)\notin \mathsf{E}(c)$ \STATE $S \mathbin{{:}{=}} \algname{findMinSep}(\{u,v\},c)$ \label{algline:findMinSep} \STATE $\algname{saturate}(c,S)$ \STATE $\mathcal{C}(S) \mathbin{{:}{=}} \algname{getComponents}(c,S)$ \COMMENT{Each $c' \in \mathcal{C}(S)$ contains $S\cap N_g(c')$ as a clique} \FORALL{$c' \in \mathcal{C}(S)$} \label{algline:forComponents} \STATE $I \mathbin{{:}{=}} I \cup \{N_g(c') \cap S\}$ \COMMENT{In order to include contained separators} \label{algline:addSep} \ENDFOR \STATE $\mathcal{Q} \mathbin{{:}{=}} \mathcal{Q} \cup \mathcal{C}(S)$ \ENDIF \ENDWHILE \STATE $\algname{Return}$ $I$ \end{insidealg} \end{algseries}} \section{A Generic Algorithm for Expanding Independent Sets}\label{sec:GenExtInd} In this section we provide a generic procedure for extending a parallel set of minimal separators to one that is maximal. The procedure is provided in addition to the one described in Section~\ref{sec:enumMT} and enables triangulating the graph by applying separator-based approaches \cite{DBLP:journals/algorithmica/Amir10}. Algorithm \ref{alg:ExtendInd} is a procedure for extending a set $\varphi$ of pairwise-parallel minimal separators of a graph $g$ into one that is maximal. Let $c$ be a connected component of $g$. We denote $\mathrm{\textsc{seps}}(c)=\{S \in \varphi: S \subseteq \mathsf{V}(c)\}$. Clearly, $\mathrm{\textsc{seps}}(g)=\varphi$. The algorithm uses the following subroutines: \begin{itemize} \item $\algname{saturate}(c,S)$ receives a graph $c$ and a set of vertices $S \subseteq \mathsf{V}(c)$ and modifies $c$ such that $S$ is a clique in $c$. The complexity of this procedure is $O(n^2)$. \item $\algname{getComponents}(g,S)$ receives a graph $g$ and a set of pairwise connected vertices $S \subseteq \mathsf{V}(g)$ and returns a set of induced subgraphs of $g$ as follows. Let $c_1,c_2,\ldots,c_k$ be the connected components of $g_{|\mathsf{V}(g) \setminus S}$.
The procedure returns the set $\mathcal{C}(S)=\bigcup_{i \in [1,k]}\{g_{|c_i\cup N_g(c_i)}\}$. Computing the connected components of a graph can be performed in linear time using depth-first search. \item $\algname{findMinSep}(\{u,v\},c)$ receives a graph $c$ with two non-adjacent vertices $u,v \in \mathsf{V}(c)$ and returns a minimal separator, $S$, for this pair. A separator $S$ that is ``close'' to $u$, i.e., $S \subseteq N_c(u)$, can be found in linear time. \item $\algname{isClique}(c)$ receives a graph $c$, and returns true if $c$ is a clique. The complexity of this procedure is $O(n^2)$. \item $\algname{Decompose}$ (Algorithm~\ref{alg:Decompose}) receives a connected graph $g$ and a set of pairwise parallel separators $\varphi$, and returns the set of connected components that result from decomposing $g$ according to the separators of $\varphi$. \end{itemize} Algorithm~\ref{alg:ExtendInd} first decomposes the graph into connected components according to the separator set $\varphi$ received as input. Then each connected component $c$ is processed in turn. If it contains some pair $(u,v)$ of non-adjacent vertices, then a minimal separator $S$ for this pair is found, saturated, and used to further decompose the connected component. The loop in line \ref{algline:forComponents} iterates over the resulting connected components and updates $I$ with the minimal separators contained in $S$. Lemmas~\ref{lem:singleSep} and~\ref{lem:disjointSeps} prove general properties of the decomposition process by parallel minimal separators and are used to prove the correctness and complexity of Algorithms \ref{alg:Decompose} (\algname{Decompose}) and \ref{alg:ExtendInd} (\algname{ExtendInd}). \begin{lemma}\label{lem:singleSep} Let $g$ be a connected graph, let $\varphi$ be a set of pairwise non-crossing minimal separators, and let $\mathcal{C}(S)=\algname{getComponents}(g,S)$, where $S \in \varphi$ and the nodes of $S$ are pairwise connected.
Then: \begin{enumerate} \item $S$, and its subsets, are no longer minimal separators in any of the subgraphs in $\mathcal{C}(S)$. \item For all $S'\in \varphi \setminus \{S\}$, there exists some subgraph $c \in \mathcal{C}(S)$ such that $S' \subseteq c$. \end{enumerate} \end{lemma} \begin{qproof} The subroutine $\algname{getComponents}$ returns the set of subgraphs $\mathcal{C}(S)=\{g_i\}$, where $g_i=g_{|c_i\cup N_g(c_i)}$ is the subgraph induced by the connected component $c_i$ of $g \setminus S$ and its neighbors $N_g(c_i)$. Since $S$ is a minimal separator, for every connected component $c_i$ of $g \setminus S$ we have that $N_g(c_i) \subseteq S$. Let us assume, by way of contradiction, that there is some $g_i \in \mathcal{C}(S)$ with two non-adjacent nodes $u,v \in \mathsf{V}(g_i)$ that are separated by $S' \subseteq S$. It cannot be the case that $u,v \in N_g(c_i)$, because $N_g(c_i) \subseteq S$ and the nodes of $S$ are pairwise connected, so $u$ and $v$ would be adjacent. If $u,v \in \mathsf{V}(c_i)$, then, since $c_i$ is connected and disjoint from $S \supseteq S'$, there is a $(u,v)$-path inside $c_i$ that avoids $S'$, a contradiction. Finally, assume that $u \in N_g(c_i) \setminus S'$ and $v \in \mathsf{V}(c_i)$. Since $c_i$ is a connected component and $u$ is adjacent to some node $w \in c_i$, we have a path from $u$ to $v$ (via $w$) that does not pass through $S'$, in contradiction to the fact that $S'$ is a $(u,v)$-separator. For the second claim, assume that there is an $S' \in \varphi \setminus \{S\}$ that is not contained in any component of $\mathcal{C}(S)$. Then $S'$ must span at least two components in $\mathcal{C}(S)$. This means that $S$ crosses $S'$, contradicting the assumption that the members of $\varphi$ are pairwise non-crossing.
Let $c_1$ and $c_2$ be distinct subgraphs in $\mathcal{C}(S)$. Then $\mathrm{\textsc{seps}}(c_1) \cap \mathrm{\textsc{seps}}(c_2)=\emptyset$, where $\mathrm{\textsc{seps}}(c)$ denotes the minimal separators in component $c$. \end{lemma} \begin{qproof} Let us assume, by contradiction, that there is some $S' \in \mathrm{\textsc{seps}}(c_1) \cap \mathrm{\textsc{seps}}(c_2)$. By Lemma \ref{lem:singleSep}, we know that $S' \not\subseteq S$. Therefore, there is some node $w \in S' \setminus S$ such that $w \in c_1 \cap c_2$. But this means that $c_1$ and $c_2$ are connected in $g_{|\mathsf{V}(g)\setminus S}$ (via $w$), in contradiction to the assumption that these are distinct components of $\mathcal{C}(S)$. \end{qproof} Lemma \ref{lem:decomposeCorrectness} proves the correctness of Algorithm \ref{alg:Decompose}, while Lemma~\ref{lem:DecomposeRuntime} applies the arguments of Lemma~\ref{lem:seperatorGenOnce} to prove its complexity. \begin{lemma}\label{lem:decomposeCorrectness} Let $g$ denote a connected graph provided as input to $\algname{Decompose}$. For every pair of nodes $u,v \in \mathsf{V}(g)$, the nodes $u$ and $v$ will reside in distinct components $c_u,c_v \in \mathcal{O}$ if and only if they are separated by some $S \in \varphi$. \end{lemma} \begin{qproof} If $u$ and $v$ are in distinct components $c_u,c_v \in \mathcal{O}$, then this must be the result of a separation applied in line \ref{algline:decomposeGetComps} using one of the separators $S \in \varphi$. Conversely, let $S \in \varphi$ be a $(u,v)$-separator and assume, by contradiction, that at the end of the procedure $u$ and $v$ reside in a common component $c \in \mathcal{O}$. We first show that $u$ and $v$ cannot be connected by an edge in $c$. If this were the case, then either $(u,v) \in \mathsf{E}(g)$, contradicting the fact that $S$ separates $u$ and $v$, or the edge is a result of saturating some separator $S'\in \varphi$, in which case $S$ crosses $S'$, again a contradiction.
Therefore, there is a path from $u$ to $v$ in $c$; since $S$ separates $u$ and $v$ in $g$, this path must intersect $S$, and thus $S \cap \mathsf{V}(c) \neq \emptyset$. If $S \not\subseteq c$, then it must be crossed by some other separator $S'$, and we arrive at a contradiction. Otherwise, $S\subseteq c$, and by definition $S \in \mathrm{\textsc{seps}}(c)$. But this cannot be the case because $c$ is inserted into $\mathcal{O}$ only if $\mathrm{\textsc{seps}}(c)=\emptyset$ (line 7). Hence, we arrive at a contradiction to the existence of a component $c$ containing both $u$ and $v$. \end{qproof} The lemmas that follow prove the correctness and complexity of Algorithm~\ref{alg:ExtendInd}. \begin{lemma}\label{lem:seperatorGenOnce} Every separator discovered in line \ref{algline:findMinSep} is generated and added to the resulting set, $I$, exactly once. \end{lemma} \begin{qproof} Assume, by contradiction, that there is some minimal separator $S$ that is generated twice in line \ref{algline:findMinSep}: first from component $c$ and then from component $c'$. There are two options: (1) $c'$ is contained in some component of $\algname{getComponents}(c,S)$. But this contradicts Lemma \ref{lem:singleSep}, which states that $S$ cannot be a separator in any of the subgraphs of $\algname{getComponents}(c,S)$. (2) Otherwise, let $S'$ be the earliest separator whose removal placed the members of $c$ and $c'$ in distinct components (such a separator must exist because we start out with a connected graph $g$). But this contradicts Lemma \ref{lem:disjointSeps}, which states that the sets of separators contained in the resulting components must be disjoint. \end{qproof} \begin{lemma}\label{lem:DecomposeRuntime} Algorithm \algname{Decompose} runs in polynomial time. \end{lemma} \begin{qproof} Using the same arguments as those in the proof of Lemma~\ref{lem:seperatorGenOnce}, it is shown that every separator $S \in \varphi$ is processed exactly once by Algorithm~\ref{alg:Decompose}.
This, combined with the fact that the subroutines called by \algname{Decompose} run in polynomial time, brings us to the desired result. \end{qproof} \begin{lemma}\label{lem:nonCrossing} The set of minimal vertex separators in $I$ is pairwise non-crossing. \end{lemma} \begin{qproof} We show by induction on the number of iterations of the loop in line \ref{algline:componentsLoop} that the set of separators in $I$ is pairwise non-crossing in $g$, and that these separators form cliques in all the components in $\mathcal{Q}$ that contain them. Since the input $\varphi$ is a set of pairwise non-crossing minimal separators that undergo saturation, the claim holds before the first iteration of the loop in line \ref{algline:componentsLoop}. Assume the claim holds up to some iteration $j > 0$. Let $c$ be the component processed in iteration $j+1$. If the minimal separator $S$ (found in line \ref{algline:findMinSep}) crosses some separator $S' \in I$, then there exist $x,y \in S'$ such that $x,y \in c$ and $(x,y) \notin \mathsf{E}(c)$, in contradiction to the induction hypothesis stating that $S'$ is a clique in $c$. \end{qproof} \begin{lemma}\label{lem:nonCrossingMaximal} The set of minimal vertex separators $I$, returned by Algorithm~\ref{alg:ExtendInd}, is a maximal set of pairwise non-crossing minimal separators of $g$. \end{lemma} \begin{qproof} We have already shown in Lemma \ref{lem:nonCrossing} that the set of separators in $I$ is pairwise non-crossing; we now show that it is maximal. Let $S'$ be a minimal separator of the graph $g$ such that for every $S \in I$, the separators $S$ and $S'$ are non-crossing. We show that $S' \in I$. Let $c$ be the {\em latest} component processed by Algorithm~\ref{alg:ExtendInd} such that $S' \subseteq c$. Such a component must exist because $S'$ is not crossed by any separator in $I$ (Lemma \ref{lem:singleSep}). Let $S$ be the minimal separator generated in line \ref{algline:findMinSep} when $c$ is processed. If $S=S'$, then we are done because $S \in I$.
Otherwise, by definition of non-crossing, there exists some component $c' \in \mathcal{C}(S)$ such that $S' \subseteq c' \subseteq c$. But this is a contradiction because $c$ is the latest component processed that contains $S'$. \end{qproof} \begin{theorem}\label{thm:ptime} Algorithm \ref{alg:ExtendInd} runs in time that is polynomial in $|\mathsf{V}(g)|=n$. \end{theorem} \begin{qproof} Lemma \ref{lem:nonCrossingMaximal} establishes that the set of separators $I$ returned by Algorithm~\ref{alg:ExtendInd} is a maximal pairwise-parallel set. By Corollary~\ref{cor:MISsize}, $|I| < n$. By Lemma~\ref{lem:seperatorGenOnce}, each member of $I$ is generated and inserted into $I$ exactly once. Therefore, line \ref{algline:addSep}, as well as the loop in line \ref{algline:componentsLoop} of Algorithm~\ref{alg:ExtendInd}, is executed at most $n$ times. Since all of the subroutines referred to in Algorithm~\ref{alg:ExtendInd} can be performed in polynomial time, the complexity of Algorithm~\ref{alg:ExtendInd} is polynomial in $n$. \end{qproof} \begin{theorem} $\mathsf{MSep}=(\mathcal{G},A_{\mathsf{V}},A_{\mathsf{E}})$ has a polynomial expansion of independent sets. \end{theorem} \begin{qproof} By Corollary~\ref{cor:MISsize}, the size of each maximal independent set $I$ of $\mathcal{G}(g)$ is less than $|\mathsf{V}(g)|=n$, thereby satisfying the first requirement of polynomial expansion. By Lemma \ref{lem:nonCrossingMaximal} and Theorem \ref{thm:ptime}, we can check in polynomial time that $I$ is a maximal set of pairwise parallel separators by verifying that the output of Algorithm~\ref{alg:ExtendInd}, when provided with input $I$, is simply $I$. \end{qproof} Hence, as a corollary we get the main result of this paper. \begin{corollary} The minimal triangulations of a given graph can be enumerated in incremental polynomial time.
Hence, due to Theorem~\ref{thm:proper-mint}, the proper tree decompositions of a given graph can be enumerated in incremental polynomial time. \end{corollary} \begin{comment} \subsection{Extending Independent Sets using Black-Box Triangulations} \label{subsec:triangulations} We describe three heuristics for triangulating, or constructing a perfect elimination ordering, of a graph $g$. The heuristics described can be implemented in polynomial time in the size of the graph but may return graphs whose width is exponentially larger than their actual treewidth. We then describe how the triangulations resulting from these methods can be combined with algorithm \ref{alg:ExtendInd} in order to generate the corresponding maximal separator sets. \textbf{Max-Cardinality heuristic} \cite{Tarjan:1984:SLA:1169.1179}: The vertices of $g$ are numbered in decreasing order from $n$ to $1$. As the next vertex to label choose an unlabeled vertex that is adjacent to the largest number of previously labeled vertices, breaking ties arbitrarily. The process is repeated until all vertices are ordered. \textbf{Min-Fill heuristic}: The vertices are ordered in ascending order from $1$ to $n$. As the next vertex to label choose the vertex, $v$, whose elimination will result in the smallest number of edges added to the graph in order to make it simplicial. Then, remove $v$ from the graph. Repeat the process, breaking ties arbitrarily, until all vertices have been eliminated. \textbf{Min-Degree heuristic} \cite{Berry2003}: The vertices are ordered in ascending order from $1$ to $n$. As the next vertex to label choose the vertex with the minimum degree and remove it from the graph. Repeat the process, breaking ties arbitrarily, until all vertices have been labeled. Let $\varphi$ be a set of pairwise non-crossing minimal separators of $g$. 
We denote by $F=\bigcup_{S \in \varphi}(\mathsf{E}(\algname{saturate}(S,g))\setminus \mathsf{E}(g_{|S}))$ the set of edges added to $g$ as a result of saturating the separators of $\varphi$. We denote by $\mathcal{C}(\varphi)$ the set of connected components resulting from calling Algorithm~\ref{alg:Decompose} with $g$ and $\varphi$. We would like to extend $F$ into a triangulation of $g$, $F' \supseteq F$, using one (or more) of the heuristics described above. We can guarantee that the separators in $\varphi$ remain minimal separators of $g$ throughout the triangulation process by ensuring that every added edge (i.e., every edge $e \in F' \setminus F$) is confined to a single component of $\mathcal{C}(\varphi)$. Overall, the process will be as follows: \begin{enumerate} \item Call $\algname{decompose}(g,\varphi)$, which returns $\mathcal{C}(\varphi)$. \item Triangulate each connected component $c \in \mathcal{C}(\varphi)$ using some triangulation heuristic. \item Extend $\varphi$ to a maximal set of minimal separators using $\algname{ExtendInd}(c,\emptyset)$ for each $c \in \mathcal{C}(\varphi)$. (Since chordality is a hereditary property, the minimal separators extracted from each component $c$ and its subsets are cliques. Therefore, no additional edges will be added to the graph.) \end{enumerate} We note that step 2 above does not have to result in a minimal triangulation. It can involve, for example, a procedure which constructs a tree decomposition $T_c$ over the subgraph $c \in \mathcal{C}(\varphi)$. A triangulation for $c$ can be extracted from $T_c$ by saturating its bags (Proposition~\ref{proposition:sat}). Transforming a (non-)minimal triangulation to one that is minimal, by removing fill edges, is called the \e{minimal triangulation sandwich problem} \cite{Heggernes2006297}. Various polynomial-time algorithms for this problem exist \cite{Dahlhaus1997,DBLP:journals/siammax/Peyton01}, and they were reported to run fast in practice \cite{Blair2001125}.
Applying triangulation algorithms to the subgraphs resulting from the decomposition in step 1 above introduces opportunities for optimization. The triangulation algorithms are applied to smaller graphs, leading to better performance. Moreover, different components can be triangulated with different algorithms, chosen according to their characteristics. \end{comment}
\section{Introduction} For a set $\Omega\subset{\ensuremath{\mathbb R}}^d$ in $d=1,2$ dimensions, a quantizer is given by $N$ reproduction or quantization points $\ensuremath{\mathbf P}=\{{\ensuremath{\mathbf p}}_1,\dots,{\ensuremath{\mathbf p}}_N\}\subset \ensuremath{\Omega}$ associated with $N$ quantization regions $\ensuremath{\mathcal R}=\{\ensuremath{\mathcal R}_1,\dots,\ensuremath{\mathcal R}_N\}\subset\ensuremath{\Omega}$, which define a partition of $\ensuremath{\Omega}$. To measure the quality of a given quantizer, the Euclidean distance between the source samples and the reproduction points is commonly used as the distortion function. We will study quantizers with parameter-dependent distortion functions which minimize the average distortion over $\ensuremath{\Omega}$ for a given continuous source sample distribution $\ensuremath{\lambda}:\ensuremath{\Omega}\to [0,1]$, as investigated for example in \cite{Erdem16,KJ17,KKSS18} with a fixed set of parameters. In contrast to a fixed parameter selection, we will assign variable parameters to each quantization point, controlling the distortion function of each quantization point individually. Such controllable distortion functions widen the scope of quantization theory and allow one to apply quantization techniques to many parameter-dependent network and locational problems. In this work, we will consider for the distortion function of ${\ensuremath{\mathbf p}}_n$ a squared Euclidean distance, multiplicatively weighted by some $a_n>0$ and additively weighted by some $b_n>0$. Furthermore, we raise all distortion functions to a fixed exponent $\ensuremath{\gamma}\geq 1$. To minimize the average distortion, the optimal quantization regions are known to be generalized Voronoi (Möbius) regions, which can be non-convex and disconnected sets \cite{BWY07}.
In many applications, as in sensor or vehicle deployments, the optimal weights and parameters are usually unknown, but adjustable, and one therefore wishes to optimize the deployment over all admissible parameter values, see for example \cite{ML}. We will characterize such \emph{quantizers with parameterized distortion measures} over one-dimensional convex target regions, i.e., over closed intervals. As a motivation, we will demonstrate such a parameter-driven quantizer for an unmanned aerial vehicle (UAV) deployment to provide energy-efficient communication to ground terminals in a given target region $\ensuremath{\Omega}$. Here, the parameters relate to the UAVs' flight heights. \ifarxiv \else Due to page limitations, all proofs are presented in \cite{GWJ18b}. \fi \paragraph{Notation} By $[N]=\{1,2,\dots,N\}$ we denote the first $N$ natural numbers in ${\ensuremath{\mathbb N}}$. We will write real numbers in ${\ensuremath{\mathbb R}}$ by lowercase letters and row vectors by boldface letters. The Euclidean norm of ${\ensuremath{\mathbf x}}$ is given by $\Norm{{\ensuremath{\mathbf x}}}=\sqrt{\sum_n x_n^2}$. The open ball in ${\ensuremath{\mathbb R}}^d$ centered at ${\ensuremath{\mathbf c}}\in{\ensuremath{\mathbb R}}^d$ with radius $r> 0$ is denoted by $\Ball\left({\ensuremath{\mathbf c}},r\right)=\set{\ensuremath{\boldsymbol{ \ome}}}{\|\ensuremath{\boldsymbol{ \ome}}-{\ensuremath{\mathbf c}}\|< r}$. We denote by $\Vor^c$ the complement of the set $\Vor\subset{\ensuremath{\mathbb R}}^d$. The positive real numbers are denoted by ${\ensuremath{\mathbb R}}_+:=\set{a\in{\ensuremath{\mathbb R}}}{a>0}$. Moreover, for two points ${\ensuremath{\mathbf a}},{\ensuremath{\mathbf b}}\in{\ensuremath{\mathbb R}}^d$, we denote the generated half-space between them, which contains ${\ensuremath{\mathbf a}}$, as $\HS({\ensuremath{\mathbf a}}, {\ensuremath{\mathbf b}})$.
\section{System model}\label{sec:model} To motivate the concept of parameterized distortion measures, we will investigate the deployment of $N$ UAVs positioned at $\ensuremath{\mathbf Q}=\{{\ensuremath{\mathbf q}}_1,\dots,{\ensuremath{\mathbf q}}_N\}\in(\ensuremath{\Omega}\times{\ensuremath{\mathbb R}}_+)^N$ to provide a wireless communication link to ground terminals (GTs) in a given target region $\ensuremath{\Omega}\subset{\ensuremath{\mathbb R}}^d$. Here, the $n$th UAV's position, ${\ensuremath{\mathbf q}}_n=({\ensuremath{\mathbf p}}_n,h_n)$, is given by its ground position ${\ensuremath{\mathbf p}}_n=(x_n,y_n)\in\ensuremath{\Omega}$, representing the quantization point, and its flight height $h_n$, representing its distortion parameter. The optimal UAV deployment is then defined by the minimum average communication power (distortion) to serve GTs distributed by a density function $\ensuremath{\lambda}$ in $\ensuremath{\Omega}$ with a given minimum data rate $R_b$. Each GT selects the UAV which requires the smallest communication power, resulting in so-called generalized Voronoi (quantization) regions of $\ensuremath{\Omega}$, as used in \cite{Erdem16,GJ,GJcom18,GJ18,KJ17,ML,MLCS,KKSS18}. \ifarxiv We also assume that the communication between all users and UAVs is orthogonal, i.e., separated in frequency or time (slotted protocols). \fi In the past decade, UAVs with directional antennas have been widely studied in the literature \cite{BJL,MSF,HA,KMR,HSYR,MWMM}, to increase the efficiency of wireless links. However, in \cite{BJL,MSF,HA,KMR,HSYR,MWMM}, the antenna gain was approximated by a constant within a 3~dB beamwidth and set to zero outside. This ignores the strong angle-dependent gain of directional antennas, notably for low-altitude UAVs.
To obtain a more realistic model, we will consider an antenna gain which depends on the actual radiation angle $\theta_n\in[0,\frac{\pi}{2}]$ from the $n$th UAV at ${\ensuremath{\mathbf q}}_n$ to a GT at $\ensuremath{\boldsymbol{ \ome}}$, see \figref{fig:uavdirected}. To capture the power falloff versus the line-of-sight distance $d_n$ along with the random attenuation and the path-loss, we adopt the following propagation model \cite[(2.51)]{Gol05} \begin{equation} PL_{dB}=10\log_{10}{K}-10\alpha\log_{10}(d_n/d_0)-\psi_{dB}, \end{equation} where $K$ is a unitless constant depending on the antenna characteristics, $d_0$ is a reference distance, $\alpha\geq 1$ is the terrestrial path-loss exponent, and $\psi_{dB}$ is a Gaussian random variable following $\mathcal{N}\left(0,\sigma^2_{\psi_{dB}}\right)$. This air-to-ground or terrestrial path-loss model is widely used for UAV base-station deployments \cite{MSBD16a}. Practical values of $\ensuremath{\alpha}$ are between $2$ and $6$. The path loss depends on the Euclidean distance between the GT at $\ensuremath{\boldsymbol{ \ome}}$ and the UAV at ${\ensuremath{\mathbf q}}_n$, given by \begin{align} d_n(\ensuremath{\boldsymbol{ \ome}})= d({\ensuremath{\mathbf q}}_n,(\ensuremath{\boldsymbol{ \ome}},0))=\sqrt{\|{\ensuremath{\mathbf p}}_n-\ensuremath{\boldsymbol{ \ome}}\|^2+h_n^2}=\sqrt{(x_n-x)^2+(y_n-y)^2+h_n^2}\label{eq:eucd}. \end{align} For common practical measurements, see for example \cite{AG18}. Typical maximal heights for UAVs are below $1000$\,m, due to flight-zone restrictions for aircraft. Hence, the received power at UAV $n$ can be represented as $P_{RX}=P_{TX}G_{TX}G_{RX}Kd^{\alpha}_0 d_n^{-\alpha}(\ensuremath{\boldsymbol{ \ome}})10^{-\frac{\psi_{dB}}{10}}$, where $G_{TX}$ and $G_{RX}$ are the antenna gains of the transmitter and the receiver, respectively. Here, we assume perfect omnidirectional transmitter GT antennas with an isotropic gain and directional receiver UAV antennas.
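The deterministic part of this propagation model is easy to sanity-check numerically. The following Python sketch (not part of the paper; all parameter values are illustrative, not taken from the cited measurements) evaluates $d_n(\ensuremath{\boldsymbol{ \ome}})$ from \eqref{eq:eucd} and confirms that the dB-domain path loss and the linear channel gain agree when $\psi_{dB}=0$:

```python
import math

def distance(p, h, omega):
    """Line-of-sight distance d_n(omega) of eq. (eucd): UAV at ground
    position p = (x_n, y_n) with height h, GT at omega = (x, y)."""
    return math.sqrt((p[0] - omega[0]) ** 2 + (p[1] - omega[1]) ** 2 + h ** 2)

def path_loss_db(d, K, alpha, d0, psi_db=0.0):
    """PL_dB = 10 log10(K) - 10 alpha log10(d / d0) - psi_dB."""
    return 10 * math.log10(K) - 10 * alpha * math.log10(d / d0) - psi_db

# Illustrative (not paper-specific) values: K, alpha, d0, one UAV, one GT.
K, alpha, d0 = 1e-4, 3.0, 1.0
p, h, omega = (100.0, 50.0), 120.0, (160.0, 130.0)
d = distance(p, h, omega)
# For psi_dB = 0 the linear gain K (d/d0)^(-alpha) must match 10^(PL_dB/10).
gain_linear = K * (d / d0) ** (-alpha)
gain_from_db = 10 ** (path_loss_db(d, K, alpha, d0) / 10)
assert abs(gain_linear - gain_from_db) <= 1e-12 * gain_linear
print(round(d, 3))  # → 156.205
```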
The angle-dependent antenna gains are \begin{equation} \GGT >0\quad,\quad \GUAV = \cos\left(\theta_n\right)=h_n/d_n(\ensuremath{\boldsymbol{ \ome}}), \label{eq:Gdirected} \end{equation} see \cite[p.52]{Bal05a}. The combined antenna intensity is then proportional to $G=\GUAV \GGT K$, see \figref{fig:uavdirected}. \begin{figure} \centering \def\svgwidth{.85\textwidth} \scriptsize{ \import{.}{UAVquacopter2_new.pdf_tex}} \caption{UAV deployment with directed antenna beam and associated GT cells for $\ensuremath{\alpha}=2$ and $N=2$ for a uniform GT distribution.} \label{fig:uavdirected} \end{figure} Accordingly, the received power can be rewritten as \begin{equation} P_{RX}=P_{TX}h_n \GGT K d^{\alpha}_0 d_n^{-\alpha-1}(\ensuremath{\boldsymbol{ \ome}})10^{-\frac{\psi_{dB}}{10}}. \end{equation} To achieve reliable communication between GT and UAV with a bit-rate of at least $\Rb$ for a channel bandwidth $B$ and noise power density $N_0$, the Shannon formula requires $B\log_2\left(1+\frac{P_{RX}}{BN_0}\right)\ge\Rb$. The minimum transmission power to UAV ${\ensuremath{\mathbf q}}_n$ is then given by \ifarxiv \begin{align} P_{TX}=\big(2^{\frac{\Rb}{B}}\!-\!1\big)B N_0d({\ensuremath{\mathbf q}}_n,(\ensuremath{\boldsymbol{ \ome}},0))^{\alpha+1}10^{\frac{\psi_{dB}}{10}} (h_n \GGT Kd^{\alpha}_0)^{-1} \end{align} \else $P_{TX}=\big(2^{\frac{\Rb}{B}}\!-\!1\big)B N_0d({\ensuremath{\mathbf q}}_n,(\ensuremath{\boldsymbol{ \ome}},0))^{\alpha+1}10^{\frac{\psi_{dB}}{10}} (h_n \GGT Kd^{\alpha}_0)^{-1}$ \fi with expectation \begin{align} \Expect{P_{TX}}&\! =\! \frac{(2^{\frac{\Rb}{B}}-1)B N_0 }{ h_n \GGT Kd^{\alpha}_0} \frac{d_n^{\alpha+1}(\ensuremath{\boldsymbol{ \ome}})}{\sqrt{2\pi}\sigma_{\psi_{dB}}} \int_{{\ensuremath{\mathbb R}}} \exp\left(\!-\frac{\psi^2_{dB}}{2\sigma^2_{\psi_{dB}}} \!+\! \ln(10) \frac{\psi_{dB}}{10}\right)\!d\psi_{dB} \!=\!\frac{\ensuremath{\beta}}{h_n} d_n^{2\ensuremath{\gamma}}(\ensuremath{\boldsymbol{ \ome}})\label{eq:expectedTX} \end{align} where the independent and fixed parameters are given by \begin{align} \ensuremath{\beta}=(2^{\frac{\Rb}{B}}-1)B N_0 \exp\big(\frac{\ensuremath{\sigma}_{\psi_{dB}}^2 (\ln 10)^2}{200}\big)(\GGT K)^{-1} d_0^{-\ensuremath{\alpha}}\quad\text{and}\quad \ensuremath{\gamma}=\frac{\ensuremath{\alpha}+1}{2}. \end{align} Since our goal is to minimize the average transmission power \eqref{eq:expectedTX}, we define the $n$th \emph{parameter distortion function} as \begin{align} D(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf p}}_n,a_n,b_n)=\ensuremath{\beta}\cdot \left(a_n\Norm{{\ensuremath{\mathbf p}}_n-\ensuremath{\boldsymbol{ \ome}}}_2^2+b_n\right)^\ensuremath{\gamma} \label{eq:Eptx} \end{align} where $a_n=h_n^{-1/\ensuremath{\gamma}}$ and $b_n=h_n^{2-1/\ensuremath{\gamma}}$. As can be seen from \eqref{eq:Eptx}, the distortion is a function of the parameter $h_n$ in addition to the distance between the reproduction point ${\ensuremath{\mathbf p}}_n$ and the represented point $\ensuremath{\boldsymbol{ \ome}}$. From a quantization point of view, one can start with the distortion function \eqref{eq:Eptx} without knowing the UAV power consumption formulas in this section. This is what we will do in the next section. For simplicity, we will set from here on $\ensuremath{\beta}=1$, since it will not affect the quantizer. \section{Optimizing Quantizers with parameterized distortion measures}\label{sec:optmize1D} The communication power cost \eqref{eq:Eptx} defines, with $h_n$ and fixed $\ensuremath{\gamma}\geq 1$, a parameter-dependent distortion function for ${\ensuremath{\mathbf p}}_n$.
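The identification of the weights $a_n=h_n^{-1/\ensuremath{\gamma}}$ and $b_n=h_n^{2-1/\ensuremath{\gamma}}$ with the expected-power form $d_n^{2\ensuremath{\gamma}}/h_n$ can be verified numerically; a minimal Python sketch with $\ensuremath{\beta}=1$ and arbitrary illustrative test values:

```python
def distortion_weighted(r2, h, gamma):
    """(a r^2 + b)^gamma with a = h^(-1/gamma), b = h^(2 - 1/gamma), beta = 1."""
    a = h ** (-1.0 / gamma)
    b = h ** (2.0 - 1.0 / gamma)
    return (a * r2 + b) ** gamma

def distortion_power(r2, h, gamma):
    """Expected-power form d_n^(2 gamma) / h_n with d_n^2 = r^2 + h^2."""
    return (r2 + h * h) ** gamma / h

# The two forms coincide for arbitrary test values (values are illustrative).
for gamma in (1.0, 1.5, 2.5):
    for h in (0.3, 1.0, 4.0):
        for r2 in (0.0, 0.7, 9.0):
            x, y = distortion_weighted(r2, h, gamma), distortion_power(r2, h, gamma)
            assert abs(x - y) <= 1e-9 * max(1.0, y)
print("a_n, b_n reproduce d_n^(2 gamma) / h_n")
```

The check simply exploits $(h^{-1/\gamma}r^2 + h^{2-1/\gamma})^\gamma = (h^{-1/\gamma}(r^2+h^2))^\gamma = h^{-1}(r^2+h^2)^\gamma$.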
For a given source sample GT density $\df$ in $\Omega$ and a given UAV deployment, the average power is the average distortion for given \emph{quantization and parameter points} $(\ensuremath{\mathbf P},\vh)$ with quantization sets $\ensuremath{\mathcal R}=\{\ensuremath{\mathcal R}_n\}$, which is called the \emph{average distortion} of the quantizer $(\ensuremath{\mathbf P},\vh,\ensuremath{\mathcal R})$ \begin{equation} \AvDis(\ensuremath{\mathbf P},\vh,\ensuremath{\mathcal R}) = \sum_{n=1}^N \int_{\ensuremath{\mathcal R}_n} D(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf p}}_n,h_n)\ensuremath{\lambda}(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}}. \label{eq:Pbar} \end{equation} The $N$ quantization sets which minimize the average distortion for given quantization and parameter points $(\ensuremath{\mathbf P},\vh)$ define a generalized Voronoi tessellation $\Vor=\{\Vor_n(\ensuremath{\mathbf P},\vh)\}$ \begin{align} \AvDis(\ensuremath{\mathbf P},\vh,\Vor) :=\!\int_{\Omega}\min_{n\in[N]} \left\{ \Dis(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf p}}_n,h_n) \right\} \df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}} =\sum_{n=1}^{N}\int_{\Vor_n(\ensuremath{\mathbf P},\vh)}\!\!\!\!\!\Dis(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf p}}_n,h_n) \df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}} \label{eq:optPbar}, \end{align} where the \emph{generalized Voronoi regions} $\Vor_n(\ensuremath{\mathbf P},\vh)$ are defined as the sets of sample points $\ensuremath{\boldsymbol{ \ome}}$ with smallest distortion to the $n$th quantization point ${\ensuremath{\mathbf p}}_n$ with parameter $h_n$.
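The minimum-distortion assignment behind \eqref{eq:optPbar} can be sketched numerically in one dimension. The following sketch (illustrative values only; uniform density on $[0,1]$, $\ensuremath{\beta}=1$) approximates the average distortion with a Riemann sum and illustrates that adding a quantization point cannot increase it:

```python
def D(w, p, h, gamma):
    """Parameter distortion of a single point in d = 1 (beta = 1)."""
    return ((p - w) ** 2 + h * h) ** gamma / h

def avg_distortion(points, heights, gamma, m=100_000):
    """Riemann-sum version of eq. (optPbar) for a uniform density on [0, 1]:
    each sample w is assigned to the point of smallest distortion."""
    s = 0.0
    for i in range(m):
        w = (i + 0.5) / m  # midpoint rule
        s += min(D(w, p, h, gamma) for p, h in zip(points, heights))
    return s / m

# Illustrative values: two points cover [0, 1] better than one.
two = avg_distortion([0.25, 0.75], [0.3, 0.3], gamma=1.0)
one = avg_distortion([0.5], [0.3], gamma=1.0)
assert two < one  # an extra quantization point cannot increase the minimum
print(round(one, 4), round(two, 4))  # → 0.5778 0.3694
```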
Minimizing the \emph{average distortion} $\AvDis(\ensuremath{\mathbf P},\vh,\Vor)$ over all parameter and quantization points can be seen as an \emph{$N-$facility locational-parameter optimization problem} \cite{GJ, GJcom18, GJ18,OBSC00}. By the definition of the Voronoi regions \eqref{eq:optPbar}, this is equivalent to the minimum average distortion over all $N-$level parameter quantizers \begin{align} \AvDis(\ensuremath{\mathbf P}^*,\vh^*,\Vor^*) = \min_{(\ensuremath{\mathbf P},\vh)\in\ensuremath{\Omega}^N\times{\ensuremath{\mathbb R}}_+^N} \AvDis(\ensuremath{\mathbf P},\vh,\Vor) = \min_{(\ensuremath{\mathbf P},\vh)\in\ensuremath{\Omega}^N\times{\ensuremath{\mathbb R}}_+^N} \min_{\ensuremath{\mathcal R}=\{\ensuremath{\mathcal R}_n\}\subset\ensuremath{\Omega}} \AvDis(\ensuremath{\mathbf P},\vh,\ensuremath{\mathcal R}), \label{eq:optquanteqoptdeploy} \end{align} which we call the \emph{$N-$level parameter optimized quantizer}. To find local extrema of \eqref{eq:optPbar} analytically, we will need the objective function $\AvDis$ to be continuously differentiable at any point in $\Omega^N\times {\ensuremath{\mathbb R}}_+^N$, i.e., the gradient should exist and be a continuous function. Such a property was shown to be true for piecewise continuous non-decreasing distortion functions in the Euclidean metric over $\ensuremath{\Omega}^N$ \cite[Thm.2.2]{CMB05} and for the weighted Euclidean metric \cite{GJ}. Then the necessary condition for a local extremum is the vanishing of the gradient at a critical point\ifarxiv\footnote{Note that if $\nabla \Pbar$ is not continuous in $\ensuremath{\mathcal P}^N$, then any jump-point is a potential critical point and has to be checked individually.}\fi.
First, we will derive the generalized Voronoi regions for convex sets $\Omega$ in $d$ dimensions for any parameters $h_n\in{\ensuremath{\mathbb R}}_+$ for the quantization points ${\ensuremath{\mathbf p}}_n$, which are special cases of \emph{M{\"o}bius diagrams (tessellations)}, introduced in \cite{BWY07}. \begin{lemma}\label{lem:moebiusdia} Let $\ensuremath{\mathbf P}=\{\gp_1,\gp_2,\dots, \gp_N\}\in \Omega^N\subset ({\ensuremath{\mathbb R}}^d)^N$ for $d\in\{1,2\}$ be the quantization points and $\fH=(\fh_1,\dots,\fh_N)\in{\ensuremath{\mathbb R}}_+^N$ the associated parameters. Then the average distortion of $(\ensuremath{\mathbf P},\vh)$ over all samples in $\ensuremath{\Omega}$ distributed by $\df$ for some exponent $\ensuremath{\gamma}\geq1$, \begin{align} \AvDis\left(\ensuremath{\mathbf P},\fH, \Vor\right) = \sum_{n=1}^{N} \int_{\Vor_n} \! \frac{ (\Norm{\gp_n- \ensuremath{\boldsymbol{ \ome}}}^2 +h_n^2)^{\ensuremath{\gamma}}}{h_n} \df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}} \label{eq:minimizationmoebius} \end{align} is minimized by the generalized Voronoi regions $\Vor_n= \Vor_{n}(\gP,\fH)= \bigcap_{m\not=n} \Vor_{nm}$, where the dominance regions of quantization point $n$ over $m$ are given by \begin{align} \Vor_{nm}=\Omega\cap\begin{cases} \HS(\gp_n,\gp_m)&, h_m=h_n \\ \Ball({\ensuremath{\mathbf c}}_{nm},r_{nm}) &, h_n<h_m\\ \Ball^c({\ensuremath{\mathbf c}}_{nm},r_{nm}) &, h_n>h_m \end{cases}\label{eq:moebius} \end{align} and the centers and radii of the balls are given by \begin{align} {\ensuremath{\mathbf c}}_{nm}\!=\!\frac{\gp_n - h_{nm}\gp_m}{1-h_{nm}} \quad\text{and}\quad r_{nm}\!=\!\left(\frac{h_{nm}}{\left(1-h_{nm}\right)^2}\Norm{\gp_n-\gp_m}^2 + h_n^2 \frac{h_{nm}^{1-2\ensuremath{\gamma}} -1}{1-h_{nm}}\right)^{\frac{1}{2}}.
\label{eq:rnmcnm} \end{align} Here, we introduced the parameter ratio of the $n$th and $m$th quantization point as \begin{align} h_{nm}= \left(h_n/h_m\right)^{\frac{1}{\ensuremath{\gamma}}}. \end{align} \end{lemma} \ifarxiv \begin{remark} It is also possible that two quantization points are equal, but have different parameters. If the parameter ratio is very small or very large, one quantization point can become redundant, i.e., its optimal quantization set is empty. In fact, if we optimize over all quantizer points, such a case will be excluded, which we will show for one dimension in \lemref{lemma:allActive}. \end{remark} \fi \ifarxiv \begin{proof} The minimization of the distortion functions over $\Omega$ defines an assignment rule for a generalized Voronoi diagram $\Vor(\gP,\bH)=\{\Vor_1,\Vor_2,\dots,\Vor_N\}$ where \begin{align} \Vor_{n} =\Vor_n(\gP,\bH):= &\set{\ensuremath{\boldsymbol{ \ome}}\in\ensuremath{\Omega}}{ a_n\Norm{{\ensuremath{\mathbf p}}_n-\ensuremath{\boldsymbol{ \ome}}}^2 +b_n \leq a_m\Norm{{\ensuremath{\mathbf p}}_m-\ensuremath{\boldsymbol{ \ome}}}^2 +b_m, m\not=n} \end{align} is the $n$th generalized Voronoi region, see for example \cite[Cha.3]{OBSC00}. Here we denoted the weights by the positive numbers \begin{align} a_n = h_n^{-\frac{1}{\ensuremath{\gamma}}}, \quad b_n=h_n^{2-\frac{1}{\ensuremath{\gamma}}} \end{align} which define a \emph{M{\"o}bius diagram} \cite{BK06b,BWY07}. The bisectors of M{\"o}bius diagrams are circles or lines in ${\ensuremath{\mathbb R}}^2$, as we will show below.
The $n$th Voronoi region is defined by $N-1$ inequalities, which can be written as the intersection of the $N-1$ \emph{dominance regions} of ${\ensuremath{\mathbf p}}_n$ over ${\ensuremath{\mathbf p}}_m$, given by \begin{align} \Vor_{nm}=\set{\ensuremath{\boldsymbol{ \ome}}\in\ensuremath{\Omega}}{ a_n\Norm{{\ensuremath{\mathbf p}}_n-\ensuremath{\boldsymbol{ \ome}}}^2 +b_n \leq a_m\Norm{{\ensuremath{\mathbf p}}_m-\ensuremath{\boldsymbol{ \ome}}}^2 +b_m}. \end{align} If $h_n=h_m$ then $a_n=a_m$ and $b_n=b_m$, such that $\Vor_{nm}=\HS({\ensuremath{\mathbf p}}_n,{\ensuremath{\mathbf p}}_m)$, the left half-space between ${\ensuremath{\mathbf p}}_n$ and ${\ensuremath{\mathbf p}}_m$. For $a_n>a_m$ we can rewrite the inequality as \begin{align*} \Norm{\ensuremath{\boldsymbol{ \ome}}}^2 -2 \skprod{{\ensuremath{\mathbf c}}_{nm}}{\ensuremath{\boldsymbol{ \ome}}} + \frac{a_n^2 \Norm{{\ensuremath{\mathbf p}}_n}^2 \!+\!a_m^2 \Norm{{\ensuremath{\mathbf p}}_m}^2 \!-\! a_na_m(\Norm{{\ensuremath{\mathbf p}}_n}^2 \!+\!\Norm{{\ensuremath{\mathbf p}}_m}^2)}{(a_n-a_m)^2} + \frac{b_n-b_m}{a_n-a_m} \leq & 0 \end{align*} where the center point is given by \begin{align} {\ensuremath{\mathbf c}}_{nm}=\frac{a_n{\ensuremath{\mathbf p}}_n- a_m{\ensuremath{\mathbf p}}_m}{a_n-a_m}=a_n \frac{{\ensuremath{\mathbf p}}_n-h_{nm} {\ensuremath{\mathbf p}}_m}{a_n -a_m}=\frac{{\ensuremath{\mathbf p}}_n -h_{nm}{\ensuremath{\mathbf p}}_m}{1-h_{nm}} \end{align} where we introduced the \emph{parameter ratio} of the $n$th and $m$th quantization point as \begin{align} h_{nm}:= a_m/a_n=\left(h_n /h_m\right)^{\frac{1}{\ensuremath{\gamma}}}>0. \end{align} If $0<a_n-a_m$, which is equivalent to $h_n<h_m$, then this defines a ball (disc) and for $h_n>h_m$ its complement.
Hence we get \begin{align} \Vor_{nm}=\begin{cases} \Ball({\ensuremath{\mathbf c}}_{nm},r_{nm})=\set{\ensuremath{\boldsymbol{ \ome}}\in\Omega}{\Norm{\ensuremath{\boldsymbol{ \ome}}-{\ensuremath{\mathbf c}}_{nm}} < r_{nm}},& h_n<h_m\\ \HS({\ensuremath{\mathbf p}}_n,{\ensuremath{\mathbf p}}_m) = \set{\ensuremath{\boldsymbol{ \ome}}\in\Omega}{ \Norm{\ensuremath{\boldsymbol{ \ome}}- {\ensuremath{\mathbf p}}_n}\leq \Norm{\ensuremath{\boldsymbol{ \ome}}- {\ensuremath{\mathbf p}}_m}}, &h_n=h_m\\ \Ball^c({\ensuremath{\mathbf c}}_{nm},r_{nm})=\set{\ensuremath{\boldsymbol{ \ome}}\in\Omega}{\Norm{\ensuremath{\boldsymbol{ \ome}}-{\ensuremath{\mathbf c}}_{nm}} > r_{nm}},& h_n >h_m \end{cases} \end{align} where the squared radius is given by \begin{align} r_{nm}^2 &= a_na_m\frac{\Norm{{\ensuremath{\mathbf p}}_n-{\ensuremath{\mathbf p}}_m}^2}{(a_n-a_m)^2} + \frac{b_m-b_n}{a_n-a_m} = \frac{a_n}{a_m}\frac{\Norm{{\ensuremath{\mathbf p}}_n-{\ensuremath{\mathbf p}}_m}^2}{(1-\frac{a_n}{a_m})^2} + \frac{b_m-b_n}{a_n-a_m}\label{eq:radiusnm}. \end{align} The second summand can be written as \begin{align} \frac{b_m -b_n}{a_n -a_m} & = \frac{h_m^{2-\frac{1}{\ensuremath{\gamma}}} - h_n^{2-\frac{1}{\ensuremath{\gamma}}}}{h_n^{-\frac{1}{\ensuremath{\gamma}}}-h_m^{-\frac{1}{\ensuremath{\gamma}}}} = \frac{h_n^{2} \left(\left(h_n/h_m\right)^{\frac{1}{\ensuremath{\gamma}}-2} -1\right)}{1-\left(h_n/h_m\right)^{\frac{1}{\ensuremath{\gamma}}}} = h_n^2 \frac{h_{nm}^{-\ensuremath{\alpha}} -1}{1-h_{nm}} \label{eq:bmnanm}. \end{align} For any $\ensuremath{\gamma}\geq 1$, we have $h_{nm}=(h_n/h_m)^{1/\ensuremath{\gamma}}< 1$ if $h_n<h_m$ and $h_{nm}\geq 1$ otherwise. In both cases \eqref{eq:bmnanm} is positive, which implies a radius $r_{nm}>0$ whenever ${\ensuremath{\mathbf p}}_n\not={\ensuremath{\mathbf p}}_m$. Inserting \eqref{eq:bmnanm} in \eqref{eq:radiusnm} yields the result.
\end{proof} \fi \begin{example} We plotted in \figref{fig:uavdirected}, for $N=2$ and $\Omega=[0,1]^2$, the GT regions for a uniform distribution with UAVs placed at \begin{align} {\ensuremath{\mathbf p}}_1=(0.1 , 0.2), h_1=0.5,\quad\text{and}\quad {\ensuremath{\mathbf p}}_2=( 0.6 , 0.6), h_2=1. \end{align} If the second UAV reaches an altitude of $h_2\geq 2.3$, its Voronoi region $\Vor_{2}=\Vor_{2,1}$ will be empty and hence become ``inactive''. \end{example} \subsection{Local optimality conditions} To find the optimal $N-$level parameter quantizer \eqref{eq:optPbar}, we have to minimize the average distortion \eqref{eq:Pbar} over all possible quantization-parameter points, i.e., we have to solve a non-convex \emph{$N-$facility locational-parameter optimization problem}, \begin{align} \AvDis(\ensuremath{\mathbf P}^*,\vh^*,\Vor^*)= \min_{\ensuremath{\mathbf P}\in\Omega^N,\bH\in {\ensuremath{\mathbb R}}_+^N} \sum_{n=1}^{N} \int_{\Vor_n(\ensuremath{\mathbf P},\bH)} h_n^{-1}(\Norm{{\ensuremath{\mathbf p}}_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^2)^{\ensuremath{\gamma}}\df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}}\label{eq:phoptlocal} \end{align} where $\Vor_n(\ensuremath{\mathbf P},\vh)$ are the Möbius regions given in \eqref{eq:moebius} for each fixed $(\ensuremath{\mathbf P},\vh)$.
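The circle $({\ensuremath{\mathbf c}}_{nm},r_{nm})$ of \eqref{eq:rnmcnm} can also be checked numerically: every point on it should incur the same distortion from both quantization points. A short Python sketch using the example values above with $\ensuremath{\gamma}=1.5$ (i.e., $\ensuremath{\alpha}=2$):

```python
import math

def D(w, p, h, gamma):
    """Parameter distortion (beta = 1) of point p with height h at sample w."""
    return (((p[0] - w[0]) ** 2 + (p[1] - w[1]) ** 2 + h * h) ** gamma) / h

def moebius_circle(p1, h1, p2, h2, gamma):
    """Center c_nm and radius r_nm of the dominance-region boundary
    from the lemma, assuming h1 != h2."""
    hnm = (h1 / h2) ** (1.0 / gamma)
    c = ((p1[0] - hnm * p2[0]) / (1 - hnm), (p1[1] - hnm * p2[1]) / (1 - hnm))
    dist2 = (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2
    r2 = hnm / (1 - hnm) ** 2 * dist2 + h1 * h1 * (hnm ** (1 - 2 * gamma) - 1) / (1 - hnm)
    return c, math.sqrt(r2)

# Example values: p1, h1, p2, h2 with gamma = 1.5 (alpha = 2).
p1, h1, p2, h2, gamma = (0.1, 0.2), 0.5, (0.6, 0.6), 1.0, 1.5
c, r = moebius_circle(p1, h1, p2, h2, gamma)
# Every point on the circle has equal distortion to both quantization points.
for k in range(12):
    t = 2 * math.pi * k / 12
    w = (c[0] + r * math.cos(t), c[1] + r * math.sin(t))
    assert abs(D(w, p1, h1, gamma) - D(w, p2, h2, gamma)) < 1e-9
print("the bisector is the circle (c_nm, r_nm)")
```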
A point $(\ensuremath{\mathbf P}^*,\vh^*)$ with Möbius diagram $\Vor^*=\Vor(\ensuremath{\mathbf P}^*,\vh^*)=\{\Vor_1^*,\dots,\Vor^*_N\}$ is a critical point of \eqref{eq:phoptlocal} if all partial derivatives of $\AvDis$ vanish, i.e., if for each $n\in[N]$ it holds that \begin{align} 0&= \int_{\Vor^*_n} ({\ensuremath{\mathbf p}}^*_{n}-\ensuremath{\boldsymbol{ \ome}}) (\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1} \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}}\label{eq:pnopt}\\ 0&= \int_{\Vor^*_n} (\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1}\cdot (\Norm{{\ensuremath{\mathbf p}}_n^*-\ensuremath{\boldsymbol{ \ome}}}^2 - (2\ensuremath{\gamma}-1) h_n^{*2} ) \df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}}. \label{eq:criticalpoint} \end{align} \ifproof \begin{proof} Since the power function \begin{align} P(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf p}},h)= (h^{-\frac{2}{1+\ensuremath{\alpha}}}\Norm{{\ensuremath{\mathbf p}}-\ensuremath{\boldsymbol{ \ome}}}^2 + h^{\frac{2\ensuremath{\alpha}}{1+\ensuremath{\alpha}}})^{\frac{\ensuremath{\alpha}+1}{2}} \end{align} is a polynomial in $\ensuremath{\boldsymbol{ \ome}}$ of degree less than $1+\ensuremath{\alpha}$ for each fixed ${\ensuremath{\mathbf q}}=({\ensuremath{\mathbf p}},h)$, the average distortion function is continuously differentiable, and we obtain by \cite[Thm.1]{WJ18} for the partial derivatives \begin{align} \frac{\partial \Pbar(\ensuremath{\mathbf Q})}{\partial q_{n,i}} = \int_{\Vor_n(\ensuremath{\mathbf Q})} \frac{\partial P(\ensuremath{\boldsymbol{ \ome}},{\ensuremath{\mathbf q}})}{\partial q_{n,i}} \df(\ensuremath{\boldsymbol{ \ome}})d\ensuremath{\boldsymbol{ \ome}}\quad,\quad i\in\{1,2,3\},n\in\{1,2,\dots,N\}.
\end{align} Hence, $\ensuremath{\mathbf Q}^*=(\ensuremath{\mathbf P}^*,\vh^*)$ is a critical point if and only if all partial derivatives vanish \begin{align} 0\overset{!}{=} \nabla_n \Pbar(\ensuremath{\mathbf Q}^*) &= \begin{pmatrix} \int_{\Vor_n} h_n^{*-1} \ensuremath{\gamma} 2 ({\ensuremath{\mathbf p}}^*_{n}-\ensuremath{\boldsymbol{ \ome}}) (\Norm{{\ensuremath{\mathbf p}}_n^*-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1} \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}}\\ \int_{\Vor_n} \left[-h_n^{*-2}(\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 + h_n^{*2})^{\ensuremath{\gamma}} + 2\ensuremath{\gamma} (\Norm{{\ensuremath{\mathbf p}}_n^*-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1}\right] \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}} \end{pmatrix} \notag\\ \ensuremath{\Leftrightarrow} 0&= \begin{pmatrix} \int_{\Vor_n} ({\ensuremath{\mathbf p}}^*_{n}-\ensuremath{\boldsymbol{ \ome}}) (\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1} \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}}\\ \int_{\Vor_n} (\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\ensuremath{\gamma}-1}\cdot \big(\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2+h_n^{*2} - 2\ensuremath{\gamma} h_n^{*2} \big) \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}} \end{pmatrix}\notag\\ \ensuremath{\Leftrightarrow} 0&= \begin{pmatrix} \int_{\Vor_n} ({\ensuremath{\mathbf p}}_{n}^*-\ensuremath{\boldsymbol{ \ome}}) (\Norm{{\ensuremath{\mathbf p}}^*_n-\ensuremath{\boldsymbol{ \ome}}}^2 +h_n^{*2})^{\frac{\ensuremath{\alpha}-1}{2}} \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}}\\ \int_{\Vor_n} (\Norm{{\ensuremath{\mathbf p}}_n^*-\ensuremath{\boldsymbol{ \ome}}}^2
+h_n^{*2})^{\frac{\ensuremath{\alpha}-1}{2}}\cdot (\Norm{{\ensuremath{\mathbf p}}_n^*-\ensuremath{\boldsymbol{ \ome}}}^2 - \ensuremath{\alpha} h_n^{*2} ) \df(\ensuremath{\boldsymbol{ \ome}})d \ensuremath{\boldsymbol{ \ome}} \end{pmatrix}\label{eq:graddph} \end{align} \end{proof} \fi For $N=1$ the integral regions will not depend on $\ensuremath{\mathbf P}$ or $\vh$, and since the integral kernel is continuously differentiable, the partial derivatives will only apply to the integral kernel. For $N>1$, the conservation-of-mass law can be used to show that the derivatives of the integral domains cancel each other out, see also \cite{CMB05}. \ifarxiv \begin{remark} The shapes of the regions depend on the parameters which, if different for each quantization point (heterogeneous), generate spherical and not polyhedral regions. We will show later that a homogeneous parameter selection, with polyhedral regions, is optimal for $d=1$. \end{remark} \fi \subsection{The optimal $N-$level parameter quantizer in one-dimension for uniform density} In this section, we discuss the parameter optimized quantizer for a one-dimensional convex source $\ensuremath{\Omega}\subset{\ensuremath{\mathbb R}}$, i.e., for an interval $\Omega = [s,t]$ given by some real numbers $s<t$. Under such circumstances, the quantization points degenerate to scalars, i.e., ${\ensuremath{\mathbf p}}_n=x_n\in[s, t], \forall n\in[N]$. If we shift the interval $\ensuremath{\Omega}$ by an arbitrary $a\in{\ensuremath{\mathbb R}}$, then the average distortion, i.e., the objective function, will not change if we shift all quantization points by the same number $a$. Hence, if we set $a=-s$, we can shift any quantizer for $[s,t]$ to $[0,A]$ where $A=t-s$ without loss of generality. Let us assume a uniform distribution on $\Omega$, i.e., $\df(\ensuremath{\omega})=1/A$.
To derive the unique $N-$level parameter optimized quantizer for any $N$, we will first investigate the case $N=1$. \begin{lemma}\label{lem:ggam} Let $A>0$ and $\ensuremath{\gamma}\geq 1$. The unique $1-$level parameter optimized quantizer $(x^*,h^*)$ with distortion function \eqref{eq:Eptx} is given for a uniform source density in $[0,A]$ by \begin{align} x^*\!=\!\frac{A}{2}, h^*\!=\!\frac{A}{2} g(\ensuremath{\gamma}) \quad\text{and the minimum average distortion} \quad \AvDis(x^*,h^*)\!=\!\left(\frac{A}{2}\right)^{\!2\ensuremath{\gamma}-1}\!\!\!\!F(g(\ensuremath{\gamma}),\ensuremath{\gamma})\notag \end{align} where $g(\ensuremath{\gamma})=\arg\min_{u>0} F(u,\ensuremath{\gamma})<1/\sqrt{2\ensuremath{\gamma}-1}$ is the unique minimizer of \begin{align} F(u,\ensuremath{\gamma})=\int_0^1 f(\ensuremath{\omega},u,\ensuremath{\gamma}) d\ensuremath{\omega} \quad\text{with}\quad f(\ensuremath{\omega},u,\ensuremath{\gamma})=\frac{(\ensuremath{\omega}^2+u^2)^{\ensuremath{\gamma}}}{u} \end{align} which is, for fixed $\ensuremath{\gamma}$, a continuous and convex function over ${\ensuremath{\mathbb R}}_+$. For $\ensuremath{\gamma}\in\{1,2,3\}$ the minimizer can be derived in closed form as \begin{align} g(1) = \sqrt{1/3},\ \ g(2) = \sqrt{ (\sqrt{32/5}-1)/9}, \ \ g(3) = \sqrt{\Big((32/7)^{1/3}-1\Big)/5}.\label{eq:ggam} \end{align} \end{lemma} \ifarxiv \begin{proof} See \appref{app:proof_lemma_ggam}. \end{proof} \fi \begin{remark} The convexity of $F(\cdot,\ensuremath{\gamma})$ can also be shown by using extensions of the Hermite-Hadamard inequality \cite{ZC10}, which allow one to show convexity over any interval. Let us note here that for any fixed parameter $h_n>0$, the average distortion $\AvDis(x_n^*\pm \ensuremath{\epsilon},h_n)$ is strictly monotone increasing in $\ensuremath{\epsilon}>0$. Hence, $x_n^*$ is the unique minimizer for any $h_n>0$. We will use this decoupling property repeatedly in the proofs \cite{GWJ18b}.
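The closed forms \eqref{eq:ggam} and the bound $g(\ensuremath{\gamma})<1/\sqrt{2\ensuremath{\gamma}-1}$ can be verified numerically; a minimal Python sketch (midpoint rule plus ternary search, exploiting the convexity of $F$ in $u$):

```python
# Numerical check of eq. (ggam): minimize F(u, gamma) = int_0^1 (w^2+u^2)^gamma / u dw
# over u > 0 and compare with the stated closed-form minimizers g(1), g(2), g(3).
import math

def F(u, gamma, m=2000):
    # Midpoint-rule approximation of the integral defining F(u, gamma).
    return sum((((i + 0.5) / m) ** 2 + u * u) ** gamma / u for i in range(m)) / m

def g_numeric(gamma):
    lo, hi = 1e-3, 1.5
    for _ in range(100):  # ternary search on the convex map u -> F(u, gamma)
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if F(m1, gamma) < F(m2, gamma):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

g_closed = {
    1: math.sqrt(1 / 3),
    2: math.sqrt((math.sqrt(32 / 5) - 1) / 9),
    3: math.sqrt(((32 / 7) ** (1 / 3) - 1) / 5),
}
for gamma, g in g_closed.items():
    assert abs(g_numeric(gamma) - g) < 1e-3
    assert g < 1 / math.sqrt(2 * gamma - 1)  # the stated upper bound
print({k: round(v, 4) for k, v in g_closed.items()})  # → {1: 0.5774, 2: 0.4123, 3: 0.3632}
```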
\end{remark} To derive our main result, we need some general properties of the optimal regions. \begin{lemma}\label{lemma:allActive} Let $\ensuremath{\Omega}=[0,A]$ for some $A>0$. The $N-$level parameter optimized quantizer $({\ensuremath{\mathbf P}}^*,\vh^*)\in\ensuremath{\Omega}^N\times{\ensuremath{\mathbb R}}_+^N$ for a uniform source density in $\ensuremath{\Omega}$ has optimal quantization regions $\Vor_n({\ensuremath{\mathbf P}}^*,\vh^*)=[b^*_{n-1},b^*_n]$ with $0\leq b^*_{n-1}<b^*_n\leq A$ and optimal quantization points $x_n^*=(b_n^*+b^*_{n-1})/2$ for $n\in[N]$, i.e., each region is a closed interval with positive measure and centroidal quantization points. \end{lemma} \ifarxiv \begin{proof} See \appref{app:proof_lemma_active}. \end{proof} \fi \begin{remark} Hence, for an $N-$level parameter optimized quantizer, all quantization points are used. This is intuitive, since each additional quantization point should reduce the distortion of the quantizer by partitioning the source into regions of non-zero measure. \end{remark} \begin{theorem}\label{thm:commonheight} Let $N\in{\ensuremath{\mathbb N}}$, $\ensuremath{\Omega}=[0,A]$ for some $A>0$, and $\ensuremath{\gamma}\geq 1$.
The \emph{unique $N-$level parameter optimized quantizer} $({\ensuremath{\mathbf P}}^*,\vh^*,\ensuremath{\mathcal R}^*)$ is the uniform scalar quantizer with identical parameter values, given for $n\in[N]$ by % \begin{align} {\ensuremath{\mathbf p}}_n^*=\ensuremath{x^{*}}_n= \frac{A}{2N} (2n-1),\quad h^*=h_n^*= \frac{A}{2N} g(\ensuremath{\gamma}),\quad \ensuremath{\mathcal R}_n^*= \left[\frac{A}{N}(n-1),\frac{A}{N}n\right] \label{eq:optimaldeploy} \end{align} % with minimum average distortion % \begin{align} \AvDis({\ensuremath{\mathbf P}}^*,\vh^*,\ensuremath{\mathcal R}^*)=\left(\frac{A}{2N}\right)^{2\ensuremath{\gamma}-1} \int_0^1 \frac{\big(\ensuremath{\omega}^2+g^2(\ensuremath{\gamma})\big)^{\ensuremath{\gamma}}}{g(\ensuremath{\gamma})} d\ensuremath{\omega}\label{eq:optimumavpow}. \end{align} % For $\ensuremath{\gamma}\in\{1,2,3\}$, the closed form $g(\ensuremath{\gamma})$ is provided in \eqref{eq:ggam}. \end{theorem} \ifarxiv \begin{proof} See \appref{sec:proof_theorem}. \end{proof} \fi \begin{example} We plot the optimal heights and optimal average distortion for a uniform GT density in $[0,1]$ over various $\ensuremath{\alpha}$ and $N=2$ in \figref{fig:goptdopt}. Note that the factor $A/2N=1/4$ will play a crucial role for the height and distortion scaling. Moreover, the distortion decreases exponentially in $\alpha$ if $A/2N<1$. Let us set $\ensuremath{\beta}=1=A$. Then, the optimal UAV deployment is pictured in \figref{fig:uavonedim} for $N=2$ and $N=4$. The maximum elevation angle $\ensuremath{\theta}_{\text{max}}$ is constant for each UAV and does not change if the number of UAVs, $N$, increases.
Moreover, it is also independent of $A$ and $\ensuremath{\beta}$, since with \eqref{eq:optimaldeploy} we have $\mu_n^*=x^*_n-x^*_{n-1}=A/N$ and % \begin{align} \cos(\ensuremath{\theta}_{\text{max}})=\cos(\ensuremath{\theta}_n)= \frac{h^*}{\mu^*_n/2}= \frac{2N}{A}\frac{A}{2N} g(1)= \frac{1}{\sqrt{3}}. \end{align} \end{example} \begin{figure} \begin{minipage}[b]{0.4\textwidth} \vspace{1ex} \def\svgwidth{1.1\textwidth} \scriptsize{ \input{galphaDalpha_bound.pdf_tex}} \vspace{-3.2ex} \caption{{\small Optimal height (solid) with bound (dashed) and average distortion (dotted) for $N=2,A=1$ and uniform GT density.}} \label{fig:goptdopt} \end{minipage} \hfill \begin{minipage}[b]{0.56\textwidth} \hspace{-2ex} \def\svgwidth{1.08\textwidth} \scriptsize{ \input{OptUAVonedimensional.pdf_tex}} \caption{{\small Optimal UAV deployment in one dimension for $A=1,\ensuremath{\alpha}=1$ and $N=2,4$ over a uniform GT density by \eqref{eq:optimaldeploy}.}} \label{fig:uavonedim} \end{minipage} \end{figure} \section{Lloyd-like Algorithms and Simulation Results} In this section, we introduce two Lloyd-like algorithms, Lloyd-A and Lloyd-B, to optimize the quantizer for two-dimensional scenarios. The proposed algorithms iterate between two steps: (i) the reproduction points are optimized through gradient descent while the partitioning is fixed; (ii) the partitioning is optimized while the reproduction points are fixed. In Lloyd-A, all UAVs (or reproduction points) share a common flight height, while Lloyd-B allows UAVs at different flight heights. In what follows, we provide the simulation results over the two-dimensional target region $\Omega=[0,10]^2$ with uniform and non-uniform density functions.
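For intuition, the two alternating steps can be sketched in a much simplified setting: one dimension, uniform density, and a common height as in Lloyd-A (with a common height the Möbius regions reduce to midpoint Voronoi intervals). This is our own illustration, not the two-dimensional gradient-descent implementation used in the simulations; `g_gamma` stands for the factor $g(\ensuremath{\gamma})$ of \lemref{lem:ggam}:

```python
def lloyd_1d(points, g_gamma, A=1.0, iters=2000):
    # Simplified 1-D, common-height Lloyd-like iteration on a uniform
    # density over [0, A] (illustration only; all names are ours).
    x = sorted(points)
    N = len(x)
    for _ in range(iters):
        # partition step: midpoint boundaries (Voronoi cells, common height)
        b = [0.0] + [(x[n] + x[n + 1]) / 2 for n in range(N - 1)] + [A]
        # point step: move each point to the centroid of its cell
        x = [(b[n] + b[n + 1]) / 2 for n in range(N)]
    b = [0.0] + [(x[n] + x[n + 1]) / 2 for n in range(N - 1)] + [A]
    # per-cell optimal common height h = g(gamma) * (cell width) / 2
    h = [g_gamma * (b[n + 1] - b[n]) / 2 for n in range(N)]
    return x, h
```

Started from arbitrary distinct points, the iteration converges to the uniform quantizer of \thmref{thm:commonheight}, i.e., $x_n^*=(2n-1)A/(2N)$ with common height $g(\ensuremath{\gamma})A/(2N)$.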
The non-uniform density function is a Gaussian mixture of the form $\sum_{k=1}^{3}\frac{A_k}{\sqrt{2\pi}\sigma^2_k}\exp{\left(-\frac{\|\ensuremath{\omega}-c_k\|^2}{2\sigma_k}\right)}$, where the weights $A_k$, $k=1,2,3$, are $0.5$, $0.25$, $0.25$, the means $c_k$ are $(3,3)$, $(6,7)$, $(7.5,2.5)$, and the standard deviations $\sigma_k$ are $1.5$, $1$, and $2$, respectively. \begin{figure}[!htb] \setlength\abovecaptionskip{0pt} \setlength\belowcaptionskip{0pt} \centering \subfloat[]{\includegraphics[width=2.9in]{DistortionComparsionUniform} \label{uniformDistortion}} \hfil \subfloat[]{\includegraphics[width=2.9in]{DistortionComparsionNonUniform} \label{nonuniformDistortion}} \captionsetup{justification=justified} \caption{\small{The performance comparison of Lloyd-A, Lloyd-B and Random Deployment (RD). (a) Uniform density. (b) Non-uniform density.}} \label{Distortion} \end{figure} To evaluate the performance of the proposed algorithms, we compare them with the average distortion of $100$ random deployments (RDs). Figs. \ref{uniformDistortion} and \ref{nonuniformDistortion} show that the proposed algorithms outperform the random deployment on both uniformly and non-uniformly distributed target regions. From \figref{uniformDistortion}, one can also find that the distortions achieved by Lloyd-A and Lloyd-B are very close, indicating that the optimality of the common height, as proved for the one-dimensional case in \secref{sec:optmize1D}, might be extended to the two-dimensional case when the density function is uniform. However, one can find a non-negligible gap between Lloyd-A and Lloyd-B in \figref{nonuniformDistortion} where the density function is non-uniform. For instance, given $16$ UAVs and the path-loss exponent $\alpha=6$, Lloyd-A's distortion is $40.17$ while Lloyd-B obtains a smaller distortion, $28.25$, by placing UAVs at different heights. Figs.
\ref{uniformPartitions32} and \ref{uniformPartitions100} illustrate the UAV ground projections and their partitions on a uniformly distributed square region. As the number of UAVs increases, the UAV partitions approximate hexagons, which implies that the optimality of the congruent partition (Theorem \ref{thm:commonheight}) might be extended to uniformly distributed users for two-dimensional sources. \ifarxiv However, the UAV projections in Figs. \ref{nonuniformPartitions32} and \ref{nonuniformPartitions100} show that a congruent partition is no longer a necessary condition for the optimal quantizer when the distribution is non-uniform. \else However, our simulations in \cite{GWJ18b} show that a congruent partition is no longer a necessary condition for the optimal quantizer when the source distribution is non-uniform. \fi \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=2.8in]{HR32} \label{uniformPartitions32}} \hfil \subfloat[]{\includegraphics[width=2.8in]{HR100} \label{uniformPartitions100}} \captionsetup{justification=justified} \vspace{-2ex} \caption{\small{The UAV projections on the ground with generalized Voronoi Diagrams where $\alpha=2$ and the source distribution is uniform. (a) 32 UAVs. (b) 100 UAVs.}} \label{uniformDistortionPartition2} \end{figure} \ifarxiv \begin{figure}[t] \centering \subfloat[]{\includegraphics[width=2.8in]{HR32nonUniform} \label{nonuniformPartitions32}} \hfil \subfloat[]{\includegraphics[width=2.8in]{HR100nonUniform} \label{nonuniformPartitions100}} \captionsetup{justification=justified} \vspace{-2ex} \caption{\small{The UAV projections on the ground with generalized Voronoi Diagrams where $\alpha=2$ and the source distribution is non-uniform. (a) 32 UAVs. (b) 100 UAVs.}} \label{uniformDistortionPartition3} \end{figure} \else \fi \section{Conclusion} We studied quantizers with parameterized distortion measures for an application to UAV deployments.
Instead of using the traditional mean squared distance as the distortion, we introduced a distortion function that models the energy consumption of UAVs as a function of their heights. We derived the unique parameter optimized quantizer -- a uniform scalar quantizer with an optimal common parameter -- for a uniform source density in one-dimensional space. In addition, we designed two Lloyd-like algorithms to minimize the distortion in two-dimensional space. Numerical simulations demonstrate that the common height property extends to two-dimensional space for a uniform density. \ifarxiv \appendices \section{Proof of \lemref{lem:ggam}}\label{app:proof_lemma_ggam} To find the optimal $1-$level parameter quantizer $(x^*,h^*)$ for a uniform density $\ensuremath{\lambda}(\ensuremath{\omega})=1/A$, we need to satisfy \eqref{eq:criticalpoint}, i.e., for\footnote{Note that there is no optimization over the regions, since there is only one.} $\ensuremath{\Omega}=\Vor_1=\Vor_1^*=[0,A]$ % \begin{align} 0&=\int_0^A (x^*-\ensuremath{\omega})\big( (x^*-\ensuremath{\omega})^2+h^{*2}\big)^{\ensuremath{\gamma}-1} d\ensuremath{\omega}. \end{align} % Substituting $x^*-\ensuremath{\omega}$ by $\ensuremath{\omega}$ we get % \begin{align} 0&=\int_{x^*-A}^{x^*} \ensuremath{\omega} \big( \ensuremath{\omega}^2+h^{*2}\big)^{\ensuremath{\gamma}-1} d\ensuremath{\omega}.
\end{align} % Since the integral kernel is an odd function in $\ensuremath{\omega}$ and $x^*\in[0,A]$, the following must hold % \begin{align} 0=-\int_0^{x^*-A} \ensuremath{\omega}(\ensuremath{\omega}^2+h^{*2})^{\ensuremath{\gamma}-1}d\ensuremath{\omega} + \int_0^{x^*}\ensuremath{\omega}(\ensuremath{\omega}^2+h^{*2})^{\ensuremath{\gamma}-1}d\ensuremath{\omega} \intertext{by substituting $\ensuremath{\omega}$ by $-\ensuremath{\omega}$ we get} \int_{0}^{A-x^*} \ensuremath{\omega}(\ensuremath{\omega}^2+h^{*2})^{\ensuremath{\gamma}-1}d\ensuremath{\omega} = \int_0^{x^*} \ensuremath{\omega}(\ensuremath{\omega}^2+h^{*2})^{\ensuremath{\gamma}-1}d\ensuremath{\omega}. \end{align} % Hence, for any choice of $h^*$ we must have $x^*=A-x^*$, which is equivalent to $x^*=A/2$. To find the optimal parameter, we can just insert $x^*$ into the average distortion % \begin{align} \AvDis(x^*,h) &= \frac{1}{A}\int_0^A \frac{((x^*-\ensuremath{\omega})^2+h^2)^{\ensuremath{\gamma}} }{h}d\ensuremath{\omega} = \frac{2}{A} \int_0^{A/2} \frac{(\ensuremath{\omega}^2+h^2)^{\ensuremath{\gamma}}}{h}d\ensuremath{\omega}\label{eq:AvDisxstar} \intertext{where we substituted again and inserted $x^*=A/2$. By substituting $\ensuremath{\omega}$ with $2\ensuremath{\omega}/A$ and $h$ with $u=2h/A$ we get} &= \int_0^1 \frac{2}{A}\frac{((A\ensuremath{\omega}/2)^2 + (Au/2)^2)^{\ensuremath{\gamma}}}{u}d\ensuremath{\omega} =\left(\frac{A}{2}\right)^{2\ensuremath{\gamma}-1} \int_0^1 f(\ensuremath{\omega},u,\ensuremath{\gamma})d\ensuremath{\omega} \end{align} % where for each $\ensuremath{\gamma}\geq 1$ the integral kernel $f$ is a convex function in ${\ensuremath{\mathbf x}}=(\ensuremath{\omega},u)$ over ${\ensuremath{\mathbb R}}_+^2$. Let us rewrite $f$ as % \begin{align} f(\ensuremath{\omega},u,\ensuremath{\gamma})= \frac{(\ensuremath{\omega}^2+u^2)^{\ensuremath{\gamma}}}{u}= \frac{\Norm{(\ensuremath{\omega},u)}_2^{2\ensuremath{\gamma}}}{u}.
\end{align} % Clearly, $\Norm{{\ensuremath{\mathbf x}}}_2$ is a convex and continuous function in ${\ensuremath{\mathbf x}}$ over ${\ensuremath{\mathbb R}}^2$ and since $(\cdot)^{2\ensuremath{\gamma}}$ with $2\ensuremath{\gamma}\geq2$ is a strictly increasing, strictly convex, and continuous function, the composition $f({\ensuremath{\mathbf x}},\ensuremath{\gamma})$ is a strictly convex and continuous function over ${\ensuremath{\mathbb R}}_+^2$. Hence, for any distinct ${\ensuremath{\mathbf x}}_1\neq{\ensuremath{\mathbf x}}_2\in{\ensuremath{\mathbb R}}^2$ we have % \begin{align} \Norm{\ensuremath{\lambda}{\ensuremath{\mathbf x}}_1+(1-\ensuremath{\lambda}){\ensuremath{\mathbf x}}_2}_2^{2\ensuremath{\gamma}}<\ensuremath{\lambda} \Norm{{\ensuremath{\mathbf x}}_1}_2^{2\ensuremath{\gamma}} + (1-\ensuremath{\lambda})\Norm{{\ensuremath{\mathbf x}}_2}_2^{2\ensuremath{\gamma}} \end{align} % for all $\ensuremath{\lambda}\in(0,1)$. But then we also have for any $u_1,u_2\in{\ensuremath{\mathbb R}}_+$ and $\ensuremath{\omega}\geq 0$ % \begin{align} f(\ensuremath{\lambda} u_1 +(1-\ensuremath{\lambda})u_2,\ensuremath{\omega},\ensuremath{\gamma}) < \frac{\ensuremath{\lambda}\Norm{(\ensuremath{\omega},u_1)}_2^{2\ensuremath{\gamma}} + (1-\ensuremath{\lambda})\Norm{(\ensuremath{\omega},u_2)}_2^{2\ensuremath{\gamma}}}{\ensuremath{\lambda} u_1+(1-\ensuremath{\lambda})u_2} \label{eq:fnormgam}. \end{align} % Considering the following inequality % \begin{align} \frac{1}{u_1}\! +\!\frac{1}{u_2} &= \left(\frac{1}{u_1}\! + \!\frac{1}{u_2}\right)\frac{\ensuremath{\lambda} u_1 \!+\!(1\!-\!\ensuremath{\lambda})u_2}{\ensuremath{\lambda} u_1 \!+\! (1\!-\!\ensuremath{\lambda})u_2} =\frac{ \left(\ensuremath{\lambda}\! +\!\frac{(1\!-\!\ensuremath{\lambda})u_2}{u_1}\! +\! (1\!-\!\ensuremath{\lambda}) \!+\! \frac{\ensuremath{\lambda} u_1}{u_2}\right)}{\ensuremath{\lambda} u_1 \!+\! (1\!-\!\ensuremath{\lambda})u_2} > \frac{1}{\ensuremath{\lambda} u_1\! +\!
(1\!-\!\ensuremath{\lambda})u_2}\notag \end{align} % and \eqref{eq:fnormgam}, we obtain % \begin{align} f(\ensuremath{\lambda} u_1 +(1-\ensuremath{\lambda})u_2,\ensuremath{\omega},\ensuremath{\gamma}) < \ensuremath{\lambda} f(u_1,\ensuremath{\omega},\ensuremath{\gamma}) + (1-\ensuremath{\lambda})f(u_2,\ensuremath{\omega},\ensuremath{\gamma}) \end{align} % for every $\ensuremath{\lambda}\in(0,1)$. Hence, the integral kernel is a strictly convex function of $u$ for every $\ensuremath{\omega}\geq0$ and $\ensuremath{\gamma}\geq 1$, and since an integral of convex functions is again a convex function, we have shown convexity of $F(u,\ensuremath{\gamma})$ for $u>0$. Note that $f(u,\ensuremath{\omega},\ensuremath{\gamma})$ is continuous in ${\ensuremath{\mathbb R}}_+^2$ since it is a product of the continuous functions $\Norm{(u,\ensuremath{\omega})}_2^{2\ensuremath{\gamma}}$ and $1/u$, and so is $F(u,\ensuremath{\gamma})$. % Therefore, the only critical point of $F(\cdot,\ensuremath{\gamma})$ will be the unique global minimizer % \begin{align} g(\ensuremath{\gamma})=\arg\min_{u>0} F(u,\ensuremath{\gamma}), \end{align} % which is defined by the vanishing of the first derivative: % \begin{align} F'(u) &\!= \!\int_0^{1}\! (\ensuremath{\omega}^2\!+\!u^2)^{\ensuremath{\gamma}-1} \left( (2\ensuremath{\gamma}\!-\!1)-\frac{\ensuremath{\omega}^2}{u^2}\right)d\ensuremath{\omega} \!=\!\frac{1}{u^{2}}\!\int_{0}^{1} (\ensuremath{\omega}^2\!+\!u^2)^{\ensuremath{\gamma}-1}\left((2\ensuremath{\gamma}\!-\!1)u^2-\ensuremath{\omega}^2\right)d\ensuremath{\omega}\label{eq:Fderivative}. \end{align} % Hence, $F'(u)$ can only vanish if $u<1/\sqrt{2\ensuremath{\gamma}-1}$, which is an upper bound on $g(\ensuremath{\gamma})$.
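Both this bound and the closed forms in \eqref{eq:ggam} can be checked numerically. In the sketch below (our own code) $F(u,\ensuremath{\gamma})$ is evaluated exactly for integer $\ensuremath{\gamma}$ via the binomial expansion $\int_0^1(\ensuremath{\omega}^2+u^2)^{\ensuremath{\gamma}}d\ensuremath{\omega}=\sum_{k=0}^{\ensuremath{\gamma}}\binom{\ensuremath{\gamma}}{k}u^{2k}/(2(\ensuremath{\gamma}-k)+1)$, and the unique minimizer is located by a ternary search, which is valid because $F(\cdot,\ensuremath{\gamma})$ is convex:

```python
import math

def F(u, gamma):
    # F(u, gamma) = (1/u) * \int_0^1 (w^2 + u^2)^gamma dw, exact for integer gamma
    return sum(math.comb(gamma, k) * u ** (2 * k) / (2 * (gamma - k) + 1)
               for k in range(gamma + 1)) / u

def g_num(gamma):
    # Ternary search for the unique minimizer of the convex map u -> F(u, gamma),
    # restricted to (0, 1/sqrt(2*gamma - 1)), where F' can vanish.
    lo, hi = 1e-9, 1.0 / math.sqrt(2 * gamma - 1)
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if F(m1, gamma) < F(m2, gamma):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

The numerical minimizers agree with $g(1)$, $g(2)$, $g(3)$ of \eqref{eq:ggam} and stay below the bound $1/\sqrt{2\ensuremath{\gamma}-1}$.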
The optimal parameter for minimizing the average distortion \eqref{eq:AvDisxstar} is then % \begin{align} h^*= \frac{A}{2} g(\ensuremath{\gamma}) \quad\text{with}\quad \AvDis(x^*,h^*)= \left(\frac{A}{2}\right)^{2\ensuremath{\gamma}-1} F(g(\ensuremath{\gamma}),\ensuremath{\gamma}). \end{align} % Analytical solutions for $F'(u)=0$ are possible for integer valued $\ensuremath{\gamma}$. Let us set $0<x=u^2$ in \eqref{eq:Fderivative}, then for $\ensuremath{\gamma}\in{\ensuremath{\mathbb N}}$, the integrand in \eqref{eq:Fderivative} will be a polynomial in $\ensuremath{\omega}$ of degree $2\ensuremath{\gamma}$ and in $x$ of degree $\ensuremath{\gamma}$. For $\ensuremath{\gamma}\in\{1,2,3\}$ the integrand will be % \begin{align} (\ensuremath{\omega}^2+x)^0 (1x-\ensuremath{\omega}^2)&=x-\ensuremath{\omega}^2\\ (\ensuremath{\omega}^2+x)^1 (3x-\ensuremath{\omega}^2)&=3x^2+2\ensuremath{\omega}^2x -\ensuremath{\omega}^4 \\ (\ensuremath{\omega}^2+x)^2 (5x-\ensuremath{\omega}^2)&=5x^3 +9\ensuremath{\omega}^2 x^2 +3\ensuremath{\omega}^4 x -\ensuremath{\omega}^6 \end{align} % which, after integrating over $\ensuremath{\omega}\in[0,1]$, yield % \begin{align} 0 &= \ensuremath{\omega}(x-\frac{\ensuremath{\omega}^2}{3})\Big|_{\ensuremath{\omega}=1}\label{eq:xfirst}\\ 0 &= \ensuremath{\omega}( 3x^2 +\frac{2\ensuremath{\omega}^2x}{3} -\frac{\ensuremath{\omega}^4}{5} )\Big|_{\ensuremath{\omega}=1}\label{eq:xtwo}\\ 0 &= \ensuremath{\omega}( 5x^3 + 3\ensuremath{\omega}^2x^2 +\frac{3\ensuremath{\omega}^4x}{5} -\frac{\ensuremath{\omega}^6}{7} )\Big|_{\ensuremath{\omega}=1}\label{eq:ccubic} \end{align} % Solving \eqref{eq:xfirst} for $x$ yields the only feasible solution % \begin{align} x=\frac{1}{3} \quad\ensuremath{\Rightarrow} \quad g(1)=\frac{1}{\sqrt{3}}\approx 0.577.
\end{align} % The solutions of \eqref{eq:xtwo} are % \begin{align} x_{\pm}= -\frac{1}{9} \pm \sqrt{\frac{1}{81}+\frac{1}{15}} = \frac{\pm \sqrt{32/5} -1}{9}. \end{align} % Since only positive roots are allowed, we get as the only feasible solution % \begin{align} g(2)=\frac{\sqrt{\sqrt{32/5}-1}}{3}\approx 0.412. \end{align} % Finally, the cubic equation \eqref{eq:ccubic} results in % \begin{align} 5x^3 + 3x^2 + \frac{3}{5} x - \frac{1}{7}=0. \end{align} % The solution of a cubic equation can be found in \cite[2.3.2]{Zwi03} by calculating the discriminant % \begin{align} \ensuremath{\Delta}=q^2+4p^3 \quad\text{with}\quad q=\frac{2b^3-9abc+27a^2d}{27a^3},p=\frac{3ac-b^2}{9a^2} \end{align} % Let us identify $a=5,b=3,c=3/5$ and $d=-1/7$, then we get % \begin{align} q&=\frac{6\cdot 9-9\cdot 9 - 27\cdot 5^2\cdot1/7}{27\cdot 5^3} =-\frac{3}{3\cdot 5\cdot 25} -\frac{1}{5\cdot 7}=-\frac{32}{25\cdot35}\\ \ensuremath{\Delta}&=q^2 + 4 \left(\frac{3\cdot 3 -9}{9\cdot 5^2}\right)^3=q^2>0 \end{align} % which indicates only one real-valued root, given by % \begin{align} x=\ensuremath{\alpha}_+^{1/3}+\ensuremath{\alpha}_-^{1/3}-\frac{b}{3a} \quad\text{with}\quad \ensuremath{\alpha}_{\pm}=\frac{-q\pm\sqrt{\ensuremath{\Delta}}}{2} = \left\{\frac{32}{25\cdot 35}, 0 \right\} \end{align} % which computes to % \begin{align} x=\left( \frac{32}{5^3\cdot 7}\right)^{1/3}- \frac{1}{5} =\frac{(\frac{32}{7})^{1/3}-1}{5} \ensuremath{\Rightarrow} g(3)=\sqrt{\frac{(\frac{32}{7})^{1/3}-1}{5}} \approx 0.363. \end{align} % \fi \ifarxiv% \section{Proof of \lemref{lemma:allActive}}\label{app:proof_lemma_active} % Although this statement seems trivial, it is not straightforward to show. We will use the quantization relaxation for the average distortion $\AvDis$ in \eqref{eq:Pbar} to show that the $N-$level parameter optimized quantizer has strictly smaller distortion than the $(N-1)-$level optimized quantizer \eqref{eq:optPbar}.
We define, as in quantization theory (see, for example, \cite{GN98}), an \emph{$N-$level quantizer} for $\ensuremath{\Omega}$ by a (disjoint) partition $\ensuremath{\mathcal R}=\{\ensuremath{\mathcal R}_n\}_{n=1}^N\subset \ensuremath{\Omega}$ of $\ensuremath{\Omega}$, assigning to each partition region $\ensuremath{\mathcal R}_n$ a quantization-parameter point $({\ensuremath{\mathbf p}}_n,h_n)\in\ensuremath{\Omega}\times{\ensuremath{\mathbb R}}_+$. The assignment rule or \emph{quantization rule} can be arbitrary, as long as the regions are independent of the values of the quantization and parameter points. Minimizing over the quantizer, that is, over all partitions and possible quantization-parameter points, yields the parameter optimized quantizer, which is by definition the optimal deployment and generates the generalized Voronoi regions as the optimal partition (tessellation\footnote{Since we consider the continuous case, the integral does not distinguish between open and closed sets.}). This holds for any density function $\ensuremath{\lambda}(\ensuremath{\omega})$ and target area $\ensuremath{\Omega}$.
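This partition optimality can be illustrated numerically in one dimension (our own sketch, with $\ensuremath{\gamma}=2$ and the distortion kernel $((p-\ensuremath{\omega})^2+h^2)^{\ensuremath{\gamma}}/h$ used in the appendix proofs): assigning each source point to its minimum-distortion quantization-parameter pair never does worse than an arbitrary fixed interval partition.

```python
import random

def dis(p, h, w, gamma=2):
    # 1-D parameterized distortion kernel
    return ((p - w) ** 2 + h * h) ** gamma / h

random.seed(0)
A, N, n = 1.0, 3, 20000
P = sorted(random.uniform(0, A) for _ in range(N))   # quantization points
H = [random.uniform(0.05, 0.5) for _ in range(N)]    # heterogeneous parameters
cuts = [0.0] + sorted(random.uniform(0, A) for _ in range(N - 1)) + [A]

d_fixed, d_vor = 0.0, 0.0
for k in range(n):
    w = A * (k + 0.5) / n
    i = next(j for j in range(N) if cuts[j] <= w <= cuts[j + 1])
    d_fixed += dis(P[i], H[i], w) / n                          # fixed partition
    d_vor += min(dis(P[j], H[j], w) for j in range(N)) / n     # min assignment
```

Since the minimum is taken pointwise, `d_vor <= d_fixed` holds for every choice of points, parameters, and partition.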
To see this\footnote{We use the same argument as in the proof of \cite[Prop.1]{KJ17}.}, let us start with any quantizer $({\ensuremath{\mathbf P}},\vh,\ensuremath{\mathcal R})$ for $\ensuremath{\Omega}$ yielding the average distortion % \begin{align} \AvDis({\ensuremath{\mathbf P}},\vh,\ensuremath{\mathcal R}) &=\sum_{n=1}^N \int_{\ensuremath{\mathcal R}_n} \Dis({\ensuremath{\mathbf p}}_n,h_n,\ensuremath{\omega})\ensuremath{\lambda}(\ensuremath{\omega}) d\ensuremath{\omega} \geq \sum_{n=1}^N \int_{\ensuremath{\mathcal R}_n} \left(\min_{m\in[N]} \Dis({\ensuremath{\mathbf p}}_m,h_m,\ensuremath{\omega}) \right) \ensuremath{\lambda}(\ensuremath{\omega})d\ensuremath{\omega}\notag\\ &=\int_{\ensuremath{\Omega}} \min_{m\in[N]} \Dis({\ensuremath{\mathbf p}}_m,h_m,\ensuremath{\omega}) \ensuremath{\lambda}(\ensuremath{\omega})d\ensuremath{\omega} =\sum_{n=1}^{N} \int_{\Vor_n({\ensuremath{\mathbf P}},\vh)} \Dis({\ensuremath{\mathbf p}}_n,h_n,\ensuremath{\omega}) \ensuremath{\lambda}(\ensuremath{\omega})d\ensuremath{\omega} \end{align} % where equality in the first inequality is achieved only if, for every $\ensuremath{\omega}\in \ensuremath{\mathcal R}_n$, the pair $({\ensuremath{\mathbf p}}_n,h_n)$ is the optimal quantization-parameter point with respect to $\Dis$, which is the definition of the generalized Voronoi region $\Vor({\ensuremath{\mathbf P}},\vh)$. Therefore, minimizing over all partitions gives equality, i.e.
% \begin{align} \min_{\ensuremath{\mathcal R}} \AvDis({\ensuremath{\mathbf P}},\vh,\ensuremath{\mathcal R})=\AvDis({\ensuremath{\mathbf P}},\vh,\Vor({\ensuremath{\mathbf P}},\vh)) \end{align} % for any $({\ensuremath{\mathbf P}},\vh)\in\ensuremath{\Omega}^N\times{\ensuremath{\mathbb R}}_+^N$. Hence, we have shown that the parameterized distortion quantizer optimization problem is equivalent to the locational-parameter optimization problem % \begin{align} \min_{{\ensuremath{\mathbf P}}\in\ensuremath{\Omega}^N,\vh\in{\ensuremath{\mathbb R}}_+^N} \min_{\ensuremath{\mathcal R}\in\ensuremath{\Omega}^N} \AvDis({\ensuremath{\mathbf P}},\vh,\ensuremath{\mathcal R}) = \min_{{\ensuremath{\mathbf P}}\in\ensuremath{\Omega}^N,\vh\in{\ensuremath{\mathbb R}}_+^N} \AvDis({\ensuremath{\mathbf P}},\vh,\Vor({\ensuremath{\mathbf P}},\vh))=\AvDis({\ensuremath{\mathbf P}}^*,\vh^*,\Vor^*).\label{eq:optquanteqoptdeploy2} \end{align} % We need to show that for the optimal $N-$level parameter-quantizer $({\ensuremath{\mathbf P}}^*,\vh^*,\Vor^*)$ with $\Vor^*=\Vor({\ensuremath{\mathbf P}}^*,\vh^*)$, we have $\mu(\Vor^*_n)>0$ for all $n\in[N]$. Let us first show that each region is indeed a closed interval, i.e., $\Vor_n^*=[b^*_{n-1},b^*_n]$ with $0\leq b^*_{n-1}\leq b^*_n\leq A$. By the definition of the Möbius regions in \lemref{lem:moebiusdia}, each dominance region is either a single interval (if it is a ball not contained in the target region or a halfspace) or two disjoint intervals (if it is a ball contained in the target region). Hence, we cannot have more than $K_n\leq 2N-2$ disjoint closed intervals for each Möbius (generalized Voronoi) region.
Therefore, the $n$th optimal Möbius region is given as $\Vor^*_n=\bigcup_{k=1}^{K_n}v_{n,k}$, where $v_{n,k}=[a_{n,k-1},a_{n,k}]$ are intervals for some $0\leq a_{n,k-1}\leq a_{n,k}\leq A$. Let us assume there are quantization points with disconnected regions, i.e., $K_n>1$ for $n\in\mathcal{I}_d$ and some $\mathcal{I}_d\subset[N]$. Then, we rearrange the partition $\Vor^*$ by concatenating the $K_n$ disconnected intervals $v_{n,k}$ to $\ensuremath{\mathcal R}_n=[b_{n-1},b_{n}]$ for $n\in\mathcal{I}_d$ and move the connected regions appropriately such that for all $n\in[N]$ we have $\mu(\ensuremath{\mathcal R}_n)=\mu(\Vor^*_n)=b_n-b_{n-1}$ and $b_{n-1}\leq b_{n}$, where we set $b_0=0$ and $b_N=A$. For the new concatenated regions, we move each $q_n^*$ to the center of the newly arranged region, i.e., $\ensuremath{\tilde{q}}_n=\frac{b_n+b_{n-1}}{2}$ for $n\in\mathcal{I}_d$. If, for a connected region $n\in[N]\setminus\mathcal{I}_d$, the quantization point $q^*_n$ is not centroidal, then placing it at the center of the corresponding closed interval yields a strictly smaller distortion by \lemref{lem:ggam}. Hence, for the optimal quantizer, the quantization points must be centroidal and we can assume $\ensuremath{\tilde{q}}_n=(b_n+b_{n-1})/2$ for all $n\in[N]$. % In this rearrangement, we did not change the parameters $h_n^*$ at all.
The rearranged partition $\ensuremath{\mathcal R}=\{\ensuremath{\mathcal R}_n\}$ and replaced quantization points $\ensuremath{\tilde{\vp}}=(\ensuremath{\tilde{q}}_1,\dots,\ensuremath{\tilde{q}}_N)$ provide the average distortion % \begin{align} \AvDis(\ensuremath{\tilde{\vp}},\vh^*,\ensuremath{\mathcal R})=\sum_{n=1}^N \int_{b_{n-1}}^{b_n} \frac{ ((\ensuremath{\tilde{q}}_n\!-\!\ensuremath{\omega})^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^{*}} d\ensuremath{\omega} = 2\sum_{n=1}^N \int_{0}^{\frac{b_n-b_{n-1}}{2}} \frac{ (\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^{*}} d\ensuremath{\omega} \end{align} % where we substituted $\ensuremath{\omega}$ by $\ensuremath{\tilde{q}}_n\!-\!\ensuremath{\omega}$. Since the function $(\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}$ is strictly monotone increasing in $|\ensuremath{\omega}|$ for each $\ensuremath{\gamma}>0$, for any $n\in\mathcal{I}_d$, we have % \begin{align} \AvDis_n= \AvDis(\ensuremath{\tilde{q}}_n,h_n^*,\ensuremath{\mathcal R}_n)= 2\int_{0}^{\frac{b_n-b_{n-1}}{2}} \frac{ (\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^{*}} d\ensuremath{\omega} < \sum_{k=1}^{K_n}\int_{a_{n,k-1}-q_n^*}^{a_{n,k}-q_n^*} \frac{ (\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^{*}} d\ensuremath{\omega}\label{eq:Avdisn} \end{align} % since the non-zero gaps in $\bigcup_k [a_{n,k-1}-q_n^*,a_{n,k}-q_n^*]$ lead to larger $|\ensuremath{\omega}|$ in the RHS integral and therefore to a strictly larger average distortion. Therefore, the points $(\ensuremath{\tilde{\vp}},\vh^*)$ with closed intervals $\{\ensuremath{\mathcal R}_n\}$ have a strictly smaller average distortion, which contradicts the assumption that $({\ensuremath{\mathbf p}}^*,\vh^*)$ is the parameter-optimized quantizer \eqref{eq:optquanteqoptdeploy}. Hence, $K_n=1$ for each $n\in[N]$ and every $\ensuremath{\gamma}\geq 1$. Moreover, the optimal quantization points must be centroids of the intervals, i.e., $x_n^*=(b_n^*+b^*_{n-1})/2$.
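One concrete numerical instance of this merging step (our own sketch; $\ensuremath{\gamma}=1$, parameter $h=0.2$, a disconnected region $[0,0.2]\cup[0.5,0.8]$ versus the merged interval $[0,0.5]$ of the same total length with a centroidal point):

```python
def cell_cost(lo, hi, x, h, gamma=1, n=20000):
    # midpoint-rule approximation of \int_lo^hi ((x - w)^2 + h^2)^gamma / h dw
    s = 0.0
    for k in range(n):
        w = lo + (hi - lo) * (k + 0.5) / n
        s += ((x - w) ** 2 + h * h) ** gamma
    return s * (hi - lo) / (n * h)

h = 0.2
# disconnected region [0, 0.2] + [0.5, 0.8] (total length 0.5), point at 0.35
d_disc = cell_cost(0.0, 0.2, 0.35, h) + cell_cost(0.5, 0.8, 0.35, h)
# merged region [0, 0.5] of the same length, point at its centroid 0.25
d_conn = cell_cost(0.0, 0.5, 0.25, h)
```

As the argument predicts, the merged, centered configuration is strictly cheaper, because the gap pushes mass to larger $|\ensuremath{\omega}|$ where the kernel is larger.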
Now, we have to show that the optimal quantization regions $\Vor^*=\{[b^*_{n-1},b^*_n]\}_{n=1}^N$ are not single points, i.e., that $b^*_n>b^*_{n-1}$ for each $n\in[N]$. If $b^*_n=b^*_{n-1}$ for some $n$, then the $n$th average distortion $\AvDis_n$ will be zero for this quantization point, since the integral is vanishing. But then we effectively optimize over only $N-1$ quantization points. So we only need to show that an additional quantization point strictly decreases the minimum average distortion. Hence, take any non-zero optimal quantization region $\Vor_n^*=[b^*_{n-1},b^*_n]$. We know by \lemref{lem:ggam} that the optimal quantizer $q_n^*$ for some closed interval $\Vor_n^*$ must be centroidal for any parameter $h_n$. Hence, if we split $\Vor_n^*$ with $\mu_n^*=b^*_n-b^*_{n-1}$ in half and place two quantization points $q_{n_1}$ and $q_{n_2}$, with the same parameter $h_n^*$, at the centers of the two halves, we obtain, similarly to \eqref{eq:Avdisn}, % \begin{align} \AvDis_{n_1}\!+\!\AvDis_{n_2}&=\frac{1}{h_n^*} \left( \int_{b^*_{n-1}}^{b^*_{n-1}+\frac{\mu_n^*}{2}} ( (q_{n_1}-\ensuremath{\omega})^2 + h_n^{*2})^{\ensuremath{\gamma}}d\ensuremath{\omega} +\int_{b_{n-1}^*+\frac{\mu_n^*}{2}}^{b_n^*}((q_{n_2}-\ensuremath{\omega})^2+h_n^{*2})^{\ensuremath{\gamma}}d\ensuremath{\omega}\right)\notag\\ \intertext{Substituting $q_{n_i}-\ensuremath{\omega}$ by $\ensuremath{\omega}$, we get} & = \int_{-\frac{\mu_n^*}{4}}^{\frac{\mu_n^*}{4}} \frac{(\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega} + \int_{-\frac{\mu_n^*}{4}}^{\frac{\mu_n^*}{4}} \frac{(\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega}\\ & = 2\int_{0}^{\frac{\mu_n^*}{4}} \frac{(\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega} + 2\int_{0}^{\frac{\mu_n^*}{4}} \frac{ (\ensuremath{\omega}^2+h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega}\\ & < 2\!\int_{0}^{\frac{\mu_n^*}{4}}
\frac{(\ensuremath{\omega}^2\!+\!h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega} + 2\!\int_{\frac{\mu_n^*}{4}}^{\frac{\mu_n^*}{2}} \frac{(\ensuremath{\omega}^2\!+\!h_n^{*2})^{\ensuremath{\gamma}}}{h_n^*}d\ensuremath{\omega} = 2\!\int_0^{\frac{\mu_n^*}{2}} \frac{(\ensuremath{\omega}^2\!+\!h_n^{*2})^\ensuremath{\gamma}}{h_n^*} d\ensuremath{\omega}=\AvDis_n. \end{align} % Hence, splitting any region with $\mu_n^*>0$ strictly decreases the average distortion. Therefore, the $N-$level parameter optimized quantizer will have quantization boundaries $b_n\!>\!b_{n\!-\!1}$ for $n\in[N]$.% % \fi \ifarxiv \section{Proof of \thmref{thm:commonheight}}\label{sec:proof_theorem} We know by \lemref{lemma:allActive} that the optimal quantization regions are closed non-vanishing intervals $\Vor_n^*=[b_{n-1}^*,b_n^*]$ for some $b^*_{n-1}<b_n^*$ with quantization points % \begin{align} {\ensuremath{\mathbf p}}_n^*= x_n^*=\frac{b^*_n+b^*_{n-1}}{2}\label{eq:xnoptimal} \end{align} % for $n\in[N]$. Let us set $\mu_n^*=b_n^*-b_{n-1}^*$ for $n\in[N]$.
By substituting $\frac{2(x_n^*-\ensuremath{\omega})}{\mu_n^*}={ \ensuremath{ \tilde{\ome}} }$ and $\ensuremath{h^{*}}_n=\frac{u^*_n \mu_n^*}{2}$ in the average distortion, we get % \begin{align} \AvDis({\ensuremath{\mathbf P}}^*,\vh^*,\Vor^*)&=\sum_{n=1}^N \int_{b_{n-1}^*}^{b_n^*} \frac{ ((\ensuremath{x^{*}}_n-\ensuremath{\omega})^2+{\ensuremath{h^{*}}_n}^2)^{\ensuremath{\gamma}}}{\ensuremath{h^{*}}_n}\frac{d\ensuremath{\omega}}{A} = \sum_{n=1}^N \int_{1}^{-1} -\frac{ (\mu_n^{*2} { \ensuremath{ \tilde{\ome}} }^2/4 + u_n^{*2}\mu_n^{*2}/4)^{\ensuremath{\gamma}}}{u_n^* \mu_n^*/2} \frac{\mu_n^*}{2A}d{ \ensuremath{ \tilde{\ome}} }\notag\\ &= \frac{1}{ 2^{2\ensuremath{\gamma}-1}A}\sum_{n=1}^N \mu_n^{*2\ensuremath{\gamma}}\cdot\int_0^1 \frac{(\ensuremath{\omega}^2+u^{*2}_n)^{\ensuremath{\gamma}}}{u^*_n}d \ensuremath{\omega}\label{eq:AvDismutilde} \end{align} % where we used \eqref{eq:xnoptimal} to get for the integral boundaries $2(\ensuremath{x^{*}}_n-b_{n-1}^*)/\mu_n^*=1=-2(\ensuremath{x^{*}}_n-b_{n}^*)/\mu^*_n$. We do not know the value of $u^*_n$ and $\mu_n^*$ but we know that $\mu_n^*>0$ and $\sum_{n=1}^N\mu_n^*=A$ by \lemref{lemma:allActive}. Furthermore, \eqref{eq:AvDismutilde} is the minimum over all such $\mu_n>0$ and $u_n>0$. Hence, we must have % \begin{align} \AvDis({\ensuremath{\mathbf P}}^*,\vh^*,\Vor^*) = \frac{1}{2^{2\ensuremath{\gamma}-1}A} \min_{u_n>0} \min_{\substack{\mu_n>0\\ A\!=\! \sum_{n\!=\!1}^N \mu_n}} \sum_{n=1}^N \mu_n^{2\ensuremath{\gamma}}\cdot\left(\int_0^1 \frac{(\ensuremath{\omega}^2+u^2_n)^{\ensuremath{\gamma}}}{u_n}d \ensuremath{\omega}\right) = \frac{F(g(\ensuremath{\gamma}),\ensuremath{\gamma})}{2^{2\ensuremath{\gamma}-1}A} \min_{\substack{\mu_n>0\\ A\!=\! \sum_{n\!=\!1}^N \mu_n}} \sum_{n=1}^N \mu_n^{2\ensuremath{\gamma}} \notag \end{align} % where in the last equality we used \lemref{lem:ggam}.
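What remains is the minimization of $\sum_n\mu_n^{2\ensuremath{\gamma}}$ over positive widths with $\sum_n\mu_n=A$, which is attained at the equal split $\mu_n=A/N$. A quick numerical sanity check of this fact (our own sketch, $\ensuremath{\gamma}=2$):

```python
import random

# sum_n mu_n^(2*gamma) over positive mu_n with fixed sum A is minimized
# by the equal split mu_n = A/N (here gamma = 2, i.e., fourth powers).
random.seed(1)
A, N, gamma = 1.0, 5, 2
equal = N * (A / N) ** (2 * gamma)   # value at the equal split

worst_gap = 0.0
for _ in range(1000):
    # random widths mu_n > 0 with sum A, from N-1 uniform cuts of [0, A]
    cuts = sorted(random.uniform(0, A) for _ in range(N - 1))
    mu = [b - a for a, b in zip([0.0] + cuts, cuts + [A])]
    worst_gap = max(worst_gap, equal - sum(m ** (2 * gamma) for m in mu))
```

No random choice of widths falls below the equal-split value, consistent with Jensen's inequality for the convex map $t\mapsto t^{2\ensuremath{\gamma}}$.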
By the Hölder inequality, we get for $p=2\ensuremath{\gamma}$, $q=2\ensuremath{\gamma}/(2\ensuremath{\gamma}-1)$
%
\begin{align}
\sum_{n=1}^N \mu_n^{2\ensuremath{\gamma}}=\sum_{n=1}^N \mu_n^p = \sum_{n=1}^N \mu_n^p \cdot\Big(\sum_{n=1}^N (1/N)^q \Big)^{p/q} \cdot N \geq\Big( \sum_{n=1}^N \frac{\mu_n }{N}\Big)^p \cdot N = \left(\frac{A}{N}\right)^{2\ensuremath{\gamma}}N\notag
\end{align}
%
where equality is achieved if and only if $\mu_n^*=A/N$. Hence, the optimal parameter-quantizer is the uniform scalar quantizer $x_n^*=(2n-1)A/2N$ with identical parameters $h^*=(A/2N)g(\ensuremath{\gamma})$, resulting in the minimum average distortion \eqref{eq:optimumavpow}. Let us note here that, for identical parameters, the Möbius regions are closed intervals and reduce to Euclidean Voronoi regions by \lemref{lem:moebiusdia}, for which the optimal tessellation is known to be the uniform scalar quantizer; see for example \cite{GN98}.
\fi
\section*{References}
\ifarxiv
\else
\vspace{-4ex}
\fi
\printbibliography
\end{document}
\section{Introduction and Notation}
\subsection{Problem Formulation}
We consider a homogeneous isotropic unsteady neutron transport equation in a two-dimensional unit plate $\Omega=\{\vx=(x_1,x_2):\ \abs{\vx}\leq 1\}$ with one-speed velocity $\Sigma=\{\vw=(w_1,w_2):\ \vw\in\s^1\}$ as
\begin{eqnarray}
\left\{
\begin{array}{rcl}
\e^2\dt u^{\e}+\e \vw\cdot\nabla_x u^{\e}+u^{\e}-\bar u^{\e}&=&0\label{transport}\ \ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em}
u^{\e}(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
u^{\e}(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega,
\end{array}
\right.
\end{eqnarray}
where
\begin{eqnarray}\label{average 1}
\bar u^{\e}(t,\vx)=\frac{1}{2\pi}\int_{\s^1}u^{\e}(t,\vx,\vw)\ud{\vw},
\end{eqnarray}
and $\vec n$ is the outward normal vector on $\p\Omega$, with the Knudsen number $0<\e\ll1$. The initial and boundary data satisfy the compatibility condition
\begin{eqnarray}\label{compatibility condition}
h(\vx_0,\vw)=g(0,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega.
\end{eqnarray}
We intend to study the diffusive limit of $u^{\e}$ as $\e\rt0$.
Based on the flow direction, we can divide the boundary $\Gamma=\{(\vx,\vw):\ \vx\in\p\Omega\}$ into the in-flow boundary $\Gamma^-$, the out-flow boundary $\Gamma^+$, and the grazing set $\Gamma^0$ as
\begin{eqnarray}
\Gamma^{-}&=&\{(\vx,\vw):\ \vx\in\p\Omega,\ \vw\cdot\vec n<0\},\\
\Gamma^{+}&=&\{(\vx,\vw):\ \vx\in\p\Omega,\ \vw\cdot\vec n>0\},\\
\Gamma^{0}&=&\{(\vx,\vw):\ \vx\in\p\Omega,\ \vw\cdot\vec n=0\}.
\end{eqnarray}
It is easy to see $\Gamma=\Gamma^+\cup\Gamma^-\cup\Gamma^0$. The study of the neutron transport equation dates back to the 1950s.
The main methods include the explicit formula and spectral analysis of the transport operators (see \cite{Larsen1974}, \cite{Larsen1974=}, \cite{Larsen1975}, \cite{Larsen1977}, \cite{Larsen.D'Arruda1976}, \cite{Larsen.Habetler1973}, \cite{Larsen.Keller1974}, \cite{Larsen.Zweifel1974}, \cite{Larsen.Zweifel1976}). In the classical paper \cite{Bensoussan.Lions.Papanicolaou1979}, a systematic construction of the boundary layer was provided via the Milne problem. However, this construction was proved to be problematic for the steady equation in \cite{AA003}, where a new boundary layer construction based on the $\e$-Milne problem with geometric correction was presented. In this paper, we extend this result to the unsteady equation and consider a more complicated situation in which an initial layer is also involved.
\subsection{Main Results}
We first present the well-posedness of the equation (\ref{transport}).
\begin{theorem}\label{main 1}
Assume $g(t,\vx_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$ and $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$. Then for the unsteady neutron transport equation (\ref{transport}), there exists a unique solution $u^{\e}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying
\begin{eqnarray}\label{main theorem 1}
\im{u^{\e}}{[0,\infty)\times\Omega\times\s^1}\leq C(\Omega)\bigg(\nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{g}_{L^{\infty}([0,\infty)\times\Gamma^-)}\bigg).
\end{eqnarray}
\end{theorem}
Then we can show the diffusive limit of the equation (\ref{transport}).
\begin{theorem}\label{main 2}
Assume $g(t,\vx_0,\vw)\in C^4([0,\infty)\times\Gamma^-)$ and $h(\vx,\vw)\in C^4(\Omega\times\s^1)$.
Then for the unsteady neutron transport equation (\ref{transport}), the unique solution $u^{\e}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfies
\begin{eqnarray}\label{main theorem 2}
\lnm{u^{\e}-\u_0-\ub_{I,0}-\ub_{B,0}}=o(1),
\end{eqnarray}
where the interior solution $\u_0$ is defined in (\ref{expansion temp 8}), the initial layer $\ub_{I,0}$ is defined in (\ref{expansion temp 21}), and the boundary layer $\ub_{B,0}$ is defined in (\ref{expansion temp 9}). Moreover, if $g(t,\theta,\phi)=t^2\ue^{-t}\cos\phi$ and $h(\vx,\vw)=0$, then there exists a $C>0$ such that
\begin{eqnarray}\label{main theorem 3}
\lnm{u^{\e}-\uc_0-\ubc_{I,0}-\ubc_{B,0}}\geq C>0
\end{eqnarray}
when $\e$ is sufficiently small, where the interior solution $\uc_0$ is defined in (\ref{classical temp 2.}), the initial layer $\ubc_{I,0}$ is defined in (\ref{classical temp 21.}), and the boundary layer $\ubc_{B,0}$ is defined in (\ref{classical temp 1.}).
\end{theorem}
\begin{remark}
$\theta$ and $\phi$ are defined in (\ref{substitution 2}) and (\ref{substitution 4}).
\end{remark}
It is easy to see that, by a similar argument, the results in Theorem \ref{main 1} and Theorem \ref{main 2} also hold for the one-dimensional unsteady neutron transport equation, where the temporal domain is $[0,\infty)$, the spatial domain is $[0,L]$ for fixed $L>0$, and the velocity domain is $[-1/2,1/2]$.
\subsection{Notation and Structure of This Paper}
Throughout this paper, $C>0$ denotes a constant that only depends on the domain $\Omega$, but does not depend on the data. It is referred to as universal and can change from one inequality to another. When we write $C(z)$, it means a certain positive constant depending on the quantity $z$. We write $a\ls b$ to denote $a\leq Cb$.
Our paper is organized as follows: in Section 2, we establish the $L^{\infty}$ well-posedness of the equation (\ref{transport}) and prove Theorem \ref{main 1}; in Section 3, we present the asymptotic analysis of the equation (\ref{transport}); in Section 4, we give the main results of the $\e$-Milne problem with geometric correction; in Section 5, we prove the first part of Theorem \ref{main 2}; finally, in Section 6, we prove the second part of Theorem \ref{main 2}.
\section{Well-posedness of Unsteady Neutron Transport Equation}
In this section, we consider the well-posedness of the unsteady neutron transport equation
\begin{eqnarray}
\left\{
\begin{array}{rcl}
\e^2\dt u+\e\vw\cdot\nabla_xu+u-\bar u&=&f(t,\vx,\vw)\ \ \text{in}\ \ [0,\infty)\times\Omega\label{neutron},\\\rule{0ex}{1.0em}
u(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
u(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega,
\end{array}
\right.
\end{eqnarray}
where the initial and boundary data satisfy the compatibility condition
\begin{eqnarray}
h(\vx_0,\vw)=g(0,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega.
\end{eqnarray}
We define the $L^2$ and $L^{\infty}$ norms in $\Omega\times\s^1$ as usual:
\begin{eqnarray}
\nm{f}_{L^2(\Omega\times\s^1)}&=&\bigg(\int_{\Omega}\int_{\s^1}\abs{f(\vx,\vw)}^2\ud{\vw}\ud{\vx}\bigg)^{1/2},\\
\nm{f}_{L^{\infty}(\Omega\times\s^1)}&=&\sup_{(\vx,\vw)\in\Omega\times\s^1}\abs{f(\vx,\vw)}.
\end{eqnarray}
Define the $L^2$ and $L^{\infty}$ norms on the boundary as follows:
\begin{eqnarray}
\nm{f}_{L^2(\Gamma)}&=&\bigg(\iint_{\Gamma}\abs{f(\vx,\vw)}^2\abs{\vw\cdot\vec n}\ud{\vw}\ud{\vx}\bigg)^{1/2},\\
\nm{f}_{L^2(\Gamma^{\pm})}&=&\bigg(\iint_{\Gamma^{\pm}}\abs{f(\vx,\vw)}^2\abs{\vw\cdot\vec n}\ud{\vw}\ud{\vx}\bigg)^{1/2},\\
\nm{f}_{L^{\infty}(\Gamma)}&=&\sup_{(\vx,\vw)\in\Gamma}\abs{f(\vx,\vw)},\\
\nm{f}_{L^{\infty}(\Gamma^{\pm})}&=&\sup_{(\vx,\vw)\in\Gamma^{\pm}}\abs{f(\vx,\vw)}.
\end{eqnarray}
Similar notation also applies to the spaces $[0,\infty)\times\Omega\times\s^1$, $[0,\infty)\times\Gamma$, and $[0,\infty)\times\Gamma^{\pm}$.
\subsection{Preliminaries}
In order to show the $L^2$ and $L^{\infty}$ well-posedness of the equation (\ref{neutron}), we start with some preparations concerning the penalized neutron transport equation.
\begin{lemma}\label{well-posedness lemma 1}
Assume $f(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$, $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$ and $g(t,\vx_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$. Then for the penalized transport equation
\begin{eqnarray}\label{penalty equation}
\left\{
\begin{array}{rcl}
\lambda u_{\l}+\e^2\dt u_{\l}+\e\vw\cdot\nabla_xu_{\l}+u_{\l}&=&f(t,\vx,\vw)\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em}
u_{\l}(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
u_{\l}(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega,
\end{array}
\right.
\end{eqnarray}
with $\l>0$ as a penalty parameter, there exists a solution $u_{\l}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying
\begin{eqnarray}
\im{u_{\l}}{[0,\infty)\times\Omega\times\s^1}\leq \im{g}{[0,\infty)\times\Gamma^-}+\im{h}{\Omega\times\s^1}+\im{f}{[0,\infty)\times\Omega\times\s^1}.
\end{eqnarray}
\end{lemma}
\begin{proof}
The characteristics $(T(s),X(s),W(s))$ of the equation (\ref{penalty equation}) which go through $(t,\vx,\vw)$ are defined by
\begin{eqnarray}\label{character}
\left\{
\begin{array}{rcl}
(T(0),X(0),W(0))&=&(t,\vx,\vw),\\\rule{0ex}{2.0em}
\dfrac{\ud{T(s)}}{\ud{s}}&=&\e^2,\\\rule{0ex}{2.0em}
\dfrac{\ud{X(s)}}{\ud{s}}&=&\e W(s),\\\rule{0ex}{2.0em}
\dfrac{\ud{W(s)}}{\ud{s}}&=&0,
\end{array}
\right.
\end{eqnarray}
which implies
\begin{eqnarray}
\left\{
\begin{array}{rcl}
T(s)&=&t+\e^2s,\\
X(s)&=&\vx+(\e\vw)s,\\
W(s)&=&\vw.
\end{array}
\right.
\end{eqnarray}
Hence, we can rewrite the equation (\ref{penalty equation}) along the characteristics as
\begin{eqnarray}\label{well-posedness temp 31}
&&u_{\l}(t,\vx,\vw)\\
&=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-(1+\l)t_b}+\int_{0}^{t_b}f(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(1+\l)(t_b-s)}\ud{s}\bigg)\no\\
&&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg( h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-(1+\l)t/\e^2}+\int_{0}^{t/\e^2}f(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(1+\l)(t/\e^2-s)}\ud{s}\bigg)\no,
\end{eqnarray}
where the backward exit time $t_b$ is defined as
\begin{equation}\label{exit time}
t_b(\vx,\vw)=\inf\{s\geq0: (\vx-\e s\vw,\vw)\in\Gamma^-\}.
\end{equation}
Then we can naturally estimate
\begin{eqnarray}
&&\im{u_{\l}}{[0,\infty)\times\Omega\times\s^1}\\
&\leq&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg(\ue^{-(1+\l)t_b}\im{g}{[0,\infty)\times\Gamma^-}+\frac{1-\ue^{-(1+\l)t_b}}{1+\l}\im{f}{[0,\infty)\times\Omega\times\s^1}\bigg)\no\\
&&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg(\ue^{-(1+\l)t/\e^2}\im{h}{\Omega\times\s^1}+\frac{1-\ue^{-(1+\l)t/\e^2}}{1+\l}\im{f}{[0,\infty)\times\Omega\times\s^1}\bigg)\no\\
&\leq&\im{g}{[0,\infty)\times\Gamma^-}+\im{h}{\Omega\times\s^1}+\im{f}{[0,\infty)\times\Omega\times\s^1}\nonumber.
\end{eqnarray}
Since $u_{\l}$ can be explicitly traced back to the initial or boundary data, the existence naturally follows from the above estimate.
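For clarity, the representation (\ref{well-posedness temp 31}) is the standard Duhamel formula; a minimal sketch of the computation behind it, in the notation above:

```latex
% Along (T(s),X(s),W(s)), the chain rule and the equation (\ref{penalty equation}) give
% \frac{\ud}{\ud{s}}u_{\l} = \e^2\dt u_{\l}+\e W(s)\cdot\nx u_{\l} = f-(1+\l)u_{\l},
% so, with the integrating factor \ue^{(1+\l)s},
\begin{eqnarray*}
\frac{\ud}{\ud{s}}\Big(\ue^{(1+\l)s}\,u_{\l}(T(s),X(s),W(s))\Big)
=\ue^{(1+\l)s}\,f(T(s),X(s),W(s)).
\end{eqnarray*}
```

Integrating this identity from the entry point of the characteristic (the in-flow boundary at $s=-t_b$ when $t\geq\e^2t_b$, and the initial plane $\{t=0\}$ at $s=-t/\e^2$ otherwise) up to $s=0$ gives exactly the two cases in (\ref{well-posedness temp 31}).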
\end{proof}
\begin{lemma}\label{well-posedness lemma 2}
Assume $f(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$, $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$ and $g(t,\vx_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$. Then for the penalized neutron transport equation
\begin{eqnarray}
\left\{
\begin{array}{rcl}
\l u_{\l}+\e^2\dt u_{\l}+\e\vw\cdot\nabla_xu_{\l}+u_{\l}-\bar u_{\l}&=& f(t,\vx,\vw)\ \ \text{in}\ \ [0,\infty)\times\Omega\label{well-posedness penalty equation},\\\rule{0ex}{1.0em}
u_{\l}(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
u_{\l}(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0,
\end{array}
\right.
\end{eqnarray}
with $\l>0$, there exists a solution $u_{\l}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying
\begin{eqnarray}
\im{u_{\l}}{[0,\infty)\times\Omega\times\s^1}\leq \frac{1+\l}{\l}\bigg(\im{g}{[0,\infty)\times\Gamma^-}+\im{h}{\Omega\times\s^1}+\im{f}{[0,\infty)\times\Omega\times\s^1}\bigg).
\end{eqnarray}
\end{lemma}
\begin{proof}
We define an approximating sequence $\{u_{\l}^k\}_{k=0}^{\infty}$, where $u_{\l}^0=0$ and
\begin{eqnarray}\label{penalty temp 1}
\left\{
\begin{array}{rcl}
\l u_{\l}^{k}+\e^2\dt u_{\l}^k+\e\vw\cdot\nabla_xu_{\l}^k+u_{\l}^k-\bar u_{\l}^{k-1}&=&f(t,\vx,\vw)\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em}
u_{\l}^k(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
u_{\l}^k(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0.
\end{array}
\right.
\end{eqnarray}
By Lemma \ref{well-posedness lemma 1}, this sequence is well-defined and satisfies $\im{u_{\l}^k}{[0,\infty)\times\Omega\times\s^1}<\infty$.
The characteristics and the backward exit time are defined as in (\ref{character}) and (\ref{exit time}), so we can rewrite the equation (\ref{penalty temp 1}) along the characteristics as
\begin{eqnarray}
&&u_{\l}^k(t,\vx,\vw)\no\\
&=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-(1+\l)t_b}+\int_{0}^{t_b}(\bar u_{\l}^{k-1}+f)(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(1+\l)(t_b-s)}\ud{s}\bigg)\no\\
&&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg( h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-(1+\l)t/\e^2}+\int_{0}^{t/\e^2}(\bar u_{\l}^{k-1}+f)(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(1+\l)(t/\e^2-s)}\ud{s}\bigg).\no
\end{eqnarray}
We define the difference $v^k=u_{\l}^{k}-u_{\l}^{k-1}$ for $k\geq1$. Then $v^{k+1}$ satisfies
\begin{eqnarray}
v^{k+1}(t,\vx,\vw)&=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( \int_{0}^{t_b}\bar v^{k}(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(1+\l)(t_b-s)}\ud{s}\bigg)\\
&&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg(\int_{0}^{t/\e^2}\bar v^{k}(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(1+\l)(t/\e^2-s)}\ud{s}\bigg).\no
\end{eqnarray}
Since $\im{\bar v^k}{[0,\infty)\times\Omega\times\s^1}\leq\im{v^k}{[0,\infty)\times\Omega\times\s^1}$, we can directly estimate
\begin{eqnarray}
\im{v^{k+1}}{[0,\infty)\times\Omega\times\s^1}&\leq&\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}\int_0^{\min\{t/\e^2,t_b\}}\ue^{-(1+\l)(\min\{t/\e^2,t_b\}-s)}\ud{s}\\
&\leq&\frac{1-\ue^{-(1+\l)\min\{t/\e^2,t_b\}}}{1+\l}\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}.\no
\end{eqnarray}
Hence, we naturally have
\begin{eqnarray}
\im{v^{k+1}}{[0,\infty)\times\Omega\times\s^1}&\leq&\frac{1}{1+\l}\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}.
\end{eqnarray}
Thus, this is a contraction sequence for $\l>0$. Considering $v^1=u_{\l}^1$, we have
\begin{eqnarray}
\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}\leq\bigg(\frac{1}{1+\l}\bigg)^{k-1}\im{u^{1}_{\l}}{[0,\infty)\times\Omega\times\s^1},
\end{eqnarray}
for $k\geq1$.
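Since the increments decay geometrically, the series of differences converges; for completeness, the summation behind the coming limit bound is

```latex
\begin{eqnarray*}
\sum_{k=1}^{\infty}\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}
\leq\im{u^{1}_{\l}}{[0,\infty)\times\Omega\times\s^1}\sum_{k=1}^{\infty}\bigg(\frac{1}{1+\l}\bigg)^{k-1}
=\frac{1}{1-\dfrac{1}{1+\l}}\im{u^{1}_{\l}}{[0,\infty)\times\Omega\times\s^1}
=\frac{1+\l}{\l}\im{u^{1}_{\l}}{[0,\infty)\times\Omega\times\s^1}.
\end{eqnarray*}
```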
Therefore, $u_{\l}^k$ converges strongly in $L^{\infty}$ to a limit solution $u_{\l}$ satisfying
\begin{eqnarray}\label{well-posedness temp 1}
\im{u_{\l}}{[0,\infty)\times\Omega\times\s^1}\leq\sum_{k=1}^{\infty}\im{v^{k}}{[0,\infty)\times\Omega\times\s^1}\leq\frac{1+\l}{\l}\im{u_{\l}^1}{[0,\infty)\times\Omega\times\s^1}.
\end{eqnarray}
Since $u_{\l}^1$ can be rewritten along the characteristics as
\begin{eqnarray}
&&u_{\l}^1(t,\vx,\vw)\\
&=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-(1+\l)t_b}+\int_{0}^{t_b}f(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(1+\l)(t_b-s)}\ud{s}\bigg)\no\\
&&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg( h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-(1+\l)t/\e^2}+\int_{0}^{t/\e^2}f(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(1+\l)(t/\e^2-s)}\ud{s}\bigg)\no,
\end{eqnarray}
based on Lemma \ref{well-posedness lemma 1}, we can directly estimate
\begin{eqnarray}\label{well-posedness temp 2}
\im{u_{\l}^1}{[0,\infty)\times\Omega\times\s^1}\leq \im{g}{[0,\infty)\times\Gamma^-}+\im{h}{\Omega\times\s^1}+\im{f}{[0,\infty)\times\Omega\times\s^1}.
\end{eqnarray}
Combining (\ref{well-posedness temp 1}) and (\ref{well-posedness temp 2}), we can easily deduce the lemma.
\end{proof}
\subsection{$L^2$ Estimate}
It is easy to see that, as $\l\rt0$, the estimate in Lemma \ref{well-posedness lemma 2} blows up. Hence, we need to show a uniform estimate for the solution to the penalized neutron transport equation (\ref{well-posedness penalty equation}).
\begin{lemma}(Green's Identity)\label{well-posedness lemma 3}
Assume $f(t,\vx,\vw),\ g(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ and $\dt f+\vw\cdot\nx f,\ \dt g+\vw\cdot\nx g\in L^2([0,\infty)\times\Omega\times\s^1)$ with $f,\ g\in L^2([0,\infty)\times\Gamma)$.
Then for almost all $s,t\in[0,\infty)$,
\begin{eqnarray}
&&\int_s^t\iint_{\Omega\times\s^1}\bigg((\dt f+\vw\cdot\nx f)g+(\dt g+\vw\cdot\nx g)f\bigg)\ud{\vx}\ud{\vw}\ud{r}\\
&=&\int_s^t\int_{\Gamma}fg\ud{\gamma}\ud{r}+\iint_{\Omega\times\s^1}f(t)g(t)\ud{\vx}\ud{\vw}-\iint_{\Omega\times\s^1}f(s)g(s)\ud{\vx}\ud{\vw},\no
\end{eqnarray}
where $\ud{\gamma}=(\vw\cdot\vec n)\ud{s}$ on the boundary.
\end{lemma}
\begin{proof}
See \cite[Chapter 9]{Cercignani.Illner.Pulvirenti1994} and \cite{Esposito.Guo.Kim.Marra2013}.
\end{proof}
\begin{lemma}\label{well-posedness lemma 4}
The solution $u_{\l}$ to the equation (\ref{well-posedness penalty equation}) satisfies the uniform estimate in the time interval $[s,t]$,
\begin{eqnarray}\label{well-posedness temp 3}
\\
\e\tm{\bar u_{\l}}{[s,t]\times\Omega\times\s^1}
&\leq& C(\Omega)\bigg( \tm{u_{\l}-\bar u_{\l}}{[s,t]\times\Omega\times\s^1}+\tm{f}{[s,t]\times\Omega\times\s^1}+\e\tm{u_{\l}}{[s,t]\times\Gamma^{+}}\no\\
&&+\e\tm{g}{[s,t]\times\Gamma^-}\bigg)+\e^2G(t)-\e^2G(s),\no
\end{eqnarray}
where $G(t)$ is a function satisfying
\begin{eqnarray}
G(t)\leq C(\Omega)\tm{u_{\l}(t)}{\Omega\times\s^1},
\end{eqnarray}
for $0\leq\l\ll1$ and $0<\e\ll1$.
\end{lemma}
\begin{proof}
We divide the proof into several steps:\\
\ \\
Step 1:\\
We apply Lemma \ref{well-posedness lemma 3} to the solution of the equation (\ref{well-posedness penalty equation}).
Then for any $\phi\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying $\e\dt\phi+\vw\cdot\nx\phi\in L^2([0,\infty)\times\Omega\times\s^1)$ and $\phi\in L^{2}([0,\infty)\times\Gamma)$, we have \begin{eqnarray}\label{well-posedness temp 4} &&\l\int_s^t\iint_{\Omega\times\s^1}u_{\l}\phi -\e^2\int_s^t\iint_{\Omega\times\s^1}\dt\phi u_{\l}-\e\int_s^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)u_{\l}+\int_s^t\iint_{\Omega\times\s^1}(u_{\l}-\bar u_{\l})\phi\\ &=&-\e\int_s^t\int_{\Gamma}u_{\l}\phi\ud{\gamma}-\e^2\iint_{\Omega\times\s^1}u_{\l}(t)\phi(t)+\e^2\iint_{\Omega\times\s^1}u_{\l}(s)\phi(s)+\int_s^t\iint_{\Omega\times\s^1}f\phi.\no \end{eqnarray} Our goal is to choose a particular test function $\phi$. We first construct an auxiliary function $\zeta(t)$. Since $u_{\l}(t)\in L^{\infty}(\Omega\times\s^1)$, it naturally implies $\bar u_{\l}(t)\in L^{\infty}(\Omega)$ which further leads to $\bar u_{\l}(t)\in L^2(\Omega)$. We define $\zeta(t,\vx)$ on $\Omega$ satisfying \begin{eqnarray}\label{test temp 1} \left\{ \begin{array}{rcl} \Delta \zeta(t)&=&\bar u_{\l}(t)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em} \zeta(t)&=&0\ \ \text{on}\ \ \p\Omega. \end{array} \right. \end{eqnarray} In the bounded domain $\Omega$, based on the standard elliptic estimate, we have \begin{eqnarray}\label{test temp 3} \nm{\zeta(t)}_{H^2(\Omega)}\leq C(\Omega)\nm{\bar u_{\l}(t)}_{L^2(\Omega)}. \end{eqnarray} \ \\ Step 2:\\ Without loss of generality, we only prove the case with $s=0$. We plug the test function \begin{eqnarray}\label{test temp 2} \phi(t)=-\vw\cdot\nx\zeta(t) \end{eqnarray} into the weak formulation (\ref{well-posedness temp 4}) and estimate each term there. Naturally, we have \begin{eqnarray}\label{test temp 4} \nm{\phi(t)}_{L^2(\Omega)}\leq C\nm{\zeta(t)}_{H^1(\Omega)}\leq C(\Omega)\nm{\bar u_{\l}(t)}_{L^2(\Omega)}. 
\end{eqnarray}
Easily we can decompose
\begin{eqnarray}\label{test temp 5}
-\e\int_0^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)u_{\l}&=&-\e\int_0^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)\bar u_{\l}-\e\int_0^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)(u_{\l}-\bar u_{\l}).
\end{eqnarray}
We estimate the two terms on the right-hand side of (\ref{test temp 5}) separately. By (\ref{test temp 1}) and (\ref{test temp 2}), we have
\begin{eqnarray}\label{wellposed temp 1}
\\
-\e\int_0^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)\bar u_{\l}&=&\e\int_0^t\iint_{\Omega\times\s^1}\bar u_{\l}\bigg(w_1(w_1\p_{11}\zeta+w_2\p_{12}\zeta)+w_2(w_1\p_{12}\zeta+w_2\p_{22}\zeta)\bigg)\no\\
&=&\e\int_0^t\iint_{\Omega\times\s^1}\bar u_{\l}\bigg(w_1^2\p_{11}\zeta+w_2^2\p_{22}\zeta\bigg)\nonumber\\
&=&\e\pi\int_0^t\int_{\Omega}\bar u_{\l}(\p_{11}\zeta+\p_{22}\zeta)\nonumber\\
&=&\e\pi\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega)}^2\nonumber\\
&=&\half\e\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\nonumber.
\end{eqnarray}
In the second equality, the cross terms vanish due to the symmetry of the integral over $\s^1$. On the other hand, for the second term in (\ref{test temp 5}), H\"older's inequality and the elliptic estimate imply
\begin{eqnarray}\label{wellposed temp 2}
-\e\int_0^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)(u_{\l}-\bar u_{\l})&\leq&C(\Omega)\e\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\bigg(\int_0^t\nm{\zeta}^2_{H^2(\Omega)}\bigg)^{1/2}\\
&\leq&C(\Omega)\e\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nonumber.
\end{eqnarray} Based on (\ref{test temp 3}), (\ref{test temp 4}), the boundary condition of the penalized neutron transport equation (\ref{well-posedness penalty equation}), the trace theorem, H\"older's inequality and the elliptic estimate, we have \begin{eqnarray}\label{wellposed temp 3} \\ \e\int_0^t\int_{\Gamma}u_{\l}\phi\ud{\gamma}&=&\e\int_0^t\int_{\Gamma^+}u_{\l}\phi\ud{\gamma}+\e\int_0^t\int_{\Gamma^-}u_{\l}\phi\ud{\gamma}\no\\ &\leq&C(\Omega)\bigg(\e\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}+\e\tm{g}{[0,t]\times\Gamma^-}\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\bigg)\nonumber, \end{eqnarray} \begin{eqnarray}\label{wellposed temp 4} \l\int_0^t\iint_{\Omega\times\s^1}u_{\l}\phi&=&\l\int_0^t\iint_{\Omega\times\s^1}\bar u_{\l}\phi+\l\int_0^t\iint_{\Omega\times\s^1}(u_{\l}-\bar u_{\l})\phi=\l\int_0^t\iint_{\Omega\times\s^1}(u_{\l}-\bar u_{\l})\phi\\ &\leq&C(\Omega)\l\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nonumber, \end{eqnarray} \begin{eqnarray}\label{wellposed temp 5} \int_0^t\iint_{\Omega\times\s^1}(u_{\l}-\bar u_{\l})\phi\leq C(\Omega)\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}, \end{eqnarray} \begin{eqnarray}\label{wellposed temp 6} \int_0^t\iint_{\Omega\times\s^1}f\phi\leq C(\Omega)\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\nm{f}_{L^2([0,t]\times\Omega\times\s^1)}. \end{eqnarray} Note that we will take \begin{eqnarray}\label{wellposed temp 7} -\e^2\iint_{\Omega\times\s^1}u_{\l}(t)\phi(t)+\e^2\iint_{\Omega\times\s^1}u_{\l}(0)\phi(0)=\e^2\bigg(G(t)-G(0)\bigg), \end{eqnarray} where $G(t)=-\displaystyle\iint_{\Omega\times\s^1}u_{\l}(t)\phi(t)$. 
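The symmetry facts used in (\ref{wellposed temp 1}), namely that the cross terms vanish and that each square term contributes a factor $\pi$, can be verified directly by parametrizing $\vw=(\cos\theta,\sin\theta)$:

```latex
\begin{eqnarray*}
\int_{\s^1}w_1w_2\ud{\vw}=\int_0^{2\pi}\cos\theta\sin\theta\ud{\theta}=0,\qquad
\int_{\s^1}w_1^2\ud{\vw}=\int_0^{2\pi}\cos^2\theta\ud{\theta}=\pi=\int_{\s^1}w_2^2\ud{\vw}.
\end{eqnarray*}
```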
Then the only remaining term is
\begin{eqnarray}\label{wellposed temp 8}
-\e^2\int_0^t\iint_{\Omega\times\s^1}\dt\phi u_{\l}&=&-\e^2\int_0^t\iint_{\Omega\times\s^1}\dt\phi (u_{\l}-\bar u_{\l})\\
&\leq&\e^2\nm{\dt\nabla\zeta}_{L^2([0,t]\times\Omega\times\s^1)}\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)},\no
\end{eqnarray}
where the first equality holds since $\dt\phi=-\vw\cdot\nx\dt\zeta$ is odd in $\vw$. Now we have to tackle $\nm{\dt\nabla\zeta}_{L^2([0,t]\times\Omega\times\s^1)}$.\\
\ \\
Step 3:\\
For a test function $\phi(\vx,\vw)$ which is independent of the time $t$, in the time interval $[t-\delta,t]$ the weak formulation (\ref{well-posedness temp 4}) can be simplified as
\begin{eqnarray}\label{test temp 6}
&&\l\int_{t-\delta}^t\iint_{\Omega\times\s^1}u_{\l}\phi -\e\int_{t-\delta}^t\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)u_{\l}+\int_{t-\delta}^t\iint_{\Omega\times\s^1}(u_{\l}-\bar u_{\l})\phi\\
&=&-\e\int_{t-\delta}^t\int_{\Gamma}u_{\l}\phi\ud{\gamma} -\e^2\iint_{\Omega\times\s^1}u_{\l}(t)\phi+\e^2\iint_{\Omega\times\s^1}u_{\l}({t-\delta})\phi+\int_{t-\delta}^t\iint_{\Omega\times\s^1}f\phi.\no
\end{eqnarray}
Taking the difference quotient as $\delta\rt0$, we know
\begin{eqnarray}
\frac{\e^2\displaystyle\iint_{\Omega\times\s^1}u_{\l}(t)\phi-\e^2\displaystyle\iint_{\Omega\times\s^1}u_{\l}({t-\delta})\phi}{\delta}\rt \e^2\iint_{\Omega\times\s^1}\dt u_{\l}(t)\phi.
\end{eqnarray}
Then (\ref{test temp 6}) can be simplified into
\begin{eqnarray}\label{test temp 7}
&&\e^2\iint_{\Omega\times\s^1}\dt u_{\l}(t)\phi\\
&=&-\l\iint_{\Omega\times\s^1}u_{\l}(t)\phi +\e\iint_{\Omega\times\s^1}(\vw\cdot\nx\phi)u_{\l}(t)-\iint_{\Omega\times\s^1}(u_{\l}(t)-\bar u_{\l}(t))\phi\no\\
&&-\e\int_{\Gamma}u_{\l}(t)\phi\ud{\gamma}+\iint_{\Omega\times\s^1}f(t)\phi.\no
\end{eqnarray}
For fixed $t$, taking $\phi=\Phi(\vx)$ which satisfies
\begin{eqnarray}
\left\{
\begin{array}{rcl}
\Delta \Phi&=&\dt\bar u_{\l}(t)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
\Phi&=&0\ \ \text{on}\ \ \p\Omega,
\end{array}
\right.
\end{eqnarray}
which further implies $\Phi=\dt\zeta$. Then the left-hand side of (\ref{test temp 7}) is actually
\begin{eqnarray}
LHS&=&\e^2\iint_{\Omega\times\s^1}\Phi\dt u_{\l}(t)=\e^2\iint_{\Omega\times\s^1}\Phi\dt\bar u_{\l}\\
&=&\e^2\iint_{\Omega\times\s^1}\Phi\Delta\Phi=-\e^2\iint_{\Omega\times\s^1}\abs{\nabla\Phi}^2\no\\
&=&-\e^2\nm{\dt\nabla\zeta(t)}_{L^2(\Omega\times\s^1)}^2,\no
\end{eqnarray}
where we integrated by parts and used $\Phi=0$ on $\p\Omega$. By a similar argument as in Step 2 and the Poincar\'e inequality, the right-hand side of (\ref{test temp 7}) can be bounded as
\begin{eqnarray}
\\
\abs{RHS}\ls \nm{\dt\nabla\zeta(t)}_{L^2(\Omega\times\s^1)}\bigg(\nm{u_{\l}(t)-\bar u_{\l}(t)}_{L^2(\Omega\times\s^1)}+\l\nm{\bar u_{\l}(t)}_{L^2(\Omega\times\s^1)} +\nm{f(t)}_{L^2(\Omega\times\s^1)}\bigg).\no
\end{eqnarray}
Therefore, we have
\begin{eqnarray}
\e^2\nm{\dt\nabla\zeta(t)}_{L^2(\Omega\times\s^1)}\ls \nm{u_{\l}(t)-\bar u_{\l}(t)}_{L^2(\Omega\times\s^1)}+\l\nm{\bar u_{\l}(t)}_{L^2(\Omega\times\s^1)} +\nm{f(t)}_{L^2(\Omega\times\s^1)}.
\end{eqnarray}
For all $t$, we can further integrate over $[0,t]$ to obtain
\begin{eqnarray}\label{wellposed temp 9}
&&\e^2\nm{\dt\nabla\zeta}_{L^2([0,t]\times\Omega\times\s^1)}\\
&\ls& \nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}+\l\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)} +\nm{f}_{L^2([0,t]\times\Omega\times\s^1)}.\no
\end{eqnarray}
\ \\
Step 4:\\
Collecting terms in (\ref{wellposed temp 1}), (\ref{wellposed temp 2}), (\ref{wellposed temp 3}), (\ref{wellposed temp 4}), (\ref{wellposed temp 5}), (\ref{wellposed temp 6}), (\ref{wellposed temp 7}), (\ref{wellposed temp 8}), and (\ref{wellposed temp 9}), we obtain
\begin{eqnarray}
&&\e\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}\\
&\leq& C(\Omega)\bigg((1+\e+\l)\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}+\e\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}+\nm{f}_{L^2([0,t]\times\Omega\times\s^1)}+\e\tm{g}{[0,t]\times\Gamma^-}\bigg)\nonumber\\
&&+\e^2G(t)-\e^2G(0).\no
\end{eqnarray}
When $0\leq\l<1$ and $0<\e<1$, we get the desired uniform estimate with
respect to $\lambda$.
\end{proof}
\begin{theorem}\label{LT estimate}
Assume $\ue^{\l_0 t}f(t,\vx,\vw)\in L^{2}([0,\infty)\times\Omega\times\s^1)$, $h(\vx,\vw)\in L^{2}(\Omega\times\s^1)$ and $\ue^{\l_0 t}g(t,\vx_0,\vw)\in L^{2}([0,\infty)\times\Gamma^-)$ for some $\l_0>0$. Then for the unsteady neutron transport equation (\ref{neutron}), there exists $\l_0^{\ast}$ satisfying $0<\l_0^{\ast}\leq\l_0$ and a unique solution $u(t,\vx,\vw)\in L^2([0,\infty)\times\Omega\times\s^1)$ satisfying
\begin{eqnarray}
&&\frac{1}{\e^{1/2}}\nm{\ue^{\l t}u}_{L^2([0,\infty)\times\Gamma^+)}+\nm{\ue^{\l t}u(t)}_{L^2(\Omega\times\s^1)}+\nm{\ue^{\l t}u}_{L^2([0,\infty)\times\Omega\times\s^1)}\\
&\leq& C(\Omega)\bigg( \frac{1}{\e^2}\nm{\ue^{\l t}f}_{L^2([0,\infty)\times\Omega\times\s^1)}
+\nm{h}_{L^2(\Omega\times\s^1)}+\frac{1}{\e^{1/2}}\nm{\ue^{\l t}g}_{L^2([0,\infty)\times\Gamma^-)}\bigg),\no
\end{eqnarray}
for any $0\leq\l\leq\l_0^{\ast}$. When $\l_0=0$, we have $\l_0^{\ast}=0$.
\end{theorem}
\begin{proof}
We divide the proof into several steps:\\
\ \\
Step 1: Weak formulation.\\
In the weak formulation (\ref{well-posedness temp 4}), we may take the test function $\phi=u_{\l}$ to get the energy estimate
\begin{eqnarray}
&&\l\nm{u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2+\half\e\int_0^t\int_{\Gamma}\abs{u_{\l}}^2\ud{\gamma}\\
&&+\half\e^2\nm{u_{\l}(t)}_{L^2(\Omega\times\s^1)}^2-\half\e^2\nm{h}_{L^2(\Omega\times\s^1)}^2+\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\no\\
&=&\int_0^t\iint_{\Omega\times\s^1}fu_{\l}.\no
\end{eqnarray}
Hence, this naturally implies
\begin{eqnarray}\label{well-posedness temp 5}
&&\half\e\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}^2+\half\e^2\nm{u_{\l}(t)}_{L^2(\Omega\times\s^1)}^2+\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\\
&\leq&\int_0^t\iint_{\Omega\times\s^1}fu_{\l}+\half\e^2\nm{h}_{L^2(\Omega\times\s^1)}^2+\half\e\nm{g}_{L^2([0,t]\times\Gamma^-)}^2.\no
\end{eqnarray}
On the other hand, we can square both sides of
(\ref{well-posedness temp 3}) to obtain
\begin{eqnarray}\label{well-posedness temp 6}
&&\e^2\tm{\bar u_{\l}}{[0,t]\times\Omega\times\s^1}^2\\
&\leq& C(\Omega)\bigg( \tm{u_{\l}-\bar u_{\l}}{[0,t]\times\Omega\times\s^1}^2+\tm{f}{[0,t]\times\Omega\times\s^1}^2+\e^2\tm{u_{\l}}{[0,t]\times\Gamma^{+}}^2+\e^2\tm{g}{[0,t]\times\Gamma^-}^2\no\\
&&+\e^4\tm{u_{\l}(t)}{\Omega\times\s^1}^2+\e^4\tm{h}{\Omega\times\s^1}^2\bigg).\nonumber
\end{eqnarray}
Multiplying both sides of (\ref{well-posedness temp 6}) by a sufficiently small constant and adding the result to (\ref{well-posedness temp 5}) to absorb $\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}^2$, $\tm{u_{\l}(t)}{\Omega\times\s^1}^2$ and $\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2$, we deduce
\begin{eqnarray}
&&\e\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}^2+\e^2\nm{u_{\l}(t)}_{L^2(\Omega\times\s^1)}^2+\e^2\nm{\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2+\nm{u_{\l}-\bar u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\\&&\qquad\qquad\qquad\leq C(\Omega)\bigg(\tm{f}{[0,t]\times\Omega\times\s^1}^2+ \int_0^t\iint_{\Omega\times\s^1}fu_{\l}+\e^2\nm{h}_{L^2(\Omega\times\s^1)}^2+\e\nm{g}_{L^2([0,t]\times\Gamma^-)}^2\bigg).\nonumber
\end{eqnarray}
Hence, we have
\begin{eqnarray}\label{well-posedness temp 7}
&&\e\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}^2+\e^2\nm{u_{\l}(t)}_{L^2(\Omega\times\s^1)}^2+\e^2\nm{u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\\
&\leq& C(\Omega)\bigg(\tm{f}{[0,t]\times\Omega\times\s^1}^2+ \int_0^t\iint_{\Omega\times\s^1}fu_{\l}+\e^2\nm{h}_{L^2(\Omega\times\s^1)}^2+\e\nm{g}_{L^2([0,t]\times\Gamma^-)}^2\bigg).\no
\end{eqnarray}
A simple application of Cauchy's inequality leads to
\begin{eqnarray}
\int_0^t\iint_{\Omega\times\s^1}fu_{\l}\leq\frac{1}{4C\e^2}\tm{f}{[0,t]\times\Omega\times\s^1}^2+C\e^2\tm{u_{\l}}{[0,t]\times\Omega\times\s^1}^2.
\end{eqnarray}
Taking $C$ sufficiently small, we can divide (\ref{well-posedness temp 7}) by $\e^2$ to obtain
\begin{eqnarray}\label{well-posedness temp 21}
&&\frac{1}{\e}\nm{u_{\l}}_{L^2([0,t]\times\Gamma^+)}^2+\nm{u_{\l}(t)}_{L^2(\Omega\times\s^1)}^2+\nm{u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\\
&\leq& C(\Omega)\bigg( \frac{1}{\e^4}\nm{f}_{L^2([0,t]\times\Omega\times\s^1)}^2+\nm{h}_{L^2(\Omega\times\s^1)}^2+\frac{1}{\e}\nm{g}_{L^2([0,t]\times\Gamma^-)}^2\bigg).\no
\end{eqnarray}
\ \\
Step 2: Convergence.\\
Since the above estimate does not depend on $\l$, it gives a uniform estimate for the penalized neutron transport equation (\ref{well-posedness penalty equation}). Thus, we can extract a weakly convergent subsequence $u_{\l}\rightharpoonup u$ as $\l\rt0$. The weak lower semi-continuity of the norms $\nm{\cdot}_{L^2([0,t]\times\Omega\times\s^1)}$ and $\nm{\cdot}_{L^2([0,t]\times\Gamma^+)}$ implies that $u$ also satisfies the estimate (\ref{well-posedness temp 21}). Hence, in the weak formulation (\ref{well-posedness temp 4}), we can take $\l\rt0$ to deduce that $u$ satisfies the equation (\ref{neutron}). Also, $u_{\l}-u$ satisfies the equation
\begin{eqnarray}
\\
\left\{
\begin{array}{rcl}
\e^2\dt(u_{\l}-u)+\e\vw\cdot\nabla_x(u_{\l}-u)+(u_{\l}-u)-(\bar u_{\l}-\bar u)&=&-\l u_{\l}\ \ \text{in}\ \ \Omega\label{remainder},\\\rule{0ex}{1.0em}
(u_{\l}-u)(0,\vx,\vw)&=&0\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em}
(u_{\l}-u)(t,\vx_0,\vw)&=&0\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0.\no
\end{array}
\right.
\end{eqnarray}
By a similar argument as above, we can achieve
\begin{eqnarray}
\nm{u_{\l}-u}_{L^2([0,t]\times\Omega\times\s^1)}^2\leq C(\Omega)\bigg(\frac{\l^2}{\e^4}\nm{u_{\l}}_{L^2([0,t]\times\Omega\times\s^1)}^2\bigg).
\end{eqnarray}
When $\l\rt0$, the right-hand side approaches zero, which implies that the convergence is actually in the strong sense. The uniqueness easily follows from the energy estimates.\\
\ \\
Step 3: $L^2$ Decay.\\
Let $v=\ue^{\l t}u$.
Then $v$ satisfies the equation \begin{eqnarray} \left\{ \begin{array}{rcl} \e^2\dt v+\e\vw\cdot\nabla_xv+v-\bar v&=&f+\l\e^2v\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em} v(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em} v(\vx_0,\vw)&=&\ue^{\l t}g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0. \end{array} \right. \end{eqnarray} Similar to the argument in Step 1, we can obtain \begin{eqnarray} &&\frac{1}{\e}\nm{v}_{L^2([0,t]\times\Gamma^+)}^2+\nm{v(t)}_{L^2(\Omega\times\s^1)}^2+\nm{v}_{L^2([0,t]\times\Omega\times\s^1)}^2\\ &\leq& C(\Omega)\bigg( \frac{1}{\e^4}\nm{\ue^{\l t}f}_{L^2([0,t]\times\Omega\times\s^1)}^2+\frac{1}{\e^4}\nm{\l^2\e^4 v}_{L^2([0,t]\times\Omega\times\s^1)}^2 +\nm{h}_{L^2(\Omega\times\s^1)}^2+\frac{1}{\e}\nm{\ue^{\l t}g}_{L^2([0,t]\times\Gamma^-)}^2\bigg).\no \end{eqnarray} Then when $\l$ is sufficiently small, we have \begin{eqnarray} &&\frac{1}{\e}\nm{v}_{L^2([0,t]\times\Gamma^+)}^2+\nm{v(t)}_{L^2(\Omega\times\s^1)}^2+\nm{v}_{L^2([0,t]\times\Omega\times\s^1)}^2\\ &\leq& C(\Omega)\bigg( \frac{1}{\e^4}\nm{\ue^{\l t}f}_{L^2([0,t]\times\Omega\times\s^1)}^2 +\nm{h}_{L^2(\Omega\times\s^1)}^2+\frac{1}{\e}\nm{\ue^{\l t}g}_{L^2([0,t]\times\Gamma^-)}^2\bigg),\no \end{eqnarray} which implies exponential decay of $u$. \end{proof} \subsection{$L^{\infty}$ Estimate} \begin{theorem}\label{LI estimate} Assume $\ue^{\l_0 t}f(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$, $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$ and $\ue^{\l_0 t}g(t,x_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$ for some $\l_0>0$. 
Then for the unsteady neutron transport equation (\ref{neutron}), there exists $\l_0^{\ast}$ satisfying $0<\l_0^{\ast}\leq\l_0$ and a unique solution $u(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying \begin{eqnarray} &&\nm{\ue^{\l t}u}_{L^{\infty}([0,\infty)\times\Gamma^+)}+\nm{\ue^{\l t}u(t)}_{L^{\infty}(\Omega\times\s^1)}+\nm{\ue^{\l t}u}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)}\\ &\leq& C(\Omega)\bigg( \frac{1}{\e^{4}}\nm{\ue^{\l t}f}_{L^{2}([0,\infty)\times\Omega\times\s^1)} +\frac{1}{\e^{2}}\nm{h}_{L^{2}(\Omega\times\s^1)}+\frac{1}{\e^{5/2}}\nm{\ue^{\l t}g}_{L^{2}([0,\infty)\times\Gamma^-)}\no\\ &&+\nm{\ue^{\l t}f}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)} +\nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{\ue^{\l t}g}_{L^{\infty}([0,\infty)\times\Gamma^-)}\bigg),\no \end{eqnarray} for any $0\leq\l\leq\l_0^{\ast}$. When $\l_0=0$, we have $\l_0^{\ast}=0$. \end{theorem} \begin{proof} We divide the proof into several steps to bootstrap an $L^2$ solution to an $L^{\infty}$ solution:\\ \ \\ Step 1: Double Duhamel iterations.\\ The characteristics of the equation (\ref{neutron}) are given by (\ref{character}). Hence, we can rewrite the equation (\ref{neutron}) along the characteristics as \begin{eqnarray} &&u(t,\vx,\vw)\\ &=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-t_b}+\int_{0}^{t_b}(\bar u+f)(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(t_b-s)}\ud{s}\bigg)\no\\ &&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg( h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-t/\e^2}+\int_{0}^{t/\e^2}(\bar u+f)(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(t/\e^2-s)}\ud{s}\bigg)\no, \end{eqnarray} where the backward exit time $t_b$ is defined in (\ref{exit time}).
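For the reader's convenience, we sketch why this representation holds (a routine verification, recorded here for completeness). Consider the case $t\geq\e^2t_b$; the other case is analogous. Along the characteristics, set \begin{eqnarray} U(s)=u\bigg(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw\bigg),\qquad 0\leq s\leq t_b. \end{eqnarray} By the equation (\ref{neutron}), \begin{eqnarray} \frac{\ud U}{\ud s}=\bigg(\e^2\dt u+\e\vw\cdot\nabla_xu\bigg)\bigg(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw\bigg)=(\bar u+f-u)\bigg(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw\bigg), \end{eqnarray} so that $\ud\big(\ue^{s}U(s)\big)/\ud s=\ue^{s}(\bar u+f)$ along the characteristic. Since $U(0)=g(t-\e^2t_b,\vx-\e t_b\vw,\vw)$ by the boundary condition and $U(t_b)=u(t,\vx,\vw)$, integrating over $s\in[0,t_b]$ and multiplying by $\ue^{-t_b}$ recovers the first line above.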
For the convenience of analysis, we transform it into a simpler form \begin{eqnarray} &&u(t,\vx,\vw)\\ &=&{\bf 1}_{\{t\geq \e^2t_b\}}g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-t_b}+{\bf 1}_{\{t\leq \e^2t_b\}} h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-t/\e^2}\no\\ &&+\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}f(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(t/\e^2-s)}\ud{s}\no\\ &&+\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}u(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw_t)\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}.\no \end{eqnarray} Here $a\wedge b$ denotes $\min\{a,b\}$. Note we have replaced $\bar u$ by the integral of $u$ over the dummy velocity variable $\vw_t$. For the last term in this formulation, we apply the Duhamel's principle again to $u(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw_t)$ and obtain \begin{eqnarray}\label{well-posedness temp 8} &&u(t,\vx,\vw)\\ &=&{\bf 1}_{\{t\geq \e^2t_b\}}g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-t_b}+{\bf 1}_{\{t\leq \e^2t_b\}} h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-t/\e^2}\no\\ &&+\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}f(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(t/\e^2-s)}\ud{s}\no\\ &&+\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}{\bf 1}_{\{s\geq s_b\}}g(\e^2(s-s_b),\vx-\e(t/\e^2-s)\vw,\vw_t)\ue^{-s_b}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}\no\\ &&+\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}{\bf 1}_{\{s\geq s_b\}}h(\vx-\e(t/\e^2-s)\vw-\e s\vw_t,\vw_t)\ue^{-s}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}\no\\ &&+\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2} \bigg(\int_{-\pi}^{\pi}\int_{s-s_b\wedge s}^{s}f(\e^2r,\vx-\e(t/\e^2-s)\vw-\e(s-r)\vw_t,\vw_t)\ue^{-(s-r)}\ud{r}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}\no\\ &&+\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\no\\ &&\bigg(\int_{-\pi}^{\pi}\int_{s-s_b\wedge s}^{s}\bar u(\e^2r,\vx-\e(t/\e^2-s)\vw-\e(s-r)\vw_t)\ue^{-(s-r)}\ud{r}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}.\no \end{eqnarray} where we introduce another dummy velocity variable $\vw_s$ 
and \begin{eqnarray} s_b(\vx,\vw,s,\vw_t)=\inf\{r\geq0: (\vx-\e(t/\e^2-s)\vw-\e r\vw_t,\vw_t)\in\Gamma^-\}. \end{eqnarray} \ \\ Step 2: Estimates of all but the last term in (\ref{well-posedness temp 8}).\\ We can directly estimate as follows: \begin{eqnarray}\label{im temp 1} \abs{{\bf 1}_{\{t\geq \e^2t_b\}}g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-t_b}}&\leq&\im{g}{[0,t]\times\Gamma^-}, \end{eqnarray} \begin{eqnarray}\label{im temp 2} \abs{{\bf 1}_{\{t\leq \e^2t_b\}} h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-t/\e^2}} \leq \im{h}{\Omega\times\s^1}, \end{eqnarray} \begin{eqnarray}\label{im temp 3} \abs{\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}f(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(t/\e^2-s)}\ud{s}}\leq \im{f}{[0,t]\times\Omega\times\s^1}, \end{eqnarray} \begin{eqnarray}\label{im temp 4} &&\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}{\bf 1}_{\{s\geq s_b\}}g(\e^2(s-s_b),\vx-\e(t/\e^2-s)\vw,\vw_t)\ue^{-s_b}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}\\ &\leq& \im{g}{[0,t]\times\Gamma^-},\no \end{eqnarray} \begin{eqnarray}\label{im temp 7} &&\abs{\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}{\bf 1}_{\{s\geq s_b\}}h(\vx-\e(t/\e^2-s)\vw-\e s\vw_t,\vw_t)\ue^{-s}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}}\\ &\leq& \im{h}{\Omega\times\s^1},\nonumber \end{eqnarray} \begin{eqnarray}\label{im temp 8} \\ &&\abs{\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2} \bigg(\int_{-\pi}^{\pi}\int_{s-s_b\wedge s}^{s}f(\e^2r,\vx-\e(t/\e^2-s)\vw-\e(s-r)\vw_t,\vw_t)\ue^{-(s-r)}\ud{r}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}}\no\\ &\leq& \im{f}{[0,t]\times\Omega\times\s^1}\nonumber. 
\end{eqnarray} \ \\ Step 3: Estimates of the last term in (\ref{well-posedness temp 8}).\\ Via the substitutions $s\rt s^{\ast}=(s-t/\e^2+t_b)$ and $r\rt r^{\ast}=(r-s+s_b)$, we can first bound the last term $I$ in (\ref{well-posedness temp 8}) as \begin{eqnarray}\label{im temp 9} \abs{I} &\leq&\frac{1}{2\pi}\int_{t/\e^2-t_b\wedge(t/\e^2)}^{t/\e^2}\bigg(\int_{-\pi}^{\pi}\int_{s-s_b\wedge s}^{s}\abs{\bar u(\e^2r,\vx-\e(t/\e^2-s)\vw-\e(s-r)\vw_t)}\ue^{-(s-r)}\ud{r}\ud{\vw_t}\bigg)\ue^{-(t/\e^2-s)}\ud{s}\\ &\leq&\frac{1}{2\pi}\int_{0}^{t_b}\bigg(\int_{-\pi}^{\pi}\int_{0}^{s_b}\no\\ &&\abs{\bar u(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)} \ue^{-(s_b-r^{\ast})}\ud{r^{\ast}}\ud{\vw_t}\bigg)\ue^{-(t_b-s^{\ast})}\ud{s^{\ast}}\no. \end{eqnarray} Now we decompose the right-hand side in (\ref{im temp 9}) as \begin{eqnarray} \int_{0}^{t_b}\int_{\s^1}\int_0^{s_b}=\int_{0}^{t_b}\int_{\s^1}\int_{s_b-r^{\ast}\leq\delta}+ \int_{0}^{t_b}\int_{\s^1}\int_{s_b-r^{\ast}\geq\delta}=I_1+I_2, \end{eqnarray} for some $\delta>0$. We can estimate $I_1$ directly as \begin{eqnarray}\label{im temp 5} I_1 &\leq&\int_{0}^{t_b}\ue^{-(t_b-s^{\ast})}\bigg(\int_{\max\{0,s_b-\delta\}}^{s_b} \im{u}{[0,t]\times\Omega\times\s^1}\ud{r^{\ast}}\bigg)\ud{s^{\ast}}\leq\delta\im{u}{[0,t]\times\Omega\times\s^1}. \end{eqnarray} Then we can bound $I_2$ as \begin{eqnarray} \\ I_2&\leq&C\int_{0}^{t_b}\int_{\s^1}\int_{0}^{\max\{0,s_b-\delta\}}\no\\ &&\abs{\bar u(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)}\ue^{-(s_b-r^{\ast})-(t_b-s^{\ast})}\ud{r^{\ast}}\ud{\vw_t}\ud{s^{\ast}}.\no \end{eqnarray} By the definition of $t_b$ and $s_b$, we always have $\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b)\in[0,t]$ and $\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t\in\bar\Omega$.
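These inclusions, as well as the form of the integrand, follow from the elementary identities behind the substitution; we record them here for the reader's convenience: \begin{eqnarray} s_b-r^{\ast}=s-r,\qquad t_b-s^{\ast}=t/\e^2-s,\qquad \e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b)=\e^2r, \end{eqnarray} and consequently \begin{eqnarray} \vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t=\vx-\e(t/\e^2-s)\vw-\e(s-r)\vw_t, \end{eqnarray} so the exponential weights, the time argument and the spatial argument transform exactly as displayed.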
Hence, we may interchange the order of integration and apply H\"older's inequality to obtain \begin{eqnarray}\label{well-posedness temp 22} I_2&\leq&C\int_{0}^{t_b}\int_{\s^1}\int_{0}^{\max\{0,s_b-\delta\}}{\bf{1}}_{[0,t]\times\Omega} (\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)\ue^{-(s_b-r^{\ast})-(t_b-s^{\ast})}\\ &&\abs{\bar u(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)}\ud{r^{\ast}}\ud{\vw_t}\ud{s^{\ast}}\nonumber\\ &\leq&C\bigg(\int_{0}^{t_b}\int_{\s^1}\int_{0}^{\max\{0,s_b-\delta\}}\ue^{-(s_b-r^{\ast})-(t_b-s^{\ast})}\ud{r^{\ast}}\ud{\vw_t}\ud{s^{\ast}}\bigg)^{1/2}\no\\ &&\times \bigg(\int_{0}^{t_b}\int_{\s^1}\int_{0}^{\max\{0,s_b-\delta\}}\no\\ &&{\bf{1}}_{[0,t]\times\Omega} (\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)\ue^{-(s_b-r^{\ast})-(t_b-s^{\ast})} \no\\ &&\abs{\bar u^2(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)}\ud{r^{\ast}}\ud{\vw_t}\ud{s^{\ast}}\bigg)^{1/2}.\nonumber \end{eqnarray} Note $\vw_t\in\s^1$, which is essentially a one-dimensional variable. Thus, we may write it in a new variable $\psi$ as $\vw_t=(\cos\psi,\sin\psi)$. Then we define the change of variable $[0,t/\e^2]\times[-\pi,\pi)\times\r\rt [0,t]\times\Omega: (s^{\ast},\psi,r^{\ast})\rt(t',y_1,y_2)=(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)$, i.e. \begin{eqnarray} \left\{ \begin{array}{rcl} t'&=&\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\\ y_1&=&x_1-\e(t_b-s^{\ast})w_1-\e(s_b-r^{\ast})\cos\psi,\\ y_2&=&x_2-\e(t_b-s^{\ast})w_2-\e(s_b-r^{\ast})\sin\psi. \end{array} \right.
\end{eqnarray} Therefore, for $s_b-r^{\ast}\geq\delta$, we can directly compute the Jacobian \begin{eqnarray} \\ \abs{\frac{\p{(t',y_1,y_2)}}{\p{(s^{\ast},\psi,r^{\ast})}}}=\abs{\abs{\begin{array}{ccc} \e^2&0&\e^2\\ \e w_1&\e(s_b-r^{\ast})\sin\psi&\e\cos\psi\\ \e w_2&-\e(s_b-r^{\ast})\cos\psi&\e\sin\psi \end{array}}}=\e^4(s_b-r^{\ast})\bigg(1-(w_1\cos\psi+w_2\sin\psi)\bigg).\no \end{eqnarray} Thus, in order to guarantee the Jacobian is strictly positive, we may further decompose $I_2$ into $I_{2,1}+I_{2,2}$ where in $I_{2,1}$, we have $w_1\cos\psi+w_2\sin\psi>1-\delta$ and in $I_{2,2}$, we have $w_1\cos\psi+w_2\sin\psi\leq1-\delta$. Since $\vw=(w_1,w_2)\in\s^1$, based on trigonometric identity, we obtain \begin{eqnarray} \\ I_{2,1}&\leq&C\int_{0}^{t_b}\int_{\vw\cdot\vw_t>1-\delta}\int_{0}^{\max\{0,s_b-\delta\}}\no\\ &&\abs{\bar u(\e^2(r^{\ast}+s^{\ast}+t/\e^2-t_b-s_b),\vx-\e(t_b-s^{\ast})\vw-\e(s_b-r^{\ast})\vw_t)}\ue^{-(s_b-r^{\ast})-(t_b-s^{\ast})}\ud{r^{\ast}}\ud{\vw_t}\ud{s^{\ast}}\no\\ &\leq&\delta\im{u}{[0,t]\times\Omega\times\s^1}.\no \end{eqnarray} Hence, we may simplify (\ref{well-posedness temp 22}) as \begin{eqnarray} I_{2,2}&\leq&C\bigg(\int_{0}^{t}\int_{\s^1}\int_{\Omega}\frac{1}{\e^4\delta^2}\abs{\bar u^2(t',y)}\ud{\vec y}\ud{t'}\bigg)^{1/2}. \end{eqnarray} Then we may further utilize $L^2$ estimate of $u$ in Theorem \ref{LT estimate} to obtain \begin{eqnarray}\label{im temp 6} I_{2,2}&\leq&\frac{C}{\e^2\delta}\tm{u}{[0,t]\times\Omega\times\s^1}\\ &\leq&\frac{C(\Omega)}{\delta}\bigg(\frac{1}{\e^{4}}\tm{f}{[0,t]\times\Omega\times\s^1} +\frac{1}{\e^{2}}\tm{h}{\Omega\times\s^1}+\frac{1}{\e^{5/2}}\tm{g}{[0,t]\times\Gamma^-}\bigg)\nonumber. 
\end{eqnarray} \ \\ Step 4: $L^{\infty}$ estimate.\\ In summary, collecting (\ref{im temp 1}), (\ref{im temp 2}), (\ref{im temp 3}), (\ref{im temp 4}), (\ref{im temp 7}), (\ref{im temp 8}), (\ref{im temp 5}) and (\ref{im temp 6}), for fixed $0<\delta<1$, we have \begin{eqnarray} &&\abs{u(t,\vx,\vw)}\\ &\leq& \delta \im{u}{[0,t]\times\Omega\times\s^1}+\frac{C(\Omega)}{\delta}\bigg(\frac{1}{\e^{4}}\tm{f}{[0,t]\times\Omega\times\s^1} +\frac{1}{\e^{2}}\tm{h}{\Omega\times\s^1}+\frac{1}{\e^{5/2}}\tm{g}{[0,t]\times\Gamma^-}\bigg)\no\\ &&+\bigg(\im{f}{[0,t]\times\Omega\times\s^1} +\im{h}{\Omega\times\s^1}+\im{g}{[0,t]\times\Gamma^-}\bigg).\no \end{eqnarray} Then we may take $0<\delta\leq1/2$ to obtain \begin{eqnarray} &&\abs{u(t,\vx,\vw)}\\ &\leq& \half\im{u}{[0,t]\times\Omega\times\s^1}+C(\Omega)\bigg(\frac{1}{\e^{4}}\tm{f}{[0,t]\times\Omega\times\s^1} +\frac{1}{\e^{2}}\tm{h}{\Omega\times\s^1}+\frac{1}{\e^{5/2}}\tm{g}{[0,t]\times\Gamma^-}\bigg)\no\\ &&+\bigg(\im{f}{[0,t]\times\Omega\times\s^1} +\im{h}{\Omega\times\s^1}+\im{g}{[0,t]\times\Gamma^-}\bigg).\no \end{eqnarray} Taking supremum of $u$ over all $(t,\vx,\vw)$, we have \begin{eqnarray} &&\im{u}{[0,t]\times\Omega\times\s^1}\\ &\leq& \half\im{u}{[0,t]\times\Omega\times\s^1}+C(\Omega)\bigg(\frac{1}{\e^{4}}\tm{f}{[0,t]\times\Omega\times\s^1} +\frac{1}{\e^{2}}\tm{h}{\Omega\times\s^1}+\frac{1}{\e^{5/2}}\tm{g}{[0,t]\times\Gamma^-}\bigg)\no\\ &&+\bigg(\im{f}{[0,t]\times\Omega\times\s^1} +\im{h}{\Omega\times\s^1}+\im{g}{[0,t]\times\Gamma^-}\bigg).\no \end{eqnarray} Finally, absorbing $\im{u}{[0,t]\times\Omega\times\s^1}$, we get \begin{eqnarray} \im{u}{[0,t]\times\Omega\times\s^1}&\leq& C(\Omega)\bigg(\frac{1}{\e^{4}}\tm{f}{[0,t]\times\Omega\times\s^1} +\frac{1}{\e^{2}}\tm{h}{\Omega\times\s^1}+\frac{1}{\e^{5/2}}\tm{g}{[0,t]\times\Gamma^-}\\ &&+\im{f}{[0,t]\times\Omega\times\s^1} +\im{h}{\Omega\times\s^1}+\im{g}{[0,t]\times\Gamma^-}\bigg).\no \end{eqnarray} \ \\ Step 5: $L^{\infty}$ Decay.\\ Let $v=\ue^{\l t}u$. 
Then $v$ satisfies the equation \begin{eqnarray} \left\{ \begin{array}{rcl} \e^2\dt v+\e\vw\cdot\nabla_xv+(1-\l\e^2)v-\bar v&=&f\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em} v(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1.0em} v(\vx_0,\vw)&=&\ue^{\l t}g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0. \end{array} \right. \end{eqnarray} By an argument similar to that in Steps 3 and 4, combined with the $L^2$ decay, we can finally show the desired estimate. \end{proof} \subsection{Maximum Principle} \begin{theorem}\label{maximum principle} When $f=0$, the solution $u(t,\vx,\vw)$ to the unsteady neutron transport equation (\ref{neutron}) satisfies the maximum principle, i.e. \begin{eqnarray} \inf\{g(t,\vx_0,\vw), h(\vx,\vw)\}\leq u(t,\vx,\vw)\leq \sup\{g(t,\vx_0,\vw), h(\vx,\vw)\}. \end{eqnarray} \end{theorem} \begin{proof} We claim that it suffices to show $u(t,\vx,\vw)\leq 0$ whenever $g(t,\vx_0,\vw)\leq 0$ and $h(\vx,\vw)\leq0$. Suppose the claim is justified. Then define \begin{eqnarray} m&=&\inf\{g(t,\vx_0,\vw), h(\vx,\vw)\},\\ M&=&\sup\{g(t,\vx_0,\vw), h(\vx,\vw)\}. \end{eqnarray} The function $u_1=u-M$ satisfies the equation \begin{eqnarray} \left\{ \begin{array}{rcl} \e^2\dt u_1+\e\vec w\cdot\nabla_xu_1+u_1-\bar u_1&=&0\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_1(0,\vx,\vw)&=&h(\vx,\vw)-M\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_1(t,\vec x_0,\vec w)&=&g(t,\vec x_0,\vec w)-M\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega. \end{array} \right. \end{eqnarray} Hence, $h-M\leq 0$ and $g-M\leq0$ imply $u_1\leq 0$, i.e. $u\leq M$.
Similarly, the function $u_2=m-u$ satisfies the equation \begin{eqnarray} \left\{ \begin{array}{rcl} \e^2\dt u_2+\e\vec w\cdot\nabla_xu_2+u_2-\bar u_2&=&0\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_2(0,\vx,\vw)&=&m-h(\vx,\vw)\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_2(t,\vec x_0,\vec w)&=&m-g(t,\vec x_0,\vec w)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega. \end{array} \right. \end{eqnarray} Hence, $m-h\leq 0$ and $m-g\leq0$ imply $u_2\leq 0$, i.e. $u\geq m$. Therefore, the maximum principle is established. We now prove the claim that if $g(t,\vx_0,\vw)\leq 0$ and $h(\vx,\vw)\leq0$, we have $u(t,\vx,\vw)\leq 0$. We first consider the penalized neutron transport equation \begin{eqnarray} \left\{ \begin{array}{rcl} \l u_{\l}+\e^2\dt u_{\l}+\e\vw\cdot\nabla_xu_{\l}+u_{\l}-\bar u_{\l}&=& 0\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_{\l}(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_{\l}(t,\vx_0,\vec w)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0, \end{array} \right. \end{eqnarray} with $\l>0$. Based on Lemma \ref{well-posedness lemma 2}, there exists a solution $u_{\l}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$. We use the notation in the proof of Lemma \ref{well-posedness lemma 2}. Define an approximating sequence $\{u_{\l}^k\}_{k=0}^{\infty}$, where $u_{\l}^0=0$ and \begin{eqnarray}\label{penalty temp 1} \left\{ \begin{array}{rcl} \l u_{\l}^{k}+\e^2\dt u_{\l}^k+\e\vw\cdot\nabla_xu_{\l}^k+u_{\l}^k-\bar u_{\l}^{k-1}&=&0\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_{\l}^k(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_{\l}^k(t,\vx_0,\vw)&=&g(t,\vx_0,\vw)\ \ \text{for}\ \ \vx_0\in\p\Omega\ \ \text{and}\ \vw\cdot\vec n<0. \end{array} \right. \end{eqnarray} By Lemma \ref{well-posedness lemma 1}, this sequence is well-defined and $\im{u_{\l}^k}{[0,\infty)\times\Omega\times\s^1}<\infty$.
Then we rewrite equation (\ref{penalty temp 1}) along the characteristics as \begin{eqnarray} &&u_{\l}^k(t,\vx,\vw)\no\\ &=&{\bf 1}_{\{t\geq \e^2t_b\}}\bigg( g(t-\e^2t_b,\vx-\e t_b\vw,\vw)\ue^{-(1+\l)t_b}+\int_{0}^{t_b}\bar u_{\l}^{k-1}(t-\e^2(t_b-s),\vx-\e(t_b-s)\vw,\vw)\ue^{-(1+\l)(t_b-s)}\ud{s}\bigg)\no\\ &&+{\bf 1}_{\{t\leq \e^2t_b\}}\bigg( h(\vx-(\e t\vw)/\e^2,\vw)\ue^{-(1+\l)t/\e^2}+\int_{0}^{t/\e^2}\bar u_{\l}^{k-1}(\e^2s,\vx-\e(t/\e^2-s)\vw,\vw)\ue^{-(1+\l)(t/\e^2-s)}\ud{s}\bigg)\no, \end{eqnarray} where \begin{equation} t_b(\vx,\vw)=\inf\{s\geq0: (\vx-\e s\vw,\vw)\in\Gamma^-\}. \end{equation} Since $u_{\l}^{k-1}\leq 0$ implies $\bar u_{\l}^{k-1}\leq 0$, and all the coefficients in this representation are nonnegative, an induction over $k$ shows that $u_{\l}^k(t,\vx,\vw)\leq 0$ whenever $g(t,\vx_0,\vw)\leq 0$ and $h(\vx,\vw)\leq0$. In the proof of Lemma \ref{well-posedness lemma 2}, we have shown $u_{\l}^k\rt u_{\l}$ in $L^{\infty}$ as $k\rt\infty$. Therefore, we have $u_{\l}(t,\vx,\vw)\leq 0$. Based on the proof of Theorem \ref{LT estimate}, we know $u_{\l}\rt u$ in $L^2$ as $\l\rt0$, where $u$ is the unique solution of the equation (\ref{neutron}). Then we obtain $u\leq 0$. This justifies the claim and completes the proof. \end{proof} Theorem \ref{maximum principle} naturally leads to the $L^{\infty}$ estimate of the equation (\ref{neutron}). \begin{corollary}\label{wellposedness estimate 2} Assume $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$ and $g(t,x_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$.
Then for the unsteady neutron transport equation (\ref{neutron}) with $f=0$, there exists a unique solution $u(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying \begin{eqnarray} &&\nm{u}_{L^{\infty}([0,\infty)\times\Gamma^+)}+\nm{u(t)}_{L^{\infty}(\Omega\times\s^1)}+\nm{u}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)} \leq\nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{g}_{L^{\infty}([0,\infty)\times\Gamma^-)}. \end{eqnarray} \end{corollary} \subsection{Well-posedness of Transport Equation} Combining the results in Theorem \ref{LI estimate} and Corollary \ref{wellposedness estimate 2}, we can show an improved $L^{\infty}$ estimate of the equation (\ref{neutron}). \begin{theorem}\label{improved LI estimate} Assume $f(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$, $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$ and $g(t,x_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$. Then for the unsteady neutron transport equation (\ref{neutron}), there exists a unique solution $u(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying \begin{eqnarray} &&\nm{u}_{L^{\infty}([0,\infty)\times\Gamma^+)}+\nm{u(t)}_{L^{\infty}(\Omega\times\s^1)}+\nm{u}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)}\\ &\leq& C(\Omega)\bigg(\frac{1}{\e^{4}}\nm{f}_{L^{2}([0,\infty)\times\Omega\times\s^1)}+\nm{f}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)}\bigg) +\nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{g}_{L^{\infty}([0,\infty)\times\Gamma^-)}.\no \end{eqnarray} \end{theorem} \begin{proof} Since the equation (\ref{neutron}) is linear, we can utilize the superposition principle, i.e.
we can separate the solution $u=u_1+u_2$ where $u_1$ satisfies the equation \begin{eqnarray}\label{improved temp 1} \left\{ \begin{array}{rcl} \e^2\dt u_1+\e\vec w\cdot\nabla_xu_1+u_1-\bar u_1&=&0\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_1(0,\vx,\vw)&=&h(\vx,\vw)\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_1(t,\vec x_0,\vec w)&=&g(t,\vec x_0,\vec w)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega, \end{array} \right. \end{eqnarray} and $u_2$ satisfies the equation \begin{eqnarray}\label{improved temp 2} \left\{ \begin{array}{rcl} \e^2\dt u_2+\e\vec w\cdot\nabla_xu_2+u_2-\bar u_2&=&f(t,\vx,\vw)\ \ \text{in}\ \ [0,\infty)\times\Omega,\\\rule{0ex}{1.0em} u_2(0,\vx,\vw)&=&0\ \ \text{in}\ \ \Omega\\\rule{0ex}{1.0em} u_2(t,\vec x_0,\vec w)&=&0\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega, \end{array} \right. \end{eqnarray} Note that the data in (\ref{improved temp 1}) and (\ref{improved temp 2}) satisfy the compatibility condition (\ref{compatibility condition}). Therefore, we can apply the previous results in this section. Corollary \ref{wellposedness estimate 2} yields \begin{eqnarray}\label{improved temp 3} &&\nm{u_1}_{L^{\infty}([0,\infty)\times\Gamma^+)}+\nm{u_1(t)}_{L^{\infty}(\Omega\times\s^1)}+\nm{u_1}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)} \leq\nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{g}_{L^{\infty}([0,\infty)\times\Gamma^-)}. \end{eqnarray} Also, Theorem \ref{LI estimate} leads to \begin{eqnarray}\label{improved temp 4} &&\nm{u_2}_{L^{\infty}([0,\infty)\times\Gamma^+)}+\nm{u_2(t)}_{L^{\infty}(\Omega\times\s^1)}+\nm{u_2}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)}\\ &\leq& C(\Omega)\bigg( \frac{1}{\e^{4}}\nm{f}_{L^{2}([0,\infty)\times\Omega\times\s^1)}+\nm{f}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)} \bigg).\no \end{eqnarray} Combining (\ref{improved temp 3}) and (\ref{improved temp 4}), we have the desired result. 
\end{proof} Finally, we can apply Theorem \ref{improved LI estimate} to the equation (\ref{transport}) and obtain Theorem \ref{main 1}. \begin{theorem}\label{well-posedness 2} Assume $g(t,x_0,\vw)\in L^{\infty}([0,\infty)\times\Gamma^-)$ and $h(\vx,\vw)\in L^{\infty}(\Omega\times\s^1)$. Then for the unsteady neutron transport equation (\ref{transport}), there exists a unique solution $u^{\e}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfying \begin{eqnarray} \im{u^{\e}}{[0,\infty)\times\Omega\times\s^1}\leq \nm{h}_{L^{\infty}(\Omega\times\s^1)}+\nm{g}_{L^{\infty}([0,\infty)\times\Gamma^-)}. \end{eqnarray} \end{theorem} \section{Asymptotic Analysis} In this section, we construct the asymptotic expansion of the equation (\ref{transport}). \subsection{Discussion of Compatibility Condition} The initial and boundary data satisfy the compatibility condition \begin{eqnarray} h(\vx_0,\vw)=g(0,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0. \end{eqnarray} Then in the half-space $\vw\cdot\vec n<0$ at $(0,\vx_0,\vw)$, the equation \begin{eqnarray} \e^2\dt u^{\e}+\e \vw\cdot\nabla_x u^{\e}+u^{\e}-\bar u^{\e}&=&0, \end{eqnarray} is valid, which implies \begin{eqnarray}\label{comp 1} \e^2\dt g(0,\vx_0,\vw)+\e \vw\cdot\nabla_xh(\vx_0,\vw)+h(\vx_0,\vw)-\bar h(\vx_0)&=&0. \end{eqnarray} In order to study the diffusive limit, we require that the condition (\ref{comp 1}) hold for arbitrary $\e$. Since $g$ and $h$ are both independent of $\e$, we must have, for $\vw\cdot\vec n<0$, \begin{eqnarray} \dt g(0,\vx_0,\vw)&=&0,\\ \vw\cdot\nabla_xh(\vx_0,\vw)&=&0,\\ h(\vx_0,\vw)-\bar h(\vx_0)&=&0.\label{comp 2} \end{eqnarray} The relation (\ref{comp 2}) implies the improved compatibility condition \begin{eqnarray}\label{improved compatibility condition} h(\vx_0,\vw)=g(0,\vx_0,\vw)=C_0\ \ \text{for}\ \ \vw\cdot\vec n<0, \end{eqnarray} for some constant $C_0$. This fact is of great importance in the following analysis.
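For instance (an illustrative example, not required in what follows), the conditions above are satisfied by any data of the form \begin{eqnarray} h(\vx,\vw)=C_0\ \ \text{in a neighborhood of}\ \p\Omega,\qquad g(t,\vx_0,\vw)=C_0+t^2G(t,\vx_0,\vw), \end{eqnarray} with $G$ smooth, since then $h(\vx_0,\vw)=\bar h(\vx_0)=C_0$, $\vw\cdot\nabla_xh(\vx_0,\vw)=0$, and $\dt g(0,\vx_0,\vw)=0$.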
\subsection{Interior Expansion} We define the interior expansion as follows: \begin{eqnarray}\label{interior expansion} \u(t,\vx,\vw)\sim\sum_{k=0}^{\infty}\e^k\u_k(t,\vx,\vw), \end{eqnarray} where $\u_k$ can be defined by comparing the order of $\e$ via plugging (\ref{interior expansion}) into the equation (\ref{transport}). Thus, we have \begin{eqnarray} \u_0-\bu_0&=&0,\label{expansion temp 1}\\ \u_1-\bu_1&=&-\vw\cdot\nx\u_0,\label{expansion temp 2}\\ \u_2-\bu_2&=&-\vw\cdot\nx\u_1-\dt\u_0,\label{expansion temp 3}\\ \ldots\nonumber\\ \u_k-\bu_k&=&-\vw\cdot\nx\u_{k-1}-\dt\u_{k-2}. \end{eqnarray} \ \\ The following analysis reveals the equation satisfied by $\u_k$:\\ Plugging (\ref{expansion temp 1}) into (\ref{expansion temp 2}), we obtain \begin{eqnarray} \u_1=\bu_1-\vw\cdot\nx\bu_0.\label{expansion temp 4} \end{eqnarray} Plugging (\ref{expansion temp 4}) into (\ref{expansion temp 3}), we get \begin{eqnarray}\label{expansion temp 13} \u_2-\bu_2+\dt\u_0=-\vw\cdot\nx(\bu_1-\vw\cdot\nx\bu_0)=-\vw\cdot\nx\bu_1+\abs{\vw}^2\Delta_x\bu_0+2w_1w_2\p_{x_1x_2}\bu_0. \end{eqnarray} Integrating (\ref{expansion temp 13}) over $\vw\in\s^1$, we achieve the final form \begin{eqnarray} \dt\bu_0-\Delta_x\bu_0=0, \end{eqnarray} which further implies $\u_0(t,\vx,\vw)$ satisfies the equation \begin{eqnarray}\label{interior 1} \left\{ \begin{array}{rcl} \u_0&=&\bu_0,\\ \dt\bu_0-\Delta_x\bu_0&=&0. \end{array} \right. \end{eqnarray} Similarly, we can derive $\u_1(t,\vx,\vw)$ satisfies \begin{eqnarray}\label{interior 2} \left\{ \begin{array}{rcl} \u_1&=&\bu_1-\vw\cdot\nx\u_0,\\ \dt\bu_1-\Delta_x\bu_1&=&0, \end{array} \right. \end{eqnarray} and $\u_k(t,\vx,\vw)$ for $k\geq2$ satisfies \begin{eqnarray}\label{interior 3} \left\{ \begin{array}{rcl} \u_k&=&\bu_k-\vw\cdot\nx\u_{k-1}-\dt\u_{k-2},\\ \dt\bu_k-\Delta_x\bu_k&=&0. \end{array} \right. \end{eqnarray} Note that in order to determine $\u_k$, we need to define the initial data and boundary data. 
\subsection{Initial Layer Expansion} In order to determine the initial condition for $\u_k$, we need to define the initial layer expansion. Hence, we need a substitution:\\ \ \\ Temporal Substitution:\\ We define the stretched variable $\tau$ by making the scaling transform for $u^{\e}(t)\rt u^{\e}(\tau)$ with $\tau\in [0,\infty)$ as \begin{eqnarray}\label{substitution 0} \tau&=&\frac{t}{\e^2}, \end{eqnarray} which implies \begin{eqnarray} \frac{\p u^{\e}}{\p t}=\frac{1}{\e^2}\frac{\p u^{\e}}{\p\tau}. \end{eqnarray} In this new variable, equation (\ref{transport}) can be rewritten as \begin{eqnarray}\label{initial temp} \left\{ \begin{array}{l}\displaystyle \p_{\tau}u^{\e}+\e\vw\cdot\nabla_xu^{\e}+u^{\e}-\bar u^{\e}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\vx,\vw)=h(\vx,\vw),\\\rule{0ex}{1.0em} u^{\e}(\tau,\vx_0,\vw)=g(\tau,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0. \end{array} \right. \end{eqnarray} We define the initial layer expansion as follows: \begin{eqnarray}\label{initial layer expansion} \ub_I(\tau,\vx,\vw)\sim\sum_{k=0}^{\infty}\e^k\ub_{I,k}(\tau,\vx,\vw), \end{eqnarray} where $\ub_{I,k}$ can be determined by comparing the order of $\e$ via plugging (\ref{initial layer expansion}) into the equation (\ref{initial temp}). Thus, we have \begin{eqnarray} \p_{\tau}\ub_{I,0}+\ub_{I,0}-\bub_{I,0}&=&0,\label{initial expansion 1}\\ \p_{\tau}\ub_{I,1}+\ub_{I,1}-\bub_{I,1}&=&-\vw\cdot\nabla_x\ub_{I,0},\label{initial expansion 2}\\ \p_{\tau}\ub_{I,2}+\ub_{I,2}-\bub_{I,2}&=&-\vw\cdot\nabla_x\ub_{I,1},\label{initial expansion 3}\\ \ldots\no\\ \p_{\tau}\ub_{I,k}+\ub_{I,k}-\bub_{I,k}&=&-\vw\cdot\nabla_x\ub_{I,k-1}.\label{initial expansion 4} \end{eqnarray} \ \\ The following analysis reveals the equation satisfied by $\ub_{I,k}$:\\ Integrating (\ref{initial expansion 1}) over $\vw\in\s^1$, we have \begin{eqnarray} \p_{\tau}\bub_{I,0}=0, \end{eqnarray} which further implies \begin{eqnarray} \bub_{I,0}(\tau,\vx)=\bub_{I,0}(0,\vx).
\end{eqnarray} Therefore, from (\ref{initial expansion 1}), we can deduce \begin{eqnarray} \ub_{I,0}(\tau,\vx,\vw)&=&\ue^{-\tau}\ub_{I,0}(0,\vx,\vw)+\int_0^{\tau}\bub_{I,0}(s,\vx)\ue^{s-\tau}\ud{s}\\ &=&\ue^{-\tau}\ub_{I,0}(0,\vx,\vw)+(1-\ue^{-\tau})\bub_{I,0}(0,\vx).\no \end{eqnarray} This means we have \begin{eqnarray} \left\{ \begin{array}{rcl} \p_{\tau}\bub_{I,0}&=&0,\\\rule{0ex}{1.0em} \ub_{I,0}(\tau,\vx,\vw)&=&\ue^{-\tau}\ub_{I,0}(0,\vx,\vw)+(1-\ue^{-\tau})\bub_{I,0}(0,\vx). \end{array} \right. \end{eqnarray} Similarly, we can derive $\ub_{I,k}(\tau,\vx,\vw)$ for $k\geq1$ satisfies \begin{eqnarray} \left\{ \begin{array}{rcl} \p_{\tau}\bub_{I,k}&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ub_{I,k-1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ub_{I,k}(\tau,\vx,\vw)&=&\ue^{-\tau}\ub_{I,k}(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bub_{I,k}-\vw\cdot\nabla_x\ub_{I,k-1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s}. \end{array} \right. \end{eqnarray} \subsection{Boundary Layer Expansion with Geometric Correction} In order to determine the boundary condition for $\u_k$, we need to define the boundary layer expansion. Hence, we need several substitutions:\\ \ \\ Spacial Substitution 1:\\ We consider the substitution into quasi-polar coordinates $u^{\e}(x_1,x_2)\rt u^{\e}(\mu,\theta)$ with $(\mu,\theta)\in [0,1)\times[-\pi,\pi)$ defined as \begin{eqnarray}\label{substitution 1} \left\{ \begin{array}{rcl} x_1&=&(1-\mu)\cos\theta,\\ x_2&=&(1-\mu)\sin\theta. \end{array} \right. \end{eqnarray} Here $\mu$ denotes the distance to the boundary $\p\Omega$ and $\theta$ is the space angular variable. 
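For completeness, we record the chain rule underlying this substitution. A direct computation from (\ref{substitution 1}) gives \begin{eqnarray} \frac{\p u^{\e}}{\p x_1}=-\cos\theta\frac{\p u^{\e}}{\p\mu}-\frac{\sin\theta}{1-\mu}\frac{\p u^{\e}}{\p\theta},\qquad \frac{\p u^{\e}}{\p x_2}=-\sin\theta\frac{\p u^{\e}}{\p\mu}+\frac{\cos\theta}{1-\mu}\frac{\p u^{\e}}{\p\theta}, \end{eqnarray} and hence \begin{eqnarray} \vw\cdot\nabla_xu^{\e}=-\bigg(w_1\cos\theta+w_2\sin\theta\bigg)\frac{\p u^{\e}}{\p\mu}-\frac{1}{1-\mu}\bigg(w_1\sin\theta-w_2\cos\theta\bigg)\frac{\p u^{\e}}{\p\theta}, \end{eqnarray} which, after multiplication by $\e$, is exactly the transport term in the transformed equation below.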
In these new variables, equation (\ref{transport}) can be rewritten as \begin{eqnarray} \\ \left\{ \begin{array}{l}\displaystyle \e^2\frac{\p u^{\e}}{\p t}-\e\bigg(w_1\cos\theta+w_2\sin\theta\bigg)\frac{\p u^{\e}}{\p\mu}-\frac{\e}{1-\mu}\bigg(w_1\sin\theta-w_2\cos\theta\bigg)\frac{\p u^{\e}}{\p\theta}+u^{\e}-\frac{1}{2\pi}\int_{\s^1}u^{\e}\ud{\vw}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\mu,\theta,w_1,w_2)=h(\mu,\theta,w_1,w_2),\\\rule{0ex}{1.0em} u^{\e}(t,0,\theta,w_1,w_2)=g(t,\theta,w_1,w_2)\ \ \text{for}\ \ w_1\cos\theta+w_2\sin\theta<0.\no \end{array} \right. \end{eqnarray} \ \\ Spacial Substitution 2:\\ We further define the stretched variable $\eta$ by making the scaling transform for $u^{\e}(\mu,\theta)\rt u^{\e}(\eta,\theta)$ with $(\eta,\theta)\in [0,1/\e)\times[-\pi,\pi)$ as \begin{eqnarray}\label{substitution 2} \left\{ \begin{array}{rcl} \eta&=&\dfrac{\mu}{\e},\\\rule{0ex}{1.5em} \theta&=&\theta, \end{array} \right. \end{eqnarray} which implies \begin{eqnarray} \frac{\p u^{\e}}{\p\mu}=\frac{1}{\e}\frac{\p u^{\e}}{\p\eta}. \end{eqnarray} Then equation (\ref{transport}) is transformed into \begin{eqnarray} \\ \left\{ \begin{array}{l}\displaystyle \e^2\frac{\p u^{\e}}{\p t}-\bigg(w_1\cos\theta+w_2\sin\theta\bigg)\frac{\p u^{\e}}{\p\eta}-\frac{\e}{1-\e\eta}\bigg(w_1\sin\theta-w_2\cos\theta\bigg)\frac{\p u^{\e}}{\p\theta}+u^{\e}-\frac{1}{2\pi}\int_{\s^1}u^{\e}\ud{\vw}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\eta,\theta,\vw)=h(\eta,\theta,w_1,w_2),\\\rule{0ex}{1.0em} u^{\e}(t,0,\theta,w_1,w_2)=g(t,\theta,w_1,w_2)\ \ \text{for}\ \ w_1\cos\theta+w_2\sin\theta<0.\no \end{array} \right. \end{eqnarray} \ \\ Spacial Substitution 3:\\ Define the velocity substitution for $u^{\e}(w_1,w_2)\rt u^{\e}(\xi)$ with $\xi\in [-\pi,\pi)$ as \begin{eqnarray}\label{substitution 3} \left\{ \begin{array}{rcl} w_1&=&-\sin\xi,\\ w_2&=&-\cos\xi. \end{array} \right. \end{eqnarray} Here $\xi$ denotes the velocity angular variable. 
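The simplification in the next step rests on elementary trigonometric identities: with $w_1=-\sin\xi$ and $w_2=-\cos\xi$, \begin{eqnarray} w_1\cos\theta+w_2\sin\theta=-\sin(\theta+\xi),\qquad w_1\sin\theta-w_2\cos\theta=\cos(\theta+\xi), \end{eqnarray} so that the incoming boundary set $w_1\cos\theta+w_2\sin\theta<0$ becomes $\sin(\theta+\xi)>0$; substituting these identities yields the succinct form below.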
We have the succinct form for (\ref{transport}) as \begin{eqnarray}\label{classical temp} \left\{ \begin{array}{l}\displaystyle \e^2\frac{\p u^{\e}}{\p t}+\sin(\theta+\xi)\frac{\p u^{\e}}{\p\eta}-\frac{\e}{1-\e\eta}\cos(\theta+\xi)\frac{\p u^{\e}}{\p\theta}+u^{\e}-\frac{1}{2\pi}\int_{-\pi}^{\pi}u^{\e}\ud{\xi}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\eta,\theta,\xi)=h(\eta,\theta,\xi),\\\rule{0ex}{1.0em} u^{\e}(t,0,\theta,\xi)=g(t,\theta,\xi)\ \ \text{for}\ \ \sin(\theta+\xi)>0. \end{array} \right. \end{eqnarray} \ \\ Velocity Substitution 4:\\ We make the rotation substitution for $u^{\e}(\xi)\rt u^{\e}(\phi)$ with $\phi\in [-\pi,\pi)$ as \begin{eqnarray}\label{substitution 4} \begin{array}{rcl} \phi&=&\theta+\xi, \end{array} \end{eqnarray} and transform the equation (\ref{transport}) into \begin{eqnarray}\label{transport temp} \left\{ \begin{array}{l}\displaystyle \e^2\dfrac{\p u^{\e}}{\p t}+\sin\phi\frac{\p u^{\e}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\bigg(\frac{\p u^{\e}}{\p\phi}+\frac{\p u^{\e}}{\p\theta}\bigg)+u^{\e}-\frac{1}{2\pi}\int_{-\pi}^{\pi}u^{\e}\ud{\phi}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\eta,\theta,\phi)=h(\eta,\theta,\phi),\\\rule{0ex}{1.0em} u^{\e}(t,0,\theta,\phi)=g(t,\theta,\phi)\ \ \text{for}\ \ \sin\phi>0. \end{array} \right. \end{eqnarray} We define the boundary layer expansion with geometric correction as follows: \begin{eqnarray}\label{boundary layer expansion} \ub_B(t,\eta,\theta,\phi)\sim\sum_{k=0}^{\infty}\e^k\ub_{B,k}(t,\eta,\theta,\phi), \end{eqnarray} where $\ub_{B,k}$ can be determined by comparing the order of $\e$ via plugging (\ref{boundary layer expansion}) into the equation (\ref{transport temp}).
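We remark that the form (\ref{transport temp}) can be checked directly from (\ref{classical temp}): since $\phi=\theta+\xi$, differentiation at fixed $\xi$ versus fixed $\phi$ satisfies
\begin{eqnarray}
\frac{\p}{\p\xi}\bigg|_{\theta}=\frac{\p}{\p\phi}\bigg|_{\theta},\qquad \frac{\p}{\p\theta}\bigg|_{\xi}=\frac{\p}{\p\theta}\bigg|_{\phi}+\frac{\p}{\p\phi}\bigg|_{\theta},\no
\end{eqnarray}
which produces the combined derivative $\p_{\phi}+\p_{\theta}$ in (\ref{transport temp}).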
Following the idea in \cite{AA003}, in a neighborhood of the boundary, we require \begin{eqnarray} \sin\phi\frac{\p \ub_{B,0}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\frac{\p \ub_{B,0}}{\p\phi}+\ub_{B,0}-\bub_{B,0}&=&0,\label{expansion temp 5}\\ \sin\phi\frac{\p \ub_{B,1}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\frac{\p \ub_{B,1}}{\p\phi}+\ub_{B,1}-\bub_{B,1}&=&\frac{1}{1-\e\eta}\cos\phi\frac{\p \ub_{B,0}}{\p\theta},\label{expansion temp 6}\\ \sin\phi\frac{\p \ub_{B,2}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\frac{\p \ub_{B,2}}{\p\phi}+\ub_{B,2}-\bub_{B,2}&=&\frac{1}{1-\e\eta}\cos\phi\frac{\p \ub_{B,1}}{\p\theta}-\frac{\p\ub_{B,0}}{\p t},\label{expansion temp 7}\\ \ldots\nonumber\\ \sin\phi\frac{\p \ub_{B,k}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\frac{\p \ub_{B,k}}{\p\phi}+\ub_{B,k}-\bub_{B,k}&=&\frac{1}{1-\e\eta}\cos\phi\frac{\p \ub_{B,k-1}}{\p\theta}-\frac{\p\ub_{B,k-2}}{\p t}, \end{eqnarray} where \begin{eqnarray} \bub_{B,k}(t,\eta,\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\ub_{B,k}(t,\eta,\theta,\phi)\ud{\phi}. \end{eqnarray} It is important to note that the solution $\ub_{B,k}$ depends on $\e$, and this is the reason why we add the superscript $\e$ to $\u_k$, $\ub_{I,k}$ and $\ub_{B,k}$. \subsection{Initial-Boundary Layer Expansion} The above construction of the initial layer and the boundary layer reveals an interesting fact: at the corner point $(t,\vx)=(0,\vx_0)$ for $\vx_0\in\p\Omega$, the initial layer starting from this point contributes to the boundary data, and the boundary layer starting from this point contributes to the initial data. Therefore, we have to find some additional functions to compensate for these contributions. The classical theory of asymptotic analysis requires the so-called initial-boundary layer, where both the temporal scaling and the spatial scaling should be used simultaneously.
Fortunately, based on our analysis, the improved compatibility condition (\ref{improved compatibility condition}) implies that the value at this corner point is a constant for $\vw\cdot\vec n<0$. Then these contributions must vanish at the zeroth order, i.e. \begin{eqnarray} \ub_{I,0}(\tau,\vx_0,\vw)&=&0,\\ \ub_{B,0}(0,\eta,\theta,\phi)&=&0. \end{eqnarray} Therefore, the zeroth order initial-boundary layer is absent. \subsection{Construction of Asymptotic Expansion} The bridge between the interior solution, the initial layer, and the boundary layer is the initial and boundary conditions of (\ref{transport}). To avoid the introduction of higher order initial-boundary layers, we only require the zeroth order expansion of the initial and boundary data to be satisfied, i.e. we have \begin{eqnarray} \u_0(0,\vx,\vw)+\ub_{I,0}(0,\vx,\vw)+\ub_{B,0}(0,\vx,\vw)&=&h(\vx,\vw),\\ \u_0(t,\vx_0,\vw)+\ub_{I,0}(t,\vx_0,\vw)+\ub_{B,0}(t,\vx_0,\vw)&=&g(t,\vx_0,\vw). \end{eqnarray} The construction of $\u_k$, $\ub_{I,k}$ and $\ub_{B,k}$ is as follows:\\ \ \\ Assume the cut-off functions $\psi$ and $\psi_0$ are defined as \begin{eqnarray}\label{cut-off 1} \psi(\mu)=\left\{ \begin{array}{ll} 1&0\leq\mu\leq1/2,\\ 0&3/4\leq\mu\leq\infty. \end{array} \right. \end{eqnarray} \begin{eqnarray}\label{cut-off 2} \psi_0(\mu)=\left\{ \begin{array}{ll} 1&0\leq\mu\leq1/4,\\ 0&3/8\leq\mu\leq\infty. \end{array} \right. 
\end{eqnarray} and define the force as \begin{eqnarray}\label{force} F(\e;\eta)=-\frac{\e\psi(\e\eta)}{1-\e\eta}. \end{eqnarray} \ \\ Step 1: Construction of zeroth order terms.\\ The zeroth order boundary layer solution is defined as \begin{eqnarray}\label{expansion temp 9} \left\{ \begin{array}{rcl} \ub_{B,0}(t,\eta,\theta,\phi)&=&\psi_0(\e\eta)\bigg(\f_0^{\e}(t,\eta,\theta,\phi)-f_0^{\e}(t,\infty,\theta)\bigg),\\ \sin\phi\dfrac{\p \f_0^{\e}}{\p\eta}-F(\e;\eta)\cos\phi\dfrac{\p \f_0^{\e}}{\p\phi}+\f_0^{\e}-\bar \f_0^{\e}&=&0,\\ \f_0^{\e}(t,0,\theta,\phi)&=&g(t,\theta,\phi)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_0^{\e}(t,\eta,\theta,\phi)&=&f_0^{\e}(t,\infty,\theta). \end{array} \right. \end{eqnarray} Assuming $g\in L^{\infty}$, by Theorem \ref{Milne theorem 1}, we can show that there exists a unique solution $\f_0^{\e}(t,\eta,\theta,\phi)\in L^{\infty}$. Hence, $\ub_{B,0}$ is well-defined.\\ The zeroth order initial layer is defined as \begin{eqnarray}\label{expansion temp 21} \left\{ \begin{array}{rcl} \ub_{I,0}(\tau,\vx,\vw)&=&\ff_0^{\e}(\tau,\vx,\vw)-\ff_0^{\e}(\infty,\vx),\\ \p_{\tau}\bar\ff_0^{\e}&=&0,\\\rule{0ex}{1.0em} \ff_0^{\e}(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_0^{\e}(0,\vx,\vw)+(1-\ue^{-\tau})\bar\ff_0^{\e}(0,\vx),\\ \ff_0^{\e}(0,\vx,\vw)&=&h(\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_0^{\e}(\tau,\vx,\vw)&=&\ff_0^{\e}(\infty,\vx). \end{array} \right. \end{eqnarray} Assuming $h\in L^{\infty}$, we can show that there exists a unique solution $\ff_0^{\e}(\tau,\vx,\vw)\in L^{\infty}$. Hence, $\ub_{I,0}$ is well-defined.\\ Then we can define the zeroth order interior solution as \begin{eqnarray}\label{expansion temp 8} \left\{ \begin{array}{rcl} \u_0&=&\bu_0,\\\rule{0ex}{1em} \dt\bu_0-\Delta_x\bu_0&=&0,\\\rule{0ex}{1em}\bu_0(0,\vx)&=&\ff_0^{\e}(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \bu_0(t,\vx_0)&=&f_0^{\e}(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega, \end{array} \right. 
\end{eqnarray} where $(t,\vx,\vw)$ and $(\tau,\eta,\theta,\phi)$ denote the same point in the original and rescaled variables. Note that due to the improved compatibility condition (\ref{improved compatibility condition}), we have $\ub_{B,0}(0,\eta,\theta,\phi)=\ub_{I,0}(\tau,\vx_0,\vw)=0$.\\ \ \\ Step 2: Construction of first order terms. \\ Define the first order boundary layer solution as \begin{eqnarray}\label{expansion temp 11} \left\{ \begin{array}{rcl} \ub_{B,1}(t,\eta,\theta,\phi)&=&\psi_0(\e\eta)\bigg(\f_1^{\e}(t,\eta,\theta,\phi)-f_1^{\e}(t,\infty,\theta)\bigg),\\ \sin\phi\dfrac{\p \f_1^{\e}}{\p\eta}-F(\e;\eta)\cos\phi\dfrac{\p \f_1^{\e}}{\p\phi}+\f_1^{\e}-\bar \f_1^{\e}&=&\cos\phi\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ub_{B,0}}{\p\theta},\\\rule{0ex}{1em} \f_1^{\e}(t,0,\theta,\phi)&=&\vw\cdot\nx\u_0(t,\vx_0,\vw)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_1^{\e}(t,\eta,\theta,\phi)&=&f_1^{\e}(t,\infty,\theta). \end{array} \right. \end{eqnarray} Define the first order initial layer as \begin{eqnarray}\label{expansion temp 22} \left\{ \begin{array}{rcl} \ub_{I,1}(\tau,\vx,\vw)&=&\ff_1^{\e}(\tau,\vx,\vw)-\ff_1^{\e}(\infty,\vx),\\ \p_{\tau}\bar\ff_1^{\e}&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ub_{I,0}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_1^{\e}(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_1^{\e}(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_1^{\e}-\vw\cdot\nabla_x\ub_{I,0}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_1^{\e}(0,\vx,\vw)&=&\vw\cdot\nx\u_0(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_1^{\e}(\tau,\vx,\vw)&=&\ff_1^{\e}(\infty,\vx). \end{array} \right. \end{eqnarray} Define the first order interior solution as \begin{eqnarray}\label{expansion temp 10} \left\{ \begin{array}{rcl} \u_1&=&\bu_1-\vw\cdot\nx\u_0,\\ \dt\bu_1-\Delta_x\bu_1&=&0,\\\rule{0ex}{1em}\bu_1(0,\vx)&=&\ff_1^{\e}(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \bu_1(t,\vx_0)&=&f_1^{\e}(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right. 
\end{eqnarray} \ \\ Step 3: Construction of second order terms. \\ Define the second order boundary layer solution as \begin{eqnarray} \ \end{eqnarray} \begin{eqnarray} \left\{ \begin{array}{rcl} \ub_{B,2}(t,\eta,\theta,\phi)&=&\psi_0(\e\eta)\bigg(\f_2^{\e}(t,\eta,\theta,\phi)-f_2^{\e}(t,\infty,\theta)\bigg),\\ \sin\phi\dfrac{\p \f_2^{\e}}{\p\eta}-F(\e;\eta)\cos\phi\dfrac{\p \f_2^{\e}}{\p\phi}+\f_2^{\e}-\bar \f_2^{\e}&=&\cos\phi\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ub_{B,1}}{\p\theta}-\dfrac{\p\ub_{B,0}}{\p t},\\\rule{0ex}{1em} \f_2^{\e}(t,0,\theta,\phi)&=&\vw\cdot\nx\u_1(t,\vx_0,\vw)+\dt\u_0(t,\vx_0,\vw)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_2^{\e}(t,\eta,\theta,\phi)&=&f_2^{\e}(t,\infty,\theta).\no \end{array} \right. \end{eqnarray} Define the second order initial layer as \begin{eqnarray}\label{expansion temp 23} \left\{ \begin{array}{rcl} \ub_{I,2}(\tau,\vx,\vw)&=&\ff_2^{\e}(\tau,\vx,\vw)-\ff_2^{\e}(\infty,\vx),\\ \p_{\tau}\bar\ff_2^{\e}&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ub_{I,1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_2^{\e}(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_2^{\e}(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_2^{\e}-\vw\cdot\nabla_x\ub_{I,1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_2^{\e}(0,\vx,\vw)&=&\vw\cdot\nx\u_1(0,\vx,\vw)+\dt\u_0(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_2^{\e}(\tau,\vx,\vw)&=&\ff_2^{\e}(\infty,\vx). \end{array} \right. \end{eqnarray} Define the second order interior solution as \begin{eqnarray} \left\{ \begin{array}{rcl} \u_2&=&\bu_2-\vw\cdot\nx\u_1-\dt\u_0,\\ \dt\bu_2-\Delta_x\bu_2&=&0,\\\rule{0ex}{1em}\bu_2(0,\vx)&=&\ff_2^{\e}(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \bu_2(t,\vx_0)&=&f_2^{\e}(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right. 
\end{eqnarray} \ \\ Step 4: Generalization to arbitrary $k$.\\ Similarly to the above procedure, we can define the $k^{th}$ order boundary layer solution as \begin{eqnarray} \ \end{eqnarray} \begin{eqnarray} \left\{ \begin{array}{rcl} \ub_{B,k}(t,\eta,\theta,\phi)&=&\psi_0(\e\eta)\bigg(\f_k^{\e}(t,\eta,\theta,\phi)-f_k^{\e}(t,\infty,\theta)\bigg),\\ \sin\phi\dfrac{\p \f_k^{\e}}{\p\eta}-F(\e;\eta)\cos\phi\dfrac{\p \f_k^{\e}}{\p\phi}+\f_k^{\e}-\bar \f_k^{\e}&=&\cos\phi\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ub_{B,k-1}}{\p\theta}-\dfrac{\p\ub_{B,k-2}}{\p t},\\\rule{0ex}{1em} \f_k^{\e}(t,0,\theta,\phi)&=&\vw\cdot\nx\u_{k-1}(t,\vx_0,\vw)+\dt\u_{k-2}(t,\vx_0,\vw)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_k^{\e}(t,\eta,\theta,\phi)&=&f_k^{\e}(t,\infty,\theta).\no \end{array} \right. \end{eqnarray} Define the $k^{th}$ order initial layer as \begin{eqnarray}\label{expansion temp 24} \left\{ \begin{array}{rcl} \ub_{I,k}(\tau,\vx,\vw)&=&\ff_k^{\e}(\tau,\vx,\vw)-\ff_k^{\e}(\infty,\vx),\\ \p_{\tau}\bar\ff_k^{\e}&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ub_{I,k-1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_k^{\e}(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_k^{\e}(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_k^{\e}-\vw\cdot\nabla_x\ub_{I,k-1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_k^{\e}(0,\vx,\vw)&=&\vw\cdot\nx\u_{k-1}(0,\vx,\vw)+\dt\u_{k-2}(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_k^{\e}(\tau,\vx,\vw)&=&\ff_k^{\e}(\infty,\vx). \end{array} \right. \end{eqnarray} Define the $k^{th}$ order interior solution as \begin{eqnarray} \left\{ \begin{array}{rcl} \u_k&=&\bu_k-\vw\cdot\nx\u_{k-1}-\dt\u_{k-2},\\\rule{0ex}{1em} \dt\bu_k-\Delta_x\bu_k&=&0,\\\rule{0ex}{1em}\bu_k(0,\vx)&=&\ff_k^{\e}(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \bu_k(t,\vx_0)&=&f_k^{\e}(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right. \end{eqnarray} When $g$ and $h$ are sufficiently smooth, all the functions defined above are well-defined.
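As a concrete illustration of Step 1, the system (\ref{expansion temp 21}) can be solved in closed form: the damped equation with initial datum $h$ gives
\begin{eqnarray}
\ff_0^{\e}(\tau,\vx,\vw)=\ue^{-\tau}h(\vx,\vw)+(1-\ue^{-\tau})\bar h(\vx),\qquad \bar h(\vx)=\frac{1}{2\pi}\int_{\s^1}h(\vx,\vw)\ud{\vw},\no
\end{eqnarray}
so that $\ff_0^{\e}(\infty,\vx)=\bar h(\vx)$ and
\begin{eqnarray}
\ub_{I,0}(\tau,\vx,\vw)=\ue^{-\tau}\bigg(h(\vx,\vw)-\bar h(\vx)\bigg),\no
\end{eqnarray}
which decays exponentially in the rescaled time $\tau=t/\e^2$.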
The key point here is that, in the boundary layer, the source term involving $\p_{\theta}\ub_{B,k}$ is in $L^{\infty}$ due to the substitution (\ref{substitution 4}). \section{$\e$-Milne Problem} In this section, we study the $\e$-Milne problem for $f^{\e}(\eta,\theta,\phi)$ in the domain $(\eta,\theta,\phi)\in[0,\infty)\times[-\pi,\pi)\times[-\pi,\pi)$ \begin{eqnarray}\label{Milne problem} \left\{ \begin{array}{rcl}\displaystyle \sin\phi\frac{\p f^{\e}}{\p\eta}+F(\e;\eta)\cos\phi\frac{\p f^{\e}}{\p\phi}+f^{\e}-\bar f^{\e}&=&S^{\e}(\eta,\theta,\phi),\\ f^{\e}(0,\theta,\phi)&=&H^{\e}(\theta,\phi)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1.0em} \lim_{\eta\rt\infty}f^{\e}(\eta,\theta,\phi)&=&f^{\e}_{\infty}(\theta), \end{array} \right. \end{eqnarray} where \begin{eqnarray}\label{Milne average} \bar f^{\e}(\eta,\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f^{\e}(\eta,\theta,\phi)\ud{\phi}, \end{eqnarray} \begin{eqnarray} F(\e;\eta)=-\frac{\e\psi(\e\eta)}{1-\e\eta}, \end{eqnarray} \begin{eqnarray} \psi(\mu)=\left\{ \begin{array}{ll} 1&0\leq\mu\leq1/2,\\ 0&3/4\leq\mu\leq\infty, \end{array} \right. \end{eqnarray} \begin{eqnarray}\label{Milne bounded} \abs{H^{\e}(\theta,\phi)}\leq M, \end{eqnarray} and \begin{eqnarray}\label{Milne decay} \abs{S^{\e}(\eta,\theta,\phi)}\leq Me^{-K\eta}, \end{eqnarray} for $M>0$ and $K>0$ uniform in $\e$ and $\theta$. Since the key variables here are $\eta$ and $\phi$, we temporarily ignore the dependence on $\e$ and $\theta$. We define the norms in the space $(\eta,\phi)\in[0,\infty)\times[-\pi,\pi)$ as follows: \begin{eqnarray} \tnnm{f}&=&\bigg(\int_0^{\infty}\int_{-\pi}^{\pi}\abs{f(\eta,\phi)}^2\ud{\phi}\ud{\eta}\bigg)^{1/2},\\ \lnnm{f}&=&\sup_{(\eta,\phi)\in[0,\infty)\times[-\pi,\pi)}\abs{f(\eta,\phi)}. 
\end{eqnarray} In \cite[Section 4]{AA003}, the authors proved the following results: \begin{theorem}\label{Milne theorem 1} There exists a unique solution $f(\eta,\phi)$ to the $\e$-Milne problem (\ref{Milne problem}) satisfying \begin{eqnarray} \tnnm{f-f_{\infty}}\leq C\bigg(1+M+\frac{M}{K}\bigg). \end{eqnarray} \end{theorem} \begin{theorem}\label{Milne theorem 2} There exists a unique solution $f(\eta,\phi)$ to the $\e$-Milne problem (\ref{Milne problem}) satisfying \begin{eqnarray} \lnnm{f-f_{\infty}}\leq C\bigg(1+M+\frac{M}{K}\bigg). \end{eqnarray} \end{theorem} \begin{theorem}\label{Milne theorem 3} For $K_0>0$ sufficiently small, the solution $f(\eta,\phi)$ to the $\e$-Milne problem (\ref{Milne problem}) satisfies \begin{eqnarray} \tnnm{e^{K_0\eta}(f-f_{\infty})}\leq C\bigg(1+M+\frac{M}{K}\bigg). \end{eqnarray} \end{theorem} \begin{theorem}\label{Milne theorem 4} For $K_0>0$ sufficiently small, the solution $f(\eta,\phi)$ to the $\e$-Milne problem (\ref{Milne problem}) satisfies \begin{eqnarray} \lnnm{e^{K_0\eta}(f-f_{\infty})}\leq C\bigg(1+M+\frac{M}{K}\bigg). \end{eqnarray} \end{theorem} \begin{theorem}\label{Milne theorem 5} The solution $f(\eta,\phi)$ to the $\e$-Milne problem (\ref{Milne problem}) with $S=0$ satisfies the maximum principle, i.e. \begin{eqnarray} \min_{\sin\phi>0}H(\phi)\leq f(\eta,\phi)\leq \max_{\sin\phi>0}H(\phi). \end{eqnarray} \end{theorem} \begin{remark}\label{Milne remark} Note that when $F=0$, Theorem \ref{Milne theorem 1}, Theorem \ref{Milne theorem 2}, Theorem \ref{Milne theorem 3}, Theorem \ref{Milne theorem 4}, and Theorem \ref{Milne theorem 5} still hold. Hence, we can deduce the well-posedness, decay and maximum principle of the classical Milne problem \begin{eqnarray}\label{classical Milne problem} \left\{ \begin{array}{rcl}\displaystyle \sin\phi\frac{\p f}{\p\eta}+f-\bar f&=&S(\eta,\phi),\\ f(0,\phi)&=&h(\phi)\ \ \text{for}\ \ \sin\phi>0,\\ \lim_{\eta\rt\infty}f(\eta,\phi)&=&f_{\infty}. \end{array} \right. 
\end{eqnarray} \end{remark} \section{Diffusive Limit} In this section, we prove the first part of Theorem \ref{main 2}. \begin{theorem} Assume $g(t,\vx_0,\vw)\in C^4([0,\infty)\times\Gamma^-)$ and $h(\vx,\vw)\in C^4(\Omega\times\s^1)$. Then for the unsteady neutron transport equation (\ref{transport}), the unique solution $u^{\e}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfies \begin{eqnarray}\label{main theorem 2} \lnm{u^{\e}-\u_0-\ub_{I,0}-\ub_{B,0}}=o(1), \end{eqnarray} where the interior solution $\u_0$ is defined in (\ref{expansion temp 8}), the initial layer $\ub_{I,0}$ is defined in (\ref{expansion temp 21}), and the boundary layer $\ub_{B,0}$ is defined in (\ref{expansion temp 9}). \end{theorem} \begin{proof} We divide the proof into several steps:\\ \ \\ Step 1: Remainder definitions.\\ We may rewrite the asymptotic expansion as follows: \begin{eqnarray} u^{\e}&\sim&\sum_{k=0}^{\infty}\e^k\u_k+\sum_{k=0}^{\infty}\e^k\ub_{I,k}+\sum_{k=0}^{\infty}\e^k\ub_{B,k}. \end{eqnarray} The remainder can be defined as \begin{eqnarray}\label{pf 1} R_N&=&u^{\e}-\sum_{k=0}^{N}\e^k\u_k-\sum_{k=0}^{N}\e^k\ub_{I,k}-\sum_{k=0}^{N}\e^k\ub_{B,k}=u^{\e}-\q_N-\qb_{I,N}-\qb_{B,N}, \end{eqnarray} where \begin{eqnarray} \q_N&=&\sum_{k=0}^{N}\e^k\u_k,\\ \qb_{I,N}&=&\sum_{k=0}^{N}\e^k\ub_{I,k},\\ \qb_{B,N}&=&\sum_{k=0}^{N}\e^k\ub_{B,k}. 
\end{eqnarray} Noting that the equation is equivalent to the equations (\ref{initial temp}) and (\ref{transport temp}), we write $\ll$ to denote the neutron transport operator as follows: \begin{eqnarray} \ll u&=&\e^2\dt u+\e\vw\cdot\nx u+u-\bar u\\ &=&\p_{\tau}u+\e\vw\cdot\nabla_xu+u-\bar u\\ &=&\e^2\frac{\p u}{\p t}+\sin\phi\frac{\p u}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\bigg(\frac{\p u}{\p\phi}+\frac{\p u}{\p\theta}\bigg)+u-\bar u.\nonumber \end{eqnarray} \ \\ Step 2: Estimates of $\ll \q_N$.\\ The interior contribution can be estimated as \begin{eqnarray} \ll\q_0&=&\e^2\dt\q_0+\e\vw\cdot\nx \q_0+\q_0-\bar \q_0\\ &=&\e^2\dt\u_0+ \e\vw\cdot\nx \u_0+(\u_0-\bu_0)=\e^2\dt\u_0+\e\vw\cdot\nx \u_0.\no \end{eqnarray} We have \begin{eqnarray} \abs{\e^2\dt\u_0}&\leq&C\e^2\abs{\dt\u_0}\leq C\e^2,\\ \abs{\e\vw\cdot\nx \u_0}&\leq& C\e\abs{\nx \u_0}\leq C\e. \end{eqnarray} This implies \begin{eqnarray} \abs{\ll \q_0}\leq C\e. \end{eqnarray} Similarly, for higher order terms, we can estimate \begin{eqnarray} \ll\q_N=\e^2\dt\q_N+\e\vw\cdot\nx \q_N+\q_N-\bar \q_N&=&\e^{N+2}\dt\u_N+\e^{N+1}\vw\cdot\nx \u_N. \end{eqnarray} We have \begin{eqnarray} \abs{\e^{N+2}\dt\u_N}&\leq&C\e^{N+2}\abs{\dt\u_N}\leq C\e^{N+2},\\ \abs{\e^{N+1}\vw\cdot\nx \u_N}&\leq& C\e^{N+1}\abs{\nx \u_N}\leq C\e^{N+1}. \end{eqnarray} This implies \begin{eqnarray}\label{pf 2} \abs{\ll \q_N}\leq C\e^{N+1}. \end{eqnarray} \ \\ Step 3: Estimates of $\ll \qb_{I,N}$.\\ The initial layer contribution can be estimated as \begin{eqnarray} \ll\qb_{I,0}&=&\p_{\tau}\qb_{I,0}+\e\vw\cdot\nabla_x\qb_{I,0}+\qb_{I,0}-\bar \qb_{I,0}\\ &=&\p_{\tau}\ub_{I,0}+\e\vw\cdot\nabla_x\ub_{I,0}+\ub_{I,0}-\bar \ub_{I,0}=\e\vw\cdot\nabla_x\ub_{I,0}.\no \end{eqnarray} Based on the smoothness of $\ub_{I,0}$, we have \begin{eqnarray} \abs{\ll\qb_{I,0}}=\abs{\e\vw\cdot\nabla_x\ub_{I,0}}\leq C\e. \end{eqnarray} Similarly, we have \begin{eqnarray} \ll\qb_{I,N}&=&\p_{\tau}\qb_{I,N}+\e\vw\cdot\nabla_x\qb_{I,N}+\qb_{I,N}-\bar \qb_{I,N}=\e^{N+1}\vw\cdot\nabla_x\ub_{I,N}. 
\end{eqnarray} Therefore, we have \begin{eqnarray}\label{pf 4} \abs{\ll\qb_{I,N}}=\abs{\e^{N+1}\vw\cdot\nabla_x\ub_{I,N}}\leq C\e^{N+1}. \end{eqnarray} \ \\ Step 4: Estimates of $\ll \qb_{B,N}$.\\ The boundary layer solution is $\ub_{B,k}=\psi_0\cdot(f_k^{\e}-f_k^{\e}(\infty))=\psi_0\v_k$ where $f_k^{\e}(\eta,\theta,\phi)$ solves the $\e$-Milne problem and $\v_k=f_k^{\e}-f_k^{\e}(\infty)$. Notice $\psi_0\psi=\psi_0$, so the boundary layer contribution can be estimated as \begin{eqnarray}\label{remainder temp 1} \\ \ll\qb_{B,0}&=&\e^2\frac{\p \qb_{B,0}}{\p t}+\sin\phi\frac{\p \qb_{B,0}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\bigg(\frac{\p \qb_{B,0}}{\p\phi}+\frac{\p \qb_{B,0}}{\p\theta}\bigg)+\qb_{B,0}-\bar \qb_{B,0}\no\\ &=&\e^2\frac{\p \v_0}{\p t}+\sin\phi\bigg(\psi_0\frac{\p \v_0}{\p\eta}+\v_0\frac{\p\psi_0}{\p\eta}\bigg)-\frac{\psi_0\e}{1-\e\eta}\cos\phi\bigg(\frac{\p \v_0}{\p\phi}+\frac{\p \v_0}{\p\theta}\bigg)+\psi_0 \v_0-\psi_0\bar\v_0\nonumber\\ &=&\e^2\frac{\p \v_0}{\p t}+\sin\phi\bigg(\psi_0\frac{\p \v_0}{\p\eta}+\v_0\frac{\p\psi_0}{\p\eta}\bigg)-\frac{\psi_0\psi\e}{1-\e\eta}\cos\phi\bigg(\frac{\p \v_0}{\p\phi}+\frac{\p \v_0}{\p\theta}\bigg)+\psi_0 \v_0-\psi_0\bar\v_0\nonumber\\ &=&\e^2\frac{\p \v_0}{\p t}+\psi_0\bigg(\sin\phi\frac{\p \v_0}{\p\eta}-\frac{\e\psi}{1-\e\eta}\cos\phi\frac{\p \v_0}{\p\phi}+\v_0-\bar\v_0\bigg)+\sin\phi \frac{\p\psi_0}{\p\eta}\v_0-\frac{\psi_0\e}{1-\e\eta}\cos\phi\frac{\p \v_0}{\p\theta}\nonumber\\ &=&\e^2\frac{\p \v_0}{\p t}+\sin\phi \frac{\p\psi_0}{\p\eta}\v_0-\frac{\psi_0\e}{1-\e\eta}\cos\phi\frac{\p \v_0}{\p\theta}\nonumber. \end{eqnarray} It is easy to see \begin{eqnarray} \abs{\e^2\frac{\p \v_0}{\p t}}\leq \e^2\abs{\frac{\p \v_0}{\p t}}\leq C\e^2. \end{eqnarray} Since $\psi_0=1$ when $\eta\leq 1/(4\e)$, the effective region of $\px\psi_0$ is $\eta\geq1/(4\e)$, which is farther and farther from the boundary as $\e\rt0$.
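To make this precise, note that $\p_{\eta}\big(\psi_0(\e\eta)\big)$ vanishes unless $1/(4\e)\leq\eta\leq 3/(8\e)$, and on this set the exponential decay in Theorem \ref{Milne theorem 4} gives
\begin{eqnarray}
\abs{\v_k(\eta,\phi)}\leq C\ue^{-K_0\eta}\leq C\ue^{-\frac{K_0}{4\e}},\no
\end{eqnarray}
which is smaller than any fixed power of $\e$ as $\e\rt0$.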
By Theorem \ref{Milne theorem 4}, the second term in (\ref{remainder temp 1}) can be controlled as \begin{eqnarray} \abs{\sin\phi\frac{\p\psi_0}{\p\eta}\v_0}&\leq& C\ue^{-\frac{K_0}{4\e}}\leq C\e. \end{eqnarray} For the third term in (\ref{remainder temp 1}), we have \begin{eqnarray} \abs{-\frac{\psi_0\e}{1-\e\eta}\cos\phi\frac{\p \v_0}{\p\theta}}&\leq&C\e\abs{\frac{\p \v_0}{\p\theta}}\leq C\e. \end{eqnarray} This implies \begin{eqnarray} \abs{\ll \qb_{B,0}}\leq C\e. \end{eqnarray} Similarly, for higher order terms, we can estimate \begin{eqnarray}\label{remainder temp 2} \ll\qb_{B,N}&=&\e^2\frac{\p \qb_{B,N}}{\p t}+\sin\phi\frac{\p \qb_{B,N}}{\p\eta}-\frac{\e}{1-\e\eta}\cos\phi\bigg(\frac{\p \qb_{B,N}}{\p\phi}+\frac{\p \qb_{B,N}}{\p\theta}\bigg)+\qb_{B,N}-\bar \qb_{B,N}\\ &=&\e^{N+2}\frac{\p \v_N}{\p t}+\sum_{i=0}^N\e^i\sin\phi \frac{\p\psi_0}{\p\eta}\v_i-\frac{\psi_0\e^{N+1}}{1-\e\eta}\cos\phi\frac{\p \v_N}{\p\theta}\nonumber. \end{eqnarray} It is obvious that \begin{eqnarray} \abs{\e^{N+2}\frac{\p \v_N}{\p t}}\leq \e^{N+2}\abs{\frac{\p \v_N}{\p t}}\leq C\e^{N+2}. \end{eqnarray} Away from the boundary, the second term in (\ref{remainder temp 2}) can be controlled as \begin{eqnarray} \abs{\sum_{i=0}^N\e^i\sin\phi \frac{\p\psi_0}{\p\eta}\v_i}&\leq& C\ue^{-\frac{K_0}{4\e}}\leq C\e^{N+1}. \end{eqnarray} For the third term in (\ref{remainder temp 2}), we have \begin{eqnarray} \abs{-\frac{\psi_0\e^{N+1}}{1-\e\eta}\cos\phi\frac{\p \v_N}{\p\theta}}&\leq&C\e^{N+1}\abs{\frac{\p \v_N}{\p\theta}}\leq C\e^{N+1}. \end{eqnarray} This implies \begin{eqnarray}\label{pf 3} \abs{\ll \qb_{B,N}}\leq C\e^{N+1}. \end{eqnarray} \ \\ Step 5: Synthesis.\\ In summary, since $\ll u^{\e}=0$, collecting (\ref{pf 1}), (\ref{pf 2}), (\ref{pf 4}), and (\ref{pf 3}), we can prove \begin{eqnarray} \abs{\ll R_N}\leq C\e^{N+1}. 
\end{eqnarray} Consider the asymptotic expansion to $N=4$; then the remainder $R_4$ satisfies the equation \begin{eqnarray} \\ \left\{ \begin{array}{rcl} \e^2\dt R_4+\e \vw\cdot\nabla_x R_4+R_4-\bar R_4&=&\ll R_4,\\ R_4(0,\vx,\vw)&=&-\sum_{k=1}^4\e^k\ub_{B,k}(0,\vx,\vw),\\ R_4(t,\vx_0,\vw)&=&-\sum_{k=1}^4\e^k\ub_{I,k}(t,\vx_0,\vw)\ \ \text{for}\ \ \vw\cdot\vec n<0\ \ \text{and}\ \ \vx_0\in\p\Omega.\no \end{array} \right. \end{eqnarray} Note that the initial data and boundary data are nonzero due to the contributions of the boundary layer and the initial layer at the point $(t,\vx)=(0,\vx_0)$. By Theorem \ref{improved LI estimate}, we have \begin{eqnarray} \im{R_4}{[0,\infty)\times\Omega\times\s^1} &\leq& C(\Omega)\bigg(\frac{1}{\e^{4}}\tm{\ll R_4}{[0,\infty)\times\Omega\times\s^1}+\im{\ll R_4}{[0,\infty)\times\Omega\times\s^1}\bigg)\\ &&+\im{\sum_{k=1}^4\e^k\ub_{B,k}(0,\vx,\vw)}{\Omega\times\s^1}\no\\ &&+\im{\sum_{k=1}^4\e^k\ub_{I,k}(t,\vx_0,\vw)}{[0,t]\times\Gamma^-}\no\\ &\leq&C(\Omega)\bigg(\frac{1}{\e^{4}}(C\e^5)+(C\e^5)\bigg)+C\e+C\e=C(\Omega)\e.\no \end{eqnarray} Hence, we have \begin{eqnarray} \nm{u^{\e}-\sum_{k=0}^4\e^k\u_k-\sum_{k=0}^4\e^k\ub_{I,k}-\sum_{k=0}^4\e^k\ub_{B,k}}_{L^{\infty}([0,\infty)\times\Omega\times\s^1)}=o(1). \end{eqnarray} Since it is easy to see \begin{eqnarray} \nm{\sum_{k=1}^4\e^k\u_k+\sum_{k=1}^4\e^k\ub_{I,k}+\sum_{k=1}^4\e^k\ub_{B,k}}_{L^{\infty}(\Omega\times\s^1)}=O(\e), \end{eqnarray} our result naturally follows. \end{proof} \section{Counterexample for Classical Approach} In this section, we present the classical approach in \cite{Bensoussan.Lions.Papanicolaou1979} to construct the asymptotic expansion, especially the boundary layer expansion, and give a counterexample to show that this method is problematic for the unsteady equation. \subsection{Discussion on Expansions except Boundary Layer} Basically, the expansions for the interior solution and the initial layer are identical to ours, so we omit the details and only present the notation.
We define the interior expansion as follows: \begin{eqnarray}\label{interior expansion.} \uc(t,\vx,\vw)\sim\sum_{k=0}^{\infty}\e^k\uc_k(t,\vx,\vw), \end{eqnarray} $\uc_0(t,\vx,\vw)$ satisfies the equation \begin{eqnarray}\label{interior 1.} \left\{ \begin{array}{rcl} \uc_0&=&\buc_0,\\ \dt\buc_0-\Delta_x\buc_0&=&0. \end{array} \right. \end{eqnarray} $\uc_1(t,\vx,\vw)$ satisfies \begin{eqnarray}\label{interior 2.} \left\{ \begin{array}{rcl} \uc_1&=&\buc_1-\vw\cdot\nx\uc_0,\\ \dt\buc_1-\Delta_x\buc_1&=&0, \end{array} \right. \end{eqnarray} and $\uc_k(t,\vx,\vw)$ for $k\geq2$ satisfies \begin{eqnarray}\label{interior 3.} \left\{ \begin{array}{rcl} \uc_k&=&\buc_k-\vw\cdot\nx\uc_{k-1}-\dt\uc_{k-2},\\ \dt\buc_k-\Delta_x\buc_k&=&0. \end{array} \right. \end{eqnarray} With the substitution (\ref{substitution 0}), we define the initial layer expansion as follows: \begin{eqnarray}\label{initial layer expansion.} \ubc_I(\tau,\vx,\vw)\sim\sum_{k=0}^{\infty}\e^k\ubc_{I,k}(\tau,\vx,\vw), \end{eqnarray} where $\ubc_{I,0}$ satisfies \begin{eqnarray} \left\{ \begin{array}{rcl} \p_{\tau}\bubc_{I,0}&=&0,\\\rule{0ex}{1.0em} \ubc_{I,0}(\tau,\vx,\vw)&=&\ue^{-\tau}\ubc_{I,0}(0,\vx,\vw)+(1-\ue^{-\tau})\bubc_{I,0}(0,\vx). \end{array} \right. \end{eqnarray} and $\ubc_{I,k}(\tau,\vx,\vw)$ for $k\geq1$ satisfies \begin{eqnarray} \left\{ \begin{array}{rcl} \p_{\tau}\bubc_{I,k}&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ubc_{I,k-1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ubc_{I,k}(\tau,\vx,\vw)&=&\ue^{-\tau}\ubc_{I,k}(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bubc_{I,k}-\vw\cdot\nabla_x\ubc_{I,k-1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s}. \end{array} \right. \end{eqnarray} \subsection{Boundary Layer Expansion} By the idea in \cite{Bensoussan.Lions.Papanicolaou1979}, the boundary layer expansion can be defined by introducing substitutions (\ref{substitution 1}), (\ref{substitution 2}), and (\ref{substitution 3}). 
Note that we terminate here and do not further use the substitution (\ref{substitution 4}). Hence, we have the transformed equation for (\ref{transport}) as \begin{eqnarray}\label{classical temp.} \left\{ \begin{array}{l}\displaystyle \e^2\frac{\p u^{\e}}{\p t}+\sin(\theta+\xi)\frac{\p u^{\e}}{\p\eta}-\frac{\e}{1-\e\eta}\cos(\theta+\xi)\frac{\p u^{\e}}{\p\theta}+u^{\e}-\frac{1}{2\pi}\int_{-\pi}^{\pi}u^{\e}\ud{\xi}=0,\\\rule{0ex}{1.0em} u^{\e}(0,\eta,\theta,\xi)=h(\eta,\theta,\xi),\\\rule{0ex}{1.0em} u^{\e}(t,0,\theta,\xi)=g(t,\theta,\xi)\ \ \text{for}\ \ \sin(\theta+\xi)>0. \end{array} \right. \end{eqnarray} \ \\ We now define the Milne expansion of the boundary layer as follows: \begin{eqnarray}\label{classical expansion.} \ubc(t,\eta,\theta,\phi)\sim\sum_{k=0}^{\infty}\e^k\ubc_k(t,\eta,\theta,\phi), \end{eqnarray} where $\ubc_k$ can be determined by comparing the order of $\e$ via plugging (\ref{classical expansion.}) into the equation (\ref{classical temp.}). Thus, in a neighborhood of the boundary, we have \begin{eqnarray} \sin(\theta+\xi)\frac{\p \ubc_0}{\p\eta}+\ubc_0-\bubc_0&=&0,\label{cexpansion temp 5.}\\ \sin(\theta+\xi)\frac{\p \ubc_1}{\p\eta}+\ubc_1-\bubc_1&=&\frac{1}{1-\e\eta}\cos(\theta+\xi)\frac{\p \ubc_0}{\p\theta},\label{cexpansion temp 6.}\\ \sin(\theta+\xi)\frac{\p \ubc_2}{\p\eta}+\ubc_2-\bubc_2&=&\frac{1}{1-\e\eta}\cos(\theta+\xi)\frac{\p \ubc_1}{\p\theta}-\frac{\p \ubc_0}{\p t},\label{cexpansion temp 7.}\\ \ldots\nonumber\\ \sin(\theta+\xi)\frac{\p \ubc_k}{\p\eta}+\ubc_k-\bubc_k&=&\frac{1}{1-\e\eta}\cos(\theta+\xi)\frac{\p \ubc_{k-1}}{\p\theta}-\frac{\p \ubc_{k-2}}{\p t}, \end{eqnarray} where \begin{eqnarray} \bar \ubc_k(t,\eta,\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\ubc_k(t,\eta,\theta,\xi)\ud{\xi}. \end{eqnarray} \subsection{Classical Approach to Construct Asymptotic Expansion} Similarly, we require the zeroth order expansion of the initial and boundary data to be satisfied, i.e.
we have \begin{eqnarray} \uc_0(0,\vx,\vw)+\ubc_{I,0}(0,\vx,\vw)+\ubc_{B,0}(0,\vx,\vw)&=&h,\\ \uc_0(t,\vx_0,\vw)+\ubc_{I,0}(t,\vx_0,\vw)+\ubc_{B,0}(t,\vx_0,\vw)&=&g. \end{eqnarray} The construction of $\uc_k$, $\ubc_{I,k}$, and $\ubc_{B,k}$ by the idea in \cite{Bensoussan.Lions.Papanicolaou1979} can be summarized as follows:\\ \ \\ Assume the cut-off functions $\psi$ and $\psi_0$ are defined as (\ref{cut-off 1}) and (\ref{cut-off 2}).\\ \ \\ Step 1: Construction of zeroth order terms.\\ The zeroth order boundary layer solution is defined as \begin{eqnarray}\label{classical temp 1.} \left\{ \begin{array}{rcl} \ubc_0(t,\eta,\theta,\xi)&=&\psi_0(\e\eta)\bigg(\f_0(t,\eta,\theta,\xi)-f_0(t,\infty,\theta)\bigg),\\ \sin(\theta+\xi)\dfrac{\p \f_0}{\p\eta}+\f_0-\bar \f_0&=&0,\\ \f_0(t,0,\theta,\xi)&=&g(t,\theta,\xi)\ \ \text{for}\ \ \sin(\theta+\xi)>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_0(t,\eta,\theta,\xi)&=&f_0(t,\infty,\theta). \end{array} \right. \end{eqnarray} The zeroth order initial layer is defined as \begin{eqnarray}\label{classical temp 21.} \left\{ \begin{array}{rcl} \ubc_{I,0}(\tau,\vx,\vw)&=&\ff_0(\tau,\vx,\vw)-\ff_0(\infty,\vx),\\ \p_{\tau}\bar\ff_0&=&0,\\\rule{0ex}{1.0em} \ff_0(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_0(0,\vx,\vw)+(1-\ue^{-\tau})\bar\ff_0(0,\vx),\\ \ff_0(0,\vx,\vw)&=&h(\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_0(\tau,\vx,\vw)&=&\ff_0(\infty,\vx). \end{array} \right. \end{eqnarray} Then we can define the zeroth order interior solution as \begin{eqnarray}\label{classical temp 2.} \left\{ \begin{array}{rcl} \uc_0&=&\buc_0,\\\rule{0ex}{1em} \dt\buc_0-\Delta_x\buc_0&=&0,\\\rule{0ex}{1em}\buc_0(0,\vx)&=&\ff_0(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \buc_0(t,\vx_0)&=&\f_0(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega, \end{array} \right. \end{eqnarray} where $(t,\vx,\vw)$ and $(\tau,\eta,\theta,\xi)$ denote the same point in the original and rescaled variables.\\ \ \\ Step 2: Construction of first order terms. 
\\ Define the first order boundary layer solution as \begin{eqnarray}\label{classical temp 3.} \left\{ \begin{array}{rcl} \ubc_1(t,\eta,\theta,\xi)&=&\psi_0(\e\eta)\bigg(\f_1(t,\eta,\theta,\xi)-f_1(t,\infty,\theta)\bigg),\\ \sin(\theta+\xi)\dfrac{\p \f_1}{\p\eta}+\f_1-\bar \f_1&=&\cos(\theta+\xi)\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ubc_0}{\p\theta},\\\rule{0ex}{1em} \f_1(t,0,\theta,\xi)&=&\vw\cdot\nx\uc_0(t,\vx_0,\vw)\ \ \text{for}\ \ \sin(\theta+\xi)>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_1(t,\eta,\theta,\xi)&=&f_1(t,\infty,\theta). \end{array} \right. \end{eqnarray} Define the first order initial layer as \begin{eqnarray}\label{classical temp 22.} \left\{ \begin{array}{rcl} \ubc_{I,1}(\tau,\vx,\vw)&=&\ff_1(\tau,\vx,\vw)-\ff_1(\infty,\vx),\\ \p_{\tau}\bar\ff_1&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ubc_{I,0}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_1(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_1(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_1-\vw\cdot\nabla_x\ubc_{I,0}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_1(0,\vx,\vw)&=&\vw\cdot\nx\uc_0(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_1(\tau,\vx,\vw)&=&\ff_1(\infty,\vx). \end{array} \right. \end{eqnarray} Define the first order interior solution as \begin{eqnarray}\label{classical temp 5.} \left\{ \begin{array}{rcl} \uc_1&=&\buc_1-\vw\cdot\nx\uc_0,\\ \dt\buc_1-\Delta_x\buc_1&=&0,\\\rule{0ex}{1em}\buc_1(0,\vx)&=&\ff_1(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \buc_1(t,\vx_0)&=&f_1(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right. \end{eqnarray} \ \\ Step 3: Construction of second order terms. 
\\ Define the second order boundary layer solution as \begin{eqnarray}\label{classical temp 4.} \left\{ \begin{array}{rcl} \ubc_2(t,\eta,\theta,\xi)&=&\psi_0(\e\eta)\bigg(\f_2(t,\eta,\theta,\xi)-f_2(t,\infty,\theta)\bigg),\\ \sin(\theta+\xi)\dfrac{\p \f_2}{\p\eta}+\f_2-\bar \f_2&=&\cos(\theta+\xi)\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ubc_1}{\p\theta}-\dfrac{\p\ubc_0}{\p t},\\\rule{0ex}{1em} \f_2(t,0,\theta,\xi)&=&\vw\cdot\nx\uc_1(t,\vx_0,\vw)+\dt\uc_0(t,\vx_0,\vw)\ \ \text{for}\ \ \sin(\theta+\xi)>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_2(t,\eta,\theta,\xi)&=&f_2(t,\infty,\theta). \end{array} \right. \end{eqnarray} Define the second order initial layer as \begin{eqnarray}\label{classical temp 23.} \left\{ \begin{array}{rcl} \ubc_{I,2}(\tau,\vx,\vw)&=&\ff_2(\tau,\vx,\vw)-\ff_2(\infty,\vx),\\ \p_{\tau}\bar\ff_2&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ubc_{I,1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_2(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_2(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_2-\vw\cdot\nabla_x\ubc_{I,1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_2(0,\vx,\vw)&=&\vw\cdot\nx\u_1(0,\vx,\vw)+\dt\u_0(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_2(\tau,\vx,\vw)&=&\ff_2(\infty,\vx). \end{array} \right. \end{eqnarray} Define the second order interior solution as \begin{eqnarray}\label{classical temp 6.} \left\{ \begin{array}{rcl} \uc_2&=&\buc_2-\vw\cdot\nx\uc_1-\dt\uc_0,\\ \dt\buc_2-\Delta_x\buc_2&=&0,\\\rule{0ex}{1em}\buc_2(0,\vx)&=&\ff_2(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \buc_2(t,\vx)&=&f_2(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right.
\end{eqnarray} \ \\ Step 4: Generalization to arbitrary $k$.\\ Similarly to the above procedure, we can define the $k^{th}$ order boundary layer solution as \begin{eqnarray} \left\{ \begin{array}{rcl} \ubc_k(t,\eta,\theta,\xi)&=&\psi_0(\e\eta)\bigg(\f_k(t,\eta,\theta,\xi)-f_k(t,\infty,\theta)\bigg),\\ \sin(\theta+\xi)\dfrac{\p \f_k}{\p\eta}+\f_k-\bar \f_k&=&\cos(\theta+\xi)\dfrac{\psi(\e\eta)}{1-\e\eta}\dfrac{\p \ubc_{k-1}}{\p\theta}-\dfrac{\p\ubc_{k-2}}{\p t},\\\rule{0ex}{1em} \f_k(t,0,\theta,\xi)&=&\vw\cdot\nx\uc_{k-1}(t,\vx_0,\vw)+\dt\uc_{k-2}(t,\vx_0,\vw)\ \ \text{for}\ \ \sin(\theta+\xi)>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_k(t,\eta,\theta,\xi)&=&f_k(t,\infty,\theta).\no \end{array} \right. \end{eqnarray} Define the $k^{th}$ order initial layer as \begin{eqnarray}\label{classical temp 24.} \left\{ \begin{array}{rcl} \ubc_{I,k}(\tau,\vx,\vw)&=&\ff_k(\tau,\vx,\vw)-\ff_k(\infty,\vx),\\ \p_{\tau}\bar\ff_k&=&-\displaystyle\int_{\s^1}\bigg(\vw\cdot\nabla_x\ubc_{I,k-1}\bigg)\ud{\vw},\\\rule{0ex}{1.5em} \ff_k(\tau,\vx,\vw)&=&\ue^{-\tau}\ff_k(0,\vx,\vw)+\displaystyle\int_0^{\tau}\bigg(\bar\ff_k-\vw\cdot\nabla_x\ubc_{I,k-1}\bigg)(s,\vx,\vw)\ue^{s-\tau}\ud{s},\\ \ff_k(0,\vx,\vw)&=&\vw\cdot\nx\u_{k-1}(0,\vx,\vw)+\dt\u_{k-2}(0,\vx,\vw),\\ \lim_{\tau\rt\infty}\ff_k(\tau,\vx,\vw)&=&\ff_k(\infty,\vx). \end{array} \right. \end{eqnarray} Define the $k^{th}$ order interior solution as \begin{eqnarray} \left\{ \begin{array}{rcl} \uc_k&=&\buc_k-\vw\cdot\nx\uc_{k-1}-\dt\uc_{k-2},\\\rule{0ex}{1em} \dt\buc_k-\Delta_x\buc_k&=&0,\\\rule{0ex}{1em}\buc_k(0,\vx)&=&\ff_k(\infty,\vx)\ \ \text{in}\ \ \Omega,\\\rule{0ex}{1em} \buc_k&=&f_k(t,\infty,\theta)\ \ \text{on}\ \ \p\Omega. \end{array} \right. \end{eqnarray} Following the idea in \cite{Bensoussan.Lions.Papanicolaou1979}, one should be able to prove the following result: \begin{theorem}\label{main fake 1.} Assume $g(t,\vx_0,\vw)$ and $h(\vx,\vw)$ are sufficiently smooth.
Then for the unsteady neutron transport equation (\ref{transport}), the unique solution $u^{\e}(t,\vx,\vw)\in L^{\infty}([0,\infty)\times\Omega\times\s^1)$ satisfies \begin{eqnarray}\label{main fake theorem 1.} \lnm{u^{\e}-\uc_0-\ubc_{I,0}-\ubc_{B,0}}=O(\e). \end{eqnarray} \end{theorem} \ \\ Similarly to the analysis in \cite[Section 2.2]{AA003}, a crucial observation is that, based on Remark \ref{Milne remark}, the existence of the solution $\f_1$ requires \begin{eqnarray} \frac{\p }{\p\theta}\bigg(\f_0(t,\eta,\theta,\xi)-f_0(t,\infty,\theta)\bigg)\in L^{\infty}([0,\infty)^2\times[-\pi,\pi)\times[-\pi,\pi)). \end{eqnarray} This in turn requires \begin{eqnarray} \frac{\p \f_0}{\p\eta}\in L^{\infty}([0,\infty)^2\times[-\pi,\pi)\times[-\pi,\pi)). \end{eqnarray} On the other hand, as shown in the Appendix of \cite{AA003}, for specific $g$ it holds that $\px\f_0\notin L^{\infty}([0,\infty)^2\times[-\pi,\pi)\times[-\pi,\pi))$. Due to this intrinsic singularity in (\ref{classical temp 1.}), the construction breaks down. \subsection{Counterexample to Classical Approach} \begin{theorem} If $g(t,\theta,\phi)=t^2\ue^{-t}\cos\phi$ and $h(\vx,\vw)=0$, then there exists a $C>0$ such that \begin{eqnarray} \lnm{u^{\e}-\uc_0-\ubc_{I,0}-\ubc_{B,0}}\geq C>0 \end{eqnarray} when $\e$ is sufficiently small, where the interior solution $\uc_0$ is defined in (\ref{classical temp 2.}), the initial layer $\ubc_{I,0}$ is defined in (\ref{classical temp 21.}), and the boundary layer $\ubc_{B,0}$ is defined in (\ref{classical temp 1.}).
\end{theorem} \begin{proof} We divide the proof into several steps:\\ \ \\ Step 1: Basic settings.\\ By (\ref{classical temp 1.}), the solution $\f_0$ satisfies the Milne problem \begin{eqnarray} \left\{ \begin{array}{rcl}\displaystyle \sin(\theta+\xi)\frac{\p \f_0}{\p\eta}+\f_0-\bar \f_0&=&0,\\ \f_0(t,0,\theta,\xi)&=&g(t,\theta,\xi)\ \ \text{for}\ \ \sin(\theta+\xi)>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_0(t,\eta,\theta,\xi)&=&f_0(t,\infty,\theta). \end{array} \right. \end{eqnarray} For convenience of comparison, we make the substitution $\phi=\theta+\xi$ to obtain \begin{eqnarray} \left\{ \begin{array}{rcl}\displaystyle \sin\phi\frac{\p \f_0}{\p\eta}+\f_0-\bar \f_0&=&0,\\ \f_0(t,0,\theta,\phi)&=&g(t,\theta,\phi)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}\f_0(t,\eta,\theta,\phi)&=&f_0(t,\infty,\theta). \end{array} \right. \end{eqnarray} Assume the theorem is false, i.e. \begin{eqnarray} \lim_{\e\rt0}\lnm{(\uc_0+\ubc_{I,0}+\ubc_{B,0})-(\u_0+\ub_{I,0}+\ub_{B,0})}=0. \end{eqnarray} We can easily show that the zeroth order initial layers satisfy $\ubc_{I,0}=\ub_{I,0}=0$ due to $h(\vx,\vw)=0$. Since the boundary data $g(t,\theta,\phi)=t^2\ue^{-t}\cos\phi$ is independent of $\theta$, by (\ref{classical temp 1.}) and (\ref{expansion temp 9}), it is obvious that the limits of the zeroth order boundary layers $f_0(t,\infty,\theta)$ and $f_0^{\e}(t,\infty,\theta)$ satisfy $f_0(t,\infty,\theta)=C_1(t)$ and $f_0^{\e}(t,\infty,\theta)=C_2(t)$ for some functions $C_1(t)$ and $C_2(t)$ independent of $\theta$. By (\ref{classical temp 2.}), (\ref{expansion temp 8}) and the continuity of solutions of the heat equation, we can derive that the interior solutions are smooth and satisfy $\uc_0=C_1(t)+O(\e)$ and $\u_0=C_2(t)+O(\e)$ in an $O(\e)$ neighborhood of the boundary. Hence, we may further derive that in this neighborhood, \begin{eqnarray}\label{compare temp 5.} \lim_{\e\rt0}\lnm{(f_0(\infty)+\ubc_0)-(f_0^{\e}(\infty)+\ub_0)}=0.
\end{eqnarray} For $0\leq\eta\leq 1/(2\e)$, we have $\psi_0=1$, which means $\f_0=\ubc_0+f_0(\infty)$ and $f_0^{\e}=\ub_0+f_0^{\e}(\infty)$ in this neighborhood of the boundary. Define $u=\f_0+2$, $U=f_0^{\e}+2$ and $G=g+2=t^2\ue^{-t}\cos\phi+2$; then $u(\eta,\phi)$ satisfies the equation \begin{eqnarray}\label{compare flat equation.} \left\{ \begin{array}{rcl}\displaystyle \sin\phi\frac{\p u}{\p\eta}+u-\bar u&=&0,\\ u(0,\phi)&=&G(\phi)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}u(\eta,\phi)&=&2+f_0(\infty), \end{array} \right. \end{eqnarray} and $U(\eta,\phi)$ satisfies the equation \begin{eqnarray}\label{compare force equation.} \left\{ \begin{array}{rcl}\displaystyle \sin\phi\frac{\p U}{\p\eta}+F(\e;\eta)\cos\phi \frac{\p U}{\p\phi}+U-\bar U&=&0,\\ U(0,\phi)&=&G(\phi)\ \ \text{for}\ \ \sin\phi>0,\\\rule{0ex}{1em} \lim_{\eta\rt\infty}U(\eta,\phi)&=&2+f_0^{\e}(\infty). \end{array} \right. \end{eqnarray} Based on (\ref{compare temp 5.}), we have \begin{eqnarray} \lim_{\e\rt0}\lnm{U(\eta,\phi)-u(\eta,\phi)}=0. \end{eqnarray} Then it naturally implies \begin{eqnarray} \lim_{\e\rt0}\lnm{\bar U(\eta)-\bar u(\eta)}=0. \end{eqnarray} \ \\ Step 2: Continuity of $\bar u$ and $\bar U$ at $\eta=0$.\\ For the problem (\ref{compare flat equation.}), we have for any $r_0>0$ \begin{eqnarray} \abs{\bar u(\eta)-\bar u(0)}&\leq&\frac{1}{2\pi}\bigg(\int_{\sin\phi\leq r_0}\abs{u(\eta,\phi)-u(0,\phi)}\ud{\phi}+\int_{\sin\phi\geq r_0}\abs{u(\eta,\phi)-u(0,\phi)}\ud{\phi}\bigg). \end{eqnarray} Since we have shown $u\in L^{\infty}([0,\infty)\times[-\pi,\pi))$, for any $\delta>0$ we can take $r_0$ sufficiently small such that \begin{eqnarray} \frac{1}{2\pi}\int_{\sin\phi\leq r_0}\abs{u(\eta,\phi)-u(0,\phi)}\ud{\phi}&\leq&\frac{C}{2\pi}\arcsin r_0\leq \frac{\delta}{2}. \end{eqnarray} For fixed $r_0$ satisfying the above requirement, we estimate the integral on $\sin\phi\geq r_0$.
By Ukai's trace theorem, $u(0,\phi)$ is well-defined in the domain $\sin\phi\geq r_0$ and is continuous. Also, by considering the relation \begin{eqnarray} \frac{\p u}{\p\eta}(0,\phi)=\frac{\bar u(0)-u(0,\phi)}{\sin\phi}, \end{eqnarray} we can obtain that $\px u$ is bounded in this domain, which further implies that $u(\eta,\phi)$ is uniformly continuous at $\eta=0$. Then there exists $\delta_0>0$ sufficiently small such that for any $0\leq\eta\leq\delta_0$, we have \begin{eqnarray} \frac{1}{2\pi}\int_{\sin\phi\geq r_0}\abs{u(\eta,\phi)-u(0,\phi)}\ud{\phi}&\leq&\frac{1}{2\pi}\int_{\sin\phi\geq r_0}\frac{\delta}{2}\ud{\phi}\leq\frac{\delta}{2}. \end{eqnarray} In summary, we have shown that for any $\delta>0$, there exists $\delta_0>0$ such that for any $0\leq\eta\leq\delta_0$, \begin{eqnarray} \abs{\bar u(\eta)-\bar u(0)}\leq\frac{\delta}{2}+\frac{\delta}{2}=\delta. \end{eqnarray} Hence, $\bar u(\eta)$ is continuous at $\eta=0$. By a similar argument along the characteristics, we can show that $\bar U(\eta)$ is also continuous at $\eta=0$. In the following, by this continuity, we may assume that for arbitrary $\delta>0$ there exists a $\delta_0>0$ such that for any $0\leq\eta\leq\delta_0$, we have \begin{eqnarray} \abs{\bar u(\eta)-\bar u(0)}&\leq&\delta\label{compare temp 1.},\\ \abs{\bar U(\eta)-\bar U(0)}&\leq&\delta\label{compare temp 2.}. \end{eqnarray} \ \\ Step 3: Milne formulation.\\ We consider the solution at a specific point $(\eta,\phi)=(n\e,\e)$ for some fixed $n>0$.
The solution along the characteristics can be rewritten as follows: \begin{eqnarray}\label{compare temp 3.} u(n\e,\e)=G(\e)\ue^{-\frac{1}{\sin\e}n\e} +\int_0^{n\e}\ue^{-\frac{1}{\sin\e}(n\e-\k)}\frac{1}{\sin\e}\bar u(\k)\ud{\k}, \end{eqnarray} \begin{eqnarray}\label{compare temp 4.} U(n\e,\e)=G(\e_0)\ue^{-\int_0^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}} +\int_0^{n\e}\ue^{-\int_{\k}^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}}\frac{1}{\sin\phi(\k)}\bar U(\k)\ud{\k}, \end{eqnarray} where we have the conserved energy along the characteristics \begin{eqnarray} E(\eta,\phi)=\cos\phi \ue^{-V(\eta)}, \end{eqnarray} in which $(0,\e_0)$ and $(\zeta,\phi(\zeta))$ are on the same characteristic as $(n\e,\e)$.\\ \ \\ Step 4: Estimates of (\ref{compare temp 3.}).\\ We turn to the Milne problem for $u$. We have the natural estimate \begin{eqnarray} \int_0^{n\e}\ue^{-\frac{1}{\sin\e}(n\e-\k)}\frac{1}{\sin\e}\ud{\k}&=&\int_0^{n\e}\ue^{-\frac{1}{\e}(n\e-\k)}\frac{1}{\e}\ud{\k}+o(\e)\\ &=&\ue^{-n}\int_0^{n\e}\ue^{\frac{\k}{\e}}\frac{1}{\e}\ud{\k}+o(\e)\nonumber\\ &=&\ue^{-n}\int_0^n\ue^{\zeta}\ud{\zeta}+o(\e)\nonumber\\ &=&(1-\ue^{-n})+o(\e)\nonumber. \end{eqnarray} Then for $0<n\e\leq\delta_0$, we have $\abs{\bar u(0)-\bar u(\k)}\leq\delta$, which implies \begin{eqnarray} \int_0^{n\e}\ue^{-\frac{1}{\sin\e}(n\e-\k)}\frac{1}{\sin\e}\bar u(\k)\ud{\k}&=& \int_0^{n\e}\ue^{-\frac{1}{\sin\e}(n\e-\k)}\frac{1}{\sin\e}\bar u(0)\ud{\k}+O(\delta)\\ &=&(1-\ue^{-n})\bar u(0)+o(\e)+O(\delta)\nonumber. \end{eqnarray} For the boundary data term, it is easy to see \begin{eqnarray} G(\e)\ue^{-\frac{1}{\sin\e}n\e}&=&\ue^{-n}G(\e)+o(\e). \end{eqnarray} In summary, we have \begin{eqnarray} u(n\e,\e)=(1-\ue^{-n})\bar u(0)+\ue^{-n}G(\e)+o(\e)+O(\delta). \end{eqnarray} \ \\ Step 5: Estimates of (\ref{compare temp 4.}).\\ We consider the $\e$-Milne problem for $U$. For $\e\ll1$ sufficiently small, $\psi(\e)=1$.
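The leading-order behaviour of the characteristic angle $\phi(\zeta)$ that underlies the estimates in this step can be checked numerically. The following sketch uses illustrative values of $\e$, $n$ and $\zeta$ (assumptions made for the check, not data taken from the proof) and compares the exact $\sin\phi(\zeta)$ obtained from the conserved energy with the approximation $\sqrt{\e(\e+2n\e-2\zeta)}$ used in the estimates:

```python
import math

# Illustrative values (assumptions for this check, not from the proof):
eps = 1e-3   # boundary layer scale epsilon
n = 2.0      # fixed n, so the point of interest is (n*eps, eps)
zeta = 1e-3  # a point along the characteristic, 0 <= zeta <= n*eps

# Exact angle from energy conservation:
# cos(phi(zeta)) = (1 - n*eps^2) / (1 - eps*zeta) * cos(eps)
cos_phi = (1.0 - n * eps**2) / (1.0 - eps * zeta) * math.cos(eps)
sin_phi_exact = math.sqrt(1.0 - cos_phi**2)

# Leading-order approximation driving the integral estimates:
sin_phi_approx = math.sqrt(eps * (eps + 2.0 * n * eps - 2.0 * zeta))

rel_err = abs(sin_phi_exact - sin_phi_approx) / sin_phi_approx
```

The relative error is far smaller than the leading term itself, consistent with the higher-order remainder in the expansion of $\sin\phi(\zeta)$.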
Then, by the conservation of energy along the characteristics, we have \begin{eqnarray} \cos\phi(\zeta)\ue^{-V(\zeta)}=\cos\e \ue^{-V(n\e)}, \end{eqnarray} which implies \begin{eqnarray} \cos\phi(\zeta)=\frac{1-n\e^2}{1-\e\zeta}\cos\e, \end{eqnarray} and hence \begin{eqnarray} \sin\phi(\zeta)=\sqrt{1-\cos^2\phi(\zeta)}=\sqrt{\frac{\e(n\e-\zeta)(2-\e\zeta-n\e^2)}{(1-\e\zeta)^2}\cos^2\e+\sin^2\e}. \end{eqnarray} For $\zeta\in[0,n\e]$ and $n\e$ sufficiently small, by Taylor's expansion, we have \begin{eqnarray} 1-\e\zeta&=&1+o(\e),\\ 2-\e\zeta-n\e^2&=&2+o(\e),\\ \sin^2\e&=&\e^2+o(\e^3),\\ \cos^2\e&=&1-\e^2+o(\e^3). \end{eqnarray} Hence, we have \begin{eqnarray} \sin\phi(\zeta)=\sqrt{\e(\e+2n\e-2\zeta)}+o(\e^2). \end{eqnarray} Since $\sqrt{\e(\e+2n\e-2\zeta)}=O(\e)$, we can further estimate \begin{eqnarray} \frac{1}{\sin\phi(\zeta)}&=&\frac{1}{\sqrt{\e(\e+2n\e-2\zeta)}}+o(1)\\ -\int_{\k}^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}&=&\sqrt{\frac{\e+2n\e-2\zeta}{\e}}\bigg|_{\k}^{n\e}+o(\e) =1-\sqrt{\frac{\e+2n\e-2\k}{\e}}+o(\e). \end{eqnarray} Then we can easily derive the integral estimate \begin{eqnarray} \int_0^{n\e}\ue^{-\int_{\k}^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}}\frac{1}{\sin\phi(\k)}\ud{\k}&=& \ue^1\int_0^{n\e}\ue^{-\sqrt{\frac{\e+2n\e-2\k}{\e}}}\frac{1}{\sqrt{\e(\e+2n\e-2\k)}}\ud{\k}+o(\e)\\ &=&\half \ue^1\int_{\e}^{(1+2n)\e}\ue^{-\sqrt{\frac{\sigma}{\e}}}\frac{1}{\sqrt{\e\sigma}}\ud{\sigma}+o(\e)\nonumber\\ &=&\half \ue^1\int_{1}^{1+2n}\ue^{-\sqrt{\rho}}\frac{1}{\sqrt{\rho}}\ud{\rho}+o(\e)\nonumber\\ &=&\ue^1\int_{1}^{\sqrt{1+2n}}\ue^{-t}\ud{t}+o(\e)\nonumber\\ &=&(1-\ue^{1-\sqrt{1+2n}})+o(\e)\nonumber.
\end{eqnarray} Then for $0<n\e\leq\delta_0$, we have $\abs{\bar U(0)-\bar U(\k)}\leq\delta$, which implies \begin{eqnarray} \int_0^{n\e}\ue^{-\int_{\k}^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}}\frac{1}{\sin\phi(\k)}\bar U(\k)\ud{\k}&=& \int_0^{n\e}\ue^{-\int_{\k}^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}}\frac{1}{\sin\phi(\k)}\bar U(0)\ud{\k}+O(\delta)\\ &=&(1-\ue^{1-\sqrt{1+2n}})\bar U(0)+o(\e)+O(\delta)\nonumber. \end{eqnarray} For the boundary data term, since $G(\phi)$ is $C^1$, a similar argument shows \begin{eqnarray} G(\e_0)\ue^{-\int_0^{n\e}\frac{1}{\sin\phi(\zeta)}\ud{\zeta}}&=&\ue^{1-\sqrt{1+2n}}G(\sqrt{1+2n}\e)+o(\e). \end{eqnarray} Therefore, we have \begin{eqnarray} U(n\e,\e)=(1-\ue^{1-\sqrt{1+2n}})\bar U(0)+\ue^{1-\sqrt{1+2n}}G(\sqrt{1+2n}\e)+o(\e)+O(\delta). \end{eqnarray} \ \\ Step 6: Contradiction.\\ In summary, we have the estimate \begin{eqnarray} u(n\e,\e)&=&(1-\ue^{-n})\bar u(0)+\ue^{-n}G(\e)+o(\e)+O(\delta),\\ U(n\e,\e)&=&(1-\ue^{1-\sqrt{1+2n}})\bar U(0)+\ue^{1-\sqrt{1+2n}}G(\sqrt{1+2n}\e)+o(\e)+O(\delta). \end{eqnarray} The boundary data is $G=t^2\ue^{-t}\cos\phi+2$. Fix $t=1$. Then by the maximum principle in Theorem \ref{Milne theorem 3}, we obtain $2-\ue^{-1}\leq u(0,\phi)\leq 2+\ue^{-1}$ and $2-\ue^{-1}\leq U(0,\phi)\leq 2+\ue^{-1}$. Since \begin{eqnarray} \bar u(0)&=&\frac{1}{2\pi}\int_{-\pi}^{\pi}u(0,\phi)\ud{\phi} =\frac{1}{2\pi}\int_{\sin\phi>0}u(0,\phi)\ud{\phi}+\frac{1}{2\pi}\int_{\sin\phi<0}u(0,\phi)\ud{\phi}\\ &=&\frac{1}{2\pi}\int_{\sin\phi>0}(2+\ue^{-1}\cos\phi)\ud{\phi}+\frac{1}{2\pi}\int_{\sin\phi<0}u(0,\phi)\ud{\phi}\nonumber\\ &=&1+\frac{1}{2\pi}\int_{\sin\phi>0}\ue^{-1}\cos\phi\ud{\phi}+\frac{1}{2\pi}\int_{\sin\phi<0}u(0,\phi)\ud{\phi}\nonumber, \end{eqnarray} we naturally obtain \begin{eqnarray} 2-\half\ue^{-1}\leq\bar u(0)\leq 2+\half\ue^{-1}. \end{eqnarray} Similarly, we can obtain \begin{eqnarray} 2-\half\ue^{-1}\leq\bar U(0)\leq 2+\half\ue^{-1}.
\end{eqnarray} Furthermore, for $\e$ sufficiently small, we have \begin{eqnarray} G(\sqrt{1+2n}\e)&=&2+\ue^{-1}+o(\e),\\ G(\e)&=&2+\ue^{-1}+o(\e). \end{eqnarray} Hence, we can obtain \begin{eqnarray} u(n\e,\e)&=&\bar u(0)+\ue^{-n}(-\bar u(0)+2+\ue^{-1})+o(\e)+O(\delta),\\ U(n\e,\e)&=&\bar U(0)+\ue^{1-\sqrt{1+2n}}(-\bar U(0)+2+\ue^{-1})+o(\e)+O(\delta). \end{eqnarray} Then we can see that $\lim_{\e\rt0}\lnm{\bar U(0)-\bar u(0)}=0$ naturally leads to $\lim_{\e\rt0}\lnm{(-\bar u(0)+2+\ue^{-1})-(-\bar U(0)+2+\ue^{-1})}=0$. Also, we have $-\bar u(0)+2+\ue^{-1}=O(1)$ and $-\bar U(0)+2+\ue^{-1}=O(1)$. Due to the smallness of $\e$ and $\delta$, and also $\ue^{-n}\neq \ue^{1-\sqrt{1+2n}}$, we can obtain \begin{eqnarray} \abs{U(n\e,\e)-u(n\e,\e)}=O(1). \end{eqnarray} However, the above result contradicts our assumption that $\lim_{\e\rt0}\lnm{U(\eta,\phi)-u(\eta,\phi)}=0$ for any $(\eta,\phi)$. This completes the proof. \end{proof} \section*{Acknowledgements} The author thanks Yan Guo and Xiongfeng Yang for stimulating discussions. The research is supported by NSF grant 0967140. \bibliographystyle{siam}
\section{Introduction} \label{s_intro} Super star clusters are found in various galaxies: starburst galaxies \citep[M82,][]{oc95,mccrady03,gs99}, interacting galaxies \citep[NGC4038/39, the Antennae,][]{whitmore95}, amorphous galaxies \citep[NGC1705,][]{oc94}. Compared to the most massive clusters found in the Galaxy and the Magellanic Clouds, they have larger estimated masses \citep[in excess of 10$^{5}$ \ifmmode M_{\odot} \else M$_{\odot}$\fi\ and usually closer to a few 10$^{6}$ \ifmmode M_{\odot} \else M$_{\odot}$\fi,][]{mengel02,larsen04,bastian06}. Their mass distribution follows a power law with an index equal to $-$2 \citep{fall04}. \citet{fz01} have shown that a distribution of young massive clusters with such a mass function could evolve into a lognormal mass function similar to that of old globular clusters. Super star clusters may thus be the progenitors of globular clusters, although this is a debated question. Among the difficulties of this scenario, super star clusters have to survive the so--called ``infant mortality''. \citet{fall05} showed that the distribution of the number of clusters as a function of time in the Antennae galaxies was dramatically decreasing: $dN/d\tau \propto \tau^{-1}$, where $N$ is the number of clusters and $\tau$ the age. Either clusters are born unbound and dissolve rapidly, or they experience negative feedback effects from the most massive stars. Supernova explosions and stellar winds can expel interstellar gas on short timescales, leading to cluster disruption \citep{gb06}. The way this feedback affects the cluster's evolution depends on the stellar content and its distribution. If the stellar initial mass function is top--heavy \citep[as seems to be the case in some clusters,][]{sternberg98}, the presence of a large number of massive stars will enhance disruption. But if the massive stars are concentrated in the cluster core due to initial or dynamical mass segregation, their effects might be reduced.
Information on the stellar content is thus necessary to better understand the evolution of these mini--starbursts. In this paper, we present new observations of the super star cluster in the amorphous galaxy NGC1705 \citep{melnick85a}. NGC1705--1 is one of the brightest clusters \citep[M$_{V}$=-15.4,][]{ma01}. It is also one of the closest, at a distance of 5.1$\pm$0.6 Mpc \citep{tosi01}. Measuring velocity dispersions and assuming a bound cluster, \citet{hp96} determined a mass of 8.2$\pm$2.1$\times$10$^{4}$ \ifmmode M_{\odot} \else M$_{\odot}$\fi. Using a larger gravitational radius, \citet{sternberg98} obtained M=2.7$\times$10$^{5}$ \ifmmode M_{\odot} \else M$_{\odot}$\fi. \citet{sternberg98} used the L/M ratio derived from photometry and velocity dispersion to constrain the initial mass function. He found that the IMF is either flatter than the Salpeter IMF (slope $<$ 2.0) or that it is truncated at masses below 1--3 \ifmmode M_{\odot} \else M$_{\odot}$\fi. \citet{sg01} confirmed the latter conclusion and \citet{vazquez04} showed that a lower mass limit of 1 \ifmmode M_{\odot} \else M$_{\odot}$\fi\ for a Salpeter IMF was still compatible with the observed luminosity to mass ratio. \citet{melnick85a} showed that the optical spectrum of NGC1705--1 was typical of early B stars, excluding the presence of hotter objects such as O and/or Wolf--Rayet stars. This places a lower limit on the age of the cluster of 8--10 Myr. In a subsequent study, they detected the presence of CO bandheads in near--infrared narrow band photometry. Such features are typical of evolved cool stars such as red supergiants or AGB/RGB stars \citep{melnick85b}. \citet{hp96} obtained high resolution optical spectra of NGC1705--1 and confirmed the presence of both early B star signatures (from spectral lines below 4500 \AA) and red supergiant metallic lines \citep[above 4500 \AA, see also][]{meurer92}.
The presence of early B stars was confirmed by the UV spectra of \citet{vazquez04} which are typical of B0--1 V/III stars. Such an average spectral type corresponds to a dominant population of hot stars of age 12$^{+3}_{-1}$ Myr. \citet{marlowe95} estimated an age of 10 to 16 Myr for the starburst event in NGC1705 using UBV colors and the H$\alpha$ flux in comparison with starburst models. The metallicity of the host galaxy NGC1705 is subsolar with Z=0.35Z$_{\odot}$ \citep{ls04}. In this research note, we present the first near--infrared spectrum of the super star cluster NGC1705--1 obtained from integral field spectroscopy with SINFONI on the ESO/VLT. We provide an upper limit on the spatial extent of the cluster and determine its K--band spectral type. We give age estimates and discuss their uncertainties. \section{Observations and data reduction} \label{s_obs} The observations were performed with the integral field near--infrared spectrograph SINFONI \citep{spiffi,bonnet04} on the ESO/VLT in service mode between December 6$^{th}$ 2009 and January 22$^{nd}$ 2010. The seeing was usually good, between 0.7 and 1.0\arcsec. We used the adaptive optics system with the cluster itself as guide star. We used both the 100mas and the 25mas plate scale in order to probe the cluster itself as well as its immediate surroundings. Table \ref{tab_obs} provides the journal of observations. Data reduction was performed with the SPRED software \citep{spred}. After flat field correction and bias/sky subtraction, wavelength calibration was done using Ne--Ar lamp calibration data. Fine tuning based on atmospheric features provided the final wavelength calibration. Telluric lines were removed using early B star spectra taken just after the science data and from which the stellar Br$\gamma$ (and \ion{He}{i} 2.11 $\mu$m\ feature when present) were corrected.
To estimate the spatial resolution of our data, we observed two point sources (stars) of similar magnitude to NGC1705--1 immediately after the observations of the cluster on Jan. 10$^{th}$ and Jan. 11$^{th}$ 2010. These data are used to derive the width of the PSF. Fig.\ \ref{hst_vlt} shows our 100 mas pixel scale mosaic (right) together with an HST UBVI composite image at the same scale (left). \begin{table} \begin{center} \caption{Journal of observations. } \label{tab_obs} \begin{tabular}{lcccc} \hline Date & pixel scale & exposure time & seeing & airmass\\ & [mas] & [s] & \arcsec & \\ \hline & & NGC1705--1 & & \\ 06 dec 2009 & 25 & 4$\times$100 & 0.81--0.94 & 1.22--1.26 \\ 23 dec 2009 & 25 & 4$\times$100 & 0.60--0.75 & 1.40--1.55 \\ 10 jan 2010 & 100 & 4$\times$100 & 0.56--1.00 & 1.17--1.22 \\ 11 jan 2010 & 25 & 4$\times$100 & 0.96--2.60 & 1.23--1.31 \\ 22 jan 2010 & 100 & 4$\times$100 & 0.83--1.13 & 1.18--1.20 \\ \hline & & PSF calibrator & & \\ 10 jan 2010 & 100 & 2$\times$60 & 0.68--0.75 & 1.24--1.24 \\ 11 jan 2010 & 25 & 2$\times$60 & 0.95--0.99 & 1.38--1.38 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure*}[] \centering \includegraphics[width=14cm]{image_map_combi_hst_sinfoni.eps} \caption{{\it Left:} HST UBVI composite image of the super star cluster and its close environment (Tosi et al., Hubble Heritage Team -- STScI/AURA, NASA, ESA). {\it Right:} combination of all three SINFONI mosaics obtained with the 100 mas pixel scale frame. } \label{hst_vlt} \end{figure*} \section{Spatial distribution} \label{s_spatial} We have used the 25mas pixel scale mosaics taken on December 9$^{th}$ 2009, December 23$^{rd}$ 2009 and January 11$^{th}$ 2010 to estimate the spatial extent of NGC1705--1. Performing 2D Gaussian fits to the data, we obtain the following values for the Full Width at Half Maximum: 0.112\arcsec$\times$0.114\arcsec, 0.110\arcsec$\times$0.121\arcsec, 0.132\arcsec$\times$0.150\arcsec.
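For reference, the principle behind such width measurements can be sketched on a synthetic, well-sampled Gaussian PSF (the image and width below are illustrative, not our data): the intensity-weighted second moments give the Gaussian $\sigma$, and FWHM $=2\sqrt{2\ln 2}\,\sigma$.

```python
import numpy as np

# Synthetic circular Gaussian "PSF" image (illustrative, not real data).
sigma_true = 3.0  # pixels
y, x = np.mgrid[-20:21, -20:21].astype(float)
image = np.exp(-(x**2 + y**2) / (2.0 * sigma_true**2))

# Intensity-weighted first and second moments give the Gaussian width.
total = image.sum()
xbar = (image * x).sum() / total
ybar = (image * y).sum() / total
sigma_x = np.sqrt((image * (x - xbar)**2).sum() / total)
sigma_y = np.sqrt((image * (y - ybar)**2).sum() / total)

# FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian profile.
fwhm_x = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_x
fwhm_y = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_y
```

A full 2D Gaussian fit, as used on the real mosaics, additionally handles elongation and noise, which is why the measured FWHM differs between the two axes.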
On January 11$^{th}$ 2010, we observed a standard star used as a PSF calibrator. The 2D Gaussian fit gives a 2D FWHM of 0.102\arcsec$\times$0.111\arcsec. These measurements indicate that the core of NGC1705--1 is not resolved by our observations. The variations of the cluster FWHM from night to night are mainly due to varying seeing conditions. On January 11$^{th}$ 2010, the PSF is about 30\% smaller on the standard star, but the average seeing was also smaller during the observation compared to the cluster observation (see Table \ref{tab_obs}). In Fig.\ \ref{fig_core_out}, we show the spectrum obtained in two different regions of the 25mas mosaic: a circle centered on the cluster core with a spatial radius of $\sim$ 0.12\arcsec, and a ring--like region located between $\sim$0.13\arcsec and $\sim$0.18\arcsec. The resulting spectra are shown in the top panel. The main lines are indicated. There is very little difference between the two spectra (see the plot of the difference as a function of wavelength in the bottom panel). This most likely indicates that we are observing the far wing of the PSF in the annulus region, confirming that we are not resolving the cluster with our observations. We can provide upper limits on its half--light radius. According to the values of FWHM given above, the cluster core is smaller than 0.11--0.12\arcsec. At the distance of NGC1705, this corresponds to a physical radius of less than 2.85$\pm$0.50 pc \citep[using the dispersion of the FWHM measurements as error on the angular size and for the distance of 5.1$\pm$0.6 Mpc of][]{tosi01}. \citet{oc94} determined a half--light radius of 0.14\arcsec\, corresponding to 3.4 pc using a distance of 5.0$\pm$2.0 Mpc. \citet{meurer95} found a significantly smaller value (0.04\arcsec, 1.1 pc) using the same set of HST/WFPC data. \citet{sg01} concluded from their HST/WFPC2 observations that the half--light radius was 1.6$\pm$0.4 pc for a slightly larger distance (5.3$\pm$0.8 Mpc).
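The small-angle conversion behind these numbers is straightforward; a minimal sketch reproducing the value quoted above (taking the mid-point 0.115\arcsec\ of the FWHM range and the 5.1 Mpc distance):

```python
import math

ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi  # ~206265 arcsec per radian

def angular_to_physical_pc(theta_arcsec, distance_mpc):
    """Small-angle approximation: physical size in pc subtended by
    theta_arcsec at a distance of distance_mpc."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_mpc * 1.0e6

# Mid-point of the 0.11--0.12 arcsec FWHM range at d = 5.1 Mpc:
r_pc = angular_to_physical_pc(0.115, 5.1)  # ~2.84 pc, consistent with 2.85 +/- 0.50 pc
```

The quoted $\pm$0.50 pc uncertainty then follows from propagating the dispersion of the FWHM measurements and the distance error through this same relation.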
Our \textit{upper limit} on the cluster size is consistent with the largest values derived previously, and compatible with the small radius quoted by \citet{meurer95} and \citet{sg01}. The size of the cluster NGC1705--1 remains poorly constrained at present, and future observations with ELTs and/or JWST are necessary to probe the spatial structure of this and other super star clusters. \begin{figure}[] \centering \includegraphics[width=9cm]{spec_comp_core_outter.eps} \caption{Comparison between the spectrum extracted from a region of radius $\sim$0.12\arcsec\ centered on the cluster core and the spectrum extracted from a ring of radius 0.13--0.18\arcsec. The bottom panel shows the difference between both spectra. The dotted lines indicate the 1$\sigma$ deviation.} \label{fig_core_out} \end{figure} \section{Cluster age} \label{s_age} We first note the absence of Br$\gamma$ emission in the spectrum of NGC1705--1 (we determine an upper limit on the Br$\gamma$\ equivalent width of 0.1 \AA), consistent with the absence of hot massive stars and thus of a very young population. As seen above, CO bandhead absorption dominates the K--band spectrum, as in late type stars. We have compared by eye the cluster spectrum with template spectra of cool, evolved stars taken from the atlas of \citet{wh97}. We find that the former is best accounted for by the spectrum of a K4.5Ib star (Fig.\ \ref{comp_spec}). For later spectral types, the CO bandheads are too deep. A supergiant luminosity class is also preferred, since giant star spectra do not have broad enough CO overtones \footnote{Note that the Wallace \& Hinkle atlas does not contain all spectral types and luminosity classes. Hence, the best representative spectral type should not be trusted at the level of a sub-spectral type.}. This result is confirmed by the calculation of the first CO overtone equivalent width (38.2 \AA\ measured between 2.2900 and 2.3200 $\mu$m).
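For reference, an equivalent width of this kind is the integral of $1-F/F_{c}$ over the chosen band of the continuum-normalized spectrum. A schematic computation on a synthetic spectrum (the wavelength grid and line depth below are invented for illustration, not our measured data):

```python
import numpy as np

# Synthetic continuum-normalized K-band spectrum (illustrative values only).
wavelength = np.linspace(22900.0, 23200.0, 3001)  # Angstrom, 0.1 A steps
flux = np.ones_like(wavelength)                   # continuum level F/Fc = 1
absorbed = (wavelength >= 23000.0) & (wavelength <= 23050.0)
flux[absorbed] = 0.8                              # a 20% deep toy feature

# Equivalent width: integral of (1 - F/Fc) over the band.
dlam = wavelength[1] - wavelength[0]
ew = np.sum(1.0 - flux) * dlam  # ~10 Angstrom for this toy feature
```

On real data the continuum level must first be estimated from line-free regions, which is the main source of systematic error in CO equivalent widths.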
From Fig.\ 8 of \citet{rsg1}, in which the CO equivalent width was measured over the same interval as ours, we see that such an equivalent width is observed in K3--4I stars, as well as marginally in M5--6 giants. The two independent estimates favor a K4--5I spectral type for the entire cluster, which is thus dominated by the near--infrared light of red supergiants. If we assume that the supergiants of NGC1705--1 are in their coolest evolutionary state, the fact that their spectral type is K and not M is an indication that the metallicity of the cluster is sub-solar. As shown by \citet{mo03}, the distribution of spectral types among red supergiants shifts towards earlier spectral types when metallicity decreases. While M2I is the spectral type most represented in the Galaxy, M1I stars are in the majority in the LMC and K5--7I in the SMC. This is consistent with the studies of \citet{meurer92} and \citet{storchi94}, who report a sub--solar global metallicity for the entire galaxy NGC~1705: the former derive 12+log(O/H)=8.46, while the latter give 8.36. These values are similar to the LMC (respectively SMC) metallicity. This estimate of the dominant population in NGC1705--1 can be used to derive the age of the cluster. \citet{vazquez04} proceeded this way to report an age of 12$^{+3}_{-1}$ Myr using an HST/STIS UV spectrum of the cluster. In Fig.\ \ref{hr_ssc} we show the position of a typical SMC K4--5 supergiant in the HR diagram, using parameters from \citet{levesque06}, i.e.: \ifmmode T_{\rm eff} \else T$_{\mathrm{eff}}$\fi\ = 3925$\pm$50 K and \ifmmode \log \frac{L}{L_{\odot}} \else $\log \frac{L}{L_{\odot}}$\fi\ = 4.98$\pm$0.10. The evolutionary tracks of \citet{brott11a} are overplotted. They include rotational mixing (for an initial rotation rate of 300 km s$^{-1}$) and have the SMC composition. Ages are indicated in Myr by filled circles along the tracks. From that figure, we see that an age of about 7 to 10 Myr can be inferred for NGC~1705--1.
If we were to use the LMC tracks and average LMC properties of K4--5I stars \citep[still from][]{levesque06}, we would derive an age of 5.5 to 7.5 Myr. These numbers are significantly lower than the value reported by \citet{vazquez04}. The reason is the use of different sets of evolutionary tracks. \citet{vazquez04} relied on the non--rotating Geneva tracks, while we use the rotating Utrecht tracks. Using the non--rotating Geneva tracks, we find ages of 8--14 Myr and 12--17 Myr for the LMC and SMC cases, respectively. These values are in much better agreement with that of \citet{vazquez04}. We can thus conclude that our results are consistent with theirs. But the main conclusion is that the choice of evolutionary tracks is crucial to establish the age of the cluster. Depending on the tracks used, systematic differences of the order of 50\% of the cluster age can arise. To further quantify the effects of evolutionary tracks on age determinations, we can compare the results obtained from the Geneva and Utrecht \textit{rotating} tracks. At solar metallicity\footnote{Comparisons are not possible at LMC/SMC metallicities because Geneva rotating tracks are not available at those metallicities for stars with M$<$25 \ifmmode M_{\odot} \else M$_{\odot}$\fi.}, important differences in the behaviour of the tracks are found at \ifmmode T_{\rm eff} \else T$_{\mathrm{eff}}$\fi\ lower than 10000\,K. The Geneva tracks have an almost constant luminosity until the lowest temperatures, where the luminosity increases. The Utrecht tracks show a decreasing luminosity until approximately 4000\,K before a rise at lower \ifmmode T_{\rm eff} \else T$_{\mathrm{eff}}$\fi.
As a consequence, a star with \ifmmode T_{\rm eff} \else T$_{\mathrm{eff}}$\fi\ = 4000\,K and \ifmmode \log \frac{L}{L_{\odot}} \else $\log \frac{L}{L_{\odot}}$\fi\ = 4.80 is reproduced by the 20 \ifmmode M_{\odot} \else M$_{\odot}$\fi\ Utrecht track at 8.7 Myr, but by the 15 \ifmmode M_{\odot} \else M$_{\odot}$\fi\ Geneva track at 13.9 Myr: a difference of about 5 Myr in the age estimate. A detailed understanding of these differences is beyond the scope of this paper; it may be due to a different treatment of convection. The conclusion one can draw is that, as illustrated by our analysis of NGC1705--1, the ages derived using the Utrecht tracks are much lower than those determined with the Geneva tracks. Hence, an accurate age determination cannot be performed, not because of the quality of the observational data, but due to the uncertainties in the theoretical tracks. We can use the equivalent width of the first CO overtone to obtain an independent estimate of the age of the population \citep{mengel01}. Population synthesis models predict the evolution of the strength of this feature as a function of time, depending on the assumed star formation history, initial mass function, stellar libraries and isochrones. We measure an equivalent width of 12.8 \AA\ for the first CO overtone \citep[measured between 2.2924 and 2.2977 $\mu$m\ according to][]{ori93}. Using the starburst models of \citet{leitherer99} (including non--rotating tracks, a Salpeter IMF, and a burst of star formation), we see from their Fig.\ 101c that at a slightly sub--solar metallicity, the first CO overtone equivalent width (computed over the same interval as ours) is in the range 10--16 \AA\ for ages between 7 and 30 Myr. This is consistent with our estimates. In conclusion, performing an age determination for NGC1705--1 is a difficult task given the current uncertainties on evolutionary models in the red supergiant phase. Based on our estimates, we can quote a value of 12$\pm$6 Myr.
This is still compatible with the absence of ionized gas emission (Br$\gamma$) that would be produced by a large population of ionizing sources (O and Wolf--Rayet stars). \begin{figure}[] \centering \includegraphics[width=8cm]{comp_K4p5Ib.eps} \caption{Comparison between the K band spectrum of NGC1705--1 (black) and the spectrum of the Galactic K4.5Ib red supergiant HD~78647 (red). The latter spectrum is taken from the atlas of \citet{wh97}. The SINFONI spectrum has been degraded to the resolution of the template spectrum (R=2000).} \label{comp_spec} \end{figure} \begin{figure}[] \centering \includegraphics[width=8cm]{hr_ssc_br300smc.eps} \caption{HR diagram showing the position of an SMC K4--5I star. The evolutionary tracks are from \citet{brott11a} and have an initial rotational velocity of 300 km s$^{-1}$\ and the SMC composition. The dots on the tracks indicate the ages in Myr. The uncertainty on the red supergiant parameters reflects the range of values determined for SMC K4--5I stars in the study of \citet{levesque06}. } \label{hr_ssc} \end{figure} \section{Conclusion} \label{s_conc} We have presented the first near--infrared integral field spectroscopy of the super star cluster NGC1705--1, obtained with SINFONI on the ESO/VLT. The cluster is found to have an angular size smaller than about 0.11--0.12\arcsec\ and is not resolved by our AO--assisted observations. This places an upper limit of 2.85$\pm$0.50 pc (depending on the distance) on its radial extent. The K--band spectrum of the cluster is dominated by strong CO absorption bandheads. It is similar to the spectrum of a red supergiant of spectral type K4--5. There is no sign of ionized gas in the spectrum. This confirms previous studies indicating that the cluster contains massive stars, but no O and/or Wolf--Rayet objects. Using different evolutionary tracks, we estimate the age to be 12$\pm$6 Myr.
The large uncertainty is rooted in the important differences between the Geneva and Utrecht evolutionary tracks in the supergiant regime, and not in the quality of the observational data. Depending on the type of tracks used, ages can systematically differ by 5--7 Myr. \begin{acknowledgements} We acknowledge the suggestions of an anonymous referee. We thank the ESO/Paranal staff for performing the observations in service mode. FM acknowledges support from the ``Agence Nationale de la Recherche''. \end{acknowledgements}
\section{Introduction} When bank borrowers get into trouble, the early detection of small changes in their financial situation can substantially reduce a bank's losses if precautionary steps are taken (e.g., lowering the borrower's credit limit). In fact, the ongoing monitoring of credit risk is an integral part of most loss mitigation strategies within the banking industry \cite{thomas2002}. Credit scoring models currently used to assess risk typically rely on gradient boosted decision trees (GBDTs), which provide several benefits over more traditional techniques such as logistic regression \cite{Friedman2001}. For instance, interactions between variables are automatically created during the boosting process and provide multiple combinations to improve prediction accuracy. Additionally, preprocessing steps such as missing value imputation and data transformation are generally not required. However, these models also have certain limitations. First, performance improvements are usually derived from feature engineering to create new variables based on extensive domain knowledge and expertise. Second, tree-based models do not take full advantage of the available historical data. Finally, they cannot be used in an online learning setting (where data becomes available in a sequential order) due to their limited capacity for updates, such as altering splits, without seeing the whole dataset from the start (e.g., \cite{chen2016}). To overcome the limitations of a traditional tree-based approach, we explored several deep learning techniques for assessing credit risk and created a unique method for generating sequences of credit card transactions that looks back one year into borrowers' financial history. Using the same input features as the benchmark GBDT model, we show that our final sequential deep learning approach using a temporal convolution network (TCN) provides distinct advantages over the tree-based technique.
The main improvement is that performance increases substantially, resulting in significant financial savings and earlier detection of credit risk. We also demonstrate the potential for our approach to be used in an online learning setting for credit risk monitoring without making significant changes to the training process. Previous research on the application of deep learning to tabular data from credit card transactions has focused on fraud detection \cite{Roy2018, Efimov2019}, loan applications \cite{Wang2018, Babaev2019}, and credit risk monitoring \cite{Nanni2009,Addo2018}. Our work differs from previous deep learning work on credit risk monitoring by extending beyond simple multilayer perceptron networks and into sequential techniques, such as recurrent neural networks (RNNs) and TCNs, that explicitly utilize the available historical data. Additionally, we introduce a novel sampling method for generating sequences of transactions that addresses the drastic differences in sequence length between card members over a full year. The benefits of sampling transactions from a long time window, as opposed to using every transaction, are reduced noise, memory requirements, and training/inference time, so that the model is able to learn generalizable early warning signs of risky financial behavior while still being able to run in near real time during online credit monitoring. The main contributions presented in this paper are as follows. We first show the potential of sequential deep learning to improve our current credit scoring system through offline analysis, which revealed significant reductions in financial losses (tens of millions of US dollars annually) and increased early risk detection rates.
We also developed a sampling method for generating sequences of transactions over the course of a full year, which allows the model to learn long-term behavioral trends and removes the need to load and process the hundreds of billions of credit card transactions that occur in a given year across card members. Finally, we demonstrate that online learning can be applied to our approach and achieves higher performance over randomly re-initializing the model weights. \section{Methods} \subsection{Transactional Data} Credit risk scores are predicted using tabular data from credit card transactions, where each transaction is associated with a set of numerical and categorical variables. We define credit risk prediction as a \textit{binary classification} task, where the goal is to detect default (non-payment) on credit card debt within 1.5 years following the timestamp of each transaction. \subsubsection{Features} For our modeling experiments, we used transactional data for card members with consumer credit cards. Data sources included both internal and external sources, such as consumer credit reporting agencies, from which 127 raw and engineered features were created. Our features can be divided into two subcategories: transaction-related (credit exposure, ATM indicator, etc.) and account-related (days past due, account tenure, etc.). To ensure fairness, personal demographic information was not included in any of the features. However, due to privacy concerns and strict adherence to data protection laws, we are unable to describe our features in further detail. \subsubsection{Training and Validation} The training dataset consisted of a sample of 15 million card members, with transactions spanning the course of twelve months from March 2016 to February 2017. We shifted the one year window forward by one month twice to create 45 million transactional sequences for training. 
This simply means that each card member appeared three times in the training dataset with slightly different (time-shifted) sequence data. The validation dataset consisted of a sample of 6 million non-overlapping (out-of-sample) card members, with transactions spanning twelve months from May 2017 to April 2018 in order to also create an out-of-time validation set. This simulates our production environment, where models are trained on past data. Within these sample datasets, roughly 2\% of card members defaulted on their balance. We addressed this class imbalance in the context of deep learning by creating balanced mini-batches for training models with mini-batch gradient descent, where each mini-batch had the same percentage of defaulting card members as the whole dataset. \subsubsection{Preprocessing} Neural networks are sensitive to feature distributions and performance can drop significantly for non-normal distributions, such as skewed distributions or those with outliers. Our tabular datasets contain many features with similar behavior and thus, standard feature preprocessing approaches would not lead to superior results. Therefore, our preprocessing steps were as follows. \begin{itemize} \item Missing values for each feature were imputed based on the target label (default/non-default) rates within 10 bins defined by the feature's percentiles. \item For dealing with numerical outliers, we developed a novel capping procedure to extract the most significant part of the feature distribution by leveraging splits obtained from training a decision tree model (in our case, the GBDT framework was used). 
For each feature and all trees produced by the model, we sorted all splits in ascending order: $$ (s_1, s_2, \ldots, s_k), \, s_i > s_j, \mbox{ for all } i>j $$ and applied the following capping rules: $$ \hat{x} = \left\{ \begin{array}{ll} 2 \cdot s_1 - s_2, & \mbox{if } x < 2 \cdot s_1 - s_2, \\ 2 \cdot s_k - s_{k-1}, & \mbox{if } x > 2 \cdot s_k - s_{k-1}, \\ x, & \mbox{otherwise}, \end{array} \right. $$ where $x$ is the original feature and $\hat{x}$ is the transformed feature (see Figure \ref{fig:xgbcap} for an illustrative example). \item After imputation and outlier capping, the Box-Cox transformation was applied to reduce feature skewness \cite{Box1964}. \item Categorical data were transformed to numerical features using a procedure known as Laplace smoothing \cite{Manning2008}, which contains two main steps: \begin{enumerate} \item Calculate the average of the target variable within each category. \item Modify the average from step 1 to address categories with a small number of observations: $$ \hat{x} = \dfrac{k \cdot \bar{y} + \sum\limits_{i \in G} y_i}{k + \left|G\right|}, $$ where $G$ is a set of indices for the given category, $|G|$ is the size of the category, $k$ is a metaparameter defined empirically ($k = 30$ in our case), and $\bar{y}$ is the average of the target variable for all training observations. \end{enumerate} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{gbdt_capping.png} \caption{Capping with GBDT splits.} \label{fig:xgbcap} \end{figure} \item Finally, standard scaling was applied to all features. \end{itemize} \subsection{Sampling Transactions for Sequence Generation} Generating sequences of credit card transactions over a fixed period of time presents a unique challenge for risk assessment. Within a given time frame, card members can have drastically different amounts of transactions. For example, in a single year, one card member might have only a few transactions while another has thousands. 
Therefore, rather than modeling full sequences of transactions over the course of a year, we created a sampling scheme that selects one random historical transaction per card member per month. \begin{figure} \centering \includegraphics[width=0.475\textwidth]{sequence_v2.png} \caption{Sequence of sampled transactions.} \label{fig:seq} \end{figure} The benefit of this sampling technique is twofold. First, it reduces noise in order to expose more general trends in risky card member behavior over a long period of time. Second, it allows us to create and process sequences in near real time because the previous 11 months of transactions can be stored efficiently in memory and loaded quickly. We simply append each incoming transaction in the current month to the end of the sequence before submitting it to a sequential model (see Figure \ref{fig:seq} for an illustration). This is important for the implementation of sequential models in our production environment, where a near real-time processing speed is needed to calculate a credit risk score for each incoming transaction before the transaction is approved (usually <10 milliseconds). For inactive card members with zero transactions in a given month, data were collected at the timestamp of their monthly billing statement date. The reason we use random historical transactions instead of the billing statement date for active card members is that we want to predict credit risk for each incoming transaction, which does not occur at a fixed time interval. Therefore, we introduce noise to the time interval in order to reduce the model's dependence on it. For low-tenure card members with less than 12 months of transactions available, we zero-padded the sequences and applied a binary mask to the loss calculations during training. This technique excludes the padded timestamps from being used to update the model weights.
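As a sketch of this sampling scheme (the fixed feature width and the handling of inactive months below are illustrative assumptions, not the production schema; the paper instead snapshots the billing statement date for inactive months):

```python
import random

SEQ_LEN = 12     # months of history per card member
N_FEATURES = 4   # illustrative feature width (the production model uses 127)

def build_sequence(monthly_transactions, rng=random):
    """Sample one random transaction per month and zero-pad short histories.

    monthly_transactions: list (oldest month first) of lists of feature
    vectors; an empty inner list stands for an inactive month, represented
    here by a zero vector for simplicity.
    Returns (sequence, mask), where mask marks real (non-padded) months so
    that padded timestamps can be excluded from the loss during training.
    """
    sequence, mask = [], []
    for month in monthly_transactions[-SEQ_LEN:]:
        if month:
            sequence.append(rng.choice(month))   # one random transaction
        else:
            sequence.append([0.0] * N_FEATURES)  # inactive month
        mask.append(1)
    # zero-pad low-tenure card members on the left and mask the padding out
    pad = SEQ_LEN - len(sequence)
    sequence = [[0.0] * N_FEATURES] * pad + sequence
    mask = [0] * pad + mask
    return sequence, mask
```

At serving time, the most recent slot would simply be replaced by the incoming transaction before the sequence is passed to the model.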
\subsection{Model Evolution} We evaluated four different types of deep learning classification models, from simple neural networks to more complex ones, and compared their performance to a benchmark GBDT model for predicting credit risk. Since the GBDT algorithm was not designed to accommodate sequential data, we only used the last (i.e., most recent) month's transaction to train the model. Similarly, two of the deep learning models (multilayer perceptron and TabNet) are also non-sequential and only used the most recent month's transaction. \subsubsection{Multilayer Perceptron} We started our deep learning experiments with a straightforward implementation of the standard vanilla multilayer perceptron (MLP) neural network with dropout regularization. The MLP connects multiple layers of nodes/neurons in a forward-directed graph. Dropout regularization stochastically ``drops'' some of these neurons during training in order to simulate an ensemble of different MLPs \cite{Hinton2012}. To train the connection weights of the model, a generalization of the least mean squares algorithm known as backpropagation was used to calculate the gradient of the loss function with respect to each weight by the chain rule, iterating backward from the last layer. \subsubsection{TabNet} TabNet is a more recently developed neural network designed specifically for tabular, non-sequential data \cite{Arik2019}. It utilizes an iterative attention mechanism for feature selection, where the final prediction score is an aggregate of all of the processed information from the previous iterations. A single iteration of the TabNet architecture contains two parts: a feature transformer and an attentive transformer. The output of the feature transformer is combined with the outputs from the previous iterations in order to obtain a final output decision.
The attentive transformer creates a sparse feature mask vector, which is applied back to the initial feature set to generate a smaller subset of features that is fed into the next decision step (see Figure \ref{fig:tabnet}). The benefit of this technique is that it allows the network to fully utilize the most relevant features in a tabular dataset at each decision step, which enables more efficient learning. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{tabnet.png} \caption{TabNet's decision step architecture.} \label{fig:tabnet} \end{figure} \subsubsection{Recurrent Neural Network} The aforementioned deep learning models suffer from a similar limitation as the baseline GBDT model: they do not explicitly utilize card members' historical data. RNNs address this issue by iteratively processing and saving information from previous transactions in their hidden nodes, which is then added to the current transaction in order to produce a more informed credit risk prediction. For this paper, we used a long short-term memory (LSTM) RNN \cite{Hochreiter1997}, which is a popular variant of the vanilla RNN that has been shown to more effectively learn long-term dependencies in sequential data. We also utilized a newer method for RNN regularization known as \textit{zoneout regularization} \cite{Krueger2019}. Like dropout regularization, zoneout uses random noise to approximate training an ensemble of different RNNs. However, instead of dropping random recurrent connection weights, zoneout stochastically forces some of the hidden nodes to maintain their values from the previous transaction. The benefit of using zoneout over recurrent dropout is that it allows the RNN to remember more information from past transactions. An example of a single-layer LSTM model with zoneout regularization for predicting credit risk scores using our preprocessing approach is shown in Figure \ref{fig:rnn}.
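A minimal sketch of the zoneout update for a single hidden state vector (plain Python, independent of any deep learning framework; the LSTM gate internals are omitted):

```python
import random

def zoneout(prev_hidden, new_hidden, rate, rng=random):
    """Zoneout regularization during training: with probability `rate`, a
    hidden unit keeps its value from the previous time step instead of
    taking the newly computed one. Unlike dropout, no unit is ever zeroed,
    so information from past transactions is retained.
    """
    return [prev if rng.random() < rate else new
            for prev, new in zip(prev_hidden, new_hidden)]
```

At inference time, zoneout (like dropout) is typically replaced by a deterministic update, e.g. the expectation `rate * prev + (1 - rate) * new`.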
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{lstm_v2.png} \caption{Single-layer LSTM architecture for predicting credit risk (i.e., default on credit card debt) from 12 months of transactional data.} \label{fig:rnn} \end{figure} \begin{figure} \centering \includegraphics[width=.475\textwidth]{causalconv.png} \caption{Example of 1D causal convolution.} \label{fig:causalconv} \vspace*{\floatsep} \includegraphics[width=.3\textwidth]{tcn_block.png} \caption{TCN block.} \label{fig:tcnblock} \vspace*{\floatsep} \includegraphics[width=.475\textwidth]{tcn.png} \caption{Final TCN architecture.} \label{fig:tcn} \end{figure} \subsubsection{Temporal Convolutional Network} An important limitation of the RNN framework is that training the model can be time-consuming because sequences of data are processed iteratively. Additionally, the ability of RNNs to capture long-term dependencies in sequences remains a fundamental challenge \cite{pascanu2013}. Therefore, more recent techniques have focused on more efficient convolution-based approaches for sequential data \cite{oord2016, kalchbrenner2016, dauphin2017, gehring2017, Bai2018}. Since convolutions can be done in parallel, sequences are processed as a whole instead of iteratively as in RNNs. Convolutional neurons can also utilize their receptive field to retain even more information from the distant past. \textit{Causal} convolutions prevent leakage from the future by only convolving the output at time \textit{t} with elements from time \textit{t} and earlier (see Figure \ref{fig:causalconv}) \cite{waibel1989}. Perhaps the most popular recent causal convolution-based approach is the temporal convolutional network (TCN) \cite{Bai2018}. In addition to convolution and dropout, TCNs utilize dilation to enable larger receptive field sizes that look back at longer history lengths by introducing a fixed step size between every two adjacent convolutional filter taps \cite{Yu2016}. 
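A plain-Python sketch of the dilated causal convolution just described (single channel, single filter, illustrative weights; here `weights[k]` multiplies the input `k * dilation` steps in the past, and positions before the start of the sequence act as zeros):

```python
def causal_conv1d(x, weights, dilation=1):
    """1D causal convolution with dilation and implicit left zero-padding.

    The output at time t depends only on x[t], x[t - dilation],
    x[t - 2*dilation], ... so no information leaks from the future.
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:  # indices before the sequence start contribute zero
                acc += w * x[idx]
        out.append(acc)
    return out
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is how a TCN can cover the full 12-month sequence with few layers.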
TCNs also employ residual connections so that each convolutional layer learns a modification to the identity mapping rather than the entire transformation (see Figure \ref{fig:tcnblock}), which helps stabilize deeper and wider models \cite{He2016}. Within each residual TCN block, layer normalization is applied to the output of the convolutional layer and dropout is added for regularization. An illustration of our final deep TCN architecture for the credit risk prediction task is shown in Figure \ref{fig:tcn}. \subsection{Model Optimization and Tuning} \subsubsection{Architecture Search} The final number of layers, neurons per layer, and dropout/zoneout rates used in each deep learning model were determined using an iterative Bayesian optimization approach with SigOpt \cite{Dewancker2015}. Several of the top models performed similarly. We found that the best performance occurred for the 5-layer MLP, the 3-layer LSTM (with 2 LSTM layers and one dense output layer), and the 6-layer TCN (with 2 TCN blocks and 4 dense layers). Dropout and zoneout rates were kept fairly low in each model (averaging \textasciitilde0.2). \subsubsection{Optimization} Each deep learning model was trained using the Adam optimization algorithm \cite{Kingma2014} and early stopping \cite{prechelt1998}. We used the default beta parameters for the Adam algorithm (betas = 0.9, 0.999) that control the decay rates for the first and second moment estimates of the gradient; other values were explored and showed no obvious improvement. The classic binary cross entropy loss function for classification tasks was used as the objective to be optimized. \subsubsection{Batch Size and Learning Rate} We also experimented with several batch sizes, learning rates, and learning rate schedules for the different architectures. We found that a universal batch size of 512 credit card members and a decay rate of 0.8, starting from an initial learning rate of 1e-4, was generally the most effective for all models.
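The decayed learning-rate schedule above can be sketched as a simple exponential decay (we assume here that the 0.8 factor is applied once per epoch, which the text does not specify):

```python
def learning_rate(epoch, initial_lr=1e-4, decay=0.8):
    """Exponentially decayed learning rate: lr_n = initial_lr * decay**n."""
    return initial_lr * decay ** epoch
```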
\subsubsection{Performance Metrics} Performance was measured in terms of the Gini coefficient and recall. The Gini coefficient is a common measurement of the discriminatory power of rating systems such as credit scoring models and is directly related to the area under the receiver operating characteristic curve (Gini = (2*AUROC) - 1). Recall, or sensitivity, measures the fraction of credit card defaults correctly identified within a fixed fraction of the model's top prediction scores. This fraction was determined by existing business principles and fixed across models. \section{Results} The performance results for each individual deep learning model and its ensemble with the benchmark GBDT are shown in Table \ref{tab:performance}. Performance results for the high debt exposure subpopulation (card members with a balance exceeding \$15,000) are also included in this table to demonstrate the potential for substantial financial savings using our proposed approach. The sequential models, LSTM and TCN, both outperformed the benchmark GBDT in isolation. The non-sequential neural networks, MLP and TabNet, performed worse than the benchmark. Additionally, a much larger boost in prediction performance was observed when the LSTM and TCN were used in an ensemble with GBDT, suggesting that the historical information provides complementary signal that is predictive of risky financial behavior. While these performance improvements may seem modest, it is important to keep in mind the large volume of card members in the dataset, which implies that small improvements lead to significant savings (in our case, an annual savings of tens of millions of US dollars).
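As a self-contained illustration of the Gini metric (the scores and labels used here are made up), the coefficient can be computed from a rank-based AUROC estimate:

```python
def auroc(labels, scores):
    """AUROC via pairwise comparisons: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties
    count as one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def gini(labels, scores):
    """Gini coefficient as used for credit scoring: Gini = 2 * AUROC - 1."""
    return 2.0 * auroc(labels, scores) - 1.0
```

A perfect ranking gives Gini = 1, a random one gives 0, and a perfectly reversed ranking gives -1.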
\begin{table} \caption{Individual and ensemble model performance for the overall population and the high debt population (>\$15k).} \label{tab:performance} \begin{tabular}{c|cc|cc} \toprule & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c}{High Debt}\\ Model & Gini (\%) & Recall (\%) & Gini (\%) & Recall (\%) \\ \midrule GBDT & 92.19& 63.85 & 84.12 & 60.68\\ MLP & 92.06 & 63.42 & 83.98 & 60.82\\ TabNet & 92.03 & 63.5 & 83.92 & 61.02\\ LSTM & 92.28 & 64.04 & 84.27 & 61.00\\ \textbf{TCN} & \textbf{92.33}& \textbf{64.13} & \textbf{84.47} & \textbf{61.33} \\ \midrule GBDT + MLP & 92.26& 64.10 & 84.44 & 61.64 \\ GBDT + TabNet & 92.26 & 64.11 & 84.46 & 61.57 \\ \textbf{GBDT + LSTM} & \textbf{92.49}& 64.75 & \textbf{84.83} & \textbf{62.13} \\ \textbf{GBDT + TCN} & 92.48 & \textbf{64.81} & \textbf{84.83} & 62.01 \\ \bottomrule \end{tabular} \end{table} \begin{figure*}[h] \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=1\textwidth]{seqlen.png} \caption{Performance increased as the number of monthly transactions in the sequence increased.} \label{fig:seqlen} \end{minipage}\hfill \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=1\textwidth]{online.png} \caption{Online learning (i.e., progressively tuning the weights using incoming data) produced superior performance results when compared to re-initializing the weights with small random values before training.} \label{fig:online} \end{minipage}\hfill \end{figure*} \subsection{Early Risk Detection} In addition to capturing more risky behavior types, another important task for the model was to catch risky behavior as early as possible before the credit default occurred. This is essential when precautionary measures need to be implemented months in advance in order to reduce future financial losses. 
\begin{table} \caption{Individual model performance, split by default date.} \label{tab:bins} \begin{tabular}{c|c|c|c} \toprule & \multicolumn{3}{c}{Gini/Recall (\%) by Default Date} \\ & April - & November, 2018 - & May - \\ Model & October, 2018 & April, 2019 & November, 2019\\ \midrule GBDT & \textbf{95.56 / 81.60} & 90.38 / 59.64 & 86.24 / 45.95 \\ MLP & 95.48 / 80.90 & 90.24 / 59.26 & 86.08 / 45.81 \\ TabNet & 95.43 / 81.08 & 90.21 / 59.26 & 86.07 / 45.90 \\ LSTM & 95.53 / 81.27 & 90.50 / 60.11 & 86.43 / 46.43 \\ TCN & 95.49 / 81.09 & \textbf{90.58 / 60.32} & \textbf{86.57 / 46.75} \\ \bottomrule \end{tabular} \end{table} A comparison of the LSTM and TCN to GBDT for three different time bins (with the default event occurring in the near term, mid term, and long term) is shown in Table \ref{tab:bins}. While GBDT outperforms the sequential models in predicting defaults in the near term (within 6 months after the most recent transaction), the historical information incorporated in the sequential models helps predict defaults in the medium term (between 7-12 months after the most recent transaction) and the long term (between 13-18 months after the most recent transaction) more effectively, indicating that our approach improves early risk detection. \subsection{Sequence Length} Performance of the sequential models was also dependent on the number of input transactions used (i.e., the sequence length). In Figure \ref{fig:seqlen}, we show the degradation of performance in terms of Gini and recall as the number of monthly historical transactions included in the sequences decreased, meaning the ``look-back'' period was shorter than a full year. Both the LSTM and TCN achieved the best results with 12 months of transactions, with the TCN outperforming the LSTM at each sequence length. Given the linear increase in performance as the sequence length increased, looking back even further into borrowers' financial history might provide better performance.
However, we were limited by the available data, which does not predate 2016. \subsection{Online Learning} Since model performance is known to deteriorate with changes in consumer behavior and economic conditions, we also tested the LSTM's and TCN's ability to adapt to incoming data via online learning. To do this, we progressively tuned the weights of each model using sequences collected from three future months in 2017. We compared this method to the standard random weight initialization method in Figure \ref{fig:online}, where the weights of the model are set to small random numbers before training. As expected, performance gradually improved as more recent data were used to create the predictions. Additionally, progressively tuning the weights with incoming data outperformed the standard random weight initialization approach. \subsection{Training and Inference Time} Although the TCN outperformed the LSTM in terms of prediction performance, it is also important to consider their performance in terms of training and inference time for use in a production environment. Using an NVIDIA Tesla V100 GPU, it took an average of \textasciitilde30ms to train the TCN on a single mini-batch of 512 card members, compared to \textasciitilde50ms for the LSTM. This was expected because the input sequence is processed as a whole by the TCN instead of sequentially as in the LSTM. In principle, inference time for the LSTM should be faster than for the TCN. As suggested by Bai et al. \cite{Bai2018}, the LSTM only needs to save the next-to-last hidden state to memory and take in the current/incoming transactional data in order to generate a prediction. In contrast, the TCN needs to process the entire sequence. However, both methods were able to process a sequence in <1ms, which is much less than the time allowed for inference in our production environment (<10ms).
In future work, we will analyze the full pipeline for inference, including the time it takes to preprocess the data for the incoming transaction in addition to the time it takes to load the historical data (TCN) and the next-to-last hidden state (LSTM). \section{Conclusions} Current models for monitoring credit risk typically utilize boosted decision trees for their superior generalization accuracy when compared to that of other popular techniques, such as the multilayer perceptron. However, this approach is limited by its input features and inability to process sequential data efficiently. In this paper, we investigated sequential deep learning methods for credit risk scoring and proposed a novel sampling method to generate sequences from one year of transaction-related tabular financial data. We compared the performance of our approach to the current tree-based production model and showed that our TCN, when combined with our sampling technique, achieved superior performance in terms of financial savings and early risk detection. Finally, we provided evidence that this framework would be suitable in our production environment and for online learning. One major concern with our approach is the lack of interpretability that plagues black-box models such as the proposed LSTM and TCN. However, since GBDT also suffers from a similar interpretability problem, we do not directly use the prediction scores of the model to make account-level decisions. Instead, established and strategic business rules are used on top of (and in parallel with) the prediction score to ensure fairness and customer satisfaction. Future work will focus on stress testing our deep sequential models and probing the prediction scores to determine whether there exist sub-populations of card members for which our new approach performs better than the benchmark. It is also important to note that our data were collected during a relatively stable economic period that did not include any major recessions.
As more data are collected going forward, we will be able to test the stability and generalization capability of our approach. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Availability of COVID-19 data is crucial for researchers and policymakers to understand the pandemic and react to it in real-time. However, unlike countries with well-defined data reporting mechanisms, pandemic data from India is available either through volunteer-driven initiatives, through special access granted by the government, or manually collected from daily bulletins published by states and cities on their own websites or platforms. While daily health bulletins from Indian states contain a wealth of data, they are only available in unstructured form in PDF documents and images. On the other hand, volunteer-driven manual data-curation cannot scale to the volume of data over time. For example, one of the most well-known sources of COVID data from India, \url{covid19india.org}, has manually maintained public APIs for limited data throughout the pandemic. Such approaches, while limited in the detail of data made available, are also unlikely to continue in the long term due to the amount of volunteer manual labor required indefinitely. Although this project originally began in anticipation of that outcome, it has already come to pass for the aforementioned project, for similar reasons outlined in \cite{sunset}. As such, detailed COVID-19 data from India, in a structured form, remains inaccessible at scale. \cite{plea} notes pleas from researchers in India, earlier this year, for urgent access to detailed COVID data collected by government agencies. The aim of this project is to use document and image extraction techniques to automate the extraction of such data in structured (SQL) form from the state-level daily health bulletins, and to make this data freely available. Our target is to automate the data extraction process, so that once the extraction for each state is complete, it requires little to no attention after that (other than responding to changes in the schema). 
The role of machine learning here is to make that extraction automated and robust in coverage and accuracy. This data goes beyond just daily case and vaccinations numbers to comprehensive state-wise metrics such as the hospitalization data, age-wise distribution of cases, asymptomatic and symptomatic cases, and even case information for individuals in certain states. India, one of the most populous countries in the world, has reported over 33 million confirmed cases of COVID-19 -- second only to the United States. The massive scale of this data not only provides intriguing research opportunities in data science, document understanding, and NLP for AI researchers but will also help epidemiologists and public policy experts to analyze and derive key insights about the pandemic in real-time. At the time of this writing, \url{covid19india.org} has also released possible alternatives going forward once the current APIs are sunset next month. These suggestions, detailed here: \cite{operations}, also align perfectly with this current project and give us hope that we can continue providing this data, at scale and with much more detail than ever before. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{images/system-architecture-alt.png} \end{center} \caption{Illustration of the data extraction pipeline from daily health bulletins to an SQL database.} \label{fig:architecture} \end{figure} \section{System Overview} \label{sec:arch} We segment the system into 3 major components: (a) the backend which is responsible for extracting data from health bulletins, (b) the database which stores the parsed structured data, and (c) the frontend which displays key analyses extracted from the parsed data. We describe each of these components in greater detail in the following sections. 
\subsection{The Backend} Since we aim to extract data from health bulletins published by individual states on their respective websites, there is no standard template that is followed across these data sources in terms of where and how the bulletin is published, and what and how information is included in these bulletins. To account for these variations, we modularize the system into the following 3 main components: a) bulletin download, b) datatable definition, and c) data extraction. We provide an overview of the system in Figure \ref{fig:architecture} and look at the three components in greater detail. The open-sourced code can be accessed at: \textcolor{blue}{\url{https://github.com/IBM/covid19-india-data}}. \subsubsection{Bulletin download} The bulletin download procedure downloads the bulletins from the respective state websites to local storage while keeping track of the dates already processed. We use the BeautifulSoup\footnote{\url{https://www.crummy.com/software/BeautifulSoup/}} library to parse the state websites and identify bulletin links and dates for download. \subsubsection{Datatable definitions} Since each state provides different information, we define table schemas for each state by manually investigating the bulletin (done once per state). We then use the free open-source SQLite\footnote{\url{https://www.sqlite.org/index.html}} database to interface with the data extractor and store the data. \subsubsection{Data extractor} States typically provide the bulletins in the form of PDF documents. 
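As a minimal sketch of the link-discovery step with BeautifulSoup, consider the snippet below. The page markup, URL paths, and function name are purely illustrative; real state websites differ in structure and are handled by per-state parsers.

```python
from bs4 import BeautifulSoup

# Hypothetical bulletin listing page; actual state sites vary widely.
HTML = """
<html><body>
  <a href="/bulletins/2021-09-01.pdf">Health Bulletin 01-09-2021</a>
  <a href="/bulletins/2021-09-02.pdf">Health Bulletin 02-09-2021</a>
  <a href="/contact.html">Contact</a>
</body></html>
"""

def find_bulletin_links(html):
    """Return the hrefs of all PDF bulletin links on a state page."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(".pdf")]

links = find_bulletin_links(HTML)
print(links)  # the two PDF links; the contact page is skipped
```

In the real pipeline, each discovered link is compared against the set of already-processed dates before download.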
To extract information from them, we use a combination of classical PDF parsers and state-of-the-art machine-learning-based extraction techniques: \vspace{-5pt} \paragraph{Classical PDF parsing:} Since a substantial amount of the information in the bulletins is in the form of data tables, we use the Tabula\footnote{\url{https://tabula.technology/}} and Camelot\footnote{\url{https://camelot-py.readthedocs.io/en/master/}} Python libraries to extract these tables into Python data structures. While these libraries cover a lot of use cases, they do fail in certain edge cases. \vspace{-5pt} \paragraph{Deep-learning augmented PDF parsing:} Libraries extracting data tables from PDFs typically use either the Lattice- or the Stream-based method \cite{Elomaa2013ANSSINA} of detecting table boundaries and inferring table structure. While these heuristics work well in most cases, when tables are either not well separated or are spread wide, they fail to correctly separate tables from each other and group all the tables together. To correct such errors, we utilize CascadeTabNet \cite{cascadetabnet2020}, a state-of-the-art convolutional neural network that identifies table regions and structure. We use the detected table boundaries to parse tables in the corresponding areas of the PDF, thereby increasing the parsing accuracy. We show an example of the performance gain from this approach in Appendix \ref{sec:cascadetabnet-example}. \vspace{-5pt} \paragraph{Data extraction from images:} While a majority of the information provided in health bulletins is in the form of textual tables, some information is provided as images of tabular data. This information cannot be processed through the aforementioned techniques and requires Optical Character Recognition (OCR). We employ the Tesseract OCR engine \cite{smith2007overview} to read and extract tabular data provided as images. 
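Whichever backend produced a table (Tabula, Camelot, CascadeTabNet-guided parsing, or OCR), the rows ultimately arrive as lists of strings that must be normalized onto a state schema. A hypothetical normalization step might look like the following; the column names and number formats are illustrative, not our actual schema.

```python
def parse_table(raw_rows, header):
    """Turn extracted rows (lists of strings) into typed records.

    Bulletin tables often report counts with thousands separators,
    e.g. "1,234", so numeric columns are cleaned before casting.
    """
    records = []
    for row in raw_rows:
        rec = dict(zip(header, row))
        for key in ("confirmed", "recovered", "deceased"):
            if key in rec:
                rec[key] = int(rec[key].replace(",", ""))
        records.append(rec)
    return records

rows = [["Delhi", "1,234", "1,200", "12"]]
header = ["district", "confirmed", "recovered", "deceased"]
print(parse_table(rows, header))
```

Records in this form can then be inserted directly into the per-state SQLite tables.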
In Appendix \ref{sec:ocr-example}, we provide an example of a bulletin parsed through Tesseract OCR. The detected text is overlaid in the green boxes. Note that this is an experimental feature and we are actively working on assessing and improving its efficacy. To process information for a state, a separate data extractor routine is used, which has access to all three aforementioned APIs. Depending on the format of the particular bulletin, we utilize a combination of the three techniques to extract information. \subsection{The Frontend} \label{sec:frontend} The frontend or landing page for the project is generated automatically from the database schema and provides easy access to 1) the raw data (sampled at an appropriate rate to be loaded in the browser); and 2) pages for highlights and analysis based on SQL queries (such as those described in Section \ref{sec:prelim-analysis}). \subsection{The Database} \label{sec:data-description} The system described above runs daily and produces an SQL database that is publicly available for download. However, one can also use the source code to generate data customized with their own parameters and deploy it into their local systems. \vspace{-5pt} \paragraph{Current Status:} At the time of writing, we have completely indexed information from seven major Indian states, covering a population of over 382 million people, or roughly 28.67\% of India's population. Additionally, we are in the final stages of integrating 5 new states, covering an additional 271.5 million people, for a total coverage of 653.5 million people. In Appendix \ref{sec:data-comparison}, we provide an overview of the categories of information available in our database and contrast it with the information in the covid19india.org database. \section{Preliminary Analysis} \label{sec:prelim-analysis} In this section, we perform some preliminary analysis on the data collected from the health bulletins of Delhi and West Bengal. 
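These analyses are driven by SQL queries against the published SQLite database. A minimal, self-contained sketch of such a query is shown below; the table and column names are hypothetical stand-ins for our actual schema.

```python
import sqlite3

# In-memory stand-in for the published SQLite database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE case_info (date TEXT, confirmed INT, deceased INT)")
con.executemany("INSERT INTO case_info VALUES (?, ?, ?)",
                [("2021-04-01", 2000, 10), ("2021-04-02", 2500, 15)])

# The frontend's highlight pages are driven by aggregate queries like this.
row = con.execute(
    "SELECT SUM(confirmed), SUM(deceased) FROM case_info").fetchone()
print(row)  # (4500, 25)
```

The same pattern, with a `GROUP BY` over week numbers, yields quantities such as the weekly case fatality rate discussed below.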
We would like to emphasize that some of these analyses are (to the best of our knowledge) the first such analyses available for the two states. While still preliminary, they provide an insight into the power of such data for researchers interested in the subject. \begin{figure*}[!t] \centering \subfigure[Weekly CFR]{\includegraphics[width=0.24\textwidth]{images/DL_WB_CFR.pdf}} \hfill \subfigure[RTPCR tests (DL)]{\includegraphics[width=0.24\textwidth]{images/DL_rtpcr.pdf}} \hfill \subfigure[Bed occupancy]{\includegraphics[width=0.24\textwidth]{images/DL_WB_covid_bed_occ.pdf}} \hfill \subfigure[Hospitalization \%-age]{\includegraphics[width=0.24\textwidth]{images/WB_perc_hospitalized.pdf}} \caption{Preliminary analysis illustrating the depth of data available from the daily health bulletins.} \label{fig:DL_analysis} \end{figure*} \subsection{Weekly Case Fatality Rate (CFR)} \label{sec:weekly-cfr} India has seen two major waves of COVID-19, with the second wave, fuelled primarily by the Delta variant \cite{yang2021covid}, being more deadly than the first \cite{budhiraja2021differentials, gupta2021clinical}. We aim to understand the difference between the two waves by computing the weekly case fatality rate as the ratio of total fatalities to total newly confirmed cases in a particular week. The charts for Delhi and West Bengal are presented in Figure \ref{fig:DL_analysis}. While the weekly CFR for the first wave is comparable for the two states, there appears to be a stark difference in the numbers for the second wave. \subsection{Percentage of RT-PCR tests} \label{sec:rtpcr} Currently, India uses reverse-transcriptase polymerase-chain-reaction (RT-PCR) tests and Rapid Antigen Tests (RATs) to detect COVID-19 cases. While RT-PCR tests are highly accurate and are considered gold-standard tests for detecting COVID-19 \cite{brihn2021diagnostic}, they are more expensive and time-consuming than the less accurate RATs. 
While the official advisory is to prefer RT-PCRs over RATs \cite{icmr_testing_adv}, there exists a discrepancy in how the two testing methods are used \cite{cherian2021optimizing} and how this ratio affects the reported case results \cite{chatterjee2020india}. The state government of Delhi has in the past been called out for over-reliance on RATs as opposed to the preferred RT-PCR tests \cite{sirur_2020}. Following this criticism, the government increased the share of RT-PCR tests. We compute this ratio of RT-PCR tests to total tests conducted in the state (Figure \ref{fig:DL_analysis}). As is evident, in 2020, less than 50\% of the total tests conducted in the state were RT-PCR tests. However, starting in 2021, and especially during the second wave of COVID-19 in India, this ratio increased to over 70\%. \subsection{COVID-19 bed occupancy} \label{sec:bed-occupancy} Both DL and WB report dedicated COVID-19 hospital infrastructure and occupancy information in their bulletins. Using these numbers, we compute the COVID-19 bed occupancy as the ratio of occupied beds to total beds (Figure \ref{fig:DL_analysis}). Similar to the results in Section \ref{sec:weekly-cfr}, the bed occupancy for Delhi shows a steep increase -- reaching about 90\% occupancy -- during the second wave, while the occupancy for West Bengal does not show any significant difference between the two waves. \subsection{Hospitalization percentage} \label{sec:hospitalization} To treat COVID-19 patients, India adopted a two-pronged strategy of hospitalization along with home isolation, where patients with a mild case of COVID-19 were advised home isolation whereas hospitals were reserved for patients with more severe cases of COVID-19 \cite{varghese2020covid, bhardwaj2021analysis}. We compute the hospitalization percentage as the ratio of the number of occupied hospital beds to the number of active cases. 
This is an estimate of how many of the currently active COVID-19 patients are in hospitals versus home isolation (Figure \ref{fig:DL_analysis}). The peaks we see for the two states relate to time periods after the respective wave has subsided, while the minima and the subsequent rise in hospitalization relate to the onset of the particular wave. \section{Future work} \label{sec:future-work} The primary aim of this project is to extract as much information about the pandemic as possible from public sources so that this data can be made accessible in an easy and structured form to researchers who can utilize such data (from one of the most populous and heavily COVID-affected countries in the world) in their research. We foresee two main areas of future work for this project: \begin{enumerate} \item In the immediate future, we aim to integrate information for all Indian states into the dataset. Additionally, the project currently relies on health bulletins alone to extract the data. There are other platforms where the authorities release data, such as Twitter and Government APIs \cite{covid19org_sources}. We hope to integrate these additional sources of information into the dataset. \item We anticipate this data to be helpful in validating or extending models developed for other countries \cite{friedman2021predictive, borchering2021modeling}, developing pandemic models which integrate additional variables available in our dataset \cite{hethcote2000mathematics, agrawal2021sutra, adiga2020mathematical, Bhaduri2020ExtendingTS}, and understanding other aspects of the pandemic \cite{Ray2020PredictionsRO, Ghosh2020InterstateTP}. \end{enumerate} \begin{ack} We would like to thank all our open source contributors, in addition to those who have joined as co-authors of this paper, for their amazing contributions to this project and this dataset. In particular, we thank Sushovan De (Google) for helping us extend the dataset to the Indian state of Karnataka. 
\end{ack} \bibliographystyle{plain}
\section{Introduction} The merger of two neutron stars (NSs) is thought to be a potential source of gravitational-wave (GW) radiation and short gamma-ray bursts (GRBs; see Berger 2014 for a review). The first ``smoking gun" evidence to support this hypothesis is the direct detection of GW170817 and its electromagnetic counterparts (e.g., GRB 170817A and AT2017gfo) originating from the merger of a binary NS system, achieved via the collaboration of Advanced LIGO and Advanced VIRGO, as well as space- and ground-based telescopes (Abbott et al. 2017a; Covino et al. 2017; Goldstein et al. 2017; Kasen et al. 2017; Savchenko et al. 2017; Zhang et al. 2018). Binary NS systems play an especially interesting role in astrophysics, e.g., for studying the equation of state (EoS) of NSs, the origin of heavy elements produced through $r$-process nucleosynthesis, testing the equivalence principle, and constraining Lorentz invariance (Abbott et al. 2017b; Burns 2020; Foucart 2020, for a review). However, the post-merger product of an NS merger remains an open question, and it depends on the total mass of the post-merger system and the poorly known EoS of NSs (Lasky et al. 2014; L{\"u} et al. 2015; Li et al. 2016). From the theoretical point of view, binary NS mergers may form a black hole (BH); a supramassive NS that is supported by rigid rotation and survives for hundreds of seconds before collapsing into a BH, if the EoS of NSs is stiff enough (Rosswog et al. 2000; Dai et al. 2006; Fan \& Xu 2006; Metzger et al. 2010; Yu et al. 2013; Zhang 2013; Lasky et al. 2014; L\"{u} et al. 2015; Gao et al. 2016); or even a stable NS (Dai \& Lu 1998a,b). No matter what the remnant is, electromagnetic (EM) transients are probably produced during the coalescence process, such as the short GRB and its afterglow emission within a small opening angle (Rezzolla et al. 2011; Troja et al. 2016; Jin et al. 
2018), and an optical/infrared transient generated from the ejected materials, powered by radioactive decay from the $r$-process and emitted near-isotropically (Li \& Paczynski 1998; Metzger et al. 2010; Berger 2011; Rezzolla et al. 2011; Hotokezaka et al. 2013; Rosswog et al. 2013). Such a transient has been referred to as a "macronova" (Kulkarni 2005), "kilonova" (Metzger et al. 2010), or "merger-nova" (Yu et al. 2013; Metzger \& Piro 2014). In terms of observations, the "internal plateau" emission is defined as a nearly constant emission followed by an extremely steep decay in the X-ray light curve of GRBs (Troja et al. 2007). Moreover, the X-ray and optical data during the plateau phase are required to be not simultaneously consistent with the forward shock model when extracting the spectrum. The first smoking gun of such an internal plateau was discovered by Troja et al. (2007) in the long GRB 070110. Thereafter, a good fraction of short GRBs with X-ray emission present an internal plateau observed by {\em Swift}/XRT (Rowlinson et al. 2010, 2013; L\"{u} et al. 2015, 2017; Troja et al. 2019). This feature at least strongly supports an unstable supramassive NS as the central engine of some short GRBs before it collapses into a BH (Dai et al. 2006; Gao \& Fan 2006; Metzger et al. 2008; Fan et al. 2013; Zhang 2013; Ravi \& Lasky 2014; L\"{u} et al. 2015; Gao et al. 2016; Chen et al. 2017; L\"{u} et al. 2020; Sarin et al. 2020a,b). If this is the case, the main power of the transients from a double NS merger should no longer be limited to the $r$-process, and the spin energy of the magnetar can be comparable to or even larger than the radioactive decay energy (Yu et al. 2013; Zhang 2013; Metzger \& Piro 2014; Gao et al. 2017; Murase et al. 2018). Furthermore, the magnetic wind from the accretion disk of a newborn BH would heat up the neutron-rich merger ejecta and provide additional energy to the transients (Jin et al. 2015; Yang et al. 
2015; Ma et al. 2018). In previous studies, this additional energy source from the post-merger central engine has been adopted to power a bright X-ray afterglow emission (Zhang 2013) or a bright optical/infrared emission (Yu et al. 2013; Metzger \& Piro 2014; Gao et al. 2015; Jin et al. 2015; Ma et al. 2018). Although a number of papers discuss the rate of kilonova detections in optical surveys, they do not consider the contributions of all possible energy sources (Scolnic et al. 2018; Andreoni et al. 2020; Sagu{\'e}s Carracedo et al. 2020; Thakur et al. 2020). One basic question is how bright the merger-nova is if we simultaneously consider the above three energy sources, i.e., the $r$-process power, the spin energy of a supramassive NS, and the magnetic wind from the accretion disk of a newborn BH, and whether such a merger-nova can be detected by optical telescopes in the future, such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (Vera C. Rubin; Ivezi{\'c} et al. 2019), the Zwicky Transient Facility (ZTF; Bellm et al. 2019; Graham et al. 2019), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Morgan et al. 2012; Chambers et al. 2016), and the Roman Space Telescope (Roman; Pierel et al. 2020). Following the above considerations, we systematically search for short GRBs whose X-ray emission shows an internal plateau. This feature is believed to correspond to a short-lived supramassive NS central engine that then collapses into a BH. We then calculate the merger-nova emission by considering the above three energy sources, and discuss the detectability of the merger-novae by comparing with the detection limits of future optical telescopes. Throughout the paper, a concordance cosmology with parameters $H_0=71~\rm km~s^{-1}~Mpc^{-1}$, $\Omega_M=0.30$, and $\Omega_{\Lambda}=0.70$ is adopted. 
\section{Basic model of merger-nova energy sources} If the central engine of some short GRBs is a magnetar, then for a given initial spin period $P_{\rm i}$, its total rotational energy can be written as $E_{\rm rot}\sim 2\times 10^{52}I_{45}M_{1.4}R^{2}_{6}P_{\rm i,-3}^{-2}~\rm erg$, where $M$, $R$, and $I$ are the mass, radius, and moment of inertia of the neutron star, respectively. Throughout the paper, the convention $Q_x = Q / 10^x$ is adopted in cgs units. The spin-down luminosity of the millisecond magnetar as a function of time may be expressed by the magnetic dipole radiation formula (Zhang \& M\'esz\'aros 2001; L\"{u} et al. 2018) \begin{eqnarray} L_{\rm sd} = L_{0}\left(1+\frac{t}{\tau}\right)^{\alpha} \label{Lsd} \end{eqnarray} where $L_0 \simeq 1.0 \times 10^{47}~{\rm erg~s^{-1}} (B_{p,14}^2 P_{\rm i,-3}^{-4} R_6^6)$ and $\tau \simeq 2 \times 10^5~{\rm s}~ (I_{45} B_{p,14}^{-2} P_{\rm i,-3}^2 R_6^{-6})$ are the characteristic spin-down luminosity and timescale, respectively. $\alpha$ is usually constant and equal to $-2$ for a stable magnetar central engine when the energy loss of the magnetar is dominated by magnetic dipole radiation (Zhang \& M{\'e}sz{\'a}ros 2001). However, $\alpha$ may be less than $-2$ if the central engine is a supramassive NS, whether the energy loss is dominated by magnetic dipole radiation or by gravitational-wave emission (Rowlinson et al. 2010; Lasky \& Glampedakis 2016; L\"{u} et al. 2018). The near-isotropic magnetar wind interacts with the ejecta and is quickly decelerated. On the other hand, the injected wind continuously pushes from behind and accelerates the ejecta. In our calculation, we assume that the magnetar wind is magnetized, and the wind energy can be deposited into the ejecta either via direct energy injection by a Poynting flux (Bucciantini et al. 2012) or via forced reconnection (Zhang 2013). More details about the dynamical evolution of the ejecta are discussed below. 
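Equation (\ref{Lsd}) can be evaluated directly; a minimal numerical sketch with the fiducial $L_0$ and $\tau$ quoted above (the parameter values here are illustrative defaults, not fits to any burst):

```python
def spindown_luminosity(t, L0=1.0e47, tau=2.0e5, alpha=-2.0):
    """Dipole spin-down luminosity L_sd(t) = L0 * (1 + t/tau)**alpha, erg/s."""
    return L0 * (1.0 + t / tau) ** alpha

# Plateau level ~L0 for t << tau; for t >> tau and alpha = -2 the
# luminosity decays as t**-2.
L_plateau = spindown_luminosity(0.0)    # 1e47 erg/s
L_tau = spindown_luminosity(2.0e5)      # L0 / 4 at t = tau
print(L_plateau, L_tau)
```

A steeper $\alpha < -2$, as for a supramassive NS, is obtained simply by changing the exponent.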
The supramassive NS will collapse into a BH after a few hundred seconds, when the degeneracy pressure of the neutrons can no longer support the gravitational force due to the loss of rotational energy. The strong magnetic field of the newborn BH will be anchored in the horizon, so additional energy can be deposited into the ejecta through the Blandford-Payne (BP) mechanism (Blandford \& Payne 1982). The magnetic wind power can be estimated as follows (Livio et al. 1999; Meier et al. 2001), \begin{equation} L_{\rm bp} = (B_{\rm ms}^{\rm p})^2 r_{\rm ms}^4 \Omega_{\rm ms}^2 / 32c \end{equation} where $\Omega_{\rm ms}$ and $r_{\rm ms}$ are the Keplerian angular velocity and the radius of the marginally stable orbit, respectively. The radius $r_{\rm ms}$ can be written as \begin{equation} r_{\rm ms}/ r_{\rm g} = 3 + Z_2 - [(3 - Z_1)(3 + Z_1 + 2Z_2)]^{1/2} \end{equation} where $r_{\rm g} \equiv {GM_{\bullet}/c^2}$, and $M_{\bullet}$ is the mass of the BH. Here, $Z_1$ and $Z_2$ depend on the spin parameter of the BH ($a_{\bullet}$). Namely, $Z_1 \equiv 1 + (1 - a_{\bullet}^2)^{1/3} [(1 + a_{\bullet})^{1/3} + (1 - a_{\bullet})^{1/3}]$ for $0 \leq a_{\bullet} \leq 1$, and $Z_2 \equiv (3 a_{\bullet}^2 + Z_1^2)^{1/2}$. Then $\Omega_{\rm ms}$ is given by \begin{equation} \Omega_{\rm ms} = \left( \frac{GM_{\bullet}}{c^3} \right)^{-1} \frac{1}{\chi_{\rm ms}^3 + a_{\bullet}} \end{equation} where $\chi_{\rm ms} \equiv \sqrt{r_{\rm ms}/r_{\rm g}}$. On the other hand, based on the derivation of Blandford \& Payne (1982), the poloidal magnetic field of the disk, $B_{\rm ms}^{\rm p}$, at $r_{\rm ms}$ can be written as \begin{equation} B_{\rm ms}^{\rm p} = B_{\bullet}(r_{\rm ms} / r_{\bullet})^{-5/4} \end{equation} where $r_{\bullet} = r_{\rm g}(1 + \sqrt{1 - a_{\bullet}^2})$ and $B_{\bullet}$ are the horizon radius of the BH and the magnetic field strength threading the BH horizon, respectively (Lei et al. 2013; see Liu et al. 2017 for a review). For more details, refer to Ma et al. (2018). 
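The $Z_1$, $Z_2$ expressions above can be checked numerically against the two well-known limits of the marginally stable orbit; a short sketch:

```python
import math

def r_ms_over_rg(a):
    """Radius of the marginally stable orbit in units of r_g = G M / c^2,
    for dimensionless spin 0 <= a <= 1 (prograde orbits)."""
    z1 = 1.0 + (1.0 - a**2) ** (1.0/3.0) * (
        (1.0 + a) ** (1.0/3.0) + (1.0 - a) ** (1.0/3.0))
    z2 = math.sqrt(3.0 * a**2 + z1**2)
    return 3.0 + z2 - math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

r_schw = r_ms_over_rg(0.0)  # 6.0: Schwarzschild limit
r_max = r_ms_over_rg(1.0)   # 1.0: maximally spinning BH
print(r_schw, r_max)
```

These two limits ($6 r_{\rm g}$ and $r_{\rm g}$) confirm the formulas are transcribed correctly.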
In our calculations, we assume that the ejecta mass ($M_{\rm ej}$) lies in the range $10^{-4} M_\odot$ to $10^{-2} M_\odot$, that the ejecta can be accelerated to a velocity $\beta \sim 0.1 - 0.3$ by the magnetar wind, and that the opacity is in the range $\kappa \sim (0.1 - 10)~\rm cm^{2}~g^{-1}$. Also, $M_{\bullet}=3 M_{\odot}$ is adopted as the initial mass of the newborn BH. The total energy of the ejecta and shocked medium can be expressed as $E = (\Gamma -1)M_{\rm ej}c^2 + \Gamma E_{\rm int}^{'}$, where $\Gamma$ and $E_{\rm int}^{'}$ are the Lorentz factor and the internal energy measured in the comoving frame, respectively. Energy conservation gives \begin{eqnarray} d E_{\rm ej} = (L_{\rm inj} - L_{\rm e})dt \end{eqnarray} where $L_{\rm e}$ is the radiated bolometric luminosity and $L_{\rm inj}$ is the injection luminosity, which has two contributions: the spin-down luminosity ($L_{\rm sd}$) of the magnetar plus the radioactive power ($L_{\rm ra}$) before the magnetar collapses into a BH, and the magnetic wind power ($L_{\rm bp}$) from the newborn BH after the collapse. It can be expressed as \begin{eqnarray} L_{\rm inj} = \left\{ \begin{array}{ll} \xi_1 L_{\rm sd} + L_{\rm ra}, & t \leq t_{col}\\ \xi_2 L_{\rm bp}, & t > t_{col} \end{array} \right. \end{eqnarray} Here, $\xi_1$ and $\xi_2$ are the efficiencies of converting the spin-down luminosity and the BP luminosity into the ejecta, respectively. A typical value of 0.3 is adopted for both $\xi_1$ and $\xi_2$ (Zhang \& Yan 2011). $t_{\rm col}$ is the characteristic time at which the magnetar collapses into a BH. 
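The piecewise injection luminosity above can be sketched as follows. Treating $L_{\rm sd}$, $L_{\rm ra}$, and $L_{\rm bp}$ as functions of time is an assumption for illustration; the luminosity values below are placeholders, not fits.

```python
def injection_luminosity(t, t_col, L_sd, L_ra, L_bp, xi1=0.3, xi2=0.3):
    """Energy injection into the ejecta: before t_col the supramassive
    NS injects spin-down power plus radioactive heating; afterwards the
    newborn BH's BP wind takes over."""
    if t <= t_col:
        return xi1 * L_sd(t) + L_ra(t)
    return xi2 * L_bp(t)

# Placeholder input luminosities (erg/s), illustrative only.
L_sd = lambda t: 1e47 * (1.0 + t / 2e5) ** -2
L_ra = lambda t: 1e41
L_bp = lambda t: 1e45

L_before = injection_luminosity(0.0, 300.0, L_sd, L_ra, L_bp)
L_after = injection_luminosity(1e3, 300.0, L_sd, L_ra, L_bp)
print(L_before, L_after)
```

The discontinuity at $t_{\rm col}$ is exactly the collapse signature that maps onto the end of the X-ray internal plateau.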
Together with the above considerations, the full dynamical evolution of the ejecta can be determined by \begin{equation} \frac{d \Gamma}{dt} = \frac{L_{\rm inj} - L_{\rm e} - \Gamma \mathcal{D}(dE_{\rm int}^{'}/ dt^{'})}{M_{\rm ej}c^2 + E_{\rm int}^{'}} \label{dGamma} \end{equation} Here, we do not consider energy loss due to shock emission. $\mathcal{D} = \Gamma+\sqrt{\Gamma^2-1}$ is the Doppler factor, and one can convert the comoving time ($dt^{'}$) into the observer's time ($dt$) via the Doppler effect, $dt^{'} = \mathcal{D}dt$. The evolution of the internal energy in the comoving frame can be written as (Kasen \& Bildsten 2010; Yu et al. 2013) \begin{eqnarray} \frac{dE_{\rm int}^{'}}{dt^{'}} = L_{\rm inj}^{'} - L_{\rm e}^{'} - \mathcal{P}^{'}\frac{dV^{'}}{dt^{'}} \label{dEint} \end{eqnarray} where the comoving-frame luminosities are related to the observer-frame ones by $L_{\rm sd}^{'} = L_{\rm sd}/\mathcal{D}^2$, $L_{\rm e}^{'} = L_{\rm e}/\mathcal{D}^2$, and $L_{\rm ra}^{'} = L_{\rm ra}/\mathcal{D}^2$ (Yu et al. 2013). The pressure $\mathcal{P}^{'} = E_{\rm int}^{'}/3V^{'}$ is dominated by radiation, and the evolution of the comoving volume is given by \begin{equation} \frac{dV^{'}}{dt} = 4\pi R^2\beta c \label{dV} \end{equation} where $\beta=\sqrt{1-\Gamma^{-2}}$. The radius of the ejecta ($R$) evolves with time as \begin{equation} \frac{dR}{dt} = \frac{\beta c}{(1-\beta)} \label{dR} \end{equation} The radiated bolometric luminosity ($L_{\rm e}^{'}$) reads (Kasen \& Bildsten 2010; Kotera et al. 2013) \begin{eqnarray} L_{\rm e}^{'} = \left\{ \begin{array}{ll} \frac{E_{\rm int}^{'}c}{\tau R / \Gamma}, & t \leq t_{\tau}\\ \frac{E_{\rm int}^{'}c}{R/\Gamma}, & t > t_{\tau} \end{array} \right. \end{eqnarray} where $\tau = \kappa(M_{\rm ej}/V^{'})(R/\Gamma)$ is the optical depth of the ejecta, $\kappa$ is the opacity, and $t_{\tau}$ is the time at which $\tau = 1$. 
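As an illustration of how Eqs. (\ref{dGamma})-(\ref{dR}) can be integrated, here is a single explicit-Euler step; a numerical sketch only, since the actual integration scheme and step control are not specified in the text, and the state values below are placeholders.

```python
import math

C = 2.99792458e10  # speed of light, cm/s

def euler_step(Gamma, E_int, V, R, dt, L_inj, L_e, M_ej):
    """One explicit-Euler step of the ejecta dynamics over observer time dt.

    Follows the conventions in the text: D = Gamma + sqrt(Gamma^2 - 1),
    dt' = D dt, comoving luminosities L' = L / D^2, and radiation
    pressure P' = E'_int / (3 V')."""
    D = Gamma + math.sqrt(Gamma**2 - 1.0)
    beta = math.sqrt(1.0 - Gamma**-2)
    P = E_int / (3.0 * V)
    dV_dtp = 4.0 * math.pi * R**2 * beta * C / D        # dV'/dt'
    dE_dtp = (L_inj - L_e) / D**2 - P * dV_dtp          # dE'_int/dt'
    dGamma_dt = (L_inj - L_e - Gamma * D * dE_dtp) / (M_ej * C**2 + E_int)
    return (Gamma + dGamma_dt * dt,
            E_int + dE_dtp * D * dt,                    # dt' = D dt
            V + dV_dtp * D * dt,
            R + beta * C / (1.0 - beta) * dt)

# Illustrative state: mildly relativistic ejecta of ~5e-3 Msun.
state = euler_step(1.5, 1e45, 1e40, 1e13, 1.0, L_inj=1e47, L_e=0.0, M_ej=1e31)
print(state)
```

With strong injection and no radiative loss, a step should accelerate and expand the ejecta, which provides a basic sanity check on the signs in the equations.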
Assuming that the emission spectrum $\nu L_{\nu}$ is always characterized by blackbody radiation, an effective temperature can be defined as (Yu et al. 2018) \begin{equation} T_{e}=\left(\frac{L_{e}}{4\pi R^{2}_{ph}\sigma_{SB}}\right)^{1/4} \label{temperature} \end{equation} where $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant, and $R_{\rm ph}$ is the photospheric radius, at which the optical depth of the mass layers beyond equals unity. For a given frequency $\nu$, the observed flux density can be calculated as \begin{equation} F_{\nu}(t) = \frac{2\pi h \nu^3R^2}{D_{L}^{2}c^2}\frac{1}{\mathrm{exp}(h \nu /kT_{e})-1} \label{Fv} \end{equation} where $h$ is the Planck constant, and $D_{L}$ is the luminosity distance of the source. Finally, we determine the monochromatic apparent magnitude by $M_\nu = -2.5~\mathrm{log}_{10}\left(F_\nu/3631~\mathrm{Jy}\right)$ (Yu et al. 2018). \section{Sample Selection} While a stable magnetar typically produces a plateau phase followed by a $t^{-2}$ decay in the X-ray emission (Zhang \& M{\'e}sz{\'a}ros 2001), a supramassive NS produces an X-ray "internal plateau" (Troja et al. 2007; Rowlinson et al. 2010; L\"{u} et al. 2015). We search for such a signature to decide how likely a GRB is to be powered by a supramassive NS, so two criteria are adopted for our sample selection: \begin{itemize} \item We focus on short GRBs with an internal plateau in their X-ray emission\footnote{L\"{u} et al. (2015) performed a systematic search for short GRBs with internal plateaus by including the extended emission, which is extrapolated into the X-ray band by assuming a single power-law spectrum; also see Rowlinson et al. (2013). 
In this work, we select only short GRBs with an internal plateau in the X-ray emission itself, and do not consider internal plateaus obtained by extrapolating from BAT to X-ray.}, which may be the signature of a supramassive NS central engine undergoing a collapse from supramassive NS to newborn BH. \item In order to obtain quantitative fits of the data and more intrinsic information about the GRBs, we focus on bursts with redshift measurements. \end{itemize} Following the above criteria, we systematically examine the 122 short GRBs observed by the {\em Swift} Burst Alert Telescope (BAT) from the launch of {\em Swift} to September 2020, of which 105 have X-ray observations with more than five data points. The X-ray data of those GRBs are taken from the {\em Swift} X-ray Telescope (XRT) website\footnote{$\rm https://www.swift.ac.uk/xrt\_curves/allcurves.php$} (Evans et al. 2009). We then perform a temporal fit to the X-ray light curve with a smooth broken power law in the rest frame, \begin{eqnarray} F = F_0 \left[ \left(\frac{t}{t_b}\right)^{\omega\alpha_1} + \left( \frac{t}{t_b}\right)^{\omega\alpha_2} \right ]^{-1/\omega} \end{eqnarray} Here, $t_{\rm b}$ is the break time, $F_{\rm b} = F_0 \cdot 2^{-1/\omega}$ is the flux at $t_{\rm b}$, and $\alpha_1$ and $\alpha_2$ are the decay indices before and after the break, respectively. $\omega$ describes the sharpness of the break; a larger $\omega$ corresponds to a sharper break. In order to identify a possible internal plateau, $\omega=10$ is adopted in our analysis (Liang et al. 2007). We find that 17 bursts exhibit an internal plateau. Within this sample, only five bursts (i.e., GRBs 060801, 090515, 100625A, 101219A, and 160821B) have measured redshifts. The fitting results for those bursts are presented in Figure 1 and Table 1. The GRBs 060801, 090515, 100625A, and 101219A are reported in Rowlinson et al. (2013) and L\"{u} et al. 
(2015), and the internal plateau feature of GRB 160821B is also identified in L\"{u} et al. (2017) and Troja et al. (2019). Moreover, although the decay slope following the X-ray plateau of GRB 090510 is steeper than 2 and the burst has a measured redshift, the X-ray and optical spectra during the plateau phase are consistent with a standard forward shock model (Ackermann et al. 2010; De Pasquale et al. 2010; Pelassa \& Ohno 2010; Nicuesa Guelbenzu et al. 2012), so this case is not included in our sample. \section{Detectability of ``Merger-nova'' emission} It is interesting to compare the brightness of $r$-process powered kilonovae, magnetar-powered merger-novae, and the BH magnetic wind powered merger-novae claimed in the literature. How does the brightness of the merger-nova emission depend on the three energy sources, i.e., $r$-process power (marked $S1$), spin energy from the supramassive NS (marked $S2$), and the magnetic wind from the accretion disk of the newborn BH (marked $S3$)? Figure \ref{fig:mergernova} shows the numerical calculation of the merger-nova emission in the $R$ band considering $S1$ only, $S1+S2$, and $S1+S2+S3$, adopting typical values of the model parameters. Compared with the $S1$-only case, we find that $S2$ and $S3$ contribute significantly to the merger-nova emission. Based on the fits to the internal plateau of the X-ray emission, one can roughly estimate the initial spin-down luminosity $L_0\sim 4\pi D_{L}^2 F_0$, which is also presented in Table 1 for our sample. In order to infer the spin-down power of the magnetar and the magnetic wind of the newborn BH, we roughly set $\alpha=\alpha_2$ and $\tau=t_{\rm col}=t_{b}$. Combining those three energy sources of merger-nova emission, Figures \ref{GRB060801}-\ref{GRB160821B} show the possible merger-nova light curves of our sample in the $K$, $r$, and $U$ bands with variations of $M_{\rm ej}$, $\kappa$, and $\beta$.
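The smoothly broken power law used for these X-ray fits, together with the rough spin-down luminosity estimate $L_0\sim 4\pi D_L^2 F_0$, can be sketched as follows. All numerical values here are illustrative placeholders, not fits to any burst in the sample:

```python
import numpy as np

def broken_power_law(t, F0, tb, a1, a2, omega=10.0):
    """Smoothly broken power law for the X-ray light curve:
    F = F0 * [(t/tb)^(omega*a1) + (t/tb)^(omega*a2)]^(-1/omega).
    Larger omega gives a sharper break; omega = 10 picks out internal plateaus."""
    return F0 * ((t / tb) ** (omega * a1) + (t / tb) ** (omega * a2)) ** (-1.0 / omega)

# Illustrative plateau: near-flat decay (a1 ~ 0) breaking to a steep one (a2 > 2).
F0, tb, a1, a2 = 1e-10, 200.0, 0.1, 4.0     # cgs flux and seconds (hypothetical)
t = np.logspace(0, 4, 200)
F = broken_power_law(t, F0, tb, a1, a2)

# At the break time the flux is F_b = F0 * 2**(-1/omega).
Fb = broken_power_law(np.array([tb]), F0, tb, a1, a2)[0]
assert np.isclose(Fb, F0 * 2 ** (-1.0 / 10.0))

# Rough initial spin-down luminosity from the fitted plateau flux.
DL = 3.0e27                                  # luminosity distance in cm (hypothetical)
L0 = 4.0 * np.pi * DL**2 * F0                # L0 ~ 4 pi D_L^2 F0
```

In an actual fit, $F_0$, $t_b$, $\alpha_1$, and $\alpha_2$ would be free parameters optimized against the XRT light curve with $\omega$ held at 10.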
Moreover, we also overplot the detection upper limits of the instruments Vera C. Rubin, ZTF, Pan-STARRS, and Roman. In order to compare the real optical observations of those candidates with the model, we collect as much of the available optical or near-infrared (NIR) data (or upper limits) observed for them by optical telescopes as possible. The top three panels of Figure \ref{GRB060801} show the merger-nova light curves with $M_{\rm ej}=10^{-2} M_{\odot}$, $10^{-3} M_{\odot}$, and $10^{-4} M_{\odot}$ for fixed $\kappa=0.1~\rm cm^{2}~g^{-1}$ and $\beta=0.1$. The middle three panels of Figure \ref{GRB060801} show the light curves for fixed $M_{\rm ej}=10^{-2} M_{\odot}$ and $\kappa=0.1~\rm cm^{2}~g^{-1}$ with $\beta=0.1$, $0.2$, and $0.3$. The bottom three panels of Figure \ref{GRB060801} present the light curves for fixed $M_{\rm ej}=10^{-2} M_{\odot}$ and $\beta=0.1$ with $\kappa=0.1$, $1.0$, and $10~\rm cm^{2}~g^{-1}$. It is clear that a lower ejecta mass and a higher ejecta velocity power a brighter merger-nova emission, while the variation of the opacity is not significant enough to affect the brightness of the merger-nova. Moreover, the numerical calculation shows that the merger-nova emission of GRB 060801 in the $K$, $r$, and $U$ bands with variations of $M_{\rm ej}$, $\kappa$, and $\beta$ is very difficult to detect with all of the above instruments, except for the case of large ejecta mass $M_{\rm ej}=10^{-2} M_{\odot}$. Similar results are also found for GRB 090515 in Figure \ref{GRB090515}. Following the same method as in Figure \ref{GRB060801}, we also present the merger-nova emission light curves of the other four GRBs (GRBs 090515, 100625A, 101219A, and 160821B) in our sample from Figure \ref{GRB090515} to Figure \ref{GRB160821B}.
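The band light curves described above follow from Eqs. (\ref{temperature})-(\ref{Fv}): a bolometric luminosity and a photospheric radius give an effective temperature, hence a blackbody flux density and a monochromatic magnitude. A minimal sketch of that pipeline, where the luminosity, radius, frequency, and distance below are illustrative placeholders rather than fitted values:

```python
import numpy as np

# Physical constants in cgs units
H = 6.626e-27        # Planck constant, erg s
KB = 1.381e-16       # Boltzmann constant, erg / K
SIGMA_SB = 5.670e-5  # Stefan-Boltzmann constant, erg / (cm^2 s K^4)
C = 2.998e10         # speed of light, cm / s
JY = 1.0e-23         # 1 Jansky in erg / (cm^2 s Hz)

def effective_temperature(Le, Rph):
    """Eq. (temperature): T_e = (L_e / (4 pi R_ph^2 sigma_SB))^(1/4)."""
    return (Le / (4.0 * np.pi * Rph**2 * SIGMA_SB)) ** 0.25

def flux_density(nu, Le, Rph, DL):
    """Eq. (Fv): blackbody flux density at frequency nu."""
    Te = effective_temperature(Le, Rph)
    return (2.0 * np.pi * H * nu**3 * Rph**2 / (DL**2 * C**2)
            / np.expm1(H * nu / (KB * Te)))

def apparent_magnitude(Fnu):
    """Monochromatic magnitude: M_nu = -2.5 log10(F_nu / 3631 Jy)."""
    return -2.5 * np.log10(Fnu / (3631.0 * JY))

# Hypothetical snapshot of a merger-nova photosphere (illustrative only):
Le, Rph, DL = 1e41, 1e15, 3.0e27   # erg/s, cm, cm
nu_r = 4.8e14                      # roughly the r band, Hz
m_r = apparent_magnitude(flux_density(nu_r, Le, Rph, DL))

# A larger injected luminosity gives a smaller (brighter) magnitude.
m_r_bright = apparent_magnitude(flux_density(nu_r, 10 * Le, Rph, DL))
assert m_r_bright < m_r
```

Comparing such magnitudes with the limiting magnitudes of Vera C. Rubin, ZTF, Pan-STARRS, and Roman is what sets the detectability claims in the figures.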
We are very hopeful that the merger-nova emission of GRBs 090515, 100625A, and 101219A can be detected with the more sensitive instruments Roman, Vera C. Rubin, and Pan-STARRS, but it seems difficult to detect them with ZTF. Figure \ref{GRB160821B} shows the possible merger-nova emission of the nearby short GRB 160821B. One can see that it is bright enough to be detected by Vera C. Rubin, Pan-STARRS, and ZTF, as well as Roman, for the given model parameters. We will discuss more details of this case at the end of Section 5. \section{Conclusion and discussion} The internal plateau observed in the X-ray emission of some short GRBs suggests that one possible remnant of double NS mergers is a supramassive NS that is supported by rigid rotation and survives for hundreds of seconds before collapsing into a BH. An optical/infrared transient (i.e., a merger-nova) is generated from the ejected materials and powered by radioactive decay from the $r$-process, spin energy from the supramassive NS, and the magnetic wind from the accretion disk of the newborn BH. In this work, we systematically search for the signature of a supramassive NS central engine by analyzing the X-ray emission of short GRBs, and calculate how bright this merger-nova is for the given model parameters. Five short GRB candidates (GRBs 060801, 090515, 100625A, 101219A, and 160821B) are found to carry such a feature and have redshift measurements. Through detailed numerical calculations, we find that the merger-nova emission of GRB 060801 in the $K$, $r$, and $U$ bands with variations of $M_{\rm ej}$ ($10^{-4}-10^{-2} M_{\odot}$), $\kappa$ ($0.1-10 ~\rm cm^{2}~g^{-1}$), and $\beta$ ($0.1-0.3$) is very difficult to detect with Vera C. Rubin, Pan-STARRS, and ZTF, except for the case of low ejecta mass $M_{\rm ej}=10^{-4} M_{\odot}$. However, we are very hopeful that the merger-nova emission of GRBs 100625A and 101219A can be detected with the more sensitive instruments Roman, Vera C.
Rubin, and Pan-STARRS, though it seems difficult to detect with ZTF. For GRB 090515, there are two real optical data points in the $r$ band observed by an optical telescope (Fong \& Berger 2013), and the model prediction of the merger-nova signal for the given model parameters is higher than those two data points. However, no evidence was found in recent radio surveys of short GRBs (Klose et al. 2019; Ricci et al. 2021). This contradiction between the observations and the model prediction may be caused by unreasonably selected model parameters (e.g., too high an ejecta mass, too low an opacity, and too high a velocity). More interestingly, the merger-nova emission of the nearby short GRB 160821B is indeed detected (with some upper limits) by optical telescopes (Kasliwal et al. 2017, who called it a kilonova), and the presence of a merger-nova was also discussed by Kasliwal et al. (2017), Jin et al. (2018), and Troja et al. (2019). The radio surveys of this case are also reported by Ricci et al. (2021). In our calculations, the merger-nova emission of this case is bright enough to be detected by Roman, Vera C. Rubin, Pan-STARRS, and ZTF, and it is consistent with the currently observed merger-nova data. Recently, Ma et al. (2020) proposed that the merger-nova of this case is possibly powered through the Blandford-Znajek mechanism (BZ; Blandford \& Znajek 1977) of the newborn BH, and that the observed merger-nova data are consistent with the physical process of the magnetar collapsing into a BH. In order to constrain the real model parameters of this case, we are working with the Markov Chain Monte Carlo (MCMC) method and will present the values of the model parameters in another paper (in preparation), as well as for other short GRBs if the optical data are good enough (An et al. 2021, in preparation). The criteria for our sample selection invoked the X-ray internal-plateau emission, which indicates the signature of a supramassive NS collapsing into a BH. Alternatively, Yu et al.
(2018) proposed that the internal-plateau emission in some short GRBs may be caused by an abrupt suppression of the magnetic dipole radiation of a remnant NS. If this is the case, the energy injection from the NS will contribute to the merger-nova at a later time. In this paper, we do not consider this effect. In any case, in order to investigate the physical process of the magnetar central engine in short GRBs, we encourage intense multiband optical follow-up observations of short GRBs with internal plateaus to catch such merger-nova signatures in the future. More importantly, the observed kilonova or merger-nova, together with the GW signal from a double NS merger or an NS-BH merger, will provide a good probe to study the accurate Hubble constant, the equation of state of an NS, as well as the origin of heavier elements in the universe. With the improvement of detection technology, we look forward to observing more and more GW events associated with merger-novae from compact stars in the future. \acknowledgments We are very grateful to the referee for constructive comments and suggestions. We acknowledge the use of the public data from the {\em Swift} data archive and the UK {\em Swift} Science Data Centre. We would like to thank Yunwei Yu for sharing his original code used to produce merger-novae, and we also thank Dabin Lin, Jia Ren, Shenshi Du, and Cheng Deng for helpful discussions. This work is supported by the National Natural Science Foundation of China (grant Nos. 11922301, 11851304, and 11533003), the Guangxi Science Foundation (grant Nos. 2017GXNSFFA198008, 2018GXNSFGA281007, and AD17129006), the Bagui Young Scholars Program (LHJ), and special funding for Guangxi distinguished professors (Bagui Yingcai and Bagui Xuezhe).
\section{Introduction} In an unpublished preprint \cite{kisinppmf}, Kisin introduces some $p$-adic period rings with an eye towards capturing periods for a certain class of overconvergent $p$-adic modular forms. The Fontaine-style functors associated to these rings are intended to be a sort of Betti realization for these forms. With this in mind, Kisin essentially ``deforms'' $B_{HT}$ to force non-integral $p$-adic powers of the cyclotomic character to turn up. The rings he defines are closely related to the Iwasawa algebra and non-canonically contain a copy of $B_{HT}$. The $p$-adic modular forms he considers are of a limited sort. In particular, they are $p$-adic limits of classical forms. As such, their weights are $p$-adic limits of classical weights, which is why $p$-adic powers of the cyclotomic character suffice for his purposes. One might naturally be led to ask what can be said outside of this realm of the eigencurve. The first thing one might try to do is to further deform $B_{HT}$ to ``see'' all points of $p$-adic weight space $\scr{W}$. Given the interpretation of the Iwasawa algebra in terms of distributions on $\Z_p$, one natural approach is to consider a suitable ring of distributions on the rigid-analytic space $\scr{W}$. In so doing one quickly encounters a number of unpleasant features, including zero-divisors in the ring of distributions and too many Galois invariants. Both of these seem to be related to the presence of torsion in $\scr{W}$ (which conveniently lies just outside the region containing the $\Z_p$-powers of the cyclotomic character considered by Kisin). This led the author to consider the quotient of $\scr{W}$ by its torsion. We show that this quotient has the structure of a rigid space and that a certain collection of distributions on this space does indeed furnish a nice ring of periods. 
In fact, this ring turns out to be canonically isomorphic via a ``Fourier transform'' to the ring $B_{\mathrm{Sen}}$ introduced by Colmez in \cite{colmezsen}. \section{Notation} Fix an odd prime $p$. In this paper, $\scr{W}$ will denote $p$-adic weight space. This is a rigid space over $\Q_p$ whose points with values in a complete extension $L/\Q_p$ are given by $$\scr{W}(L) = \Hom_{\mathrm{cont}}(\Z_p^\times, L^\times)$$ Let $\tau$ denote the Teichm\"uller character $\Z_p^\times\longrightarrow \mu_{p-1}$ and define $\ip{x} = x/\tau(x)\in 1+p\Z_p$. Any $\psi\in \scr{W}(L)$ can be factored as $$\psi(x) = \psi(\tau(x))\psi(\ip{x}) = \tau(x)^i\psi(\ip{x})$$ for a unique $i\in \Z/(p-1)\Z$. The space $\scr{W}$ is the disjoint union of $p-1$ disks $\scr{W}^i$ associated to these values of $i$. Let $\gamma$ be a generator of the pro-cyclic subgroup $1+p\Z_p$ (for example, one could take $\gamma=1+p$). Then the restriction of $\psi$ to $1+p\Z_p$ is determined by $\psi(\gamma)$. Moreover, it is not difficult to show that $\psi(\gamma)=1+t$ where $t\in L$ satisfies $|t|<1$, and that any such $t$ defines a character of $1+p\Z_p$ that we will denote by $\psi_t$. Thus the $L$-valued points of $\scr{W}$ are in bijective correspondence with $p-1$ copies of the open unit disk in $L$ via $$(i,t)\longleftrightarrow \tau^i\psi_t$$ For each $n\geq 0$, let $\scr{W}^0_n$ denote the admissible affinoid in $\scr{W}^0$ defined by the inequality $$|\psi(\gamma)-1|\leq p^{-1/p^{n-1}(p-1)}$$ Note that these affinoids do not depend on the choice of $\gamma$. The numbering is set up so that if $\zeta$ is a primitive $p^n$-th root of unity and $\psi$ satisfies $\psi(\gamma) = \zeta$, then $\psi\in \scr{W}^0_n\setminus \scr{W}^0_{n-1}$. That is, $\scr{W}^0_n\cap \scr{W}^0_{\mathrm{tors}} =\scr{W}^0[p^n]$. In what follows $K$ will denote a complete extension of $\Q_p$.
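The valuation underlying this numbering can be checked directly: for $\zeta$ a primitive $p^n$-th root of unity, the norm of $\zeta-1$ is $\Phi_{p^n}(1)=p$ and $[\Q_p(\zeta):\Q_p]=p^{n-1}(p-1)$, so $v_p(\zeta-1)=1/(p^{n-1}(p-1))$, i.e., $|\zeta-1|=p^{-1/p^{n-1}(p-1)}$. A quick numerical sketch of this bookkeeping (the prime $p=5$ is an arbitrary choice):

```python
from fractions import Fraction
from sympy import cyclotomic_poly, totient

p = 5  # any odd prime works; 5 is an arbitrary choice
prev_v = None
for n in range(1, 6):
    # The norm of zeta - 1 (zeta a primitive p^n-th root of unity)
    # is the cyclotomic polynomial evaluated at 1: Phi_{p^n}(1) = p.
    assert cyclotomic_poly(p**n, 1) == p
    # Hence v_p(zeta - 1) = v_p(p) / [Q_p(zeta) : Q_p] = 1/(p^{n-1}(p-1)),
    deg = int(totient(p**n))          # = p^{n-1}(p-1)
    v = Fraction(1, deg)
    # i.e. |zeta - 1| = p^{-1/p^{n-1}(p-1)}: a character psi with
    # psi(gamma) = zeta lies in W^0_n but not in W^0_{n-1}.
    if prev_v is not None:
        assert v < prev_v             # valuations strictly decrease with n
    prev_v = v
```

So the boundary spheres of the nested affinoids $\scr{W}^0_n$ are exactly where the new $p^n$-torsion characters appear.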
If $K/\Q_p$ is finite, then for each integer $n\geq 0$ we define $K_n = K(\mu_{p^n})\subset \overline{\Q_p}$ and $K_\infty = \cup_n K_n\subseteq \overline{\Q_p}$. The letter $\chi$ will always denote the cyclotomic character. We recall some standard facts about the $p$-adic logarithm and exponential for future use. See \cite{washingtonbook} for a nice account of this material. Let $$\log_p(1+x) = \sum_{k\geq 1} (-1)^{k+1}\frac{x^k}{k}$$ and $$\exp_p(x) = \sum_{k\geq 0}\frac{x^k}{k!}$$ These series have radii of convergence $1$ and $p^{-1/(p-1)}$, respectively. If $|x|<p^{-1/(p-1)}$, then $|\log_p(1+x)| = |x|$ and we have $$\exp_p(\log_p(1+x)) = 1+x$$ and $$\log_p(\exp_p(x))=x$$ Also, the identity $$\log_p(uv) = \log_p(u)+\log_p(v)$$ holds whenever it makes sense ($|u-1|,|v-1|<1$). One consequence of the last property is that $\log_p(\zeta) = 0$ for any $p$-power root of unity $\zeta$. In fact, the converse of this statement holds as well in the sense that the $p$-power roots of unity are the only roots of $\log_p$ in its domain of convergence. \section{The quotient $\scr{W}/\scr{W}_{\mathrm{tors}}$} The space $\scr{W}$ has $(p-1)$-torsion given by the powers $\tau^i$ of the Teichm\"uller character. This torsion is all rational over $\Q_p$ and the quotient of $\scr{W}$ by this torsion subgroup is canonically identified (via Teichm\"uller) with the identity component $\scr{W}^0$. The space $\scr{W}^0$ has only $p$-power torsion. We wish to describe the quotient $\scr{W}^0/\scr{W}^0_{\mathrm{tors}}$. Looking only at the $\C_p$-points of these spaces, the properties of $\log_p$ described above ensure that it defines an injective map \begin{eqnarray*} \scr{W}^0(\C_p)/\scr{W}^0_{\mathrm{tors}}(\C_p) & \longrightarrow &\C_p \\ \psi & \longmapsto & \log_p(\psi(\gamma)) \end{eqnarray*} It is not difficult to see that this map is also surjective (see the proof of Theorem \ref{quotstructure} below). This description has two deficiencies.
The first is that it is only a description on points and does not identify $\scr{W}^0/\scr{W}^0_{\mathrm{tors}}$ as a rigid space. The second is that it depends on the choice of $\gamma$. The latter is easily remedied as follows. Identify $\scr{W}^0$ with the disk defined by $|t|<1$ using $\gamma$ as explained in the previous section, and define an analytic function $\vartheta$ on $\scr{W}^0$ by $$\vartheta = \frac{1}{\log_p(\gamma)}\log_p(1+t)$$ In terms of characters, this function is given by $$\vartheta(\psi) = \frac{\log_p(\psi(\gamma))}{\log_p(\gamma)}$$ It follows immediately from the latter description and properties of $\log_p$ that the function $\vartheta$ does not depend on the choice of $\gamma$ and thus furnishes a canonical analytic function on $\scr{W}^0$ that is defined over $\Q_p$ (say, by taking $\gamma=1+p$) and satisfies $$\vartheta(\psi\psi') = \vartheta(\psi)+\vartheta(\psi')\ \ \mbox{for all}\ \ \psi, \psi'\in \scr{W}^0$$ The first difficulty is more subtle, as the notion of quotient in the setting of rigid spaces is not well-known. We shall adopt the following definition of quotient in our context (see \cite{conradtemkin} for a much more general discussion of such quotients). Let $G$ be a rigid-analytic group (such as $\scr{W}^0$ or $\scr{W}^0_n$) and $H$ a subgroup (such as $\scr{W}^0_{\mathrm{tors}}$ or $\scr{W}^0_n[p^n]$, respectively), and let $H\times G \rightrightarrows G$ denote the equivalence relation defined by multiplication and projection. We will say that a rigid space $X$ equipped with a surjective map $G\longrightarrow X$ is the quotient of $G$ by $H$ if the compositions $$H\times G\rightrightarrows G\longrightarrow X$$ coincide and the resulting map $$H\times G \longrightarrow G\times_X G$$ is an isomorphism. \begin{lemm}\label{thetabound} If $\psi\in \scr{W}^0_n$, we have $|\vartheta(\psi)|\leq p^{n-1/(p-1)}$.
\end{lemm} \begin{proof} If $\psi\in \scr{W}^0_0$ then $|\psi(\gamma)-1|\leq p^{-p/(p-1)}<p^{-1/(p-1)}$, so $$|\vartheta(\psi)| = \left|\frac{\log_p(\psi(\gamma))}{\log_p(\gamma)}\right|= p|\log_p(\psi(\gamma))|\leq p^{1-p/(p-1)}=p^{-1/(p-1)}$$ which is the claim for $n=0$. Suppose that the claim holds for some $n$ and let $\psi\in\scr{W}_{n+1}^0$. Observe that $$|\psi(\gamma)^p-1| = |((\psi(\gamma)-1)+1)^p-1| = | (\psi(\gamma)-1)^p+p(\psi(\gamma)-1) (1+\cdots)|\leq p^{-1/p^{n-1}(p-1)}$$ so that $\psi^p\in \scr{W}^0_n$. Thus $$p^{-1}|\vartheta(\psi)|=|p\vartheta(\psi)|=|\vartheta(\psi^p)|\leq p^{n-1/(p-1)}$$ which establishes the claim for $n+1$ and thus for all $n$ by induction. \end{proof} The upshot of this lemma is that $\vartheta$ defines a canonical analytic function \begin{equation}\label{thetan} \vartheta:\scr{W}^0_n\longrightarrow \A^1_{p^{n-1/(p-1)}} \end{equation} for each $n\geq 0$, where $\A^1_{p^{n-1/(p-1)}}$ denotes the affinoid ball of radius $p^{n-1/(p-1)}$ centered at $0$ in the rigid-analytic affine line $\A^1$. \begin{theo}\label{quotstructure} The map $\vartheta$ in (\ref{thetan}) identifies $\A^1_{p^{n-1/(p-1)}}$ with the quotient of $\scr{W}^0_n$ by $\scr{W}^0_n[p^n]$. \end{theo} \begin{proof} Let us first show that the map $\vartheta$ in (\ref{thetan}) is surjective. Suppose that $x\in \C_p$ has $|x|\leq p^{n-1/(p-1)}$. Then $$|p^n\log_p(\gamma)x| \leq p^{-n-1}p^{n-1/(p-1)} = p^{-1-1/(p-1)}<p^{-1/(p-1)}$$ so $y=\exp_p(p^n\log_p(\gamma)x)$ is defined. Let $z\in \C_p$ satisfy $z^{p^n}=y$ and let $\psi$ be the unique point of $\scr{W}^0$ with $\psi(\gamma)=z$. Then $|z-1|\leq p^{-1/p^{n-1}(p-1)}$, as is easy to see using an inductive binomial theorem argument akin to the one in the proof of Lemma \ref{thetabound}. Thus $\psi\in \scr{W}^0_n$ and $$p^n\vartheta(\psi)= \frac{\log_p(\psi(\gamma)^{p^n})}{\log_p(\gamma)} =\frac{\log_p(\exp_p(p^n\log_p(\gamma)x))}{\log_p(\gamma)} =p^n x$$ so $\vartheta(\psi)=x$ and $\vartheta$ is surjective.
Let $A$ denote the affinoid algebra of $\scr{W}_n^0$, which we will identify with the Tate algebra of power series in $t$ with coefficients in $\Q_p$ that are strictly convergent for $|t|\leq p^{-1/p^{n-1}(p-1)}$. That is, $$A = \left\{\left.\sum a_kt^k\ \right|\ |a_k|p^{-k/p^{n-1}(p-1)}\to 0 \ \mbox{as}\ k\to\infty\right\}$$ Similarly, we let $B$ denote the affinoid algebra of $\A^1_{p^{n-1/(p-1)}}$, which we identify with the power series over $\Q_p$ in the variable $s$ that are strictly convergent for $|s|\leq p^{n-1/(p-1)}$. The map $\vartheta$ corresponds to a map $B\longrightarrow A$, and by suggestive abuse of notation, we will denote the image of $s$ under this pull-back by $\vartheta(1+t)$. Explicitly, $$\vartheta(1+t) = \frac{\log_p(1+t)}{\log_p(\gamma)} = \frac{1}{\log_p(\gamma)}\sum_{k\geq 1}(-1)^{k+1}\frac{t^k}{k}$$ which lies in $A$, as is evident by taking $\gamma=1+p$, for example. Note that the functional properties of $\log_p$ correspond to (in fact, result from) formal properties of this series, which we use with only minor comments below. The two maps $\scr{W}_n^0[p^n]\times\scr{W}_n^0\rightrightarrows \scr{W}_n^0$ comprising the equivalence relation correspond to the maps \begin{eqnarray*} A & \longrightarrow & A/((1+t)^{p^n}-1)\ \widehat{\otimes}\ A \\ t & \longmapsto & 1\otimes t \\ t & \longmapsto & 1\otimes t + t\otimes 1 + t\otimes t \end{eqnarray*} We claim that the compositions of these maps with the map $B\longrightarrow A$ coincide. Indeed \begin{eqnarray*} \vartheta(1+(1\otimes t +t\otimes 1+t\otimes t)) &=& \vartheta((1+1\otimes t)(1+t\otimes 1)) \\ &=& \vartheta(1+1\otimes t)+\vartheta(1+t\otimes 1) \\ &=& \vartheta(1+1\otimes t)+p^{-n}\vartheta((1+t\otimes 1)^{p^n}) \\ &=& \vartheta(1+1\otimes t) \end{eqnarray*} where the last equality follows because $$\vartheta((1+t\otimes 1)^{p^n}) = \vartheta(1+ ( (1+t\otimes 1)^{p^n}-1))$$ lies in the ideal generated by $(1+t\otimes 1)^{p^n}-1$, which is $0$.
The upshot of the equality of these two functions is that we have a well-defined map \begin{eqnarray*} A\widehat{\otimes}_B A & \longrightarrow & A/((1+t)^{p^n}-1)\ \widehat{\otimes}\ A \\ 1\otimes t & \longmapsto & 1\otimes t \\ t\otimes 1 & \longmapsto & 1\otimes t + t\otimes 1 + t\otimes t \end{eqnarray*} To complete the proof, we must show that this map is an isomorphism. Let us try to define an inverse map via \begin{eqnarray*} A/((1+t)^{p^n}-1)\ \widehat{\otimes}\ A &\longrightarrow & A\widehat{\otimes}_B A\\ 1\otimes t & \longmapsto & 1\otimes t \\ t\otimes 1 &\longmapsto & \frac{1+t\otimes 1}{1+1\otimes t}-1 =(1+t\otimes 1)\left(\sum_{k\geq 0}(-1)^k(1\otimes t)^k\right) - 1 \end{eqnarray*} If this map is well-defined, then it is a simple formal matter to check that it is the inverse of the map defined above. To see that it is well-defined, first note that the series in the definition has radius of convergence $1$ and thus lies in $A$. Finally, we must show that $(1+t\otimes 1)^{p^n}-1$ gets sent to $0$ under the proposed map. Thus, we need to check that $$\left(\frac{1+t\otimes 1}{1+1\otimes t}\right)^{p^n}-1=0$$ in $A\widehat{\otimes}_BA$, which is in turn equivalent to $(1+t)^{p^n}\otimes 1 = 1\otimes (1+t)^{p^n}$ in $A\widehat{\otimes}_BA$. By definition of the tensor product, this will follow if we can find an element $b\in B$ whose image in $A$ under $B\longrightarrow A$ is $(1+t)^{p^n}$. We claim that $$b = \exp_p(p^n\log_p(\gamma)s) = \sum_{k\geq 0}\frac{(p^n\log_p(\gamma))^k}{k!}s^k$$ is such an element. First, since $\exp_p(x)$ is strictly convergent on $|x|\leq p^{-1-1/(p-1)}< p^{-1/(p-1)}$, the given series is strictly convergent for $|s|\leq p^{n-1/(p-1)}$ and thus lies in $B$. The image of $b$ under the map $B\longrightarrow A$ is $$\exp_p(p^n\log_p(\gamma)\vartheta(1+t)) = \exp_p\left(p^n \log_p(\gamma) \frac{\log_p(1+t)}{\log_p(\gamma)}\right) = (1+t)^{p^n}$$ as desired.
\end{proof} \begin{defi} For each $n\geq 0$, let $X_n$ denote the quotient $\scr{W}_n^0/\scr{W}_n^0[p^n]$. By the previous theorem, $X_n$ is an affinoid defined over $\Q_p$ equipped with a canonical isomorphism $\vartheta:X_n\stackrel{\sim}{\longrightarrow} \A^1_{p^{n-1/(p-1)}}$. \end{defi} Theorem \ref{quotstructure} has the following immediate consequence. \begin{coro}\label{maincoro} The ring $\O(X_n\widehat{\otimes}K)$ consists of functions of the form $$f = \sum a_k\vartheta^k$$ with $a_k\in K$ and $|a_k|p^{k(n-1/(p-1))}\longrightarrow 0$. In terms of this expansion we have $$\|f\|_{\sup{}} = \sup_k |a_k|p^{k(n-1/(p-1))}$$ for all such $f$. \end{coro} For each $n\geq 0$ the natural map \begin{equation}\label{trans} X_n\longrightarrow X_{n+1} \end{equation} is injective and identifies $X_n$ with an admissible affinoid open in $X_{n+1}$. Gluing over increasing $n$, we conclude that the function $\vartheta$ furnishes a canonical isomorphism $$\scr{W}^0/\scr{W}^0_{\mathrm{tors}}\stackrel{\sim}{\longrightarrow} \A^1$$ \section{Distributions} To the affinoids $X_n$, we associate spaces of bounded distributions as follows. \begin{defi} Let $K/\Q_p$ be a complete extension. A bounded $K$-valued distribution on $X_n$ is a $\Q_p$-linear map $$\mu:\O(X_n)\longrightarrow K$$ for which there exists a constant $C$ such that $|\mu(f)|\leq C \|f\|_{\sup{}}$ for all $f\in \O(X_n)$. We denote the space of all such distributions by $\scr{D}(X_n,K)$. The map (\ref{trans}) induces a map \begin{equation}\label{disttrans} \scr{D}(X_n,K)\longrightarrow \scr{D}(X_{n+1},K) \end{equation} for each $n$, and we define $$\scr{D}(X_\infty,K) = \varinjlim_n \scr{D}(X_n,K)$$ \end{defi} The maps in (\ref{disttrans}) are injective for all $n\geq 0$, so the injective limit is simply a union. This injectivity is not obvious from the definition, but is a simple consequence of Lemma \ref{chardist} below.
\begin{rema}\label{extension} If $K/L$ is an extension of complete extensions of $\Q_p$, then any $\mu\in \scr{D}(X_n,K)$ induces an $L$-linear map $$\O(X_n\widehat{\otimes} L)\longrightarrow K$$ by extension of scalars in the obvious manner. \end{rema} The following lemma allows us to characterize our distributions on $X_n$ via their ``moments'' under the isomorphism of Theorem \ref{quotstructure}. \begin{lemm}\label{chardist}\ \begin{enumerate} \item If $\mu\in \scr{D}(X_n,K)$, then $|\mu(\vartheta^k)|p^{-k(n-1/(p-1))}$ is bounded in $k$. \item Conversely, if $x_k\in K$ is a sequence such that $|x_k|p^{-k(n-1/(p-1))}$ is bounded, then there exists a unique $\mu \in \scr{D}(X_n,K)$ such that $\mu(\vartheta^k)=x_k$. \end{enumerate} \end{lemm} \begin{proof}\ \begin{enumerate} \item Since $\mu\in \scr{D}(X_n,K)$, there exists $C$ such that $|\mu(f)|\leq C\|f\|_{\sup{}}$ for all $f\in \O(X_n)$. Thus $$|\mu(\vartheta^k)|\leq C\|\vartheta^k\|_{\sup{}}= Cp^{k(n-1/(p-1))}$$ \item Let $x_k$ be as in the statement. For $f\in \O(X_n)$ we may write $$f = \sum_k a_k\vartheta^k$$ with $|a_k|p^{k(n-1/(p-1))}\longrightarrow 0$ by Corollary \ref{maincoro}. The hypotheses on $a_k$ and $x_k$ ensure that $a_kx_k\longrightarrow 0$, so we may define $\mu(f) = \sum_k a_kx_k$. Note that \begin{eqnarray*} |\mu(f)|\leq \sup_k |a_k||x_k| &=& \sup_k (|a_k|p^{k(n-1/(p-1))}) (|x_k|p^{-k(n-1/(p-1))}) \\ &\leq& \|f\|_{\sup{}}\cdot \sup_k |x_k|p^{-k(n-1/(p-1))} \end{eqnarray*} again by Corollary \ref{maincoro}. Thus $\mu$ is bounded and defines an element of $\scr{D}(X_n,K)$. The uniqueness of $\mu$ follows because $\mu(\sum b_k\vartheta^k) = \sum b_k\mu(\vartheta^k)$ holds for any $\mu\in \scr{D}(X_n,K)$ and any $\sum b_k\vartheta^k\in \O(X_n)$ by the boundedness of $\mu$, so any such $\mu$ is \emph{determined} by its moments. \end{enumerate} \end{proof} The group structure on $X_n$ endows $\scr{D}(X_n,K)$ with a convolution product. To define this product, we will need a lemma.
For $f\in \O(X_n)$ and $\varphi\in X_n$, define a function $T_\varphi f$ on $X_n$ by $$T_\varphi f(\psi)=f(\varphi\psi)$$ Note that if $\varphi$ is a $K$-valued point of $X_n$, then $T_\varphi f$ is naturally an element of $\O(X_n\widehat{\otimes} K)$. \begin{lemm} Let $f\in \O(X_n)$ and let $\mu\in \scr{D}(X_n,K)$. The function $$\varphi\longmapsto \mu(T_\varphi f)$$ is an element of $\O(X_n\widehat{\otimes}K)$. \end{lemm} \begin{proof} First note that this function makes sense by Remark \ref{extension} and the comment preceding the lemma. By Corollary \ref{maincoro}, $f$ may be written as $f = \sum a_k\vartheta^k$ with $|a_k|p^{k(n-1/(p-1))}\longrightarrow 0$. We have \begin{eqnarray*} (T_\varphi f)(\psi) = f(\varphi\psi) &=& \sum_k a_k \vartheta(\varphi\psi)^k \\ &=& \sum_k a_k(\vartheta(\varphi)+\vartheta(\psi))^k\\ & =& \sum_k \sum_{m\leq k} a_k\binom{k}{m}\vartheta(\varphi)^{m}\vartheta(\psi)^{k-m} \end{eqnarray*} Thus, since $\mu$ is bounded we have $$\mu(T_\varphi f) = \sum_k \sum_{m\leq k}a_k \binom{k}{m}\mu(\vartheta^{k-m})\vartheta(\varphi)^m = \sum_m\left(\sum_{k\geq m}a_k\binom{k}{m}\mu(\vartheta^{k-m})\right)\vartheta(\varphi)^m$$ According to Corollary \ref{maincoro}, in order to complete the proof we must show that $$\left|\sum_{k\geq m} a_k\binom{k}{m} \mu(\vartheta^{k-m})\right|p^{m(n-1/(p-1))}\longrightarrow 0$$ as $m\longrightarrow \infty$. This expression is bounded by the supremum over $k\geq m$ of $$|a_k||\mu(\vartheta^{k-m})|p^{m(n-1/(p-1))} = |a_k|p^{k(n-1/(p-1))} |\mu(\vartheta^{k-m})|p^{(m-k)(n-1/(p-1))}$$ By Lemma \ref{chardist}, there exists $C$ such that $|\mu(\vartheta^{k-m})|\leq Cp^{(k-m)(n-1/(p-1))}$ for all $k\geq m$, so the previous bound is at most $C|a_k|p^{k(n-1/(p-1))}$. This quantity tends to zero in $k$, and since $m\leq k$ the supremum over $k\geq m$ therefore tends to zero as $m\longrightarrow\infty$.
\end{proof} For $\mu,\nu\in \scr{D}(X_n,K)$ we define the \emph{convolution of $\mu$ and $\nu$} by $$(\mu*\nu)(f) = \mu(\varphi\longmapsto \nu(T_\varphi f))$$ This is well-defined by the previous lemma, and clearly defines an element of $\scr{D}(X_n,K)$. It is trivial to check that the distribution $\mu_{\bf{1}}\in\scr{D}(X_0,\Q_p)$ given by $\mu_{{\bf{1}}}(f) = f(\bf{1})$, that is, the Dirac distribution associated to the identity character, is a two-sided identity for the convolution product. As we will see below, the rings $\scr{D}(X_n,K)$ are in fact commutative integral domains. \begin{exem}\label{diracconv} Let $\psi,\psi'\in \scr{W}^0$. The convolution of the Dirac distributions associated to these two characters is $$(\mu_\psi*\mu_{\psi'})(f) = \mu_\psi(\varphi\longmapsto \mu_{\psi'}(T_\varphi f)) = \mu_\psi(\varphi\longmapsto f(\varphi\psi')) = f(\psi\psi')$$ which is to say $\mu_\psi*\mu_{\psi'}=\mu_{\psi\psi'}$. \end{exem} \section{Galois action and theta operator} The group $G=\Gal(\overline{\Q}_p/\Q_p)$ acts on the $\C_p$-valued points of $X_n$ with $g\in G$ acting on the class of a character $\psi$ to give the class of $g\circ\psi$. We define an action of $G$ on $\O(X_n\widehat{\otimes}\C_p)$ via $$f^g(\psi) = g(f(g^{-1}\circ \psi))$$ In terms of the expansion of Corollary \ref{maincoro} this action is simply given by the action of $G$ on the coefficients $a_k$ since $\vartheta$ is defined over $\Q_p$, and hence the isomorphism of Theorem \ref{quotstructure} intertwines this action with the usual one (given by action on coefficients) on analytic functions on the ball $\A^1_{p^{n-1/(p-1)}}$. Let $G_n=G_{\Q_p,n}\subset G$ be the subgroup $G_n = \Gal(\overline{\Q_p}/\Q_p(\mu_{p^n}))$. 
Then for $g\in G_n$ we have $v(\log_p(\chi(g))) = v(\chi(g)-1)\geq n$ and hence $$p^{k(n-1/(p-1))}\left|\frac{(\log_p(\chi(g)))^k}{k!}\right|\leq 1$$ It follows from Corollary \ref{maincoro} that $$\exp_p(\vartheta\log_p(\chi(g))) = \sum_k \frac{(\log_p(\chi(g)))^k}{k!}\vartheta^k$$ is an analytic function on $X_n$. This function allows us to define an action of $G_n$ on $\scr{D}(X_n,\C_p)$ by the formula $$(g\cdot \mu)(f) = g( \mu (f^{g^{-1}}\cdot\exp_p(\vartheta\log_p(\chi(g)))\ ))$$ It is an easy matter to check that this preserves the boundedness condition and defines an action of $G_n$. The collection $\{\scr{D}(X_n,\C_p)\}$ together with the respective actions by $G_n$ is an instance of a ``$G_{\Q_p,\infty}$-module'' in the sense of Colmez in \cite{colmezsen}. \begin{rema}\label{fullgalois} In the special case $n=0$, the above recipe actually defines an action of the entire group $G$ on $\scr{D}(X_0,\C_p)$ if we agree that $\log_p(\chi(g))$ is to be interpreted as $\log_p(\chi(g)\tau(\chi(g))^{-1})$ (equivalently, if we agree that $\log_p$ is extended to $\Z_p^\times$ so that the $(p-1)^{st}$ roots of unity are sent to zero). \end{rema} Let us now turn to the $\Theta$ operator. For $\mu\in\scr{D}(X_n,\C_p)$, define a distribution $\Theta\mu$ by the formula $(\Theta\mu)(f) = \mu(f\cdot\vartheta)$. The operator $\Theta$ so defined preserves the space of bounded distributions and evidently commutes with the action of $G_n$. \begin{exem} Let us determine explicitly the action of $G_n$ on the Dirac distribution $\mu_\psi$ for $\psi$ a point of $\scr{W}^0$.
\begin{eqnarray*} (g\cdot \mu_\psi)(f) &=& g(\mu_{\psi}(f^{g^{-1}}\cdot \exp_p(\vartheta\log_p(\chi(g))))) \\ &=& g(f^{g^{-1}}(\psi)\exp_p(\vartheta(\psi)\log_p(\chi(g)))) \\ &=& f(g\circ\psi)\exp_p(\vartheta(g\circ\psi)\log_p(\chi(g))) \end{eqnarray*} so $$g\cdot\mu_\psi = \mu_{g\circ\psi}\cdot \exp_p(\vartheta(g\circ\psi)\log_p(\chi(g)))$$ In particular, consider the $\Q_p$-valued character $\psi(x)=x^k\tau(x)^{-k}\in \scr{W}^0_0$. By Remark \ref{fullgalois}, it makes sense to consider $g\cdot \mu_\psi$ for any $g\in G$, and the above computation shows that $$g\cdot \mu_\psi = \mu_\psi\cdot\exp_p(k\log_p(\chi(g)\tau(\chi(g))^{-1}))= \mu_\psi\cdot\chi(g)^k\tau(\chi(g))^{-k}$$ Thus the $\C_p$-subalgebra (= $\C_p$-submodule) of $\scr{D}(X_0,\C_p)$ generated by these Dirac distributions for all $k\in \Z$ is rather akin to the ring $B_{HT}$, but with all characters twisted by a Teichm\"{u}ller power to lie in $\scr{W}^0$. Alternatively, one may choose a nonzero $\alpha\in\C_p$ with the property that $g(\alpha)=\tau(\chi(g))\alpha$. Then we have (see Example \ref{diracconv}) \begin{eqnarray*} g\cdot(\alpha\mu_{x\tau(x)^{-1}})^k &=& g\cdot(\alpha^k\mu_{x^k\tau(x)^{-k}})\\ &=& g(\alpha^k)(g\cdot\mu_{x^k\tau(x)^{-k}})\\ &=& (\tau(\chi(g))^k\alpha^k) (\mu_{x^k\tau(x)^{-k}}\chi(g)^k\tau(\chi(g))^{-k})\\ &=& \chi(g)^k (\alpha^k\mu_{x^k\tau(x)^{-k}}) \end{eqnarray*} It follows that the $\C_p$-subalgebra (= $\C_p$-submodule) of $\scr{D}(X_0,\C_p)$ generated by the $\alpha^k\mu_{x^k\tau(x)^{-k}}$ for $k\in \Z$ is isomorphic to $B_{HT}$ as a $G$-module, so we have a non-canonical injection $B_{HT}\hookrightarrow \scr{D}(X_0,\C_p)$ corresponding to the choice of $\alpha$. This is reminiscent of the non-canonical injections of $B_{HT}$ into the rings considered in \cite{kisinppmf} and \cite{colmezsen}.
\end{exem} \section{Relation to $B_{\mathrm{Sen}}$} In \cite{colmezsen}, Colmez introduces the ring $B_{\mathrm{Sen}}$ defined as the collection of power series in $\C_p[\![T]\!]$ with positive radius of convergence. This ring has an ascending filtration $\{B_{\mathrm{Sen}}^n\}$ where $B_{\mathrm{Sen}}^n$ consists of power series of radius of convergence at least $p^{-n}$. Colmez defines a ``$G_{\Q_p,\infty}$'' action on this filtered ring, meaning a compatible collection of actions of $G_n$ on $B_{\mathrm{Sen}}^n$, by acting in the natural way on coefficients and setting $g\cdot T = T + \log_p(\chi(g))$. \begin{defi} Let $\mu\in \scr{D}(X_n,\C_p)$. The \emph{Fourier transform} of $\mu$ is the formal series $$\scr{F}_n(\mu) = \mu(\exp(T\vartheta) ) := \sum_k \frac{\mu(\vartheta^k)}{k!}T^k$$ \end{defi} Using the usual estimate for the $p$-divisibility of $k!$ we have $$\left|\frac{\mu(\vartheta^k)}{k!}T^k\right|\leq |\mu(\vartheta^k)| p^{-k(n-1/(p-1))}(p^n|T|)^k$$ If $|T|<p^{-n}$, then this tends to $0$ as $k\to\infty$ by Lemma \ref{chardist}, so the series defining $\scr{F}_n(\mu)$ has radius of convergence at least $p^{-n}$. Thus $\scr{F}_n$ is a $\C_p$-linear map $$\scr{F}_n:\scr{D}(X_n,\C_p)\longrightarrow B_{\mathrm{Sen}}^n$$ \begin{prop} The Fourier transform $\scr{F}_n$ is an injective ring homomorphism. \end{prop} \begin{proof} That $\scr{F}_n$ is injective follows from Corollary \ref{maincoro}. Observe that \begin{eqnarray*} \scr{F}_n(\mu*\nu) &=& (\mu*\nu)(\exp(T\vartheta)) \\ &=& \mu(\varphi\mapsto \nu(T_\varphi\exp(T\vartheta))) \\ &=& \mu(\varphi\mapsto \nu(\psi\mapsto\exp(T\vartheta(\varphi\psi)))) \\ &=& \mu(\varphi\mapsto \nu(\psi\mapsto\exp(T\vartheta(\varphi)) \exp(T\vartheta(\psi))\ )) \\ &=& \mu(\exp(T\vartheta))\nu(\exp(T\vartheta)) = \scr{F}_n(\mu)\scr{F}_n(\nu) \end{eqnarray*} Lastly, note that $$\scr{F}_n(\mu_{\mathbf{1}}) = \mu_{\mathbf{1}}(\exp(T\vartheta)) = \exp(T\vartheta(\mathbf{1}))=\exp(0)=1$$ Thus $\scr{F}_n$ is indeed a ring homomorphism.
\end{proof} It follows immediately from this that the rings $\scr{D}(X_n,K)$ are commutative integral domains. In fact, we can say rather more. Let $P(T)=\sum a_kT^k$ have positive radius of convergence. Then there exists a positive integer $m$ such that $|a_k|p^{-km}\longrightarrow 0$. Since $|k!|\leq 1$, we have $$|k!a_k|p^{-k(m+1-1/(p-1))} = |a_k|p^{-km}\,|k!|\,p^{-k(p-2)/(p-1)}\longrightarrow 0$$ so by Lemma \ref{chardist} there exists $\mu\in \scr{D}(X_{m+1},\C_p)$ with $\mu(\vartheta^k) = k!a_k$. Thus we have $\scr{F}_{m+1}(\mu) = P(T)$, and, although the individual $\scr{F}_n$ are not isomorphisms (it is not difficult to check that they are not individually surjective), the limit $$\scr{F}:\scr{D}(X_\infty,\C_p) =\varinjlim_n\scr{D}(X_n,\C_p) \stackrel{\sim}{\longrightarrow} B_{\mathrm{Sen}}$$ is an isomorphism of rings. We claim that the Fourier transform $\scr{F}_n$ intertwines the action of $G_n$ that we have defined on bounded distributions with the action defined by Colmez. The following calculations are somewhat formal, but can be made rigorous using power series expansions without difficulty. We have \begin{eqnarray*} \scr{F}_n(g\cdot \mu)(T) &=& (g\cdot\mu)(\exp(T\vartheta)) \\ &=& g(\mu(\exp(g^{-1}(T)\vartheta)\exp(\vartheta\log_p(\chi(g))))) \\&=& g(\mu( \exp((g^{-1}(T)+\log_p(\chi(g)))\vartheta))) \\ &=& g(\scr{F}_n(\mu)(g^{-1}(T)+\log_p(\chi(g)))) \end{eqnarray*} which is equal to $g$ applied to $\scr{F}_n(\mu)$ in the sense of Colmez, since $\log_p(\chi(g))\in \Q_p$ is Galois-invariant. In a similar manner, we can determine the way in which $\Theta$ interacts with the Fourier transform. $$\scr{F}_n(\Theta\mu) = (\Theta\mu)(\exp(T\vartheta)) = \mu(\vartheta\exp(T\vartheta)) = \frac{d}{dT}\scr{F}_n(\mu)$$ Note that this is the negative of the operator called $\Theta$ in \cite{colmezsen}.
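As a small worked example (our computation, directly from the definitions above), the Fourier transform of a Dirac distribution is an exponential:

```latex
\scr{F}_n(\mu_\psi)(T) \;=\; \mu_\psi(\exp(T\vartheta))
  \;=\; \sum_k \frac{\vartheta(\psi)^k}{k!}\,T^k
  \;=\; \exp(T\vartheta(\psi))
```

In particular $\scr{F}_n(\mu_\psi*\mu_{\psi'}) = \exp(T(\vartheta(\psi)+\vartheta(\psi'))) = \scr{F}_n(\mu_{\psi\psi'})$, consistent with Example \ref{diracconv} and with the multiplicativity of $\scr{F}_n$.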
Recall that for a $G_{K_\infty}$-module $M=\cup_n M_n$ in the sense of \cite{colmezsen} we define $M^{G_{K_\infty}} = \cup_n M_n^{G_{K_n}}$. \begin{prop}\ \begin{enumerate} \item $\scr{D}(X_\infty,\C_p)^{G_{\Q_{p,\infty}}} = \Q_{p,\infty}$ \item If $V$ is a $d$-dimensional $\C_p$-representation of $G$, then $$(V\otimes_{\C_p} \scr{D}(X_\infty,\C_p))^{G_{\Q_p,\infty}} := \varinjlim (V\otimes_{\C_p} \scr{D}(X_n,\C_p))^{G_n}$$ is isomorphic to $D_{\mathrm{Sen}}(V)$ as a $\Q_{p,\infty}$-vector space equipped with an endomorphism $\Theta$. In particular, it is $d$-dimensional over $\Q_{p,\infty}$. \end{enumerate} \end{prop} \begin{proof} We have already shown that we have a compatible sequence of $G_n$-equivariant injections $$\scr{F}_n:\scr{D}(X_n,\C_p)\longrightarrow B^n_{\mathrm{Sen}}$$ that induce an isomorphism in the injective limit. It follows easily that they induce an isomorphism on invariants, so the result follows from Th\'eor\`eme 2 of \cite{colmezsen}. \end{proof} \begin{rema} This result holds more generally with $\Q_p$ replaced by any finite extension $K/\Q_p$, as Th\'eor\`eme 2 of \cite{colmezsen} is proven in this generality. \end{rema}
\section{Introduction} A silhouette is a subset of the plane that was traditionally obtained by copying on paper the shadow projected on a wall by a person placed in front of a point light source\footnote{\url{https://en.wikipedia.org/wiki/Silhouette}}. In digital images, silhouettes of objects can be obtained by a mere luminance threshold (e.g. Otsu's algorithm \cite{ipol.2016.158}) as soon as the object is darker or brighter than its surroundings. Then the silhouette appears as one of the connected components of an upper or lower level set of the image. More generally, the study of shape promoted by Mathematical Morphology \cite{matheron1975random} calls any such connected component a 2D shape. We will call our object of study here a 2D shape or silhouette. Silhouettes are essential for the human perception of shapes, and the distribution of corners along their outlines is closely linked to neurological models of the visual system~\cite{attneave1954some}. The geometric features captured by the vectorization are important in feature identification~\cite{nadal1990complementary}, remote sensing~\cite{kirsanov2010contour}, and other applications~\cite{yang2001bezier,tombre2000vectorization,zou2001cartoon}. As proved in \cite{ambrosio2001connected}, if a closed subset of the plane has finite perimeter, then it can be described by its essential boundary, which is a countable set of Jordan curves with finite length. In image processing, upper level sets can be extracted by a mere thresholding, in which case they are a finite union of pixels, bounded by a finite number of Jordan curves made of vertical and horizontal segments. Using a parametric interpolation such as the bilinear one, the boundary of a level set can also be extracted as a union of pieces of hyperbolae \cite{cao2008theory}.
Following \cite{monasse2000scale}, an image can therefore be decomposed into a tree of connected shapes ordered by inclusion, and each of these shapes (or silhouettes) can be described by its raster or by its boundary, which is a finite number of Jordan curves described as polygons or concatenations of pieces of hyperbolae. Thus, there is a standard way to lead back image analysis to the analysis of 2D shapes, and eventually to the analysis of their outlines, described by a finite set of nested Jordan curves that also are level lines of the image. This is not the only way to extract shapes from images. Any segmentation algorithm divides an image into connected regions. For example, many vectorization software packages\footnote{See (e.g.) \url{https://en.wikipedia.org/wiki/Adobe_Illustrator} or Vector Magic} proceed by a mere color quantization which reduces the image to a piecewise constant image and therefore to a union of disjoint 2D shapes. The boundaries of these shapes can then be encoded in the Scalable Vector Graphics (SVG) format\footnote{\url{https://fr.wikipedia.org/wiki/Scalable_Vector_Graphics}}. A crucial point of such a vector representation is that it is scalable, and it is therefore used for all 2D shapes that, like logos or fonts, require printing at many sizes. Common methods for vectorizing a silhouette from its outline consist of two steps: identification of control points and approximation of the curves connecting the control points. In a founding work, Montanari~\cite{montanari1970note} introduced a polygonal approximation of the outlines of rasterized silhouettes. After the discrete boundary is traced, the sub-pixel locations of the polygonal vertices are determined by minimizing a global length energy with an $L^\infty$ loss to the initial outline. In more recent literature, B\'{e}zier curves have become widely adopted to replace polygonal lines~\cite{mortenson1999mathematics}.
Most developments on silhouette outline vectorization use piecewise B\'{e}zier curves, or B\'{e}zier polygons~\cite{ramer1972iterative,cinque1998shape,montero2012skeleton,pal2007cubic,yang2001bezier}. Ramer~\cite{ramer1972iterative} proposed an iterative splitting scheme for identifying a set of control points on a polygonal line $\mathcal{C}$ such that the B\'{e}zier polygon $\widehat{\mathcal{C}}$ defined by these vertices approximates $\mathcal{C}$. The Hausdorff distance between $\widehat{\mathcal{C}}$ and $\mathcal{C}$ is constrained to stay below a predefined threshold, but the number of control points is suboptimal. More recently, Sarfraz~\cite{sarfraz2010vectorizing} proposed an outline vectorization algorithm that splits the outline at corners identified without computing curvatures~\cite{chetverikov2003simple}; new control points are then introduced to improve the curve fitting. The control points produced by some of these works may correspond to curvature extrema of the outline, but this happens by algorithmic convergence rather than by design. It is well-known that the direct computation of curvature is not reliable~\cite{alvarez1994formalization}. The above mentioned methods reflect the challenges of estimating the outline's curvature on shapes extracted from raster images. In this paper, we propose a mathematically founded outline vectorization algorithm. It identifies (i) curvature extrema of the outline computed at sub-pixel level by (ii) backpropagating control points detected as curvature extrema at a coarser scale in the affine scale-space, then (iii) computing piecewise least-squares cubic B\'{e}zier curves joining these control points while fitting the initial outline with a predefined accuracy. The main contribution of this paper is to propose a new approach using sub-pixel curvature extrema and the affine scale-space for silhouette vectorization.
We shall illustrate by comparisons how the proposed method can give an accurate vectorization with a generally smaller number of more meaningful control points. We organize the paper as follows. In Section~\ref{sec_overview}, an overview of the proposed algorithm is presented. In Section~\ref{sec_1}, we review the level line extraction and sub-pixel curvature computation~\cite{ciomaga2017image}. In Section~\ref{sec_2}, we introduce the affine scale-space induced by the smooth bilinear outline and define the candidate control points. In Section~\ref{sec_3}, we describe an adaptive piecewise least-squares B\'{e}zier polygon fitting, where the set of candidate points is modified to achieve a compact representation and to guarantee a predefined accuracy. The overall algorithm is summarized in Section~\ref{sec_algOutline}. We include various numerical results and comparisons with other vectorization methods in Section~\ref{sec_5}, and conclude the paper in Section~\ref{sec_6}. \section{Overview of the Proposed Method}\label{sec_overview} On a rectangular domain $\Omega=[0,H]\times[0,W] \subset\mathbb{R}^2$ with $H>0$ and $W>0$, a \textit{silhouette} is a compact subset $\mathcal{S}\subset\Omega$ whose topological boundary $\partial \mathcal{S}$, the \textit{outline}, is a piecewise smooth curve. Suppose $\mathcal{S}$ is represented by a \textit{raster binary image} $I:\Omega\cap \mathbb{N}^2\to\{0,255\}$, that is, the set of black pixels \begin{align} \overline{\mathcal{S}}=\{(i,j)\in\Omega\cap\mathbb{N}^2~|~I(i,j)=0\}\label{eq_raster_sil} \end{align} approximates $\mathcal{S}$. We assume that $\mathcal{S}\cap\partial\Omega=\varnothing$. Our objective is to find a cubic B\'{e}zier polygon close to $\partial \mathcal{S}$ in the Hausdorff distance. The proposed silhouette vectorization method has three main steps: \begin{enumerate} \item Estimate the curvature extrema of $\partial \mathcal{S}$ at sub-pixel level across different scales.
\item Based on the affine scale-space induced by $\partial\mathcal{S}$, identify salient curvature extrema which are robust against pixelization and noise as the candidate control points. \item Fit the outline using a B\'{e}zier polygon whose candidate control points are adaptively modified to achieve a compact representation while guaranteeing a desired accuracy. \end{enumerate} In the following sections, we give the details of the proposed method. \begin{figure} \centering \includegraphics[scale=0.4]{Figures/flowchart.png} \caption{Flowchart of the proposed method. (a) A raster image of a cat's silhouette. (b) Zoom-in of (a). (c) Extracted bilinear outline of (a). (d) Inversely tracing the curvature extrema along the affine shortening flow. (e) Zoom-in of (d). (f) The vectorized outline of (a) with control points marked as red dots. (g) Zoom-in of (f). (h) Vectorized silhouette of (a) by the proposed method. (i) Zoom-in of (h). Notice the difference between the given image (a) and our result (h), as well as between the zoomed views (b) and (i). }\label{fig_flow} \end{figure} \section{Sub-pixel Curvature Extrema Localization} \label{sec_1} Following the work of~\cite{ciomaga2017image}, we consider the bilinear interpolation $u:\Omega\to[0,255]$ of the raster image $I$, which on each pixel cell takes the form \begin{align} u(x,y) = axy+bx+cy+d\;,\quad(x,y)\in\Omega\;, \end{align} where $a,b,c,d$ are scalar functions depending on $(\lfloor x\rfloor, \lfloor y\rfloor)$, and \begin{align} u(i+1/2,j+1/2) = I(i,j)\;,\quad (i,j)\in \Omega\cap\mathbb{N}^2\;. \end{align} Here $\lfloor r\rfloor$ is the floor function giving the greatest integer not exceeding the real number $r$. For any $\lambda\in(0,255)$, the level line of $u$ corresponding to $\lambda$ is defined as $\mathcal{C}_\lambda=\{(x,y)\in\Omega~|~u(x,y) = \lambda\}$.
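To make the evaluation of $u$ concrete, here is a minimal sketch (ours, not the authors' code) using the pixel-center convention $u(i+1/2,j+1/2)=I(i,j)$ above; the clamping at the image border is our simplification:

```python
import numpy as np

def bilinear(I, x, y):
    """Evaluate the bilinear interpolate u(x, y) of a raster image I,
    where pixel I[i, j] is the sample u(i + 1/2, j + 1/2)."""
    xs, ys = x - 0.5, y - 0.5  # shift pixel centers to integer coordinates
    i = min(max(int(np.floor(xs)), 0), I.shape[0] - 2)
    j = min(max(int(np.floor(ys)), 0), I.shape[1] - 2)
    dx, dy = xs - i, ys - j
    return ((1 - dx) * (1 - dy) * I[i, j] + dx * (1 - dy) * I[i + 1, j]
            + (1 - dx) * dy * I[i, j + 1] + dx * dy * I[i + 1, j + 1])
```

On each pixel cell this expression expands to the form $axy+bx+cy+d$ given above.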
Since $I$ is binary, the Hausdorff distance between any $\mathcal{C}_\lambda$ and the raster silhouette $\overline{\mathcal{S}}$~\eqref{eq_raster_sil} is bounded above by $\sqrt{2}$. Hence $\mathcal{C}_\lambda$ for an arbitrary level $\lambda\in(0,255)$ approximates the discrete outline as a Jordan curve that is piecewise $C^2$ except at finitely many points, e.g., saddle points~\cite{caselles2009geometric}. In the following, we focus on a single level line $\mathcal{C}_{\lambda^*}$ for some $\lambda^*\in(0,255)$ extracted by the level line extraction algorithm detailed in~\cite{caselles2010}. Specifically, each piece of $\mathcal{C}_{\lambda^*}$ is either a line segment (horizontal or vertical) or part of a hyperbola whose asymptotes are adjacent edges of a single pixel. See Figure~\ref{fig_flow}~(c). Due to pixelization, $\mathcal{C}_{\lambda^*}$ shows strong staircase effects~\cite{cao2003geometric}, and such oscillatory behavior is effectively reduced by the affine shortening flow~\cite{cao2003geometric,sapiro1993affine}. For any planar curve $\mathcal{C}$, we smooth it by solving the PDE \begin{align} \frac{\partial \mathcal{C}(s,t)}{\partial t} = \kappa^{1/3}(s,t)\mathbf{N}(s,t)\;,\quad\mathcal{C}(s,0) = \mathcal{C}(s)\;,\quad s\in [0,\text{Length}(\mathcal{C}(\cdot,t))]\label{eq_affine} \end{align} up to some short time $T>0$. Here each curve $\mathcal{C}(\cdot,t)$ is arc-length parametrized for any $t$, $\kappa$ denotes the curvature, and $\mathbf{N}$ is the inward normal at $\mathcal{C}(s,t)$. The flow~\eqref{eq_affine} is affine intrinsic, that is, its solution is invariant under affine transformations; hence it preserves the geometric properties of the original curve during the evolution. \begin{figure} \centering \begin{tabular}{cc} (a)&(b)\\ \includegraphics[scale=0.25]{Figures/sigma1}& \includegraphics[scale=0.25]{Figures/sigma2} \end{tabular} \caption{Illustration of the geometric scheme for affine shortening.
(a) A convex component of a discrete curve and a $\sigma$-chord (dashed line). (b) The result of discrete $\sigma$-affine erosion is a polygonal line (blue lines) whose vertices are middle points (red dots) of the $\sigma$-chords.}\label{fig_sigma_chord} \end{figure} To solve \eqref{eq_affine}, we apply the fully consistent geometric scheme \cite{moisan1998affine} which is independent of the grid discretization. The idea is that, by iterating the discrete affine erosion with a sufficiently small parameter, the resulting morphological operator becomes equivalent to the differential operator in~\eqref{eq_affine}. Given a discrete curve partitioned by its inflection points, the $\sigma$-affine erosion of each convex component is a polygonal line whose vertices are the middle points of $\sigma$-chords. See Figure~\ref{fig_sigma_chord}. A $\sigma$-chord is a segment joining two points on the curve such that the area enclosed by the segment and the curve is $\sigma$. After the erosion, we glue the evolved components at the inflection points, resample the resulting curve by arc-length, and iterate the procedure above to reach the desired scale $T$. For a sufficiently smooth convex curve, applying the $\sigma$-affine erosion is equivalent to solving~\eqref{eq_affine} up to time $\omega\sigma^{2/3}$ for some absolute constant $\omega>0$~\cite{moisan1998affine}. Denoting by $\Sigma_{\lambda^*}$ the smoothed bilinear outline obtained above, the curvature at any of its vertices can be computed without dependence on the grid discretization. The following discussion applies to each connected component. Suppose $\Sigma_{\lambda^*}=\{P_i(x_i,y_i)\}_{i=0}^N$ with $P_0=P_N$, oriented the same way as $\mathcal{C}_{\lambda^*}$.
Following~\cite{ciomaga2017image}, the discrete curvature at point $P_i$ is computed by \begin{align} \kappa(P_i) = \frac{-2\,\text{det}(P_iP_{i-1}~~ P_iP_{i+1})}{||P_{i-1}P_i||\,||P_{i}P_{i+1}||\,||P_{i-1}P_{i+1}||}\;,~i=0,1,2,\dots,N-1\label{eq_curvature} \end{align} where \begin{align} \text{det}(P_iP_{i-1}~~ P_iP_{i+1}):=\text{det}\begin{bmatrix} x_{i-1}-x_i&x_{i+1}-x_i\\ y_{i-1}-y_i&y_{i+1}-y_i \end{bmatrix}\;, \end{align} $P_{-1}=P_{N-1}$, and $||\cdot||$ denotes the Euclidean $2$-norm. This computes the discrete curvature of $\Sigma_{\lambda^*}$ at $P_i$ as the curvature of the circumcircle that passes through the three consecutive points $P_{i-1}$, $P_i$, and $P_{i+1}$. The discrete curvature values can be obtained at arbitrary resolution based on the sampling frequency applied to the bilinear outline $\mathcal{C}_{\lambda^*}$, hence the name ``curvature microscope''. To identify the curvature extrema, we process the data $\{\kappa(P_i)\}_{i=0}^{N-1}$ by repeatedly applying the filter $(1/18,4/18,8/18,4/18,1/18)$ with periodic boundary conditions $20$ times to suppress the noise. Based on the filtered data $\{\widetilde{\kappa}(P_i)\}_{i=0}^{N-1}$, $P_i$ is a curvature extremum if \begin{align} |\widetilde{\kappa}(P_i)|>|\widetilde{\kappa}(P_j)|\;,~\text{for}~j=i\pm 1, i\pm 2\;.~\label{eq_criterion} \end{align} In practice, to further stabilize the identification, we also require that a curvature extremum satisfy $|\widetilde{\kappa}(P_i)|>\delta$ for some small value $\delta>0$. In this paper, we take $\delta =0.001$. \begin{Rem} Our method is also applicable when the input is a raster gray-scale image where the intensity variation concentrates around the topological boundary of the underlying silhouette. The higher the image gradient across the silhouette's boundary, the more stable the position of the extracted outline with respect to the choice of levels.
\end{Rem} \section{Affine Scale-space Control Points Identification} \label{sec_2} The curvature extrema form a good set of control points, since they capture the geometrical changes in the outline. We propose to refine the control points via an affine scale-space approach, which is detailed in this section. \subsection{Backward Tracing via Inverse Affine Shortening Flow} The concept of scale-space, first introduced by Witkin~\cite{witkin1987scale}, provides a formalism for multiscale analysis of signals. Later developments~\cite{alvarez1992axiomes,babaud1986uniqueness,perona1990scale,koenderink1984structure} established the axiomatic properties for defining a scale-space. In~\cite{sapiro1993affine}, Sapiro et al. proved that the solution of the affine shortening flow~\eqref{eq_affine} defines an affine scale-space, where the scale is given by the time parameter, and the solution at any scale is affine invariant, i.e., it commutes with special affine transformations of the plane. A critical property satisfied by the affine scale-space is causality: no new information is created when passing from fine to coarse scales. In particular, the following is proved in \cite{sapiro1993affine}: \begin{Prop}\label{prop1} In the affine invariant scale-space of a planar curve, the number of vertices, that is, the extrema of Euclidean curvature, is a nonincreasing function of time. \end{Prop} More precisely, every curvature extremum on the curve at a coarser scale is the continuation of \textit{at least} one of the extrema at a finer scale. The lack of one-to-one correspondence is due to the possibility of multiple extrema (e.g. two maxima and one minimum) merging into a single one during the evolution. In this paper, we propose a new approach for defining the control points as the curvature extrema on $\Sigma_{\lambda^*}$ which persist across different scales in its affine scale-space.
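For concreteness, the circumcircle estimate \eqref{eq_curvature} and the extremum test \eqref{eq_criterion} of Section~\ref{sec_1}, which feed this scale-space analysis, can be sketched as follows (our sketch, not the authors' code; the filter and thresholds follow the text):

```python
import numpy as np

def discrete_curvature(P):
    """Signed curvature at each vertex of a closed polyline P (N x 2),
    i.e. the curvature of the circumcircle through three consecutive
    vertices, with the sign convention of the text."""
    prev, nxt = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0)
    a, b = prev - P, nxt - P
    det = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    d1, d2 = np.linalg.norm(a, axis=1), np.linalg.norm(b, axis=1)
    d3 = np.linalg.norm(nxt - prev, axis=1)
    return -2.0 * det / (d1 * d2 * d3)

def curvature_extrema(P, n_filter=20, delta=1e-3):
    """Smooth the curvature samples with the periodic filter
    (1,4,8,4,1)/18 and keep vertices whose |curvature| strictly
    dominates its four nearest neighbours and exceeds delta."""
    k = discrete_curvature(P)
    w = np.array([1.0, 4.0, 8.0, 4.0, 1.0]) / 18.0
    for _ in range(n_filter):
        k = sum(w[m] * np.roll(k, m - 2) for m in range(5))
    a = np.abs(k)
    keep = a > delta
    for off in (1, -1, 2, -2):
        keep &= a > np.roll(a, off)
    return np.flatnonzero(keep)
```

On a counterclockwise circle of radius $R$ the first function returns $1/R$ at every vertex; on an ellipse, the surviving extrema sit at the endpoints of the major axis.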
By inversely tracing curvature extrema from the coarser scales to the finer scales, the proposed control points are more robust to noise and help to capture prominent corners of the silhouette. Given a sequence of discrete times $t_0=0<t_1<\cdots<t_K$ for some positive integer $K$, we obtain the curve $\mathcal{C}(\cdot,t_{n})$ at scale $t_n$ by the affine shortening flow \eqref{eq_affine} for $n=0,1,\dots, K$. For any $1\leq n\leq K$, by a first-order Taylor expansion, the affine shortening flow~\eqref{eq_affine} is approximated as \begin{align} \frac{\mathcal{C}(s,t_{n})-\mathcal{C}(s,t_{n-1})}{t_{n}-t_{n-1}}=(\kappa^n(s))^{1/3}\mathbf{N}^n(s)+\mathbf{r}(s)\;,\label{eq_implicit1} \end{align} where $\kappa^n$ and $\mathbf{N}^n$ denote the curvature and normal at the scale $t_n$, and $\mathbf{r}$ is a remainder such that $||\mathbf{r}(s)|| = O(t_n-t_{n-1})$. Rearranging~\eqref{eq_implicit1} gives \begin{align} \mathcal{C}(s,t_{n-1})=\mathcal{C}(s,t_n)-(t_n-t_{n-1})(\kappa^n(s))^{1/3}\mathbf{N}^n(s)-(t_n-t_{n-1})\mathbf{r}(s)\;.\label{eq_implicit2} \end{align} This expression shows that, if $t_n-t_{n-1}$ is sufficiently small, by following the opposite direction of the affine shortening flow at $\mathcal{C}(s,t_n)$, that is, \begin{align} -\text{sign}(\kappa^n(s))\mathbf{N}^n(s)\;,\label{eq_inverse_direction} \end{align} we can find $\mathcal{C}(s,t_{n-1})$ nearby. Here $\text{sign}(r)$ denotes the sign function, which gives $+1$ if $r>0$, $-1$ if $r<0$, and $0$ if $r=0$. This gives a well-defined map from the curve at a coarser scale $t_n$ to a finer scale $t_{n-1}$ via the inverse affine shortening flow.
Starting from $K$, for any curvature extremum $X^K$ on $\mathcal{C}_K=\mathcal{C}(\cdot,t_K)$, we set up the following constrained optimization problem to find a curvature extremum $X^{K-1}$ on $\mathcal{C}_{K-1}$ at scale $t_{K-1}$: \begin{equation} \max_{X\in\mathcal{C}_{K-1}} \frac{\langle X-X^K, -\text{sign}(\kappa^K)\mathbf{N}^K\rangle}{||X-X^K||}\;, \;\; \text{s.t.}~ \begin{cases} \displaystyle{\frac{\langle X-X^K, -\text{sign}(\kappa^K)\mathbf{N}^K\rangle}{||X-X^K||}>\alpha} \\ ||X-X^K|| < D\\ X \text{~is a curvature extremum on~}\mathcal{C}_{K-1} \end{cases}\;,\label{eq_opt} \end{equation} where $D>0$ is a parameter that controls the closeness between $X$ and $X^K$, and $\alpha$ enforces that the direction of $X-X^K$ is similar to that of the inverse affine shortening flow. The problem~\eqref{eq_opt} looks for the curvature extremum on $\mathcal{C}_{K-1}$ in the $D$-neighborhood of $X^K$ whose direction from $X^K$ is the closest to that of the inverse affine shortening flow. When $D$ and $\alpha$ are properly chosen, if~\eqref{eq_opt} has one solution, we define it to be $X^{K-1}$. If~\eqref{eq_opt} has multiple solutions, we choose the one that has the shortest distance from $X^K$ to be $X^{K-1}$. In case there are multiple solutions having the same shortest distance from $X^K$, we can arbitrarily select one to be $X^{K-1}$. However, in practice, if~\eqref{eq_opt} has a solution, it is almost always unique. We repeat the optimization~\eqref{eq_opt} for $K-1$, $K-2$, etc. Either the solutions exist all the way down to the scale $t_0$, or there exists some $m\geq 1$ such that~\eqref{eq_opt} at $t_m$ does not have any solution. In the first case, we call $X^K$ a \textit{complete} point, and in the second case, we call it \textit{incomplete}.
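One backward-tracing step can be sketched as follows (ours, not the authors' code); `direction` is the inverse-flow direction $-\text{sign}(\kappa^K)\mathbf{N}^K$ at $X^K$, the values of $D$ and $\alpha$ are purely illustrative, and the shortest-distance tie-breaking rule is omitted for brevity:

```python
import numpy as np

def trace_back(XK, direction, extrema_finer, D=5.0, alpha=0.7):
    """Sketch of the matching problem (eq_opt): among the curvature
    extrema at the next finer scale, return the one inside the
    D-neighborhood of XK whose offset best aligns with `direction`,
    or None if no candidate meets the constraints (XK is then
    declared incomplete)."""
    best, best_cos = None, alpha
    for X in extrema_finer:
        v = X - XK
        dist = np.linalg.norm(v)
        if dist == 0.0 or dist >= D:
            continue
        cos = float(np.dot(v, direction)) / dist
        if cos > best_cos:
            best, best_cos = X, cos
    return best
```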
For each curvature extremum $X^K$ on $\mathcal{C}_K$, we construct a sequence of points $\mathcal{L}(X^K)$ that contains the solutions of~\eqref{eq_opt} for $K,K-1,K-2$, etc., starting at $X^K$ in scale-decreasing order. If $X^K$ is complete, then $\mathcal{L}(X^K)$ has exactly $K+1$ elements, and we call the sequence complete; otherwise, the size of $\mathcal{L}(X^K)$ is strictly smaller than $K+1$, and we call the sequence incomplete. We define the last elements of the complete sequences as the candidate control points, and denote them by $\{O_i(t_K)\}_{i=1}^{M(t_K)}$. This set of points is ordered following the orientation of $\Sigma_{\lambda^*}$. Here the parameter $t_K$ in the parentheses indicates that the candidate control points are associated with the curvature extrema identified at the scale $t_K$. When the scale $t_K$ is fixed, we simply write $\{O_i\}_{i=1}^{M}$. \subsection{Degenerate Case}\label{sec_degenerate} In the discussion above, if $M(t_K)=0$, i.e., if there are no candidate control points identified on $\Sigma_{\lambda^*}$ associated with the curvature extrema at scale $t_K$, then we call it a \textit{degenerate case}. This situation occurs when the underlying silhouette is a disk, or has a smoothly varying boundary, provided that the image has sufficiently high resolution. This paper does not consider open curves; for an open curve, the absence of curvature extrema would only mean that the curvature is monotone, hence that the curve is a spiral. If $\mathcal{S}$ is indeed a disk, the vectorization only requires information about its center and radius. We use the isoperimetric inequality to determine if $\Sigma_{\lambda^*}$ represents a circle: for any closed plane curve with enclosed area $A$ and perimeter $L$, we have \begin{align} 4\pi A \leq L^2\;,\label{eq_iso} \end{align} and the equality holds if and only if the curve is a circle. In practice, we decide that $\Sigma_{\lambda^*}$ is a circle only if the corresponding ratio satisfies $1-4\pi A/L^2<0.005$.
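The circle test can be sketched as follows (our sketch); the enclosed area comes from the shoelace formula and the threshold $0.005$ is the one used in the text:

```python
import numpy as np

def is_nearly_circular(P, tol=0.005):
    """Isoperimetric test (eq_iso) on a closed polyline P (N x 2):
    declare a circle when 1 - 4*pi*A/L^2 < tol, where A is the
    enclosed area (shoelace formula) and L is the perimeter."""
    Q = np.roll(P, -1, axis=0)
    A = 0.5 * abs(np.sum(P[:, 0] * Q[:, 1] - Q[:, 0] * P[:, 1]))
    L = np.sum(np.linalg.norm(Q - P, axis=1))
    return 1.0 - 4.0 * np.pi * A / L**2 < tol
```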
By this criterion, if $\Sigma_{\lambda^*}$ is classified as a circle, then its center and radius are easily computed from any three distinct points on $\Sigma_{\lambda^*}$. For numerical stability, we take three outline points that are equidistant from each other. When $1-4\pi A/L^2\geq 0.005$ for a degenerate case, we insert a pair of most distant points on $\Sigma_{\lambda^*}$ as the candidate control points. An efficient approach for finding these points is to combine a convex hull algorithm, e.g., the monotone chain method~\cite{andrew1979another}, which takes $O(N\log N)$ time, with the rotating calipers~\cite{preparata2012computational}, which takes $O(N)$ time. Here $N$ is the number of vertices of the polygonal line $\Sigma_{\lambda^*}$. \section{Adaptive Cubic B\'{e}zier Polygon Approximation} \label{sec_3} Starting from the control points identified by the affine scale-space, $H:=\{O_i\}_{i=1}^{M}$, we adjust $H$ by deleting non-salient sub-pixel curvature extrema and inserting new control points to guarantee a predefined accuracy. This adaptive approach yields a cubic B\'{e}zier polygon $\mathcal{B}(H)$ whose vertices are points in the updated $H$ and whose edges are cubic B\'{e}zier curves computed by least-squares fittings. \subsection{B\'{e}zier Fitting with Chord-length Parametrization }\label{sec_cubicfitting} A cubic B\'{e}zier curve is specified by four points $B_0,B_1,B_2$, and $B_3$. Its parametric form is \begin{align} B(s) = (1-s)^3B_0+3(1-s)^2sB_1+3(1-s)s^2B_2+s^3B_3\;,~s\in[0,1]\;.\label{eq_cubic} \end{align} Specifically, it has the following properties: (i) $B_0$ and $B_3$ are the two endpoints of $B(s)$; and (ii) $B_1-B_0$ is the right tangent of $B(s)$ at $B_0$, and $B_2-B_3$ is the left tangent at $B_3$.
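Evaluating \eqref{eq_cubic} is direct; a small sketch (ours) that also checks the endpoint property (i):

```python
import numpy as np

def bezier(B0, B1, B2, B3, s):
    """Evaluate the cubic Bezier curve (eq_cubic) at parameter s in [0, 1]."""
    s = np.asarray(s, dtype=float)[..., None]  # broadcast over 2D points
    return ((1 - s)**3 * B0 + 3 * (1 - s)**2 * s * B1
            + 3 * (1 - s) * s**2 * B2 + s**3 * B3)
```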
To approximate a polygonal line segment $\Sigma=\{P_0,P_1,\dots, P_N\}$, we find a cubic B\'{e}zier curve determined by $B_0=P_0$, $B_1$, $B_2$, and $B_3=P_N$ such that the squared fitting error \begin{align} \widetilde{S}= \sum_{i=0}^N\left(P_i-((1-\widetilde{s_i})^3B_0+3(1-\widetilde{s_i})^2\widetilde{s_i}B_1+3(1-\widetilde{s_i})\widetilde{s_i}^2B_2+\widetilde{s_i}^3B_3)\right)^2\;~\label{eq_min2} \end{align} is minimized. Here $\widetilde{s_i}=(\sum_{k=1}^{i}||P_{k}-P_{k-1}||)/(\sum_{k=1}^N||P_{k}-P_{k-1}||)$ is the chord-length parameter for $P_i$, $i=0,1,\dots,N$. We note that \eqref{eq_min2} is used to initialize an iterative algorithm in~\cite{plass1983curve} for a more accurate B\'{e}zier fitting. The benefit of this setup is that we have closed-form formulae for the minimizing $B_1$ and $B_2$ as follows: \begin{align} B_1=(A_2C_1-A_{12}C_2)/(A_1A_2-A_{12}^2)\;, \;\;\; B_2=(A_1C_2-A_{12}C_1)/(A_1A_2-A_{12}^2)\;,\label{eq_B2} \end{align} where \begin{equation*} A_1 = 9\sum_{i=1}^N\widetilde{s_i}^2(1-\widetilde{s_i})^4\;, \;\;\; A_2 = 9\sum_{i=1}^N\widetilde{s_i}^4(1-\widetilde{s_i})^2\;, \;\;\; A_{12}=9\sum_{i=1}^N\widetilde{s_i}^3(1-\widetilde{s_i})^3\;, \end{equation*} and \begin{equation*} C_1 = \sum_{i=1}^N3\widetilde{s_i}(1-\widetilde{s_i})^2[P_i-(1-\widetilde{s_i})^3P_0-\widetilde{s_i}^3P_N]\;, \;\;\; C_2 = \sum_{i=1}^N3\widetilde{s_i}^2(1-\widetilde{s_i})[P_i-(1-\widetilde{s_i})^3P_0-\widetilde{s_i}^3P_N]\;. \end{equation*} Hence we gain computational efficiency. A similar strategy is also taken in~\cite{montero2012skeleton} to find a cubic B\'{e}zier that smooths discrete outlines. \subsection{Control Point Update: Deletion of Sub-pixel Extrema}\label{sec_deletion} Recall that the candidate control points $H=\{O_i\}_{i=1}^{M}$ in Section~\ref{sec_2} are curvature extrema at sub-pixel level. Hence it is possible that some of them do not reflect salient corners of the silhouette.
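The closed-form fit \eqref{eq_B2}, which also supplies the tangent estimates used in this deletion step, can be sketched as follows (ours, not the authors' code; variable names differ from the text):

```python
import numpy as np

def fit_cubic_bezier(P):
    """Least-squares cubic Bezier fit of a polyline P ((N+1) x 2) with
    chord-length parametrization: the endpoints are interpolated and
    the inner control points B1, B2 solve the normal equations (eq_B2)."""
    P = np.asarray(P, dtype=float)
    chord = np.linalg.norm(np.diff(P, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(chord)]) / chord.sum()
    B0, B3 = P[0], P[-1]
    b1 = 3 * s * (1 - s)**2                      # Bernstein basis for B1
    b2 = 3 * s**2 * (1 - s)                      # Bernstein basis for B2
    R = P - np.outer((1 - s)**3, B0) - np.outer(s**3, B3)
    A1, A2, A12 = b1 @ b1, b2 @ b2, b1 @ b2
    C1, C2 = b1 @ R, b2 @ R
    den = A1 * A2 - A12**2
    return B0, (A2 * C1 - A12 * C2) / den, (A1 * C2 - A12 * C1) / den, B3
```

Fitting uniformly spaced collinear points recovers the degree-elevated straight line exactly.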
To remove spurious sub-pixel extrema from $H$, we propose to compare the left tangent and the right tangent at each candidate control point, which are obtained via the least-squares cubic B\'{e}zier fitting discussed above. We take advantage of the second property of cubic B\'{e}zier curves mentioned in Section~\ref{sec_cubicfitting}. For $i=1,\dots, M$, we fit a cubic B\'{e}zier to the polygonal line segment whose set of vertices is \begin{align} \{O_{i}=P_{j(i)},P_{j(i)+1},\dots, P_{j(i+1)}=O_{i+1}\}\;, \end{align} where we take $O_{M+1}=O_1$, and obtain the estimated defining points $B_{i,1}$ and $B_{i,2}$ for the B\'{e}zier curve. The left and right tangents at $O_i$ are computed as \begin{align} T_{i}^-= B_{i-1,2}-O_{i}\;,\;\;\;T_{i}^+= B_{i,1}-O_{i}\;,\label{eq_tangent} \end{align} respectively, where $B_{0,2} = B_{M,2}$. These tangent vectors are associated with all the points between neighboring candidate control points. Therefore, the angle formed by $T_{i}^-$ and $T_{i}^+$ measures the sharpness of $\Sigma_{\lambda^*}$ at $O_i$ from a more global perspective. We delete $O_i$ from the set of candidate control points $H$ if \begin{align} \frac{\langle T_i^+, T_i^-\rangle}{||T_i^+||\,||T_i^-||}+1<\varepsilon\;,\label{eq_tangent_cond} \end{align} for some small parameter $\varepsilon>0$, which is equivalent to the condition that the angle between $T_i^+$ and $T_i^-$ is close to $\pi$. The set $H$ is updated with the remaining control points. It is possible that all the candidate control points $\{O_i\}_{i=1}^{M}$ are removed after this procedure, and we then encounter a degenerate case. If the underlying outline is a circle, we compute the center and radius; if it is not, we take the most distant pair of outline points to update $H$.
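Both branches of this degenerate-case handling admit simple closed forms: the isoperimetric deficit $1-4\pi A/L^2$ classifies the outline as a circle, and the circumcircle through three equidistant outline points yields the center and radius. A minimal sketch (plain Python; the $0.005$ threshold follows the text, the function names are ours):

```python
import math

def is_circle(poly, tol=0.005):
    """Circle test 1 - 4*pi*A/L^2 < tol, with A the shoelace area of the
    closed polygon and L its perimeter."""
    area = perim = 0.0
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        area += x0 * y1 - x1 * y0              # shoelace term
        perim += math.hypot(x1 - x0, y1 - y0)
    return 1.0 - 4.0 * math.pi * (abs(area) / 2.0) / perim**2 < tol

def circumcircle(a, b, c):
    """Center and radius of the circle through three non-collinear points
    (perpendicular-bisector intersection in closed form)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    a2, b2, c2 = ax * ax + ay * ay, bx * bx + by * by, cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), a)

# a fine polygonal sampling of a circle passes the test ...
ngon = [(2 + 5 * math.cos(2 * math.pi * k / 100),
         -1 + 5 * math.sin(2 * math.pi * k / 100)) for k in range(100)]
# ... and three of its points recover the center (2, -1) and radius 5
center, radius = circumcircle(ngon[0], ngon[25], ngon[50])
```

If the test fails, as for an elongated shape, the farthest point pair (convex hull plus rotating calipers, as in the previous section) initializes $H$ instead.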
\subsection{Control Point Update: Insertion for Accuracy}\label{sec_insertion}

The candidate control points in $H$ split the outline $\Sigma_{\lambda^*}$ into polygonal line segments, each of which is approximated by a cubic B\'{e}zier using least-squares fitting as described in Section~\ref{sec_cubicfitting}. We obtain a B\'{e}zier polygon that approximates $\Sigma_{\lambda^*}$, denoted by $\mathcal{B}(H)$. A natural measure for the error of approximating $\Sigma_{\lambda^*}$ by the B\'{e}zier polygon $\mathcal{B}(H)$ is \begin{align} e = \max_{P_i\in \Sigma_{\lambda^*}}\text{dist}(P_i,\mathcal{B}(H))\;,\label{eq_error} \end{align} where $\text{dist}(P_i, \mathcal{B}(H))=\inf_{P\in\mathcal{B}(H)}||P_i-P||$ is the distance from $P_i$ to the curve $\mathcal{B}(H)$. We let the user specify the error threshold $\tau_e>0$. To guarantee that $e\leq \tau_e$, we apply the splitting strategy~\cite{ramer1972iterative}, which inserts $P_{\text{new}}\in\Sigma_{\lambda^*}$ into $H$ as a new control point if \begin{align} \text{dist} (P_{\text{new}},\mathcal{B}(H))>\tau_e\;,~\label{eq_condition} \end{align} and among those points on $\Sigma_{\lambda^*}$ satisfying~\eqref{eq_condition}, $\text{dist} (P_{\text{new}},\mathcal{B}(H))$ is the largest. After the insertion, we fit $\Sigma_{\lambda^*}$ using a B\'{e}zier polygon based on the new set of control points in $H$. If the error of the newly fitted B\'{e}zier polygon is still greater than $\tau_e$, we insert another point based on the same criterion. This series of insertions terminates once the condition $e\leq \tau_e$ is met. Finally, $\mathcal{B}(H)$ with the updated set of control points $H$ gives a B\'{e}zier polygon that approximates the outline $\partial \mathcal{S}$, and with its interior filled with black, we obtain the vectorized silhouette for $\mathcal{S}$ from the raster image $I$.
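The error~\eqref{eq_error} has no closed form, but it can be estimated by sampling each B\'{e}zier edge densely and taking nearest-sample distances; a sketch under that assumption (plain Python; the sampling density is our choice, not part of the method):

```python
def bezier_point(s, b0, b1, b2, b3):
    """Evaluate a cubic Bezier edge of the polygon at parameter s."""
    u = 1.0 - s
    return tuple(u**3*p + 3*u**2*s*q + 3*u*s**2*r + s**3*t
                 for p, q, r, t in zip(b0, b1, b2, b3))

def approx_error(outline, bezier_edges, samples=512):
    """Estimate e = max_i dist(P_i, B(H)) by densely sampling every cubic
    edge of the Bezier polygon; each edge is a 4-tuple of control points."""
    cloud = [bezier_point(k / samples, *edge)
             for edge in bezier_edges for k in range(samples + 1)]
    def dist2(p, q):
        return (p[0] - q[0])**2 + (p[1] - q[1])**2
    # max over outline points of the distance to the nearest curve sample
    return max(min(dist2(p, q) for q in cloud) for p in outline) ** 0.5
```

In the splitting loop, the outline point realizing this maximum (when it exceeds $\tau_e$) is the candidate $P_{\text{new}}$ to insert into $H$.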
\begin{Rem} For a further reduction of the size of $H$, we may consider an optional step that merges neighboring B\'{e}zier cubics if the union of the underlying polygonal line segments can be approximated by a single B\'{e}zier cubic via~\eqref{eq_min2} with an error below $\tau_e$. We can regard the insertion in Section~\ref{sec_insertion} as controlling the data fidelity, and the simplification described here as minimizing the complexity of an estimator. Alternately iterating these procedures provides a numerical scheme for the constrained optimization problem \begin{equation*} \min_{H\subseteq\Sigma_{\lambda^*}}|H|\;,\quad\text{s.t.}~\max_{P_i\in \Sigma_{\lambda^*}}\text{dist}(P_i,\mathcal{B}(H))\leq \tau_e\;, \end{equation*} where $|H|$ denotes the number of elements in $H$. For any $\tau_e\geq 0$, this problem always has a solution, yet its uniqueness largely depends on the geometric structure of $\Sigma_{\lambda^*}$. \end{Rem}

\section{Pseudo-code for the Proposed Method}\label{sec_algOutline}

In this section, assuming that the outline of the given silhouette has only one connected component, we summarize the proposed algorithm for silhouette vectorization in three steps, plus an optional merging step: \begin{enumerate} \item \textit{Extraction of the smooth sub-pixel outline $\Sigma_{\lambda^*}$.} Extract the level line corresponding to $\lambda^*$ from the bilinear interpolation of the image, then discretize it as a polygon with uniform sub-pixel sampling. To reduce the staircase effects, smooth the polygon via affine shortening at a scale specified by the smoothness parameter $\sigma_0$. In this paper, we take $\lambda^*=127.5$. \item \textit{Identification of candidate control points.} Fix an increment $\Delta\sigma>0$ and a positive integer $K>0$. Evolve $\Sigma_{\lambda^*}$ via the affine shortening using the scales $\sigma^*=k\Delta \sigma$, $k=1,2,\dots,K$.
Starting from each curvature extremum at scale $K\Delta\sigma$, trace the curvature extrema at smaller scales along the inverse affine shortening flow as described by~\eqref{eq_opt}. By doing so, each curvature extremum at scale $K\Delta \sigma$ induces a sequence of traced curvature extrema across different scales, which are arranged in a scale-decreasing order. The final elements of the complete sequences are defined as the candidate control points, denoted by $H=\{O_i\}_{i=1}^{M}$. In this paper, we fix $\Delta\sigma=0.5$ and $K =4$, so that $\sigma^*=2$. In case $M=0$, process the degenerate case as described in Section~\ref{sec_degenerate}. \item \textit{Refinement of the control points.} Remove any candidate control point $O_i$ from $H$ whose left tangent and right tangent~\eqref{eq_tangent} form an angle close to $\pi$~\eqref{eq_tangent_cond}. If all the candidate control points are removed, follow the instructions in Section~\ref{sec_degenerate} to address the degenerate case. Then, insert new control points into $H$ by the splitting strategy~\cite{ramer1972iterative} until the approximation error $e$~\eqref{eq_error} is bounded by a user-specified threshold $\tau_e>0$. \item \textit{(Optional) Merging of neighboring B\'{e}zier cubics.} For $i=1,2,\dots,M$, delete $O_i$ from $H$ if the polygonal line segment bounded by the left and right neighboring control points of $O_i$ in $H$ can be fitted by a single B\'{e}zier cubic with error smaller than $\tau_e$. \end{enumerate} As for the output, if $\Sigma_{\lambda^*}$ is not a circle, we write the points in $H$ together with the estimated defining points~\eqref{eq_B2} for each segment into an SVG file with the specification for drawing cubic B\'{e}zier curves. For visualization purposes, if a fitted B\'{e}zier curve has a maximal absolute curvature smaller than $0.001$, we draw a straight line instead. Note that this value $0.001$ is consistent with the value for $\delta$ in Section~\ref{sec_1}.
If $\Sigma_{\lambda^*}$ is a circle, we write its estimated center and radius into the SVG with the specification for drawing a circle. The pseudo-codes are presented in Algorithm~\ref{iterative} (with sub-procedures described in Algorithm~\ref{degenerate} and Algorithm~\ref{merge}), which can be parallelized to apply to outlines with multiple connected components.
\begin{algorithm}
\KwIn{ $\Sigma^0_{\lambda^*}$: a polygonal Jordan curve sampled from the level line of the bilinearly interpolated image $u$ corresponding to the level $\lambda^*$. $\tau_e$: approximation error threshold. $\sigma_0$: smoothness parameter. Fixed parameters: $\Delta\sigma = 0.5$, $K = 4$.}
\KwOut{Cubic B\'{e}zier polygon $\mathcal{B}(H)$ specified by the vertex set $H$, or a perfect circle.}
Apply the affine shortening to smooth $\Sigma_{\lambda^*}^0$ up to scale $\sigma_0$, which yields the sub-pixel smooth outline $\Sigma_{\lambda^*}=\{P_i\}_{i=0}^N$.
\For{$k=1,2,\dots, K$}{Evolve $\Sigma_{\lambda^*}$ up to scale $k\Delta\sigma$ via the affine shortening, denoted by $\Sigma_{\lambda^*}^k=\{P_i^k\}_{i=0}^{N^k}$. Compute curvature $\kappa_i^k$ of $\Sigma_{\lambda^*}^k$ at $P_i^k$ according to~\eqref{eq_curvature}. Denoise the data $\{\kappa_i^k\}_{i=1}^{N^k}$ by moving average, based on which curvature extrema $\{X_i^{k}\}_{i=1}^{S^k}$ are located by~\eqref{eq_criterion}.}
Initialize $H=\varnothing$.
\If{$S^K\geq 1$}{ \For{$i=1,\dots, S^K$}{ Set $X_i^{(K)}=X_{i}^K$. \For{$k=K,K-1,\dots,1$}{ Solve the problem~\eqref{eq_opt} associated with $X_i^{(k)}$. \If{\eqref{eq_opt} has a solution}{ Denote the solution by $X_i^{(k-1)}$. \If{$k=1$}{Insert $X_i^{(0)}$ into $H$.} } \Else{ \textbf{break} } } } }
\Else{ Run Algorithm~\ref{degenerate}. }
\For{$i=1,\dots, M$}{ Fit the line segment $\{O_i=P_{j(i)},P_{j(i)+1},\dots, P_{j(i+1)}=O_{i+1}\}\subset\Sigma_{\lambda^*}$ using the least-squares cubic B\'{e}zier fit~\eqref{eq_min2}.
Obtain the right tangent $T_i^+$ at $O_i$ and the left tangent $T_{i+1}^-$ at $O_{i+1}$ according to~\eqref{eq_tangent}. }
\For{$i=1,\dots, M$}{ \If{Condition~\eqref{eq_tangent_cond} holds}{ Remove $O_i$ from $H$.} }
\If{$H=\varnothing$}{ Run Algorithm~\ref{degenerate}. }
Compute the approximation error $e$ of the B\'{e}zier polygon $\mathcal{B}(H)$ via~\eqref{eq_error}.
\While{$e>\tau_e$}{ Insert into $H$ a point $P_{\text{new}}\in\Sigma_{\lambda^*}$ furthest from $\mathcal{B}(H)$ that satisfies~\eqref{eq_condition}. Recompute the error $e$ of $\mathcal{B}(H)$. }
\textit{(Optional)} Run Algorithm~\ref{merge} to further shrink the size of $H$.
\caption{{\bf Shape Vectorization by Affine Scale-space} \label{iterative}}
\end{algorithm}
\begin{algorithm}
\If{$\Sigma_{\lambda^*}$ is a circle}{ Take three equidistant points on $\Sigma_{\lambda^*}$ to compute the center $O$ and radius $r$. \Return $O, r$. }
\Else{ Find the most distant pair of points on $\Sigma_{\lambda^*}$: $O_1$, $O_2$. Set $H=\{O_1,O_2\}$. }
\caption{{\bf Sub-procedure for the Degenerate Case} \label{degenerate}}
\end{algorithm}
\begin{algorithm}
Suppose $H=\{O_i\}_{i=1}^M$. Define $M'=M$, $P^-=O_{M'}$, $P^0= O_{1}$, and $P^+= O_{2}$.
\For{$i=1,\dots,M$}{ Fit the polygonal line segment bounded by $P^-$ and $P^+$ using the least-squares cubic B\'{e}zier fit~\eqref{eq_min2}, and denote the fitting error by $e$. \If{$e<\tau_e$}{ $P^-\gets O_{i}$, $P^0\gets O_{\text{mod}(i+1,M')}$, $P^+\gets O_{\text{mod}(i+2,M')}$. } \Else{ $M'\gets M'-1$. $P^-\gets O_{\text{mod}(i-1,M')}$, $P^0\gets O_{\text{mod}(i+1,M')}$, $P^+\gets O_{\text{mod}(i+2,M')}$. } }
\caption{{\bf Simple B\'{e}zier Cubic Merging} \label{merge}}
\end{algorithm}

\section{Numerical Results}\label{sec_5}

In this section, we present numerical experiments to demonstrate the performance of our proposed algorithm. After obtaining the SVGs from~\cite{SVG}, we rasterize them as PNG images, which are used as inputs in the following experiments.
The inputs are either binary or gray-scale. We choose the level line corresponding to $\lambda^*=127.5$ to approximate the outlines throughout the experiments. By default, we set the error threshold $\tau_e = 1$, so that the vectorized outline is guaranteed to have sub-pixel accuracy, and the smoothness parameter $\sigma_0=1$. For the parameters in~\eqref{eq_opt}, we fix $D=10$ and $\alpha = 0.9$. The silhouettes used in the following experiments are collectively displayed in Table~\ref{tab_dataset}. Unless otherwise specified, we apply the proposed method without the optional merging step.

\subsection*{General Performance}

We present some results of our proposed algorithm in Figure~\ref{fig_example}. In (a), we have a silhouette of a cat. It has a single outline curve which contains multiple sharp corners on the tail, near the neck, and around the paws. These features provide informative visual cues for silhouette recognition, and our algorithm identifies them as control points for the silhouette vectorization, shown as red dots in (b). The outline of the butterfly in (c) has multiple connected components. In addition to the control points corresponding to corners, we observe in (d) some others on smooth segments of the outline. They are inserted during the refinement step of our algorithm, where a single B\'{e}zier cubic is inadequate to guarantee the accuracy specified by the error threshold $\tau_e=1$. In (e), we show a tessellation of words, and (f) presents the vectorized result. The input is a PNG image of dimension $1934\times 1332$ and takes $346$ KB of storage. In contrast, its silhouette vectorization, saved as an SVG file, has in total $2683$ control points and takes $68$ KB if the coordinates are stored as floats, and $36$ KB if they are stored as integers. In this example, our algorithm provides a compression ratio of about $80.35\%$ for the float type, and of $89.60\%$ for the integer type.
Moreover, the total computational time for this case is only $0.83$ seconds. Similar statistics for the other two examples are summarized in Table~\ref{tab}. Our algorithm is both effective and efficient.
\begin{figure} \centering \begin{tabular}{cc} (a)&(b)\\ \includegraphics[scale=1]{Figures/cat.png}& \includegraphics[scale=0.24]{Figures/cat_vec}\\ (c)&(d)\\ \includegraphics[scale=0.5]{Figures/butterfly.png}& \includegraphics[scale=0.27]{Figures/butterfly_vec}\\ (e)&(f)\\ \includegraphics[scale=0.41]{Figures/text.png}& \includegraphics[scale=0.175]{Figures/text_vec}\\ (g)&(h)\\ \includegraphics[scale=0.15]{Figures/Yexample.png}& \includegraphics[scale=0.15]{Figures/Yexample_vec} \end{tabular} \caption{Examples of results of the proposed algorithm for silhouette vectorization. (a) Cat and (b) its vectorized outline ($42$ control points). (c) Butterfly and (d) its vectorized outline ($158$ control points). (e) Text design and (f) its vectorized outline ($2683$ control points). Each red dot signifies the location of a control point. (g) Two letters excerpted from (e), scaled up by the same factor. (h) Zoom-in of the vectorization (f) on the two letters corresponding to (g). }\label{fig_example} \end{figure}
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Shape& Image Dim. & Size& Result Size (float) & Result Size (int) & Proc. Time\\\hline Cat (a) & $700\times537$ &$5$ KB&$2$ KB ($60\%$)&$1$ KB ($80\%$)&$0.10$ Sec.\\ Butterfly (c) & $732\times 596$ &$178$ KB &$5$ KB ($97.19\%$)&$3$ KB ($98.31\%$)&$0.15$ Sec.\\ Text (e) & $1934\times1332$ &$346$ KB&$68$ KB ($80.35\%$)&$36$ KB ($89.60\%$)&$0.83$ Sec.\\\hline \end{tabular} \caption{Performance statistics of the proposed algorithm applied to examples in Figure~\ref{fig_example}.
The float-type result stores the control point coordinates as floats, and the int-type result stores them as integers.}~\label{tab} \end{table}

\subsection*{Degenerate Cases}

An important feature of our algorithm is that it offers flexibility in the face of degenerate cases, where the silhouette does not have identifiable curvature extrema on its outline. A disk, as shown in Figure~\ref{fig_degenerate} (a), is the most common example. Once our algorithm classifies the outline as a circle, instead of fitting B\'{e}zier cubics, it directly estimates the center and radius of the circle and instructs the SVG output to draw a perfect circle. See Figure~\ref{fig_degenerate} (b). Figure~\ref{fig_degenerate} (c) shows another degenerate case. It consists of a rectangle in the middle and two half disks attached on its opposite sides, whose diameters are equal to the height of the rectangle. This particular silhouette has no strict curvature extrema on its outline. By computation, its area is $A=172644$ and its perimeter is $L=1742.07$; since $1-4\pi A/L^2 = 1-4\pi \times 172644/ (1742.07)^2=0.2851\geq 0.005$, the outline is not classified as a circle. Hence the algorithm inserts a pair of most distant points on the outline, the left-most and right-most points in this case, and conducts the B\'{e}zier fitting routine as in the non-degenerate cases. The design of this special procedure for degenerate cases is important for two reasons. First, it makes the algorithm adaptive to image resolutions. If we reduce the resolution of (c) from $774\times 320$ to $144\times 58$, whose magnified version is shown in (e), then, due to strong pixellization effects, all the control points are identified as local curvature extrema. (f) shows the magnified vectorization of the low resolution image. Second, it improves the compression ratio. Fitting a circle with a B\'{e}zier polygon requires at least two cubic pieces, hence we need to store the coordinates of at least $6$ points.
With our algorithm, only the coordinates of the center and the value of the radius are required, which saves the storage of $9$ float or int values. Figure~\ref{fig_degenerate} (g) shows a mixture of degenerate and non-degenerate outline curves. The vectorization in (h) shows that the circles are represented as perfect circles, and the others are represented as B\'{e}zier polygons.
\begin{figure} \centering \begin{tabular}{cc} (a)&(b)\\ \includegraphics[scale=0.5]{Figures/disk}& \includegraphics[scale=0.28]{Figures/disk_vec}\\ (c)&(d)\\ \includegraphics[scale=0.5]{Figures/elongate}& \includegraphics[scale=0.28]{Figures/elongate_vec}\\ (e)&(f)\\ \includegraphics[scale=0.8]{Figures/elongate_small}& \includegraphics[scale=0.7]{Figures/elongate_small_vec}\\ (g)&(h)\\ \includegraphics[scale=0.8]{Figures/taiji}& \includegraphics[scale=0.42]{Figures/taiji_vec} \end{tabular} \caption{Degenerate cases. In (a) and (c), no candidate control points were identified. Our algorithm handles such situations by checking whether the outline is a circle. If it is, e.g., (a), the center and radius are computed and a circle is drawn without B\'{e}zier fitting; hence, there is no control point (red dots) on the vectorized outline (b). The blue dot indicates the center of the circle. If it is not a circle, e.g., (c), a pair of most distant points is inserted to initiate the B\'{e}zier fitting, as in (d). (e) shows the low resolution version of (c) and (f) displays its vectorization. When the resolution is reduced, all the control points are identified as curvature extrema. In (g), three of the outline curves are identified as circles and the others are fitted by B\'{e}zier polygons. (h) shows the vectorized result.}\label{fig_degenerate} \end{figure}

\subsection*{Importance of the Control Point Update}

Figure~\ref{fig_refinement} compares the vectorization using control points before and after the control point update described in Sections~\ref{sec_deletion} and \ref{sec_insertion}.
The outline of the knot silhouette in (a) shows curvature variations at multiple scales. If no refinement is applied, as shown in (b), the corners of larger scale are captured, while some inflection points are missed on the top-left component. Moreover, many unnecessary control points appear on several arcs. In contrast, with the refinement, the result in (c) has fewer control points but a better approximation accuracy. Notice that on the top-left component of the knot, the newly inserted control points are close to the inflection points, and on the arcs, only $1$ or $2$ control points are generally needed. By refinement, we remove extrema that do not represent salient corners and insert new control points to meet the accuracy requirement. Although the total number of control points may or may not decrease after refinement, the distribution of the refined control points correlates better with the geometric features of the outline, and the vectorized result is improved.
\begin{figure} \centering \begin{tabular}{ccc} (a)&(b)&(c)\\ \includegraphics[scale=0.425]{Figures/knot}& \includegraphics[scale=0.26]{Figures/knot_no_refine_106}& \includegraphics[scale=0.26]{Figures/knot_refine_86} \end{tabular} \caption{Importance of refinement. (a) A silhouette of a knot. (b) Result without refinement ($106$ control points). (c) Result with refinement ($86$ control points). Compared to (b), (c) has fewer control points distributed on the smooth curve segments, and some new control points are introduced to enhance accuracy.}\label{fig_refinement} \end{figure}

\subsection*{Effect of the Error Threshold $\tau_e$}

The error threshold $\tau_e$ controls the accuracy of the B\'{e}zier polygon approximating the outline. When the value of $\tau_e$ is reduced, the user requires higher accuracy of the B\'{e}zier fitting. Since any B\'{e}zier cubic contains at most one inflection point, a single cubic only allows a limited amount of variation.
Hence, by adding more control points to split the outline into shorter segments, the specified accuracy is achieved. To better illustrate the effect of varying the threshold $\tau_e$, we compute the relative change (in percentage) of the number of control points at a threshold $\tau_e>0.5$ with respect to that at the threshold $0.5$: \begin{align} \rho(\tau_e) = \frac{\# C(\tau_e)-\#C(0.5)}{\# C(0.5)}\times 100\%\;,\quad \tau_e>0.5\;.\label{eq_percentage} \end{align} Here $\#C(\tau_e)$ denotes the number of control points at threshold $\tau_e$. Figure~\ref{fig_threshold2} (a) shows the averages and the standard deviations of~\eqref{eq_percentage} when we apply the proposed method to the $20$ silhouettes in our data set. We observe that for $\tau_e<1$, the effect of increasing $\tau_e$ is the strongest: the number of control points decreases exponentially. On average, the percentage curves show inflection points around $\tau_e=1$, that is, when the fitted B\'{e}zier polygon is within a distance of 1 pixel of the sub-pixel outline. Past $\tau_e=1$, increasing $\tau_e$ has less impact on the number of control points. For even larger values of $\tau_e$, there is almost no need to insert new control points, and the remaining control points are closely related to the corners of the outline. This is supported by the regression in Figure~\ref{fig_threshold2} (b), where each point represents a silhouette in our data set. It shows a positive relation between the number of corners computed by the Harris-Stephens corner detector~\cite{harris1988combined} and the number of control points when $\tau_e=10.0$, which is relatively large. With large values of $\tau_e\gg 1$, the silhouette representation is more compact yet less accurate. With small values of $\tau_e<1$, we obtain a more accurate yet less compact representation. From this point of view, we recommend $\tau_e=1$.
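The percentage~\eqref{eq_percentage} is a plain relative change; for concreteness (plain Python, function name ours):

```python
def rho(c_tau, c_half):
    """Relative change (in %) of the number of control points at threshold
    tau_e versus the reference threshold 0.5, as in eq. (eq_percentage)."""
    return (c_tau - c_half) / c_half * 100.0
```

Negative values thus indicate a reduction of the number of control points relative to the $\tau_e=0.5$ baseline; e.g., `rho(80, 100)` evaluates to `-20.0`.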
\begin{figure} \centering \begin{tabular}{cc} (a) & (b)\\ \includegraphics[scale=0.2]{Figures/threshold_plot}& \includegraphics[scale=0.2]{Figures/corners} \end{tabular} \caption{(a) For the $20$ silhouettes in our data set (Table~\ref{tab_dataset}), the solid curve shows the average relative reduction of the number of control points $\rho(\tau_e)$~\eqref{eq_percentage}, and the dashed curves indicate the standard deviations. (b) The positive relation between the number of control points at the relatively large threshold $\tau_e=10.0$ and the number of corners of a silhouette. Each dot represents a sample in our data set. The red curve is computed by linear regression with a goodness of fit $R^2=0.75592$.}\label{fig_threshold2} \end{figure}

\subsection*{Effect of the Smoothness Parameter $\sigma_0$}

The smoothness parameter $\sigma_0$ adjusts the regularity of the smooth bilinear outline which approximates $\partial\mathcal{S}$. With larger values of $\sigma_0$, oscillatory features of the given outline are suppressed, while with smaller values of $\sigma_0$, the vectorized silhouette preserves sharp corners. Figure~\ref{fig_smooth} demonstrates this effect of $\sigma_0$. We apply the proposed method using $\sigma_0=2.0$, $1.0$, and $0.5$ on the silhouette of a tree~(a), and the zoom-ins of the vectorization results within the boxed region of (a) are presented in (b), (c), and (d), respectively. Observe that the zig-zag feature around the tree's silhouette is better preserved by reducing $\sigma_0$. As a trade-off, this introduces more control points to recover the sharpness of the outline.
\begin{figure} \centering \begin{tabular}{cccc} (a)&(b)&(c)&(d)\\ \includegraphics[scale=0.2]{Figures/tree}& \includegraphics[scale=0.24]{Figures/tree_1.0_2.0}& \includegraphics[scale=0.24]{Figures/tree_1.0_1.0}& \includegraphics[scale=0.24]{Figures/tree_1.0_0.5} \end{tabular} \caption{Effect of the smoothing parameter $\sigma_0$. (a) A silhouette of a tree where the boxed region is examined in detail.
Vectorization using (b) $\sigma_0=2.0$ ($362$ control points), (c) $\sigma_0=1.0$ ($448$ control points), and (d) $\sigma_0=0.5$ ($500$ control points). With smaller values of $\sigma_0$, the vectorized outline is sharper, and the number of control points increases.}\label{fig_smooth} \end{figure}

\subsection*{Stability Under Affine Transformations}

We qualitatively explore the geometric stability of the proposed control points under affine transformations. Figure~\ref{fig_affine}~(a) shows a silhouette of a cat. (b) is a rotation of (a), and (c) is a sheared version of (a). The vectorized results of these silhouettes are presented in (d), (e), and (f), respectively. To better compare the distributions of the control points on these vectorized outlines, we apply the corresponding inverse affine transformations to (d)--(f) and show the results in (g)--(i). The numbers of control points are similar: (g) has $52$ control points ($38$ before refinement), (h) has $53$ control points ($37$ before refinement), and (i) has $56$ control points ($41$ before refinement). The distributions of control points in (g) and (h) are almost identical, while the locations of the control points in (i) are slightly shifted, especially those on the tail. This is because B\'{e}zier fitting is not affine invariant, and these shifted points are inserted to guarantee the accuracy of approximating the transformed outline by a B\'{e}zier polygon.
\begin{figure} \centering \begin{tabular}{ccc} (a)&(b)&(c)\\ \includegraphics[scale=0.3]{Figures/cat2_1}& \includegraphics[scale=0.3]{Figures/cat2_2}& \includegraphics[scale=0.3]{Figures/cat2_3}\\ (d)&(e)&(f)\\ \includegraphics[scale=0.3]{Figures/cat1_vec}& \includegraphics[scale=0.3]{Figures/cat2_vec}& \includegraphics[scale=0.3]{Figures/cat3_vec}\\ (g)&(h)&(i)\\ \includegraphics[scale=0.3]{Figures/cat1_vec_inv}& \includegraphics[scale=0.3]{Figures/cat2_vec_inv}& \includegraphics[scale=0.3]{Figures/cat3_vec_inv}\\ \end{tabular} \caption{Stability of control points under affine transformations. (a) Silhouette of a cat. (b) Rotation of (a). (c) Shear of (a). (d) Vectorized outline of (a). (e) Vectorized outline of (b). (f) Vectorized outline of (c). (g) is identical to (d), shown for comparison with (h), the inverse affine transform of (e), and (i), the inverse affine transform of (f). The control points in (g)--(i) are similarly distributed along the outline. }\label{fig_affine} \end{figure}

\subsection*{Qualitative Comparison with Feature Point Detectors}

Our algorithm produces a set of informative point features of the outline. These include the control points, which separate the outline curves into segments for cubic B\'{e}zier fitting, and the centers of circles. In Figure~\ref{fig_detector}, we compare the distribution of these points with the results of some extensively applied feature point detectors: the Harris-Stephens corner detector~\cite{harris1988combined}, the Features from Accelerated Segment Test (FAST) detector~\cite{rosten2005fusing}, the Speeded Up Robust Features (SURF) detector~\cite{bay2006surf}, and the Scale-Invariant Feature Transform (SIFT)~\cite{lowe1999object}. The Harris-Stephens corner detector is a local auto-correlation based method. It locally filters the image with spatial difference operators and identifies corners based on the response.
In (a), the Harris-Stephens corner detector identifies all the corners except for the one on the right side of the label. The set of control points produced by our algorithm contains all the corners found by the Harris-Stephens detector plus the missed one. The FAST detector only considers the local configurations of pixel intensities, hence it is widely applied in real-time applications. From (b), we see that FAST identifies the same prominent corners as our method does. Similarly to (a), there are no FAST points identified around the balloon. On the circular outline at the center, however, FAST detects multiple false corners; this illustrates that our algorithm is more robust against pixellization. The SURF detector combines a fast Hessian measure computed via integral images and the distribution of local Haar-wavelet responses to identify feature points that are scale- and rotation-invariant. It is similar to our method in that it utilizes a scale-space (Gaussian, in its case) and scale-space interpolation to localize the points of interest. The SURF points are marked over scales, hence most of the green crosses in (c) form sequences converging toward the outline. These limit points correspond exactly to our control points (red dots) distributed over the outline, including those around the balloon. Moreover, there is a SURF point at the center of the circular hole in the label, which overlaps with our identified center of circle (blue dot). Rather than showing feature points over scales, our method locates them directly on the original outline. In~(c), notice that our identified points are much sparser than the SURF points. SIFT detects scale-invariant features of a given image.
As shown in (d), SIFT successfully indicates the presence of corners and marks the centers of the balloon as well as the label, which are visually robust features of the silhouette. Our method focuses on the outline instead of the interior points and provides the exact locations of salient boundary points. Around the balloon, the symmetric distribution of our control points is compatible with the SIFT point at the center. The set of control points plus the centers of circles produced by our algorithm is comparable to the output of frequently used feature point detectors in the literature. Hence, in addition to being an effective silhouette vectorization method, the identified control points can be used in other applications where feature point detectors are needed.
\begin{figure} \centering \begin{tabular}{cccc} (a)&(b) & (c) &(d)\\ \includegraphics[width=1.5in]{Figures/Harris.png}& \includegraphics[width=1.5in]{Figures/FAST.png}& \includegraphics[width=1.5in]{Figures/SURF.png}& \includegraphics[width=1.5in]{Figures/SIFT.png} \end{tabular} \caption{Comparison between the control points (red dots) plus the centers of circles (blue dots) produced by the proposed algorithm and other point feature detectors (green crosses). (a) Compared with the Harris corner detector~\cite{harris1988combined}. (b) Compared with the FAST feature detector~\cite{rosten2005fusing}. (c) Compared with the SURF detector~\cite{bay2006surf}. (d) Compared with the SIFT detector~\cite{lowe1999object}.}\label{fig_detector} \end{figure}

\subsection*{Quantitative Comparison with Feature Point Detectors}

To further justify that our method can be applied as a stable point feature detector for silhouettes, we compare the techniques discussed above with ours by quantitatively evaluating their performances via the repeatability ratio~\cite{schmid2000evaluation}. It measures the geometric stability of the detected feature points under various transformations.
In particular, for each method, given any angle $\alpha$, $0^\circ<\alpha<360^\circ$, we rotate the silhouettes in the first column of Figure~\ref{fig_quant_eval} with respect to their centers by $\alpha$, record the detected feature points, apply the inverse transform to these points by rotating them by $-\alpha$, then compare their positions with the feature points detected on the original silhouette. Let $n_{\text{repeat}}=0$. For each back-rotated feature point, if we find at least one feature point on the original silhouette within its $\epsilon$-neighborhood, we increase $n_{\text{repeat}}$ by $1$. The $\epsilon$-repeatability ratio is computed by \begin{align} \frac{n_{\text{repeat}}}{\min\{n_0,n_\text{transform}\}}\;, \end{align} where $n_0$ denotes the number of feature points detected on the original silhouette, and $n_\text{transform}$ is the number of feature points detected on the transformed one. A value that stays near $1$ as the angle (or scale) changes indicates that the method is invariant under rotation (or scaling). We fix $\epsilon=1.5$. The second column of Figure~\ref{fig_quant_eval} shows the repeatability ratios under rotations. The set of feature points produced by our method has superior stability when the silhouette is rotated by arbitrary angles. In contrast, the other detectors have low repeatability ratios, especially when the silhouette is turned almost upside-down. Moreover, our method performs consistently well for silhouettes with different geometric features. The house silhouette has straight outlines and sharp corners; the butterfly silhouette is defined by smooth curves; and the fish silhouette has prominent curvature extrema which are not perfect corners. For the third column of Figure~\ref{fig_quant_eval}, we compute the repeatability ratios when the transformation is replaced by scaling.
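The rotation-repeatability computation described above can be sketched in a few lines (a minimal NumPy sketch; the function name and argument layout are ours, not taken from any of the compared implementations):

```python
import numpy as np

def repeatability_ratio(pts_orig, pts_rot, angle_deg, center, eps=1.5):
    """Rotate the points detected on the rotated silhouette back by
    -angle_deg about `center`, then count how many land within an
    eps-neighborhood of a point detected on the original silhouette."""
    theta = np.deg2rad(-angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    back = (np.asarray(pts_rot, float) - center) @ R.T + center
    pts_orig = np.asarray(pts_orig, float)
    # n_repeat: back-rotated points with a match within eps on the original
    n_repeat = sum(
        np.linalg.norm(pts_orig - p, axis=1).min() <= eps for p in back
    )
    return n_repeat / min(len(pts_orig), len(pts_rot))
```

For the scaling case, the back-rotation is replaced by division by the scale factor; the matching and normalization steps are identical.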
Observe that our method is comparable with other detectors, and it is the most consistent one across these different silhouettes. \begin{figure} \centering \begin{tabular}{c|c|c} Silhouette& Under Rotation& Under Scaling\\\hline \includegraphics[scale=0.12]{Figures/house_texample.png}& \includegraphics[scale=0.2]{Figures/house_R.png}& \includegraphics[scale=0.2]{Figures/house_S.png}\\ \includegraphics[scale=0.15]{Figures/butterfly_texample.png}& \includegraphics[scale=0.2]{Figures/butterfly_R.png}& \includegraphics[scale=0.2]{Figures/butterfly_S.png}\\ \includegraphics[scale=0.15]{Figures/fish_texample.png}& \includegraphics[scale=0.2]{Figures/Rotation.png}& \includegraphics[scale=0.2]{Figures/Scale.png}\\\hline \end{tabular} \caption{Repeatability ratios of the methods in comparison when the silhouettes in the first column are rotated or scaled. Notice that the blue lines (proposed method) are near $1$. The performance of our method is the most consistent across these silhouettes with different geometric features.} \label{fig_quant_eval} \end{figure} \subsection*{Comparison with State-of-the-art Software} Many software tools are available for image vectorization, e.g., Vector Magic~\cite{VM}, Inkscape~\cite{IS}, and Adobe Illustrator 2020 (AI)~\cite{AI}. In the following set of experiments, we compare our method with these tools using the number of control points generated for given silhouettes as a criterion. This quantity is equal to the number of curve segments used for approximating the outline, and a smaller value indicates a more compact silhouette representation. To perform this comparison, after acquiring SVG files of various silhouettes, we rasterized them and used the PNG images as inputs. Table~\ref{tab_software} summarizes the results. For Vector Magic, we test three available settings for the vectorization quality: high, medium, and low. For AI, we choose the setting ``Black and White Logo'', as it is suitable for the style of our inputs.
We also include the results when the automatic simplification is used, which are marked by daggers. For Inkscape, we use the default parameter settings as recommended. As shown by the values of the mean relative reduction in the number of control points in the last row, our method produces the most compact vectorization results. Despite this substantial reduction in the number of control points, our method does not over-simplify the representation. We show a detailed comparison in Figure~\ref{fig_detail_fish} between our proposed method and AI. In particular, we use AI without simplification and our method with two sets of parameters: $\sigma_0=1$, $\tau_e=1$ and $\sigma_0=0.1$, $\tau_e=0.5$. We note that $\sigma_0$ specifies the smoothness of the recovered outline, and $\tau_e$ controls the accuracy. Notice that under these settings, our method uses fewer control points, yet our results preserve more details of the given silhouettes, for example, the strokes on the scales at the bottom, and the sharp outlines on the rear fin. We quantify the performance of AI and our method by comparing the given image $I$ and the image $I'$ rasterized from the vectorization result. Denote $S_0=\{(x,y)\in\Omega\cap\mathbb{N}^2\mid I(x,y)<127.5\}$ and $S_r=\{(x,y)\in\Omega\cap\mathbb{N}^2\mid I'(x,y)<127.5\}$ as the interior pixels of the given silhouette and the reconstructed one. We evaluate the similarity between $S_0$ and $S_r$ by the Dice similarity coefficient (DSC)~\cite{sorensen1948method} \begin{align} \text{DSC} = \frac{2|S_0\cap S_r|}{|S_0|+|S_r|}\;. \end{align} Higher values of DSC $(0\leq \text{DSC}\leq 1)$ imply a better matching between the two silhouettes. We evaluate the performance with wide ranges of parameters for both AI and the proposed method. For AI, we test various combinations of the curve simplification parameter $\mu$ ($0\%$--$100\%$) and the corner point angle threshold $\gamma$ ($0^\circ$--$180^\circ$).
For our method, we use different combinations of $\tau_e$ and $\sigma_0$. Roughly speaking, $\mu$ in AI corresponds to $\tau_e$ in ours, which controls the approximation accuracy, and $\gamma$ in AI corresponds to $\sigma_0$ in ours, which adjusts the smoothness of the vectorized outline. Figure~\ref{fig_DSC_Bpn} plots the number of control points against the corresponding DSC values for various parameter settings in both methods. In (a), we fix the sharpness requirement, i.e., fixed $\gamma=150^\circ$ (default value for the automatic simplification used in AI) and fixed $\sigma_0=1$, and vary $\mu$ for AI (the blue curve) and $\tau_e$ for ours (the red curve). On the blue curve, larger dots correspond to smaller values of $\mu$; on the red curve, larger dots correspond to larger values of $\tau_e$. Moving from left to right along both curves indicates more accurate outline approximations. Since the red curve stays below the blue one, compared to AI, our method produces fewer control points while achieving the same level of DSC values. In (b), we present the results of AI using simplification specified by a set of combinations of parameters ($\gamma=0^\circ,10^\circ,\dots,180^\circ$, $\mu=0\%,10\%,\dots,100\%$). They are organized such that each blue curve corresponds to a fixed value of $\gamma$; higher curves (lighter shades of blue) correspond to larger values of $\gamma$, while moving from left to right (smaller sizes of dots) along each of the curves corresponds to decreasing $\mu$. The red curve shows our results using different values of $\tau_e$ when the merging is applied and $\sigma_0$ is fixed at $0.5$. From left to right, the value of $\tau_e$ decreases. Observe that the red curve gives a close lower bound for the blue curves when DSC$<0.93$. For higher requirements on the accuracy (DSC$>0.93$), our method shows superior efficiency: it uses a comparatively small number of control points to reach higher DSC values.
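The DSC evaluation used above can be sketched as follows (a minimal NumPy implementation; the threshold 127.5 matches the definition of $S_0$ and $S_r$ given in the text, and the function name is ours):

```python
import numpy as np

def dice_similarity(I, I_rec, thresh=127.5):
    """Dice similarity coefficient between the interior pixel sets of a
    silhouette image I and its rasterized vectorization I_rec.
    Interior pixels are those with intensity below `thresh`."""
    S0 = np.asarray(I) < thresh       # interior of the given silhouette
    Sr = np.asarray(I_rec) < thresh   # interior of the reconstruction
    return 2.0 * np.logical_and(S0, Sr).sum() / (S0.sum() + Sr.sum())
```

Identical silhouettes give DSC $=1$, and the value decreases toward $0$ as the interiors diverge.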
In contrast, for AI, the best DSC value it can achieve is around $0.95$, and adding more control points does not offer any improvement. \begin{table} \centering \begin{tabular}{c|ccccc} &\multicolumn{5}{c}{Number of Control Points ($\# C$)}\\\hline Test Image & Original& VM& IS & AI & Proposed \\\hline {\small \includegraphics[height=3.5em]{Figures/panel.png} } &$405$& $248/256/245$& $330$& $280~(193^\dagger)$ & $168$\\\hline {\small \includegraphics[height=3.5em]{Figures/dog.png} }&$611$& $359/343/325$& $383$& $340~(293^\dagger)$ & $222$\\\hline {\small \includegraphics[height=3.5em]{Figures/heart.png} }&$682$& $296/294/263$& $272$& $211~(128^\dagger)$ & $120$\\\hline {\small \includegraphics[height=3.5em]{Figures/map.png} }&$1434$& $915/828/715$& $932$& $698~(462^\dagger)$ & $379$\\\hline {\small \includegraphics[height=3.5em]{Figures/circle.png} }&$4434$&$2789/2582/2370$& $3292$& $2120~(1431^\dagger)$ & $1407$\\\hline {\small \includegraphics[height=3.5em]{Figures/fish.png} }&$6664$& $5470/5218/4955$& $6493$& $4870~(3441^\dagger)$ & $2810$\\\hline MRR& --- &$37.97\%/40.55\%/45.01\%$&$29.88\%$ &$45.79\%~(61.58\%^\dagger)$&$67.38\%$\\\hline \end{tabular} \caption{Comparison with image vectorization software in terms of the number of control points. We compared with Vector Magic (VM), Inkscape (IS), and Adobe Illustrator 2020 (AI). For VM, we report the number of control points using three optional settings: High/Medium/Low. For AI, the values with dagger$^\dagger$ indicate the numbers of control points produced by the automatic simplification. The input image dimensions are $581\times 564$, $625\times 598$, $400\times 390$, $903\times 499$, $515\times 529$, and $1356\times 716$ from top to bottom.
We also report the mean relative reduction (MRR) of the number of control points.}\label{tab_software} \end{table} \begin{figure} \centering \includegraphics[scale=0.32]{Figures/fish_detail.png} \caption{Comparison among the given raster image (red boxes), AI (orange boxes), the proposed with $\sigma_0=1$, $\tau_e=1$ (green boxes), and the proposed with $\sigma_0=0.1$, $\tau_e=0.5$ (blue boxes). With smaller numbers of control points ($\# C$), our method better preserves the geometric details of the given silhouette.}\label{fig_detail_fish} \end{figure} \begin{figure} \centering \begin{tabular}{cc} (a) & (b)\\ \includegraphics[scale=0.22]{Figures/fixed_sharp.png}& \includegraphics[scale=0.22]{Figures/AI_simp.png} \end{tabular} \caption{ (a) Comparison between AI ($\gamma=150^\circ$) and the proposed method ($\sigma_0=1$) when the complexity parameters ($\mu$ for AI, $\tau_e$ for ours) vary. The circled dot corresponds to our default setting. (b) Comparison between AI with simplification specified by various combinations of $\mu$ and $\gamma$, and the proposed method using merging with fixed $\sigma_0=0.5$ and varying $\tau_e$. In both figures, smaller dots indicate higher levels of complexity for AI ($\mu$) and the proposed method ($\tau_e$), respectively.}\label{fig_DSC_Bpn} \end{figure} \section{Conclusion}\label{sec_6} In this paper, we proposed an efficient and effective algorithm for silhouette vectorization. The outline of the silhouette is interpolated bilinearly and uniformly sampled at sub-pixel level. To reduce the oscillation due to pixelization, we applied the affine shortening to the bilinear outline. By tracing the curvature extrema across different scales along the well-defined inverse affine shortening flow, we identified a set of candidate control points. This set is then refined by deleting sub-pixel extrema that do not reflect salient corners, and inserting new points to guarantee any user-specified accuracy.
We also designed special procedures to address the degenerate cases, such as disks, so that our algorithm adapts to arbitrary resolutions and offers better compression of information. Our method provides a superior compression ratio by vectorizing the outlines. When the given silhouette undergoes affine transformations, the distribution of control points generated by our method remains relatively stable. These properties are quantitatively justified by the repeatability ratio when compared with popular feature point detectors. Our method is competitive compared to some well-established image vectorization software in terms of producing results that have fewer control points while achieving high accuracy.
\section{Introduction} The accreting millisecond pulsar \sax{} was the first X-ray binary discovered to pulsate in the millisecond range \citep{wijnands98}. It is a transient compact binary with a $\sim$2.01~hr orbital period \citep{chakra98} and an outburst recurrence time of approximately 2.5 years. \sax{} has been seen in outburst six times since 1996, the most recent being in September-October 2008. The presence of pulsations implies the accreting gas is channeled onto the magnetic poles producing hot spots and accretion shocks. The rotation of the neutron star modulates this emission producing pulsations that add to the unpulsed accretion disk and Comptonized emission \citep[see for example][]{poutanen03}. A useful tool for examining \sax{} is the iron K-shell fluorescence emission line in its X-ray spectrum, similar to those observed in many other X-ray binaries and AGNs \citep[see][for comprehensive reviews]{miller07,fabian00}. It is generally thought that the shape of the iron-K emission line formed in the inner region of accretion disks around black holes (both stellar-mass and in AGN) and neutron stars is sensitive to the distance of the line forming region from the compact object -- as this region gets closer to the compact object, relativistic Doppler effects and gravitational redshifts become stronger, producing an asymmetric profile \citep{fabian89}. The iron K emission line can therefore provide an important way to examine the inner accretion disk. While this line has long been used to study the inner disk around black holes \citep[e.g.,][]{miller07}, only recently has it been shown that the lines in neutron star low-mass X-ray binaries also display the characteristic asymmetric shape \citep{bhatta07,cackett08}. The detection of an iron line in an accreting millisecond pulsar is useful for constraining the magnetospheric radius and hence the strength of the magnetic field in this neutron star.
During the Sept/Oct 2008 outburst, \sax{} was observed by both \suz{} and \xmm{}, approximately 1 day apart. Initial brief reports of the Fe K line seen by both telescopes were given by \citet{cackett08atel,papittoatel08}. Further analysis of the \xmm{} data has been presented by \citet{papitto08}. Here, we present a joint analysis of both \suz{} and \xmm{} data that yields the tightest constraints on the Fe K emission line present in the spectra. \section{Data Reduction and Analysis} The light curve of the Sept/Oct 2008 outburst is given in Fig.~\ref{fig:lc}. The daily average count rates are taken from the {\it Swift}/BAT Hard X-ray Transient Monitor in the 15-50 keV energy range. The times when the \suz{} and \xmm{} observations occurred are also indicated. Below we describe the data reduction for both \suz{} and \xmm{} before detailing our spectral analysis. \subsection{\suz{} data} \suz{} observed \sax{} starting at 2008 October 2, 16:32 UT, for approximately 42.5 ks (ObsID 903003010). The XIS was operated in 1/4 window mode, with a 1-sec burst option, and the telescope was pointed at the nominal XIS position. The XIS detectors, which cover the energy range from approximately 0.5-10 keV, were operated in both 3x3 and 5x5 edit modes during the observation. For each working XIS unit (0, 1 \& 3), we extracted spectra from the cleaned event files using \verb|xselect|. For the front-illuminated detectors (XIS 0 and 3), the extraction region used was a box of size 270\arcsec{} by 330\arcsec{} centered on the source. We chose a box rather than a circular extraction region because in 1/4 window mode the usable region of the detector is a thin 256 by 1024 pixel strip (approximately 270\arcsec{} by 1080\arcsec). The background was extracted from a 250\arcsec{} by 230\arcsec{} box situated towards the end of the 1/4 window so as to be free from source photons.
The response files were generated using the \verb|xisresp| script which uses the \verb|xisrmfgen| and \verb|xissimarfgen| tools. \verb|xissimarfgen| is used with no binning of the response file and simulating 200000 photons. We then added the spectra (and averaged the responses) from the two front-illuminated detectors (XIS 0 and 3) using the \verb|addascaspec| tool, as recommended by the \suz{} XIS team. The summed good exposure time of the XIS 0 + 3 spectrum is 42.4 ks (this takes into account the 1-sec burst option which leads to a 50\% livetime fraction). In this Letter, we chose not to use data from the back-illuminated XIS 1 detector. As the back-illumination increases the effective area at soft energies, this detector is more prone to pile-up. We found that pile-up was present in the XIS 1 observation (but not significantly in the other detectors). We tested using an annulus source extraction region that avoids the piled-up central region of the source, but found that when we did this the signal-to-noise ratio was significantly decreased, especially through the Fe K band (6.4--6.97 keV) of interest here. In addition to the XIS spectra, we also extracted the spectrum from the PIN camera which is part of the Hard X-ray Detector (HXD). We extracted the PIN spectrum from the cleaned event file following the standard analysis threads on the \suz{} website. The PIN non-X-ray background was extracted from the observation-specific model provided by the instrument team, and was combined with the standard model for the cosmic X-ray background. \begin{figure} \includegraphics[width=8cm]{f1.eps} \caption{{\it Swift}/BAT 15--50 keV light curve of the September-October 2008 outburst of \sax. The dashed and solid lines mark the mid-points of the \xmm{} and \suz{} observations respectively.} \label{fig:lc} \end{figure} \subsection{\xmm{} data} \xmm{} observed \sax{} starting on 2008 September 30, 23:53 UT, for a total of $\sim$63 ks (ObsID 0560180601).
Here we present data from the PN detector, which was operated in timing mode during the observation to prevent pile-up. Processing of the ODF failed using \verb|epproc|, therefore we used the PN event file provided by the \xmm{} team. We checked for background flares, and found that the first $\sim$5 ks were affected by a flare, therefore we excluded that section of the data from the rest of the analysis. We extracted the spectrum using \verb|xmmselect|. In timing mode the detector is continuously read out in the Y direction, leading to a bright streak on the detector. Thus, the source extraction region was chosen to be of width 24 pixels centered on the source and with the length covering the entire readout streak. The background spectrum was extracted from a strip at the edge of the observed window. The good exposure time of the source spectrum was 43.4 ks. The response files were generated using the \verb|arfgen| and \verb|rmfgen| tools. \subsection{Spectral Fitting} We fit the spectra using \verb|XSPEC| version 11 \citep{arnaud96}. First, we fit both the \suz{} and \xmm{} spectra separately. For the \suz{} data we fit the XIS spectrum over the 0.7 - 10 keV energy range, and the PIN spectrum over the 14-45 keV range. For the continuum model, we used an absorbed multicolor disk-blackbody plus a single-temperature blackbody and an additional power-law \citep[e.g.][]{lin07,cackett08}. All components are required by the data and statistically improve the fit. For the absorption we used the \verb|phabs| model. In the \suz{}/XIS spectrum we found that there is a detector feature at approximately 1.85 keV. We attribute this to a change in the instrument response that is not yet corrected for by the response files \citep[similar features have been seen in other 1/4 window mode observations, e.g.,][]{miller08,cackett08}. We therefore model the feature with a Gaussian absorption line so that it does not affect the continuum fit.
We allowed a constant between the \suz/XIS and \suz/PIN spectra to account for any offset in flux calibration, finding $c = 1.37 \pm 0.05$. Such an offset between XIS and PIN flux can occur in 1/4 window mode because small extraction regions must be used. This is a known problem detailed in the `Calibration Issues' on the \suz{} website. Initially, we fit the continuum ignoring the iron line region $4-7$ keV. When examining this region after having fit the continuum, an iron line was clearly present, thus we added the \verb|diskline| \citep{fabian89} model to fit the line and re-fit all the parameters. The energy of the line center was constrained to be within the Fe K band range, 6.4 to 6.97 keV. We left the power-law disk emissivity profile, $\beta$, and inner accretion disk radius, $R_{in}$, as free parameters, while the outer accretion disk radius was fixed at a large value (1000 GM/c$^2$). The inclination, $i$, of the disk was also a free parameter, yet was constrained to lie within 36 to 67 degrees -- the possible range of inclinations determined by optical observations of \sax{} \citep{deloye08}. The best-fitting parameters (continuum \& line) are given in Tab.~\ref{tab:model}. \begin{deluxetable}{lcc} \tablecolumns{3} \tablewidth{0pc} \tablecaption{Spectral parameters from separate fitting} \tablecomments{All uncertainties are at the 90\% confidence level. 
The \suz{} spectra are fit over the range 0.7-45 keV, and the \xmm{} spectra are fit from 1.2-11 keV.} \tablehead{Parameter & Suzaku & XMM} \startdata $N_H$ ($10^{21}$ cm$^{-2}$) & $ 0.46\pm0.06$ & $2.3\pm0.1$ \\ Disk $T_{in}$ (keV) & $ 0.48 \pm 0.01$ & $0.23 \pm 0.01$ \\ Disk normalization & $590 \pm 51$ & $20400^{+21800}_{-8000}$ \\ Blackbody, $kT$ (keV) & $1.00\pm0.04$ & $0.40 \pm0.01$ \\ Blackbody normalization ($10^{-3}$) & $1.27\pm0.02$ & $1.25\pm0.06$ \\ Power-law index & $1.93\pm0.02$ & $2.08 \pm 0.01$ \\ Power-law normalization & $0.20\pm0.01$ & $0.33 \pm 0.01 $ \\ $E_{line}$ (keV) & $6.40^{+0.06}$ & $6.40^{+0.06}$\\ $\beta$ & $-2.9 \pm 0.3$ & $-3.0\pm0.2$\\ $i$ $(^{\circ})$ & $50^{+11}_{-4}$ & $59\pm4$\\ $R_{in}$ (GM/c$^2$) & $12.7^{+11.3}_{-2.0}$ & $13.0\pm3.8$\\ EW (eV) & $134\pm30$ & $118\pm10$ \\ $\chi^2_\nu (\nu)$ & 1.17 (2550) & 1.09 (1950) \label{tab:model} \enddata \end{deluxetable} \begin{deluxetable}{lc} \tablecolumns{2} \tablewidth{0pc} \tablecaption{Iron line parameters from joint fitting} \tablecomments{All uncertainties are at the 90\% confidence level} \tablehead{Parameter & Suzaku \& XMM} \startdata $E_{line}$ (keV) & $6.40^{+0.03}$ \\ $\beta$ & $-3.05 \pm 0.21$ \\ $i$ $(^{\circ})$ & $55^{+8}_{-4}$ \\ $R_{in}$ (GM/c$^2$) & $13.2\pm2.5$ \\ EW, Suzaku (eV) & $144^{+14}_{-31}$ \\ EW, XMM (eV) & $113\pm10$ \\ $\chi^2_\nu (\nu)$ & 1.14 (4502) \enddata \label{tab:line} \end{deluxetable} We used the same method when fitting the \xmm/PN spectrum. When examining the PN spectrum we found significant residuals below 1.2 keV, therefore, we only fit the spectrum over the 1.2-11 keV energy range. As with the \suz{} data, a clear Fe K line was present in the data, which we again fitted using the \verb|diskline| model. The best-fitting parameters (continuum \& line) are given in Tab.~\ref{tab:model}, and the Fe K line profiles are shown in Fig.~\ref{fig:line}. The continuum shapes from \suz{} and \xmm{} are different. 
This is not unexpected given that the observations were non-simultaneous, and the flux (see Fig. \ref{fig:lc}) had changed between the observations. However, the Fe K line properties are similar (all the parameters are consistent within the uncertainties), thus, we chose to fit the \suz{} and \xmm{} spectra simultaneously to place tighter constraints on the inner disk radius. In this joint fit, we allowed the continuum parameters to be completely independent, yet tied all the line parameters (except the normalization) between the two datasets. Parameters from this joint fit are given in Tab.~\ref{tab:line}. We find an inner accretion disk radius of $R_{in} = 13.2 \pm 2.5$~GM/c$^2$. The inclination of the disk is measured to be $i = 55^{+8}_{-4}$ degrees. We show the broadband spectrum in Fig.~\ref{fig:spec}. \begin{figure} \includegraphics[angle=270,width=8cm]{f2.eps} \caption{\suz{}/XIS 0+3 (black), \suz{}/PIN (red) and \xmm{}/PN (green) spectra of \sax{} (top panel). The residuals of the best fit ($\chi =$ (data-model)/$\sigma$) are plotted in the bottom panel.} \label{fig:spec} \end{figure} \begin{figure} \includegraphics[width=8cm]{f3.eps} \caption{Iron K emission line in \sax{} detected by \suz{} (black) and \xmm{} (red). The ratio of data to continuum model is plotted to show the iron line profile. The black, solid line shows the best-fitting iron line model to the \suz{} data, and the red, dashed line shows the line model for the \xmm{} data.} \label{fig:line} \end{figure} In addition to this phenomenological approach, we also investigated a number of self-consistent spectral fits, including a relativistic disk reflection spectrum. The \suz{} spectra can be fit well using the \verb|refsch| model instead of a power-law component, leading to a reflection fraction of $0.5\pm0.1$. However, we note that there is little evidence of the Compton back-scattering hump expected from disk reflection; the HXD/PIN spectrum can be fit well using only a simple power-law.
We will investigate disk reflection in neutron star systems in an upcoming paper. \subsection{Estimating the magnetic field strength} If we assume that the inner accretion disk is truncated at the magnetospheric (Alfv\'{e}n) radius (where magnetic pressure balances the ram pressure from infalling material), then from the source luminosity and reasonable assumptions about mass and radius we can estimate the magnetic field strength \citep[for example see Eq 6.19 in][]{fkr}. Here we modify the formulation of \citet{ibragimov09} by substituting $R_{in} = x GM/c^2$ into their Eq. 14 to give the following expression for the magnetic dipole moment: \begin{eqnarray} \mu &=& 3.5\times10^{23} \; k_A^{-7/4} \; x^{7/4} \left(\frac{M}{1.4 \; \mathrm{M_\odot}}\right)^2 \nonumber \\ &\times& \left( \frac{f_{ang}}{\eta} \frac{F_{bol}}{10^{-9} \; \mathrm{erg \; cm^{-2} \; s^{-1}}} \right)^{1/2} \frac{D}{3.5 \; \mathrm{kpc}} \;\; \mathrm{G \: cm^3} \end{eqnarray} where $\eta$ is the accretion efficiency in the Schwarzschild metric, $f_{ang}$ is the anisotropy correction factor \citep[which is close to unity, see][for details]{ibragimov09}, and the coefficient $k_A$ depends on the conversion from spherical to disk accretion. Numerical simulations suggest $k_A = 0.5$ \citep{long05}, and, as noted by \citet{ibragimov09}, theoretical models predict $k_A < 1.1$ \citep{psaltis99,kluzniak07}. For ease of comparison with previous magnetic field estimates, we assume a distance $D = 3.5\pm0.1$ kpc \citep{galloway06} and mass $M = 1.4 \; \mathrm{M_\odot}$. We also use a bolometric conversion factor of $F_{bol}/F_{2-25 keV} = 2.12$ \citep{galloway06}. The 2-25 keV fluxes that we measure from the \suz{} and \xmm{} observations are $(1.12\pm0.01)\times10^{-9}$ erg cm$^{-2}$ s$^{-1}$ and $(1.16\pm0.01)\times10^{-9}$ erg cm$^{-2}$ s$^{-1}$, respectively. Here, we just use the average of the two, which leads to $F_{bol} = (2.42 \pm 0.02)\times 10^{-9}$ erg cm$^{-2}$ s$^{-1}$.
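Collecting these central values, the dipole-moment expression above can be evaluated numerically (a short Python check of the arithmetic; we take $x = 13.2$ from the joint fit, and the polar field uses the standard dipole relation $B = 2\mu/R^3$ with $R = 10$ km):

```python
import math

def dipole_moment(x, M=1.4, F_bol=2.42e-9, D=3.5, k_A=1.0, f_ang=1.0, eta=0.1):
    """Magnetic dipole moment (G cm^3) from the expression above, with
    R_in = x GM/c^2, M in solar masses, F_bol in erg cm^-2 s^-1, D in kpc."""
    return (3.5e23 * k_A**-1.75 * x**1.75 * (M / 1.4)**2
            * math.sqrt((f_ang / eta) * (F_bol / 1e-9)) * (D / 3.5))

mu = dipole_moment(x=13.2)      # ~1.6e26 G cm^3
B_pole = 2.0 * mu / (1e6)**3    # dipole field at the poles for R = 10 km (cm)
```

With these inputs, $\mu \approx 1.6\times10^{26}$ G cm$^3$ and $B \approx 3.2\times10^8$ G; the scaling $\mu \propto k_A^{-7/4}$ also makes explicit the factor $\sim$3 increase for $k_A = 0.5$.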
Using $R_{in}$ from the joint \suz/\xmm{} Fe line fit, along with assuming $k_A = 1$, $f_{ang} = 1$, and $\eta = 0.1$ leads to $\mu = (1.6\pm0.5) \times10^{26}$~G~cm$^{3}$, taking into account the uncertainty in the inner disk radius, flux, and distance. For a stellar radius, $R = 10$ km, this leads to a magnetic field strength of $B = (3.2\pm1.0)\times10^8$~G at the magnetic poles. Note that if we assume $k_A = 0.5$, we would infer a magnetic field strength approximately a factor of 3 larger. \section{Discussion} We have observed a broad, relativistic Fe K emission line in the accreting millisecond X-ray pulsar \sax, using both \suz{} and \xmm{}. The line is similar in both observations, and by modelling the iron line we determine the inner disk radius, $R_{in} = 13.2 \pm 2.5$ GM/c$^{2}$. Making reasonable assumptions about the neutron star radius and mass allowed us to estimate the magnetic field strength $B = (3.2\pm1.0)\times10^8$~G at the magnetic poles. The broadband spectrum from \suz{} displays a hard power-law tail that extends to at least 45 keV. Although we were able to fit the broadband spectrum well with a reflection model, we note there is no clear sign of the Compton hump that can be seen in black hole systems. Neutron star LMXB spectra are certainly not as simple as black hole LMXB spectra, with the former often requiring multiple thermal components, which may act to hide the Compton hump. In comparison with our inner disk radius, from fitting the \xmm{} data alone, \citet{papitto08} find $R_{in} = 8.7^{+3.7}_{-2.7}$ GM/c$^{2}$, consistent with our value, though a little smaller. The small difference may be attributed to the fact that while we fixed the outer disk radius at a large value (so that it does not affect the line profile), \citet{papitto08} left it as a free parameter. If we also leave the outer disk radius as a free parameter, we reproduce their result.
Of all the parameters of the line model, the line profile is least sensitive to the outer disk radius, $R_{out}$, and this parameter cannot be well constrained from spectral fitting -- for a typical emissivity profile of $R^{-3}$, there is a strong weighting to the innermost part of the disk. Thus, the slightly different assumptions about the outer disk have only a small effect on the measured inner disk radius. An independent estimate of the inner accretion disk radius is made by \citet{ibragimov09}. From {\it RXTE} observations of the 2002 outburst of \sax{}, these authors study the evolution of the pulse profiles, finding a secondary maximum during the late stages of the outburst. They attribute this to the accretion disk receding and allowing a view of the lower magnetic pole. Under this assumption, they estimate the inner disk radius, at that specific time in the outburst, to be 19.5 km for $M = 1.4$ M$_\odot$, $R = 12$ km, and $i = 50^\circ$, with larger estimates for larger inclinations. This is close to our determination of the inner disk radius. \citet{pandel08} find a broad, relativistic Fe K line in the spectrum of 4U~1636$-$536. They suggest that the line is a blend of several K$\alpha$ lines in different ionization states. However, from their modeling of the line profile with 2 ionization state lines (one at 6.4 keV, and one at 7 keV) the inner disk radii they determine for each ionization state are very similar (within 20 GM/c$^2$ of each other). Yet, it would seem difficult to produce lines with such different ionization states so close to each other. In addition, the disk emissivity heavily weights the emission from the innermost part of the disk, where the highest ionization state line would come from. Thus, even if multiple lines are present, it seems reasonable to assume that the highest ionization state line would completely dominate. Here, we find we do not require a second ionization state to model the line profile. 
The power-law index measured during the \suz{} and \xmm{} observations, as well as the power spectrum presented in \citet{papitto08}, is typical of the low-hard state \citep[often referred to as the `island' state, e.g., see][for further details on neutron star LMXB states]{vanderklis06}. The inner disk radius that we measure from the Fe line is only $\sim$2 times the innermost stable circular orbit (for a Schwarzschild metric), suggesting that there is little recession of the accretion disk in this hard state. This is in disagreement with the model proposed by \citet{done07} where the disk is recessed in the island state. A similar result has been seen in black hole LMXBs -- observations of iron lines and/or a cool disk in the low-hard state of black hole LMXBs also suggests that there is little recession of the accretion disk in such a state \citep{miller06b,miller06a}. Moreover, \citet{rykoff07} show that there is a disk component in the black hole LMXB XTE~J1817$-$330 that cools as $L_x \sim T^4$ as the source transitions to the low-hard state, with an inner disk radius consistent with the innermost stable orbit at all times. Our estimate of the magnetic field strength is broadly consistent with previous estimates. From the long-term spin down, \citet{hartman08} determine B $<1.5\times10^8$ G at the magnetic poles. \citet{disalvo03} combine the quiescent luminosity of \sax{} along with considerations of the magnetospheric radius during quiescence to estimate B $= 1-5\times10^8$ G (at the equator, note that for a dipole the field strength at the equator is a factor of 2 smaller than at the poles). An earlier estimate by \citet{psaltis99} is less constraining implying a large possible range in surface dipole field of $10^8-10^9$ G at the stellar equator. Moreover, \citet{ibragimov09} estimate $B = (0.4-1.2)\times10^8$ G from their measurement of the inner disk radius. 
The estimation of the magnetic field strength here is, of course, dependent on a number of assumptions. In particular, the coefficient $k_A$ is the most uncertain, with a lower value of $k_A$ increasing our estimate of the magnetic field strength. The uncertainty from our particular choice of $k_A$ is not included in the quoted confidence limit. Moreover, our estimate is based on the assumption that the disk is truncated at the magnetospheric radius. This, however, may not necessarily be the case, as MHD simulations have shown that periodicities at the star's rotation frequency can still be present during unstable accretion \citep{kulkarni09}. Additionally, the presence of the stellar magnetic field affects the inner part of the accretion disk, which may in turn affect the iron line profile. It is not clear how this will change the shape of the line profile from the standard \verb|diskline| model used here. Finally, optical observations during the 2008 outburst of \sax{} have led to an improved mass function for the pulsar, with $M_1\sin^3{i} = 0.48^{+0.17}_{-0.14}$ M$_\odot$ \citep{elebert09}. From these observations they are unable to constrain the binary inclination any better than previously. If we assume that the inclination we measure from the Fe K line is the same as the inclination of the binary system, then this would imply quite a low pulsar mass of $0.9^{+0.5}_{-0.4}$ M$_\odot$. However, there is still a large uncertainty in this mass determination, which is consistent with the canonical 1.4 M$_\odot$. In contrast, tight constraints on the thermal emission from the neutron star surface during quiescence imply that \sax{} requires enhanced levels of core cooling, which can possibly be achieved through a larger mass \citep{heinke08}. It is also interesting to note that \citet{elebert09} estimate a shorter distance to \sax{} of $\sim2.5$ kpc based on the equivalent width of the interstellar absorption lines.
This shorter distance is the same as that estimated from type-I X-ray bursts by \citet{intzand01}. Such a distance would reduce our magnetic field strength estimate by a factor of $\sim$0.7. \acknowledgements EMC gratefully acknowledges support provided by NASA through the Chandra Fellowship Program, grant number PF8-90052.
\section{Introduction} Properties of unstable nuclei with a large excess of protons or neutrons are among the most important current topics of nuclear physics. In several radioactive beam facilities in the world, many unstable nuclei far from the $\beta$-stability line have been discovered\cite{T8588,Oza94,Oza01,Jon04}. In particular, neutron-rich unstable nuclei have been extensively studied and some exotic features have been observed. These include a large concentration of the dipole strength distribution at low energies\cite{F04,N06}, referred to as a ``soft dipole excitation''. This type of excitation is naively understood as an oscillation between weakly bound valence nucleon(s) and the core nucleus\cite{IMKT10,Sag95}. The relation between the soft dipole excitation and a largely extended density distribution, that is, a halo or skin property, has been discussed for light neutron-rich nuclei, both experimentally \cite{F04,N06,K89,A90,Shi95,A99,L01,P03} and theoretically \cite{HJ87,BB88,BE91,EB92,EBH97,EHMS07}. Recently, the neutron halo structure has also been discussed in the $^{31}$Ne nucleus, based on the measured large Coulomb break-up cross sections\cite{N09}. In contrast to neutron-rich nuclei, proton-rich nuclei have been less studied. It has not been fully clarified whether similar exotic features are also present in proton-rich nuclei. For instance, a proton halo structure in {\it e.g.,} $^8$B, $^{12}$N, $^{17}$F, and $^{17}$Ne has been discussed\cite{Sag95,ZT95,GPZ05,GLSZ06}, but no clear evidence has been obtained so far. In order to investigate proton-rich unstable nuclei and discuss their similarities and differences to neutron-rich nuclei, it is indispensable to assess the effect of the Coulomb repulsion between valence protons. In our previous work, we analyzed the ground state properties of $^{17}$Ne using a three-body model \cite{OHS10}.
We have shown that the effect of the Coulomb repulsion is weak enough and the two valence protons in the ground state of $^{17}$Ne have a spatially compact configuration, that is, the diproton correlation, similar to a dineutron correlation in neutron-rich Borromean nuclei \cite{HS0507,MMS05,M06,PSS07,KEHSS09}. We have also shown that the effect of the Coulomb interaction between the valence protons can be well accounted for by renormalizing the nuclear interaction. A similar conclusion was recently reached by Nakada and Yamagami, who performed Hartree-Fock-Bogoliubov (HFB) calculations for $N$=20,28,50,82, and 126 isotones \cite{NY11}. In this paper, as a continuation of the previous study, we discuss the effect of the Coulomb repulsion on excited states of $^{17}$Ne. Our interest is to investigate whether a similar renormalization for the Coulomb repulsion works also for the soft dipole excitation, which plays an important role in the astrophysical two-proton capture on $^{15}$O\cite{GLSZ06}. We treat the $^{17}$Ne nucleus as a three-body system composed of an inert spherical core nucleus $^{15}$O and two valence protons. The three-body Hamiltonian in the three-body rest frame reads \begin{eqnarray} H &=& h^{(1)}+h^{(2)}+\frac{{\bf p}_1 \cdot {\bf p}_2}{A_{{\rm C}}m} +V_{\rm pp}({\bf r}_1,{\bf r}_2), \label{eq:3bh} \\ h^{(i)} &=& \frac{{\bf p}_i^2}{2\mu}+V_{{\rm pC}}({\bf r}_i), \label{eq:sph} \end{eqnarray} where $m$ and $A_{{\rm C}}$ are the nucleon mass and the mass number of the core nucleus, respectively. $h^{(i)}$ is the single-particle (s.p.) Hamiltonian for a valence proton, in which $\mu=mA_{{\rm C}}/(A_{{\rm C}}+1)$ is the reduced mass and $V_{{\rm pC}}$ is the potential between the proton and the core nucleus. The third term in Eq.(\ref{eq:3bh}) is the two-body part of the recoil kinetic energy of the core nucleus.
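The origin of the recoil term can be made explicit: in the three-body rest frame the core momentum is ${\bf P}_{\rm C} = -({\bf p}_1+{\bf p}_2)$, so that \[ \frac{{\bf P}_{\rm C}^2}{2A_{\rm C}m} = \frac{{\bf p}_1^2}{2A_{\rm C}m} + \frac{{\bf p}_2^2}{2A_{\rm C}m} + \frac{{\bf p}_1\cdot{\bf p}_2}{A_{\rm C}m}, \qquad \frac{1}{2m} + \frac{1}{2A_{\rm C}m} = \frac{1}{2\mu}. \] The one-body pieces are absorbed into the s.p. kinetic energies through the reduced mass $\mu = mA_{\rm C}/(A_{\rm C}+1)$, leaving the cross term as the explicit two-body recoil contribution in Eq.(\ref{eq:3bh}).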
For the proton-core potential $V_{{\rm pC}}$, we employ a Woods-Saxon (WS) plus Coulomb potential in the same manner as in the previous work \cite{OHS10}. We use the same parameters for the WS potential as those listed in Ref.\cite{OHS10}. We solve the three-body Hamiltonian, Eq. (\ref{eq:3bh}), by expanding the wave function in the uncorrelated basis as \begin{equation} \Psi^{(J,M)}({\bf r}_1,{\bf r}_2) = \sum_{k_1 \leq k_2} \alpha_{k_1,k_2} \tilde{\psi}^{(J,M)}_{k_1,k_2} ({\bf r}_1,{\bf r}_2), \label{eq:WF} \end{equation} where \begin{eqnarray} \tilde{\psi}^{(J,M)}_{k_1,k_2} ({\bf r}_1,{\bf r}_2) & = & \nonumber \frac{1}{\sqrt{2}} \left[ 1-(-)^{j_1+j_2-J}\delta_{k_1,k_2} \right]^{-1} \\ & \times & \nonumber \sum_{m_1,m_2} \langle j_1 m_1 j_2 m_2\mid J M \rangle \\ & \times & \nonumber \left[ \right. \phi_{k_1,m_1}({\bf r}_1) \phi_{k_2,m_2}({\bf r}_2) \\ & & -\phi_{k_2,m_2}({\bf r}_1) \phi_{k_1,m_1}({\bf r}_2) \left. \right]. \label{eq:basis} \end{eqnarray} Here, $\phi_{km}({\bf r})$ is a s.p. wave function with $k=(n,l,j)$, while $J$ and $M$ are the total angular momentum of the two-proton subsystem and its $z$ component, respectively. $\alpha_{k_1,k_2}$ is the expansion coefficient. The summation in Eq. (\ref{eq:WF}) is restricted to those combinations which satisfy $\pi=(-)^{l_1+l_2}$ for a state with parity $\pi$. In the actual calculations shown below, we include the s.p. angular momentum $l$ up to 5. We have confirmed that our results do not change significantly even if we include larger values of $l$. In order to take into account the effect of the Pauli principle, we explicitly exclude the $1s_{1/2}$, $1p_{3/2}$, and $1p_{1/2}$ states from Eq. (\ref{eq:WF}), which are occupied by the protons in the core nucleus.
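As an illustration of the configuration space just described (our own sketch, not the authors' code), the allowed orbits and pairs can be enumerated as follows; the radial truncation \texttt{N\_MAX} is a hypothetical choice made only for this illustration:

```python
# Illustrative enumeration of the two-proton configuration space described in
# the text: single-particle orbits (n, l, j) with l <= 5, the Pauli-forbidden
# core orbits (1s1/2, 1p3/2, 1p1/2) excluded, and pairs k1 <= k2 restricted by
# the parity selection rule pi = (-1)^(l1 + l2).  N_MAX is a hypothetical
# truncation on the radial quantum number, chosen only for illustration.

L_MAX = 5
N_MAX = 3

def sp_orbits():
    """Single-particle orbits as (n, l, 2j), with j doubled to stay integer."""
    forbidden = {(1, 0, 1), (1, 1, 3), (1, 1, 1)}  # 1s1/2, 1p3/2, 1p1/2
    orbits = []
    for l in range(L_MAX + 1):
        for jj in sorted({abs(2 * l - 1), 2 * l + 1}):  # j = l -/+ 1/2
            for n in range(1, N_MAX + 1):
                if (n, l, jj) not in forbidden:
                    orbits.append((n, l, jj))
    return orbits

def pair_configs(parity):
    """Pairs k1 <= k2 whose orbital parities satisfy pi = (-1)^(l1 + l2)."""
    orbits = sp_orbits()
    return [(a, b)
            for i, a in enumerate(orbits)
            for b in orbits[i:]
            if (-1) ** (a[1] + b[1]) == parity]
```

In the actual calculation each such pair is additionally filtered by the energy cutoff on the s.p. energies; the sketch only shows the angular-momentum and Pauli bookkeeping.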
For the pairing interaction $V_{\rm pp}$, we assume a density-dependent contact interaction \cite{BE91,EBH97,HS0507} together with the Coulomb repulsion, $V_{\rm pp}=V_{\rm pp}^{(N)}+V_{\rm pp}^{(C)}$, as in the previous work \cite{OHS10}. We take a cutoff energy $E_{\rm cut}=30$ MeV and include those configurations which satisfy \begin{equation} \epsilon_{n_1l_1j_1}+\epsilon_{n_2l_2j_2}\leq \frac{A_{{\rm C}}+1}{A_{{\rm C}}}\,E_{\rm cut}, \label{eq:Ecut} \end{equation} where $\epsilon_{nlj}$ is a s.p. energy \cite{EBH97}. Within this truncated space, we determine the strength of the nuclear part of the pairing interaction, $V_{\rm pp}^{(N)}$, using the empirical value of the neutron-neutron scattering length, $a_{\rm nn}=-18.5$ fm \cite{EBH97}. The parameters for the density dependence are adjusted so as to reproduce the experimental value of the two-proton separation energy of $^{17}$Ne, $S_{\rm 2p}=0.944$ MeV. In our calculations, the continuum s.p. spectra are discretized within a box of $R_{\rm box}$=30 fm. Thus, energies of the two-proton $1^-$ states as well as the E1 strength distribution are also discretized. The E1 strength function from the ground state is defined as \begin{equation} S(E) = \sum_{i} \mu_i \, \delta(E-\hbar\omega_i) \end{equation} where $\hbar\omega_i=E_i-E_{\rm g.s.}$, $E_{\rm g.s.}$ being the energy of the ground state, and $\mu_i$ is the $B({\rm E1})$ strength for $i$-th $1^-$ two-proton state, \begin{equation} \mu_i =3 \left| \left< \Psi^{(1,0)}_i \mid \hat{D}_0 \mid \Psi_{\rm g.s.} \right> \right|^2, \end{equation} with the E1 operator given by \begin{equation} \hat{D}_{\mu} = e\left( \frac{A_{{\rm C}}-Z_{{\rm C}}}{A_{{\rm C}}+2} \right) \left[ r_1Y_{1\mu}(\hat{{\bf r}}_1)+r_2Y_{1\mu}(\hat{{\bf r}}_2) \right]. \end{equation} Using the strength function $S(E)$, we can also calculate the $k$-th moment of energy defined as \begin{equation} S_k = \int dE \,E^k S(E) = \sum_i (\hbar\omega_i)^k \,\mu_i. 
\end{equation} Notice that $S_0$ and $S_1$ correspond to the direct and energy-weighted sum of $dB({\rm E1})/dE$, respectively. From the completeness of the $1^-$ basis, we can estimate the sum-rule-values as \begin{eqnarray} S_{0,{\rm SR}} &=& \frac{3}{\pi} e^2 \left( \frac{A_{{\rm C}}-Z_{{\rm C}}}{A_{{\rm C}}+2} \right)^2 \left< \Psi_{\rm g.s.} | r^2_{\rm 2N-C} | \Psi_{\rm g.s.} \right>, \\ S_{1,{\rm SR}} &=& \frac{9}{4\pi} e^2 \left( \frac{A_{{\rm C}}-Z_{{\rm C}}}{A_{{\rm C}}+2} \right)^2 \frac{A_{{\rm C}}+2}{A_{{\rm C}}m} \hbar^2, \end{eqnarray} where $\vec{r}_{\rm 2N-C}=({\bf r}_1+{\bf r}_2)/2$ is the distance between the center of mass of the two valence protons and the core nucleus. For the core nucleus $^{15}$O ($A_{{\rm C}}=15$ and $Z_{{\rm C}}=8$), we obtain $S_{0,{\rm SR}}=1.49$ $e^2$fm$^2$ and $S_{1,\rm SR}=5.69$ $e^2$fm$^2$MeV. Due to the Pauli forbidden transitions, the actual value of $S_0$ is smaller than $S_{0,{\rm SR}}$, while $S_1$ is larger than $S_{1,{\rm SR}}$. \begin{table}[h] \caption{ The results for the soft dipole excitations of $^{17}$Ne obtained with the three-body model of $^{15}$O+p+p. $S_0$ and $S_1$ are the non-energy weighted sum rule and the energy weighted sum rule, respectively. $E_{\rm cent}=S_1/S_0$ is the centroid energy of the dipole strength distribution. $\delta E_{\rm cent}$ is a shift of the centroid energy with respect to the result of the exact treatment of the Coulomb interaction. } \begin{center} \begin{tabular}{c c c c c} \hline \hline pairing & $S_0$ & $S_1$ & $E_{\rm cent}$ & $\delta E_{\rm cent}$ \\ & ($e^2$fm$^2$) & ($e^2$fm$^2$MeV) & (MeV) & (MeV)\\ \hline Nucl. + Coul. & 1.206 & 11.02 & 9.140 & 0 \\ Nucl. only & 1.206 & 10.45 & 8.666 & $-$0.47 \\ Ren. Nucl. 
& 1.205 & 10.86 & 9.017 & $-$0.12 \\ No pairing & 1.206 & 15.50 & 12.86 & 3.72 \\ \hline \hline \end{tabular} \label{tb:tb1} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=8.0cm, height=6.0cm]{fig1.eps} \caption{(Color online) Comparison of the E1 strength distributions for $^{17}$Ne obtained with several treatments for the Coulomb interaction between the valence protons. The solid line is obtained by fully including the Coulomb interaction, while the dashed line is obtained by switching off the Coulomb repulsion. The thin solid line is the result of the renormalized nuclear interaction, which is readjusted to reproduce the ground state energy without the Coulomb interaction. The dotted line denotes the result without any pairing interaction. These distributions are smeared with the Cauchy-Lorentz function, Eq. (\ref{eq:CL}), with $\Gamma = 1.0$ MeV. } \label{fig:fig1} \end{center} \end{figure} Our main results are summarized in Fig. \ref{fig:fig1} and Table \ref{tb:tb1}. For a plotting purpose, we smear the Dirac delta function in the E1 distribution $S(E)$ with a Cauchy-Lorentz function \begin{equation} \frac{dB({\rm E1})}{dE} = \sum_{i} \mu_i \, \frac{\Gamma}{\pi} \frac{1}{(E-\hbar\omega_i)^2+\Gamma^2} \label{eq:CL} \end{equation} with the width parameter $\Gamma$ of $1.0$ MeV. In order to discuss the effect of the Coulomb part of the pairing interaction, we also perform the calculations with two other treatments for the Coulomb interaction. One is to switch off the Coulomb interaction (``Nucl. only''), that is, $V_{\rm pp}=V_{\rm pp}^{(N)}$, keeping the same values for the parameters of the nuclear pairing interaction $V_{\rm pp}^{(N)}$ as in the full calculation (``Nucl.+Coul.''). The other is again to use the nuclear interaction only, but renormalize the parameters, that is, $V_{\rm pp}=\tilde{V}_{\rm pp}^{(N)}$ (``Ren. Nucl.''). 
To renormalize the interaction, we use the empirical proton-proton scattering length, $a_{\rm pp}=-7.81$ fm, instead of the neutron-neutron scattering length $a_{\rm nn}$, and determine the other parameters so that the two-proton separation energy $S_{\rm 2p}$ is reproduced. This leads to about 10.0\% reduction of the strength of the pairing interaction. Notice that this value for the reduction factor is consistent with the finding of Ref. \cite{NY11}. See Ref.\cite{OHS10} for further details of the procedure. For a comparison, we also show the results without any pairing interaction. These treatments for the Coulomb interaction are applied only to the excited states, while the same ground state wave function is used for all the cases. That is, the ground state is calculated with the full treatment of the Coulomb interaction, that yields 76\% of $(d_{5/2})^2$ and 16\% of $(s_{1/2})^2$. This ground state is used also for the no-pairing calculation for the dipole excitations. (These values for the occupation probabilities are slightly different from those in Ref.\cite{OHS10}, as we use a smaller $E_{\rm cut}$ in this paper. We have confirmed that the dipole strength distribution does not change much even though we use the smaller value of $E_{\rm cut}$.) Notice that our results for $S_0$ are consistent with the result of Grigorenko {\it et al} \cite{GLSZ06}, that is, $S_0=1.56$ $e^2$fm$^2$ for $(s_{1/2})^2$=48\% and $S_0=1.07$ $e^2$fm$^2$ for $(s_{1/2})^2$=5\%. Table \ref{tb:tb1} also lists the centroid energy defined as $E_{\rm cent} \equiv S_1/S_0$ \cite{SSAR06}, and its relative value with respect to the result of the full treatment of the Coulomb interaction. From Fig. \ref{fig:fig1} and Table \ref{tb:tb1}, we can see that the pairing interaction shifts considerably the E1 strength distribution towards the low energy region, similarly to the dipole distribution in neutron-rich nuclei \cite{EHMS07,N06}. 
This large shift of the E1 strength distribution originates mainly from the nuclear part of the pairing interaction. If the Coulomb part is switched off, the strength distribution is shifted only slightly, as is shown in Fig. \ref{fig:fig1} by the dashed line. The shift of the centroid energy, $\delta E_{\rm cent}$, is only $-0.47$ MeV. The sign and the magnitude of $\delta E_{\rm cent}$ are consistent with the result of Hartree-Fock+RPA calculations for medium-heavy nuclei shown in Ref. \cite{SSAR06}, although $\delta E_{\rm cent}$ for the soft dipole excitation in $^{17}$Ne is somewhat larger. The result with the renormalized nuclear pairing interaction is shown by the thin solid line in Fig. \ref{fig:fig1}. As one can see, the result of the full calculation (the thick solid line) is well reproduced by this prescription. We can thus conclude that the renormalization works well not only for the ground state \cite{OHS10,NY11}, but also for the dipole excitations. In summary, we discussed the influence of the Coulomb repulsion between the valence protons upon the soft dipole excitation in $^{17}$Ne. We showed that the effect of the Coulomb repulsion is so weak that the main feature of the dipole response is similar between neutron-rich and proton-rich weakly bound nuclei. We also showed that the effect of the Coulomb interaction can be well mimicked by renormalizing the nuclear pairing interaction. This renormalization works both for the ground state and for the dipole excitations, and thus from a practical point of view one can use a renormalized pairing interaction in order to understand the structure of proton-rich nuclei. One of the current topics of proton-rich nuclei is two-proton radioactivity. Bertulani, Hussein, and Verde argued that the final state interaction plays an important role in discussing the energy and the angular correlations in a two-proton emission process \cite{BHV08}.
It would be an interesting future work to investigate how the renormalization works for those correlations. Work in this direction is now in progress. \bigskip This work was supported by the Grant-in-Aid for Scientific Research (C), Contract Nos. 22540262 and 20540277, from the Japan Society for the Promotion of Science.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{A}{s}\footnote{S. Neogi, J. Dauwels, "PedAI: Smart Pedestrian Interactions for AV and ADAS", Singapore patent application number: 10202008970S. Filed 14th Sep, 2020.} we enter the era of autonomous driving with the first ever self-driving taxi launched in December 2018, smooth handling of pedestrian interactions still remains a challenge. The trade-off is between on-road pedestrian safety and smoothness of the ride. Recent user experiences and available online footage suggest conservative autonomous rides resulting from the emphasis on on-road pedestrian safety. To achieve rapid user adoption, the AVs must be able to simulate a smooth human driver-like experience without unnecessary interruptions, in addition to ensuring 100\% pedestrian safety. Automated braking systems in an ADAS tackle emergency pedestrian interactions. These brakes are activated upon detecting pedestrians' crossing behaviours within the vehicle safety range. Such a system may need to brake strongly upon late prediction of pedestrian crossing behaviour, which can occur frequently in crowded areas. A future ADAS must be able to offer a smoother experience in such interactions. The key to a safe and smooth autonomous pedestrian interaction lies in early and accurate prediction of a pedestrian's crossing/not-crossing behaviour in front of the vehicle. Accurate and timely prediction of pedestrian behaviour ensures on-road pedestrian safety, while early anticipation of the crossing/not-crossing behaviour offers more path planning time and consequently a smoother control over the vehicle dynamics. Recent works on on-road pedestrian behaviour prediction (\hspace{1sp}\cite{Daimler} - \cite{traj-est2}) rely on a pedestrian's motion, skeletal pose, his/her location in the scene (on road, at curb etc.) and certain static context variables (e.g., presence of zebra crossings, traffic lights etc.).
While pedestrian motion, skeletal pose and location are reliable indicators of the current pedestrian action, they hardly reflect the long-term ($>$1 sec ahead) pedestrian behaviour. Static context variables (e.g., zebra crossings, traffic lights) certainly are important factors to predict future behaviour, but are seldom present in a scene. More often than not, an Autonomous Vehicle is likely to encounter pedestrian(s) without these variables. \begin{figure} \includegraphics[scale=0.23]{System_overview_IP.pdf} \centering \caption{Overview of our intention prediction system. Points in red highlight our main contributions.} \label{fig:Overview} \end{figure} As pedestrians, our crossing/not-crossing intentions are largely influenced by the distances and speeds of the approaching vehicles. We refer to this causal information as dynamic context or \textit{vehicle interaction context} (see Fig. \ref{fig:Overview}) and utilize it together with pedestrian motion and location information for early prediction of future behaviour. To the best of our knowledge, we are the first \cite{mypaper} to consider the influence of such dynamic context on pedestrian intention, which proves to be the strongest factor for early prediction, as we demonstrate in Section \ref{sec:results}. Adding such context improves the prediction time of the pedestrian stopping (at the curb) behaviour by $\ge$0.5 seconds on average across the considered datasets (see Section \ref{sec:results}), and can potentially reduce unnecessary halts in future autonomous rides. This dynamic context also improves the prediction time and accuracy of other behaviour types across the datasets.
Since each pedestrian interaction with the ego-vehicle is continuous in time for a certain duration (say, $T_p$) and we want to classify crossing/not-crossing intentions of a pedestrian at each instant within this duration, we formulate this problem as a time-series binary classification task, $P(y_t | x_{1:t})$, $t \in 1 \colon T_p$, where $y_t$ is the intention variable to be predicted and $x_t$ are the input features to the system at $t$. Conditional Random Fields (CRF) model such a conditional task directly from the data. Each of the pedestrian crossing and not-crossing intentions has underlying latent intrinsic dynamics (see Section \ref{subsec:LDCRF} for examples), while there also exists an extrinsic dynamics between the intention labels. A Latent-Dynamic CRF (LDCRF, see Fig. \ref{fig:CDCRF}b) \cite{LDCRF} can capture the intrinsic dynamics within class labels $y_t$ by embedding latent states in a labeled CRF structure. In contrast to similar models (that capture the intrinsic dynamics) like Hidden Markov Models (HMM) \cite{HMM} and Hidden Conditional Random Fields (HCRF) \cite{HCRF}, an LDCRF does not require prior segmentation of the training sequences according to the class labels, thus also preserving in the model the extrinsic dynamics between labels. LDCRF has been shown to outperform HMM and HCRF on similar tasks like gesture recognition \cite{LDCRF}. Considering its theoretical and experimental superiority over similar models, we apply LDCRF as a baseline in our work. The only way to improve learning performance with an LDCRF is by varying the number of hidden states (say, $N_h$) associated to each class label. Such a variation not only restricts the model capabilities, but also results in a rapidly growing state transition matrix with size $(N_l \cdot N_h) \times (N_l \cdot N_h)$, where $N_l$ is the number of class labels. Such an increment results in greater model complexity and requires larger training data.
Moreover, it is possible that there exist multiple interacting latent dynamics (e.g., contextual and motion-related, see Section \ref{subsec:LDCRF}) within the class labels. Considering these limitations of LDCRF and inspired by the Factorial HMM \cite{Factorial-HMM}, we propose a generalization of LDCRF, called Factored LDCRF (FLDCRF) \cite{mypaper}, in order to \begin{itemize}[nosep] \item{generate new models by varying the number of hidden layers,} \item{generate factorized models \cite{Factorial-HMM} with fewer parameters to better fit small/medium datasets, and} \item{capture the relationship between multiple interacting latent dynamics (context and motion) within class labels. Each hidden layer models a different latent dynamics and the connections among different hidden layers model their interactions.} \end{itemize} We show such a generalization of LDCRF to improve performance on our validation sets (see Section \ref{sec:results}). Such interacting hidden layers also allow capturing sequential multi-label/multi-agent interactions (see Fig. \ref{fig:FLDCRF-variants}). See Section \ref{subsec:FLDCRF} for more details on FLDCRF. LSTMs are the most popular kind of Recurrent Neural Networks (RNN), largely due to their ability to capture longer range dependencies among sequence values. They are frequently applied to similar sequence modeling tasks, e.g., path prediction \cite{LSTM}, action recognition \cite{action-lstm} etc. Deep learning systems like LSTMs are known to involve tedious hyperparameter tuning, and their performance is largely big-data-driven. We compare LSTM and FLDCRF (general LDCRF) on our datasets over identical input features and labels. FLDCRF not only outperforms LSTM across our datasets, but also provides easier model selection and requires significantly less training time (see Section \ref{sec:results}).
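As a back-of-the-envelope illustration of this growth (our own sketch, not the paper's exact parameterization), one can compare the transition-weight count of a flat LDCRF with a hypothetical factored variant whose layers contribute additively, in the spirit of Factorial HMMs:

```python
def ldcrf_transition_params(n_labels, n_hidden):
    """Transition weights in a flat LDCRF: the joint hidden space has
    n_labels * n_hidden states, so the matrix grows quadratically."""
    return (n_labels * n_hidden) ** 2

def factored_transition_params(n_labels, hidden_per_layer):
    """Hypothetical factored count: each hidden layer keeps its own chain
    (in the spirit of Factorial HMMs), so layer costs add up instead of
    multiplying through a joint state space."""
    return sum((n_labels * h) ** 2 for h in hidden_per_layer)

# Two intention labels; six hidden states flat vs. two factored layers of three:
flat = ldcrf_transition_params(2, 6)              # (2 * 6)**2 = 144
factored = factored_transition_params(2, [3, 3])  # 2 * (2 * 3)**2 = 72
```

The point is only the scaling: this factored count omits the cross-layer connection weights that FLDCRF uses to model interactions among the latent dynamics.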
We highlight the main contributions of this paper below: \begin{itemize}[nosep] \item We propose a graphical model Factored Latent-Dynamic Conditional Random Fields (FLDCRF\footnote{https://github.com/satyajitneogiju/FLDCRF-for-sequence-labeling.}) for joint sequence prediction/tagging tasks. \item We introduce the influence of vehicle interactions (dynamic context) on pedestrian intention for early and accurate prediction. \item We have created a dataset\footnote{Dataset link to be provided.} specifically designed to capture vehicle influences on pedestrian intention, collected inside Nanyang Technological University campus (NTU dataset). We also construct an evaluation set, devised for early pedestrian intention prediction, from a public real-life dataset known as JAAD (Joint Attention for Autonomous Driving \cite{JAAD}). \item We compare FLDCRF and LSTM over identical features and labels on both datasets. \end{itemize} In addition to early and accurate pedestrian intention prediction, we stress the necessity of simulating a real-life prediction environment in our experiments. We apply state-of-the-art object detectors \cite{obj-det} and semantic segmentation systems \cite{semantic-seg} to detect pedestrians/vehicles and to determine pedestrian location, respectively. Fig. \ref{fig:Overview} shows the different modules and their inter-connections in our intention prediction system. The outline of this paper is as follows. In Section \ref{sec:lit-review}, we describe state-of-the-art systems on pedestrian behaviour prediction and CRFs. We briefly define a standard nomenclature on considered pedestrian behaviour types in Section \ref{sec:nomenclature}. We propose FLDCRF in Section \ref{sec:model} and introduce our context and motion feature extraction techniques in Section \ref{sec:features-labels}. Next, we discuss characteristics of the considered datasets in Section \ref{sec:dataset}.
We describe our experiments in Section \ref{sec:exp-setup} and present our results in Section \ref{sec:results}. We offer concluding remarks and ideas for future research in Section \ref{sec:conclude}. \section{Literature Review} \label{sec:lit-review} From an Autonomous Vehicle perspective, there are two popular problems concerning pedestrian behaviour predictions, a) continuous path prediction \cite{Daimler}, \cite{Context-based} - \cite{traj-est1}, \cite{traj-est3}, \cite{traj-est5}, where the future trajectory of a pedestrian is estimated, and b) discrete intention prediction \cite{Daimler} - \cite{Michael}, where the goal is to predict discretized pedestrian behaviours. Given a detected pedestrian, his/her position w.r.t.~the AV and a marked AV safety range, solving either problem (path or intent prediction) can ensure a safe autonomous ride. However, a smooth autonomous ride of the future (avoiding repeated halts and hard brakes) would require early predictions of pedestrian behaviour for optimal path and dynamics planning. Although pedestrian interactions and static traffic scene elements have been successfully utilized \cite{traj-est2} - \cite{traj-est1}, \cite{traj-est3}, \cite{traj-est5} to improve path prediction accuracy, long-term path prediction ($>$1 sec ahead) is still challenging and error prone, with the prediction error growing exponentially with the prediction horizon \cite{traj-est2}. Additionally, path prediction given fewer ($\leq$10) observed frames, distant objects and moving cameras invokes additional uncertainties. Existing discrete intention prediction approaches (described shortly) primarily focus on accurate prediction rather than early prediction. In this paper, we analyze the feasibility of accurate long-term (up to 2 sec ahead) discrete intent prediction in a variety of scenarios. We briefly discuss existing literature on discrete intent prediction and sequential CRFs below.
\subsection{Pedestrian Intention Prediction} Existing intent prediction approaches can be broadly classified into two categories, a) Pedestrian-only, which only take features from the pedestrian of interest, and b) Context based, which consider context variables (location, traffic elements etc.) together with pedestrian features. \textbf{Pedestrian-only} approaches \cite{Daimler}, \cite{LDCRF-intent}, \cite{Intent-intersections}, \cite{HMM-intent},\cite{Openpose}, \cite{Michael}, in general, learn machine learning models over pedestrian features (motion and/or appearance) and n-ary intention labels. Pedestrian features include dense optical flow \cite{Daimler}, concatenated position, velocity and head pose \cite{LDCRF-intent}, the Motion Contour image based Histogram of Oriented Gradients descriptor (MCHOG) \cite{Intent-intersections}, skeletal features \cite{HMM-intent}, \cite{Openpose} and deep learning appearance features \cite{Michael}. The machine learning models applied have been either time-series models (GPDM, PHTM \cite{Daimler}, LDCRF \cite{LDCRF-intent}, HMM \cite{HMM-intent}, LSTM \cite{Michael}) or SVMs \cite{Intent-intersections}, \cite{Openpose}. \textbf{Context based} intent prediction approaches \cite{Ped-crossing-intent}, \cite{Motion-classification}, \cite{Particle-intent}, \cite{JAAD-application} consider information like scene spatial layout, i.e., the location of the curb \cite{Ped-crossing-intent}, \cite{Motion-classification} and traffic elements \cite{Ped-crossing-intent}, \cite{Motion-classification}, \cite{Particle-intent}, \cite{JAAD-application}, such as cross walks, bus stops, traffic lights, zebra crossings etc., along with pedestrian features. The majority of these approaches \cite{Daimler}, \cite{LDCRF-intent}, \cite{Openpose}, \cite{Michael} were evaluated on the Daimler dataset \cite{Daimler Dataset}, a public benchmark consisting of pedestrian crossing/not-crossing sequences.
Stopping probability vs time \cite{Daimler}, \cite{LDCRF-intent}, \cite{Openpose}, \cite{Michael} and classification accuracy \cite{Daimler}, \cite{Intent-intersections}, \cite{HMM-intent}, \cite{JAAD-application} are the most common metrics. Since the Daimler data does not capture spontaneous vehicular interactions with pedestrians, we defer our results on this data to Appendix \ref{Appen:Daim}. Recently, a group of researchers proposed a relatively large, labeled pedestrian dataset called JAAD \cite{JAAD}, captured from moving vehicles. This dataset provides real-life examples of pedestrian interactions with a vehicle in complex scenes. In \cite{JAAD-application}, the same group proposes a context based system to classify pedestrian crossing/not-crossing behaviour, where they combine an action recognition classifier and a learned attention model for relevant static context variables (traffic light, zebra crossing, stop sign etc.) to classify pedestrian intention. In \cite{JAAD-application2}, Gujjar et al. showed improvements in performance on the same dataset by considering a future generation model. However, we find their evaluation dataset to be weakly relevant to early prediction of intention (the evaluation does not involve any temporal information) and hence select our own evaluation sequences (described in Section \ref{sec:exp-setup}) from the JAAD dataset. All context models discussed above for pedestrian behaviour prediction rely on static scene context variables and do not consider dynamic vehicle interactions. To the best of our knowledge, we are the first to consider the effects of such vehicle interactions on pedestrian behaviour \cite{mypaper}. While \cite{HMM-intent} presents a 70\% prediction accuracy of pedestrian stopping actions 0.06 seconds before the event on \cite{CMU-data}, we achieve such accuracy at least 0.9 seconds before the event across our datasets in the presence of vehicle interactions.
The current best approach \cite{Michael} on the Daimler dataset achieves such accuracy 0.38 seconds before the event, although it is tested on a small test dataset (with 13 pedestrian sequences) with limited variations in pedestrian behaviour. We apply the proposed FLDCRF over input features (context + motion) and intention labels. We briefly describe related existing sequential CRF models below. \subsection{Conditional Random Fields (CRFs)} CRFs were first introduced in \cite{CRF-first} to accomplish the task of segmenting and labeling sequence data. CRFs are discriminative classifiers and aim at directly modeling the conditional distribution $P(\textbf{y}\mid\textbf{x})$ over input features \textbf{x} and classification labels \textbf{y}. A simple sequential type of CRF is the Linear Chain CRF (LCCRF, see Fig. \ref{fig:CDCRF}a), which is frequently applied to sequence labeling tasks like word recognition, NP chunking, POS tagging etc. \begin{figure}% \centering \subfloat[\scriptsize{LCCRF}]{{\includegraphics[scale=0.45]{LCCRF.pdf} }}% \qquad \subfloat[\scriptsize{LDCRF}]{{\includegraphics[scale=0.45]{LDCRF.pdf} }}% \caption{Sequential CRF variants. White nodes are observed during both training and testing, black nodes are hidden, and grey nodes are observed only during training.}% \label{fig:CDCRF}% \end{figure} Over the years, multiple variants like semi-Markov CRFs \cite{semi-CRF} and Dynamic CRFs \cite{DCRF} have been proposed. Semi-Markov CRFs allow long range dependencies among labels $\textbf{y}$ = $\{y_1, y_2, ..., y_T\}$ at the expense of higher computational cost. Dynamic CRFs \cite{DCRF} allow multiple interacting labels ($y_{1,t}$, $y_{2,t}$ etc.) at each time step, making them suitable for multi-label classification tasks. LDCRF was proposed in \cite{LDCRF} (see Fig. \ref{fig:CDCRF}b), where hidden states are embedded in a labeled CRF structure by means of an additional layer of hidden variables $\textbf{h}$ = $\{h_1, h_2, ..., h_T\}$.
Each possible classification label is associated with a given set of hidden states. The hidden states model the intrinsic dynamics within each label. FLDCRF builds on the primary LDCRF concept and constraints. We briefly discuss the concept behind LDCRF and its applicability to the intent prediction problem in Section \ref{subsec:LDCRF}. \section{Nomenclature} \label{sec:nomenclature} In general, an on-road or near-road pedestrian's crossing/not-crossing behaviour is characterized by the real world lateral component of his/her motion w.r.t.~the ego-vehicle moving direction, i.e., \vspace{2mm} \begin{align*} \text{laterally moving} \quad &\longleftrightarrow \quad \text{crossing}, \\ \text{laterally static} \quad &\longleftrightarrow \quad \text{not-crossing}. \end{align*} At each instant $t$, we try to predict whether a detected pedestrian will be laterally moving or laterally static\footnote{Laterally static instances include pedestrians walking along the curb or sidewalk.} in the near future. Associated critical measures and necessary vehicle dynamics can be derived by further analyzing the pedestrian's position w.r.t.~the vehicle. We evaluate pedestrian sequences where early prediction is relevant from an AV perspective. We categorize each such encountered pedestrian sequence into one of the following (see Fig. \ref{fig:Example_sequences}): \begin{enumerate} \item{\textbf{Continuous crossing}: The pedestrian initially has a lateral movement before the curb, continues to move laterally and crosses in front of the vehicle.} \item{\textbf{Stopping at the curb}: The pedestrian initially has a lateral movement before the curb and stops at/near the curb. We call this behaviour `stopping' in the paper for simplicity.} \item{\textbf{Standing at the curb}: The pedestrian continues to be laterally static at/near the curb.
This behaviour is referred to as `standing' in the paper.} \item{\textbf{Starting to cross}: The pedestrian is initially laterally static at/near the curb and starts to cross in front of the vehicle. This behaviour is called `starting' in the paper.} \end{enumerate} Stopping and starting sequences involve a change in the state of motion (laterally moving to laterally static and vice versa) of the pedestrian. As our goal is to predict pedestrian intention early, our primary aim is to detect these state changes early (preferably before any discernible change in motion/appearance) and accurately. In addition, we also want stable and accurate predictions of `crossing' and `not-crossing' intentions for sequences with no change in the state of motion (continuous crossing and standing respectively). Each sequence type is associated with an \textit{event instant} (e.g., \textit{crossing}, \textit{stopping} or \textit{starting instant}), illustrated in Fig. \ref{fig:Example_sequences}, which provides the ground truth classification labels and also serves as a reference during evaluation. Since a standing sequence cannot be characterized by such an instant, we define a \textit{critical point} based on pedestrian distance and ego-vehicle velocity. To aid the labeling process during training (for early prediction of the transitions in stopping and starting sequences), we define a \textit{pred\_ahead} parameter for each dataset, specified in number of frames before the respective event instants. We explain how we labeled the pedestrian sequences during training in Section \ref{sec:exp-setup}. \begin{figure}[h!]
\vspace{-2mm} \centering \subfloat[\scriptsize{Continuous crossing - continue moving laterally.}]{\label{fig:crossing-example} \includegraphics[scale=0.27]{Crossing_example1.png}} \\[-0.3ex] \vspace{-3mm} \subfloat[\scriptsize{Stopping at the curb - laterally moving then static.}]{\label{fig:stopping-example} \includegraphics[scale=0.46]{Stopping_example1.png}} \\[-0.3ex] \vspace{-2mm} \subfloat[\scriptsize{Standing at the curb - continue to be laterally static.}]{\label{fig:standing-example} \includegraphics[scale=0.32]{Standing_example_new1.png}} \\[-0.3ex] \vspace{-3mm} \subfloat[\scriptsize{Starting to cross - laterally static then laterally moving.}]{\label{fig:starting-example} \includegraphics[scale=0.46]{Starting_example1.png}} \\[-0.3ex] \caption{Examples of common pedestrian sequence types observed from a vehicle (from the JAAD \cite{JAAD} dataset). EI stands for event (crossing, stopping etc.) instant.} \label{fig:Example_sequences} \end{figure} \section{Models} \label{sec:model} In this section, we propose Factored Latent-Dynamic Conditional Random Fields (FLDCRF). We briefly discuss the idea behind applying LDCRF to our problem in Section \ref{subsec:LDCRF} and introduce FLDCRF in Section \ref{subsec:FLDCRF}. Since FLDCRF subsumes LDCRF (structure and model constraints), we move the mathematical details about LDCRF to Appendix \ref{Appen:LDCRF}. \subsection{LDCRF} \label{subsec:LDCRF} As mentioned before, we employ CRF-type classifiers in our problem as they directly model the conditional distribution $P(\textbf{y} \mid \textbf{x})$. Each of the pedestrian `crossing' and `not-crossing' intentions has underlying intrinsic dynamics. Such intrinsic dynamics can be contextual or motion-related. For example, the stopping behaviour of a pedestrian (see Fig. \ref{fig:stopping-example}) can be characterized by a transition from a pedestrian `moving' state to a `rest' state (motion), as well as by an interacting vehicle going from a `far' to a `near' state (context).
A standing behaviour may be associated with continuation of the `rest' and `near' states respectively. Since such motion-related or contextual states are hidden (latent), the overall intrinsic dynamics of the `not-crossing' intention can be captured by associating hidden states with the intention label and allowing their transitions. These states can be learned from training data by connecting them to the observed context and motion features $x_t$. An LCCRF (see Fig. \ref{fig:CDCRF}a) only allows transitions between class labels and hence fails to capture such intrinsic dynamics. LDCRF captures such dynamics by means of an additional layer of hidden variables $\{h_t\}_{t = 1:T}$ (see Fig. \ref{fig:CDCRF}b). $h_t$ can assume values from a predefined set of hidden states. For example, if the class labels `cross' and `not-cross' are associated with sets (of hidden states) $\mathcal{H}_{c}$ and $\mathcal{H}_{nc}$ respectively, and $y_t$ is `cross', then $h_t \in \mathcal{H}_{c}$. To keep training and inference tractable, LDCRF restricts the sets $\mathcal{H}_{c}$ and $\mathcal{H}_{nc}$ to be disjoint. \subsection{FLDCRF} \label{subsec:FLDCRF} LDCRF captures the latent dynamics within class labels by means of a layer of hidden variables. We propose multiple interacting hidden layers in an LDCRF structure (see Fig. \ref{fig:FLDCRF single-label model}) to generate new models, reduce model parameters and capture interactions among different latent dynamics within class labels (see Introduction). \begin{figure}[h] \includegraphics[scale=0.45]{FLDCRF_singlelab.pdf} \centering \caption{FLDCRF graphical model for single-label sequence prediction. The graph shows only Markov connections for transitions between states.
Longer range dependencies (semi-Markov for transitions and Markov/semi-Markov for \textit{red} inter-layer influences) are also possible but omitted for simplicity.} \label{fig:FLDCRF single-label model} \end{figure} Such interacting hidden layers can also aid multi-label sequence prediction (see Fig. \ref{fig:FLDCRF-variants}) and social interactions in a multi-agent environment (see \cite{FLDCRF-arxiv}). We defer possible applications of such models to our future work. However, as the single-label variant of FLDCRF (see Fig. \ref{fig:FLDCRF single-label model}) is graphically a special case of the multi-label variant (see Fig. \ref{fig:FLDCRF-variants}), we describe the multi-label mathematical model for better generalization. \begin{figure}[h] \includegraphics[scale=0.45]{FLDCRF_multilab_shortened.pdf} \centering \caption{FLDCRF graphical model for multi-label sequence prediction. Different label categories, $y_{1,t}$ and $y_{2,t}$, over input $x_t$ are connected through their respective hidden layers, $h_{1,t}$ and $h_{2,t}$, influencing each other.} \label{fig:FLDCRF-variants} \end{figure} \subsubsection{Model} \label{FLDCRF-model} Fig. \ref{fig:FLDCRF-variants} shows the graph structure for FLDCRF in multi-label classification problems. Although we depict the model for only two hidden layers, we mathematically define the model for $L$ layers. Let \textbf{x} = \{\(x_1, x_2, ... , x_T\)\} denote the sequence of observations. \({\textbf{y}_i}\) = \{\(y_{i,1}, y_{i,2}, ... , y_{i,T}\)\} are the observed labels along layer $i$, $i = 1:L$. In the case of a single-label prediction task, all layers take the same labels, i.e., $y_{i,t}$ is the same $\forall i$. Let \(y_{i,t} \) $\in$ $\Upsilon_i$, where \(\Upsilon_i \) is the alphabet of all possible label categories in layer $i$. \({\textbf{h}_i}\) = \{\(h_{i,1}, h_{i,2}, ... , h_{i,T}\)\} constitutes the $i$-th hidden layer.
Each label \(\ell_i\) \(\in \) \(\Upsilon_i \) is associated with a set of hidden states \(\textit{$\mathcal{H}$}_{i,\ell_i} \). \({\mathcal{H}_i}\) is the set of all possible hidden states for layer \textit{i} written as \({\mathcal{H}_i}\) = \(\bigcup_{\ell_i} {\mathcal{H}_{i,\ell_i}} \). \vspace{1mm} \noindent \textbf{Model constraints:} \begin{enumerate} \item{\(\textit{$\mathcal{H}$}_{i,\ell_i} \) are disjoint $\forall \ell_i \in \Upsilon_i $, $\forall i = 1:L$. } \item{$h_{i,t}$ can only assume values from the set of hidden states assigned to the label $y_{i,t}$, i.e., $h_{i,t}$ $\in$ $\textit{$\mathcal{H}$}_{i,y_{i,t}}$, $\forall i = 1:L$ and $\forall t = 1:T$.} \end{enumerate} The joint conditional model is defined as: \vspace{2mm} \begin{equation} \begin{split} \label{feqn1} P\Big( \{{\textbf{y}_i}\}_{1:L} \mid \textbf{x}, \theta \Big) = \sum_{\{{\textbf{h}_i}\}_{1:L}} P\Big(\{{\textbf{y}_i}\}_{1:L} \mid \{{\textbf{h}_i}\}_{1:L}, \textbf{x}, \theta\Big) \\ \cdot \; P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big). \end{split} \end{equation} Using the graph structure in Fig. \ref{fig:FLDCRF-variants}a, we obtain: \vspace{2mm} \begin{equation} \begin{aligned} \label{feqn1-new} P\Big(& \{{\textbf{y}_i}\}_{1:L} \mid \textbf{x}, \theta \Big) = \sum_{\{{\textbf{h}_i}\}_{1:L}} \Bigg( \prod_{i = 1}^{L} P({\textbf{y}_i} \mid {\textbf{h}_i})\Bigg) P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big) \\ &= \sum_{\{{\textbf{h}_i}\}_{1:L} : \forall h_{i,t} \in \textit{$\mathcal{H}$}_{i,{y_{i,t}}}} \Bigg( \prod_{i = 1}^{L} P({\textbf{y}_i} \mid {\textbf{h}_i})\Bigg) P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big) \\ &+ \sum_{\{{\textbf{h}_i}\}_{1:L} : \exists h_{i,t} \not\in \textit{$\mathcal{H}$}_{i,{y_{i,t}}}} \Bigg( \prod_{i = 1}^{L} P({\textbf{y}_i} \mid {\textbf{h}_i})\Bigg) P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big). 
\end{aligned} \end{equation} $P({\textbf{y}_i} \mid {\textbf{h}_i})$ can be further factorized using the graph structure as, \vspace{1mm} \begin{equation} \label{feqn1-explanation} P({\textbf{y}_i} \mid {\textbf{h}_i}) = \prod_{t = 1}^{T} P({y_{i,t}} \mid {h_{i,t}}). \end{equation} Applying the model constraints, we can write, \vspace{1mm} \begin{equation} P({y_{i,t}}=\ell_i \mid {h_{i,t}}) = \begin{cases} 1, & h_{i,t} \in \mathcal{H}_{i,{y_{i,t}=\ell_i}} \\ 0, & h_{i,t} \not\in \mathcal{H}_{i,{y_{i,t}=\ell_i}}. \end{cases} \label{feqn1-constraint} \end{equation} Finally, the FLDCRF model in equation \eqref{feqn1-new} can be simplified by \eqref{feqn1-explanation} and \eqref{feqn1-constraint} as: \vspace{2mm} \begin{equation} \label{feqn3} P\Big( \{{\textbf{y}_i}\}_{1:L} \mid \textbf{x}, \theta \Big) = \sum_{\{{\textbf{h}_i}\}_{1:L} : \forall h_{i,t} \in \textit{$\mathcal{H}$}_{i,{y_{i,t}}}} P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big). \end{equation} $P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big)$ is defined from the CRF formulation, \vspace{2mm} \begin{equation} \label{feqn4} P\Big(\{{\textbf{h}_i}\}_{1:L} \mid \textbf{x}, \theta\Big) = \frac{1}{\textbf{\textit{Z}}(\textbf{x},\theta)} \exp\left(\sum_k \theta_k \cdot F_k\Big(\{{\textbf{h}_i}\}_{1:L},\textbf{x}\Big)\right), \end{equation} \noindent where index $k$ ranges over all parameters $\theta = \{\theta_k\}$ and \( \textbf{\textit{Z}}(\textbf{x},\theta) \) is the partition function defined as: \vspace{2mm} \begin{equation} \label{feqn5} \textbf{\textit{Z}}(\textbf{x},\theta) = \sum_{\{{\textbf{h}_i}\}_{1:L}} {\exp\left(\sum_k \theta_k \cdot F_k\Big(\{{\textbf{h}_i}\}_{1:L},\textbf{x}\Big)\right)}. \end{equation} In this paper, we only assume Markov connections (as depicted in Figures \ref{fig:FLDCRF single-label model} and \ref{fig:FLDCRF-variants}) along different hidden layers.
Therefore, the feature functions \(F_k \)'s are defined as: \vspace{2mm} \begin{equation} \label{feqn6} F_k\Big(\{{\textbf{h}_i}\}_{1:L},\textbf{x}\Big) = \sum_{t=1}^T f_k\Big(\{h_{i,t-1}\}_{1:L}, \{h_{i,t}\}_{1:L}, \textbf{x}, \textit{t}\Big). \end{equation} Also, we only allow different hidden layers to influence each other at the same time instant $t$. Thus, each component function $f_k\big(\{h_{i,t-1}\}_{1:L}, \{h_{i,t}\}_{1:L}, \textbf{x}, \textit{t}\big)$ can be a \textit{state} function $s_k(h_{i,t}, \textbf{x}, t) $, a \textit{transition} function $t_k(h_{i,t-1}, h_{i,t}, \textbf{x}, t) $ or an \textit{influence} function $i_k\big(h_{i,t}, h_{j,t}, \textbf{x}, \textit{t}\big)$, $i,j \in \{1:L\}$. We define \textit{state} and \textit{transition} functions by the following indicator functions, \vspace{1mm} \begin{equation} \begin{split} \label{eqn-state_tran} s_k(h_{i, t}, \textbf{x}, \textit{t}) &= \mathbbm{1}_{\{(h_{i, t},x_t)= k\}}, \\ t_k(h_{i, t-1}, h_{i, t}, \textbf{x}, \textit{t}) &= \mathbbm{1}_{\{(h_{i, t},h_{i, t-1})= k\}}, \end{split} \end{equation} \noindent for discrete observations \textbf{x}. For continuous observations \textbf{x}, state functions are defined by: \vspace{1mm} \begin{equation} \label{eqn-state} s_k(h_{i, t}, \textbf{x}, \textit{t}) = \mathbbm{1}_{\{h_{i, t} = k\}} \cdot x_t. \end{equation} We define \textit{influence} functions $i_k\big(h_{i,t}, h_{j,t}, \textbf{x}, \textit{t}\big)$ over two or more (depending on the number of inter-related hidden layers) hidden variables between layers as: \vspace{2mm} \begin{equation} \label{feqn-state_tran} i_k\big(h_{i,t}, h_{j,t}, \textbf{x}, \textit{t}\big)= \mathbbm{1}_{\big\{\big(h_{i,t}, h_{j,t} \big)= k\big\}}, \quad i,j \in \{1:L\}. \end{equation} We avoid longer range influences (e.g., $h_{i,t-1}$ to $h_{j,t}$, $i,j \in \{1:L\} $) to reduce parameters. For the interaction model depicted in Fig.
\ref{fig:FLDCRF-variants}b, the mathematical details remain the same, with only a minor change in the \textit{state} function, which becomes $s_k(h_{i,t}, x_{i,t}$) instead of $s_k(h_{i,t}, x_t$). An FLDCRF single-label variant with 1 hidden layer and $>$1 hidden states per label corresponds to an LDCRF, while 1 layer and 1 hidden state per label constitutes an LCCRF. \subsubsection{Training model parameters} \label{FLDCRF-training} We estimate the model parameters by maximizing the conditional log-likelihood of the training data given by: \vspace{2mm} \begin{equation} \label{feqn7} \textit{\textbf{L}}(\theta) = \sum_{n=1}^N {\log P\left(\{{\textbf{y}_i}\}_{1:L}^{\left(n\right)} \mid \textbf{x}^{\left(n\right)}, \theta\right)} - \frac{{\parallel\theta\parallel}^2}{2\sigma^2}, \end{equation} \noindent where \textit{N} is the total number of available labeled sequences. The second term in equation \eqref{feqn7} is the log of a Gaussian prior with variance \(\sigma^2\). \subsubsection{Inference} \label{FLDCRF-inference} Multiple label sequences \({\textbf{y}_i}\), $i = 1:L$, can be inferred from the same graph structure by marginalizing over the other labels: \vspace{1mm} \begin{equation} \label{feqn8} \hat{\textbf{y}}_i = \text{argmax}_{\textbf{y}_i} \quad \sum_{\big\{\{{\textbf{y}_i}\}_{1:L} - \; \textbf{y}_i\big\}} P\left(\{{\textbf{y}_i}\}_{1:L} \mid \textbf{x}, \hat{\theta}\right), \end{equation} \noindent where $P\left(\{{\textbf{y}_i}\}_{1:L} \mid \textbf{x}, \hat{\theta}\right)$ is obtained from \eqref{feqn3} with estimated parameters $\hat{\theta}$.
At each instant $t$, the marginals $P(\{h_{i,t}\}_{1:L} \mid {x_{1:t}}, \hat{\theta})$ are computed and summed according to the disjoint sets of hidden states to obtain joint estimates of the desired labels $\hat{y}_{i,t}$, $t = 1,2, ...$, $\forall i = 1:L $, as follows: \vspace{2mm} \begin{equation} \label{feqn9} P(\{y_{i,t}\}_{1:L} \mid \textbf{x}) = \sum_{\{h_{i,t}\}_{1:L}:h_{i,t} \in \textit{$\mathcal{H}$}_{i,{y_{i,t}} }} P\left(\{h_{i,t}\}_{1:L} \mid {x_{1:t}}, \hat{\theta}\right). \end{equation} After marginalizing according to \eqref{feqn8}, the label $\hat{y}_{i,t}$ corresponding to the maximum probability is inferred. As the intention prediction problem must be solved online, we compute $P\left(\{h_{i,t}\}_{1:L} \mid {x_{1:t}}, \hat{\theta}\right)$ by the forward algorithm. The forward-backward algorithm \cite{HMM} and the Viterbi algorithm \cite{Viterbi} can also be applied to problems where online inference is not required. \section{Feature Extraction} \label{sec:features-labels} We describe the process to prepare our feature vector $x_t$ in this section. $x_t$ comprises two components: context features, $x_{c,t}$, and motion features, $x_{m,t}$. \subsection{Context Features} \label{subsec:context} Context features employed in our work are of two types: \begin{figure}% \centering \subfloat[\scriptsize{}]{{\includegraphics[scale=0.3]{Scene_sample.pdf} }}% \quad \subfloat[\scriptsize{}]{{\includegraphics[scale=0.3]{semantic_seg.pdf} }}% \quad \subfloat[\scriptsize{}]{{\includegraphics[scale=0.35]{pedestrian_location_context.pdf} }}% \quad \subfloat[\scriptsize{}]{{\includegraphics[scale=0.4]{Depth_map.pdf} }}% \quad \subfloat[\scriptsize{}]{{\includegraphics[scale=0.27]{bb.pdf} }} \caption{Feature extraction: (a)-(c) depict pedestrian location context. Semantic segmentation of the scene in (a) is presented in (b). Road pixels are highlighted. (c) shows the different regions around the pedestrian bounding box used to capture necessary location context variables.
(d) highlights pre-calibrated depth lines. Figures taken from the NTU dataset. (e) 9 points are selected within a pedestrian bounding box during motion feature extraction.}% \label{fig:Context}% \end{figure} \begin{itemize} \item{\textbf{Pedestrian location context}: Also called \textit{spatial context} in the paper, this context variable encodes the location (e.g., on road, at curb, away) of the pedestrian in a scene.} \item{\textbf{Vehicle interaction context}: We call it \textit{vehicle context} in the paper. This context variable contains approximate longitudinal distance (depth) and velocity measures of the vehicles nearest to the pedestrian of interest.} \end{itemize} \subsubsection{Pedestrian location context} Fig. \ref{fig:Context}(a-b) show a scene and its semantic segmentation. We utilize two segmentation categories, road pixels and non-road pixels. Road pixels are directly obtained from the pre-trained semantic segmentation software \cite{semantic-seg}, and all other pixels are marked as non-road pixels. We highlight 3 different regions around the pedestrian bounding box and their considered dimensions in Fig. \ref{fig:Context}c. We compute the mode of the refined semantic values (road or non-road) in each of the three regions, given by $\{m_1, m_2, m_3\}$, which constitute the location context feature vector $x_{sp,t}$. \subsubsection{Vehicle interaction context} Longitudinal distances (or depths) and velocities of approaching vehicles play a significant role in determining the crossing intention of a pedestrian. \vspace{1mm} \noindent\textbf{Depth estimation:} As existing techniques for monocular depth estimation are not sufficiently reliable, we employ a pre-calibrated approach. To estimate pedestrian and vehicle depths from the camera, we pre-calibrate a scene with 10m, 20m and 30m depth lines (see Fig. \ref{fig:Context}d).
We assume that once a camera is steadily mounted on a vehicle, these depth lines remain approximately the same during driving time, unless the road ahead has a high slope\footnote{One of the reviewers kindly pointed out that the depth lines may vary with tyre pressure. We will look to replace this with a better monocular depth estimation technique in the future.}. The depth lines can be written as: \vspace{1mm} \begin{equation} \label{eqn:st_line} y = mx +c, \end{equation} \noindent where $(x,y)$ is a point co-ordinate on the image (in pixels). We assume that the slope $m$ and the intercept $c$ (in pixels) of the straight line have the following approximate exponential relations with depth $d$ (in meters) from the camera: \vspace{2mm} \begin{equation} \label{eqn:slope} m(d) = k_m - p_m \cdot e^{{-} dl_m}, \end{equation} \begin{equation} \label{eqn:intercept} c(d) = k_c - p_c \cdot e^{{-} dl_c}, \end{equation} \noindent where $m(d)$ and $c(d)$ are known for $d$ $=$ 10, 20 and 30. $k_m, p_m, l_m, k_c, p_c$ and $l_c$ can be determined by solving equations \eqref{eqn:slope} and \eqref{eqn:intercept} using these 3 pairs of values of $\{m(d), c(d), d\}$. Once these parameters are obtained, the depth of any given point $(x,y)$ in the image can be approximately determined from \eqref{eqn:st_line}, \eqref{eqn:slope} and \eqref{eqn:intercept}. We consider the bottom center pixels of the concerned pedestrian and vehicle bounding boxes to estimate their depths. \vspace{1mm} \noindent\textbf{Velocity estimation:} To obtain velocity measures of the vehicles nearest to a pedestrian in the NTU dataset, we compute the mean optical flow \cite{Optical Flow} magnitude inside the corresponding vehicle bounding boxes. We utilize the provided ego-vehicle velocity labels in the JAAD dataset. At each instant, the ego-vehicle is assigned one of these velocity labels: `moving slow', `moving fast' or `standing'.
Some instants only have the ego-acceleration labels (`slowing down' and `speeding up') and do not contain velocity labels. We assign velocity labels to these instants by imputation/observation. We ignore the ego-acceleration labels in this paper. \vspace{1mm} The complete (depth and velocity) vehicle context for the NTU dataset is given by $x_{vNTU,t}$ = \{$ncl$, $ncr$, $nclf$, $ncrf$\}, where $ncl$ and $ncr$ are the relative depths (in meters) of the nearest vehicles to a pedestrian on the left and right lane respectively, and $nclf$, $ncrf$ are the mean optical flow of the non-pedestrian pixels within the bounding boxes of the respective vehicles. At instants with no influencing vehicle, we set $ncl$ and $ncr$ to -1 and $nclf$ and $ncrf$ to 0. The complete vehicle context for the JAAD dataset is given by $x_{vJAAD,t}$ = \{$ego\_dep$, $ego\_vel$\}, where $ego\_dep$ is the estimated pedestrian depth from the ego-vehicle and $ego\_vel$ is the ego-velocity attribute (`moving slow', `moving fast' etc.). As the other vehicles do not influence pedestrian behaviours in most cases within the JAAD dataset, we keep things simpler by only considering the ego-vehicle context. We defer more complete interactions, including other vehicles in the scene, to our future work. \subsection{Motion Features} \label{subsec:motion} We propose a computationally inexpensive heuristic method to capture relevant pedestrian motion dynamics. The features obtained by this method perform comparably to one of the better benchmarks \cite{Daimler} on the Daimler dataset \cite{Daimler Dataset} (see Appendix \ref{Appen:Daim}). First, we apply a pre-trained object detector \cite{obj-det} to detect and track pedestrians in the videos\footnote{For considered pedestrians in the NTU data. We utilized the provided bounding box annotations for pedestrians in the JAAD data.}. For extracting pedestrian motion features at $t$, we consider a sliding window of length $\tau$, i.e., frames $t-\tau+1$ to $t$.
The feature extraction process involves three steps: $\bullet$ \textbf{Pre-processing:} We run a Kalman filter on the tracked pedestrian bounding boxes to remove noise/occlusions. Further occlusions (e.g., by a tree) are linearly interpolated. $\bullet$ \textbf{Real world lateral displacement estimate:} A pedestrian's crossing/not-crossing behaviour is primarily characterized by the real world lateral motion. Lateral motion captured in an image sequence contains components from both lateral and longitudinal pedestrian movements (w.r.t.~the ego-moving direction) in the real world. In addition, the real world longitudinal movement is mixed with the ego-vehicle motion. Assuming negligible pedestrian movement along the vertical direction in the real world, we obtain approximate estimates of the real world lateral motion in the image frame using the camera matrix information (see Appendix \ref{Appen:long-motion}). $\bullet$ \textbf{Fitting non-linear dynamics:} We extract real world lateral displacement estimates for 9 symmetrical points inside the pedestrian bounding box, given by $\{v_p\}$, $p$ = $1:9$ (see Fig. \ref{fig:Context}e). The choice of the points is arbitrary, as long as they are well distributed within the bounding box. Each of these points is tracked and processed over the sliding window to generate $\tau - 1$ lateral displacement values $v_{p,t_1}$, $t-\tau + 2 \leq t_1 \leq t$. Finally, we fit a degree-2 curve to the values $\{{v}_{p,t-\tau+2:t}\}$, $p$ = $1:9$, by least-squares fitting: \vspace{1mm} \begin{equation} \label{eqn-non-linear} v_{p,t_1} = {a_0}_{p,t} + {a_1}_{p,t}t_1 + {a_2}_{p,t}t_1^2 , \end{equation} \noindent resulting in a 27 dimensional feature set, $x_{m,t} = \{\{{a_0}_{p,t}\}_{p = 1:9}, \{{a_1}_{p,t}\}_{p = 1:9}, \{{a_2}_{p,t}\}_{p = 1:9}\}$. $\tau$ must be large enough ($\ge$4) for a good fit, but not so large as to include unnecessary past frames in the current dynamics computation.
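The per-point fit of \eqref{eqn-non-linear} is a standard polynomial least-squares problem. The following is a minimal NumPy sketch of this step (the function and variable names are ours, purely illustrative, and not taken from our released implementation):

```python
import numpy as np

def motion_features(displacements):
    """Fit a degree-2 curve to each tracked point's lateral displacements.

    displacements: array of shape (9, tau - 1), one row per tracked point,
    holding that point's per-frame lateral displacement estimates over the
    sliding window.
    Returns the 27-dimensional motion feature vector
    (all a0 coefficients, then all a1, then all a2).
    """
    t = np.arange(displacements.shape[1])       # relative frame index t_1
    a0s, a1s, a2s = [], [], []
    for v in displacements:                     # one tracked point at a time
        # np.polyfit returns coefficients highest power first: [a2, a1, a0]
        a2, a1, a0 = np.polyfit(t, v, deg=2)
        a0s.append(a0); a1s.append(a1); a2s.append(a2)
    return np.concatenate([a0s, a1s, a2s])      # x_{m,t}
```

With $\tau = 10$, each row holds 9 displacement values and the returned vector corresponds to $x_{m,t}$.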
We set $\tau = 10$, but for test sequences with fewer than 10 frames, we adjust $\tau$ accordingly to generate the motion features. We also tested optical flow features and visual representations as in another work from our group \cite{Michael}. However, including such features resulted in poorer performance on our datasets; thus we do not include such features in this paper. We plan to test such visual representations using end-to-end models in the future. \section{Datasets} \label{sec:dataset} We evaluate on an in-house collected NTU dataset and the public real-life JAAD dataset \cite{JAAD}. The NTU dataset is captured inside the Nanyang Technological University campus by a pair of static cameras at 30 fps framerate and $1920 \times 1080$ resolution. Since our aim is to capture natural vehicle influences on pedestrian intention, we place two synchronized cameras to capture the whole scene of interest, so that approaching vehicles on both sides of the pedestrian can be captured, as depicted in Fig. \ref{fig:NTU_dataset_sample}. In addition, Camera1 is placed in a way to simulate a camera mounted on a stationary ego-vehicle. \begin{figure}[h] \includegraphics[scale=0.45]{NTU_data_sample.jpg} \centering \caption{NTU dataset sample. Camera1 (left) and Camera2 (right) together capture the whole scene of interest.} \label{fig:NTU_dataset_sample} \end{figure} In the videos, we had actors and actual pedestrians strolling in a natural road scene with vehicles. Actors were not given any specific instructions and were asked to move around the scene naturally. We extract 35 continuous crossing and 35 stopping sequences from the videos for our evaluation. The JAAD dataset \cite{JAAD} is a recent pedestrian benchmark which captures real-world pedestrian behaviours in complex scenes from cameras mounted on moving vehicles. It contains 346 video clips at 1920 $\times$ 1080 resolution (a few at 1280 $\times$ 720) and 15 fps framerate, each containing multiple pedestrian sequences.
All pedestrians of interest are annotated by bounding boxes and are assigned certain movement labels (moving, crossing, stopping, standing, etc.) and personal details (age, sex, movement direction, etc.). Each frame in the videos is also associated with certain traffic attributes, such as zebra crossing, parking lot, etc. To the best of our knowledge, there is no JAAD data subset proposed for early pedestrian intention prediction. We extract a total of 120 pedestrian sequences distributed across continuous crossing, stopping, starting, and standing scenarios (see Appendix \ref{Appen:JAAD} for sequence details). \section{Experimental Setup} \label{sec:exp-setup} We describe here our experimental data, extracted features, labels and model specifications. \vspace{2mm} \noindent \textbf{Data for evaluation:} The distribution of test sequences in the two datasets is presented in Table \ref{table:sequence_dist}. We downsampled the NTU dataset sequences to 15 fps. We detect and track \cite{obj-det} pedestrians in Camera1 images. We also define pedestrian \textit{spatial context} only in Camera1 images. Vehicles are detected and tracked in both Camera1 and Camera2 images. We eliminate ambiguity within the common regions of Camera1 and Camera2 by a pre-defined separator (marked by black lines, see Fig. \ref{fig:NTU_dataset_sample}). The object detector could efficiently detect pedestrians and vehicles up to 80 meters from the camera. For pedestrian sequences in the JAAD data, we utilize the provided bounding box annotations.
\vspace{2mm} \begin{table}[h] \caption{Distribution of test sequences in the different datasets.} \label{table:sequence_dist} \begin{center} \renewcommand{\arraystretch}{1.2}% \setlength\tabcolsep{1.7pt} \begin{tabular}{|c |c | c| c| c| c|} \hline \backslashbox{Dataset}{Seq type} & \shortstack{Continuous \\ crossing} & Stopping & Starting & Standing & \shortstack{Total \#\\ instances} \\[0.4ex] \hline\hline NTU & 35 & 35 & - & - & 9276 \\ \hline JAAD & 45 & 20 & 40 & 15 & 19650 \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table} \vspace{1mm} \noindent \textbf{Features and labels:} In the NTU data, we try to predict the stopping event 30 frames (2 sec) before the \textit{stopping instant}, i.e., \textit{pred\_ahead} = 30. Therefore, we label data from 30 frames before the \textit{stopping instant} in stopping sequences as `not-crossing' during training. The remaining frames in stopping sequences and all frames in continuous crossing sequences are labeled `crossing'. The annotations are depicted in Fig. \ref{label_jaad}. We do not have standing and starting sequences for the NTU data. The complete feature set (see Section \ref{sec:features-labels}) on the NTU data is given by $x_{t,NTU} = \{x_{m,t}, x_{sp,t}, x_{vNTU,t}\}$. For the JAAD dataset, we set \textit{pred\_ahead} = 20 (1.33 sec). The labeling is illustrated in Fig. \ref{label_jaad}. The considered feature set on the JAAD data is given by $x_{t,JAAD} = \{x_{m,t}, x_{sp,t}, x_{vJAAD,t}\}$. \begin{figure}[h] \includegraphics[scale=0.28]{labeling_JAAD2.pdf} \centering \caption{Labeling the training sequences in the JAAD dataset. `c' denotes `crossing' and `n' denotes `not-crossing' intention. For the JAAD data, \textit{pred\_ahead} = 20. The NTU data follows similar labeling with \textit{pred\_ahead} = 30.} \label{label_jaad} \end{figure} \vspace{1mm} \noindent \textbf{Models compared:} We utilized the FLDCRF single-label variant and LSTM as our learning models.
We show results for three different systems for both the NTU and JAAD data: 1) Only pedestrian motion based, referred to as $mtn$ (or $m$) in the Results section. This system only utilizes pedestrian motion as input features, i.e., $x_t$ = \{$x_{m,t}$\}. 2) Pedestrian motion and location based, referred to as $mtn+spa$ (or $ms$). This system takes the pedestrian's semantic location (see Section \ref{subsec:context}) along with pedestrian motion as inputs, i.e., $x_t$ = \{$x_{m,t}, x_{sp,t}$\}. 3) Pedestrian motion, location and vehicle context based, referred to as $mtn+spa+veh$ (or $msv$). This system considers vehicle context features (see Section \ref{subsec:context}) together with pedestrian motion and location, i.e., $x_t$ = \{$x_{m,t}, x_{sp,t}, x_{v,t}$\}. We employ a nested cross-validation (with 5 outer folds and 4 inner folds) for selecting models and generating results. For selecting LSTM models, we tested the following numbers of hidden units ($HU$): 2, 5, 10, 20, 50, 150, 300, 500. We trained each model for 1000 epochs, saving model performances at every 100 epochs during validation. We fed all training sequences at each epoch at the rate of 1 sequence per batch. An FLDCRF single-label model has two hyper-parameters: number of layers ($num\_layer$) and number of hidden states per label ($\{num\_state_i\}$) along layers $i= 1:L$. For simplicity, we keep the same $num\_state$ along all layers, i.e. $num\_state_i = num\_state$, $\forall i = 1:L$. We denote such a model by FLDCRF-$<$$num\_layer$$>$/$<$$num\_state$$>$. We selected our models from the following $<$$num\_layer$$>$/$<$$num\_state$$>$ combinations: 1/1, 1/2, 1/3, 1/4, 1/5, 1/6, 2/1, 2/2, 2/3. We trained FLDCRF model parameters with Stan's \cite{Stan} in-built BFGS optimizer. We implemented the LSTM models in Keras, a deep learning library for Python running TensorFlow in the backend. LSTM parameters were trained with the default Adam optimizer in Keras. We plotted the results in MATLAB 2015b \cite{MATLAB}.
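The nested cross-validation protocol described above can be sketched as follows. This is a minimal illustration, not the authors' code: \texttt{train\_and\_score} is a hypothetical stand-in for training one FLDCRF/LSTM setting on the given training indices and evaluating the selection metric on the given validation indices.

```python
import numpy as np

def nested_cv(sequences, settings, train_and_score, n_outer=5, n_inner=4, seed=0):
    """Nested CV: inner folds pick a hyper-parameter setting,
    outer folds score the selected setting; scores are averaged."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(sequences))
    outer = np.array_split(idx, n_outer)
    outer_scores = []
    for k, test_idx in enumerate(outer):
        train_idx = np.concatenate([f for j, f in enumerate(outer) if j != k])
        inner = np.array_split(train_idx, n_inner)
        # best setting = highest mean score over the inner validation folds
        best = max(settings, key=lambda s: np.mean([
            train_and_score(s,
                            np.concatenate([f for j, f in enumerate(inner) if j != v]),
                            inner[v])
            for v in range(n_inner)]))
        outer_scores.append(train_and_score(best, train_idx, test_idx))
    return float(np.mean(outer_scores))  # performance averaged over outer folds

# Hyper-parameter grids from the text:
lstm_grid = [2, 5, 10, 20, 50, 150, 300, 500]                     # hidden units
fldcrf_grid = ["1/1", "1/2", "1/3", "1/4", "1/5", "1/6",
               "2/1", "2/2", "2/3"]                               # layers/states
```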
LSTM models are trained with an Nvidia Tesla K80 GPU, while the FLDCRF models are trained with an Intel Xeon E5-1630 v3 CPU. \section{Results} \label{sec:results} We compare different systems on the NTU and JAAD datasets using existing state-of-the-art (LSTM) and proposed (FLDCRF) sequential models. We evaluate the models by two metrics: stopping probability vs time and classification accuracy vs time. Stopping probability corresponds to the probability of the `not-crossing' (laterally static) label. We ideally want high classification accuracy of appropriate class labels (e.g., `not-crossing' in stopping sequences) over the prediction interval. At the same time, it is desired to have high and stable average probability values of appropriate class labels over the prediction interval. It is also preferred that sequences of the same type have a small standard deviation in predicted probability values at each instant. So, our model hyper-parameters are chosen based on the following metric ($mt$), \vspace{1mm} \begin{equation} \label{eqn:metric} mt = \frac{1}{T_p} \sum_{t=T_l.f}^{T_u.f} (acc_t + \frac{1}{N_v} \sum_{i=1}^{N_v} prob_{i,t} - \frac{1}{N_s} \sum_{j=1}^{N_s} std_{j,t}), \end{equation} \noindent where $acc_t$ denotes overall classification accuracy, $prob_{i,t}$ denotes predicted probability of the appropriate class (e.g., `not-crossing' in stopping sequences) for sequence $i$ and $std_{j,t}$ is the standard deviation in probability values among sequences of type $j$ at $t$. $f$ is the dataset framerate (15 fps for both datasets). The prediction interval is given by $[T_l \: T_u]$ s and $T_p = \mid T_u.f - T_l.f \mid + 1$. $N_v$ is the number of considered validation sequences and $N_s$ is the number of considered sequence types (2 for NTU and 4 for JAAD data). \subsection{NTU dataset} We compare three different systems -- $mtn$ (\textit{m}), $mtn+spa$ (\textit{ms}) and $mtn+spa+veh$ (\textit{msv}) -- with FLDCRF and LSTM on the NTU dataset.
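The selection metric in \eqref{eqn:metric} amounts to the following computation, sketched here over arrays that already cover the $T_p$ frames of the prediction interval (the array shapes are our assumption for illustration; this is not the evaluation code used for the reported numbers):

```python
import numpy as np

def selection_metric(acc, prob, std):
    """Per-frame score acc_t + mean_i prob_{i,t} - mean_j std_{j,t},
    averaged over the T_p frames of the prediction interval.

    acc  : (T_p,)      overall classification accuracy at each frame
    prob : (N_v, T_p)  predicted probability of the appropriate class
                       for each of the N_v validation sequences
    std  : (N_s, T_p)  std. dev. of probabilities within each of the
                       N_s sequence types
    """
    acc, prob, std = np.asarray(acc), np.asarray(prob), np.asarray(std)
    per_frame = acc + prob.mean(axis=0) - std.mean(axis=0)
    return float(per_frame.mean())  # (1/T_p) * sum over the interval
```

Hyper-parameter settings maximizing this value over the chosen interval are selected on the inner validation folds.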
`0' along the x-axis (see Fig. \ref{fig:NTU stopping} - \ref{fig:all} and Fig. \ref{fig:JAAD-failed}) represents the event (\textit{crossing}, \textit{stopping}, etc.) instants. Positive and negative values along the x-axis indicate gain and loss in prediction time (in seconds), respectively. Since we try to predict 2 sec (30 frames) ahead of the event, we will limit our analysis to the [2 -0.5] s prediction interval. Model hyper-parameters were chosen by considering the value of the metric in \eqref{eqn:metric} within the [2 -0.5] s interval. Table \ref{table:system-select} presents a comparison between LDCRF (FLDCRF with 1 hidden layer), FLDCRF and LSTM over nested validation sets on $mtn+spa+veh$ (\textit{msv}) systems. Two-layered FLDCRF settings (2/1, 2/2, 2/3) performed better than the one-layered settings (1/1, 1/2, 1/3, 1/4, 1/5, 1/6) within most of the inner validation sets. The new models (2/1, 2/2, 2/3) also help to enhance the overall FLDCRF performance. In general, LSTM models perform worse than the FLDCRF models on the validation sets. We select the best performing FLDCRF and LSTM settings for each of the three systems on the inner validation sets and average their performance on the outer folds to obtain our results. \begin{table}[h!] \vspace{3mm} \caption{Comparison between FLDCRF, LDCRF and LSTM over nested validation sets on $mtn+spa+veh$ system (on the NTU data).} \label{table:system-select} \begin{center} \renewcommand{\arraystretch}{1.2} \setlength\tabcolsep{1.7pt} \begin{tabular}{|c |c | c| c| c| c| c|} \hline \backslashbox{Model}{Valid.
set} & 1 & 2 & 3 & 4 & 5 & \shortstack{Average \\ performance} \\ [0.4ex] \hline\hline FLDCRF-1layer & 1.4324 & 1.4125 & \textbf{1.4447} & \textbf{1.4458} & 1.4436 & 1.4358\\ \hline FLDCRF-2layers & \textbf{1.4330} & \textbf{1.4199} & \textbf{1.4447} & 1.4419 & \textbf{1.4439} & 1.4366\\ \hline FLDCRF-overall & \textbf{1.4330} & \textbf{1.4199} & \textbf{1.4447} & \textbf{1.4458} & \textbf{1.4439} & \textbf{1.4374}\\ \hline LSTM & 1.3315 & 1.3355 & 1.3077 & 1.4085 & 1.3706 & 1.3508\\ \hline \end{tabular} \end{center} \vspace{-3mm} \end{table} Fig. \ref{fig:NTU stopping} displays the performances of different systems on 35 stopping sequences of the NTU dataset. The motion-only FLDCRF model ($FLDCRF-m$) has consistently lower average stopping probability (see Fig. \ref{fig:NTU stopping}a) and classification accuracy (see Fig. \ref{fig:NTU stopping}b) values than the models with context in the earlier regions of the curves (before `0'). Adding location context to FLDCRF ($FLDCRF-ms$) slightly improves the time, probability and accuracy of predicting the stopping intention in earlier regions. The results improve significantly, with higher average stopping probability and accuracy, on adding vehicle context features to the FLDCRF model $\left(FLDCRF-msv\right)$. On average, all LSTM models perform worse than the corresponding FLDCRF models in terms of prediction time, accuracy and probability values. \begin{figure}[h!] \centering \subfloat[\scriptsize{Stopping probability vs time.}]{\label{fig:Stopping-a} \includegraphics[scale=0.45, height=3.5cm]{Stopping_sequences_NTU_adj.pdf}} \\[-0.2ex] \vspace{-2.2mm} \subfloat[\scriptsize{Classification accuracy vs time.}]{\label{fig:Stopping-b} \includegraphics[scale=0.45, height=3.5cm]{Stopping_accuracy_NTU.pdf}} \\[-0.2ex] \caption{Performance of different systems on stopping sequences of the NTU dataset.} \label{fig:NTU stopping} \end{figure} \begin{figure}[h!]
\centering \subfloat[\scriptsize{Stopping probability vs time.}]{\label{fig:Stopping-con-a} \includegraphics[scale=0.45, height=3.5cm]{Stopping_sequences_NTU_wc_adj.pdf}} \\[-0.2ex] \vspace{-2.2mm} \subfloat[\scriptsize{Classification accuracy vs time.}]{\label{fig:Stopping-con-b} \includegraphics[scale=0.45, height=3.5cm]{Stopping_accuracy_NTU_wc.pdf}} \\[-0.2ex] \caption{Performance for stopping sequences with positive vehicle context in the NTU dataset.} \label{fig:NTU stopping context} \end{figure} Fig. \ref{fig:NTU stopping context} compares different models on stopping scenarios aided by positive vehicle context ($ncl\ge0$ and/or $ncr\ge0$). At each instant of the plots, only frames with such vehicle context values are considered. While we find the models without vehicle context ($mtn$ and $mtn+spa$) to perform similarly to the earlier results, the models with vehicle context ($mtn+spa+veh$) display an enhanced prediction performance with better average probability and accuracy in earlier regions. An average stopping probability of 0.6 and a classification accuracy of 0.7 were achieved, and consistently improved, from $\sim$0.9 seconds before the actual stopping events by the $FLDCRF-msv$ system. The vehicle context yields a gain of $\sim$0.5 seconds in prediction time over the $FLDCRF-ms$ system. $LSTM-msv$ performance also improves in these scenarios, but the accuracy values still fail to approach 1 (like the FLDCRF models) in the vicinity of the stopping event. Different systems are compared in Fig. \ref{fig:NTU crossing} on 35 continuous crossing sequences of the NTU dataset. Accuracy-wise, all $mtn$ and $mtn+spa+veh$ type models display quite consistent and accurate (close to 1) performance across time. However, the joint models ($mtn+spa+veh$) output smaller average probability values, indicating better reliability of the system.
LSTM and FLDCRF joint ($mtn+spa+veh$) models perform comparably across time, with the FLDCRF model exhibiting marginally better probability and accuracy values at certain time points. \begin{figure}[h!] \centering \subfloat[\scriptsize{Stopping probability vs time.}]{\label{fig:Crossing-a} \includegraphics[scale=0.45, height=3cm]{Crossing_sequences_NTU.pdf}} \\[-0.2ex] \vspace{-2.2mm} \subfloat[\scriptsize{Classification accuracy vs time.}]{\label{fig:Crossing-b} \includegraphics[scale=0.45, height=2.2cm]{Crossing_accuracy_NTU.pdf}} \\[-0.2ex] \caption{Results on continuous crossing sequences of the NTU dataset.} \label{fig:NTU crossing} \end{figure} \begin{figure}[h] \includegraphics[scale=0.45, height=3cm]{All_accuracy_NTU.pdf} \centering \caption{Overall classification accuracy of different systems on the NTU dataset.} \label{fig:all-NTU} \end{figure} The overall accuracy of the considered systems at different prediction horizons on the NTU dataset is displayed in Fig. \ref{fig:all-NTU}. `0' in the figure represents the event (\textit{stopping} and \textit{crossing}) instants of the respective sequences. The $FLDCRF-msv$ system proves to be the most accurate at the majority of early prediction instants, while performing comparably to certain systems ($FLDCRF-ms$ and $FLDCRF-m$) after the `0' mark, making it the primary choice for an early and accurate intention prediction system on the NTU data. \vspace{2.5mm} \noindent \textbf{Failed case analysis:} Fig. \ref{fig:NTU-failed} shows individual sequence outputs by the $FLDCRF-msv$ system. We highlight sequences (in bold red) where the system fails to make an early prediction (i.e., before `0') of the events. All crossing events were predicted correctly before the respective \textit{crossing instants} (see Fig. \ref{fig:NTU-failed}a). An unwanted spike can be observed on sequence $crossing\_17$ near the `0' mark, caused by an approaching vehicle.
However, the system was able to correct the prediction before the \textit{event instant} to avoid any critical failure. Individual probability outputs of the 35 stopping sequences by the $FLDCRF-msv$ system are displayed in Fig. \ref{fig:NTU-failed}b. The system fails to make an early prediction (before `0') of the stopping event in the two highlighted sequences (in bold red), $stopping\_9$ and $stopping\_32$. \begin{figure}% \centering \subfloat[\scriptsize{Continuous crossing sequences.}]{{\includegraphics[width=4.15cm]{Failed_crossing_NTU.pdf} }}% \subfloat[\scriptsize{Stopping sequences.}]{{\includegraphics[width=4.15cm]{Failed_stopping_NTU_new.pdf} }}% \caption{Individual sequence outputs by $FLDCRF-msv$ on the NTU dataset. Failed cases are shown in bold red.}% \label{fig:NTU-failed}% \end{figure} The $FLDCRF-msv$ system makes the earliest and most accurate predictions of the stopping behaviour among the considered systems while providing better reliability on crossing sequences, indicating the power of including vehicle context and the better performance of FLDCRF over LSTM on the NTU dataset. Next, we investigate our approach on real-life pedestrian behaviour prediction. \subsection{JAAD dataset} On the JAAD dataset, we evaluate on all four types of sequences: 45 continuous crossing, 20 stopping, 40 starting and 15 standing (see Section \ref{sec:nomenclature}). We compare results from FLDCRF and LSTM on three different systems: $mtn$ (\textit{m}), $mtn+spa$ (\textit{ms}) and $mtn+veh$ (\textit{mv}). Since the JAAD data contains a variety of scenarios (some during night-time), semantic segmentation \cite{semantic-seg} performance is not consistent. This results in $mtn+spa$ models performing slightly worse than $mtn$-only models. Therefore, we omit pedestrian location information from our final model, which takes pedestrian motion ($x_{m,t}$) and vehicle context ($x_{vJAAD,t}$) features. Similar to the NTU data, we obtain our test results from a 5-fold nested CV.
We make use of the same metric as in \eqref{eqn:metric}, within the [1.33 -1] s interval, to select hyper-parameter settings on the inner validation sets. FLDCRF (1.3640) improves over the LDCRF (1.3627) average performance on the validation sets. Due to space limitations, we only present classification accuracy vs time by different systems on JAAD sequences. We refer to Appendix \ref{Appen:JAAD} for the stopping probability vs time curves. Fig. \ref{fig:JAAD-accuracy}a compares different systems on JAAD continuous crossing sequences. FLDCRF systems produce comparable and stable performance with more than 95\% average accuracy in the important region of [1 -0.5] s. $LSTM-mv$ consistently performs worse than $FLDCRF-mv$ in early prediction regions and lacks stability in accuracy across time. LSTM systems without vehicle context exhibit significantly poorer performance compared to the other systems. On JAAD stopping scenarios, systems with vehicle context ($FLDCRF-mv$ and $LSTM-mv$) output more stable and earlier predictions of the stopping behaviour (see Fig. \ref{fig:JAAD-accuracy}b) than the $mtn$ and $mtn+spa$ models. These two systems also produce comparable accuracy performances across time, $LSTM-mv$ being marginally better around the stopping instant (`0' mark). $LSTM-m$ and $LSTM-ms$ perform considerably better than $FLDCRF-m$ and $FLDCRF-ms$ respectively, but lack stability in accuracy across time compared to the systems with vehicle context.
\begin{figure}[h]% \centering \subfloat[\scriptsize{Continuous crossing sequences.}]{{\includegraphics[width=4.1cm]{Crossing_accuracy_JAAD_final.pdf} }}% \quad \subfloat[\scriptsize{Stopping sequences.}]{{\includegraphics[width=4.1cm]{Stopping_accuracy_JAAD_final.pdf} }}% \quad \subfloat[\scriptsize{Starting sequences.}]{{\includegraphics[width=4.1cm]{Starting_accuracy_JAAD_final.pdf} }}% \quad \subfloat[\scriptsize{Standing sequences.}]{{\includegraphics[width=4.1cm]{Standing_accuracy_JAAD_final.pdf} }}% \caption{Accuracy performances of different systems vs time on the JAAD data.}% \label{fig:JAAD-accuracy}% \end{figure} All considered systems produce similar prediction patterns on the JAAD starting scenarios (see Fig. \ref{fig:JAAD-accuracy}c), with accuracy performances improving over time, indicating successful predictions of the change of state in pedestrian motion. FLDCRF systems perform considerably better than their LSTM counterparts. $FLDCRF-mv$ performs best among all systems in the early prediction region [1 0] s. However, $FLDCRF-m$ and $FLDCRF-ms$ outperform $FLDCRF-mv$ after the `0' mark. We compare different systems on 15 standing sequences in Fig. \ref{fig:JAAD-accuracy}d. $LSTM-m$ and $LSTM-mv$ perform comparably and better than other systems, reaching a stable 100\% prediction accuracy around/before the critical point. $FLDCRF-mv$ produces a stable 93\% prediction accuracy around and after the critical point, failing to predict the not-crossing behaviour in time in one of the sequences. We will analyze such failed cases shortly. The systems are compared on the task of predicting early transitions (standing-starting and crossing-stopping) in Tables \ref{table:stand-start} and \ref{table:cross-stop}. We consider accuracy of the systems over stipulated prediction windows. As expected, systems with vehicle context $FLDCRF-mv$ and $LSTM-mv$ perform significantly better than systems without vehicle context. 
$FLDCRF-mv$ outperforms $LSTM-mv$ in all early prediction windows. However, $LSTM-mv$ performs marginally better than $FLDCRF-mv$ after the `0' mark on crossing and stopping scenarios considered together (see Table \ref{table:cross-stop}). \begin{table} \vspace{2mm} \caption{Classification accuracy of all considered JAAD standing \& starting sequences by different systems within different prediction windows.} \centering \renewcommand{\arraystretch}{1.2} \setlength\tabcolsep{1.5pt} \begin{tabular}{|p{3.35cm}|c|c|c|c|c|c|} \hline \multirow{2}{5cm}{\textbf{System}} & \multicolumn{6}{c|}{\textbf{Classification Accuracy(\%)}}\\ \cline{2-7} & 2-0 s & 1.5-0 s & 1-0 s & 0.5-0 s & 0-(-0.5) s & 0-(-1) s\\ \hline \hline FLDCRF- (mtn+veh) & \textbf{56.02} & \textbf{57.98} & \textbf{61.02} & \textbf{68.18} & \textbf{77.17} & \textbf{82.50} \\ \hline FLDCRF- (mtn+spa) & 45.84 & 48.17 & 50.11 & 54.32 & 72.53 & 78.52 \\ \hline FLDCRF- (mtn) & 46.15 & 47.84 & 50.45 & 55.23 & 70.91 & 78.18 \\ \hline LSTM- (mtn+veh) & 51.82 & 53.15 & 56.48 & 60.91 & 71.92 & 77.95 \\ \hline LSTM- (mtn+spa) & 35.24 & 36.84 & 38.07 & 40.91 & 56.16 & 66.82 \\ \hline LSTM- (mtn) & 33.59 & 34.46 & 37.05 & 42.05 & 54.55 & 66.36 \\ \hline \end{tabular} \label{table:stand-start} \vspace{-3mm} \end{table} \begin{table} \vspace{3mm} \caption{Classification accuracy of all considered JAAD continuous crossing \& stopping sequences by different systems within different prediction windows.} \centering \renewcommand{\arraystretch}{1.2} \setlength\tabcolsep{1.5pt} \begin{tabular}{|p{3.35cm}|c|c|c|c|c|c|} \hline \multirow{2}{5cm}{\textbf{System}} & \multicolumn{6}{c|}{\textbf{Classification Accuracy(\%)}}\\ \cline{2-7} & 2-0 s & 1.5-0 s & 1-0 s & 0.5-0 s & 0-(-0.5) s & 0-(-1) s\\ \hline \hline FLDCRF- (mtn+veh) & \textbf{90.47} & \textbf{91.39} & \textbf{91.83} & \textbf{93.08} & 93.68 & 95.29 \\ \hline FLDCRF- (mtn+spa) & 74.85 & 75.57 & 77.21 & 78.85 & 78.63 & 79.62 \\ \hline FLDCRF- (mtn) & 75.93 & 76.31 & 77.88 & 
80.19 & 78.29 & 80.67 \\ \hline LSTM- (mtn+veh) & 86.73 & 87.94 & 90.10 & \textbf{93.08} & \textbf{95.90} & \textbf{95.38} \\ \hline LSTM- (mtn+spa) & 73.53 & 75.77 & 77.12 & 80.38 & 82.39 & 85.87 \\ \hline LSTM- (mtn) & 75.87 & 76.79 & 77.4 & 78.46 & 83.93 & 87.5 \\ \hline \end{tabular} \label{table:cross-stop} \vspace{-3mm} \end{table} $FLDCRF-mv$ and $LSTM-mv$ consistently performed well across all sequence types. Other systems have been better at times but lacked consistency. We can observe the dominant performance of the systems with vehicle context ($FLDCRF-mv$ and $LSTM-mv$) in Fig. \ref{fig:all}, producing superior overall accuracy compared to systems without vehicle context at all prediction instants. Moreover, $FLDCRF-mv$ performs similarly to or better than $LSTM-mv$ at all considered prediction instants, proving to be the best performing model on the JAAD dataset. $FLDCRF-mv$ produces an average accuracy of $\sim$78\% within the 1-0 s window and $\sim$82\% within the 0.5-0 s window, while $LSTM-mv$ outputs $\sim$75\% and $\sim$78\% in the respective windows. Figure \ref{fig:JAAD-runtime} compares the training and inference times required by the considered FLDCRF and LSTM settings on the $mtn+veh$ system. While both models needed similar time for inference, FLDCRF models required significantly less training time compared to the LSTM models. FLDCRF training and inference times can be further reduced by GPU implementation.
\begin{figure}[h] \includegraphics[scale=0.45, height=3cm]{All_accuracy_JAAD_onehot.pdf} \centering \caption{Overall classification accuracy of different systems on JAAD sequences.} \label{fig:all} \end{figure} \begin{figure}% \centering \subfloat[\scriptsize{Average training time per outer fold.}]{{\includegraphics[width=4.1cm]{training_time_wo_av_line.pdf} }}% \quad \subfloat[\scriptsize{Average inference time per frame.}]{{\includegraphics[width=4.1cm]{inference_time_wo_av_line.pdf} }}% \caption{Training and inference times (excluding feature extraction) required by different FLDCRF and LSTM settings on the $mtn+veh$ system.}% \label{fig:JAAD-runtime}% \end{figure} We analyze individual sequence outputs and failed cases by the $FLDCRF-mv$ system below. \vspace{2mm} \noindent \textbf{Failed case analysis:} Fig. \ref{fig:JAAD-failed} depicts failures (in bold red) from the $FLDCRF-mv$ system. We denote a pedestrian sequence by its $video\_id$ and $pedestrian\_id$ in the JAAD dataset, i.e., $<$$video\_id$$>$\_$<$$pedestrian\_id$$>$. Crossing sequences $0161\_1$ and $0177\_2$, where the system fails to establish stable and accurate (stopping probability $<$ 0.5) outputs after -0.25 s, are highlighted in Fig. \ref{fig:JAAD-failed}a. In sequence $0161\_1$, we find a momentary prediction glitch within the [-0.5 -1] s window caused by a temporary hesitance from the pedestrian to continue crossing. The vehicle happened to be quite far from the pedestrian during the glitch, avoiding a critical failure by the system. The failure in sequence $0177\_2$ is caused by inappropriate vehicle context, as the ego-vehicle is moving fast quite near ($<$15 m) to the pedestrian when the crossing event commenced in a different lane. However, the vehicle decelerated within a short period and a stable, accurate output was achieved shortly after the -1.5 s mark. Such errors can be corrected by adding lane information as context.
\begin{figure}% \centering \subfloat[\scriptsize{Continuous crossing sequences.}]{{\includegraphics[width=4.1cm]{Failed_crossing_JAAD_new.pdf} }}% \quad \subfloat[\scriptsize{Stopping sequences.}]{{\includegraphics[width=4.1cm]{Failed_stopping_JAAD_new.pdf} }}% \quad \subfloat[\scriptsize{Starting sequences.}]{{\includegraphics[width=4.1cm]{Failed_starting_JAAD_new.pdf} }}% \quad \subfloat[\scriptsize{Standing sequences.}]{{\includegraphics[width=4.1cm]{Failed_standing_JAAD_new.pdf} }}% \caption{Individual sequence outputs by $FLDCRF-(mtn+veh)$ on the JAAD dataset. Failed cases are shown in bold red.}% \label{fig:JAAD-failed}% \end{figure} All stopping scenarios were predicted correctly by the $FLDCRF-mv$ system, at the latest by 0.75 s after the stopping instant. The system fails to make an early prediction (before `0') of the event on the highlighted sequences ($0334\_2$ and $0336\_1$). A few sequences fall below probability 0.5 after the -1 s mark, but all such cases correspond to a starting event followed by the stopping event. We have found early prediction of the `starting' event quite challenging. Fig. \ref{fig:JAAD-failed}c highlights three starting scenarios that fail to be stable and accurate by -1.5 s on the curves. However, all of them become stable and accurate shortly after the -2 s mark. Standing sequence $0208\_2$ is wrongly predicted as `crossing' by $FLDCRF-mv$ before and well after ($\approx$1 s) the defined critical point. A possible reason behind this is incomplete and inaccurate vehicle context. In this case, the other vehicles in front of the ego-vehicle are primarily responsible for the pedestrian remaining stationary (see Fig. \ref{fig:stand_fail}). The ego-vehicle is moving slowly during the event in a busy scene and is quite far from the pedestrian, causing the inaccurate context. We defined the critical point arbitrarily due to the limited number of images in the sequence.
In such cases, we need to consider more complete pedestrian-vehicle interactions, which include the other vehicles in the scene. \begin{figure}[h] \includegraphics[scale=0.27]{Standing_failure_corrected.pdf} \centering \caption{Failed prediction by $FLDCRF-mv$ on standing sequence $0208\_02$. $cp$ denotes the critical point (`0' mark). GT is the ground-truth intention at `0'.} \label{fig:stand_fail} \end{figure} The JAAD dataset is not equipped with precise camera matrix information for the video sequences. Consequently, we utilized approximate values for these parameters and observed our motion features to perform worse than on the NTU dataset. However, combined with context features, we obtain relatively early and accurate intention prediction for all four types of sequences from the $FLDCRF-mv$ and $LSTM-mv$ systems. \section{Conclusion} \label{sec:conclude} We presented a context model for pedestrian intention prediction for autonomous vehicles. We introduced vehicle interaction context into the problem for earlier and more accurate prediction. We also proposed Factored Latent-Dynamic Conditional Random Fields (FLDCRF) for single- and multi-label sequence prediction/interaction tasks. FLDCRF led to more accurate and stable performance than LSTM over identical input features across our datasets. We plan to compare FLDCRF and LSTM on standard single- and multi-label sequential machine learning datasets. We are also specifically interested in utilizing the interaction model variant of FLDCRF to capture complex pedestrian-vehicle interactions and in applying the model to joint scene modeling tasks. In the system proposed in the paper, we will look to replace our pre-calibrated depth estimation with fully automated techniques, train with more data and build a real-time system based on our approach. Moreover, in future work we will look to augment our intention prediction system with static scene context variables (e.g., zebra crossing, bus stop, lanes, etc.)
by attention models and propose more complete end-to-end models involving general static and dynamic interactions. \section*{Acknowledgment} We are very thankful to all the reviewers for their valuable feedback. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Proof of Theorem \ref{thm:1}} \label{sec:proof} Consider two classes $X$ and $Y$ that have the same distribution (covering the same region) and have sufficient data points. Suppose $X$ and $Y$ have $N_x$ and $N_y$ data points, and assume the sampling density ratio is $\frac{N_y}{N_x} =\alpha$. Before providing the proof of Theorem \ref{thm:1}, we first prove Lemma \ref{lma:1}, which will be used later. \begin{lemma} \label{lma:1} Two classes $X$ and $Y$ with $\frac{N_y}{N_x} =\alpha$ have the same distribution over region $\Omega$ if and only if, for any sub-region $\Delta \subseteq \Omega$ in which $X$ and $Y$ have $n_{xi}$ and $n_{yi}$ points respectively, $\frac{n_{yi}}{n_{xi}} =\alpha$ holds. \end{lemma} \begin{proof} Assume the distributions of $X$ and $Y$ are $f(x)$ and $g(y)$. In the union region of $X$ and $Y$, arbitrarily take one tiny cell (region) $\Delta_i$ with $n_{xi}=\Delta_if(x_i)N_x$ and $n_{yi}=\Delta_ig(x_i)N_y$. Then, \[ \frac{n_{yi}}{n_{xi}}=\frac{\Delta_ig(x_i)N_y}{\Delta_if(x_i)N_x}=\alpha \frac{g(x_i)}{f(x_i)} \] Therefore: \[ \alpha \frac{g(x_i)}{f(x_i)} =\alpha \Leftrightarrow \frac{g(x_i)}{f(x_i)}=1 \Leftrightarrow \forall x_i:g(x_i)=f(x_i) \] \end{proof} \subsection{Sufficient condition} \textbf{Sufficient condition of Theorem \ref{thm:1}.} \textit{If the two classes $X$ and $Y$ have the same distribution and sufficient data points, then the distributions of the ICD and BCD sets are nearly identical.} \begin{figure}[h] \centerline{\includegraphics{figs/fig_proof1.pdf}} \caption{Two non-overlapping small cells} \label{fig:5} \end{figure} \begin{proof} Within the area, select two tiny non-overlapping cells (regions) $\Delta_i$ and $\Delta_j$ (Figure \ref{fig:5}).
Since $X$ and $Y$ have the same distribution but in general different densities, the numbers of points in the two cells, $n_{xi},n_{yi};n_{xj},n_{yj}$, fulfill: \[ \frac{n_{yi}}{n_{xi}} =\frac{n_{yj}}{n_{xj}} =\alpha \] The scale of the cells is $\delta$; the ICDs and BCDs of the $X$ and $Y$ data points in cell $\Delta_i$ are approximately $\delta$ because the cell is sufficiently small. By Definitions~\ref{def:1} and~\ref{def:2}: \[ d_{x_i}\approx d_{x_i,y_i}\approx \delta;\quad x_i,y_i\in \Delta_i \] Similarly, the ICDs and BCDs of the $X$ and $Y$ data points between cells $\Delta_i$ and $\Delta_j$ are approximately the distance between the two cells, $D_{ij}$: \[ d_{x_{ij}}\approx d_{x_i,y_j}\approx d_{y_i,x_j}\approx D_{ij};\; x_i,y_i\in \Delta_i;\, x_j,y_j\in \Delta_j \] First, divide the whole distribution region into many non-overlapping cells. Arbitrarily select two cells $\Delta_i$ and $\Delta_j$ to examine the ICD set for $X$ and the BCD set for $X$ and $Y$. By Definitions \ref{def:1} and \ref{def:2}: \romannum{1}) The ICD set for $X$ has two distances, $\delta$ and $D_{ij}$, and their numbers are: \[ d_{x_i}\approx \delta;\; x_i\in \Delta_i:\; |\{d_{x_i}\}|=\frac{1}{2}n_{xi}(n_{xi}-1) \] \[ d_{x_{ij}}\approx D_{ij};\; x_i\in \Delta_i;x_j\in \Delta_j:\; |\{d_{x_{ij}}\}|=n_{xi}n_{xj} \] \romannum{2}) The BCD set for $X$ and $Y$ also has two distances, $\delta$ and $D_{ij}$, and their numbers are: \[ d_{x_i,y_i}\approx \delta;\; x_i,y_i\in \Delta_i:\; |\{d_{x_i,y_i}\}|=n_{xi} n_{yi} \] \[ d_{x_i,y_j}\approx d_{y_i,x_j}\approx D_{ij};\; x_i,y_i\in \Delta_i;x_j,y_j\in \Delta_j: \] \[ |\{d_{x_i,y_j}\}|=n_{xi} n_{yj};\; |\{d_{y_i,x_j}\}|=n_{yi} n_{xj} \] Therefore, the proportions of the number of distances with a value of $D_{ij}$ in the ICD and BCD sets are: For ICDs: \[ \frac{|\{d_{x_{ij}} \}|}{|\{d_x \}|} =\frac{2n_{xi} n_{xj}}{N_x (N_x-1)} \] For BCDs, considering the density ratio: \[ \frac{|\{d_{x_i,y_j} \}|+|\{d_{y_i,x_j }\}|}{|\{d_{x,y} \}|} =\frac{\alpha
n_{xi} n_{xj}+\alpha n_{xi} n_{xj}}{\alpha N_x^2 }=\frac{2n_{xi} n_{xj}}{N_x^2} \] The ratio of the proportions of the number of distances with a value of $D_{ij}$ in the two sets is: \[ \frac{N_x (N_x-1)}{N_x^2}=1-\frac{1}{N_x} \to 1 \; \; (N_x\to \infty) \] This means that the proportions of the number of distances with a value of $D_{ij}$ in the two sets are equal. We then examine the proportions of the number of distances with a value of $\delta$ in the ICD and BCD sets. For ICDs: \[ \sum_{i} \frac{|\{d_{x_i}\}|}{|\{d_x\}|} = \frac{\sum_{i} [n_{xi} (n_{xi}-1)]}{N_x (N_x-1)} = \frac{\sum_{i} (n_{xi}^2-n_{xi} )}{N_x^2-N_x} = \frac{\sum_{i} (n_{xi}^2 ) -N_x}{N_x^2-N_x} \] For BCDs, considering the density ratio: \[ \sum_{i} \frac{|\{d_{x_i,y_i } \}|}{|\{d_{x,y}\}|} = \frac{\sum_{i} (n_{xi}^2 )}{N_x^2} \] The ratio of the proportions of the number of distances with a value of $\delta$ in the two sets is: \[ \frac{\sum_{i} (n_{xi}^2 ) }{N_x^2 }\cdot \frac{N_x^2-N_x}{\sum_{i} (n_{xi}^2 ) -N_x } = \sum_{i} \left(\frac{n_{xi}^2}{N_x^2} \right) \cdot \frac{1-\frac{1}{N_x}}{\sum_{i} \left(\frac{n_{xi}^2}{N_x^2} \right) -\frac{1}{N_x}}\to 1 \; \; (N_x\to \infty) \] This means that the proportions of the number of distances with a value of $\delta$ in the two sets are equal. In summary, the fact that the proportion of any distance value ($\delta$ or $D_{ij}$) in the ICD set for $X$ and in the BCD set for $X$ and $Y$ is equal indicates that the distributions of the ICD and BCD sets are identical; a corresponding proof applies to the ICD set for $Y$.
\end{proof} \subsection{Necessary condition} \textbf{Necessary condition of Theorem \ref{thm:1}.} \textit{If the distributions of the ICD and BCD sets with sufficient data points are nearly identical, then the two classes $X$ and $Y$ must have the same distribution.} \begin{remark} We prove its \textbf{contrapositive}: if $X$ and $Y$ do not have the same distribution, the distributions of the ICD and BCD sets are not identical. We then apply proof by \textbf{contradiction}: suppose that $X$ and $Y$ do not have the same distribution, but the distributions of the ICD and BCD sets are identical. \end{remark} \begin{proof} Suppose classes $X$ and $Y$ have $N_x$ and $N_y$ data points, where $\frac{N_y}{N_x} =\alpha $. Divide their distribution area into many non-overlapping tiny cells (regions). In the $i$-th cell $\Delta_i$, since the distributions of $X$ and $Y$ are different, according to Lemma \ref{lma:1} the numbers of points in the cell, $n_{xi},n_{yi}$, fulfill: \[ \frac{n_{yi}}{n_{xi}} = \alpha _i; \; \; \exists \alpha _i \neq \alpha \] The scale of the cells is $\delta$, and the ICDs and BCDs of the $X$ and $Y$ points in cell $\Delta_i$ are approximately $\delta$ because the cell is sufficiently small.
\[ d_{x_i}\approx d_{y_i}\approx d_{x_i,y_i}\approx \delta; \; \; x_i,y_i\in \Delta_i \] Summing over all cells $\Delta_i$: \romannum{1}) The ICDs of $X$ with value $\delta$ have a proportion of: \begin{equation} \label{eq:1} \sum_{i} \frac{|\{d_{x_i}\}|}{|\{d_x\}|} = \frac{\sum_{i} [n_{xi} (n_{xi}-1)]}{N_x (N_x-1)} = \frac {\sum_{i} (n_{xi}^2-n_{xi} )}{N_x^2-N_x}=\frac{\sum_{i} (n_{xi}^2 ) -N_x}{N_x^2-N_x} \end{equation} \romannum{2}) The ICDs of $Y$ with value $\delta$ have a proportion of: \begin{multline} \label{eq:2} \sum_{i} \frac{|\{d_{y_i}\}|}{|\{d_y\}|} = \frac{\sum_{i} [n_{yi} (n_{yi}-1)]}{N_y (N_y-1)}=\frac {\sum_{i} (n_{yi}^2-n_{yi} )}{N_y^2-N_y}\\ =\frac{\sum_{i} (n_{yi}^2 ) -N_y}{N_y^2-N_y}\Bigg\rvert_{\substack{N_y=\alpha N_x \\ n_{yi} = \alpha _i n_{xi}}} = \frac{\sum_{i} (\alpha _i^2 n_{xi}^2 ) -\alpha N_x}{\alpha^2 N_x^2-\alpha N_x} \end{multline} \romannum{3}) The BCDs of $X$ and $Y$ with value $\delta$ have a proportion of: \begin{equation} \label{eq:3} \sum_{i} \frac{|\{d_{x_i,y_i} \}|}{|\{d_{x,y} \}|}=\frac {\sum_{i} (n_{xi} n_{yi} ) }{N_x N_y}=\frac {\sum_{i} (\alpha _i n_{xi}^2 ) }{\alpha N_x^2} \end{equation} For the distributions of the ICD and BCD sets to be identical, the ratios of the proportions of distances with value $\delta$ must be 1, that is, $\frac{(\ref{eq:3})}{(\ref{eq:1})}=\frac{(\ref{eq:3})}{(\ref{eq:2})}=1$. 
Therefore, \begin{multline} \label{eq:4} \frac{(\ref{eq:3})}{(\ref{eq:1})}= \frac {\sum_{i} (\alpha _i n_{xi}^2 ) }{\alpha N_x^2} \cdot \frac{N_x^2-N_x}{\sum_{i} (n_{xi}^2 )-N_x}\\ = \frac{1}{\alpha N_x^2}\sum_{i} (\alpha _i n_{xi}^2 )\cdot \frac {1-\frac {1}{N_x} }{\frac {1}{N_x^2} \sum_{i}(n_{xi}^2 ) -\frac {1}{N_x}}\Bigg\rvert_{N_x\to \infty} \\ =\frac {1}{\alpha}\cdot \frac{\sum_{i} (\alpha_i n_{xi}^2 ) }{\sum_{i}(n_{xi}^2 )}=1 \end{multline} Similarly, \begin{multline} \label{eq:5} \frac{(\ref{eq:3})}{(\ref{eq:2})}= \frac {\sum_{i} (\alpha _i n_{xi}^2 ) }{\alpha N_x^2} \cdot \frac{\alpha^2 N_x^2-\alpha N_x}{\sum_{i} (\alpha _i^2 n_{xi}^2 ) -\alpha N_x}\\ = \frac{\sum_{i} (\alpha _i n_{xi}^2 )}{N_x^2}\cdot \frac {\alpha-\frac {1}{N_x} }{\frac {1}{N_x^2} \sum_{i} (\alpha _i^2 n_{xi}^2 ) -\frac {\alpha}{N_x}}\Bigg\rvert_{N_x\to \infty} \\ =\alpha \cdot \frac{\sum_{i} (\alpha_i n_{xi}^2 ) }{\sum_{i} (\alpha _i^2 n_{xi}^2 )}=1 \end{multline} Eliminating $\sum_{i} (\alpha _i n_{xi}^2 )$ from Eqs.~\ref{eq:4}~and~\ref{eq:5}, we obtain: \[ \sum_{i}(n_{xi}^2 )=\frac{\sum_{i} (\alpha _i^2 n_{xi}^2 )}{\alpha^2} \] Let $\rho_i=\left(\frac{\alpha_i}{\alpha}\right)^2$; then \[ \sum_{i}(n_{xi}^2 )=\sum_{i} (\rho_i n_{xi}^2 ) \] Since the $n_{xi}$ can take arbitrary values, this equation can hold only if every $\rho_i=1$. Hence: \[ \forall \rho_i=\left(\frac{\alpha_i}{\alpha}\right)^2=1 \Rightarrow\forall \alpha_i=\alpha \] This contradicts $\exists \alpha_i\neq \alpha$; therefore, the contrapositive proposition is proved. 
\end{proof} \section{Synthetic Datasets} \label{sec:syn_names} \begin{table} \tbl{Names of the 97 used synthetic datasets from the Tomas Barton repository$^a$} {\begin{tabular}{cccccc} \toprule 3-spiral & 2d-10c & ds2c2sc13 & rings & square5 & complex8 \\ aggregation & 2d-20c-no0 & ds3c3sc6 & shapes & st900 & complex9 \\ 2d-3c-no123 & threenorm & ds4c2sc8 & simplex & target & compound \\ dense-disk-3000 & triangle1 & 2d-4c & sizes1 & tetra & donutcurves \\ dense-disk-5000 & triangle2 & 2dnormals & sizes2 & curves1 & donut1 \\ elliptical\_10\_2 & dartboard1 & engytime & sizes3 & curves2 & donut2 \\ elly-2d10c13s & dartboard2 & flame & sizes4 & D31 & donut3 \\ 2sp2glob & 2d-4c-no4 & fourty & sizes5 & twenty & zelnik1 \\ cure-t0-2000n-2D & 2d-4c-no9 & xor & smile1 & aml28 & zelnik2 \\ cure-t1-2000n-2D & pmf & hepta & smile2 & wingnut & zelnik3 \\ twodiamonds & diamond9 & hypercube & smile3 & xclara & zelnik5 \\ spherical\_4\_3 & disk-1000n & jain & atom & R15 & zelnik6 \\ spherical\_5\_2 & disk-3000n & lsun & blobs & pathbased & \\ spherical\_6\_2 & disk-4000n & long1 & cassini & square1 & \\ chainlink & disk-4500n & long2 & spiral & square2 & \\ spiralsquare & disk-4600n & long3 & circle & square3 & \\ gaussians1 & disk-5000n & longsquare & cuboids & square4 & \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Available at \url{https://github.com/deric/clustering-benchmark/tree/master/src/main/resources/datasets/artificial}. \end{tabnote} \label{tab:syn_data} \end{table} \section{Introduction} Cluster analysis is an important unsupervised learning method in machine learning. The clustering algorithms divide a dataset into clusters \cite{Jain1999Data} based on the distribution structure of the data, without any prior knowledge. 
Clustering is widely studied and used in many fields, such as data mining, pattern recognition, object detection, image segmentation, bioinformatics, and data compression \cite{Roiger2017Data,Wen2019shape-based,Guan2018Application,Dhanachandra2017survey, Karim2020Deep, Marchetti2018Spatial}. The shortage of labels for training is a major problem in some machine learning applications, such as medical image analysis and big-data applications \cite{Nasraoui2019Clustering}, because labeling is expensive \cite{Hoo-Chang2016Deep}. Since unsupervised machine learning does not use labels for training, applying cluster analysis can avoid this problem. In general, the main methods of cluster analysis can be categorized into centroid-based (\textit{e.g.}, k-means), distribution-based (\textit{e.g.}, the EM algorithm \cite{Byrne2017EM}), density-based (\textit{e.g.}, DBSCAN \cite{Ester1996density-based}), hierarchical (\textit{e.g.}, Ward linkage \cite{Ward_Jr1963Hierarchical}), and spectral clustering \cite{Von_Luxburg2007tutorial}. None of the clustering methods, however, is able to perform well with all kinds of datasets \cite{Kleinberg2003Impossibility,Von_Luxburg2012Clustering:}. That is, a clustering method that performs well with some types of datasets would perform poorly with some others. For this reason, various clustering methods are often applied to the same dataset. Consequently, effective clustering validations (measures of clustering quality) are required to evaluate which clustering method performs well for a dataset \cite{Ben-David2009Measures,Adolfsson2019To}. Clustering validations are also used to tune the parameters of clustering algorithms. There are two categories of clustering validations: \textit{internal} and \textit{external} validations. \textit{External validations} use the true class labels and the predicted labels, whereas \textit{internal validations} use the predicted labels and the data. 
Since external validations require true labels and there are no true class labels in unsupervised learning tasks, we can employ only internal validations in cluster analysis \cite{Liu2013Understanding}. In fact, evaluating clustering results with internal validations is as difficult as performing the clustering itself, because the measures have no more information than the clustering methods \cite{Pfitzner2008Characterization}. Therefore, designing an internal \textit{Cluster Validity Index} (CVI) is as challenging as creating a clustering algorithm. The difference is that a clustering algorithm can iteratively update its results using the value (loss) of its objective function, whereas a CVI provides only a single value for cluster evaluation. \subsection{Related works} Various CVIs have been created for the clustering of many types of datasets \cite{Desgraupes2017Clustering}. By method of calculation \cite{Hu2019Internal}, internal CVIs are based on two categories of representatives: center and non-center. Center-based internal CVIs use descriptors of clusters. For example, the \textit{Davies–Bouldin} index (DB) \cite{Davies1979Cluster} uses cluster diameters and the distance between cluster centroids. Non-center internal CVIs use descriptors of data points. For example, the \textit{Dunn} index \cite{Dunn1974Well-Separated} considers the minimum and maximum distances between two data points. Besides the DB and Dunn indexes, some other typical internal CVIs are selected for comparison in this paper. The \textit{Calinski-Harabasz} index (CH) \cite{Calinski1974dendrite} and the \textit{Silhouette coefficient} (Sil) \cite{Rousseeuw1987Silhouettes:} are two traditional internal CVIs. 
Among recently developed internal CVIs, the \textit{I} index \cite{Maulik2002Performance}, the \textit{WB} index \cite{Zhao2014WB-index:}, the \textit{Clustering Validation index based on Nearest Neighbors} (CVNN) \cite{Liu2013Understanding}, and the \textit{Cluster Validity index based on Density-involved Distance} (CVDD) \cite{Hu2019Internal} are selected. In total, eight typical internal CVIs, ranging from early studies (Dunn, 1974) to the most recent (CVDD, 2019), are selected to compare with our proposed CVI. In addition, an external CVI, the \textit{Adjusted Rand Index} (ARI) \cite{Santos2009On}, is selected as the ground truth for comparison because external validations use the true class labels and the predicted labels. Unless otherwise indicated, \textbf{the ``CVIs'' that appear hereafter mean internal CVIs and the only external CVI is named ``ARI''}. \section{Distance-based Separability Measure} Since the goal of clustering is to separate a dataset into clusters, from a macro perspective, how well a dataset has been separated can be indicated by the separability of its clusters. In a dataset, data points are assigned class labels by the clustering algorithm. The most difficult situation for separation of the dataset occurs when all labels are randomly assigned, so that the data points of different classes have the same distribution (distributions with the same shape, position, and support, \textit{i.e.}, the same probability density function). To analyze the distributions of different-class data, we propose the \textit{Distance-based Separability Index} (DSI) \footnote{More studies about the DSI will appear in other forthcoming publications, which can be found in the author's website linked to the ORCID: \url{https://orcid.org/0000-0002-3779-9368}.}. 
Suppose a dataset contains two classes $X$ and $Y$ with $N_x$ and $N_y$ data points, respectively. We define: \begin{definition} \label{def:1} The \textit{Intra-Class Distance} (ICD) set is the set of distances between any two points in the same class. \textit{e.g.}, for class $X$, its ICD set $\{d_x\}$: \[ \{d_x\}=\{ \|x_i-x_j\|_2 \, | \, x_i,x_j\in X;x_i\neq x_j\}. \] \end{definition} \begin{remark} The metric for distance is Euclidean $(l^2\,\text{-norm})$. Given $|X|=N_x$, then $|\{d_x\}|=\frac{1}{2}N_x(N_x-1)$. \end{remark} \begin{definition} \label{def:2} The \textit{Between-Class Distance} (BCD) set is the set of distances between any two points from different classes. \textit{e.g.}, for classes $X$ and $Y$, their BCD set $\{ d_{x,y}\}$: \[ \{ d_{x,y} \}=\{ \|x_i-y_j\|_2 \, |\, x_i\in X;y_j\in Y \}. \] \end{definition} \begin{remark} Given $|X|=N_x,|Y|=N_y$, then $|\{d_{x,y} \}|=N_x N_y$. \end{remark} Theorem \ref{thm:1} then shows how the ICD and BCD sets are related to the distributions of the data: \begin{theorem} \label{thm:1} As $|\{d_x\}|,|\{d_y\}|\to \infty$, the distributions of the ICD and BCD sets are identical if and only if the two classes $X$ and $Y$ have the same distribution. \end{theorem} The full proof of Theorem \ref{thm:1} is shown in \ref{sec:proof}. Here we provide an informal explanation: points in $X$ and $Y$ having the same distribution and covering the same region can be considered to have been sampled from one distribution $Z$. Hence, the ICDs of $X$ and $Y$ and the BCDs between $X$ and $Y$ are all actually ICDs of $Z$. Consequently, the distributions of ICDs and BCDs are identical. In other words, identical distributions of the ICD and BCD sets indicate that all labels are assigned randomly and thus the dataset has the least separability. 
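Theorem \ref{thm:1} can also be illustrated numerically (this is our own sketch using NumPy/SciPy, not part of the formal proof): when two classes are sampled from the same distribution, the KS distance between the ICD and BCD sets is near zero, while shifting one class away makes it large.

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Same distribution: the ICD and BCD distributions should be nearly identical.
X = rng.normal(size=(300, 2))
Y = rng.normal(size=(300, 2))
icd_x = pdist(X)                          # ICD set of X (Definition 1)
bcd = cdist(X, Y).ravel()                 # BCD set of X and Y (Definition 2)
ks_same = ks_2samp(icd_x, bcd).statistic  # close to 0

# Different distributions (Y shifted far away): ICD and BCD diverge.
ks_diff = ks_2samp(icd_x, cdist(X, Y + 5.0).ravel()).statistic  # close to 1
```

Here `ks_2samp` computes the two-sample Kolmogorov-Smirnov statistic used later to define the DSI.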
\subsection{Computation of the DSI} To compute the DSI of two classes $X$ and $Y$, first, the ICD sets of $X$ and $Y$, $\{d_x\}$ and $\{d_y\}$, and the BCD set $\{d_{x,y}\}$ are computed by their definitions (Defs. \ref{def:1} and \ref{def:2}). Second, the \textit{Kolmogorov-Smirnov} (KS) distance \cite{scipy.stats.kstest} is applied to examine the similarity of the distributions of the ICD and BCD sets. Although there are other statistical measures to compare two distributions, such as the Bhattacharyya distance, Kullback-Leibler divergence, and Jensen-Shannon divergence, most of them require the two sets to have the same number of data points. It is easy to show that $|\{d_x\}|$, $|\{d_y\}|$, and $|\{d_{x,y}\}|$ cannot all be equal. The Wasserstein distance \cite{ramdas2017wasserstein} is also a potentially suitable measure, but our testing indicates that it is not as sensitive as the KS distance. The two-sample KS distance is the maximum distance between two cumulative distribution functions (CDFs): \[ KS(P,Q)=\sup_{x} |P(x)-Q(x)| \] where $P$ and $Q$ are the respective CDFs of the two distributions $p$ and $q$. Hence, the similarities between the ICD and BCD sets are computed using the KS distance \footnote{By using the \texttt{scipy.stats.ks\_2samp} from the SciPy package in Python. \url{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html}}: $s_x=KS(\{d_x\},\{d_{x,y}\})$ and $s_y=KS(\{d_y\},\{d_{x,y}\})$. Since there are two classes, the DSI is the average of the two KS distances: $DSI(\{X,Y\})=\frac{(s_x+s_y)}{2}$. The $KS(\{d_x\},\{d_y\})$ is not used because it reflects only the difference in shape between the distributions of classes $X$ and $Y$, not their locations. For example, two classes $X$ and $Y$ whose distributions have the same shape but no overlap will have \textit{zero} KS distance between their ICD sets: $KS(\{d_x\},\{d_y\})=0$. 
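A minimal sketch of this two-class computation (the function name `dsi_two_class` is ours):

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist
from scipy.stats import ks_2samp

def dsi_two_class(X, Y):
    """DSI of two classes, each an array of shape (n_points, n_features)."""
    d_x = pdist(X)                        # ICD set of X
    d_y = pdist(Y)                        # ICD set of Y
    d_xy = cdist(X, Y).ravel()            # BCD set of X and Y
    s_x = ks_2samp(d_x, d_xy).statistic   # KS distance between ICD(X) and BCD
    s_y = ks_2samp(d_y, d_xy).statistic   # KS distance between ICD(Y) and BCD
    return (s_x + s_y) / 2
```

Two well-separated clusters give a DSI near 1, while a random split of samples from a single distribution gives a DSI near 0.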
We do not use a weighted average because, once the distributions of the ICD and BCD sets are well characterized, the sizes of $X$ and $Y$ do not affect the KS distances $s_x$ and $s_y$. In general, for an $n$-class dataset, we obtain its DSI with the \textbf{DSI Algorithm}: \begin{enumerate} \item Compute the $n$ ICD sets, one for each class: $\{d_{C_i}\};\; i=1,2,\cdots,n$. \item Compute the $n$ BCD sets, one for each class. For the $i$-th class of data $C_i$, the BCD set is the set of distances between any two points in $C_i$ and $\overline{C_i}$ (the other classes, not $C_i$): $\{d_{C_i,\overline{C_i}}\}$. \item Compute the $n$ KS distances between the ICD and BCD sets of each class: $s_i=KS(\{d_{C_i }\},\{d_{C_i,\overline{C_i}}\})$. \item Calculate the average of the $n$ KS distances; the DSI of this dataset is $DSI(\{C_i \})=\frac{\sum s_i}{n}$. \end{enumerate} The running time of computing the ICD and BCD sets is linear in the dimensionality and quadratic in the number of data points. \subsection{Cluster validation using DSI} A small DSI (low separability) means that the ICD and BCD sets are very similar. In this case, by Theorem \ref{thm:1}, the distributions of classes $X$ and $Y$ are similar too. Hence, the data of the two classes are difficult to separate. \begin{figure}[th] \centerline{\includegraphics[width=0.65\textwidth]{figs/fig1.pdf}} \vspace*{8pt} \caption{A two-class dataset with different label assignments. Each histogram indicates the relative frequency of the value of each of the three distance measures (indicated by color).} \label{fig:1} \end{figure} An example of a two-class dataset is shown in Figure \ref{fig:1}. Figure \ref{fig:1}a shows that, if the labels are assigned correctly by clustering, the distributions of the ICD sets will differ from that of the BCD set, and the DSI will reach its maximum value for this dataset because the two clusters are well separated. 
For an incorrect clustering (Figure \ref{fig:1}b), the difference between the distributions of the ICD and BCD sets becomes smaller, so the DSI value decreases. Figure \ref{fig:1}c shows an extreme situation: if all labels are randomly assigned, the distributions of the ICD and BCD sets will be nearly identical. This is the worst case of separation for the two-class dataset, and its separability (DSI) is close to zero. Therefore, the separability of clusters is reflected well by the proposed DSI. The DSI ranges from 0 to 1, $DSI\in (0,1)$, and \textit{\textbf{we suppose that}} a greater DSI value means the dataset is clustered better. \section{Materials} \subsection{Compared CVIs} CVIs are used to evaluate clustering results. In this paper, several internal CVIs, including the proposed DSI, have been employed to examine the clustering results of different clustering methods (algorithms). Applying different clustering methods to a given dataset may yield different clustering results; thus, CVIs are used to select the best clusters. We choose eight commonly used (classical and recent) internal CVIs and an external CVI, the \textit{Adjusted Rand Index} (ARI), to compare with our proposed DSI (Table \ref{tab:1}). ARI serves as the ground truth for comparison because it involves the true labels (clusters) of the dataset. \begin{table} \tbl{Compared CVIs.} {\begin{tabular}{ccc} \toprule \textbf{Name} & \textbf{Optimal$^a$} & \textbf{Reference} \\ \colrule \textbf{ARI}$^b$ & MAX & (Santos \& Embrechts, 2009) \cite{Santos2009On} \\ \colrule \textbf{Dunn} index & MAX & (Dunn, J., 1973) \cite{Dunn1974Well-Separated} \\ \textbf{C}alinski-\textbf{H}arabasz Index & MAX & (Calinski \& Harabasz, 1974) \cite{Calinski1974dendrite} \\ \textbf{D}avies–\textbf{B}ouldin index & min & (Davies \& Bouldin, 1979) \cite{Davies1979Cluster} \\ \textbf{Sil}houette Coefficient & MAX & (Rousseeuw, 1987) \cite{Rousseeuw1987Silhouettes:} \\ \textbf{I} & MAX & (U. 
Maulik, 2002) \cite{Maulik2002Performance} \\ \textbf{CVNN} & min & (Yanchi L., 2013) \cite{Liu2013Understanding} \\ \textbf{WB} & min & (Zhao Q., 2014) \cite{Zhao2014WB-index:} \\ \textbf{CVDD} & MAX & (Lianyu H., 2019) \cite{Hu2019Internal} \\ \textbf{DSI} & MAX & Proposed \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ The Optimal column indicates whether the best case has the minimum or maximum value. $^{b.}$ The ground truth for comparison. \end{tabnote} \label{tab:1} \end{table} \subsection{Synthetic and real datasets} \label{sec:datasets} In this paper, the synthetic datasets for clustering are from the Tomas Barton repository \footnote{\url{https://github.com/deric/clustering-benchmark/tree/master/src/main/resources/datasets/artificial}}, which contains 122 artificial datasets. Each dataset has hundreds to thousands of objects with several to tens of classes in two or three dimensions (features). We selected 97 datasets for the experiments because the 25 unused datasets have too many objects to run the clustering in a reasonable time. The names of the 97 used synthetic datasets are shown in \ref{sec:syn_names}. Illustrations of these datasets can be found on Tomas Barton's homepage \footnote{\url{https://github.com/deric/clustering-benchmark}}. The 12 real datasets used for clustering are from three sources: the \textit{sklearn.datasets} package \footnote{\url{https://scikit-learn.org/stable/datasets}}, the UC Irvine Machine Learning Repository \cite{Dheeru2017UCI}, and Tomas Barton's repository (real-world datasets) \footnote{\url{https://github.com/deric/clustering-benchmark/tree/master/src/main/resources/datasets/real-world}}. Unlike the synthetic datasets, most selected real datasets have more than three dimensions (features). Hence, CVIs must be used to evaluate their clustering results rather than plotting the clusters, as is possible for the 2D or 3D synthetic datasets. Details about the 12 real datasets appear in Table \ref{tab:2}. 
\begin{table} \tbl{The description of the used real datasets.} {\begin{tabular}{ccccc} \toprule \textbf{Name} & \textbf{Title} & \textbf{Object\#} & \textbf{Feature\#} & \textbf{Class\#} \\ \colrule Iris & Iris plants dataset & 150 & 4 & 3 \\ digits & Optical recognition of handwritten digits dataset & 5620 & 64 & 10 \\ wine & Wine recognition dataset & 178 & 13 & 3 \\ cancer & Breast cancer Wisconsin (diagnostic) dataset & 569 & 30 & 2 \\ faces & Olivetti faces dataset & 400 & 4096 & 40 \\ vertebral & Vertebral column data & 310 & 6 & 3 \\ haberman & Haberman's survival data & 306 & 3 & 2 \\ sonar & Sonar, Mines vs. Rocks & 208 & 60 & 2 \\ tae & Teaching Assistant evaluation & 151 & 5 & 3 \\ thy & Thyroid disease data & 215 & 5 & 3 \\ vehicle & Vehicle silhouettes & 946 & 18 & 4 \\ zoo & Zoo data & 101 & 16 & 7 \\ \botrule \end{tabular}} \label{tab:2} \end{table} \section{Experiments} In general, there are two strategies to evaluate CVIs using a dataset: 1) comparing with the ground truth (real clusters with labels); 2) predicting the number of clusters (classes) by finding the optimal number of clusters identified by the CVIs \cite{Cheng2019Novel}. \subsection{Using real clusters} Using a dataset's real clusters and labels, the steps to evaluate CVIs are: \begin{enumerate} \item To obtain clustering results by running different clustering methods (algorithms) on a dataset. \item To compute CVIs of these clustering results and their ARI (ground truth) using the real labels. \item To compare the values of the CVIs with the ARI. \item To repeat the previous three steps for a new dataset. \end{enumerate} In this paper, five clustering algorithms from various categories are used: k-means, Ward linkage, spectral clustering, BIRCH \cite{Zhang1996BIRCH:}, and the EM algorithm (Gaussian mixture). The CVIs used for evaluation and comparison are shown in Table \ref{tab:1}, and the used datasets are introduced in Section \ref{sec:datasets}. 
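The first two steps of this protocol can be sketched with scikit-learn (an illustrative sketch; the feature standardization is our own assumption and is not specified by the protocol above):

```python
from sklearn import datasets
from sklearn.cluster import AgglomerativeClustering, Birch, KMeans, SpectralClustering
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Step 1: run the five clustering methods on one dataset (wine, standardized).
X, y = datasets.load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
k = len(set(y))
methods = {
    "KMeans": KMeans(n_clusters=k, n_init=10, random_state=0),
    "Ward Linkage": AgglomerativeClustering(n_clusters=k, linkage="ward"),
    "Spectral Clustering": SpectralClustering(n_clusters=k, random_state=0),
    "BIRCH": Birch(n_clusters=k),
    "EM": GaussianMixture(n_components=k, random_state=0),
}
# Step 2: score each clustering result against the true labels with ARI.
ari = {name: adjusted_rand_score(y, m.fit_predict(X)) for name, m in methods.items()}
```

The same loop is repeated for each dataset, and internal CVIs are computed from `X` and the predicted labels in the same way.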
We provide two evaluation methods to compare the values of CVIs with the ground-truth ARI, called \textit{Hit-the-best} and \textit{Rank-difference}; they are described as follows. \subsubsection{Evaluation metric: Hit-the-best} For a dataset, the clustering results obtained by different clustering algorithms will have different CVI and ARI values. If a CVI gives the best score to the clustering result that also has the best ARI score, this CVI is considered to make a correct prediction (hit-the-best). Table \ref{tab:3} shows CVIs of clustering results by different clustering methods on a dataset. For the \texttt{wine} dataset, k-means receives the best ARI score, and Dunn, DB, WB, I, CVNN, and DSI give k-means the best score; thus, these six CVIs hit the best. If we mark hit-the-best CVIs as 1 and the others as 0, the CVI scores in Table \ref{tab:3} can be converted to hit-the-best results (Table \ref{tab:4}) for the \texttt{wine} dataset. \begin{table} \tbl{CVI scores of clustering results on the \texttt{wine} recognition dataset.} {\begin{tabular}{cccccc} \toprule \diagbox{\textbf{Validity}$^a$}{\textbf{\makecell[r]{Clustering \\ method}}} & \textbf{KMeans} & \textbf{\makecell{Ward \\ Linkage}} & \textbf{\makecell{Spectral \\ Clustering}} & \textbf{BIRCH} & \textbf{EM} \\ \colrule ARI$^b$ + & \textbf{0.913}$^c$ & 0.757 & 0.880 & 0.790 & 0.897 \\ \colrule Dunn + & \textbf{0.232} & 0.220 & 0.177 & 0.229 & \textbf{0.232} \\ CH + & 70.885 & 68.346 & 70.041 & 67.647 & \textbf{70.940} \\ DB - & \textbf{1.388} & 1.390 & 1.391 & 1.419 & 1.389 \\ Silhouette + & 0.284 & 0.275 & 0.283 & 0.278 & \textbf{0.285} \\ WB - & \textbf{3.700} & 3.841 & 3.748 & 3.880 & \textbf{3.700} \\ I + & \textbf{5.421} & 4.933 & 5.326 & 4.962 & \textbf{5.421} \\ CVNN - & \textbf{21.859} & 22.134 & 21.932 & 22.186 & \textbf{21.859} \\ CVDD + & 31.114 & \textbf{31.141} & 29.994 & 30.492 & 31.114 \\ DSI + & \textbf{0.635} & 0.606 & 0.629 & 0.609 & 0.634 \\ \botrule \end{tabular}} \begin{tabnote} 
$^{a.}$ CVI for best case has the minimum (-) or maximum (+) value. $^{b.}$ The first row shows the results of ARI as ground truth; the other rows are CVIs. $^{c.}$ Bold value: the best case by the measure of this row. \end{tabnote} \label{tab:3} \end{table} \begin{table} \tbl{Hit-the-best results for the \texttt{wine} dataset.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule wine & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. \end{tabnote} \label{tab:4} \end{table} For hit-the-best, however, the best score can be unstable and random in some cases. For example, in Table \ref{tab:3}, the ARI score of EM is very close to that of k-means, and the Silhouette score of EM is also very close to that of k-means. If these values fluctuated a little and changed the best cases, the comparison outcome for this dataset would change. Another drawback of hit-the-best is that it considers only the single best case and ignores the others; it does not evaluate the whole picture for one dataset. Hit-the-best might be a stricter criterion, but it lacks robustness and is vulnerable to extreme cases, such as when the scores of different clustering results are very close to each other. Hence, we create another method to compare the score sequences of CVIs and ARI through their ranks. \subsubsection{Evaluation metric: Rank-difference} This comparison method fixes the two problems of hit-the-best: instability for similar scores and bias toward a single best case. We apply quantization to solve the problem of similar scores. 
Every score in the score sequence of a CVI (\textit{i.e.}, a row in Table \ref{tab:3}) is assigned a rank number, and similar scores have a high probability of being assigned the same rank number. The procedure is: \begin{enumerate} \item Find the minimum and maximum values of the $N$ scores in one sequence. \item Uniformly divide [min,max] into $N-1$ intervals. \item Label the intervals from max to min by $1,2,\ldots,N-1$. \item If a score is in the $k$-th interval, its rank number is $k$. \item The rank number of the maximum is defined as 1, and the intervals are open at the upper value and closed at the lower value: (upper value, lower value]. \end{enumerate} \begin{figure}[th] \centerline{\includegraphics{figs/fig2.pdf}} \vspace*{8pt} \caption{An example of rank number assignment.} \label{fig:2} \end{figure} Figure \ref{fig:2} shows an example of converting a \textit{score sequence} to a \textit{rank sequence} (rank numbers). The rank number of scores 9 and 8 is 1 because they are in the 1st interval. For the same reason, the rank number of scores 1 and 2 is 4. Such quantization is better than assigning rank numbers by ordering because it avoids assigning different rank numbers to very close scores in most cases (it is still possible for very close scores to receive different rank numbers; for example, in the case of Figure \ref{fig:2}, if scores 8 and 6 changed to 7.1 and 6.9, their rank numbers would still be 1 and 2 even though they are very close). \begin{remark} If the score whose rank number is 1 (the 1-rank score) is to represent the optimal performance, assigning rank 1 to the maximum CVI score works only for CVIs whose optimum is the maximum; it does not work for CVIs whose optimum is the minimum, like DB and WB, because their 1-rank score should be the minimum. A simple solution that makes the rank numbers work for both types of CVIs is to \textbf{\textit{negate}} all values in the score sequences of the CVIs whose optimum is the minimum before converting to a rank sequence (Figure \ref{fig:2}). 
Thus, the 1-rank score always represents the optimal performance for all CVIs. \end{remark} Table \ref{tab:5} shows the rank sequences of CVIs converted from the score sequences in Table \ref{tab:3}. For each CVI, four ranks are assigned to five scores. Since the ARI row shows the ground-truth rank sequence, the more similar a CVI's rank sequence is to the ARI row, the better that CVI performs. \begin{table} \tbl{Rank sequences of CVIs converted from the score sequences in Table \ref{tab:3}.} {\begin{tabular}{cccccc} \toprule \diagbox[width=10.5em]{\textbf{\scalebox{0.95}{Validity}}}{\scalebox{0.95}{\textbf{\makecell[r]{Clustering \\ method}}}} & \scalebox{0.95}{\textbf{KMeans}} & \scalebox{0.95}{\textbf{\makecell{Ward \\ Linkage}}} & \scalebox{0.95}{\textbf{\makecell{Spectral \\ Clustering}}} & \scalebox{0.95}{\textbf{BIRCH}} & \scalebox{0.95}{\textbf{EM}} \\ \colrule ARI$^a$ & 1 & 4 & 1 & 4 & 1 \\ \colrule Dunn & 1 & 1 & 4 & 1 & 1 \\ CH & 1 & 4 & 2 & 4 & 1 \\ DB & 1 & 1 & 1 & 4 & 1 \\ Silhouette & 1 & 4 & 1 & 3 & 1 \\ WB & 1 & 4 & 2 & 4 & 1 \\ I & 1 & 4 & 1 & 4 & 1 \\ CVNN & 1 & 4 & 1 & 4 & 1 \\ CVDD & 1 & 1 & 4 & 3 & 1 \\ DSI & 1 & 4 & 1 & 4 & 1 \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ The first row shows results of ARI as ground truth; other rows are CVIs. \end{tabnote} \label{tab:5} \end{table} For two score sequences (\textit{e.g.}, CVI and ARI), after quantizing them into two rank sequences, we compute their difference (called the \textit{rank-difference}), defined simply as the sum of the absolute differences between the two rank sequences. 
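The quantization procedure and the rank-difference can be sketched as follows (function names are ours; degenerate cases such as all-equal scores are not handled):

```python
import math

def rank_sequence(scores, maximize=True):
    """Quantize a score sequence into rank numbers 1..N-1 (steps 1-5 above)."""
    s = list(scores) if maximize else [-v for v in scores]  # negate min-optimal CVIs
    n = len(s)
    lo, hi = min(s), max(s)
    width = (hi - lo) / (n - 1)   # N-1 uniform intervals over [min, max]
    # Interval k is [hi - k*width, hi - (k-1)*width), closed at its lower value;
    # the maximum score itself is defined to have rank 1.
    return [min(n - 1, max(1, math.ceil((hi - v) / width))) for v in s]

def rank_difference(cvi_scores, ari_scores, cvi_maximize=True):
    """Sum of absolute differences between the CVI and ARI rank sequences."""
    r_cvi = rank_sequence(cvi_scores, cvi_maximize)
    r_ari = rank_sequence(ari_scores, maximize=True)
    return sum(abs(a - b) for a, b in zip(r_cvi, r_ari))
```

Applied to the ARI and CVDD score rows of Table \ref{tab:3}, this reproduces the rank sequences $\{1,4,1,4,1\}$ and $\{1,1,4,3,1\}$ of Table \ref{tab:5} and their rank-difference of 7.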
For example, the two rank sequences from Table \ref{tab:5} are: \[ \begin{array}{lr} ARI: & \{1,\ 4,\ 1,\ 4,\ 1\} \\ CVDD:& \{1,\ 1,\ 4,\ 3,\ 1\} \end{array} \] Their rank-difference, the sum of the absolute differences, is: \[ \left|1-1\right|+\left|4-1\right|+\left|1-4\right|+\left|4-3\right|+\left|1-1\right|=7 \] A smaller rank-difference means the two sequences are closer; closer CVI and ARI sequences indicate a better prediction. It is not difficult to show that the rank-difference for two $N$-length score sequences lies in the range $[0,N(N-2)]$. Table \ref{tab:6} shows the rank-differences between the ARI and the nine CVIs from Table \ref{tab:5}. A CVI with a lower rank-difference is better, and 0 is best because the CVI then matches the performance of the ground truth (ARI). \begin{table} \tbl{Rank-difference results for the \texttt{wine} dataset.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule wine & 9 & 1 & 3 & 1 & 1 & 0 & 0 & 7 & 0 \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. \end{tabnote} \label{tab:6} \end{table} \subsection{To predict the number of clusters} \label{sec:c_num} Some clustering methods require setting the number of clusters (classes) in advance, such as k-means, spectral clustering, and Gaussian mixture (EM). Suppose we have a dataset and know its real number of clusters, $c$; then the steps to evaluate CVIs through predicting the number of clusters in this dataset are: \begin{enumerate} \item To run clustering algorithms by setting the number of clusters $k=2,3,4,\ldots$ (the real number of clusters $c$ is included) to get clusters. \item To compute CVIs of these clusters. 
\item The number of clusters predicted by the $i$-th CVI, $\hat{k}_i$, is the number of clusters that performs best on the $i$-th CVI (\textit{i.e.}, the optimal number of clusters recognized by this CVI). \item A prediction by the $i$-th CVI is successful if its predicted number of clusters equals the real number of clusters: $\hat{k}_i=c$. \end{enumerate} Across the CVIs, the number of successful predictions could be zero, one, two, or more. Besides the CVIs themselves, success also depends on the datasets and clustering methods. In this study, we selected the \texttt{wine}, \texttt{tae}, \texttt{thy}, and \texttt{vehicle} datasets (see Table \ref{tab:2}) and the clustering methods k-means, spectral clustering, and the EM algorithm. \section{Results} \subsection{Clusters of real and synthetic datasets} \begin{table}[t] \tbl{Hit-the-best results for real datasets.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule Iris & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ digits & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1\\ wine & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1\\ cancer & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ faces & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1\\ vertebral & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ haberman & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ sonar & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ tae & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\ thy & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ vehicle & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1\\ zoo & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\ \colrule Total$^b$ & 3 & 3 & 3 & 2 & 4 & 3 & 5 & 3 & 5 \\ (\textbf{rank}) & (\textbf{4}) & (\textbf{4}) & (\textbf{4}) & (\textbf{9}) & (\textbf{3}) & (\textbf{4}) & (\textbf{1}) & (\textbf{4}) & (\textbf{1})\\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. 
$^{b.}$ Larger value is better (rank number is smaller). \end{tabnote} \label{tab:7} \end{table} \begin{table}[t] \tbl{Rank-difference results for real datasets.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule Iris & 8 & 13 & 15 & 15 & 13 & 11 & 15 & 6 & 15\\ digits & 2 & 2 & 1 & 1 & 4 & 6 & 8 & 7 & 6\\ wine & 9 & 1 & 3 & 1 & 1 & 0 & 0 & 7 & 0\\ cancer & 8 & 7 & 6 & 9 & 7 & 8 & 2 & 7 & 9\\ faces & 4 & 3 & 4 & 4 & 2 & 3 & 9 & 2 & 5\\ vertebral & 6 & 13 & 14 & 12 & 15 & 13 & 15 & 6 & 13\\ haberman & 9 & 7 & 7 & 7 & 7 & 9 & 7 & 7 & 8\\ sonar & 7 & 3 & 3 & 4 & 3 & 4 & 11 & 10 & 3\\ tae & 9 & 14 & 9 & 9 & 14 & 15 & 0 & 9 & 9\\ thy & 5 & 2 & 2 & 2 & 2 & 6 & 2 & 3 & 10\\ vehicle & 12 & 11 & 9 & 13 & 13 & 12 & 3 & 3 & 7\\ zoo & 1 & 6 & 1 & 6 & 6 & 1 & 9 & 8 & 1\\ \colrule Total$^b$ & 80 & 82 & 74 & 83 & 87 & 88 & 81 & 75 & 86 \\ (\textbf{rank}) & (\textbf{3}) & (\textbf{5}) & (\textbf{1}) & (\textbf{6}) & (\textbf{8}) & (\textbf{9}) & (\textbf{4}) & (\textbf{2}) & (\textbf{7})\\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. $^{b.}$ Smaller value is better (rank number is smaller). \end{tabnote} \label{tab:8} \end{table} As discussed before, for one dataset and a CVI, an evaluation result can be computed by using the hit-the-best or rank-difference metric. In other words, one result is obtained by comparing one CVI row in Table \ref{tab:3} with the ground truth (ARI). The outcome of a hit-the-best comparison is either 0 or 1; 1 means that the best clusters predicted by CVI are the same as ARI; otherwise, the outcome is 0. Table \ref{tab:4} shows the hit-the-best results of nine CVIs on the \texttt{wine} dataset. The outcome of the rank-difference comparison is a value in the range $[0,N(N-2)]$, where $N$ is the sequence length. 
As Table \ref{tab:5} shows, the length of the sequences is 5; hence, the range of the rank-difference is $[0, 15]$. Table \ref{tab:6} shows the rank-difference results of the nine CVIs on the \texttt{wine} dataset; a smaller rank-difference value means the CVI predicts better. We applied\footnote{The code can be found on the author's website linked with the ORCID: \url{https://orcid.org/0000-0002-3779-9368}} the evaluation method to the selected CVIs (Table \ref{tab:1}) by using the real and synthetic datasets (Section \ref{sec:datasets}) and the five clustering methods (Table \ref{tab:3}). Tables \ref{tab:7} and \ref{tab:9} show the hit-the-best comparison results for the real and synthetic datasets, and Tables \ref{tab:8} and \ref{tab:10} the rank-difference comparison results. To compare across datasets, we summed all results at the bottom of each table. For the hit-the-best comparison, a larger total value is better because it means more hits; for the rank-difference comparison, a smaller total value is better because the results of the CVI are closer to those of the ARI. Finally, the ranks in the last row summarize the CVIs' performance; a smaller rank number means better performance. Since there are 97 synthetic datasets, to keep the tables to a manageable length, Tables \ref{tab:9} and \ref{tab:10} present illustrative values for the datasets and, most importantly, the totals and ranks for each measure.
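Both metrics are straightforward to compute. A minimal sketch (with hypothetical helper names; competition ranking, as in the rank sequences of Table \ref{tab:5}, is assumed for ties):

```python
# Sketch of the two evaluation metrics; helper names are hypothetical.
def rank(scores, reverse=True):
    """Competition ranks (1 = best); reverse=True treats higher scores as
    better, as for ARI and most CVIs (set False for, e.g., DB or WB)."""
    order = sorted(scores, reverse=reverse)
    return [order.index(s) + 1 for s in scores]

def rank_difference(cvi_scores, ari_scores):
    """Sum of absolute rank differences; 0 means full agreement with ARI."""
    return sum(abs(a - b) for a, b in zip(rank(cvi_scores), rank(ari_scores)))

def hit_the_best(cvi_scores, ari_scores):
    """1 if the CVI selects the same best clustering as ARI, else 0."""
    best_cvi = cvi_scores.index(max(cvi_scores))
    best_ari = ari_scores.index(max(ari_scores))
    return int(best_cvi == best_ari)
```

Applied to score sequences whose ranks reproduce the ARI example above, `rank_difference` returns the sum of absolute rank differences directly.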
\begin{table}[t] \tbl{Hit-the-best results for 97 synthetic datasets.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule 3-spiral & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ aggregation & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ zelnik5 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ zelnik6 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ \colrule Total$^b$ & 46 & 30 & 35 & 35 & 29 & 31 & 35 & 50 & 40 \\ (\textbf{rank}) & (\textbf{2}) & (\textbf{8}) & (\textbf{4}) & (\textbf{4}) & (\textbf{9}) & (\textbf{7}) & (\textbf{4}) & (\textbf{1}) & (\textbf{3})\\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. $^{b.}$ Larger value is better (rank number is smaller). \end{tabnote} \label{tab:9} \end{table} \begin{table}[t] \tbl{Rank-difference results for 97 synthetic datasets.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule 3-spiral & 2 & 12 & 14 & 13 & 14 & 12 & 13 & 1 & 13 \\ aggregation & 3 & 3 & 2 & 2 & 4 & 5 & 2 & 5 & 3 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ zelnik5 & 4 & 10 & 12 & 10 & 11 & 11 & 10 & 4 & 11 \\ zelnik6 & 4 & 3 & 2 & 2 & 3 & 3 & 5 & 2 & 2 \\ \colrule Total$^b$ & 406 & 541 & 547 & 489 & 583 & 554 & 504 & 337 & 415 \\ (\textbf{rank}) & (\textbf{2}) & (\textbf{6}) & (\textbf{7}) & (\textbf{4}) & (\textbf{9}) & (\textbf{8}) & (\textbf{5}) & (\textbf{1}) & (\textbf{3}) \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette. 
$^{b.}$ Smaller value is better (rank number is smaller). \end{tabnote} \label{tab:10} \end{table} \subsection{Prediction of number of clusters} Another strategy for CVI evaluation is to predict the number of clusters (classes); the detailed procedure is described in Section \ref{sec:c_num}. The clustering methods we selected require setting the number of clusters (classes) in advance; they are the k-means, spectral clustering, and EM algorithms. The \textit{a priori} numbers of clusters we set for the three algorithms are $k=2,\ 3,\ 4,\ 5,\ 6$ (the real number of clusters is included). The clustering algorithms were applied to four datasets: \texttt{wine}, \texttt{tae}, \texttt{thy}, and \texttt{vehicle} (see Table \ref{tab:2} for details). \begin{table}[h] \tbl{Number of clusters prediction results on the \texttt{wine}, \texttt{tae}, \texttt{thy}, and \texttt{vehicle} datasets.}{ \begin{tabular}{cc} \\ The \texttt{wine} dataset has 178 samples in 3 classes. & The \texttt{tae} dataset has 151 samples in 3 classes.
\\ \begin{tabular}{cccc} \toprule \diagbox[width=8em]{\scalebox{0.8}{\textbf{Validity}}}{\scalebox{0.8}{\textbf{\makecell[r]{Clustering \\ method}}}} & \scalebox{0.8}{\textbf{KMeans}} & \scalebox{0.8}{\textbf{\makecell{Spectral \\ Clustering}}} & \scalebox{0.8}{\textbf{EM}} \\ \colrule Dunn & \textbf{3}$^b$ & 4 & 6 \\ CH & \textbf{3} & \textbf{3} & \textbf{3} \\ DB & \textbf{3} & \textbf{3} & \textbf{3} \\ Sil$^a$ & \textbf{3} & \textbf{3} & \textbf{3} \\ WB & \textbf{3} & \textbf{3} & \textbf{3} \\ I & \textbf{3} & 2 & 2 \\ CVNN & 2 & 2 & 2 \\ CVDD & 2 & 2 & 2 \\ DSI & \textbf{3} & \textbf{3} & \textbf{3} \\ \botrule \end{tabular} & \begin{tabular}{cccc} \toprule \diagbox[width=8em]{\scalebox{0.8}{\textbf{Validity}}}{\scalebox{0.8}{\textbf{\makecell[r]{Clustering \\ method}}}} & \scalebox{0.8}{\textbf{KMeans}} & \scalebox{0.8}{\textbf{\makecell{Spectral \\ Clustering}}} & \scalebox{0.8}{\textbf{EM}} \\ \colrule Dunn & 2 & \textbf{3} & 2 \\ CH & 6 & 6 & 4 \\ DB & 6 & 6 & 5 \\ Sil & 6 & \textbf{3} & 2 \\ WB & 6 & 6 & 6 \\ I & 5 & \textbf{3} & \textbf{3} \\ CVNN & 2 & 2 & 2 \\ CVDD & 2 & 2 & 2 \\ DSI & 6 & \textbf{3} & 5 \\ \botrule \end{tabular} \\ \\ \\ The \texttt{thy} dataset has 215 samples in 3 classes. & The \texttt{vehicle} dataset has 948 samples in 4 classes. 
\\ \begin{tabular}{cccc} \toprule \diagbox[width=8em]{\scalebox{0.8}{\textbf{Validity}}}{\scalebox{0.8}{\textbf{\makecell[r]{Clustering \\ method}}}} & \scalebox{0.8}{\textbf{KMeans}} & \scalebox{0.8}{\textbf{\makecell{Spectral \\ Clustering}}} & \scalebox{0.8}{\textbf{EM}} \\ \colrule Dunn & 5 & 2 & 6 \\ CH & \textbf{3} & \textbf{3} & \textbf{3} \\ DB & 5 & \textbf{3} & 4 \\ Sil & 4 & \textbf{3} & 2 \\ WB & 6 & \textbf{3} & 6 \\ I & \textbf{3} & \textbf{3} & 4 \\ CVNN & 2 & 2 & 2 \\ CVDD & 2 & 2 & 5 \\ DSI & 5 & \textbf{3} & 6 \\ \botrule \end{tabular} & \begin{tabular}{cccc} \toprule \diagbox[width=8em]{\scalebox{0.8}{\textbf{Validity}}}{\scalebox{0.8}{\textbf{\makecell[r]{Clustering \\ method}}}} & \scalebox{0.8}{\textbf{KMeans}} & \scalebox{0.8}{\textbf{\makecell{Spectral \\ Clustering}}} & \scalebox{0.8}{\textbf{EM}} \\ \colrule Dunn & 6 & 2 & 5 \\ CH & 2 & 2 & 2 \\ DB & 2 & 2 & 2 \\ Sil & 2 & 2 & 2 \\ WB & 3 & 3 & 3 \\ I & 5 & 2 & 5 \\ CVNN & 2 & 2 & 2 \\ CVDD & 2 & 2 & 2 \\ DSI & 5 & \textbf{4} & 5 \\ \botrule \end{tabular} \\ \\ \end{tabular} } \begin{tabnote} $^{a.}$ Sil = Silhouette. $^{b.}$ Bold value: the successful prediction of the CVI whose predicted number of clusters equals the real number of clusters. \end{tabnote} \label{tab:11} \end{table} Table \ref{tab:11} shows prediction of the number of clusters based on CVIs, clustering algorithms and datasets. The predicted number of clusters by a CVI is the number of clusters that perform best on this CVI. Captions of sub-tables contain the real number of clusters (classes) for each dataset. A successful prediction of the CVI is that its predicted number of clusters equals the real number of clusters. In the results, it is worth noting that \ul{only DSI successfully predicted the number of clusters from spectral clustering for all datasets}. This implies that DSI may work well with the spectral clustering method. 
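The prediction protocol of Section \ref{sec:c_num} can be sketched as follows, with k-means and the silhouette score standing in for a generic clustering method and CVI (standard scikit-learn calls on a synthetic dataset; this is an illustration, not the exact experimental code):

```python
# Illustration of the cluster-number prediction protocol (hypothetical setup):
# run a clustering algorithm for several candidate k, score each partition
# with a CVI, and report the k whose partition scores best.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # toy dataset

scores = {}
for k in range(2, 7):  # candidate numbers of clusters, k = 2, ..., 6
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher silhouette is better

k_hat = max(scores, key=scores.get)  # predicted number of clusters
```

A prediction is then counted as successful when `k_hat` equals the real number of clusters of the dataset.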
\section{Discussion} Although DSI obtains only one first rank (Table \ref{tab:7}) compared with the other CVIs in the experiments, it never ranks last, which means that it still performs better than some other CVIs. It is worth emphasizing that all the compared CVIs are excellent and widely used; the experiments therefore show that DSI can join them as a promising new CVI. By examining these CVI evaluation results, we confirm that \textbf{none of the CVIs performs well for all datasets}. Thus, it is better to measure clustering results using several effective CVIs, and the DSI provides another option. DSI is also unique: none of the other CVIs performs the same as DSI. For example, in Table \ref{tab:7}, for the \texttt{vehicle} dataset, only CVNN and DSI predicted correctly, but for the \texttt{zoo} dataset, CVNN was wrong and DSI was correct. As another example, in Table \ref{tab:8}, for the \texttt{sonar} dataset, DSI performed better than Dunn, CVNN, and CVDD, whereas for the \texttt{cancer} dataset, Dunn, CVNN, and CVDD performed better than DSI. More examples of the diversity of CVIs are shown in Table \ref{tab:12}, and their plots with true labels are shown in Figure \ref{fig:3} (the \texttt{atom} dataset has three features, and the others have two features). \begin{table} \tbl{Rank-difference results for selected synthetic datasets.} {\begin{tabular}{cccccccccc} \toprule \diagbox[width=8em]{\textbf{\scalebox{0.95}{Dataset}}}{\textbf{\scalebox{0.95}{CVI}}} & \textbf{Dunn} & \textbf{CH} & \textbf{DB} & \textbf{Sil$^a$} & \textbf{WB} & \textbf{I} & \textbf{CVNN} & \textbf{CVDD} & \textbf{DSI} \\ \colrule atom & 0 & 15 & 15 & 15 & 15 & 14 & 4 & 0 & 0 \\ disk-4000n & 10 & 0 & 7 & 0 & 0 & 0 & 11 & 12 & 1 \\ disk-1000n & 6 & 12 & 15 & 12 & 13 & 14 & 15 & 8 & 14 \\ D31 & 5 & 1 & 2 & 1 & 0 & 2 & 10 & 2 & 0 \\ flame & 10 & 6 & 11 & 7 & 7 & 8 & 12 & 11 & 7 \\ square3 & 11 & 0 & 2 & 0 & 0 & 7 & 0 & 11 & 0 \\ \botrule \end{tabular}} \begin{tabnote} $^{a.}$ Sil = Silhouette.
\end{tabnote} \label{tab:12} \end{table} The foregoing examples show the need to employ more CVIs, because each CVI is different and may have its own special capability. That capability, however, is difficult to describe precisely. The definitions of some CVIs show them to be center/non-center representative \cite{Hu2019Internal} or density-representative. Similarly, the DSI is a separability-representative CVI: DSI performs better for clusters that have high separability with the true labels (like the \texttt{atom} dataset in Figure \ref{fig:3}); if the real clusters have low separability, incorrectly predicted clusters may instead obtain a higher DSI score (Figure \ref{fig:4}). \begin{figure} \centering \subfloat[\centering atom]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_1.png}}} \quad \subfloat[\centering disk-4000n]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_2.png}}} \quad \subfloat[\centering disk-1000n]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_3.png}}} \vskip 0pt \subfloat[\centering D31]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_4.png}}} \quad \subfloat[\centering flame]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_5.png}}} \quad \subfloat[\centering square3]{ \frame{\includegraphics[height=0.2\textwidth,width=0.2\textwidth]{figs/fig3_6.png}}} \caption{Examples for rank-differences of synthetic datasets.} \label{fig:3} \end{figure} Clusters in datasets are highly diverse, so diversity in clustering methods and CVIs is necessary. Since the preferences of CVIs are difficult to analyze precisely and quantitatively, more studies on selecting a proper CVI to measure clusters without true labels should be performed in the future. Having more CVIs expands the options.
Until approaches are discovered for selecting an optimal CVI to measure clusters, it is meaningful to provide more effective CVIs and to apply more than one CVI when evaluating clustering results. \begin{figure} \centering \subfloat[\centering{Real clusters:\newline DSI $\approx0.456$}]{ \frame{\includegraphics[height=0.25\textwidth,width=0.25\textwidth]{figs/fig4_1.png}}} \qquad \subfloat[\centering{Predicted clusters:\newline DSI $\approx0.664$}]{ \frame{\includegraphics[height=0.25\textwidth,width=0.25\textwidth]{figs/fig4_2.png}}} \caption{Wrongly-predicted clusters have a higher DSI score than real clusters.} \label{fig:4} \end{figure} In addition, evaluating CVIs is itself an important task. Its general process is: \begin{enumerate} \item To create different clusters from datasets. \item To compute an external CVI, with true labels as the ground truth, and the internal CVIs. \item To compare the results of the internal CVIs with the ground truth; the results of an effective internal CVI should be close to those of the external CVI. \end{enumerate} In this paper, we generated different clusters using a variety of clustering methods. Different clusters can also be generated by changing the parameters of clustering algorithms (\textit{e.g.}, the number of clusters $k$ in k-means clustering) or by taking subsets of the datasets. The comparison step could also employ methods other than the two evaluation metrics we have used: hit-the-best and rank-difference. \section{Conclusion} To evaluate clustering results, it is essential to apply various CVIs because there is no universal CVI for all datasets and no specific method for selecting a proper CVI to measure clusters without true labels. In this paper, we propose the DSI as a novel CVI based on a data separability measure. Since the goal of clustering is to separate a dataset into clusters, we hypothesize that better clustering causes these clusters to have higher separability.
Including the proposed DSI, we applied nine internal CVIs and one external CVI (ARI), used as the ground truth, to the clustering results of five clustering algorithms on various real and synthetic datasets. The results show DSI to be an effective and unique CVI that is competitive with the other CVIs compared here. We also summarized the general process for evaluating CVIs and used two metrics to compare the results of CVIs with the ground truth. We introduced the rank-difference as an evaluation metric for comparing two score sequences; this metric avoids two disadvantages of the hit-the-best measure, which is commonly used in CVI evaluation. We believe that both the DSI and the rank-difference metric can be helpful in clustering analysis and CVI studies in the future.
\section{Introduction}\label{sec:intro} The unprecedented statistical precision that upcoming large-scale structure surveys are expected to attain requires cosmologists to develop equally precise methods to predict the various observables. The simplest and most widely applied way to describe the statistical information encoded in the large-scale structure is via the $2$-point correlation function \cite{peebles:1980}. This includes the $2$-point galaxy correlation function, or $2$-point correlations of cosmic shear maps in the case of gravitational lensing, as well as their cross-correlation. The starting point to predicting both these observables is the $2$-point correlation function of matter $\xi_m$, or its Fourier transform, the power spectrum $P_m$. The matter power spectrum is very well understood in the context of gravity-only N-body simulations (that is, neglecting baryonic effects on the total matter distribution). The simulation requirements for a given pre-specified precision have been studied in Ref.~\cite{2016JCAP...04..047S}, and simulations have also allowed for the calibration of semi-analytical models such as {\sc Halofit} \cite{2012ApJ...761..152T} and construction of efficient interpolations such as that of the {\sc Coyote} project \cite{emulator}. Baryonic effects are known to have an impact on the small-scale matter power spectrum \cite{rudd/zentner/kravtsov,2014Natur.509..177V, 2016MNRAS.461L..11H} and work on modeling these effects has also been carried out recently \cite{zentner/rudd/hu, mohammed/seljak, 2015MNRAS.454.1958M, 2016PhRvD..94f3508S}. 
An accurate model of the matter power spectrum alone is however insufficient to properly exploit upcoming surveys, especially when inferring cosmological parameter values from the data, for which one also needs accurate determinations of the covariance matrix of the power spectrum, \begin{eqnarray}\label{eq:covdefintro} \cov({\v{k}_1, \v{k}_2}) \equiv \big< \hat{P}_m(\v{k}_1) \hat{P}_m(\v{k}_2) \big> - \big< \hat{P}_m(\v{k}_1) \big>\big< \hat{P}_m(\v{k}_2) \big>, \end{eqnarray} in order to quantify the statistical error of the measurements. In the equation above, angle brackets denote ensemble averages and $\hat{P}_m(\v{k})$ is an estimate of the matter power spectrum within some wavemode bin centered at $\v{k}$. The power spectrum covariance, hereafter simply referred to as \emph{matter covariance}, therefore measures the correlation between the power at wavemodes $\v{k}_1$ and $\v{k}_2$. For Gaussian initial conditions, different Fourier modes evolve independently in the linear stages of structure formation. In this regime, only the diagonal terms are non-vanishing and they are trivially related to the matter power spectrum itself. At later stages, nonlinear structure formation effectively couples different Fourier modes, which leads to important off-diagonal ($k_1\neq k_2$) terms in $\cov({\v{k}_1, \v{k}_2})$ through a special configuration of the matter trispectrum (the Fourier transform of the $4$-point correlation function), which we will describe in more detail below. The matter trispectrum is a cumbersome quantity to predict, which is why our current knowledge of the covariance matrix is far poorer than that of the power spectrum. 
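In practice, the ensemble averages in Eq.~(\ref{eq:covdefintro}) are estimated over a finite set of realizations (e.g., N-body mocks). A minimal numpy sketch of such a sample-covariance estimator, using synthetic stand-in spectra purely for illustration:

```python
# Unbiased sample-covariance estimator of binned power spectra over mocks.
# The "spectra" below are synthetic random draws, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_mocks, n_bins = 5000, 8
P_hat = rng.normal(loc=1.0, scale=0.1, size=(n_mocks, n_bins))  # P-hat(k_i) per mock

mean = P_hat.mean(axis=0)                                 # <P-hat(k_i)> over realizations
cov = (P_hat - mean).T @ (P_hat - mean) / (n_mocks - 1)   # cov(k_i, k_j)
```

The sampling noise of this estimator decreases only as the square root of the number of realizations, which is why thousands of simulations are needed for sufficient signal-to-noise.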
Given that inaccurate estimates of the covariance matrix can result in wrong interpretations of the data (see e.g.~Refs.~\cite{2007MNRAS.375L...6S, 2011MNRAS.416.1045K, 2013MNRAS.432.1928T, 2013PhRvD..88f3537D, 2016MNRAS.458.4462B}), this naturally motivates research in obtaining accurate theoretical predictions of the covariance, including its dependence on cosmological parameters \cite{2009A&A...502..721E, 2012ApJ...760...97L, 2013JCAP...11..009M, 2015JCAP...12..058W}, as well as on baryonic effects. This is important for current data sets, but even more so for future large-volume surveys such as Euclid \cite{2011arXiv1110.3193L}, LSST \cite{2012arXiv1211.0310L} and DESI \cite{2013arXiv1308.0847L}. One frequently employed tool to estimate $\cov({\v{k}_1, \v{k}_2})$ are direct estimates of the covariance via Eq.~(\ref{eq:covdefintro}) using large sets of N-body simulations \cite{2006MNRAS.371.1188H, 2009ApJ...700..479T, joachimpen2011, 2011ApJ...734...76S, 2017arXiv170105690K}. This requires performing thousands of N-body simulations in order to obtain sufficient signal-to-noise in the covariance, which makes these estimates extremely costly in terms of computational resources. Estimating the covariance for many different sets of cosmological parameters therefore becomes prohibitive, as does a realistic modeling of baryonic processes. A complementary approach to simulation estimates is to use perturbation theory. Reference~\cite{2016PhRvD..93l3505B} presented a calculation of the trispectrum in the covariance configuration at the 1-loop level, based on the Effective Field Theory (EFT) of large-scale structure (see Ref.~\cite{porto:2016} for a review). 
The main limitation of such perturbative approaches is that they are only applicable to sufficiently large scales $k\lesssim k_\text{NL}\approx 0.3\:h\,{\rm Mpc}^{-1}$ (at $z=0$), which limits their usefulness in the analysis of data on smaller scales.\footnote{We define the nonlinear scale $k_\text{NL}$ at a given redshift $z$ through $k_\text{NL}^3P_{\rm L}(k_\text{NL},z)/(2\pi^2)=1$.} There have also been attempts to develop semi-analytical phenomenological models of the covariance matrix on scales $k > k_\text{NL}$ \cite{2011ApJ...736....8N, 2015MNRAS.453..450C, mohammed/seljak, mohammed1}, but these typically involve simplifying assumptions and/or free parameters that need to be tuned to match other covariance estimates, usually simulation-based ones, for any given cosmology considered. Moreover, systematic errors made in these phenomenological estimates are not under rigorous control, and can only be estimated through comparison with simulation-based estimates. In this paper, our goal is to describe a calculation of the covariance matrix that combines the merits of the simulation- and perturbation theory-based approaches. More concretely, in our approach, we use perturbation theory to identify the mode-coupling terms of the non-Gaussian covariance that can be resummed with simulation-calibrated power spectrum responses (see Ref.~\cite{paper1} for an in-depth discussion and Sec.~\ref{sec:respdef} below for an overview). The power spectrum responses measure the fractional change of the local power spectrum in the presence of long-wavelength perturbations, and they can be measured accurately and non-perturbatively with only a few relatively small-volume simulations \cite{2011ApJS..194...46G, 2011JCAP...10..031B, 2016JCAP...09..007B, wagner/etal:2014, response, takada/hu:2013, li/hu/takada, lazeyras/etal, 2016arXiv161204360L, 2017arXiv170103375C} (with small computational cost compared to that of fully numerical covariance estimates).
The types of mode-coupling interactions that are captured by responses are therefore those that describe the coupling between long- and short-wavelength modes. All non-response type terms are calculated using standard perturbation theory (SPT), leaving the whole calculation free from fitting parameters. Moreover, we can use higher-order perturbation theory to estimate the systematic error made in the calculation. In Ref.~\cite{paper1}, we have presented a response-based calculation of the non-Gaussian covariance at tree level in the squeezed regime, i.e., when one of the modes is linear and sufficiently smaller than the other, which can take any other value: $k_{\rm soft} \ll k_{\rm hard}$, $k_{\rm soft} \ll k_\text{NL}$, for any $k_{\rm hard}$, where \begin{equation} k_{\rm soft} \equiv {\rm min}\{k_1, k_2\}\,, \quad k_{\rm hard} \equiv {\rm max}\{k_1, k_2\}\,. \end{equation} This represents an application of the well-known relation between responses and squeezed-limit correlators (in this case, the squeezed trispectrum). In this paper, we go beyond Ref.~\cite{paper1} as we demonstrate how to use responses to resum interaction vertices that involve internal soft-loop momenta, thereby permitting an efficient and accurate evaluation of the covariance for any values of $k_1$, $k_2$, including (quite crucially) cases in which $k_1 \approx k_2$. This constitutes an example of the use of responses in the calculation of non-squeezed $n$-point functions. 
After establishing some notation and summarizing the definitions of power spectrum responses and covariance in Sec.~\ref{sec:def}, the steps taken in this paper can be outlined as follows: \begin{enumerate} \item In Sec.~\ref{sec:NGcovtree}, working at tree level in standard perturbation theory, we show how to \emph{stitch} together the standard perturbation theory and response-based results presented first in Ref.~\cite{paper1} to fully describe the matter covariance in the regime where $k_{\rm soft} \ll k_\text{NL}$ and any $k_{\rm hard}$. \item Section \ref{sec:NGcov1loop} is devoted to the novel application of responses to calculate loop interactions involving soft loop momenta but fully nonlinear external momenta. Here, we work explicitly at the 1-loop level in perturbation theory, but also describe how to account for higher loops. \item We compare our model results to simulation-based estimates of the angle-averaged covariance in Sec.~\ref{sec:monosims}. The level of agreement we find across the range of scales probed by the simulations suggests that the calculation presented here (which has no free parameters) captures the majority of the total matter covariance. In Sec.~\ref{sec:angles}, we also look at the prediction for the dependence of $\cov(\v{k}_1, \v{k}_2)$ on the angle between the two modes. \end{enumerate} The covariance calculation presented here, being based on a well-defined theoretical framework, is particularly useful as it allows us to determine exactly which contributions are being left out at a given point $(k_1, k_2)$; in particular, these are higher-loop terms, and certain non-response-type terms. This can be used to estimate the error on the covariance prediction (the \emph{error on the error} of the matter power spectrum), as well as to guide further developments. These, as well as other concluding remarks are the subject of Sec.~\ref{sec:summ}. 
In particular, Fig.~\ref{fig:reg} summarizes which parts of $(k_1,k_2)$-space are already completely captured by our calculation, and which parts can benefit from further work. In Appendix \ref{app:feynman}, we spell out the Feynman rules of cosmological perturbation theory as used in the paper. We collect the expressions to evaluate response functions, as well as the corresponding non-Gaussian covariance terms in Appendices \ref{app:ro} and \ref{app:analy}, respectively. The criterion to distinguish between squeezed and non-squeezed configurations is determined in Appendix \ref{app:fsq}. In Appendix \ref{app:derivation}, we demonstrate explicitly the equivalence between 1-loop covariance terms in standard perturbation theory and the response-based description. Finally, in Appendix \ref{app:mohammed}, we compare our covariance calculation with the prediction of the model presented in Ref.~\cite{mohammed1}. In this paper, we assume a flat $\Lambda{\rm CDM}$ cosmology for all numerical results, with the following parameters: $h = 0.72$, $\Omega_mh^2 = 0.1334$, $\Omega_bh^2 = 0.02258$, $n_s = 0.963$, $\sigma_8(z=0) = 0.801$, $\sum m_{\nu}=0$. These are the same as those used in Ref.~\cite{blot2015} in their estimates of the covariance matrix from simulations, with which we shall compare our results. Further, in our results below, we use the {\sc CAMB} code \cite{camb} and the {\sc Coyote} emulator \cite{emulator} to compute the linear and the nonlinear matter power spectrum, respectively. \section{Definitions and notation} \label{sec:def} \subsection{Power spectrum responses} \label{sec:respdef} In this section, we briefly recap the definition and physical content of power spectrum responses, and display the equations that we use in the remainder of the paper. We refer the reader to Ref.~\cite{paper1} for a detailed description of the response formalism. 
Throughout, we only consider equal-time matter correlators, and will not write the time argument explicitly to ease the notation. The Feynman rules of cosmological perturbation theory (which shall be particularly useful in our considerations below) are summarized in Appendix \ref{app:feynman}. Further, we denote magnitudes of vectors as $k = |\v{k}|$ and adopt a shorthand notation for the sum of vectors: $\v{k}_{12\cdots n} = \v{k}_1 + \v{k}_2 + \cdots + \v{k}_n$. The $n$-th order matter power spectrum response $\mathcal{R}_n$ corresponds to the following interaction vertex \begin{array} &\lim_{\{p_a\} \to 0} \left( \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_Rndef.pdf}} \right) = \nonumber \\ \nonumber \\ & = \frac12 \mathcal{R}_n(k;\, \{\mu_{\v{k},\v{p}_a}\},\, \{\mu_{\v{p}_a,\v{p}_b}\},\, \{p_a/p_b\}) P_m(k) (2\pi)^3 \d_D(\v{k}+\v{k}'- \v{p}_{1\cdots n})\,, \label{eq:Rndef} \end{array} which is interpreted as the response of the nonlinear power spectrum of the small-scale (hard) mode $\v{k}$ to the presence of $n$ long-wavelength (soft) modes $\v{p}_1, ..., \v{p}_n$. The dashed blob is thus meant to account for the fully evolved nonlinear matter power spectrum $P_m(k)$ and all its possible interactions with the $n$ long wavelength perturbations (including loop interactions --- it is thus a \emph{resummed} vertex). In our notation, $\lim_{\{p_a\} \to 0}$ signifies that we only retain the leading contribution in the limit in which all soft momenta are taken to zero. The response $\mathcal{R}_n$ depends on the scale $k$, as well as on the angles between the $n$ soft modes and their angles with $\v{k}$. 
The response also depends on the ratios of soft wavenumbers, but not on their absolute values.\footnote{These responses are to be distinguished from those measured in Refs.~\cite{neyrinck/yang, nishimichi/bernardeau/taruya}, which correspond to the derivative of the nonlinear power spectrum with respect to the initial power spectrum (i.e., not to the presence of individual large-scale perturbations).} The diagrammatic representation of $\mathcal{R}_n$ helps to understand the connection of the power spectrum response with the squeezed limit of the $(n+2)$-point matter correlation function. Explicitly, attaching power spectrum propagators to the soft momentum lines in Eq.~(\ref{eq:Rndef}), we can write \begin{array} \lim_{\{p_a\}\to 0} \left( \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_sqnp2.pdf}} + (\text{perm.}) \right)& = \<\d(\v{k})\d(\v{k}')\d(\v{p}_1)\cdots \d(\v{p}_n)\>_{c, \mathcal{R}_n} \nonumber\\ = n!\, \mathcal{R}_n(k;\, \{\mu_{\v{k},\v{p}_a}\},\, \{\mu_{\v{p}_a,\v{p}_b}\},\, \{p_a/p_b\}) P_m(k) &\left[\prod_{a=1}^nP_{\rm L}(p_a)\right] \: (2\pi)^3 \d_D(\v{k}+\v{k}'+\v{p}_{1\cdots n})\,, \label{eq:sqnpt} \end{array} where the $n!$ factor accounts for the permutations of the $\v{p}_a$. The subscript ${}_c$ denotes connected correlators, while the subscript ${}_{\mathcal{R}_n}$ in the $(n+2)$-connected correlator indicates that only certain contributions to the correlation function are captured by $\mathcal{R}_n$. There are further response-type contributions which are not included in $\<\d(\v{k})\d(\v{k}')\d(\v{p}_1)\cdots \d(\v{p}_n)\>_{c, \mathcal{R}_n}$. These terms are however completely determined by lower order responses, $\mathcal{R}_m$, $1\leq m \leq n$, in conjunction with perturbation theory kernels. All other terms that contribute to $\<\d(\v{k})\d(\v{k}')\d(\v{p}_1)\cdots \d(\v{p}_n)\>_c$ are small in the squeezed limit. 
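For instance, for $n = 2$ with $\v{p}_1 = -\v{p}_2 = \v{p}$ (the \emph{parallelogram} configuration that, as we will see below, controls the power spectrum covariance), Eq.~(\ref{eq:sqnpt}) reduces to
\begin{equation}
\lim_{p\to 0} \<\d(\v{k})\d(\v{k}')\d(\v{p})\d(-\v{p})\>_{c, \mathcal{R}_2} = 2\, \mathcal{R}_2(k;\, \mu, -\mu, -1, 1)\, P_m(k) \left[P_{\rm L}(p)\right]^2 \: (2\pi)^3 \d_D(\v{k}+\v{k}')\,,
\end{equation}
where $\mu = \v{k}\cdot\v{p}/(kp)$ and the factor of $2$ is the $n! = 2!$ permutation factor of Eq.~(\ref{eq:sqnpt}). This is precisely the kinematic configuration in which $\mathcal{R}_2$ enters the covariance calculation of the following sections.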
As described in detail in Ref.~\cite{paper1}, the $\mathcal{R}_n$ can be expanded in terms of all local gravitational observables associated with the $n$ long-wavelength modes, to a given order in perturbations. These observables, or operators $O$ (which can be constructed using either Lagrangian or Eulerian coordinates) form a basis $\mathcal{K}_O$ that unequivocally specifies all of the angular dependence of $\mathcal{R}_n$: \begin{equation} \mathcal{R}_n(k;\, \{\mu_{\v{k},\v{p}_a}\},\, \{\mu_{\v{p}_a,\v{p}_b}\},\, \{p_a/p_b\}) = \sum_O R_O(k) \mathcal{K}_O(\{\mu_{\v{k},\v{p}_a}\},\, \{\mu_{\v{p}_a,\v{p}_b}\},\, \{p_a/p_b\})\,. \label{eq:Rndecomp} \end{equation} At any given order, there are different equivalent decompositions of the sum in \refeq{Rndecomp}, which translate into different expressions for the $\mathcal{K}_O$. For instance, Ref.~\cite{bertolini1} displays an alternative, but mathematically equivalent decomposition at $n=1$ and $n=2$. Here, we will use the Eulerian decomposition described in the main text of Ref.~\cite{paper1}. In this paper, we will only explicitly need the second-order response $\mathcal{R}_2 \equiv \mathcal{R}_2(k, \mu_1, \mu_2, \mu_{12}, p_1/p_2)$, which is a function of the hard mode $k$ (and time), the cosine angles $\mu_1 = \v{p}_1\cdot\v{k}/(p_1k)$, $\mu_2 = \v{p}_2\cdot\v{k}/(p_2k)$, $\mu_{12} = \v{p}_1\cdot\v{p}_2/(p_1p_2)$ and the ratio $p_1/p_2$. 
More specifically, for the application to the matter covariance, the relevant kinematic configuration corresponds to $\mu_1 = \mu$, $\mu_2 = -\mu$, $\mu_{12} = -1$ and $p_1/p_2 = 1$, in which case the expression of $\mathcal{R}_2$ can be given as \begin{eqnarray}\label{eq:PR2_anglecov} \mathcal{R}_2(k; \mu, -\mu, -1, 1) &=& \left[\frac12R_2(k) + \frac23R_{K^2}(k) + \frac29 R_{K.K}(k)\right] + \left[\frac23R_{K\delta}(k) + \frac29 R_{K.K}(k)\right]\mathcal{P}_2(\mu) \nonumber \\ &&+ \left[\frac49R_{KK}(k)\right]\left[\mathcal{P}_2(\mu)\right]^2 \nonumber \\ &\equiv& \mathcal{A}(k) + \mathcal{B}(k)\mathcal{P}_2(\mu) + \mathcal{C}(k)\left[\mathcal{P}_2(\mu)\right]^2, \end{eqnarray} where $\mathcal{P}_\ell$ is the Legendre polynomial of order $\ell$ and the second equality serves to define the functions $\mathcal{A}(k)$, $\mathcal{B}(k)$ and $\mathcal{C}(k)$, which help to simplify some notation below. The coefficients $R_O(k)$ are called \emph{response coefficients} and they correspond to the response of the local small-scale power spectrum to specific configurations of the long-wavelength perturbations. At tree level, all $R_O(k)$ can be derived by matching the definition of $\mathcal{R}_2$ to the squeezed four-point function, in the sense of Eq.~(\ref{eq:sqnpt}) (see Ref.~\cite{paper1} for the explicit steps of this derivation). In the nonlinear regime of structure formation, the response coefficients must be determined with the aid of N-body simulations. The first three isotropic response coefficients, $R_1$, $R_2$ and $R_3$, where $R_n(k) \equiv n! R_{\d^n}(k)$, have already been measured accurately with separate universe simulations \cite{response} (see also Refs.~\cite{2011ApJS..194...46G, 2011JCAP...10..031B, 2016JCAP...09..007B, wagner/etal:2014}). 
In these simulations, the presence of an exactly uniform density perturbation in the simulation volume is simulated by exploiting the equivalence with structure formation in a spatially curved Friedmann-Robertson-Walker spacetime \cite{CFCpaper2}. The remaining coefficients have so far not been measured in N-body simulations due to complications associated with how to model the presence of these anisotropic long-wavelength perturbations \cite{2016arXiv161001059I,2017arXiv170103375C}. In this paper, we combine the simulation measurements of the isotropic $R_O(k)$ with the nonlinear extrapolation of the anisotropic ones put forward in Ref.~\cite{paper1} (see Fig.~1 there for the numerical results). The explicit expressions for all $R_O(k)$ used in this paper are given in Appendix \ref{app:ro}. \subsection{Matter power spectrum covariance}\label{sec:covariance} Let $\delta(\v{x})$ denote the fractional matter density contrast at $\v{x}$, $\delta(\v{k})$ its Fourier transform (distinguished by their arguments) and $\hat{P}(\v{k})$ the estimated power spectrum in a wavenumber bin centered on the Fourier mode $\v{k}$, in a total survey volume $V$. 
The matter power spectrum covariance $\cov(\v{k}_1, \v{k}_2)$ measures the correlation between the power spectrum of the modes $\v{k}_1$ and $\v{k}_2$ and is defined as (rewriting Eq.~(\ref{eq:covdefintro})) \begin{eqnarray}\label{eq:covdef} \cov({\v{k}_1, \v{k}_2}) &\equiv& \cov(k_1, k_2, \mu_{12}) \equiv \big< \hat{P}_m(\v{k}_1) \hat{P}_m(\v{k}_2) \big> - \big< \hat{P}_m(\v{k}_1) \big>\big< \hat{P}_m(\v{k}_2) \big> \nonumber \\ &=& \underbrace{V^{-1}[P_m(k_1)]^2\Big[\delta_D(\v{k}_1+\v{k}_2) + \delta_D(\v{k}_1-\v{k}_2)\Big]}_{\rm Gaussian} + \underbrace{V^{-1}T_m(\v{k}_1, -\v{k}_1, \v{k}_2, -\v{k}_2)}_{\rm Non-Gaussian} \nonumber \\ &=& \cov^\text{G}(k_1, k_2, \mu_{12}) + \cov^\text{NG}(k_1, k_2, \mu_{12}), \end{eqnarray} where \begin{eqnarray} \<\delta(\v{k}_1)\delta(\v{k}_2)\> &=& P_m(\v{k}_1) (2\pi)^3\delta_D(\v{k}_1+\v{k}_2) \\ \<\delta(\v{k}_a)\delta(\v{k}_b)\delta(\v{k}_c)\delta(\v{k}_d)\>_c &=& T_m(\v{k}_a, \v{k}_b, \v{k}_c, \v{k}_d) (2\pi)^3 \delta_D(\v{k}_a + \v{k}_b + \v{k}_c+\v{k}_d) \end{eqnarray} define the matter power spectrum $P_m$ and trispectrum $T_m$, respectively. The latter contributes to the covariance in the so-called {\it parallelogram configuration}, $\v{k}_b = -\v{k}_a$, $\v{k}_d = -\v{k}_c$. Note also that so far we have not restricted ourselves to the covariance of the angle-averaged power spectrum (see Eq.~(\ref{eq:averpk}) below), i.e., we allow for the covariance to depend on the angle between the two wavemodes, $\mu_{12} = \v{k}_1\cdot\v{k}_2/(k_1 k_2)$. As indicated in Eq.~(\ref{eq:covdef}), the two terms in the second line are broadly referred to as the Gaussian and non-Gaussian parts of the covariance, on which we comment further below. Before proceeding however, we note that, for a finite survey, Eq.~(\ref{eq:covdef}) is missing an important additional non-Gaussian contribution. 
This is the so-called super-sample covariance term \cite{2007NJPh....9..446T, 2009ApJ...701..945S, takada/hu:2013, li/hu/takada, 2014PhRvD..90j3530L, 2016arXiv161104723A}, which accounts for the coupling of Fourier modes inside the observed survey region with density fluctuations whose wavelength is larger than the typical size of the survey. Formally, this term arises from the convolution of the matter trispectrum with the survey window function. The behavior of the super-sample term is well understood and can be described using the first-order power spectrum response $\mathcal{R}_1$. Below, we shall compare our covariance results with estimates from standard N-body simulations, which do not include fluctuations on scales larger than the simulation box, and are therefore unable to measure the super-sample term. For this reason, we do not consider the super-sample contribution in our results, but note that its inclusion is straightforward. For Gaussian initial conditions, the non-Gaussian contribution is only induced by nonlinear structure formation, so that the Gaussian term dominates at early times. This term correlates the power spectra of two modes only if the modes have the same magnitude and are exactly aligned ($\mu_{12}=1$) or anti-aligned ($\mu_{12} = -1$). The case most commonly considered in the literature is that of angle-averaged power spectra: \begin{eqnarray}\label{eq:averpk} \hat{P}_m(k_1) = V_f \int_{V_s(k_1)} \frac{{\rm d}^3\v{k}}{V_s(k_1)}\delta(\v{k})\delta(-\v{k}), \end{eqnarray} where the integral is taken over a spherical shell of radius $k_1$ and width $\Delta k$, $V_s(k_1) = 4\pi k_1^2 \Delta k$, and $V_f = (2\pi)^3/V$ is the volume of a Fourier cell, where $V$ is the total survey volume. 
In this case, the Gaussian part of the covariance becomes \begin{eqnarray}\label{eq:gauss_mono} \cov^{\rm G}(k_i, k_j) = \frac{2}{N_k}P_m(k_i)^2\delta_{ij}\,, \end{eqnarray} where $i,j$ label bins in wavenumber, $N_k = V_s(k_i)/V_f$ is the number of Fourier modes that are averaged over in a given bin, and the Kronecker delta $\delta_{ij}$ ensures that the Gaussian term contributes only to the diagonal of the angle-averaged covariance matrix, $k_i=k_j$. Note that the Gaussian covariance depends on the size of the $k$-bins in which the spectra are measured. The non-Gaussian part of the covariance measures the coupling between Fourier modes that is induced by nonlinear structure formation at late times. This term can also be present in the initial conditions, due to primordial non-Gaussianity, but we do not consider this case here. In the context of standard perturbation theory \cite{Bernardeau/etal:2002}, the non-Gaussian covariance $\cov^{\rm NG}$ (or the \emph{parallelogram} matter trispectrum, in the sense of Eq.~(\ref{eq:covdef})) can be expanded into its tree-, 1-loop and higher-order loop contributions, \begin{eqnarray}\label{eq:covngexp} \cov^{\rm NG}(k_1, k_2, \mu_{12}) = \cov^{\rm NG}_{\rm tree}(k_1, k_2, \mu_{12}) + \cov^{\rm NG}_{\rm 1loop}(k_1, k_2, \mu_{12}) + \big({\rm higher\ loops}\big). \end{eqnarray} This part of the covariance, which is by far the most challenging to measure and predict, is that which we wish to address specifically in this paper. The main idea behind the calculation that we perform here is that, at a given order in standard perturbation theory, we identify the mode-coupling terms that describe the interactions between hard and soft modes and resum them using power spectrum responses; all other terms can be computed as in standard perturbation theory. Up to 1-loop order, we will see how such a combination of the SPT and response approaches is capable of capturing a substantial part of the total non-Gaussian covariance. 
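For concreteness, the Gaussian diagonal of Eq.~(\ref{eq:gauss_mono}) can be evaluated in a few lines of code; in the following sketch, the bin width, survey volume and power spectrum values are illustrative placeholders, not taken from our fiducial setup:

```python
import numpy as np

def gaussian_cov_diag(k, P_m, dk, V):
    """Diagonal of the angle-averaged Gaussian covariance, Eq. (gauss_mono):
    cov^G(k_i, k_i) = 2 P_m(k_i)^2 / N_k, where N_k = V_s(k_i)/V_f counts the
    Fourier modes in a shell of radius k_i and width dk."""
    V_s = 4.0 * np.pi * k**2 * dk   # Fourier-space shell volume
    V_f = (2.0 * np.pi)**3 / V      # volume of one Fourier cell
    N_k = V_s / V_f
    return 2.0 * P_m**2 / N_k

# Illustrative (hypothetical) numbers: three k-bins of a toy spectrum
k = np.array([0.05, 0.10, 0.20])       # h/Mpc
P = np.array([2.0e4, 1.5e4, 8.0e3])    # (Mpc/h)^3
cov_G = gaussian_cov_diag(k, P, dk=0.01, V=500.0**3)
```

Note that, as stated above, doubling the bin width $\Delta k$ doubles $N_k$ and therefore halves the Gaussian covariance, as does doubling the volume $V$.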
Before proceeding with the more rigorous description of the calculation of $\cov^{\rm NG}(k_1, k_2, \mu_{12})$ in the next sections, we collect here some of the notation that is used throughout. We will use the words \emph{standard}, \emph{response} and \emph{stitched} to refer, respectively, to the SPT-based, response-based and their combined contributions to the total non-Gaussian covariance. More specifically: \begin{enumerate} \item \emph{Standard tree} and \emph{standard 1-loop}, which we represent as $\cov^{\rm NG}_\text{SPT-tree}$ and $\cov^{\rm NG}_\text{SPT-1loop}$, refer to the standard perturbation theory calculation (at the corresponding tree- or 1-loop levels) that does not employ any response vertices and that loses predictivity whenever any of the external momenta approach $k_\text{NL}$. \item \emph{Response tree} and \emph{response 1-loop}, which we represent as $\cov^{\rm NG}_{\mathcal{R}\text{-tree}}$ and $\cov^{\rm NG}_{\mathcal{R}\text{-1loop}}$, refer to the mode-coupling terms between hard and soft modes that exist at tree and 1-loop levels, respectively, and that can be calculated with response vertices. These contributions lose predictivity if the soft modes involved approach $k_\text{NL}$, but are otherwise valid for any value of the hard modes, including in the nonlinear regime. \item \emph{Stitched tree}, which we represent as $\cov^{\rm NG}_\text{st-tree}$, refers to a specific combination, or \emph{stitching}, of the standard and response contributions. The details of this stitching will be clarified in the sections below. Note that while we apply this procedure at tree level here, the stitching can in principle be applied at any order. 
\end{enumerate} It is also useful to organize the angular dependence of the covariance into multipoles as \begin{eqnarray}\label{eq:angleaver} \cov(k_1, k_2, \mu_{12}) &=& \sum_{\ell\ {\rm even}} \cov^{\ell}(k_1, k_2) \mathcal{P}_\ell(\mu_{12}) \nonumber \\ \cov^{\ell}(k_1, k_2) &=& \frac{2\ell + 1}{2}\int_{-1}^1{\rm d}\mu_{12} \cov (k_1, k_2, \mu_{12}) \mathcal{P}_\ell(\mu_{12}). \label{eq:covtree_exp} \end{eqnarray} The case of the monopole $\ell = 0$, $\cov^{\rm NG, \ell=0}(k_1, k_2)$, corresponds to the covariance matrix of angle-averaged spectra, which is the case we shall mostly focus on (with the exception of Sec.~\ref{sec:angles}, where we present the $\ell=2$ and $\ell=4$ predictions). Finally, throughout we use $k_{\rm soft}$ and $k_{\rm hard}$ to denote the softest and the hardest of the two $k$-modes of the covariance, i.e., $k_{\rm soft} = {\rm min}\{k_1, k_2\}$ and $k_{\rm hard} = {\rm max}\{k_1, k_2\}$. \section{The \emph{stitched} non-Gaussian covariance at tree level}\label{sec:NGcovtree} In this section, we propose a calculation of the tree-level covariance that combines the standard perturbation theory result, valid only if $k_1, k_2 \ll k_\text{NL}$, with the response-based description first put forward in Ref.~\cite{paper1}, which effectively extends the validity of the calculation to $k_{\rm soft} \ll k_\text{NL}$, but any $k_{\rm hard}$, including in the nonlinear regime. 
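Numerically, the multipole projection of Eq.~(\ref{eq:covtree_exp}) amounts to a Gauss-Legendre quadrature over $\mu_{12}$; a minimal sketch, where the angle-dependent covariance is an arbitrary placeholder function:

```python
import numpy as np

def cov_multipole(cov_mu, ell, n_quad=16):
    """Project cov(mu_12) onto the Legendre multipole of order ell,
    cov^ell = (2 ell + 1)/2 * int_{-1}^{1} cov(mu) P_ell(mu) dmu,
    via Gauss-Legendre quadrature (exact for polynomial integrands)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    P_ell = np.polynomial.legendre.Legendre.basis(ell)(nodes)
    return 0.5 * (2 * ell + 1) * np.sum(weights * cov_mu(nodes) * P_ell)

# Placeholder angular dependence: cov(mu) = a + b * P_2(mu)
a, b = 1.0, 0.3
cov_mu = lambda mu: a + b * 0.5 * (3.0 * mu**2 - 1.0)

mono = cov_multipole(cov_mu, ell=0)   # recovers a
quad = cov_multipole(cov_mu, ell=2)   # recovers b
```

By orthogonality of the $\mathcal{P}_\ell$, the monopole of the placeholder returns $a$ and the quadrupole returns $b$.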
The SPT result for the tree-level non-Gaussian covariance \cite{1999ApJ...527....1S} is given by (see e.g.~Appendix B of Ref.~\cite{paper1} for explicit expressions for all terms of the general tree-level trispectrum) \begin{eqnarray}\label{eq:covtree} V\cov_{\rm SPT-tree}^{\rm NG}(k_1, k_2, \mu_{12}) &=& 12F_3(\v{k}_1, \v{k}_2, -\v{k}_2)P_{\rm L}(k_1)[P_{\rm L}(k_2)]^2 \nonumber \\ &+& 4F_2(\v{k}_1-\v{k}_2, \v{k}_2)^2 [P_{\rm L}(k_2)]^2P_{\rm L}(|\v{k}_1-\v{k}_2|) \nonumber \\ &+& 4F_2(\v{k}_1+\v{k}_2, -\v{k}_2)^2 [P_{\rm L}(k_2)]^2P_{\rm L}(|\v{k}_1+\v{k}_2|) \nonumber \\ &+& 4F_2(\v{k}_1-\v{k}_2, \v{k}_2)F_2(\v{k}_2-\v{k}_1, \v{k}_1)P_{\rm L}(k_1)P_{\rm L}(k_2)P_{\rm L}(|\v{k}_1-\v{k}_2|) \nonumber \\ &+&4 F_2(\v{k}_1+\v{k}_2, -\v{k}_2)F_2(\v{k}_1+\v{k}_2, -\v{k}_1)P_{\rm L}(k_1)P_{\rm L}(k_2)P_{\rm L}(|\v{k}_1+\v{k}_2|) \nonumber \\ &+& (\v{k}_1 \leftrightarrow \v{k}_2), \nonumber \\ \end{eqnarray} where $F_2$ and $F_3$ are the symmetrized second- and third-order standard perturbation theory kernels \cite{Bernardeau/etal:2002}. In the literature, the above equation is sometimes written in a simpler way that anticipates the angle averages that are subsequently taken, but here we opted to remain general. This standard tree level result is only expected to be a good approximation to the full non-Gaussian covariance when both $k_1$ and $k_2$ are in the linear regime, $\max\{k_1,k_2\} \ll k_\text{NL}$. However, as noted already above, if $k_{\rm soft}$ is sufficiently linear and smaller than $k_{\rm hard}$, then it is possible to extend the regime of validity of the tree level calculation to nonlinear values of $k_{\rm hard}$ by making use of the response $\mathcal{R}_2$. This follows from noting that, by taking $n=2$, $\v{p}_1 = -\v{p}_2=\v{k}_{\rm soft}$ and $\v{k} = -\v{k}' = \v{k}_{\rm hard}$ in Eq.~(\ref{eq:sqnpt}), one obtains precisely the squeezed limit of the connected $4$-point function (or trispectrum) in the covariance configuration. 
The following equation provides a schematic picture of this relation: \begin{array}\label{eq:R2treediag} \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_R2tree.pdf}} =\:& \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_R2tree_F3.pdf}} + \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_R2tree_F21.pdf}} + (\v{k} \leftrightarrow \v{k}')\,, \end{array} where $p_1, p_2$ are understood to be much softer than $k, k'$. That is, at tree level, the $\mathcal{R}_2$ vertex captures the coupling described by one $F_3$ kernel in one diagram and two $F_2$ kernels in the other. The above equation is shown and used explicitly in Sec.~4.2 of Ref.~\cite{paper1} to derive the shape of $\mathcal{R}_2$ at tree level. By replacing the tree-level $\mathcal{R}_2$ with its simulation-calibrated shape, one effectively extends (or resums) the interactions on the right-hand side of Eq.~(\ref{eq:R2treediag}) to all orders in perturbation theory in the hard mode. Referring the reader to Ref.~\cite{paper1} for more details, here we limit ourselves to showing the final response-based result, which is given by \begin{eqnarray}\label{eq:resptreecov} \cov^{\rm NG}_{\mathcal{R}\text{-tree}}(k_1, k_2, \mu_{12}) &=& V^{-1}\,2 \mathcal{R}_2(k_{\rm hard}, \mu_{12}, -\mu_{12}, -1, 1)[P_{\rm L}(k_{\rm soft})]^2P_m(k_{\rm hard}) \nonumber \\ &&+ \mathcal{O}\left(\frac{k_{\rm soft}^2}{k_{\rm hard}^2},\ \frac{k_{\rm soft}^2}{k_\text{NL}^2} \right), \end{eqnarray} where the next-to-leading corrections come from non-response-type interactions that are suppressed in the squeezed regime, as well as loop corrections in the soft mode that enter when $k_{\rm soft}$ is no longer much smaller than $k_\text{NL}$. 
We can now put the above two equations together to construct our stitched tree-level covariance as follows\footnote{When writing Eqs.~(\ref{eq:covtree}), (\ref{eq:resptreecov}) and (\ref{eq:stitched}) and all following relations, we have implicitly averaged over the $k$-bins used to estimate $\hat{P}_m$. Contrary to the Gaussian case, the non-Gaussian covariance does not depend explicitly on the $k$-bin widths, and hence we omit this averaging to shorten the notation.} \begin{eqnarray}\label{eq:stitched} \cov^{\rm NG}_\text{st-tree}(k_1, k_2, \mu_{12}) = \begin{cases} \cov_\text{SPT-tree}^{\rm NG}(k_\text{hard}, k_\text{soft}, \mu_{12}) &,\ {\rm if}\ k_\text{soft} > f_\text{sq} k_\text{hard}\\ \cov_{\mathcal{R}\text{-tree}}^{\rm NG}(k_\text{hard}, k_\text{soft}, \mu_{12}) &, \ {\rm otherwise} \end{cases}\;. \end{eqnarray} Thus, we use the standard tree-level expression in non-squeezed configurations, but switch to the response tree-level result in squeezed ones. The value of $f_\text{sq}$ controls the transition from the non-squeezed to the squeezed regime, and its optimal choice corresponds to a trade-off between two demands. On the one hand, the response prediction is only accurate up to corrections of order $f_\text{sq}^2$ (\refeq{resptreecov}), and hence $f_\text{sq}$ should be chosen as small as possible. On the other hand, for $k_\text{hard}$ that approach or even exceed $k_\text{NL}$, the response prediction is more accurate than the SPT-tree prediction, which makes larger values of $f_\text{sq}$ beneficial (to maximize the volume in $(k_1,k_2)$-space where the response-based result is used). In Appendix \ref{app:fsq}, we describe a procedure to determine the largest value of $f_\text{sq}$ that ensures a given accuracy of Eq.~(\ref{eq:stitched}) based on the standard tree-level covariance. From the exercise performed in Appendix \ref{app:fsq}, we take our fiducial choice to be $f_\text{sq} = 0.5$. 
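In code, the stitching of Eq.~(\ref{eq:stitched}) is a simple selection between the two tree-level predictions; a minimal sketch, with placeholder callables standing in for the SPT and response results:

```python
def cov_stitched_tree(k1, k2, mu12, cov_spt_tree, cov_resp_tree, f_sq=0.5):
    """Stitched tree-level covariance, Eq. (stitched): use the SPT tree-level
    result in non-squeezed configurations (k_soft > f_sq * k_hard) and the
    response tree-level result in squeezed ones."""
    k_soft, k_hard = min(k1, k2), max(k1, k2)
    if k_soft > f_sq * k_hard:
        return cov_spt_tree(k_hard, k_soft, mu12)
    return cov_resp_tree(k_hard, k_soft, mu12)

# Toy placeholders standing in for the two tree-level predictions
spt  = lambda kh, ks, mu: 1.0
resp = lambda kh, ks, mu: 2.0

c_diag = cov_stitched_tree(0.1, 0.1, 1.0, spt, resp)   # non-squeezed -> SPT
c_sq   = cov_stitched_tree(1.0, 0.1, 1.0, spt, resp)   # squeezed -> response
```

Note that the selection is symmetric in $k_1 \leftrightarrow k_2$, since only $k_\text{soft}$ and $k_\text{hard}$ enter.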
From here on, we shall therefore refer to configurations with $k_\text{soft} < k_\text{hard}/2$ as squeezed. \begin{figure} \centering \includegraphics[width=\textwidth]{fig_tree_transition_mono.png} \caption{Non-Gaussian angle-averaged matter power spectrum covariance at tree level. The left panel shows the stitched tree-level covariance matrix of Eq.~(\ref{eq:stitched}) as a color plot. The dashed lines show $k_2 = f_\text{sq}k_1$ and $k_2 = k_1/f_\text{sq}$, and draw the boundaries in $(k_1,k_2)$-space within which one uses either the standard tree-level or the response tree-level expressions, as labeled. The right panels show the stitched tree-level covariance matrix (red) at two fixed $k_2$ values, as labeled. Also shown are the standard tree-level result (blue) and the response tree-level expression (green). Results are shown for $V = (656.25\ h^{-1}{\rm Mpc})^3$. Further, $P_{m,i}\equiv P_m(k_i)$.} \label{fig:tree} \end{figure} Our {\it stitched} tree-level result of Eq.~(\ref{eq:stitched}) is shown in Fig.~\ref{fig:tree}, for the angle-averaged case ($\ell = 0$ in Eq.~(\ref{eq:angleaver})). The matrix is shown as a color plot in the left panel. The two panels on the right each show a {\it slice} at fixed $k_2$ of the matrix on the left (red), as well as the standard (blue) and response (green) results. The upper right panel represents a slice at $k_2 < k_\text{NL}$, while the lower right shows a slice at $k_2 > k_\text{NL}$. The dashed black lines draw the boundary between the squeezed and non-squeezed regimes. When both modes are smaller than $k_\text{NL} \approx 0.3\:h\,{\rm Mpc}^{-1}$ (at $z=0$), the color plot displays a smooth transition from the standard- to the response-based results, as it should by definition. One does note, however, that there are noticeable discontinuities at the junction between the two cases when both $k$ values are above $k_\text{NL}$. 
This is expected, as the standard tree-level result does not include any nonlinear corrections, while the response prediction does. Furthermore, even if, say, $k_2 \ll k_1$, when $k_2$ is of order $k_\text{NL}$ or larger we do not expect the response tree-level result by itself to be a good description of the covariance. This is because loop contributions become non-negligible in that regime (cf.~the $\mathcal{O}(k_\text{soft}^2/k_\text{NL}^2)$ corrections in Eq.~(\ref{eq:resptreecov})). We will see below that, in this regime, our tree-level result is negligible compared to the contribution from 1-loop terms, and hence the unphysical discontinuity in the stitched tree-level contribution does not affect the much larger total covariance. \section{Non-Gaussian covariance at the 1-loop level}\label{sec:NGcov1loop} We now extend the calculation of the non-Gaussian part of the covariance by working at the 1-loop level (cf.~Eq.~(\ref{eq:covngexp})). To do so, we introduce a new concept in large-scale structure perturbation theory, which uses responses to describe the coupling between soft \emph{internal} loop momenta and hard external modes, thereby going beyond the application of responses considered so far, namely to describe the coupling of soft \emph{external} modes with hard external modes. The 1-loop covariance $\cov^{\rm NG}_{{\rm 1loop}}(k_1, k_2, \mu_{12})$ has contributions from nine types of diagrams (see e.g.~Fig.~4 of Ref.~\cite{2016PhRvD..93l3505B}). In the limit of soft loop momenta $p$, i.e., $p \ll k_1, k_2$, it can be shown that six of these diagrams are linear in $P_{\rm L}(p)$, i.e., they are of the form \begin{eqnarray}\label{eq:Pp} \cov^{\rm NG}_{{\rm 1loop}, P_{\rm L}(p)}(k_1, k_2, \mu_{12}) \stackrel{p \ll k_1,k_2}{\propto} \int {\rm d}^3p\ \left[ \dots \right]\ P_{\rm L}(p)P_{\rm L}(k_i)P_{\rm L}(k_j)P_{\rm L}(k_k), \end{eqnarray} where $k_i,k_j,k_k$ are of the order of the external modes $k_1,k_2$. 
Here and below, the dots in square brackets denote perturbation theory kernels $F_n$ that we do not write for brevity. On the other hand, the remaining three diagrams contain contributions that involve two powers of $P_{\rm L}(p)$, i.e., \begin{eqnarray}\label{eq:Pp2} \cov^{\rm NG}_{{\rm 1loop}, [P_{\rm L}(p)]^2}(k_1, k_2, \mu_{12}) \stackrel{p \ll k_1,k_2}{\propto} \int {\rm d}^3p\ \left[ \dots \right]\ [P_{\rm L}(p)]^2P_{\rm L}(k_i)P_{\rm L}(k_j)\,. \end{eqnarray} These latter three diagrams can be represented exactly as a single diagram that involves two tree-level $\mathcal{R}_2$ vertices: \begin{array}\label{eq:R21loopdiag} \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov1l_decomp.pdf}} &=& \left(\raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov1l_decomp_1.pdf}} + 3\ {\rm perms.}\right) \nonumber \\ &+& \left(\raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov1l_decomp_2.pdf}} + 3\ {\rm perms.}\right) \nonumber \\ &+& \left(\raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov1l_decomp_3.pdf}} + 1\ {\rm perm.}\right), \end{array} where the role of the long-wavelength perturbations is played by internal loop momenta. The above equation is written in terms of perturbation theory kernels in Eq.~(\ref{eq:Cov1loop_tree}) in Appendix \ref{app:derivation}, where we demonstrate explicitly which standard 1-loop covariance terms are captured by the response approach. Based on the Feynman rules augmented with responses (cf.~Appendix \ref{app:feynman}), this represents a \emph{linking} of two diagrammatic representations of the tree-level response $\mathcal{R}_2^\text{tree}$ (cf.~Eq.~(\ref{eq:R2treediag})). As was the case for the tree-level covariance, the generalization of the tree-level $\mathcal{R}_2$ to its simulation-calibrated expressions effectively extends the validity of the calculation of these 1-loop terms to nonlinear values of the hard modes $k_1, k_2$. 
A point that is worth emphasizing is that now both external modes can be of comparable size, i.e., this constitutes an application of the response formalism beyond the commonly used application to describe squeezed-limit correlation functions. At this point, it may not be clear why capturing the terms in \refeq{Pp2} through responses yields a significant advantage. However, as we will comment on in the next subsection, the other 1-loop terms in \refeq{Pp} are suppressed relative to those in \refeq{Pp2} if $k_1,k_2$ are sufficiently large. Further, beyond the limit of soft loop momentum, the contributions from the opposite limit, i.e. $p \gg k_1,k_2$, are suppressed due to mass-momentum conservation. By expressing \refeq{Pp2} in terms of responses, one can therefore capture a substantial part of the covariance in $(k_1,k_2)$-space. Moreover, in the squeezed regime where $k_\text{soft} \ll k_\text{hard}$, additional response diagrams allow us to capture \emph{all} 1-loop terms that are leading order in $k_{\rm soft}/k_{\rm hard}$, although we do not calculate these in this paper. We return to this in \refsec{defnr}. By the Feynman rules, the response diagram in \refeq{R21loopdiag} can then be written as \begin{array}\label{eq:Cov1loop} & \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov1l.pdf}} +(\text{perm.}) = \nonumber \\ & = 2 \int \frac{{\rm d}^3p}{(2\pi)^3}[P_{\rm L}(p)]^2 \mathcal{R}_2(k_1,\mu_1,-\mu_1,-1,1) P_m(k_1)\, \mathcal{R}_2(k_2,\mu_2,-\mu_2,-1,1) P_m(k_2) \nonumber \\ & = \frac{2P_m(k_1)P_m(k_2)}{(2\pi)^3}\Bigg[\int_0^{ p_\text{max}}p^2[P_{\rm L}(p)]^2{\rm d}p\Bigg] \nonumber \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \int_{-1}^1{\rm d}\mu_1\int_{0}^{2\pi}{\rm d}\varphi \mathcal{R}_2(k_1,\mu_1,-\mu_1,-1,1) \mathcal{R}_2(k_2,\mu_2,-\mu_2,-1,1), \end{array} where $\mu_1 = \v{k}_1\cdot\v{p}/(k_1 p)$, $\mu_2 = \v{k}_2\cdot\v{p}/(k_2 p)$ and the factor of 2 comes from the two possible ways of connecting the loops. 
In the above equation, we have implicitly fixed the direction of $\v{k}_1$, which means the polar integral of the loop momentum $\v{p}$ is done w.r.t.~$\v{k}_1$. We note for completeness that Eq.~(\ref{eq:Cov1loop}), as written, does not correctly describe cases where $\v{k}_1$ and $\v{k}_2$ are very nearly parallel such that $|\v{p} \pm \v{k}_1 \pm \v{k}_2| \approx p$. These terms can be straightforwardly included using responses as well (see Appendix \ref{app:derivation} for more details). However, after angle-averaging, their contribution to low-order multipoles is suppressed by $(p/k_i)^2$ and thus becomes negligible. We therefore do not include them in the main text. After performing the two angle integrals in Eq.~(\ref{eq:Cov1loop}) one arrives at \begin{eqnarray}\label{eq:cov1loop_res} && \cov^{{\rm NG}}_{\mathcal{R}\text{-1loop}} (k_1, k_2, \mu_{12}) = V^{-1}\frac{2P_m(k_1)P_m(k_2)}{(2\pi)^2}\Bigg[\int_0^{ p_\text{max}}p^2[P_{\rm L}(p)]^2{\rm d}p\Bigg] \nonumber \\ && \hspace*{2cm}\times \Bigg[2\mathcal{A}_1\mathcal{A}_2 + \frac25\mathcal{B}_1\mathcal{B}_2\mathcal{P}_2(\mu_{12}) + \frac25\big(\mathcal{A}_1\mathcal{C}_2+\mathcal{A}_2\mathcal{C}_1\big) + \frac{4}{35}\big(\mathcal{B}_1\mathcal{C}_2 + \mathcal{B}_2\mathcal{C}_1\big)\mathcal{P}_2(\mu_{12}) \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \frac{2}{35}\big(1+2\mathcal{P}_2(\mu_{12})^2\big)\mathcal{C}_1\mathcal{C}_2\Bigg], \end{eqnarray} where $\mathcal{A}_i \equiv \mathcal{A}(k_i)\ (i=1,2)$, and similarly for $\mathcal{B}_i$ and $\mathcal{C}_i$ (cf.~Eq.~(\ref{eq:PR2_anglecov}) for the definition of the $\mathcal{A}(k)$, $\mathcal{B}(k)$ and $\mathcal{C}(k)$ in terms of linear combinations of the response coefficients $R_O(k)$). A key issue to address before evaluating \refeq{cov1loop_res} concerns the value for the maximum loop momentum $p_\text{max}$. 
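To make the structure of Eq.~(\ref{eq:cov1loop_res}) explicit, the following sketch evaluates it as a 1D soft-loop integral multiplying the bracketed combination of $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$; all numerical inputs are placeholders, and $p_\text{max}$ is taken as a given input:

```python
import numpy as np

def soft_loop_integral(p, P_L, p_max):
    """The 1D soft-loop factor in Eq. (cov1loop_res):
    int_0^{p_max} p^2 [P_L(p)]^2 dp, via the trapezoid rule."""
    m = p <= p_max
    f = p[m]**2 * P_L[m]**2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p[m]))

def cov_resp_1loop(A1, B1, C1, A2, B2, C2, Pm1, Pm2, mu12, loop, V):
    """Direct transcription of Eq. (cov1loop_res), with `loop` the output of
    soft_loop_integral and A_i, B_i, C_i the coefficient functions of
    Eq. (PR2_anglecov) evaluated at k_i."""
    P2 = 0.5 * (3.0 * mu12**2 - 1.0)
    bracket = (2.0 * A1 * A2
               + 0.4 * B1 * B2 * P2
               + 0.4 * (A1 * C2 + A2 * C1)
               + (4.0 / 35.0) * (B1 * C2 + B2 * C1) * P2
               + (2.0 / 35.0) * (1.0 + 2.0 * P2**2) * C1 * C2)
    return 2.0 * Pm1 * Pm2 * loop * bracket / ((2.0 * np.pi)**2 * V)

# Toy inputs (placeholders): a linear spectrum tabulated on a grid
p_grid = np.linspace(1e-3, 0.3, 500)   # h/Mpc
P_L = 1.0e4 * p_grid                   # toy P_L(p), (Mpc/h)^3
loop = soft_loop_integral(p_grid, P_L, p_max=0.1)
```

In particular, for $\mathcal{B} = \mathcal{C} = 0$ only the $2\mathcal{A}_1\mathcal{A}_2$ term survives and the result is independent of $\mu_{12}$, and the full expression is manifestly symmetric under $k_1 \leftrightarrow k_2$.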
Our criterion for choosing $p_\text{max}$ is based on the fact that Eq.~(\ref{eq:Cov1loop}) is only strictly valid if $p \ll k_1,k_2$, as well as $p < k_\text{NL}$; otherwise, the blobs in \refeq{Cov1loop} would not correspond to response-type interactions. In this paper, we therefore choose the cutoff of the momentum integral to be \begin{equation} p_\text{max} = \min \{ f_\text{sq} k_1,\, f_\text{sq} k_2,\, k_\text{NL} \}\,. \end{equation} Here, we employ the same fraction $f_\text{sq} = 0.5$ as used in our stitched tree-level result (cf.~Eq.~(\ref{eq:stitched})), which assumes that the departure of the response prediction from the full 1-loop trispectrum away from the soft-loop-momentum limit scales similarly to the tree-level case (cf.~Fig.~\ref{fig:fsq}). This is a reasonable assumption, as the relevant interactions at both tree and 1-loop level are controlled by $\mathcal{R}_2$. \subsection{Additional contributions to the 1-loop covariance}\label{sec:defnr} The 1-loop contributions to the covariance that are not captured by \refeq{cov1loop_res} are of two types. The first, in the limit of soft loop momentum, is the contribution from the six diagrams of the form of Eq.~(\ref{eq:Pp}). The second is the contribution from all 1-loop diagrams with loop momentum $p > p_\text{max}$. We comment on both contributions in turn below. We will conclude that the non-response contributions are small compared to the response contributions everywhere except for $k_1, k_2 \sim 0.1-0.3\,h\,{\rm Mpc}^{-1}$. These missing terms can nonetheless be included with a stitching procedure analogous to that performed for the tree-level covariance in \refsec{NGcovtree}. Let us first consider \refeq{Pp}.
The relative size of these non-response terms compared to the response-type ones (cf.~Eq.~(\ref{eq:Pp2})) can be roughly estimated by \begin{eqnarray}\label{eq:ratioest} \frac{P_{\rm L}(k_\text{soft})\int_0^{p_\text{max}}p^2P_{\rm L}(p){\rm d}p}{\int_0^{p_\text{max}}p^2[P_{\rm L}(p)]^2{\rm d}p}, \end{eqnarray} where $k_\text{soft} \gtrsim 0.1\:h\,{\rm Mpc}^{-1}$. In the numerator, it makes sense to use the power spectrum evaluated at $k_\text{soft}$ because $P_{\rm L}(k_\text{soft}) > P_{\rm L}(k_\text{hard})$ in the regime of interest, so that \refeq{ratioest} captures the most relevant terms (we are setting the perturbation theory kernels to unity for this estimate). For our choice of $p_\text{max}$, we have for the above ratio \begin{equation} \mbox{\refeq{ratioest}} \approx \{ 0.27,\ 0.11, \ 0.02\} \quad\mbox{for}\quad k_\text{soft} = \{ 0.1,\ 0.3,\ 1\} \:h\,{\rm Mpc}^{-1}\,, \end{equation} respectively. This indicates that, at the transition from the linear to the nonlinear regime in soft external momenta, $k_{\rm soft} \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$, there are 1-loop terms that are sizeable, but that are not of the type of Eq.~(\ref{eq:Cov1loop}). When both $k_1, k_2 \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$ these missing terms can be calculated with standard perturbation theory; if, on the other hand, $k_{\rm soft} \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$ but $k_{\rm hard} \gg k_{\rm soft}$, then the missing terms can be evaluated by combining standard perturbation theory and response vertices in the same diagram (see e.g.~Eq.~(2.12) of Ref.~\cite{paper1}). Note that for values of $k_\text{soft} \lesssim 0.1\:h\,{\rm Mpc}^{-1}$, the contribution from the 1-loop term is small, and as a result, it is numerically irrelevant whether the 1-loop contribution is accurate. We now turn to the second missing 1-loop part, namely the contribution from loop momenta with $p > p_\text{max}$. Consider a loop momentum $p \gg k_1, k_2$. 
This corresponds to mode-coupling interactions in which hard ingoing momenta combine to form outgoing soft momenta. These types of couplings are suppressed by momentum and mass conservation (see Appendix~B of Ref.~\cite{abolhasani/mirbabayi/pajer:2016} for a more detailed discussion). Specifically, the perturbation theory kernels in this limit scale as $(k_i/p)^2$ ($i=1,2$), and as a result, the loop integrals in this regime contribute negligibly to the total covariance. This, combined with the fact that the response-type terms dominate for $k_\text{soft} \gtrsim k_\text{NL}$, restricts the contributions from loop momenta $p > p_\text{max}$ to the regime of $k_1, k_2 \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$, as well. We shall return to the importance of these missing contributions below, as we analyze the results of our calculation. We stress that the inadequacy of the response-based approach to correctly describe the 1-loop covariance for $k_1, k_2 \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$ can be circumvented by a {\it stitching} to the standard 1-loop calculation (see Ref.~\cite{2016PhRvD..93l3505B} for the complete expressions), similar to that implemented in the last section for the tree-level covariance. We leave such a {\it stitching} at the 1-loop level (as well as the inclusion of other terms important for $k_{\rm soft} \sim 0.1 - 0.3\:h\,{\rm Mpc}^{-1}$ but $k_{\rm hard} \gg k_{\rm soft}$) for future work. \subsection{Estimate of higher-loop contributions}\label{sec:defhl} We have argued above that, for sufficiently high $k$, the 1-loop contribution to the covariance is dominated by response-type terms. This does not address, however, the issue of the relevance of higher loops on these scales, which we consider now. Let us consider the 2-loop contribution to the covariance in the response approach. 
This corresponds to a single diagram that is the $n=3$ generalization of Eq.~(\ref{eq:Cov1loop}): \begin{eqnarray} && \raisebox{-0.0cm}{\includegraphicsbox[scale=0.8]{diag_cov2l.pdf}} + (\text{perm.}) = \nonumber\\ && = 6 P_m(k_1)P_m(k_2) \int_{\v{p}_1} \int_{\v{p}_2} P_{\rm L}(p_1) P_{\rm L}(p_2) P_{\rm L}(|\v{p}_{12}|) \mathcal{R}_3(k_1, \cdots) \mathcal{R}_3(k_2, \cdots) \nonumber \\ && \hspace*{6cm}\times (2\pi)^3 \d_D(\v{k}_1+\v{k}_1'+\v{k}_2+\v{k}_2'), \label{eq:Cov2loop} \end{eqnarray} where the dots in the arguments of $\mathcal{R}_3$ represent all the angles involved (omitted for brevity) and the factor of 6 accounts for the permutations of the internal loop momenta. Further, $\int_{\v{p}} \equiv \int {\rm d}^3p/(2\pi)^3$. An order-of-magnitude estimate of the relative size of this 2-loop contribution to that of Eq.~(\ref{eq:Cov1loop}) can be written as \begin{equation} \frac{[\text{Response 2-loop}]}{[\text{Response 1-loop}]} \sim \frac{6}{2} \left(\frac{\<\mathcal{R}_3\>_{\hat{\v{p}}_1,\hat{\v{p}}_2}}{\<\mathcal{R}_2\>_{\hat{\v{p}}}}\right)^2 \sigma_{p_\text{max}}^2\,, \end{equation} where \begin{equation} \sigma_{p_\text{max}}^2 \equiv \frac1{2\pi^2}\int_0^{p_\text{max}}{\rm d}p\ p^2 P_{\rm L}(p) \end{equation} is the variance of the density field up to the cutoff $p_\text{max}$ employed in the loop integrals, and $\< \>_{\hat{\v{p}}_i}$ denotes the angle-average over the responses in Eqs.~(\ref{eq:Cov1loop}) and (\ref{eq:Cov2loop}). As a very rough estimate, we now assume that only the isotropic response coefficients $R_n/n!$ remain after these angle averages. Note, for instance, that comparing Eqs.~(\ref{eq:Cov1loop}) and (\ref{eq:cov1loop_res}) shows that for $\mathcal{R}_2$ this is not really correct.
Keeping this caveat in mind, we obtain \begin{equation} \frac{[\text{Response 2-loop}]}{[\text{Response 1-loop}]} \sim \frac{1}{3} \left(\frac{R_3}{R_2}\right)^2 \sigma_{p_\text{max}}^2\,. \end{equation} By continuing this reasoning to higher loops, one obtains \begin{eqnarray} \frac{[\text{Response $n$-loop}]}{[\text{Response 1-loop}]} \sim 2 [(n+1)!]^{-1} \left(\frac{R_{n+1}}{R_2}\right)^2 \sigma_{p_\text{max}}^{2(n-1)}. \end{eqnarray} This order-of-magnitude estimate leaves open the possibility that higher-loop terms in the response approach contribute non-negligibly to the total covariance if $\sigma_{p_\text{max}} \gtrsim 1$, which corresponds to $p_\text{max} \gtrsim k_\text{NL}$. The importance of higher-loop response terms and the scaling of their contribution with $n$ also depend on the details of the shape of the $\mathcal{R}_n$, or more accurately, on the specific angle-averages that characterize the corresponding diagrams. These higher-order response functions have, however, never been fully derived, which prevents us from drawing decisive conclusions here. Interestingly, Ref.~\cite{response} found that the \emph{Eulerian} isotropic response coefficients $R_n^E(k)$, which measure the response of the power spectrum to evolved isotropic modes, are rapidly suppressed numerically at higher orders $n \geq 2$. We will return to the potential importance of higher-loop terms as we analyze the results of our covariance calculations below. We stress that, for given $k_1, k_2$, the importance of higher-loop terms should progressively decrease in order to render the response approach to the covariance well-defined and predictive. This highly relevant open issue is left for future investigation. \section{Comparison with simulations: angle-averaged case}\label{sec:monosims} We now assess the performance of the matter covariance expressions developed in the previous sections by comparing them to estimates from N-body simulations.
In particular, we use the results of Ref.~\cite{blot2015}, who estimated the covariance matrix of the matter power spectrum by cross-correlating the angle-averaged power spectra from more than 12000 simulation boxes with volume $V = [656.25\ h^{-1} {\rm Mpc}]^3$. In this section, we therefore consider only the monopole (angle-averaged) part of our covariance expressions ($\ell=0$ in Eq.~(\ref{eq:angleaver})). Ref.~\cite{blot2015} presents results from two sets of simulations: Set A, which consists of $12288$ realizations with $N_p = 256^3$ matter tracer particles; and Set B, which comprises fewer realizations, $96$, but at higher resolution, $N_p = 1024^3$. Apart from providing an independent estimate of the covariance, the diagonal components of the covariance estimated from Set B are used to correct the power spectrum, as well as the covariance measured from Set A, for mass resolution effects (see Ref.~\cite{blot2015} for details). In this paper, we show the covariance matrices of Ref.~\cite{blot2015} estimated from the spectra of Set B and from the spectra of Set A after this correction is applied. Note that the diagonal elements of the covariance of the two sets thus agree by definition, and that the applied correction cancels out when considering the correlation coefficient Eq.~(\ref{eq:corrcoeff}). The cosmological parameters of the simulations of Ref.~\cite{blot2015} (cf.~end of Sec.~\ref{sec:intro}) are almost the same as those used in Ref.~\cite{response} to measure the isotropic response coefficients $R_1(k)$ and $R_2(k)$, which is why we choose these data to compare our results against. We note that our formalism holds generically for any quintessence-type cosmology, provided the corresponding power spectrum responses are known.
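Schematically, a correction that matches the diagonal of the Set A covariance to that of Set B while leaving the correlation coefficient of Eq.~(\ref{eq:corrcoeff}) untouched can be realized as a diagonal rescaling. The toy sketch below (with random matrices standing in for the measured covariances) illustrates this invariance; the actual correction of Ref.~\cite{blot2015} is derived at the level of the power spectra, so the rescaling here is only meant to show why the correction drops out of the correlation coefficient.

```python
import numpy as np

def correlation(C):
    """Correlation coefficient r = C_ij / sqrt(C_ii C_jj), cf. Eq. (corrcoeff)."""
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

def match_diagonal(C_A, C_B):
    """Rescale C_A so that its diagonal equals that of C_B."""
    s = np.sqrt(np.diag(C_B) / np.diag(C_A))
    return C_A * np.outer(s, s)

# Toy stand-ins for the Set A / Set B covariance estimates.
rng = np.random.default_rng(1)
C_A = np.cov(rng.standard_normal((200, 6)), rowvar=False)
C_B = np.cov(rng.standard_normal((300, 6)), rowvar=False)

C_A_corr = match_diagonal(C_A, C_B)
assert np.allclose(np.diag(C_A_corr), np.diag(C_B))          # diagonals agree by construction
assert np.allclose(correlation(C_A_corr), correlation(C_A))  # r is unchanged
```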
Other recent estimates of the covariance matrix using simulations include those of Ref.~\cite{li/hu/takada}, who account for the super-sample covariance term, as well as those in Ref.~\cite{2017arXiv170105690K}, which were obtained using over 15000 simulations (see also Refs.~\cite{2006MNRAS.371.1188H, 2009ApJ...700..479T, joachimpen2011, 2011ApJ...734...76S}). Before discussing the detailed comparison, we make some cautionary remarks regarding simulation measurements of the power spectrum covariance. Like any measurement, they in general have statistical and systematic errors. The statistical errors are due to the finite number of realizations, or total volume, of the simulations. On large scales (small wavenumber) the statistical error on the simulation measurements is dominated by the limited number of modes sampled. For a total simulated volume $V_t$, this number is given by $k^3 V_t/(2\pi)^3$, and hence it is smallest (largest statistical error) for modes close to the fundamental mode of the individual boxes $k_\text{fund} = 2\pi/L_\text{box}$ (where $L_\text{box}$ is the box size). On nonlinear scales and thus higher wavenumber, these \emph{sample variance} effects become smaller, but the precise error becomes harder to quantify because of mode coupling that effectively correlates the statistical error of the covariance across different wavenumbers. The systematic errors of N-body simulations include the finite resolution due to the number of particles, the subtraction of particle shot noise, and transients from the initial conditions. The first two contributions are expected to be most significant on the smallest scales. Quantifying the systematic error on the estimated power spectrum covariance without numerical convergence tests is very difficult. Further, by definition, simulation-based estimates of the covariance matrix lack the contribution from modes with $k < k_\text{fund} = 2\pi/L_\text{box}$, which can also be seen as a systematic error.
Note however that, as shown in Ref.~\cite{li/hu/takada}, these can be included at leading order through the super-sample-variance contribution. Strictly, for comparison with these simulations, one should also include a minimum value $p_\text{min} = k_\text{fund}$ in the loop integrals of our calculation. However, for $L_\text{box} \sim 650\,h^{-1}\,{\rm Mpc}$, we have found that this makes an entirely negligible numerical difference. Due to the difficulty of obtaining reliable error estimates on the simulation-based covariance, we will mainly discuss the comparison of our covariance prediction to simulation results in the context of the known deficiencies of the former. As discussed in Secs.~\ref{sec:defnr} and \ref{sec:defhl}, these are non-response-type terms on quasi-linear scales, and higher-loop response terms on fully nonlinear scales. We deliberately avoid quantifying the exact level of agreement between theory and simulations, since it could be misleading given the above mentioned difficulties in estimating the error on the latter. \subsection{Comparison at $z=0$}\label{sec:z0} The color plots in Figure \ref{fig:map} show the correlation coefficient of the angle-averaged matter power spectrum evaluated at two different wavenumbers $k_1, k_2$, which is defined as \begin{eqnarray}\label{eq:corrcoeff} r_{\ell = 0}(k_1, k_2) = \frac{\cov^{\ell = 0}(k_1, k_2)}{\sqrt{\cov^{\ell = 0}(k_1, k_1)\cov^{\ell = 0}(k_2, k_2)}}. \end{eqnarray} $r_\ell(k_1,k_2)$ can take on values between $-1$ and $1$. The upper panels show the contribution from the stitched tree-level non-Gaussian term (cf.~Eq.~(\ref{eq:stitched}); upper left) and response 1-loop result (cf.~Eq.~(\ref{eq:cov1loop_res}); upper right). The lower left panel shows the total prediction for the angle-averaged covariance, which includes the Gaussian diagonal contribution as well. 
In the upper panels and in the lower left panel, we use the total covariance in the denominator of $r_{\ell = 0}(k_1, k_2)$, i.e., the lower left panel is obtained by summing the two upper panels and adding the Gaussian contribution. Figure \ref{fig:slices} shows instead a few representative \emph{slices} at constant $k_2$ of the covariance matrix $\cov^{\ell = 0}(k_1, k_2)$ (not the correlation coefficient). \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{fig_covariance_maps_corr_mono.png} \caption{Correlation coefficient of the angle-averaged matter covariance, $r_{\ell = 0}(k_1, k_2)$ at $z=0$ (cf.~Eq.~(\ref{eq:corrcoeff})). The four panels display the contribution from the stitched tree level and response 1-loop parts (upper panels), as well as their summed result (together with the Gaussian contribution) and the estimates from the simulations of Set A of Ref.~\cite{blot2015} (lower panels), as labeled. In the upper panels, the matrix used in the denominator of Eq.~(\ref{eq:corrcoeff}) is the total covariance matrix, such that the lower left panel is obtained by summing the two upper ones (in addition to the Gaussian contribution).} \label{fig:map} \end{figure} The upper panels of Fig.~\ref{fig:map} and the panels in Fig.~\ref{fig:slices} are pedagogical in that they illustrate the kinematical regimes in which the tree level and the 1-loop terms contribute most. In particular, the tree-level result dominates when at least one of the modes is $\lesssim 0.1\:h\,{\rm Mpc}^{-1}$. On the other hand, when both modes are $\gtrsim 0.1\:h\,{\rm Mpc}^{-1}$, then most of the contribution comes from the 1-loop term (recall that we do not include the non-response-type loop contribution). The various panels of Fig.~\ref{fig:slices} help to visualize the gradual increase in importance of the 1-loop term as $k_2$ becomes larger. 
For instance, in the upper left panel for $k_2 = 0.043\:h\,{\rm Mpc}^{-1}$, the 1-loop contribution is fairly small and almost all of the non-diagonal covariance is captured at tree level (blue line). As $k_2$ increases, however (left to right, top to bottom), the tree-level result becomes progressively smaller at high $k_1$, and is complemented by the growing contribution of the 1-loop term (green line). In light of the relative importance of the tree-level and 1-loop contributions, the sharp discontinuities at high $k$ between the two branches of the stitched tree-level result of Eq.~(\ref{eq:stitched}), as well as the extrapolation of the tree-level response to the case of $k_\text{soft} \gtrsim k_\text{NL}$, do not affect the total covariance in this regime, because the entire stitched tree-level contribution is a small part of the total result. \begin{figure}[t] \vspace{1cm} \centering \includegraphics[width=\textwidth]{fig_covariance_slices_mono_z0_hr_nosims.pdf} \vspace{1cm} \caption{Covariance matrix as a function of $k_1$, for fixed values of $k_2$ (as indicated in the title of each panel) at $z=0$. Each panel shows our stitched tree-level and response 1-loop results, as well as their sum (including also the Gaussian term, visible as the sharp spikes at $k_1=k_2$), as labeled.} \label{fig:slices} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{fig_covariance_slices_mono_z0_hr.pdf} \caption{Covariance matrix as a function of $k_1$, for fixed values of $k_2$ (as indicated in the title of each panel) at $z=0$. Each panel shows the simulation results of Ref.~\cite{blot2015}, as well as the result from our calculation, as labeled. The $k_2$ values are the same as in Fig.~\ref{fig:slices}. The discrepancy between theory and simulations for $k_1 \lesssim 0.03\:h\,{\rm Mpc}^{-1}$ in the lower three panels can be attributed to insufficient volume of the simulations to sample these large modes.
In the labels of the y-axis, $P_{m,i}\equiv P_m(k_i)$, which we evaluate using the {\sc Coyote} emulator. The $^*$ in the label of the simulation Set B indicates that the covariance matrix was smoothed with a Gaussian kernel to reduce the noise and facilitate visualization of the \emph{trends} in the measurements.} \label{fig:slicessims} \end{figure} The lower right panel of Fig.~\ref{fig:map} shows $r_{\ell=0}(k_1, k_2)$ from the simulation Set A of Ref.~\cite{blot2015}. The visual comparison to our prediction does not reveal strong differences in either shape or overall amplitude. A more detailed comparison with simulations is shown in Fig.~\ref{fig:slicessims}, where we show our total prediction along with the simulation Set A and Set B results of Ref.~\cite{blot2015}. Up to the approximation employed in the extrapolation of the anisotropic response coefficients $R_O(k)$ (cf.~Appendix \ref{app:ro}), our calculation is guaranteed to capture the total covariance if the soft mode is sufficiently linear, $k_\text{soft} \ll k_\text{NL} \approx 0.3\:h\,{\rm Mpc}^{-1}$. An interesting application of our calculation in this regime is therefore to test simulation-based estimates of the covariance matrix for systematic errors. Indeed, both simulation sets are in relatively good agreement with our calculation whenever $k_\text{soft} \ll k_\text{NL}$; this includes roughly the whole $k_1$ range in the upper three panels, as well as the low-$k_1$ parts of the lower six panels in Fig.~\ref{fig:slicessims}. The differences between Set A and Set B are likely to be mostly caused by the larger statistical uncertainties in Set B due to the smaller volume covered. Further, the departures seen in the simulations for large-scale modes, $k_1 \lesssim 0.03\:h\,{\rm Mpc}^{-1}$ (noticeable in the lower three panels of Fig.~\ref{fig:slicessims}), are likely to be due to insufficient sampling of these modes by both sets of simulations, as we noted already in the beginning of this section. 
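The insufficient sampling of these large-scale modes can be made quantitative with the mode-counting estimate $k^3 V/(2\pi)^3$ quoted earlier. A rough sketch for the box size of Ref.~\cite{blot2015} (the counts are order-of-magnitude only, since the estimate ignores binning and the precise shell geometry):

```python
import math

L_box = 656.25                    # box side in Mpc/h
V_box = L_box ** 3                # volume of a single realization
k_fund = 2.0 * math.pi / L_box    # fundamental mode in h/Mpc

def n_modes(k, volume):
    """Rough number of modes up to wavenumber k: k^3 V / (2 pi)^3."""
    return k ** 3 * volume / (2.0 * math.pi) ** 3

# Only a handful of modes per box sample scales k <~ 0.03 h/Mpc,
# while thousands are available by k ~ 0.3 h/Mpc.
print(f"k_fund = {k_fund:.4f} h/Mpc")
print(f"modes per box up to k = 0.03 h/Mpc: ~{n_modes(0.03, V_box):.0f}")
print(f"modes per box up to k = 0.3  h/Mpc: ~{n_modes(0.3, V_box):.0f}")
```

With only a few tens of modes per realization below $k = 0.03\:h\,{\rm Mpc}^{-1}$, the covariance estimate on these scales remains noisy even for thousands of realizations.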
The lower six panels of Fig.~\ref{fig:slicessims} correspond to slices with $k_2 > 0.1\:h\,{\rm Mpc}^{-1}$, in which one notes that our calculation falls short of fully describing the simulation measurements for $k_1 \gtrsim 0.1\:h\,{\rm Mpc}^{-1}$. As we have discussed in Secs.~\ref{sec:defnr} and \ref{sec:defhl}, there are two types of terms that are expected to contribute non-negligibly in this regime, and that presumably account for a large fraction of the observed difference between theory and simulations. One corresponds to 1-loop diagrams that cannot be brought into the form of Eq.~(\ref{eq:Cov1loop}) and that can contribute sizeably when $k_1, k_2 \sim 0.1-0.3\:h\,{\rm Mpc}^{-1}$. The inclusion of these terms via a stitching of standard- and response-based expressions should render the whole calculation fully predictive in this regime. In regimes in which $k_{\rm soft}\sim 0.1-0.3\:h\,{\rm Mpc}^{-1}$ and $k_{\rm hard} > k_\text{NL}$ (e.g.~$k_1 = 0.2\:h\,{\rm Mpc}^{-1}$ in the lower right panel of Fig.~\ref{fig:slicessims}), the missing terms can likewise be added, in this case using responses to describe the interactions that involve $k_{\rm hard}$. These terms, however, are not expected to play a major role when $k_1, k_2 > 0.3\:h\,{\rm Mpc}^{-1}$. In this kinematic regime, one expects instead that 2- and higher-loop terms, which are themselves dominated by response-type terms in this regime (cf.~Eq.~(\ref{eq:Cov2loop})), account for the missing contribution. An important requirement for the response approach to remain predictive when $k_1, k_2 > 0.3\:h\,{\rm Mpc}^{-1}$ is, therefore, that higher-loop response-type contributions become progressively smaller. For the time being, we cannot provide a conclusive answer on the exact relative size of higher-loop response terms, and defer this to future work.
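The loop-counting estimate of Sec.~\ref{sec:defhl} makes this requirement concrete. The sketch below tabulates $2[(n+1)!]^{-1}(R_{n+1}/R_2)^2\,\sigma_{p_\text{max}}^{2(n-1)}$ for placeholder values of the response ratios and of $\sigma_{p_\text{max}}$; since the higher-order responses have not yet been derived, the numbers are illustrative only.

```python
from math import factorial

def nloop_over_1loop(n, R_ratio, sigma):
    """[Response n-loop] / [Response 1-loop] estimate:
    2 / (n+1)! * (R_{n+1}/R_2)^2 * sigma^(2(n-1))."""
    return 2.0 / factorial(n + 1) * R_ratio ** 2 * sigma ** (2 * (n - 1))

# Placeholder inputs: unit response ratios and sigma_pmax = 1.
for n in (2, 3, 4):
    print(n, nloop_over_1loop(n, R_ratio=1.0, sigma=1.0))
```

For $\sigma_{p_\text{max}} \lesssim 1$, the factorial suppresses higher loops rapidly provided the response ratios do not grow with $n$; whether they do is precisely the open question raised above.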
It is also instructive to compare predictions for the diagonal of the covariance matrix ($k_1=k_2$), which is shown in Fig.~\ref{fig:diagonal} (solid lines for $z=0$). On large scales, this is dominated by the Gaussian contribution, while the tree-level non-Gaussian contribution is subdominant at all $k$ values. The 1-loop contribution only starts to become important for $k \gtrsim 0.3\:h\,{\rm Mpc}^{-1}$. As a result, for $k \lesssim 0.1\:h\,{\rm Mpc}^{-1}$, there is good agreement between our calculation and the simulations (up to noise), but this is unsurprising because here the result is set by the trivial and well understood Gaussian contribution. For $k \gtrsim 0.3\:h\,{\rm Mpc}^{-1}$, the simulation results at $z=0$ (solid lines) have a higher amplitude than our calculation (more easily discernible in the less noisy Set A), but, as already mentioned above, this is a regime in which higher-loop terms are expected to contribute non-negligibly, and hence potentially reduce the gap between theory and simulations. \begin{figure}[t] \centering \includegraphics[scale=0.55]{fig_covariance_diagonal_mono.pdf} \caption{Diagonal of the covariance matrix at $z=0$ (solid) and $z=2$ (dashed). The upper panel shows the simulation results of Ref.~\cite{blot2015} and the result of our calculation, together with its Gaussian, stitched tree-level and response 1-loop parts, as labeled. The lower panel shows the fractional deviation of the simulation results from our calculation. With the normalization adopted for the $y$-axis, the Gaussian line is independent of redshift.
The spectra in the denominator are the same for all curves and are evaluated using the {\sc Coyote} emulator.} \label{fig:diagonal} \end{figure} \subsection{Comparison at $z=2$}\label{sec:z2} \begin{figure} \centering \includegraphics[width=\textwidth]{fig_issues_with_res_z2.png} \caption{The upper panels show the $z=2$ correlation coefficient estimates from Ref.~\cite{blot2015} using their two sets of simulations: Set A, which comprises 12288 lower resolution $N_p = 256^3$ simulations; and Set B, which is made of $96$ higher resolution $N_p = 1024^3$ simulations, where $N_p$ is the N-body tracer particle number. The lower left panel shows the corresponding result from our model. The lower right panel shows the $z=2$ covariance matrix as a function of $k_1$, for fixed $k_2 = 1\:h\,{\rm Mpc}^{-1}$ ($P_{m,i}\equiv P_m(k_i)$ in the label of the y-axis is evaluated using the {\sc Coyote} emulator). The $^*$ in the label of the simulation Set B indicates that the covariance matrix was smoothed with a Gaussian kernel to reduce the noise and facilitate visualization of the \emph{trends} in the measurements.} \label{fig:issuesz2} \end{figure} Our prediction for the covariance can be straightforwardly applied to other redshifts as well. To do so, one should use the response coefficients $R_O(k)$ at the desired redshift. This involves 1) using the measured isotropic responses from simulations and appropriately adjusting the nonlinear extrapolation of the anisotropic ones (cf.~Appendix \ref{app:ro}); 2) evaluating all spectra at the desired redshift; and 3) using the corresponding value of the nonlinear scale $k_\text{NL}$ in setting $p_\text{max}$, which increases with redshift. The color plots in Fig.~\ref{fig:issuesz2} show the correlation coefficient measured from both simulation sets of Ref.~\cite{blot2015} and that of our calculation at $z=2$, as labeled. The correlation coefficients measured from the two simulation sets are noticeably different at this redshift.
In Ref.~\cite{blot2015}, this is attributed to the fact that the total volume of simulations in Set B (96 realizations) is not sufficient to appropriately sample the covariance. Note, however, that the results from the two simulation sets are in much better agreement at $z=0$, as we have seen above. Interestingly, our prediction agrees markedly better with the result from Set B. This becomes clearer from the lower right panel of Fig.~\ref{fig:issuesz2}, which shows the slice of the covariance matrix at constant $k_2 = 1\:h\,{\rm Mpc}^{-1}$ (the same as the lower right panel of Fig.~\ref{fig:slices}, but for $z=2$). Our calculation underpredicts the results of both simulation sets, but the level of disagreement between our model and Set A is significantly larger than that with Set B. In fact, at $z=2$ the performance of our calculation in reproducing the results from Set B is comparable to its performance in reproducing the results from both Set A and Set B at $z=0$. The lower volume of the higher-resolution simulation Set B unfortunately prevents us from drawing robust conclusions on the significance of its better agreement (compared to Set A) with our theoretical prediction. The picture changes if one focuses only on the diagonal of the covariance matrix at $z=2$, which is shown by the dashed lines in Fig.~\ref{fig:diagonal}. As mentioned at the beginning of this section, the correction of Set A obtained by matching the diagonal elements to Set B ensures that the two agree on the diagonal, within the noise of the smaller Set B. Both agree well with our prediction. Note that this level of agreement for $k \gtrsim 0.6 \:h\,{\rm Mpc}^{-1}$ depends quite crucially on the contribution from the 1-loop term.
\section{Angular dependence of the matter power spectrum covariance}\label{sec:angles} \begin{figure} \centering \includegraphics[width=\textwidth]{fig_angledepcomp_paper.pdf} \caption{The left panel shows the diagonal of the multipoles $\ell=0, 2, 4$ of the total covariance matrix given by our prediction, as labeled. The dashed line indicates the Gaussian contribution to the covariance. The middle and right panels show the same three multipoles as a function of $k_1$, for two fixed values of $k_2$, as labeled. All results shown correspond to $z=0$, $V = [656.25\ h^{-1} {\rm Mpc}]^3$ and $P_{m,i}\equiv P_m(k_i)$.} \label{fig:angle} \end{figure} We now go beyond the case of the angle-averaged non-Gaussian covariance and analyze its angular dependence, which can be organized into Legendre multipoles according to Eq.~(\ref{eq:angleaver}). For the response-based contributions, the well defined analytical dependence on $\mu_{12}$ allows all multipoles to be evaluated analytically (cf.~Appendix \ref{app:analy}). The angular dependence of the standard perturbation theory contributions (the tree-level one in our case) is more cumbersome, and we perform the angle-averages numerically. Figure \ref{fig:angle} displays a few predictions at $z=0$ for the quadrupole ($\ell=2$) and hexadecapole ($\ell=4$), as well as the monopole ($\ell=0$) case studied more extensively in the previous section. The left panel shows the diagonal of these three multipoles. One notes a hierarchy among these diagonal terms, with the monopole being the largest and the hexadecapole the smallest. The dashed black line corresponds to the Gaussian result of Eq.~(\ref{eq:gauss_mono}); all multipoles of the Gaussian contribution have this form. On scales $k\lesssim 0.3\:h\,{\rm Mpc}^{-1}$, all multipoles match the Gaussian result because on these scales the fractional size of the non-Gaussian contribution to the diagonal is negligible.
The middle and right panels of Fig.~\ref{fig:angle} display {\it slices} at constant $k_2$ of the multipoles, which show that the hierarchy displayed along the diagonal does not necessarily hold for off-diagonal terms of the covariance. We note also that in the middle panel, the transition between the squeezed and standard tree-level results for $\ell=2$ and $\ell=4$ exhibits a much sharper discontinuity than in the $\ell=0$ case. This suggests that our choice of $f_\text{sq} = 0.5$ may have to be revisited in more careful investigations of the angular dependence of the covariance using our stitched results. We note also that the higher multipoles of the covariance $(\ell > 0)$ depend to a much higher degree on the anisotropic response coefficients $R_O(k)$, for which we currently only have extrapolations based on the physical reasoning described in Ref.~\cite{paper1}. The higher multipoles of the covariance matrix are much less studied in the literature than the monopole, since most investigations only consider the covariance of the angle-averaged matter power spectrum. One exception is Ref.~\cite{joachimpen2011}, which measures the angular dependence of the covariance using N-body simulations (see also Ref.~\cite{1999ApJ...527....1S} for an earlier perturbation theory calculation). Here, we do not attempt detailed comparisons against these simulation results, given the lack of simulation-calibrated anisotropic response coefficients for their cosmology, and defer that to future work. Nevertheless, we point out that the prediction depicted in the left panel of Fig.~\ref{fig:angle} is in agreement with the hierarchy of the diagonal covariance elements shown in Fig.~10 of Ref.~\cite{joachimpen2011}.
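When no analytic multipole expression is available (as for the standard tree-level terms, which we angle-average numerically), the Legendre projection can be carried out with Gauss-Legendre quadrature. A sketch, assuming the convention $\cov^{\ell} = \frac{2\ell+1}{2}\int_{-1}^{1}{\rm d}\mu_{12}\,\cov(k_1,k_2,\mu_{12})\,\mathcal{P}_\ell(\mu_{12})$ (the precise normalization of Eq.~(\ref{eq:angleaver}) may differ):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def multipole(cov_of_mu, ell, n_quad=16):
    """Project cov(mu12) onto Legendre multipole ell via Gauss-Legendre quadrature."""
    nodes, weights = leggauss(n_quad)
    P_ell = legval(nodes, [0.0] * ell + [1.0])   # P_ell at the quadrature nodes
    vals = np.array([cov_of_mu(mu) for mu in nodes])
    return 0.5 * (2 * ell + 1) * np.sum(weights * vals * P_ell)

# Sanity check on a function with known multipoles: 1 + P_2(mu).
f = lambda mu: 1.0 + 0.5 * (3.0 * mu ** 2 - 1.0)
```

For the test function above, the $\ell=0$ and $\ell=2$ projections both equal 1 while the $\ell=4$ projection vanishes, by Legendre orthogonality; the quadrature is exact here since the integrands are low-order polynomials.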
\section{Summary and Discussion}\label{sec:summ} We have described a calculation of the matter power spectrum covariance $\cov(\v{k}_1, \v{k}_2)$ based on perturbation theory augmented with specific resummed interaction vertices (the responses), which is applicable in all regimes of structure formation. More specifically, we describe the non-Gaussian part of the matter power spectrum covariance which is equivalent to the parallelogram configuration of the matter trispectrum, $T_m(\v{k}_1, -\v{k}_1, \v{k}_2, -\v{k}_2)$ (cf.~Eq.~(\ref{eq:covdef})). There are two other important contributions to the total matter covariance, namely the Gaussian diagonal term and the (also non-Gaussian) super-sample contribution, but these are both well understood. Our calculation is built upon the work of Ref.~\cite{paper1}, in which the authors have illustrated how the calculation of certain mode-coupling interactions in perturbation theory can be made accurate beyond the perturbative regime with the aid of power spectrum responses. The $n$-th order power spectrum responses $\mathcal{R}_n$ describe the coupling of $n$ long-wavelength modes with the local nonlinear matter power spectrum (cf.~Eq.~(\ref{eq:Rndef})), and these responses can be measured accurately with separate universe simulations. The crucial and novel steps of our calculation consist essentially in the identification of the mode-coupling terms in the non-Gaussian covariance that can be described as power spectrum responses (or more technically, that can be resummed to all orders in perturbation theory using responses), thereby enabling efficient and accurate evaluation of these terms in kinematical regimes in which standard perturbation theory breaks down. The well-defined angular structure of the $\mathcal{R}_n$ also permits us to straightforwardly determine the angular dependence of the covariance matrix (cf.~Sec.~\ref{sec:angles}). 
Although the formalism presented here still requires response measurements from simulations as ingredients, we stress that the number of simulations required for these measurements is {\it orders of magnitude} smaller than the number needed for the direct, fully simulation-based estimation of the power spectrum covariance. In this paper, we have worked explicitly at tree- and 1-loop-levels in the standard perturbation theory expansion of the non-Gaussian covariance (cf.~Eq.~(\ref{eq:covngexp})). At tree level (cf.~Sec.~\ref{sec:NGcovtree}), we have presented a way to stitch together response-based terms with terms from standard perturbation theory. At the 1-loop level (cf.~Sec.~\ref{sec:NGcov1loop}), we have seen that a response approach is particularly useful because a significant part of the contribution comes from the coupling of soft loop momenta to hard external momenta, $p \ll k_1, k_2$, which are precisely the interactions that power spectrum responses are able to capture. We have also pointed out, however, that our response-based 1-loop calculation still leaves important contributions to the covariance uncovered; these can, however, be added after some additional development (cf.~Secs.~\ref{sec:defnr} and \ref{sec:defhl}). In order to organize the discussion about which parts of the covariance are already captured by our description, and which require additional work, we can divide the parameter space of $\cov^{\rm NG, \ell=0}(k_1,k_2)$ into five kinematic regimes, as illustrated in Figure \ref{fig:reg}. This division naturally arises when distinguishing three regimes of wavenumber for $k_1$ and $k_2$: the linear regime ($k_i \ll k_\text{NL}$), the quasilinear regime ($k_i \lesssim k_\text{NL}$), and the fully nonlinear regime ($k_i \gtrsim k_\text{NL}$). The right panels in \reffig{reg} show $\cov^{\rm NG, \ell=0}(k_1,k_2)$ as a function of $k_1$, while keeping $k_2/k_1$ constant, i.e., a fixed level of \emph{squeezing}.
We now briefly discuss each of these regimes, denoting as throughout $k_\text{soft} \equiv \min\{k_1,k_2\}$ and $k_\text{hard} \equiv \max\{k_1,k_2\}$. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{fig_regimes_v2.png} \caption{Summary of the various kinematic regimes of the structure of the matter covariance matrix. The color plot shows the non-Gaussian covariance (angle-averaged and at $z=0$) obtained by summing the stitched tree and response 1-loop results, as described in this paper. The regions bounded by the solid lines cover roughly the five kinematic regimes discussed in the text, as labeled (cf.~Sec.~\ref{sec:summ}). The right panels correspond to the two slices of constant $k_2/k_1 = 0.65$ (upper right) and $k_2/k_1 = 0.25$ (lower right) depicted by the dotted lines in the color plot, with the regimes identified as well, as labeled. As discussed in the text, the covariance matrix shown correctly describes regimes \emph{I} and \emph{II}. It also accounts for a majority of the contribution in the other regimes (taking the simulation results of Ref.~\cite{blot2015} to guide the eye). However, there are still important known contributions that can be added to further increase the accuracy of the calculation in regimes \emph{III}, \emph{IV} and \emph{V}.} \label{fig:reg} \end{figure} \bigskip \hspace{0.2 cm} $\bullet$ \underline{\emph{I}: $k_1 \ll k_\text{NL},\ k_2 \ll k_\text{NL}$.} In this regime, with both modes in the linear regime, the covariance is captured completely by the standard tree-level result (cf.~Eq.~(\ref{eq:covtree})). 
\hspace{0.2 cm} $\bullet$ \underline{\emph{II}: $k_\text{soft} \ll k_\text{NL},\ k_\text{soft} \lesssim f_\text{sq} k_\text{hard}$, for any $k_\text{hard}$.} In this squeezed regime, with the soft mode being in the linear regime, $\cov^{\rm NG, \ell=0}(k_1, k_2)$ is exactly captured by the second-order power-spectrum response $\mathcal{R}_2$ (cf.~Eqs.~(\ref{eq:resptreecov})), up to corrections that scale as $\mathcal{O}((k_\text{soft}/k_\text{hard})^2)$, $\mathcal{O}((k_\text{soft}/k_\text{NL})^2)$. Note that the result remains valid for any value of the hard momenta, including $k_\text{hard} > k_\text{NL}$. \hspace{0.2 cm} $\bullet$ \underline{\emph{III}: $k_\text{soft} \lesssim k_\text{NL},\ k_\text{soft} \lesssim f_\text{sq} k_\text{hard}$, for any $k_\text{hard}$.} This regime is still squeezed, but with quasilinear values of $k_\text{soft}$. The response tree-level result that fully determines regime \emph{II} still contributes, but now loop terms are no longer negligible and become increasingly important with increasing $k_\text{soft}$. While in this paper, we have included only the single response-type contribution that is present for generic configurations (cf.~\refeq{Cov1loop}), we expect that all loop contributions in the squeezed regime can be captured by responses, in the sense that non-response-type contributions are suppressed by $(k_\text{soft}/k_\text{hard})^2$. \hspace{0.2 cm} $\bullet$ \underline{\emph{IV}: $k_\text{soft} \sim k_\text{hard} \lesssim k_\text{NL}$.} In this non-squeezed, quasi-linear regime, the standard tree level term still contributes non-negligibly and the 1-loop contributions are important. In this regime, we expect the non-response-type 1-loop terms to be relevant as well. Higher-order loop terms also become increasingly relevant as $k_1$ and $k_2$ approach $k_\text{NL}$. 
\hspace{0.2 cm} $\bullet$ \underline{\emph{V}: $k_\text{soft} \sim k_\text{hard} \gtrsim k_\text{NL}$.} In this regime, the tree-level and non-response-type 1-loop contributions are negligible and the result is dominated by response-type loop terms. In this paper, we have worked explicitly at 1-loop order, but higher-loop terms (cf.~Eq.~(\ref{eq:Cov2loop}) for two-loop) are expected to be significant as well. Crucially, we note that if higher loop contributions are not progressively suppressed, then the approach presented here will not be predictive in this regime. \bigskip The discussion points above motivate two immediate steps that can be taken to improve our prediction for $\cov^{\rm NG}$. One is the inclusion of 1-loop terms that cannot be described with power spectrum responses and which are expected to be important in regime \emph{IV}. These terms have already been derived and calculated in Ref.~\cite{2016PhRvD..93l3505B}. One can therefore include them in our model by following a ``stitching'' recipe similar to that employed at tree level in this paper. The other improvement is the inclusion of higher-loop response-type contributions, which is expected to result in relevant contributions to regimes \emph{III}, \emph{IV} and \emph{V}. This calculation is also crucial to establish the theoretical consistency of the approach presented here: our prediction on fully nonlinear scales is only robust if the higher-loop contributions can be shown to be progressively suppressed compared to the leading 1-loop contribution derived here. This could happen if the relevant angle-averages of higher-order responses are suppressed. In regimes \emph{I-II}, on the other hand, the current calculation already captures the total leading contribution to the covariance. An interesting consequence of this is that comparisons to our calculation in these regimes can therefore serve as useful validation checks of simulation-based estimates of the covariance.
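The five regimes above can be summarized in a small classifier. The sketch below is purely illustrative: $f_\text{sq}=0.5$ matches the stitching parameter used in the paper, but the numerical factor modeling ``$\ll$'' and the priority given to regime \emph{I} over \emph{II} where they overlap are our own hypothetical choices:

```python
def covariance_regime(k1, k2, k_nl, f_sq=0.5, lin_factor=0.1):
    """Classify (k1, k2) into the five kinematic regimes I-V of the
    non-Gaussian covariance.  'k << k_NL' is modeled as
    k < lin_factor * k_nl (an arbitrary illustrative cut);
    f_sq is the squeezing threshold."""
    k_soft, k_hard = min(k1, k2), max(k1, k2)
    linear = lambda k: k < lin_factor * k_nl       # k << k_NL
    quasilinear = lambda k: k < k_nl               # k <~ k_NL
    squeezed = k_soft <= f_sq * k_hard
    if linear(k1) and linear(k2):
        return "I"    # both modes linear: standard tree level suffices
    if squeezed and linear(k_soft):
        return "II"   # squeezed, soft mode linear: R_2 response, any k_hard
    if squeezed and quasilinear(k_soft):
        return "III"  # squeezed, quasilinear soft mode: loops matter
    if quasilinear(k_hard):
        return "IV"   # non-squeezed, quasilinear: all 1-loop terms matter
    return "V"        # non-squeezed, nonlinear: response-type loops dominate
```

With a hypothetical $k_\text{NL}=0.3$, for instance, `covariance_regime(0.01, 1.0, 0.3)` falls in the squeezed-linear regime \emph{II}.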
\bigskip In summary, we have paved the way towards the development of a physically motivated framework that enables an efficient calculation of the matter power spectrum covariance without any adjustable free parameters, and which is valid deep into the nonlinear regime of structure formation where standard perturbative schemes break down. Our calculation can also be generalized to describe matter correlations at different redshift values (by making use of unequal-time power spectrum responses \cite{paper1}), which is useful for tomographic cosmic shear analyses. Compared to standard ways to estimate the covariance matrix with N-body simulations, an approach combining simulations with analytical results such as the one put forward here has the enormous advantage of requiring far fewer computational resources. A straightforward consequence of this is that robust and systematic studies of the cosmology dependence of the covariance can be performed through this approach. The same can be said about the impact of baryonic effects on the covariance matrix. These constitute pieces of information that are very relevant for upcoming observational surveys such as Euclid \cite{2011arXiv1110.3193L} and LSST \cite{2012arXiv1211.0310L}, whose statistical precision will be at a level sufficient to make these systematic effects on the covariance a pressing concern. \begin{acknowledgments} We thank Linda Blot for providing the numerical measurements of the power spectrum covariance, and Jean-Michel Alimi, Linda Blot, Pier-Stefano Corasaniti, Joachim Harnois-D\'eraps, Wayne Hu, Irshad Mohammed, Yann Rasera, Vincent Reverdy, Uro$\check{\rm s}$ Seljak and Zvonimir Vlah for useful discussions. FS acknowledges support from the Marie Curie Career Integration Grant (FP7-PEOPLE-2013-CIG) ``FundPhysicsAndLSS,'' and Starting Grant (ERC-2015-STG 678652) ``GrInflaGal'' from the European Research Council. \end{acknowledgments}
\section{Introduction} Biological membranes are permeable barriers which separate cells from their exterior, and consist of various molecules such as proteins embedded within a lipid bilayer structure. They are of particular mathematical interest since they can exhibit a variety of shape transition behaviour such as bud formation or vesicle fission and fusion \cite{mcmahon2005membrane}. Following the pioneering works of Canham and Helfrich \cite{canham1970minimum,helfrich1973elastic}, the established continuum model treats the biomembrane as an infinitesimally thin deformable surface. The associated elastic bending energy (the so-called Canham--Helfrich energy), which also accounts for surface tension, is given by \begin{equation} \label{eqn-helfrich} \mathcal{E}(\Gamma):=\int_{\Gamma}\left(\frac{1}{2}\kappa(H-\spon)^2+\sigma+\gbrK\right){\rm d}\Gamma. \end{equation} Here $\Gamma=\partial\Omega$ is a two-dimensional hypersurface in $\mathbb{R}^3$ modelling the biomembrane and is given by the boundary of an open, bounded, connected set $\Omega\subset \mathbb{R}^3$. The parameters $\kappa>0$ and $\kappa_G>0$ are bending rigidities, $\spon$ is called the spontaneous curvature, which is a measure of stress within the membrane for the flat configuration, $H$ is the mean curvature, $K$ is the Gauss curvature and $\sigma\geq 0$ is the surface tension. Biomembranes consisting of multiple differing lipid types can undergo phase separation, forming a disordered phase where the lipid molecules can diffuse more freely and an ordered phase where the lipid molecules are more tightly packed together. A connected field of study with large academic interest (for example see \cite{bassereau2018physics}) involves the nature of membrane rafts, more commonly referred to as lipid rafts, which were first introduced in \cite{simons1997functional}.
These are small (10-200nm), relatively ordered domains which are enriched with cholesterol and sphingolipids and are understood to compartmentalise cellular processes such as signal transduction and protein sorting, and are important for other mechanisms such as host-pathogen interactions \cite{pike2006rafts}. Since the size of these rafts is beyond the diffraction limit, direct microscopic observation has not been possible. Experimental results have been limited to observations on larger artificial membranes whose composition lacks the complexity of biomembranes, or using alternative microscopy techniques such as fluorescence microscopy which alters the composition of the membrane. In both cases the \emph{in vivo} inferences drawn are questionable and the field has remained controversial \cite{sezgin2017mystery}. Since the dynamics and processes governing the formation and maintenance of lipid rafts are not well understood, a number of explanations have been offered. One suggestion is that raft formation is driven by cholesterol pinning, and a model for this was recently proposed by Garcke et al. \cite{AbeKam19,garcke2016coupled}. In this paper we consider whether the membrane geometry is a sufficient mechanism driving the formation of these rafts via protein interactions. Experimental results on artificial membranes have shown there exists a correlation between the composition of the different phases and the local membrane curvature \cite{baumgart2003imaging,rinaldin2018geometric,parthasarathy2006curvature,hess2007shape}. Proteins are both able to sense whether the local environment matches their curvature preference, as well as in large enough numbers induce that curvature upon the membrane \cite{callan2013curvature}. Since proteins have a preference for raft type regions, we consider here whether phase dependent material parameters offer a possible explanation for the domain formation observed.
To that end we introduce an order parameter $\phi$, and consider the energy \begin{equation} \label{eqn-helfrich+tension} \mathcal{E}(\Gamma):=\int_{\Gamma}\left(\frac{1}{2}\kappa(\phi)(H-\spon(\phi))^2+\sigma+\kappa_G(\phi)K+\frac{b\epsilon}{2}|\nabla_\Gamma\phi|^2+\frac{b}{\epsilon}W(\phi)\right){\rm d}\Gamma. \end{equation} The energy \eqref{eqn-helfrich+tension} is a modified version of \eqref{eqn-helfrich} where we have included a Ginzburg-Landau energy functional with coefficient $b>0$, to incorporate the line tension between the two phases, as well as making explicit the dependence of the bending rigidities and spontaneous curvature on the phase field. Here $W(\phi)$ is a smooth double well potential, with the local minimisers corresponding to the values $\phi$ takes in the respective phases, and $\epsilon>0$ is a small parameter commensurate with the width of the interface. An energy of this type was first proposed by Leibler \cite{leibler1986curvature}. In that case, the only material property taken to be dependent on the phase field was the spontaneous curvature, which was assumed to take the form \begin{equation} \label{eqn-leibler} \spon(\phi)=\Lambda\phi, \end{equation} where $\Lambda\in\mathbb{R}$ is the curvature coefficient. An energy of the general form given in \eqref{eqn-helfrich+tension} was considered in \cite{elliott2010modeling,elliott2010surface,elliott2013computation} from computational and formal asymptotics perspectives. The associated variational problem is highly nonlinear and leads to a free boundary on a free boundary. Other models have been suggested, such as in \cite{healey2017symmetry,wang2008modelling}. Here we utilise a recent perturbation approach for approximately spherical biomembranes introduced in \cite{elliott2016small}, in order to simplify \eqref{eqn-helfrich+tension}. This approach for flat domains using the Monge gauge approximation was considered in \cite{elliott2016variational}.
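For orientation, the integrand of the phase-field energy can be evaluated pointwise. The sketch below assumes constant rigidities, the Leibler form $\spon(\phi)=\Lambda\phi$ and the quartic double well $W(\phi)=\frac{1}{4}(\phi^2-1)^2$; all parameter values are illustrative and not taken from the paper:

```python
def helfrich_energy_density(H, K, phi, grad_phi_sq,
                            kappa=1.0, kappa_G=0.5, sigma=0.1,
                            Lambda=1.0, b=1.0, eps=0.1):
    """Pointwise integrand of the phase-field Helfrich energy with
    constant rigidities and spontaneous curvature Lambda*phi:
      kappa/2 (H - Lambda*phi)^2 + sigma + kappa_G*K
      + b*eps/2 |grad phi|^2 + b/eps * W(phi),
    using the quartic double well W(phi) = (phi^2 - 1)^2 / 4.
    All default parameter values are illustrative."""
    W = 0.25 * (phi**2 - 1.0)**2
    return (0.5 * kappa * (H - Lambda * phi)**2 + sigma + kappa_G * K
            + 0.5 * b * eps * grad_phi_sq + (b / eps) * W)

# A uniform phase phi = 1 on the unit sphere: H = 2, K = 1, grad phi = 0
# and W(1) = 0, so only the bending, tension and Gauss terms survive.
e = helfrich_energy_density(H=2.0, K=1.0, phi=1.0, grad_phi_sq=0.0)
```

With the default values the surviving terms are $\frac{1}{2}(2-1)^2 + 0.1 + 0.5$, illustrating how the double well and gradient penalties switch off in a pure phase.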
The result is a variational problem on a fixed spherical domain. In order to apply the above mentioned perturbation approach we make the following additional assumptions on \eqref{eqn-helfrich+tension}: the only material parameter that depends on the phase field is the spontaneous curvature, which we take to have the form given in \eqref{eqn-leibler}; we rescale the coefficient $\Lambda$ and replace it by $\rho\Lambda$, and rescale $b$ and replace it by $\rho^2b$; the volume of $\Gamma$ is fixed, as well as the integral of the phase field. The justification for these assumptions is as follows: a spontaneous curvature of this type corresponds to the simple assumption that the proteins induce a curvature proportional to their area concentration; the $\rho$ scaling of the spontaneous curvature induces order $\rho$ deformations of the surface, and the $\rho^2$ scaling is motivated since the line tension for lipid rafts has been calculated to depend quadratically on the spontaneous curvature \cite{kuzmin2005line}; the volume constraint corresponds to the impermeability of the membrane, and the order parameter constraint corresponds to a conservation of mass law on the embedded membrane proteins. After making these assumptions we obtain the following energy from \eqref{eqn-helfrich+tension} \begin{equation} \label{eqn-helfrich+tension+assumptions} \mathcal{E}(\Gamma):=\int_{\Gamma}\left(\frac{1}{2}\kappa(H-\rho\Lambda\phi)^2+\sigma+\gbrK+\frac{\rho^2b\epsilon}{2}|\nabla_\Gamma\phi|^2+\frac{\rho^2b}{\epsilon}W(\phi)\right){\rm d}\Gamma, \end{equation} subject to a volume constraint and mean value constraint. We remark that in the case that $\Gamma$ is a closed hypersurface (without boundary), then the Gauss-Bonnet Theorem gives that \begin{equation} \int_{\Gamma}K=2\pi\chi(\Gamma), \end{equation} where $\chi(\Gamma)$ is the Euler characteristic of $\Gamma$.
So, in the case that the material parameter $\kappa_G$ is independent of the phase field, the Gauss curvature term can be dropped from \eqref{eqn-helfrich+tension+assumptions}. The rest of the paper is set out as follows. In Section 2 we briefly cover the notation and some preliminaries on surface calculus. In Section 3 we give the details of the perturbation approach alluded to above and derive an energy that approximates \eqref{eqn-helfrich+tension}. In Section 4 we prove that within a suitable space minimisers of this approximate energy exist. In Section 5 we consider a gradient flow and prove existence and uniqueness results for these equations, before finally in Section 6 we conduct some numerical experiments. \section{Notation and preliminaries} Within this section we state the basic definitions and results for a two dimensional $C^2-$hypersurface $\Gamma$ which will be used throughout this paper. For a thorough treatment of the material covered here we refer the reader to \cite{dziuk2013finite}. Given a point $x\in\Gamma$, with unit normal $\nu$, an open subset $U$ of $\mathbb{R}^{3}$ containing $x$, and a function $u\in C^1(U)$ we define the tangential gradient of $u$, $\nabla_\Gamma u$, by \begin{align} \nabla_\Gamma u=\nabla u-(\nabla u\cdot \nu)\nu, \end{align} and denote its components by \begin{align} \nabla_\Gamma u=(\underline{D}_1u,\underline{D}_2u,\underline{D}_3u). \end{align} We can also define the Laplace-Beltrami operator of $u$ at $x$ by \begin{align} \Delta_\Gamma u(x)=\sum_{i=1}^{3}\underline{D}_i\underline{D}_iu(x). \end{align} Denoting the tangent space of $\Gamma$ at $x$ by $\T$, we define the Weingarten map $\mathcal{H}:\T\to\T$ by $\mathcal{H}:=\nabla_\Gamma \nu$ with eigenvalues given by the principal curvatures $\kappa_1$ and $\kappa_2$. The mean curvature is then given by \begin{align} H:=\Tr(\mathcal{H})=\kappa_1+\kappa_2, \end{align} which differs from the more common definition by a factor of 2.
The Gauss curvature is then given by \begin{align} K:=\det(\mathcal{H})=\kappa_1\kappa_2. \end{align} We can consider the extended Weingarten map $\mathcal{H}:\mathbb{R}^3\to\T$ by defining $\mathcal{H}$ to have zero eigenvalue in the normal direction. For $p\in[1,\infty)$ we define $L^p(\Gamma)$ to be the space of functions $u:\Gamma\to\mathbb{R}$ which are measurable with respect to the surface measure $\mathrm{d}\Gamma$ and have finite norm \begin{align} \|u\|_{L^p(\Gamma)}=\left(\int_{\Gamma}^{}|u|^p \:\mathrm{d}\Gamma\right)^\frac{1}{p}. \end{align} We say a function $u\in L^1(\Gamma)$ has the weak derivative $v_i=\underline{D}_iu$, if for every function $\phi\in C^1(\Gamma)$ with compact support $\overline{\{x\in\Gamma:\phi(x)\neq 0\}}\subset\Gamma$ we have the relation \begin{align} \int_{\Gamma}^{}u\underline{D}_i\phi \:\mathrm{d}\Gamma=-\int_{\Gamma}^{}\phi v_i \:\mathrm{d}\Gamma+\int_{\Gamma}^{}u\phi H\nu_i\:\mathrm{d}\Gamma. \end{align} We define the Hilbert spaces $H^1(\Gamma)$ and $H^2(\Gamma)$ by \begin{align} H^1(\Gamma):&=\left\{f\in L^2(\Gamma):f\text{ has weak derivatives }\underline{D}_if\in L^2(\Gamma), i\in\{1,2,3\}\right\}, \\ H^2(\Gamma):&=\left\{f\in H^1(\Gamma):f\text{ has weak derivatives }\underline{D}_i\underline{D}_jf\in L^2(\Gamma), i,j\in\{1,2,3\}\right\}, \end{align} with inner products given by \begin{align} (u,v)_{H^1(\Gamma)}&=\int_{\Gamma}\left(\nabla_\Gamma u\cdot\nabla_\Gamma v+uv\right) \:\mathrm{d}\Gamma, \\ (u,v)_{H^2(\Gamma)}&=\int_{\Gamma}\left(\Delta_\Gamma u\Delta_\Gamma v+\nabla_\Gamma u\cdot\nabla_\Gamma v+uv\right) \:\mathrm{d}\Gamma. \end{align} We comment that the inner products given above are not the standard ones used, but the induced norms are equivalent to the usual norms in the case $\Gamma$ is a closed surface, see \cite{dziuk2013finite}.
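As a concrete check of the definitions above, the Weingarten map of the sphere $S(0,R)$ can be evaluated numerically: with $\nu(x)=x/|x|$, one recovers $H=\kappa_1+\kappa_2=2/R$, principal curvatures $\kappa_1=\kappa_2=1/R$, and the zero eigenvalue of the extended map in the normal direction. A minimal sketch (not part of the paper):

```python
import numpy as np

def numerical_weingarten(x, h=1e-6):
    """Central finite-difference Jacobian of the outward unit normal
    nu(y) = y/|y| of a sphere, i.e. the extended Weingarten map at x."""
    nu = lambda y: y / np.linalg.norm(y)
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (nu(x + e) - nu(x - e)) / (2 * h)
    return J

R = 2.0
x = R * np.array([1.0, 2.0, 2.0]) / 3.0   # a point on the sphere |x| = R
S = numerical_weingarten(x)
H = np.trace(S)                            # mean curvature kappa_1 + kappa_2 = 2/R
evals = np.sort(np.linalg.eigvalsh(0.5 * (S + S.T)))
# eigenvalues: 0 (normal direction) and the principal curvatures 1/R, 1/R
```

Analytically the Jacobian of $x/|x|$ is $(I-\nu\nu^T)/|x|$, so the trace is $2/R$, matching the convention (without the factor $\frac{1}{2}$) adopted in the text.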
Integration by parts is then given by \begin{theorem} \label{By Parts} Let $\Gamma$ be a bounded $C^2-$hypersurface (without boundary) and suppose $u\in H^1(\Gamma)$ and $v\in H^2(\Gamma)$. Then \begin{align} \int_{\Gamma}\nabla_\Gamma u\cdot\nabla_\Gamma v \:\:{\rm d}\Gamma=-\int_{\Gamma}u\Delta_\Gamma v\:\: {\rm d}\Gamma. \end{align} \end{theorem} Finally, given a family of evolving hypersurfaces $(\Gamma(t))_{t\in[0,T]}$ and a velocity $v:\mathcal{G}\to\mathbb{R}^3$, where $\mathcal{G}=\cup_{t\in[0,T]}(\Gamma(t)\times\{t\})$, we consider $(x_0,t_0)\in\mathcal{G}$ and denote by $\gamma:(t_0-\delta,t_0+\delta)\to\mathbb{R}^3$ the unique solution to the initial value problem \begin{align} \gamma^\prime(t)=v(\gamma(t),t),\quad\gamma(t_0)=x_0. \end{align} Then for a function $f:\mathcal{G}\to\mathbb{R}$ we define the material time derivative by \begin{align} \partial^\bullet_t f(x_0,t_0):=\left.\frac{d}{dt}f(\gamma(t),t)\right|_{t=t_0}. \end{align} The transport theorem is then given by \begin{theorem}[Transport Theorem] \label{Transport} Let $\Gamma(t)$ be an evolving surface with velocity field $v$, and let $f$ be a function for which all of the following quantities exist. Then \begin{align} \frac{d}{dt}\int_{\Gamma(t)}f\:{\rm d}\Gamma(t)=\int_{\Gamma(t)}\left(\partial^\bullet_t f+f\nabla_\Gamma\cdot v\right)\:{\rm d}\Gamma(t). \end{align} \end{theorem} \section{Derivation of Model} In this section we apply the perturbation approach detailed in \cite{elliott2016small} in order to obtain an energy approximating \eqref{eqn-helfrich+tension}. We first consider the Lagrangian \begin{align} \label{eqn-Lagrangian} \mathcal{L}(\Gamma,\lambda):=\kappa\mathcal{W}(\Gamma)+\sigma\mathcal{A}(\Gamma)+\lambda(\mathcal{V}(\Gamma)-V_0), \end{align} where \begin{align} \mathcal{W}(\Gamma)&=\int_{\Gamma}\frac{1}{2}H^2\:{\rm d}\Gamma,& \mathcal{A}(\Gamma)&=\int_{\Gamma}1\:{\rm d}\Gamma,& \mathcal{V}(\Gamma)&=\int_{\Gamma}\frac{1}{3}\text{Id}_\Gamma\cdot\nu\:{\rm d}\Gamma.
\end{align} Here $\mathcal{W}$ denotes the Willmore energy, $\mathcal{A}$ the area functional and $\mathcal{V}$ the enclosed volume. Since the Willmore energy is scale invariant and the area is not, the volume is constrained using a Lagrange multiplier $\lambda$ and with fixed volume $V_0$. In addition $\mathcal{A}(\Gamma)$ and $\mathcal{V}(\Gamma)$ must satisfy the isoperimetric inequality \begin{equation} \mathcal{A}^3(\Gamma)\geq 36\pi \mathcal{V}^2(\Gamma). \end{equation} In \cite{elliott2016small} it was shown that \eqref{eqn-Lagrangian} has a critical point $(\Gamma_0,\lambda_0)$, where $\Gamma_0=S(0,R)$, the sphere of radius $R$ centred at the origin and $\lambda_0=-\frac{2\sigma}{R}$. Applying a small forcing term $\rho\mathcal{F}$ we expect a critical point of the perturbed Lagrangian \begin{align} \label{eqn-lagrangian} \mathcal{L}_\rho(\Gamma,\phi,\lambda,\mu):=&\kappa\mathcal{W}(\Gamma)+\sigma\mathcal{A}(\Gamma)+\lambda(\mathcal{V}(\Gamma)-V_0) +\rho\mathcal{F}(\Gamma,\phi,\mu). \end{align} to be of the form $(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)$ where $\Gamma_\rho$ and $\lambda_\rho$ are perturbations given by \begin{align} \Gamma_\rho&=\{p+\rho(u\nu_0)(p):p\in\Gamma_0\},\\ \lambda_\rho&=\lambda_0+\rho\lambda_1, \label{eqn-deform} \end{align} of the critical point $(\Gamma_0,\lambda_0)$ for the non-perturbed Lagrangian $\mathcal{L}$. Here $\nu_0$ is the unit normal to $\Gamma_0$, $\rho\in\mathbb{R}$ such that $\rho\ll 1$ and $u\in C^2(\Gamma,\mathbb{R})$ is the height function that describes the deformation. 
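The fact that the sphere is a critical point with $\lambda_0=-\frac{2\sigma}{R}$ can be checked directly by restricting $\mathcal{L}$ to spheres of radius $r$: the Willmore energy of a sphere, $\mathcal{W}=8\pi$, is scale invariant, so only the area and volume terms vary. A numerical sketch with illustrative parameter values (taking $\kappa=1$; not from the paper):

```python
import math

def lagrangian_radial(r, sigma=0.3, R=2.0):
    """kappa*W + sigma*A + lambda_0*(V - V_0) restricted to spheres of
    radius r, with kappa = 1, lambda_0 = -2*sigma/R and V_0 the volume
    of the sphere of radius R (illustrative values sigma = 0.3, R = 2)."""
    lam0 = -2.0 * sigma / R
    V0 = 4.0 * math.pi * R**3 / 3.0
    W = 8.0 * math.pi                  # (1/2)(2/r)^2 * 4*pi*r^2: scale invariant
    A = 4.0 * math.pi * r**2
    V = 4.0 * math.pi * r**3 / 3.0
    return W + sigma * A + lam0 * (V - V0)

# dL/dr = 8*pi*sigma*R + lambda_0*4*pi*R^2 vanishes at r = R exactly
# when lambda_0 = -2*sigma/R:
R, h = 2.0, 1e-6
dL = (lagrangian_radial(R + h) - lagrangian_radial(R - h)) / (2.0 * h)

# The sphere also saturates the isoperimetric inequality A^3 >= 36*pi*V^2:
gap = (4.0 * math.pi * R**2)**3 - 36.0 * math.pi * (4.0 * math.pi * R**3 / 3.0)**2
```

Both quantities vanish up to floating-point and finite-difference error, consistent with $(\Gamma_0,\lambda_0)$ being a critical point and with equality in the isoperimetric inequality for the sphere.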
Since $(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)$ is a critical point, it follows that \begin{align} \label{Lagrangian2} \begin{cases} \left.\frac{d}{ds}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho+s\zeta)\right|_{s=0}=0&\qquad\forall\zeta\in\mathbb{R}, \\ \left.\frac{d}{ds}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho+s\xi,\mu_\rho)\right|_{s=0}=0&\qquad\forall\xi\in\mathbb{R}, \\ \left.\frac{d}{ds}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho+s\eta,\lambda_\rho,\mu_\rho)\right|_{s=0}=0&\qquad\forall\eta\in C^1(\Gamma_\rho), \\ \left.\frac{d}{ds}\mathcal{L}_\rho(\Gamma_\rho^s,\phi_\rho,\lambda_\rho,\mu_\rho)\right|_{s=0}=0&\qquad\forall g\in C^2(\Gamma_\rho), \end{cases} \end{align} where $\Gamma_\rho^s:=\left\{x+sg\nu_\rho(x):x\in\Gamma_\rho\right\}$ and $\nu_\rho$ is the unit normal to $\Gamma_\rho$. We apply this perturbation method for the case that the forcing term $\mathcal{F}$ is given by \begin{align} \mathcal{F}(\Gamma,\phi,\mu)=\mathcal{F}_1(\Gamma,\phi)+\rho\mathcal{F}_2(\Gamma,\phi)+\mu(\mathcal{C}(\Gamma,\phi)-\alpha), \end{align} where \begin{align} \mathcal{F}_1(\Gamma,\phi)&:=-\int_{\Gamma}H\phi \:{\rm d}\Gamma,& \mathcal{F}_2(\Gamma,\phi)&:=\int_{\Gamma}b\left(\frac{\epsilon}{2}|\nabla_\Gamma\phi|^2+\frac{1}{\epsilon} W(\phi)+\frac{\kappa\Lambda^2\phi^2}{2b}\right)\:{\rm d}\Gamma, \end{align} are two forcing terms obtained from \eqref{eqn-helfrich+tension+assumptions} and $\mu$ is a Lagrange multiplier for the mean value constraint $\mathcal{C}(\Gamma,\phi)=\alpha$, where \begin{align} \mathcal{C}(\Gamma,\phi):=\Xint-_{\Gamma}\phi \:{\rm d}\Gamma. \end{align} Since we are interested in performing a Taylor approximation of \eqref{eqn-lagrangian}, we need to calculate the first and second variations of some of the energy functionals above. We remark that in our case when determining the second variation it is sufficient to find the first variation of the first variation, although in general this need not be the case, see Remark 3.2 in \cite{elliott2016small}.
We first state the following results, proofs of which can be found in the appendix of \cite{elliott2016small}. \begin{align} \mathcal{W}^\prime(\Gamma_0)[u\nu_0]&=0, & \mathcal{W}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0]&=\int_{\Gamma_0}\left((\Delta_{\Gamma_0}u)^2-\frac{2}{R^2}|\nabla_{\Gamma_0} u|^2\right)\:{\rm d}\Gamma_0, \\ \mathcal{V}^\prime(\Gamma_0)[u\nu_0]&=\int_{\Gamma_0} u\:{\rm d}\Gamma_0,& \mathcal{V}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0]&=\int_{\Gamma_0} H_0u^2\:{\rm d}\Gamma_0, \\ \mathcal{A}^\prime(\Gamma_0)[u\nu_0]&=\int_{\Gamma_0} H_0u\:{\rm d}\Gamma_0,& \mathcal{A}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0]&=\int_{\Gamma_0}\left( |\nabla_{\Gamma_0}u|^2+\frac{2u^2}{R^2}\right)\:{\rm d}\Gamma_0, \end{align} where we have denoted the mean curvature on $\Gamma_0$ and $\Gamma_\rho$ by $H_0$ and $H_\rho$ respectively. Similarly we will denote the extended Weingarten map on $\Gamma_0$ and $\Gamma_\rho$ by $\mathcal{H}_0$ and $\mathcal{H}_\rho$. For ease of notation we will also write $\tau_0=\left.\tau_\rho\right|_{\rho=0}$ and $\tau_1=\left.\partial^\bullet_\rho\tau_\rho\right|_{\rho=0}$ where $\tau$ is a placeholder for $\phi$ and $\mu$. It will be sufficient for our purposes to additionally only calculate the first variation of $\mathcal{F}(\Gamma,\phi,\mu)$, \begin{align} \begin{split} \mathcal{F}^\prime(\Gamma_0,\phi_0,\mu_0)[u\nu,\phi_1,\mu_1]=&\mathcal{F}_1^\prime(\Gamma_0,\phi_0)[u\nu,\phi_1]+\mathcal{F}_2(\Gamma_0,\phi_0) \\ &+\mu_1\left(\mathcal{C}(\Gamma_0,\phi_0)-\alpha\right)+\mu_0\mathcal{C}^\prime(\Gamma_0,\phi_0)[u\nu,\phi_1], \end{split} \end{align} which amounts to calculating the first variation of $\mathcal{F}_1(\Gamma,\phi)$ and $\mathcal{C}(\Gamma,\phi)$. 
By applying Theorem \ref{Transport} and using that $\partial^\bullet_\rho H_\rho=-\Delta_{\Gamma_\rho} u-|\mathcal{H}_\rho|^2 u$ (see Corollary A.1 in \cite{elliott2016small}) we obtain \begin{align} \begin{split} \left.\frac{d}{d\rho}\mathcal{F}_1(\Gamma_\rho,\phi_\rho)\right|_{\rho=0}&=-\left.\int_{\Gamma_\rho}\partial^\bullet_\rho(H_\rho\phi_\rho)+H_\rho\phi_\rho\nabla_{\Gamma_\rho}\cdot(u\nu_\rho)\:{\rm d}\Gamma_\rho\right|_{\rho=0} \\ &=\int_{\Gamma_0}\phi_0 \Delta_{\Gamma_0}u+\phi_0|\mathcal{H}_0|^2u-H_0\phi_1- H_0^2\phi_0u\:{\rm d}\Gamma_0, \end{split} \end{align} and hence using that $|\mathcal{H}_0|^2=\frac{2}{R^2}$ and $H_0=\frac{2}{R}$ gives that, \begin{align} \mathcal{F}_1^\prime(\Gamma_0,\phi_0)[u\nu_0,\phi_1]=\int_{\Gamma_0}\phi_0\left(\Delta_{\Gamma_0}u-\frac{2u}{R^2}\right)-\frac{2\phi_1}{R}\:{\rm d}\Gamma_0. \end{align} Similarly we obtain \begin{align} \begin{split} \mathcal{C}^\prime(\Gamma_0,\phi_0)[u\nu_0,\phi_1]&=\left.\Xint-_{\Gamma_0}\phi_1+\phi_0\nabla_\Gamma\cdot(u\nu_0)\:{\rm d}\Gamma_0\right.-\frac{\left.\frac{d}{d\rho}\int_{\Gamma_\rho}1\:{\rm d}\Gamma_\rho\right|_{\rho=0}}{\int_{\Gamma_0}1\:{\rm d}\Gamma_0}\Xint-_{\Gamma_0}\phi_0\:{\rm d}\Gamma_0 \\ &=\Xint-_{\Gamma_0}\left(\phi_1+\frac{2\phi_0u}{R}\right)\:{\rm d}\Gamma_0-\frac{2}{R}\left(\Xint-_{\Gamma_0}u\:{\rm d}\Gamma_0\right)\left(\Xint-_{\Gamma_0}\phi_0\:{\rm d}\Gamma_0\right). \end{split} \end{align} We can determine $\mu_0$ explicitly since from \eqref{Lagrangian2} we have that \begin{align} \frac{d}{ds}\rho\mathcal{F}(\Gamma_\rho,\phi_\rho+s\eta,\mu_\rho)=0, \end{align} and therefore \begin{align} \mathcal{F}_1(\Gamma_0,\eta)+\mu_0\mathcal{C}(\Gamma_0,\eta)=0, \end{align} from which we obtain that $\mu_0=\frac{2|\Gamma_0|}{R}$. 
It therefore follows that \begin{align} \nonumber \mathcal{F}^\prime(\Gamma_0,\phi_0,\mu_0)[u\nu,\phi_1,\mu_1]=&\int_{\Gamma_0}\left[\phi_0\Delta_{\Gamma_0}u+\frac{2\phi_0u}{R^2}+\frac{b\epsilon}{2}|\nabla_{\Gamma_0}\phi_0|^2+\frac{b}{\epsilon}W(\phi_0)+\frac{\kappa\Lambda^2\phi_0^2}{2}\right]\:{\rm d}\Gamma_0, \end{align} where above we have also used the linearised Lagrange multiplier constraints \begin{align} \int_{\Gamma_0}u\:{\rm d}\Gamma_0&=0,&\Xint-_{\Gamma_0}\phi_0\:{\rm d}\Gamma_0&=\alpha, \end{align} which are obtained from \eqref{Lagrangian2}. We can now prove the following result. \begin{theorem} \label{Taylor} With the assumptions given above, it follows that \begin{align} \mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)=C_1+\rho C_2+\rho^2\mathcal{E}(\phi_0,u)+\mathcal{O}(\rho^3), \end{align} where \begin{align} \label{eqn-peturb-energy} \begin{split} \mathcal{E}(\phi_0,u):=&\int_{\Gamma_0}\frac{\kappa}{2}(\Delta_{\Gamma_0}u)^2+\frac{1}{2}\left(\sigma-\frac{2\kappa}{R^2}\right)|\nabla_{\Gamma_0}u|^2-\frac{\sigma u^2}{R^2} \\ &\quad+\kappa\Lambda\phi_0\Delta_{\Gamma_0}u+\frac{2\kappa\Lambda u\phi_0}{R^2}+\frac{b\epsilon}{2}|\nabla_{\Gamma_0}\phi_0|^2+ \frac{b}{\epsilon}W(\phi_0)+\frac{\kappa\Lambda^2\phi_0^2}{2}\:{\rm d}\Gamma_0 \end{split} \end{align} for $C_1$ and $C_2$ constant. \end{theorem} \begin{proof} We wish to apply Taylor's Theorem so that we can obtain a good approximation to the perturbed Lagrangian $\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)$. Performing a second order Taylor expansion in $\rho$ we obtain that \begin{align} \begin{split} \mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)=&\mathcal{L}_0(\Gamma_0,\phi_0,\lambda_0,\mu_0)+\rho\left.\frac{d}{d\rho}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)\right|_{\rho=0} \\ &+\frac{\rho^2}{2}\left.\frac{d^2}{d\rho^2}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)\right|_{\rho=0}+\mathcal{O}(\rho^3).
\end{split} \end{align} We first observe that $\mathcal{L}_0(\Gamma_0,\phi_0,\lambda_0,\mu_0)=\kappa\mathcal{W}(\Gamma_0)+\sigma\mathcal{A}(\Gamma_0)$. For the second term we use that $(\Gamma_0,\lambda_0)$ is a critical point of $\mathcal{L}$ and obtain that \begin{align} \left.\frac{d}{d\rho}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)\right|_{\rho=0}&=\kappa\Lambda\mathcal{F}_1(\Gamma_0,\phi_0)=-\frac{2\kappa\Lambda}{R}\int_{\Gamma_0}\phi_0\:{\rm d}\Gamma_0=-8\kappa\Lambda\pi R\alpha. \end{align} We therefore see that the second order term is the lowest order term which depends on any of the variables. It remains to determine the form of this second order term. To do this we write \begin{align} \begin{split} \left.\frac{d^2}{d\rho^2}\mathcal{L}_\rho(\Gamma_\rho,\phi_\rho,\lambda_\rho,\mu_\rho)\right|_{\rho=0}=&\kappa\mathcal{W}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0]+\sigma\mathcal{A}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0]+\lambda_0\mathcal{V}^{\prime\prime}(\Gamma_0)[u\nu_0,u\nu_0] \\ &+2\lambda_1\mathcal{V}^\prime(\Gamma_0)[u\nu_0]+2\mathcal{F}^\prime(\Gamma_0,\phi_0,\mu_0)[u\nu_0,\phi_1,\mu_1] \\ =&2\mathcal{E}(\phi_0,u), \end{split} \end{align} as required. \end{proof} We note that formally taking $R\to\infty$ in \eqref{eqn-peturb-energy} we obtain the approximation given in \cite{leibler1986curvature} and more recently considered in \cite{fonseca2016domain} for a flat domain. It is this energy which we will study in the remainder of the paper. For ease of notation from now on we will denote $\Gamma_0$ by $\Gamma$ and $\phi_0$ by $\phi$. 
\section{Energy minimisers} We will restrict ourselves to considering the energy $\mathcal{E}(\cdot,\cdot):\mathcal{K}\to \mathbb{R}$ given in \eqref{eqn-peturb-energy} for a $W:\mathbb{R}\to\mathbb{R}$ that satisfies the following properties: \begin{enumerate} \item $W(\cdot)\in C^1(\mathbb{R},\mathbb{R})$, \item There exists $c_0\in \mathbb{R}^+$ such that $(W^\prime(r)-W^\prime(s))(r-s)\geq -c_0|r-s|^2$ $\forall r,s\in \mathbb{R}$, \item There exist $c_1, c_2\in \mathbb{R}^+$ such that $c_1r^4-c_2\leq W(r)$, $\forall r\in\mathbb{R}$, \item There exist $c_3, c_4\in \mathbb{R}^+$ such that $W^\prime(r)\leq c_3W(r)+c_4$, $\forall r\in\mathbb{R}$, \item There exists $c_5\in\mathbb{R}^+$ such that $W^\prime(r)r\geq-c_5r^2$, $\forall r\in\mathbb{R}$, \end{enumerate} and for $\mathcal{K}$ given by \begin{equation} \mathcal{K}:=\left\{(\phi,u)\in H^1(\Gamma)\times H^2(\Gamma):\Xint-_{\Gamma}\phi\:{\rm d}\Gamma=\alpha \text{ and }u\in \text{span}\{1,\nu_1,\nu_2,\nu_3\}^\perp\right\}, \end{equation} where the $\nu_i$ are the components of the normal $\nu$ of $\Gamma$ and orthogonality is understood in the $H^2(\Gamma)$ sense, although in this case it is equivalent to orthogonality in the $L^2(\Gamma)$ sense. We motivate this choice of $\mathcal{K}$ as follows. The regularity required makes a subspace of $H^1(\Gamma)\times H^2(\Gamma)$ the natural choice. The condition $\int_{\Gamma} u \:{\rm d}\Gamma=0$ is a linearised volume constraint which corresponds to membrane impermeability; $\Xint-_\Gamma \phi \:{\rm d}\Gamma=\alpha$ is a linearised conservation of mass constraint on the membrane particles; and $\int_{\Gamma}u\nu_i\:{\rm d}\Gamma=0$ for $i\in\{1,2,3\}$ are linearised translation invariance constraints on the membrane. Mathematically, these translation invariances arise since $\{\nu_1,\nu_2,\nu_3\}$ lie in the nullspace of $\mathcal{E}(\phi,\cdot)$. We first address the question of existence.
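The geometric facts underlying this choice of $\mathcal{K}$ — that $1,\nu_1,\nu_2,\nu_3$ are mutually orthogonal in $L^2(\Gamma)$, with $\int_\Gamma\nu_i\:{\rm d}\Gamma=0$ and $\int_\Gamma\nu_i\nu_j\:{\rm d}\Gamma=\frac{4\pi R^2}{3}\delta_{ij}$ — can be confirmed by a short quadrature sketch on the round sphere. The radius, grid sizes and composite trapezoidal rule below are our own illustrative choices, not part of the model.

```python
import numpy as np

R = 2.0                       # sphere radius (arbitrary test value)
nth, nph = 400, 400
theta = np.linspace(0.0, np.pi, nth)
phi_ang = np.linspace(0.0, 2.0 * np.pi, nph)
TH, PH = np.meshgrid(theta, phi_ang, indexing="ij")

# outward unit normal nu = x / R on the round sphere of radius R
nu = np.stack([np.sin(TH) * np.cos(PH),
               np.sin(TH) * np.sin(PH),
               np.cos(TH)])

def trap_w(x):
    """Composite trapezoidal weights on a uniform grid."""
    w = np.full(x.size, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return w

wt, wp = trap_w(theta), trap_w(phi_ang)
dGamma = R**2 * np.sin(TH) * wt[:, None] * wp[None, :]   # surface measure

def surf_int(f):
    return float(np.sum(f * dGamma))

# int_Gamma nu_i dGamma = 0 and int_Gamma nu_i nu_j dGamma = (4 pi R^2/3) delta_ij
means = np.array([surf_int(nu[i]) for i in range(3)])
gram = np.array([[surf_int(nu[i] * nu[j]) for j in range(3)] for i in range(3)])
assert np.allclose(means, 0.0, atol=1e-8)
assert np.allclose(gram, (4.0 * np.pi * R**2 / 3.0) * np.eye(3), atol=1e-2)
```

The same quadrature reproduces the total area $4\pi R^2$, which is a quick sanity check on the surface measure.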
\begin{Proposition} \label{Prop-Direct} There exists $(\phi^*,u^*)\in\mathcal{K}$ such that \begin{displaymath} \mathcal{E}(\phi^*,u^*)=\inf_{(\phi,u)\in\mathcal{K}}\mathcal{E}(\phi,u). \end{displaymath} \end{Proposition} \begin{proof} We have that $H^1(\Gamma)\times H^2(\Gamma)$ is a Hilbert space, so it is reflexive, and since $\mathcal{K}$ is a sequentially weakly closed subset of $H^1(\Gamma)\times H^2(\Gamma)$, existence of a minimiser will follow from the direct method (see Theorem 9.3-1 in \cite{ciarlet2013linear}) provided $\mathcal{E}(\cdot,\cdot):\mathcal{K}\to\mathbb{R}$ is coercive and sequentially weakly lower semicontinuous. We note the Poincar\'e-type inequality given by \begin{equation} \label{eqn-poincare} \int_{\Gamma}^{}u^2\:{\rm d}\Gamma\leq\frac{R^2}{6}\int_{\Gamma}^{}|\nabla_\Gamma u|^2\:{\rm d}\Gamma\leq\frac{R^4}{36}\int_{\Gamma}^{}(\Delta_\Gamma u)^2\:{\rm d}\Gamma, \end{equation} which holds for all $u\in\text{span}\{1,\nu_1,\nu_2,\nu_3\}^\perp$ (see \cite{elliott2016small}). Using this, Young's inequality and property (3) of $W(\cdot)$ it follows that there exist $C_1$, $C_2$ and $C_3\in\mathbb{R}^+$ such that \begin{equation} \mathcal{E}(\phi,u)\geq C_1\|u\|^2_{H^2(\Gamma)}+ C_2\|\phi\|^2_{H^1(\Gamma)}-C_3. \end{equation} Hence $\mathcal{E}(\cdot,\cdot):\mathcal{K}\to\mathbb{R}$ is coercive. To prove that $\mathcal{E}(\cdot,\cdot):\mathcal{K}\to\mathbb{R}$ is sequentially weakly lower semicontinuous we first note that the quadratic terms in $u$ form a bounded, symmetric bilinear form which is positive definite on the constrained space by \eqref{eqn-poincare}, and hence they are weakly lower semicontinuous. A similar argument can be applied for the $|\nabla_\Gamma\phi|^2$ term. The remaining terms are also weakly lower semicontinuous by an application of a Rellich--Kondrachov-type compactness embedding theorem \cite{aubin1982nonlinear}. This completes the proof.
\end{proof} \subsection{Euler-Lagrange equations} \label{332} Knowing that minimisers of \eqref{eqn-peturb-energy} exist, we want to say something about their structure. We therefore compute the Euler--Lagrange equations associated with the energy functional $\mathcal{E}(\cdot,\cdot)$, first over the space $\mathcal{K}$, and secondly over the full space $H^1(\Gamma)\times H^2(\Gamma)$ by introducing the constraints as Lagrange multipliers. By applying Euler's Theorem (see Theorem 7.1-5 in \cite{ciarlet2013linear}) it follows that a critical point (and hence a minimiser $(\phi^*,u^*)$ of Proposition \ref{Prop-Direct}) is a solution of the following problem: \begin{Problem} \label{Prob-Euler} Find $(\phi,u)\in \mathcal{K}$ such that \begin{align} \label{EL2} \int_{\Gamma}\frac{b}{\epsilon}W^\prime(\phi)w+b\epsilon\nabla_\Gamma \phi\cdot\nabla_\Gamma w+\kappa\Lambda w\Delta_\Gamma u+\frac{2\kappa\Lambda u w}{R^2}+\kappa\Lambda^2\phi w\:{\rm d}\Gamma&=0, \\ \label{EL1} \int_{\Gamma}\kappa\Delta_\Gamma u\Delta_\Gamma v+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma v-\frac{2\sigma}{R^2}u v+\kappa\Lambda\phi\Delta_\Gamma v+\frac{2\kappa\Lambda\phi v}{R^2}\:{\rm d}\Gamma&=0, \end{align} for all $w\in W:=\left\{\eta\in H^1(\Gamma):\int_{\Gamma}\eta\:{\rm d}\Gamma=0\right\}$ and for all $v\in V:=\{\eta\in H^2(\Gamma):\eta\in \text{span}\{1,\nu_1,\nu_2,\nu_3\}^\perp\}$. \end{Problem} By defining \begin{align*} \varphi_0&:=\int_{\Gamma}u\:{\rm d}\Gamma, & \varphi_i&:=\int_{\Gamma}\nu_i u\:{\rm d}\Gamma, & \varphi_4&:=\int_{\Gamma}(\phi-\alpha)\:{\rm d}\Gamma, \end{align*} for $i\in\{1,2,3\}$ and observing that their Fr\'echet derivatives exist and are continuous, linear and surjective, it follows from the Euler-Lagrange Theorem (Theorem 7.15-1 in \cite{ciarlet2013linear}) that if $(\phi,u)$ is a solution of Problem \ref{Prob-Euler} then there exists $\lambda\in\mathbb{R}^5$ such that $(\phi,u,\lambda)$ is a solution of the problem given below.
\begin{Problem} Find $(\phi,u,\lambda)\in \mathcal{K}\times \mathbb{R}^5$ such that for all $w\in H^1(\Gamma)$ and for all $v\in H^2(\Gamma)$, \begin{align} \label{EL4} \int_{\Gamma}\left(\frac{b}{\epsilon}W^\prime(\phi)w+b\epsilon\nabla_\Gamma \phi\cdot\nabla_\Gamma w+\frac{2\kappa\Lambda u w}{R^2}+\kappa\Lambda\Delta_\Gamma u w+\kappa\Lambda^2\phi w+\lambda_0w\right)\:{\rm d}\Gamma&=0, \\ \label{EL3} \begin{split} \int_{\Gamma}\left(\kappa\Delta_\Gamma u\Delta_\Gamma v+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma v-\frac{2\sigma}{R^2}u v+\right.\qquad\qquad\qquad\qquad\qquad\: \\ \left.\kappa\Lambda\phi\Delta_\Gamma v+\frac{2\kappa\Lambda\phi v}{R^2}+\sum_{i=1}^{3}\lambda_iv\nu_i+\lambda_4v\right)\:{\rm d}\Gamma&=0. \end{split} \end{align} \end{Problem} By testing with appropriate functions we can determine the values of the Lagrange multipliers $\lambda_i$ for $i\in\{0,1,2,3,4\}$. Testing equation (\ref{EL4}) with $1$ it follows that the Lagrange multiplier $\lambda_0$ is given by \begin{displaymath} \lambda_0=-\kappa\Lambda^2\alpha-\frac{b}{\epsilon}\Xint-_{\Gamma}W^\prime(\phi)\:{\rm d}\Gamma. \end{displaymath} Testing equation (\ref{EL3}) with $\nu_i$, and using the fact that $-\Delta_\Gamma\nu_i=\frac{2}{R^2}\nu_i$ and $\int_{\Gamma}\nu_i\nu_j\:{\rm d}\Gamma=\frac{4\pi R^2}{3}\delta_{ij}$ it follows that \begin{align*} \lambda_i&=0\qquad\text{ for }i=1,2,3. \end{align*} Finally, testing equation (\ref{EL3}) with $1$ it follows that \begin{displaymath} \lambda_4=-\frac{2\kappa\Lambda\alpha}{R^2}.
\end{displaymath} The PDEs corresponding to \eqref{EL4} and \eqref{EL3} are then given by \begin{align} \label{eqn-EL1} \frac{b}{\epsilon}W^\prime(\phi)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\Delta_\Gamma u+\frac{2\kappa\Lambda u}{R^2}+\kappa\Lambda^2\phi+\lambda_0=&0, \\ \label{eqn-EL2} \kappa\Delta_\Gamma^2 u-\left(\sigma-\frac{2\kappa}{R^2}\right)\Delta_\Gamma u-\frac{2\sigma u}{R^2}+\kappa\Lambda\Delta_\Gamma\phi+\frac{2\kappa\Lambda\phi}{R^2}+\lambda_4=&0. \end{align} \subsection{Reduced Order Derivation} The Euler-Lagrange equations given in \eqref{eqn-EL1} and \eqref{eqn-EL2} can be simplified to a system of two second-order equations. We rewrite \eqref{eqn-EL2} as follows \begin{align} \label{eqn-EL2*} \left(\Delta_\Gamma+\frac{2}{R^2}\right)\left(\frac{\sigma}{\kappa}-\Delta_\Gamma \right)u=\Lambda\left(\Delta_\Gamma+\frac{2}{R^2}\right)(\phi-\alpha), \end{align} and note that if \begin{align} \left(\Delta_\Gamma +\frac{2}{R^2}\right)z=0, \end{align} then $z$ is an eigenfunction of $-\Delta_\Gamma$ with eigenvalue $\frac{2}{R^2}$ and hence $z\in\text{span}\{\nu_1,\nu_2,\nu_3\}$. Therefore it follows from \eqref{eqn-EL2*} that there exists some $\beta\in\text{span}\{\nu_1,\nu_2,\nu_3\}$ such that \begin{align} \label{eqn:red-ord-1} \left(\frac{\sigma}{\kappa}-\Delta_\Gamma\right)u=\Lambda(\phi-\alpha)+\beta. \end{align} Now writing $\mathcal{V}=\text{span}\{1,\nu_1,\nu_2,\nu_3\}^\perp$, a simple calculation shows that, since $u\in\mathcal{V}$, we also have $\left(\frac{\sigma}{\kappa}-\Delta_\Gamma\right)u\in\mathcal{V}$. Denoting the projection onto $\mathcal{V}$ by $\mathbf{P}$ and applying this projection to \eqref{eqn:red-ord-1} results in \begin{align} \label{eqn:red-ord-2} \left(\frac{\sigma}{\kappa}-\Delta_\Gamma\right)u=\Lambda\mathbf{P}\phi.
\end{align} This motivates introducing an operator $\mathcal{G}:\mathcal{V}\to \mathcal{V}$ where given $\eta\in\mathcal{V}$, $\mathcal{G}(\eta)$ denotes the unique solution $v\in\mathcal{V}$ of the elliptic equation \begin{align} \left(\frac{\sigma}{\kappa}-\Delta_\Gamma \right)v=\Lambda\eta. \end{align} From this and \eqref{eqn:red-ord-2} it follows that \begin{align} \label{eqn-reducedHeight} u=\mathcal{G}(\mathbf{P}\phi). \end{align} Therefore we can rewrite \eqref{eqn-EL1} as \begin{align} \label{RedOrd PDE} \frac{b}{\epsilon}\left(W^\prime(\phi)-\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma\right)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\left(\Delta_\Gamma +\frac{2}{R^2}\right)\mathcal{G}(\mathbf{P}\phi)+\kappa\Lambda^2(\phi-\alpha)=0, \end{align} or equivalently \begin{align} \label{RedOrd PDE2} \frac{b}{\epsilon}\left(W^\prime(\phi)-\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma\right)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\mathcal{G}\left(\left(\Delta_\Gamma +\frac{2}{R^2}\right)(\phi-\alpha)\right)+\kappa\Lambda^2(\phi-\alpha)=0. \end{align} Using \eqref{eqn-reducedHeight} we can define a new energy $\widetilde{\mathcal{E}}$ given by \begin{align} \widetilde{\mathcal{E}}(\phi):=\mathcal{E}(\phi,\mathcal{G}(\mathbf{P}\phi)), \end{align} which simplifies to \begin{align} \label{eqn-reducedenergy} \widetilde{\mathcal{E}}(\phi)=\int_{\Gamma}\frac{\kappa\Lambda}{2}\mathbf{P}\phi\left(\Delta_\Gamma+\frac{2}{R^2}\right)\mathcal{G}(\mathbf{P}\phi)+\frac{b\epsilon}{2}|\nabla_{\Gamma}\phi|^2+ \frac{b}{\epsilon}W(\phi)+\frac{\kappa\Lambda^2\phi^2}{2}\:{\rm d}\Gamma. \end{align} We note that if $(\phi^*,u^*)$ is a minimiser of $\mathcal{E}$ then $u^*=\mathcal{G}(\mathbf{P}\phi^*)$ since it is also a critical point and must satisfy \eqref{eqn-EL2*}.
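Since $-\Delta_\Gamma$ on the sphere has eigenvalues $\ell(\ell+1)/R^2$, with the excluded modes $\ell=0,1$ spanning exactly $\{1,\nu_1,\nu_2,\nu_3\}$, the operator $\mathcal{G}$ acts diagonally in this eigenbasis. A minimal spectral sketch follows; the parameter values and the mode truncation are illustrative assumptions, not choices made in the text.

```python
import numpy as np

sigma, kappa, Lam, R = 10.0, 1.0, 2.0, 1.0

# On the sphere, -Laplace_Gamma has eigenvalues l(l+1)/R^2; the modes l = 0
# and l = 1 span {1, nu_1, nu_2, nu_3}, so elements of V live on modes l >= 2.
def G(eta_coeffs, ells):
    """Spectral solve of (sigma/kappa - Laplace_Gamma) v = Lam * eta for eta
    in V, represented by coefficients on the retained modes ell >= 2."""
    lam = ells * (ells + 1) / R**2          # eigenvalues of -Laplace_Gamma
    return Lam * eta_coeffs / (sigma / kappa + lam)

ells = np.arange(2, 12)
eta = np.linspace(1.0, 0.1, ells.size)      # some admissible data in V
v = G(eta, ells)

# check the defining equation mode by mode
lam = ells * (ells + 1) / R**2
assert np.allclose((sigma / kappa + lam) * v, Lam * eta)

# G is bounded: |v_l| <= (Lam * kappa / sigma) |eta_l| for every mode
assert np.all(np.abs(v) <= (Lam * kappa / sigma) * np.abs(eta) + 1e-15)
```

The mode-wise bound shows why $\mathcal{G}$ is well defined on $\mathcal{V}$: the symbol $\sigma/\kappa+\ell(\ell+1)/R^2$ is bounded away from zero for $\sigma>0$.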
Suppose further that $\widetilde{\phi}^*$ is a minimiser of $\widetilde{\mathcal{E}}$. It then follows that \begin{align} \label{eqn:red-ord-3} \mathcal{E}(\phi^*,u^*)\leq\mathcal{E}(\widetilde{\phi}^*,\mathcal{G}(\mathbf{P}\widetilde{\phi}^*))=\widetilde{\mathcal{E}}(\widetilde{\phi}^*)\leq\widetilde{\mathcal{E}}(\phi^*)=\mathcal{E}(\phi^*,\mathcal{G}(\mathbf{P}\phi^*))=\mathcal{E}(\phi^*,u^*), \end{align} and hence all the inequalities in \eqref{eqn:red-ord-3} are equalities, so $\phi^*$ is a minimiser of $\widetilde{\mathcal{E}}$ and $(\widetilde{\phi}^*,\mathcal{G}(\mathbf{P}\widetilde{\phi}^*))$ is a minimiser of $\mathcal{E}$. Hence finding minimisers of $\widetilde{\mathcal{E}}$ is equivalent to finding minimisers of $\mathcal{E}$. \section{Gradient Flow} We observe that the first variation of $\mathcal{E}(\cdot,\cdot)$ is given by \begin{align*} \mathcal{E}^\prime(\phi,u)[w,v]=\int_{\Gamma}&\frac{b}{\epsilon}W^\prime(\phi)w+b\epsilon\nabla_\Gamma\phi\cdot\nabla_\Gamma w+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma v+\kappa\Delta_\Gamma u\Delta_\Gamma v \\ &-\frac{2\sigma u v}{R^2}+\kappa\Lambda w\Delta_\Gamma u+\kappa\Lambda\phi\Delta_\Gamma v+\frac{2\kappa\Lambda u w}{R^2}+\frac{2\kappa\Lambda \phi v}{R^2}+\kappa\Lambda^2\phi w\:{\rm d}\Gamma.
\end{align*} We consider the equations \begin{align} \label{GF1} \begin{split} -\alpha_1(\phi_t,w)_{L^2(\Gamma)}&=\int_{\Gamma}\frac{b}{\epsilon}W^\prime(\phi)w+b\epsilon\nabla_\Gamma\phi\cdot\nabla_\Gamma w \\ &\qquad+\kappa\Lambda w\Delta_\Gamma u +\frac{2\kappa\Lambda u w}{R^2}+\kappa\Lambda^2\phi w\:{\rm d}\Gamma, \end{split} \\ \label{GF2} \begin{split} -\alpha_2(u_t,v)_{L^2(\Gamma)}&=\int_{\Gamma}\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma v+\kappa\Delta_\Gamma u\Delta_\Gamma v \\ &\qquad-\frac{2\sigma u v}{R^2}+\kappa\Lambda\phi\Delta_\Gamma v+\frac{2\kappa\Lambda \phi v}{R^2}\:{\rm d}\Gamma, \end{split} \end{align} for all $v\in V$ and for all $w\in W$, which can be seen to give rise to a gradient flow of $\mathcal{E}(\phi,u)$ in $W\times V$ since \begin{align} \frac{d}{dt}\mathcal{E}(\phi,u)=-\alpha_1\|\phi_t\|^2_{L^2(\Gamma)}-\alpha_2\|u_t\|^2_{L^2(\Gamma)}\leq 0. \end{align} By applying the Euler-Lagrange theorem and introducing Lagrange multipliers $\lambda_i$ for $i\in\{0,1,2,3,4\}$, it follows that for all $w\in H^1(\Gamma)$ and for all $v\in H^2(\Gamma)$, \begin{align} \label{EL5} \begin{split} -\alpha_1(\phi_t,w)_{L^2(\Gamma)}&=\int_{\Gamma}\frac{b}{\epsilon}W^\prime(\phi)w+b\epsilon\nabla_\Gamma\phi\cdot\nabla_\Gamma w \\ &\qquad+\kappa\Lambda w\Delta_\Gamma u +\frac{2\kappa\Lambda u w}{R^2}+\kappa\Lambda^2\phi w+\lambda_0w\:{\rm d}\Gamma, \end{split} \\ \label{EL6} \begin{split} -\alpha_2(u_t,v)_{L^2(\Gamma)}&=\int_{\Gamma}\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma v+\kappa\Delta_\Gamma u\Delta_\Gamma v \\ &\qquad-\frac{2\sigma u v}{R^2}+\kappa\Lambda\phi\Delta_\Gamma v+\frac{2\kappa\Lambda \phi v}{R^2}+\sum_{i=1}^{3}\lambda_iv\nu_i+\lambda_4v\:{\rm d}\Gamma, \end{split} \end{align}
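The dissipation identity $\frac{d}{dt}\mathcal{E}=-\alpha_1\|\phi_t\|^2_{L^2(\Gamma)}-\alpha_2\|u_t\|^2_{L^2(\Gamma)}$ has a finite-dimensional caricature: for $\dot{x}=-\frac{1}{\alpha}\nabla E(x)$ one has $\frac{d}{dt}E(x)=-\alpha|\dot{x}|^2\leq 0$. The sketch below discretises such a flow by explicit Euler; the double-well toy energy, the relaxation constant and the step size are all illustrative, not the coupled system itself.

```python
import numpy as np

alpha = 2.0                                  # relaxation parameter (toy value)

def E(x):
    """Double-well toy energy, the finite-dimensional stand-in for E(phi,u)."""
    return 0.25 * np.sum((x**2 - 1.0) ** 2)

def gradE(x):
    return x * (x**2 - 1.0)

x = np.array([0.3, -1.7, 0.9])               # arbitrary initial state
dt = 1e-3
energies = [E(x)]
for _ in range(5000):
    x = x - (dt / alpha) * gradE(x)          # explicit Euler step of the flow
    energies.append(E(x))

# for a small enough step the discrete energy decreases monotonically,
# mirroring d/dt E = -alpha |x'|^2 <= 0
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
```

The same monotonicity is what the *a priori* estimates of the next subsection extract from the continuous flow.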
Testing equation \eqref{EL5} with $1$ and equation \eqref{EL6} with $1,\nu_1,\nu_2$ and $\nu_3$ as in subsection \ref{332}, we observe that the Lagrange multipliers $\lambda_i$ for $i\in\{0,1,2,3,4\}$ are again given by \begin{align} \lambda_0&=-\kappa\Lambda^2\alpha-\frac{b}{\epsilon}\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma,&& \lambda_1=\lambda_2=\lambda_3=0,&& \lambda_4=-\frac{2\kappa\Lambda\alpha}{R^2}. \end{align} Hence, a gradient flow of $\mathcal{E}(\cdot,\cdot)$ in $W\times V$ is given by \begin{align} \label{AC PDE} \begin{cases} \alpha_1\phi_t+\frac{b}{\epsilon}W^\prime(\phi)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\Delta_\Gamma u+\frac{2\kappa\Lambda u}{R^2}+\kappa\Lambda^2\phi+\lambda_0=0&\Gamma\times(0,T),\\ \alpha_2 u_t-\left(\sigma-\frac{2\kappa}{R^2}\right)\Delta_\Gamma u+\kappa\Delta_\Gamma^2 u-\frac{2\sigma u}{R^2}+\kappa\Lambda\Delta_\Gamma\phi+\frac{2\kappa\Lambda \phi}{R^2}+\lambda_4=0&\Gamma\times (0,T),\\ \phi(\cdot,0)=\phi_0(\cdot)&\Gamma\times\{t=0\},\\ u(\cdot,0)=u_0(\cdot)&\Gamma\times\{t=0\}. \end{cases} \end{align} \subsection{Existence} Before turning to consider numerical simulations of \eqref{AC PDE}, we first address questions related to well-posedness. Beginning with existence we will prove the following result. 
\begin{theorem} \label{Existence} Suppose $(\phi_0,u_0)\in\mathcal{K}$. Then there exists a pair $(\phi,u)$, with $(\phi(t),u(t))\in\mathcal{K}$ for almost every $t\in[0,T]$, such that \begin{align*} \phi&\in L^\infty(0,T;H^1(\Gamma))\cap C([0,T];L^2(\Gamma)), \\ u&\in L^\infty(0,T; H^2(\Gamma))\cap C([0,T];L^2(\Gamma)), \\ \phi^\prime&\in L^2(0,T;L^2(\Gamma)), \\ u^\prime&\in L^2(0,T;L^2(\Gamma)), \\ u_0&=u(0), \\ \phi_0&=\phi(0), \end{align*} and satisfying \begin{align} \begin{split} -\int_{0}^{T}\alpha_1\left<\phi^\prime,\eta\right>\:{\rm dt}=\int_0^T\left[\int_{\Gamma}\right.&\frac{b}{\epsilon}\left(W^\prime(\phi)-\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma\right)\eta+b\epsilon\nabla_\Gamma\phi\cdot\nabla_\Gamma\eta \\ &\left.-\kappa\Lambda\nabla_\Gamma u\cdot\nabla_\Gamma\eta+\frac{2\kappa\Lambda u\eta}{R^2}+\kappa\Lambda^2(\phi-\alpha)\eta\:{\rm d}\Gamma \right]\:{\rm dt}, \end{split} \\ \begin{split} -\int_{0}^{T}\alpha_2\left<u^\prime,\xi\right>\:{\rm dt}=\int_0^T\left[\int_{\Gamma}\right.& \kappa\Delta_\Gamma u\Delta_\Gamma\xi+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u\cdot\nabla_\Gamma\xi \\ &\left.-\frac{2\sigma u\xi}{R^2}+\frac{2\kappa\Lambda(\phi-\alpha)\xi}{R^2}-\kappa\Lambda\nabla_\Gamma\phi\cdot\nabla_\Gamma\xi\:{\rm d}\Gamma\right]\:{\rm dt}, \end{split} \end{align} for all $\eta\in L^2(0,T;H^1(\Gamma))$ and for all $\xi\in L^2(0,T;H^2(\Gamma))$. \end{theorem} \subsubsection{Galerkin problem} We prove Theorem \ref{Existence} using a Galerkin method. Using that there exist smooth eigenfunctions $\left\{z_j\right\}$ of the Laplace-Beltrami operator $-\Delta_\Gamma$ which form an orthonormal basis of $H^1(\Gamma)$ and are orthogonal in $L^2(\Gamma)$, we define $V^m$ as \begin{displaymath} V^m:=\text{span}\left\{z_1,z_2,\dots,z_m\right\}, \end{displaymath} and set $\mathcal{P}_m:L^2(\Gamma)\to V^m$ to be the Galerkin projection given by \begin{displaymath} (\mathcal{P}_mv-v,u_m)=0\qquad\forall v\in L^2(\Gamma), u_m\in V^m.
\end{displaymath} $\mathcal{P}_m$ then satisfies the following strong convergence results, \begin{align} \label{eqn-pm1} \mathcal{P}_mv\to& v\text{ in }L^2(\Gamma)\quad\forall v\in L^2(\Gamma),\\ \mathcal{P}_mv\to& v\text{ in }H^1(\Gamma)\quad\forall v\in H^1(\Gamma),\\ \mathcal{P}_mv\to& v\text{ in }H^2(\Gamma)\quad\forall v\in H^2(\Gamma). \label{eqn-pm3} \end{align} Therefore, the Galerkin system we are considering is given by \begin{align} \label{discretephi} \begin{split} -\alpha_1\left<\phi_m^\prime,\eta_m\right>=\int_{\Gamma}&\frac{b}{\epsilon}\left(W^\prime(\phi_m)-\Xint-_\Gamma W^\prime(\phi_m)\right)\eta_m+b\epsilon\nabla_\Gamma\phi_m\cdot\nabla_\Gamma\eta_m \\ &-\kappa\Lambda\nabla_\Gamma u_m\cdot\nabla_\Gamma\eta_m+\frac{2\kappa\Lambda u_m\eta_m}{R^2}+\kappa\Lambda^2(\phi_m-\alpha)\eta_m\:{\rm d}\Gamma, \end{split} \\ \label{discreteh} \begin{split} -\alpha_2\left<u_m^\prime,\xi_m\right>=\int_{\Gamma}& \kappa\Delta_\Gamma u_m\Delta_\Gamma\xi_m+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u_m\cdot\nabla_\Gamma\xi_m \\ &-\frac{2\sigma u_m\xi_m}{R^2}+\frac{2\kappa\Lambda(\phi_m-\alpha)\xi_m}{R^2}-\kappa\Lambda\nabla_\Gamma\phi_m\cdot\nabla_\Gamma\xi_m\:{\rm d}\Gamma, \end{split} \end{align} for all $\eta_m, \xi_m \in V^m$. This system can then be written as an initial value problem for a system of ordinary differential equations with locally Lipschitz right-hand sides, for which there exists a unique solution at least locally in time. We observe that \begin{displaymath} \left<\phi_m^\prime,\eta_m\right>=(\phi_m^\prime,\eta_m)\qquad\text{and}\qquad\left<u_m^\prime,\xi_m\right>=(u_m^\prime,\xi_m). \end{displaymath} Testing \eqref{discretephi} and \eqref{discreteh} with $\eta_m=1$ and $\xi_m=1,\nu_1,\nu_2,\nu_3$, and applying standard ODE results it follows that if $(\phi_m(0),u_m(0))\in\mathcal{K}$ then the solution $(\phi_m(t),u_m(t))\in\mathcal{K}$ for $t\in[0,T]$, where $T$ comes from the local existence result used above.
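The structure of the Galerkin system — an ODE for mode coefficients whose projected right-hand side preserves the constraints — can be illustrated in a one-dimensional periodic caricature: Fourier modes play the role of the eigenfunctions $z_j$, the height coupling is dropped, and the projected mass-conserving Allen--Cahn flow is stepped with explicit Euler. All parameter values are illustrative; conservation of the mean is the analogue of testing with $\eta_m=1$.

```python
import numpy as np

# 1D periodic caricature of the Galerkin system (height coupling dropped)
N = 32
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wave numbers

phi = 0.1 * np.cos(x) + 0.05 * np.sin(2 * x) - 0.5   # initial data, mean -0.5

def rhs(phi):
    """Mass-conserving Allen-Cahn right-hand side, evaluated spectrally;
    subtracting the mean of W'(phi) projects out the constant mode."""
    lap = np.real(np.fft.ifft(-(k**2) * np.fft.fft(phi)))
    wp = phi**3 - phi
    return lap - (wp - wp.mean())

def energy(phi):
    dphi = np.real(np.fft.ifft(1j * k * np.fft.fft(phi)))
    return np.mean(0.5 * dphi**2 + 0.25 * (phi**2 - 1.0) ** 2)

dt, steps = 1e-3, 2000
E0, m0 = energy(phi), phi.mean()
for _ in range(steps):
    phi = phi + dt * rhs(phi)                # explicit Euler on the mode ODEs

assert abs(phi.mean() - m0) < 1e-12          # mean value preserved exactly
assert energy(phi) < E0                      # the discrete flow dissipates
```

In the surface problem the same two assertions correspond to $(\phi_m(t),u_m(t))\in\mathcal{K}$ and to the energy estimates derived next.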
\subsubsection{Energy estimates} In order to pass to the limit and prove existence for the full system, we derive some \emph{a priori} estimates by considering the discrete energy $\mathcal{E}(\phi_m,u_m)$. \begin{theorem} \label{Energy Estimates} Suppose $(\phi_m, u_m)\in\mathcal{K}$ satisfy equations \eqref{discretephi}--\eqref{discreteh}. Then there exists a constant $C$ independent of $m$ such that \begin{align} \label{En Est 1} \|\phi_m\|_{L^\infty(0,T;H^1(\Gamma))}\leq C, \\ \label{En Est 3} \|u_m\|_{L^\infty(0,T;H^2(\Gamma))}\leq C, \\ \label{En Est 4} \|\phi^\prime_m\|_{L^2(0,T;L^2(\Gamma))}\leq C, \\ \label{En Est 5} \|u^\prime_m\|_{L^2(0,T;L^2(\Gamma))}\leq C. \end{align} \end{theorem} \begin{proof} By differentiating the energy functional $\mathcal{E}(\cdot,\cdot)$ with respect to $t$ we obtain \begin{align} \frac{d}{dt}\mathcal{E}(\phi_m,u_m)=-\alpha_1\|\phi_m^\prime\|^2_{L^2(\Gamma)}-\alpha_2\|u_m^\prime\|^2_{L^2(\Gamma)}. \end{align} Integrating and using the coercivity of $\mathcal{E}(\cdot,\cdot)$ proven in Proposition \ref{Prop-Direct} it follows that for all $t\in(0,T)$, \begin{align} \|u_m\|^2_{H^2(\Gamma)}+\|\phi_m\|^2_{H^1(\Gamma)}+\int_{0}^{t}\|\phi^\prime_m\|^2_{L^2(\Gamma)}\:{\rm dt}+\int_{0}^{t}\|u_m^\prime\|^2_{L^2(\Gamma)}\:{\rm dt}\leq C, \end{align} where we have used that $\mathcal{E}(\phi_m(0),u_m(0))\leq C$ for some constant $C$ independent of $m$. It follows that \begin{align} \sup_{t\in(0,T)}\|u_m\|^2_{H^2(\Gamma)}+\sup_{t\in(0,T)}\|\phi_m\|^2_{H^1(\Gamma)}+\int_{0}^{T}\|\phi^\prime_m\|^2_{L^2(\Gamma)}\:{\rm dt}+\int_{0}^{T}\|u_m^\prime\|^2_{L^2(\Gamma)}\:{\rm dt}\leq C, \end{align} which gives the required energy bounds.
\end{proof} \subsubsection{Existence theorem proof} Applying the energy estimates proven in Theorem \ref{Energy Estimates} and considering subsequences as necessary, there exist $\phi^*$ and $u^*$ in the indicated spaces such that the following convergence results hold in the weak sense, \begin{align} \label{eqn-weakprime} \phi_m^\prime\rightharpoonup\left(\phi^*\right)^\prime&\text{ in }L^2(0,T;L^2(\Gamma)),&u_m^\prime\rightharpoonup\left(u^*\right)^\prime&\text{ in }L^2(0,T;L^2(\Gamma)), \\ \phi_m\rightharpoonup \phi^*&\text{ in }L^2(0,T;H^1(\Gamma)),&u_m\rightharpoonup u^*&\text{ in }L^2(0,T;H^2(\Gamma)), \end{align} and applying standard compactness results (the Aubin--Lions Lemma and the Rellich--Kondrachov Theorem) the following convergence results hold in the strong sense, \begin{align} \phi_m\to\phi^*&\text{ in }C([0,T];L^2(\Gamma)),&u_m\to u^*&\text{ in }C([0,T];L^2(\Gamma)), \\ \phi_m\to \phi^*&\text{ in }L^2(0,T;L^p(\Gamma)), \label{eqn-strongkon} \end{align} where $p\geq 1$. Furthermore, since $\phi_m(0)\to\phi_0$ and $u_m(0)\to u_0$ in $L^2(\Gamma)$, it holds that \begin{align} \phi^*(0)&=\phi_0,&u^*(0)&=u_0.
\end{align} Taking $\eta\in L^2(0,T;H^1(\Gamma))$ and $\xi\in L^2(0,T;H^2(\Gamma))$ we have that \begin{align} \begin{split} -\int_{0}^{T}&\alpha_1\left<\phi_m^\prime,\mathcal{P}_m\eta\right>\:{\rm dt}\\ =\int_0^T&\left[\int_{\Gamma}\right.\frac{b}{\epsilon}\left(W^\prime(\phi_m)-\Xint-_\Gamma W^\prime(\phi_m)\:{\rm d}\Gamma\right)\mathcal{P}_m\eta+b\epsilon\nabla_\Gamma\phi_m\cdot\nabla_\Gamma\mathcal{P}_m\eta \\ & \left.\quad-\kappa\Lambda\nabla_\Gamma u_m\cdot\nabla_\Gamma\mathcal{P}_m\eta+\frac{2\kappa\Lambda u_m\mathcal{P}_m\eta}{R^2}+\kappa\Lambda^2(\phi_m-\alpha)\mathcal{P}_m\eta\:{\rm d}\Gamma \right]\:{\rm dt}, \end{split} \end{align} and \begin{align} \begin{split} -\int_{0}^{T}&\alpha_2\left<u_m^\prime,\mathcal{P}_m\xi\right>\:{\rm dt}\\ =\int_0^T&\left[\int_{\Gamma}\right.\kappa\Delta_\Gamma u_m\Delta_\Gamma\mathcal{P}_m\xi+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u_m\cdot\nabla_\Gamma\mathcal{P}_m\xi \\ &\left.\quad-\frac{2\sigma u_m\mathcal{P}_m\xi}{R^2}+\frac{2\kappa\Lambda(\phi_m-\alpha)\mathcal{P}_m\xi}{R^2} -\kappa\Lambda\nabla_\Gamma\phi_m\cdot\nabla_\Gamma\mathcal{P}_m\xi\:{\rm d}\Gamma\right]\:{\rm dt}. \end{split} \end{align} Using the convergence results \eqref{eqn-pm1}-\eqref{eqn-pm3} and \eqref{eqn-weakprime}-\eqref{eqn-strongkon} we can pass to the limit to obtain \begin{align} \begin{split} -\int_{0}^{T}\alpha_1\left<(\phi^*)^\prime,\eta\right>\:{\rm dt}=\int_0^T\left[\int_{\Gamma}\right.&\frac{b}{\epsilon}\left(W^\prime(\phi^*)-\Xint-_\Gamma W^\prime(\phi^*)\:{\rm d}\Gamma\right)\eta+b\epsilon\nabla_\Gamma\phi^*\cdot\nabla_\Gamma\eta \\ &\left.-\kappa\Lambda\nabla_\Gamma u^*\cdot\nabla_\Gamma\eta +\frac{2\kappa\Lambda u^*\eta}{R^2} +\kappa\Lambda^2(\phi^*-\alpha)\eta\:{\rm d}\Gamma\right]\:{\rm dt}, \end{split} \end{align} \begin{align} \begin{split} -\int_{0}^{T}\alpha_2\left<(u^*)^\prime,\xi\right>\:{\rm dt}=\int_0^T\left[\int_{\Gamma}\right.
&\kappa\Delta_\Gamma u^*\Delta_\Gamma\xi+\left(\sigma-\frac{2\kappa}{R^2}\right)\nabla_\Gamma u^*\cdot\nabla_\Gamma\xi \\ &\left.-\frac{2\sigma u^*\xi}{R^2}+\frac{2\kappa\Lambda(\phi^*-\alpha)\xi}{R^2}-\kappa\Lambda\nabla_\Gamma\phi^*\cdot\nabla_\Gamma\xi\:{\rm d}\Gamma\right]\:{\rm dt}, \end{split} \end{align} for all $\eta\in L^2(0,T;H^1(\Gamma))$ and all $\xi\in L^2(0,T;H^2(\Gamma))$. This completes the proof of Theorem \ref{Existence}. \subsection{Uniqueness Theory} \begin{theorem}[Uniqueness] \label{Uniqueness} There exists at most one solution pair $(\phi,u)$ as in Theorem \ref{Existence}. \end{theorem} \begin{proof} Let $(\phi_i,u_i)$, $i=1,2$, be two solution pairs. Set $\theta^\phi=\phi_1-\phi_2$ and $\theta^u=u_1-u_2$. By subtracting the equations, testing with $\eta=\theta^\phi$ and $\xi=\theta^u$ and using that \begin{align} \frac{d}{dt}\|\theta^\phi\|^2_{L^2(\Gamma)}&=2\left<\left(\theta^\phi\right)^\prime,\theta^\phi\right>,&\frac{d}{dt}\|\theta^u\|^2_{L^2(\Gamma)}&=2\left<\left(\theta^u\right)^\prime,\theta^u\right>, \end{align} for a.e. $0\leq t\leq T$ we obtain \begin{align} \begin{split} -\frac{\alpha_1}{2}\frac{d}{dt}\|\theta^\phi\|^2_{L^2(\Gamma)}=&\int_{\Gamma}\frac{b}{\epsilon}\left(W^\prime(\phi_1)-W^\prime(\phi_2)\right)\theta^\phi\:{\rm d}\Gamma +b\epsilon\|\nabla_\Gamma \theta^\phi\|^2_{L^2(\Gamma)} \\ &+\kappa\Lambda^2\|\theta^\phi\|^2_{L^2(\Gamma)}+\int_{\Gamma}\frac{2\Lambda\kappa\theta^u\theta^\phi}{R^2}-\Lambda\kappa\nabla_\Gamma\theta^\phi\cdot\nabla_\Gamma\theta^u\:{\rm d}\Gamma, \end{split} \end{align} \begin{align} \begin{split} -\frac{\alpha_2}{2}\frac{d}{dt}\|\theta^u\|^2_{L^2(\Gamma)}=&\kappa\|\Delta_\Gamma \theta^u\|^2_{L^2(\Gamma)}+\left(\sigma-\frac{2\kappa}{R^2}\right)\|\nabla_\Gamma\theta^u\|^2_{L^2(\Gamma)} \\ &-\frac{2\sigma}{R^2}\|\theta^u\|^2_{L^2(\Gamma)}+\int_{\Gamma}\frac{2\Lambda\kappa\theta^u\theta^\phi}{R^2}-\Lambda\kappa\nabla_\Gamma\theta^\phi\cdot\nabla_\Gamma\theta^u\:{\rm d}\Gamma.
\end{split} \end{align} Using the Poincar\'e-type inequality \eqref{eqn-poincare}, structural property (2) of $W(\cdot)$ and Young's inequality we obtain \begin{equation} \frac{d}{dt}\left(\|\theta^u\|^2_{L^2(\Gamma)}+\|\theta^\phi\|^2_{L^2(\Gamma)}\right)+c_1\|\theta^u\|^2_{H^2(\Gamma)}+c_2\|\theta^\phi\|^2_{L^2(\Gamma)} \leq C\left(\|\theta^u\|^2_{L^2(\Gamma)}+\|\theta^\phi\|^2_{L^2(\Gamma)}\right), \end{equation} where $c_1,c_2$ and $C$ are strictly positive constants. Uniqueness then follows by Gronwall's inequality. \end{proof} \subsection{Gradient Flow for the reduced energy} Returning to consider the reduced energy \eqref{eqn-reducedenergy}, we can likewise obtain the gradient flow equation \begin{align} \label{RedGradFlow} \alpha_1\phi_t+\frac{b}{\epsilon}\left(W^\prime(\phi)-\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma\right)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\left(\Delta_\Gamma +\frac{2}{R^2}\right)\mathcal{G}(\mathbf{P}\phi)+\kappa\Lambda^2(\phi-\alpha)=0, \end{align} satisfying \begin{align} \frac{d}{dt}\widetilde{\mathcal{E}}(\phi)=-\alpha_1\|\phi_t\|^2_{L^2(\Gamma)}\leq 0. \end{align} By defining $u=\mathcal{G}(\mathbf{P}\phi)$ as in \eqref{eqn-reducedHeight} we obtain the system of equations \begin{align} \label{Reduced PDE} \begin{split} \alpha_1\phi_t+\frac{b}{\epsilon}\left(W^\prime(\phi)-\Xint-_\Gamma W^\prime(\phi)\:{\rm d}\Gamma\right)-b\epsilon\Delta_\Gamma\phi+\kappa\Lambda\left(\Delta_\Gamma +\frac{2}{R^2}\right)u+\kappa\Lambda^2(\phi-\alpha)=0,\\ -\Delta_\Gamma u+\frac{\sigma}{\kappa}u=\Lambda\mathbf{P}\phi \end{split} \end{align} which coincides with \eqref{AC PDE} in the case $\alpha_2=0$. In this instance we can again apply a Galerkin approximation and obtain the \emph{a priori} bounds \begin{align} \|\phi_m\|_{L^\infty(0,T;H^1(\Gamma))}\leq C, \\ \|u_m\|_{L^\infty(0,T;H^2(\Gamma))}\leq C, \\ \|\phi^\prime_m\|_{L^2(0,T;L^2(\Gamma))}\leq C.
\end{align} From these estimates existence and uniqueness can be shown analogously to Theorem \ref{Existence} and Theorem \ref{Uniqueness}. The case $\alpha_2=0$ can be physically understood as instantaneous relaxation of the surface energy. \section{Numerical Simulations} In this section we present some numerical results for the long-time behaviour of the system of PDEs given by \eqref{Reduced PDE}. We suppose the double-well potential is given by \begin{align} W(r)=\frac{1}{4}(r^2-1)^2. \end{align} This choice of $W(\cdot)$ satisfies the structural assumptions given earlier. \subsection{Numerical Scheme} We implement an iterative method as follows. Given a solution $\left(\phi^{(n)},u^{(n)}\right)$ at the previous time step we consider a sequence $\{\phi_k,u_k,\lambda_k\}_{k=1}^\infty$ where $(\phi_k,u_k)$ is a solution to \begin{align} \label{secant1} \begin{split} \int_{\Gamma}&\frac{\phi_k-\phi^{(n)}}{\tau}\eta+\frac{b}{\epsilon}W^{\prime\prime}\left(\phi^{(n)}\right)\left(\phi_k-\phi^{(n)}\right)\eta +\frac{b}{\epsilon}W^\prime\left(\phi^{(n)}\right)\eta \\ &+b\epsilon\nabla_\Gamma \phi_k\cdot\nabla_\Gamma\eta-\kappa\Lambda\nabla_\Gamma u_k\cdot\nabla_\Gamma\eta+\frac{2\kappa\Lambda}{R^2}u_k\eta-\lambda_k\eta+\Lambda^2\kappa(\phi_k-\alpha)\eta\:{\rm d}\Gamma=0, \end{split} \end{align} \begin{align} \label{secant2} \begin{split} \int_{\Gamma}\frac{\sigma}{\kappa}u_k\chi+\nabla_\Gamma u_k\cdot\nabla_\Gamma\chi-\Lambda(\phi_k-\alpha)\chi\:{\rm d}\Gamma&=0, \end{split} \end{align} where a linearisation has been used for $W^\prime$. The mean value constraint on the height function is directly enforced by \eqref{secant2} provided $\sigma\neq 0$.
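Two quick numerical sanity checks, both our own illustrations: the quartic well satisfies the structural assumptions listed earlier (the constants $c_0=1$, $c_1=\frac18$, $c_2=\frac14$, $c_5=1$ below are our choices), and the linearisation of $W^\prime$ used in \eqref{secant1} is second-order accurate in the increment.

```python
import numpy as np

def W(r):   return 0.25 * (r**2 - 1.0) ** 2
def Wp(r):  return r**3 - r
def Wpp(r): return 3.0 * r**2 - 1.0

r = np.linspace(-10.0, 10.0, 2001)

# assumption (3): W(r) >= c1 r^4 - c2 with c1 = 1/8, c2 = 1/4
assert np.all(W(r) >= 0.125 * r**4 - 0.25 - 1e-9)

# assumption (2) with c0 = 1, since W'' = 3r^2 - 1 >= -1
s = np.linspace(-10.0, 10.0, 201)
Rg, Sg = np.meshgrid(r[::10], s)
assert np.all((Wp(Rg) - Wp(Sg)) * (Rg - Sg) >= -(Rg - Sg) ** 2 - 1e-9)

# assumption (5) with c5 = 1: W'(r) r = r^4 - r^2 >= -r^2
assert np.all(Wp(r) * r >= -r**2 - 1e-9)

# the linearisation W'(p) ~ W'(q) + W''(q)(p - q) used in the scheme is
# second-order accurate: shrinking the increment by 10 shrinks the error ~100x
q = 0.3                                     # arbitrary linearisation point
hs = np.array([1e-1, 1e-2, 1e-3])
err = np.abs(Wp(q + hs) - (Wp(q) + Wpp(q) * hs))
assert np.all(err[:-1] / err[1:] > 50.0)
```

For this quartic the linearisation error is exactly $3qh^2+h^3$, which explains the observed quadratic decay.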
The mean value constraint on $\phi$ is imposed by the secant method (following \cite{BloEll93-a}), using the sequence $\{\lambda_{k}\}_{k\geq 1}$ which is constructed as follows \begin{displaymath} \lambda_{k+1}=\lambda_k+\frac{(\lambda_k-\lambda_{k-1})\left(\alpha-\Xint-_{\Gamma}\phi_k\:{\rm d}\Gamma\right)}{\left(\Xint-_{\Gamma}\phi_k\:{\rm d}\Gamma-\Xint-_{\Gamma}\phi_{k-1}\:{\rm d}\Gamma\right)}, \end{displaymath} with $\lambda_1=-\frac{b}{\epsilon}$ and $\lambda_2=\frac{b}{\epsilon}$. We stop the iteration when $|\lambda_{k+1}-\lambda_k|<\mathrm{tol}$ and set $\phi^{(n+1)}=\phi_{k+1}$ and $u^{(n+1)}=\mathbf{P}u_{k+1}$. We note that it is not necessary to consider $\mathbf{P}u_k$ in order to obtain $\phi_k$ since $\left(\Delta_\Gamma+\frac{2}{R^2}\right)\mathbf{P}u_k=\left(\Delta_\Gamma+\frac{2}{R^2}\right)u_k$. DUNE software was used to implement a surface finite element method. Specifically we used a Python module (cf.\ \cite{dedner2018dune}) which implemented a GMRES method with ILU preconditioning to solve the system of linear equations \eqref{secant1}--\eqref{secant2}. For the secant iteration we set $\mathrm{tol}=10^{-8}$ and for the GMRES iteration we set the residual tolerance and absolute tolerance both to $10^{-10}$. For the case $\sigma=0$ we additionally used a nullspace method from PETSc \cite{dalcin2011parallel,petsc-user-ref,petsc-efficient}. Unless stated otherwise, we used a base grid containing 1026 vertices, and at each time step applied an adaptive grid method, refining each element $K$ for which the condition \begin{align} \label{eqn:adaptive-grid} \|\nabla\phi\|_{L^\infty(K)}>\frac{\mu\epsilon}{|K|} \end{align} is satisfied, where $\mu=0.05$. For most of our simulations we will use $\epsilon=0.02$, which typically leads to a grid consisting of around 30,000 vertices. Figure \ref{fig:adaptive-grid} illustrates an example of such a grid around an interface.
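The secant update above can be exercised on a toy stand-in in which the mean value $\Xint-_\Gamma\phi_k\:{\rm d}\Gamma$, as a function of the multiplier $\lambda_k$, is replaced by a hypothetical monotone surrogate (here $\tanh$); the update formula and stopping rule are exactly the ones described, while the surrogate, starting values and tolerance scale are assumptions.

```python
import math

def mean_phi(lam):
    """Toy surrogate for the mean value of phi_k as a function of the
    multiplier lambda_k (in the scheme this comes from solving
    (secant1)-(secant2)); tanh is a hypothetical monotone stand-in."""
    return math.tanh(lam)

alpha = -0.5           # target mean value
tol = 1e-8

# secant iteration, exactly as in the text, with b/epsilon = 1 for the starts
lam_prev, lam = -1.0, 1.0
m_prev = mean_phi(lam_prev)
for _ in range(100):
    m = mean_phi(lam)
    lam_next = lam + (lam - lam_prev) * (alpha - m) / (m - m_prev)
    if abs(lam_next - lam) < tol:
        lam = lam_next
        break
    lam_prev, m_prev, lam = lam, m, lam_next

# at convergence the multiplier enforces the mean value constraint
assert abs(mean_phi(lam) - alpha) < 1e-6
```

Because the secant method converges superlinearly for smooth monotone data, the stopping criterion on $|\lambda_{k+1}-\lambda_k|$ also controls the residual in the constraint.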
\begin{figure} \centering \includegraphics[width=0.4\linewidth]{"Grid3".png} \caption{An example of how the adaptive grid method given in \eqref{eqn:adaptive-grid} resolves the interface for the case $\epsilon=0.02$.} \label{fig:adaptive-grid} \end{figure} We also used an adaptive time-stepping strategy: a uniform time step was used initially, while phase separation occurred, and thereafter an adaptive time step (within bounds) that is inversely proportional to \begin{align} \max_{x\in\Gamma_h}\frac{\left|\phi_h^{(m)}(x)-\phi_h^{(m-1)}(x)\right|}{\tau^{(m)}}, \end{align} which should be interpreted as the normal velocity of the interface. To graphically represent the numerical solutions, we deform the surface as described by \eqref{eqn-deform}. Here, for visualisation purposes, we exaggerate the size of the deformation $u_h$ by setting $\rho=1$, whereas in reality it should be significantly smaller. The colouring of the resulting surface is given by $\phi_h$ with red indicating $+1$ regions and blue $-1$ regions. \subsection{Stabilisation of multiple domains} We first explore whether there exist stable steady state solutions composed of multiple lipid rafts ($+1$ phase domains), a property observed in biological membranes. We choose $\kappa=1, R=1, b=1, \epsilon=0.02$ and $\sigma=10$, and use a uniform time step of $\tau= 10^{-2}$. We choose initial conditions with an increasing number of lipid rafts and investigate the impact of varying the spontaneous curvature $\Lambda$, which acts as the coupling parameter between the phase field and the deformation. In each case the initial conditions are chosen such that $\alpha=-0.5$.
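The time-step selection just described can be sketched as follows; the proportionality constant and the step bounds are hypothetical, as the text does not fix them, and only the structure (step inversely proportional to the maximal discrete rate of change, clamped within bounds) is taken from the description above.

```python
import numpy as np

def next_time_step(phi_new, phi_old, tau_old,
                   c=1e-3, tau_min=1e-4, tau_max=1e-1):
    """Choose the next step inversely proportional to the maximal discrete
    rate of change |phi^(m) - phi^(m-1)| / tau^(m) (a proxy for the normal
    velocity of the interface), clamped to [tau_min, tau_max].  The constant
    c and the bounds are tuning parameters, not values from the text."""
    rate = np.max(np.abs(phi_new - phi_old)) / tau_old
    if rate == 0.0:
        return tau_max                      # nothing is moving: largest step
    return float(np.clip(c / rate, tau_min, tau_max))

# a fast-moving interface forces the smallest admissible step
tau = next_time_step(np.array([0.0, 0.5]), np.array([0.0, 0.0]), tau_old=1e-2)
assert abs(tau - 1e-4) < 1e-12
```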
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"One".png} \caption{N=1} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Two".png} \caption{N=2} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Three".png} \caption{N=3} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Four".png} \caption{N=4} \end{subfigure} \caption{Stabilised steady state solutions of $N$ domains for $\Lambda=2$.} \label{fig-Stabilisation} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{"SponCurv".png} \includegraphics[width=0.45\linewidth]{"SponCurv2".png} \caption{Energy dependence of steady state solutions of $N$ lipid raft domains on the spontaneous curvature, $\Lambda$. The graph on the right is a zoomed-in version of the graph on the left on an area of interest.} \label{fig-Energy-Lambda} \end{figure} In Figure \ref{fig-Stabilisation} we depict stabilised steady state solutions consisting of $N$ lipid raft domains for $\Lambda=2$. In Figure \ref{fig-Energy-Lambda} we plot the energy \eqref{eqn-peturb-energy} against the spontaneous curvature $\Lambda$ for the corresponding steady state solutions. The values of $\Lambda$ considered were $0, 0.2, 0.4, 0.6, \ldots$, although obtaining a steady state was not possible in all cases: each $\Lambda$ value for which no corresponding energy $\mathcal{E}$ is plotted in Figure \ref{fig-Energy-Lambda} indicates that a state consisting of $N$ lipid raft domains was not a steady state solution. For example, the case $N=1$ and $\Lambda=2.8$ is illustrated in Figure \ref{fig-instability}.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Instability1".png} \caption{$\phi_h(\cdot,t=0)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Instability2".png} \caption{$\phi_h(\cdot,t=60)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Instability3".png} \caption{$\phi_h(\cdot,t=80)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Instability4".png} \caption{$\phi_h(\cdot,t=200)$} \end{subfigure} \caption{Unstable state which transitions from 1 domain towards 4 domains. Here, for visualisation purposes, we do not apply the deformation $u$.} \label{fig-instability} \end{figure} \subsection{Width of interface, $\epsilon$} Since we approximated the line tension by the Ginzburg-Landau energy functional, we wish to check that in the limit $\epsilon\to 0$ we see a tightening of the width of the diffuse interface. This is confirmed in Figure \ref{fig:epsilon}, where the initial condition was chosen to have icosahedral rotational symmetry, and the parameter values $\kappa=1, R=1, b=1, \Lambda=5$ and $\sigma=1$ were used.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"epsilon4".png} \caption{$\epsilon=0.04$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"epsilon3".png} \caption{$\epsilon=0.02$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"epsilon2".png} \caption{$\epsilon=0.01$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"epsilon1".png} \caption{$\epsilon=0.005$} \end{subfigure} \caption{Almost stationary discrete solutions for varying the width of the interface, $\epsilon$.} \label{fig:epsilon} \end{figure} \subsection{Long time behaviour} Starting with an initial condition of the form $\phi(\cdot,t=0)=\alpha+\mathcal{R}$, where $\mathcal{R}$ is a given small mean zero random perturbation, we investigate the long time behaviour when varying the different parameters, from which a number of interesting geometric features arise. To start with, we set $R=1$ and $\epsilon=0.02$ and consider the parameters $\Lambda=5$, $b=1$, $\alpha=-0.5$, $\sigma=1$ and $\kappa=1$ as a base case, varying each parameter in turn. Figure \ref{fig:time-evolution} gives a series of snapshots of how the solution evolves in time towards an almost stationary state solution, in this case consisting of 12 lipid rafts. Since we have seen that for the same set of parameters it is possible for differing numbers of lipid rafts to stabilise, we cannot conclude that this is a global minimiser, but it is indicative of the general trends, e.g. in the number of lipid rafts, that can be observed when varying certain parameters.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type1".png} \caption{$\phi_h(\cdot,t=0)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type2".png} \caption{$\phi_h(\cdot,t=0.4)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type3".png} \caption{$\phi_h(\cdot,t=0.5)$} \end{subfigure}\\ \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type4".png} \caption{$\phi_h(\cdot,t\approx3.565)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type5".png} \caption{$\phi_h(\cdot,t\approx115.565)$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"Type6".png} \caption{$\phi_h(\cdot,t\approx515.565)$} \end{subfigure} \caption{The time evolution for initial condition $\phi(\cdot,t=0)=-0.5+\mathcal{R}$ with parameters given by $\Lambda=5$, $b=1$, $\sigma=1$ and $\kappa=1$.} \label{fig:time-evolution} \end{figure} \subsubsection{Spontaneous curvature, $\Lambda$} In the case $\Lambda=0$ there is no coupling, so $u=0$ for all time, and $\phi$ evolves according to a conserved Allen-Cahn equation. We observe that as $|\Lambda|$ increases, so does the number of lipid rafts; see Figure \ref{fig:Coupling}. This is not surprising since, to minimise the energy $\mathcal{E}$, larger $\Lambda$ corresponds to increased curvature. As expected, the energy $\mathcal{E}$ coincides for positive and negative values of $\Lambda$, since switching the sign of $\Lambda$ amounts to switching the sign of $u$, which leaves $\mathcal{E}$ unchanged. Further details are given in Table \ref{table:Lambda}.
\begin{figure} \parbox{.24\linewidth}{\begin{subfigure}{.9\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda1".png} \caption{$\Lambda=0$} \end{subfigure}} \parbox{0.72\linewidth}{ \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda2".png} \caption{$\Lambda=-0.5$} \end{subfigure} \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda3".png} \caption{$\Lambda=-5$} \end{subfigure} \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda4".png} \caption{$\Lambda=-10$} \end{subfigure} \\ \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda5".png} \caption{$\Lambda=0.5$} \end{subfigure} \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda6".png} \caption{$\Lambda=5$} \end{subfigure} \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{"Lambda7".png} \caption{$\Lambda=10$} \end{subfigure}} \centering \caption{Almost stationary discrete solutions for varying the coupling coefficient $\Lambda$.} \label{fig:Coupling}\end{figure} \begin{table} \caption{Number of lipid rafts and discrete energy $\mathcal{E}_h$ of the almost stationary solutions in Figure \ref{fig:Coupling} for varying $\Lambda$.} \label{table:Lambda} \begin{minipage}{\textwidth} \tabcolsep=8pt \begin{tabular}{cccc} \hline\hline {Figure \ref{fig:Coupling}} & {$\Lambda$} & {\# of lipid rafts} & {$\mathcal{E}_h$} \\ \hline (a) & 0 & 1 & 5.1910 \\ (b) & -0.5 & 1 & 6.3583 \\ (c) & -5 & 12 & 66.9928 \\ (d) & -10 & 26 & 204.9876 \\ (e) & 0.5 & 1 & 6.3583 \\ (f) & 5 & 12 & 66.9928 \\ (g) & 10 & 26 & 204.9876 \\ \hline\hline \end{tabular} \end{minipage} \end{table} \subsubsection{Line tension, $b$} Similarly, we would expect that increasing the line tension $b$ would decrease the length of the interface, and hence decrease the number of lipid rafts. This agrees with the observed behaviour illustrated in Figure \ref{fig:b}. Further details are given in Table \ref{table:b}.
In Figure \ref{fig:b} (d) we observe that $u$ is positive both in the lipid raft domain and antipodal to it. This somewhat surprising behaviour arises from the fact that we have removed the components of the normals $\nu_i$ from $u$. \begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"b1".png} \caption{$b=0.2$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"b2".png} \caption{$b=1$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"b3".png} \caption{$b=2.5$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"b4".png} \caption{$b=50$} \end{subfigure} \caption{Almost stationary discrete solutions for varying the line tension term $b$.} \label{fig:b} \end{figure} \begin{table} \caption{Number of lipid rafts and discrete energy $\mathcal{E}_h$ of the almost stationary solutions in Figure \ref{fig:b} for varying $b$.} \label{table:b} \begin{minipage}{\textwidth} \tabcolsep=8pt \begin{tabular}{cccc} \hline\hline {Figure \ref{fig:b}} & {$b$} & {\# of lipid rafts} & {$\mathcal{E}_h$} \\ \hline (a) & 0.2 & 26 & 49.9075 \\ (b) & 1 & 12 & 66.9928 \\ (c) & 2.5 & 6 & 88.0572 \\ (d) & 50 & 1 & 376.0834 \\ \hline\hline \end{tabular} \end{minipage} \end{table} \subsubsection{Mean value of $\phi$} Figure \ref{fig:alpha} shows the effect of varying the mean value of $\phi$, with both stripe and circular raft behaviour observed, as well as no phase separation. Further details are given in Table \ref{table:alpha}. Although Figure \ref{fig:alpha} (a) is almost stationary, its non-symmetric nature suggests that this is not a local minimiser.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"alpha4".png} \caption{$\alpha=0$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"alpha3".png} \caption{$\alpha=-0.25$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"alpha2".png} \caption{$\alpha=-0.5$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"alpha1".png} \caption{$\alpha=-0.75$} \end{subfigure} \caption{Almost stationary discrete solutions for varying $\alpha$, the mean value of the order parameter $\phi$.} \label{fig:alpha} \end{figure} \begin{table} \caption{Number of lipid rafts and discrete energy $\mathcal{E}_h$ of the almost stationary solutions in Figure \ref{fig:alpha} for varying $\alpha$.} \label{table:alpha} \begin{minipage}{\textwidth} \tabcolsep=8pt \begin{tabular}{cccc} \hline\hline {Figure \ref{fig:alpha}} & {$\alpha$} & {\# of lipid rafts} & {$\mathcal{E}_h$} \\ \hline (a) & 0 & - & 35.5574 \\ (b) & -0.25 & 12 & 44.2027 \\ (c) & -0.5 & 12& 66.9928 \\ (d) & -0.75 & - & 118.0643 \\ \hline\hline \end{tabular} \end{minipage} \end{table} \subsubsection{Surface tension, $\sigma$} Figure \ref{fig:sigma} shows the effect of varying the surface tension $\sigma$, with increasing $\sigma$ corresponding to an increasing number of lipid rafts. Further details are given in Table \ref{table:sigma}. Since in the case $\sigma=0$ there is not a unique solution to \eqref{secant2}, we used a nullspace method from PETSc to enforce that $\int u=0$.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"sigma1".png} \caption{$\sigma=0$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"sigma2".png} \caption{$\sigma=1$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"sigma3".png} \caption{$\sigma=10$} \end{subfigure} \caption{Almost stationary discrete solutions for varying the surface tension $\sigma$.} \label{fig:sigma} \end{figure} \begin{table} \caption{Number of lipid rafts and discrete energy $\mathcal{E}_h$ of the almost stationary solutions in Figure \ref{fig:sigma} for varying $\sigma$.} \label{table:sigma} \begin{minipage}{\textwidth} \tabcolsep=8pt \begin{tabular}{cccc} \hline\hline {Figure \ref{fig:sigma}} & {$\sigma$} & {\# of lipid rafts} & {$\mathcal{E}_h$} \\ \hline (a) & 0 & 8 & 64.0906 \\ (b) & 1 &12 & 66.9928 \\ (c) & 10 & 23 & 79.1846 \\ \hline\hline \end{tabular} \end{minipage} \end{table} \subsubsection{Bending rigidity, $\kappa$} Figure \ref{fig:kappa} illustrates the effect of varying the bending rigidity $\kappa$. We observe that increasing $\kappa$ leads to an increase in the number of lipid rafts. Further details are given in Table \ref{table:kappa}.
\begin{figure} \centering \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"kappa4".png} \caption{$\kappa=0.05$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"kappa3".png} \caption{$\kappa=0.1$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"kappa2".png} \caption{$\kappa=1$} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering \includegraphics[width=\linewidth]{"kappa1".png} \caption{$\kappa=10$} \end{subfigure} \caption{Almost stationary discrete solutions for varying the bending rigidity $\kappa$.} \label{fig:kappa} \end{figure} \begin{table} \caption{Number of lipid rafts and discrete energy $\mathcal{E}_h$ of the almost stationary solutions in Figure \ref{fig:kappa} for varying $\kappa$.} \label{table:kappa} \begin{minipage}{\textwidth} \tabcolsep=8pt \begin{tabular}{cccc} \hline\hline {Figure \ref{fig:kappa}} & {$\kappa$} & {\# of lipid rafts} & {$\mathcal{E}_h$} \\ \hline (a) & 0.05 & 1 & 16.9889 \\ (b) & 0.1 & 6 & 37.4941 \\ (c) & 1 & 12 & 66.9928 \\ (d) & 10 & 30 & 440.1609 \\ \hline\hline \end{tabular} \end{minipage} \end{table} \section{Outlook} The relationship between the diffuse interface approach considered here and a sharp interface problem via asymptotics will be considered in a work in preparation by the authors. Another interesting direction would be to consider a phase-dependent bending rigidity for the Gauss curvature within this perturbation approach, and to explore whether this could be sufficient to produce raft-like regions as well. \acknowledgements{ The work of CME was partially supported by the Royal Society via a Wolfson Research Merit Award. The research of LH was funded by the Engineering and Physical Sciences Research Council grant EP/H023364/1 under the MASDOC centre for doctoral training at the University of Warwick.}
\section{Introduction} \begin{figure}[thbp] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=1\linewidth]{figure/first.pdf} \caption{Illustration of the gaze object prediction approaches. (a) Previous gaze prediction methods use two separate backbones to tackle the scene image and head image, respectively. (b) The proposed SGS feature extractor can produce features from the scene image and head image in a unified manner.} \vspace{-0.6cm} \label{Illustration} \end{figure} Gaze estimation (GE) aims at determining the direction and point that a person is staring at \cite{recasens2015they}. As gaze behavior is an essential aspect of human social behavior \cite{emery2000eyes,land2009looking}, we can infer potential information from the object being stared at. For example, in front of a bus station, a person looking at his watch may indicate that he has something urgent to do. In a shopping mall, customers staring at a product may want to purchase it. The stared-at object can generally reveal a person's state, \eg what they are doing or what they plan to do. In the gaze estimation community, researchers usually employ two separate backbones to process the entire scene image and the head image. This scene-head separated structure was first proposed by Recasens \etal~\cite{recasens2015they}, where one backbone captures holistic cues from the entire scene image, and the other backbone analyzes the details from the head image. Later, Lian~\etal \cite{lian2018believe} proposed multi-scale gaze direction fields to analyze the head image precisely. Recently, Chong \etal \cite{chong2020detecting} elaborately designed interactions between the head branch and the scene branch and utilized deconvolutional layers to produce a fine-grained heatmap. In general, gaze estimation performance has kept improving in recent years, while the network architecture has gradually become more complex.
Existing models only predict the gaze area that people may stare at rather than precisely predicting the location of the object being stared at. Tomas \etal \cite{tomas2021goo} recently pointed out that identifying the stared-at object has significant practical value. As shown in Fig.~\ref{Illustration}, from the bounding box of the stared-at item, we can infer that the person is likely to buy the \textit{Locally Mango} product, which is beyond the scope of the traditional gaze estimation task. However, performing gaze object prediction is non-trivial and faces the following challenges, \ie a heavy network architecture and inconsistent requirements on image size. First of all, although adding an additional object detection branch to the existing two-branch gaze estimation models \cite{guan2020enhanced, zhao2020learning, saran2018human, recasens2017following, lian2018believe, chong2020detecting, recasens2015they,liu2021goal} is an intuitive solution, this approach would undoubtedly increase the number of calculations and parameters of the entire network. Besides, gaze estimation models usually employ an image of ordinary size (\eg $224 \times 224$) to capture a global receptive field. In contrast, objects in the retail scenario are generally small and dense, requiring an enlarged image to detect bounding boxes precisely. Moreover, rather than carrying out gaze estimation and object detection individually, a more suitable approach is to employ a unified framework and achieve joint optimization. To alleviate the above issues, we make the following three designs.
(1) Different from previous gaze estimation works~\cite{recasens2015they, lian2018believe, chong2020detecting} that use two independent branches (see Fig.~\ref{Illustration} (a)), we propose a specific-general-specific (SGS) mechanism to extract task-specific features from the scene and head images with only one backbone (see Fig.~\ref{Illustration} (b)), which helps to reduce the parameters and computational burden, and also makes joint optimization possible for different inputs. (2) To assist precise object detection, we develop a \textit{Defocus} layer to generate object-specific features. In particular, an input image of ordinary size cannot produce a feature map with sufficient resolution for detecting small and dense retail objects. The proposed \textit{Defocus} layer zooms the spatial size by shrinking the channel size, which can produce a high-resolution feature without losing information or bringing extra computations. (3) To tackle the performance bottleneck of imprecise gaze heatmaps, we propose the energy aggregation loss, which measures the percentage of energy within the stared-at box and uses the ground truth bounding box to guide the gaze estimation process. In our work, we propose a unified framework, namely GaTector, to estimate the gaze heatmap, detect retail objects, and conduct gaze object detection, as shown in Figure \ref{figGOPFramework}. The scene and head images are first jointly tackled by the SGS feature extractor. Then, the object detection head discovers bounding boxes and the gaze prediction head predicts the gaze heatmap, so we can jointly consider gaze prediction and object detection results to carry out gaze object prediction. Also, a novel wUoC metric is proposed to better reveal the difference between boxes even when they share no overlapping area.
Our contributions can be summarized as follows: \begin{itemize} \vspace{-0.2cm} \item We propose a unified method, GaTector, with a novel wUoC evaluation metric, making an early exploration of the gaze object detection task. \vspace{-0.2cm} \item We propose a novel SGS mechanism that can extract task-specific features with a single backbone while maintaining satisfactory performance. A \textit{Defocus} layer is introduced in SGS to prepare high-resolution feature maps for small retail object detection, and the energy aggregation loss guides the gaze heatmap to be concentrated. \vspace{-0.2cm} \item On the large-scale GOO dataset, we consistently improve the performance on two traditional tracks, \ie gaze estimation and object detection, while reducing the model parameters and computational costs. In addition, we build a solid baseline for the gaze object detection task to promote future research. \end{itemize} \begin{figure*}[thbp] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.9\linewidth]{figure/gaze-object-detection.pdf} \caption{Overview of the proposed method. (a) The specific-general-specific (SGS) architecture can provide task-specific features while sharing the backbone. (b) The object detection module and gaze prediction module can be jointly trained with the energy aggregation loss.} \vspace{-0.5cm} \label{figGOPFramework} \end{figure*} \vspace{-0.2cm} \section{Related Work} \vspace{-0.2cm} \noindent \textbf{Gaze following.} As a practical technique \cite{wang2022contextual, wang2022detail}, the gaze following task was proposed by Recasens \etal~\cite{recasens2015they} and is a well-explored branch of gaze estimation. Gaze estimation reveals where a person is looking and serves as an essential clue for understanding human intention \cite{wang2021multiple, wang2021exploring}.
Existing gaze estimation works can be divided into three categories according to different scenarios, \ie gaze point estimation \cite{liu2021goal,liu2021generalizing}, gaze following \cite{triesch2006gaze,brooks2005development,li2021looking}, and 3D gaze estimation \cite{masse2019extended, elmadjian20183d}. This paper is related to the gaze following task. In the early phase, Zhu \etal~\cite{zhu2012face} proposed a unified model that can perform face detection, pose estimation, and landmark estimation in realistic images. Recasens \etal \cite{recasens2015they} predict the gaze area by extracting head pose and gaze direction via a deep model. Afterward, Parks \etal \cite{parks2015augmented} combined saliency maps with human head pose and gaze direction to predict the gaze area for the observer. In addition, there are also some works focused on general gaze following \cite{mukherjee2015deep,triesch2006gaze,brooks2005development}. For example, Mukherjee~\etal \cite{mukherjee2015deep} restored the interaction with the environment based on head pose estimation. Unlike predicting the gaze area, this paper studies the gaze object prediction task and aims to discover the bounding box of the stared-at objects, which is more challenging. \noindent \textbf{Object detection.} Recently, object segmentation \cite{zhou2020matnet, zhou2022group, huang2021clrnet, huang2021scribble} and detection methods have made notable progress, where detection methods can be divided into anchor-free detectors~\cite{redmon2016you,redmon2017yolo9000,redmon2018yolov3,bochkovskiy2020yolov4,tian2019fcos} and anchor-based detectors~\cite{yao2020automatic, ren2015faster, feng2020progressive, lin2017focal,feng2020tcanet}. As a representative method of the anchor-free paradigm, YOLO~\cite{redmon2016you} predicts bounding boxes at points close to the object's center.
Later, a series of methods \cite{redmon2017yolo9000,redmon2018yolov3,bochkovskiy2020yolov4} based on YOLO gradually improved object detection performance and formed effective solutions for object detection. The anchor-based methods include two categories: one-stage methods \cite{liu2016ssd,lin2017focal} and two-stage methods \cite{ren2015faster,he2017mask}. The one-stage detector can directly make predictions based on the extracted feature map and default anchors, while the two-stage detector first generates object proposals and then performs detailed refinement. \noindent \textbf{Gaze prediction in the retail industry.} In recent years, automatic retail systems and human-object interaction \cite{zhou2021cascaded, zhou2020cascaded} have aroused increasing research interest. For example, Harwood \etal~\cite{harwood2014mobile} use mobile phones to track the visual attention of consumers in the retail environment. Besides, new benchmarks have been proposed to serve automatic checkout systems, \eg the D2S dataset~\cite{follmann2018mvtec} and the RPC dataset~\cite{wei2019rpc}. Recently, EyeShopper~\cite{bermejo2020eyeshopper} offered an innovative system to achieve a precise estimation of customers' sight. In the retail environment, it is valuable to estimate the stared-at products in order to make precise recommendations. However, existing methods only predict the gaze area and leave the gaze object prediction problem under-explored. \section{Method} Given a scene image $\mathbf{I}_{\rm s}$ and a head location mask $\mathbf{H}$, the head image $\mathbf{I}_{\rm h}$ is usually generated by cropping the scene image $\mathbf{I}_{\rm s}$. The goal of the gaze object prediction task is to predict the bounding box and category label of the object stared at by a human.
\subsection{SGS Mechanism} To extract holistic scene features and detailed head features, traditional gaze following works \cite{recasens2015they, lian2018believe, chong2020detecting} usually use two independent networks to tackle the scene image $\mathbf{I}_{\rm s}$ and the head image $\mathbf{I}_{\rm h}$, respectively. However, if this paradigm were used to resolve the gaze object detection problem, an extra object detection branch would be needed, resulting in three parallel branches in the model and a significant increase in parameters and computational cost. An intuitive way to reduce the parameters and computational cost is to share the backbone when extracting features for the object detection branch and the gaze prediction branch. However, this manner significantly reduces the performance (see Table~\ref{tabCmpGaze}), as the information required by object detection and gaze prediction is different. To alleviate this issue, this paper proposes a specific-general-specific (SGS) mechanism that jointly considers the task specificity of the scene image and head image before and after the shared backbone. SGS employs a shared backbone to extract features in a general manner and utilizes specific heads to prepare task-specific features for gaze prediction and object detection. As shown in Fig.~\ref{figGOPFramework} (a), we first use two independent convolutional layers to extract input-specific features for the scene image and head image before the shared backbone: \begin{equation} \mathbf{f}_{\rm sp}^{\rm s} = \psi^{\rm s} (\mathbf{I}_{\rm s}), \ \ \mathbf{f}_{\rm sp}^{\rm h} = \psi^{\rm h} (\mathbf{I}_{\rm h}), \end{equation} where $\psi^{\rm s}(\cdot)$ and $\psi^{\rm h}(\cdot)$ denote the convolutional layers for the scene image and head image, respectively, and $\mathbf{f}_{\rm sp}^{\rm s}$ and $\mathbf{f}_{\rm sp}^{\rm h}$ denote the extracted features.
Then we feed $\mathbf{f}_{\rm sp}^{\rm s}$ and $\mathbf{f}_{\rm sp}^{\rm h}$ into the shared backbone and produce features in a general manner: \begin{equation} \mathbf{f}_{\rm g}^{\rm s} = \psi^{\rm b} (\mathbf{f}_{\rm sp}^{\rm s}), \ \ \mathbf{f}_{\rm g}^{\rm h} = \psi^{\rm b} (\mathbf{f}_{\rm sp}^{\rm h}), \end{equation} where $\mathbf{f}_{\rm g}^{\rm s}$ and $\mathbf{f}_{\rm g}^{\rm h}$ denote the general features for the scene image and head image, and $\psi^{\rm b}$ denotes the shared backbone. By sharing the backbone, the network parameters and computational cost are significantly reduced. Afterward, we take the general scene feature $\mathbf{f}_{\rm g}^{\rm s}$ as input and design a novel \textit{Defocus} layer (see Section~\ref{secObjectDetection}) to generate an object-specific feature for detection: \begin{equation} \mathbf{f}_{\rm det}^{\rm s} = \phi^{\rm det} (\mathbf{f}_{\rm g}^{\rm s}), \end{equation} where $\phi^{\rm det}$ denotes the proposed \textit{Defocus} layer and $\mathbf{f}_{\rm det}^{\rm s}$ denotes the feature for object detection. Then, both the general head feature $\mathbf{f}_{\rm g}^{\rm h}$ and the general scene feature $\mathbf{f}_{\rm g}^{\rm s}$ are sent into two independent convolution layers to generate gaze-specific features for predicting gaze results: \begin{equation} \mathbf{f}_{\rm gaze}^{\rm s} = \phi^{\rm s} (\mathbf{f}_{\rm g}^{\rm s}), \ \ \mathbf{f}_{\rm gaze}^{\rm h} = \phi^{\rm h} (\mathbf{f}_{\rm g}^{\rm h}), \end{equation} where $\phi^{\rm s}(\cdot)$ and $\phi^{\rm h}(\cdot)$ indicate two independent convolution layers, and $\mathbf{f}_{\rm gaze}^{\rm s}$ and $\mathbf{f}_{\rm gaze}^{\rm h}$ indicate the scene and head features for the gaze prediction network (Section~\ref{sectionGazePrediction}).
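The data flow of Eqs.~(1)-(4) can be summarised in a short plumbing-only sketch, where the layers $\psi^{\rm s},\psi^{\rm h},\psi^{\rm b},\phi^{\rm det},\phi^{\rm s},\phi^{\rm h}$ are passed in as generic callables; the actual modules are of course convolutional networks, so this is an illustration of the SGS wiring rather than the implementation.

```python
def sgs_forward(I_s, I_h, psi_s, psi_h, backbone, defocus, phi_s, phi_h):
    """Specific-general-specific feature extraction (wiring sketch).

    Input-specific layers psi_s / psi_h, one shared backbone applied to
    both streams, then task-specific heads: the Defocus layer for object
    detection and phi_s / phi_h for gaze prediction."""
    f_sp_s = psi_s(I_s)        # Eq. (1): input-specific scene feature
    f_sp_h = psi_h(I_h)        # Eq. (1): input-specific head feature
    f_g_s = backbone(f_sp_s)   # Eq. (2): the backbone is shared ...
    f_g_h = backbone(f_sp_h)   # ... between both streams
    f_det = defocus(f_g_s)     # Eq. (3): object-specific feature
    f_gaze_s = phi_s(f_g_s)    # Eq. (4): gaze-specific scene feature
    f_gaze_h = phi_h(f_g_h)    # Eq. (4): gaze-specific head feature
    return f_det, f_gaze_s, f_gaze_h
```

Note that the single `backbone` callable is invoked on both streams, which is exactly the parameter sharing that distinguishes SGS from a three-branch design.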
By sharing the backbone and generating specific features before and after it, the proposed SGS mechanism can significantly reduce the parameters and computational cost while maintaining satisfactory performance, forming an effective and efficient unified framework for gaze object detection. \subsection{Gaze prediction} \label{sectionGazePrediction} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.9\linewidth]{figure/gaze5.pdf} \caption{Illustration of the gaze prediction network.} \vspace{-0.4cm} \label{figGazePrediction} \end{figure} As shown in Fig.~\ref{figGazePrediction}, the gaze prediction network takes two gaze-specific features (\ie $\mathbf{f}_{\rm gaze}^{\rm s}$ and $\mathbf{f}_{\rm gaze}^{\rm h}$) and the head location map (\ie $\mathbf{H}$) as input to predict the gaze results. As the head location map can provide valuable guidance for the gaze prediction task, Chong \etal \cite{chong2020detecting} concatenate the head location map and the scene image before extracting holistic scene features. However, the backbone in our GaTector serves both the gaze prediction task and the object detection task, which means the head location map may mislead the object detection process. Thus, we propose a novel ``head-delay" strategy to resolve this problem. It first employs the backbone network to tackle the scene image $\mathbf{I}_{\rm s}$ and produce the general feature. Then, the head location cues are supplied to the gaze prediction network. In particular, we use five convolutional layers with stride 2 to tackle the head location map and combine the generated feature with the scene feature. Based on the above two improvements, we follow \cite{chong2020detecting} to perform gaze object prediction. As shown in Fig.~\ref{figGazePrediction}, the gaze prediction network mainly consists of three modules, \ie the head module, the scene module, and the gaze prediction module.
In the scene module, we first process the head location image $\mathbf{H}$ with five convolutional layers and concatenate the output with the gaze-specific scene feature $\mathbf{f}_{\rm gaze}^{\rm s}$ to make the feature map aware of the gaze position. Then, in the head module, we process the head location image $\mathbf{H}$ with three max-pooling layers and a flatten operation, and concatenate the output with the gaze-specific head feature $\mathbf{f}_{\rm gaze}^{\rm h}$ to generate an attention map. Finally, we feed the element-wise product of this attention map and the output of the scene module into the gaze prediction module to obtain the gaze heatmap $\mathbf{M}$. In particular, the gaze prediction module has an encoder-decoder structure: the encoder consists of two convolutional layers, and the decoder consists of three deconvolutional layers. \subsection{Object detection} \label{secObjectDetection} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.9\linewidth]{figure/pixel-shuffle.pdf} \caption{Illustration of the \textit{Defocus} operation. One channel in the high-resolution feature map is transformed from $r^{2}$ channels in the low-resolution feature map.} \vspace{-0.4cm} \label{pixelshuffle} \end{figure} As shown in Fig.~\ref{figGOPFramework}, in the object detection branch, we propose a new layer named \textit{Defocus} to generate the object-specific feature $\mathbf{f}_{\rm det}^{\rm s}$. Then, we adopt the detection head of YOLOv4 \cite{bochkovskiy2020yolov4} to precisely discover objects in the retail scenario. Given an image, the backbone network tackles the entire scene image and outputs the general feature $\mathbf{f}_{\rm g}^{\rm s}$. An intuitive strategy to precisely detect multiple small objects in the retail scenario is to enlarge the scene image to generate high-resolution features. However, the enlarged scene image contributes little to the gaze prediction process and increases the computational burden.
Another strategy is to interpolate the feature map into a high resolution, which would either lose some valuable information or bring extra computational costs. For example, considering a feature map $\mathbf{x} \in \mathbb{R}^{2048 \times 7 \times 7}$, it contains 2048-D features and exhibits spatial size $7 \times 7$. If we first compress the channel dimension via a convolutional layer and then interpolate the feature map to extend the dimension, some valuable information would be lost in the channel compressing process. Alternatively, if we directly interpolate the feature map by a factor of 2, the output would be $\mathbf{x}' \in \mathbb{R}^{2048 \times 14 \times 14}$, which increases computational costs for subsequent steps. To resolve the problem above, we develop the \textit{Defocus} layer to enlarge the feature map without losing information or incurring extra computation. The \textit{Defocus} layer is the reverse operation of the \textit{focus} layer \cite{YOLOv5}. As shown in Fig.~\ref{pixelshuffle}, given an enlarging ratio $r$, we first rearrange elements in the feature map and shrink the channel dimension by a factor of $1/r^{2}$, then zoom the height and width by a factor of $r$. This rearranging operation requires few computational resources and retains all information in the produced object-specific feature $\mathbf{f}_{\rm det}^{\rm s}$. After obtaining the object-specific feature $\mathbf{f}_{\rm det}^{\rm s}$, the detection network uses the feature pyramid structure to fuse features from different blocks, where feature maps are interpolated to pursue a consistent spatial size. We also replace this interpolation process with the proposed \textit{Defocus} layer, which requires fewer computational resources while achieving precise detection results.
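A minimal sketch of the \textit{Defocus} rearrangement, under the assumption (our reading of Fig.~\ref{pixelshuffle}) that it coincides with the standard pixel-shuffle operation; the function name is ours, not from the released code:

```python
import torch
import torch.nn.functional as F

def defocus(x: torch.Tensor, r: int = 2) -> torch.Tensor:
    """Rearrange a (B, C, H, W) map into (B, C/r^2, r*H, r*W).

    The operation is a pure permutation of elements: channels shrink by
    1/r^2 while height and width grow by r, so no information is lost
    and no learnable parameters are introduced.
    """
    assert x.shape[1] % (r * r) == 0, "channels must be divisible by r^2"
    return F.pixel_shuffle(x, upscale_factor=r)

x = torch.randn(1, 2048, 7, 7)
y = defocus(x, r=2)   # (1, 512, 14, 14)
```

Because the rearrangement only permutes elements, every value of the low-resolution map survives in the enlarged one, which is the "no information loss" property used in the text.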
Moreover, as multiple objects in the retail scene do not exhibit drastic variations in object size, we find one high-resolution feature map is sufficient to detect retail objects; thus, we remove the other detection heads, which saves 49.8\% of the computational cost. \subsection{Energy aggregation loss} \label{Box energy} The studied gaze object prediction task usually requires high-quality gaze heatmaps to generate precise results. Take the model of \cite{chong2020detecting} as an example: it achieves an AUC of 0.952 while the L$_2$ distance and angular error are 0.075 and 15.1$^\circ$, respectively. Although numerical values of these two errors seem minor, such errors can cause an apparent departure of the gaze point and lead to incorrect gaze object predictions. A high-quality gaze heatmap should accurately hit the gaze point, so its energy should be aggregated on the ground truth stared object. Thus, we propose the energy aggregation loss and utilize the ground truth to guide the gaze estimation process. \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.85\linewidth]{figure/box-energy.pdf} \caption{Calculation procedure of the energy aggregation loss. ``Mean'' denotes the element-wise mean operation.} \vspace{-0.5cm} \label{figTask} \end{figure} As shown in Fig.~\ref{figTask}, given a heatmap $\mathbf{M}$ predicted by the gaze prediction module, the energy of each pixel can be represented as $\mathbf{M}_{i,j}$. If the ground truth gaze object box is defined as $\mathbf{b} = (x_{1}, y_{1}, x_{2}, y_{2})$, where $ (x_{1}, y_{1}) $ and $ (x_{2}, y_{2}) $ are coordinates of the top left and bottom right corners, we can get the average energy within this box by: \begin{equation} E_{\rm b} = \frac{1}{N} \sum^{x_{2}}_{i=x_{1}} \sum^{y_{2}}_{j=y_{1}} \mathbf{M}_{i,j}, \label{eqMeanEng} \end{equation} where $N$ indicates the number of elements within box $\mathbf{b}$.
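Eq.~(\ref{eqMeanEng}) can be sketched as follows; the final ratio $-E_{\rm b}/E_{\rm I}$ anticipates the energy aggregation loss defined next in this subsection. The row/column indexing convention and the small epsilon are our assumptions.

```python
import torch

def box_mean_energy(heatmap: torch.Tensor, box) -> torch.Tensor:
    """Average heatmap energy inside a box (E_b in Eq. (1)).

    heatmap: (H, W) tensor M; box: (x1, y1, x2, y2), inclusive pixel indices.
    """
    x1, y1, x2, y2 = box
    return heatmap[y1:y2 + 1, x1:x2 + 1].mean()

def energy_aggregation_loss(heatmap: torch.Tensor, box) -> torch.Tensor:
    """L_eng = -E_b / E_I, where E_I averages over all spatial bins."""
    e_b = box_mean_energy(heatmap, box)
    e_i = heatmap.mean()
    return -e_b / (e_i + 1e-8)   # epsilon guards against an all-zero map (our addition)

m = torch.zeros(4, 4)
m[1:3, 1:3] = 1.0   # all energy lies inside the box (1, 1, 2, 2)
```

At inference, the same `box_mean_energy` score can be used to rank detected boxes and pick the gaze object, as described later in this section.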
In addition, we can calculate the total energy within the image $E_{\rm I}$ by calculating the average value of all spatial bins. To guide the heatmap energy to aggregate on the stared object box, we define the energy aggregation loss as: \begin{equation} \mathcal{L}_{\rm eng} = - \frac{E_{\rm b}}{E_{\rm I}}. \end{equation} \subsection{Training and inference} \noindent \textbf{Training.} We assign three default boxes at each spatial location for object detection, whose spatial sizes are $12 \times 16$, $19 \times 40$, and $28 \times 64$, respectively. As our detection branch is based on the detection head of YOLOv4~\cite{bochkovskiy2020yolov4}, we employ the same method as YOLOv4 to calculate the detection loss $\mathcal{L}_{\rm det}$, which jointly considers the detection confidence score, category classification score, and bounding box regression. In the gaze prediction branch, we first generate the ground truth heatmap from the ground truth gaze point by applying Gaussian blur and then calculate the heatmap loss $\mathcal{L}_{\rm gaze}$ by measuring the MSE between the predicted heatmap and the ground truth heatmap. In addition, the energy aggregation loss $\mathcal{L}_{\rm eng}$ uses the ground truth object box to guide the training of the gaze prediction branch. The total loss used in the training process consists of these three terms: \begin{equation} \mathcal{L}_{\rm total} = \mathcal{L}_{\rm det}+ \mathcal{L}_{\rm gaze} + \mathcal{L}_{\rm eng}. \end{equation} \begin{figure}[!t] \centering \setlength{\abovecaptionskip}{0.1cm} \includegraphics[width=0.8\linewidth]{figure/wUoC.pdf} \caption{Illustration of our proposed metric. In cases (a) and (b), the wUoC metric decreases as the overlapping area shrinks.
In cases (c) and (d), wUoC can reveal the similarity when one box fully covers the other.} \vspace{-0.5cm} \label{figWUoC} \end{figure} \noindent \textbf{Inference.} In the inference process, the entire image and the cropped head image are first processed by the proposed specific-general-specific mechanism to generate task-specific features. Then the gaze prediction branch estimates the gaze heatmap, and the object detection branch predicts the object bounding boxes. Afterward, we follow Eq.~(\ref{eqMeanEng}) to calculate the mean energy of each box and select the box with the maximum mean energy as the predicted gaze object. \subsection{Evaluation metric} \label{secMetric} As a newly proposed task, our studied gaze object prediction requires a proper evaluation metric. The object detection community first measures the intersection over union (IoU) between predicted and ground-truth bounding boxes, then calculates the average precision (AP) to measure the performance. The AP score would be zero if there is no overlapping area (\ie IoU=0), and AP cannot reveal the distance between prediction and ground truth. However, such distance is meaningful when performing gaze object prediction in the retail scenario. For example, the retail system first predicts the stared object and then sequentially recommends this object and its neighboring ones to the customer. To alleviate this issue, a reasonable solution is to make the traditional IoU metric aware of the distance between bounding boxes. Given the prediction box $\mathbf{p}$ and the ground truth box $\mathbf{g}$, we calculate their minimum closure and obtain a bounding box $\mathbf{a}$. As shown in Fig.~\ref{figWUoC} (a), the union over closure $\text{UoC} = \frac{\mathbf{p} \cup \mathbf{g}}{\mathbf{a}}$ can reveal the distance between prediction and ground truth. As shown in Fig.~\ref{figWUoC} (b), even if there is no overlap, the UoC metric can still reveal the distance.
However, when one box fully covers the other, the UoC metric becomes 1 and cannot distinguish cases like Fig.~\ref{figWUoC} (c) and Fig.~\ref{figWUoC} (d). We therefore further introduce a size similarity weight into the UoC metric. The size similarity weight considers the areas of the two boxes and can be defined as $ w = \min{(\frac{\mathbf{p}}{\mathbf{g}}, \frac{\mathbf{g}}{\mathbf{p}})}$. Thus, our proposed metric can be formulated as: \begin{equation} \text{wUoC} = \min{(\frac{\mathbf{p}}{\mathbf{g}}, \frac{\mathbf{g}}{\mathbf{p}})} \times \frac{\mathbf{p} \cup \mathbf{g}}{\mathbf{a}}. \label{EqwUoC} \end{equation} As shown in Fig.~\ref{figWUoC} (c) and Fig.~\ref{figWUoC} (d), the wUoC metric can tackle the overlapping case and adaptively reveal the prediction quality. \begin{table}[!t] \small \centering \setlength{\tabcolsep}{6.2pt} \caption{Gaze estimation performance on the GOO-Synth.} \vspace{-0.3cm} \begin{tabular}{l|ccc} \toprule Setups & AUC $\uparrow$ & Dist. $\downarrow$ & Ang. $\downarrow$ \\ \midrule Random & 0.497 & 0.454 & 77.0 \\ Recasens \etal \cite{recasens2015they} & 0.929 & 0.162 & 33.0 \\ Lian \etal\cite{lian2018believe} & 0.954 & 0.107 & 19.7 \\ Chong \etal\cite{chong2020detecting} & 0.952 & 0.075 & 15.1 \\ \midrule \#a Sharing backbone & 0.905 & 0.139 & 27.1 \\ \#b Head-free scene branch & 0.941 & 0.100 & 18.8 \\ \#c Head-delay & 0.951 & 0.091 & 16.2 \\ \#d GaTector & \textbf{0.957} & \textbf{0.073} & \textbf{14.9} \\ \bottomrule \end{tabular}% \label{tabCmpGaze}% \vspace{-0.4cm} \end{table}% \section{Experiments} \subsection{Setups} \noindent \textbf{Dataset.} We use the GOO dataset to evaluate our proposed method. GOO contains annotations of the gaze point, the gaze object, and bounding boxes for all objects, and covers 24 different categories. The GOO-Synth subset contains 192,000 synthetic images, while the GOO-Real subset contains 9,552 real images.
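For completeness, the wUoC metric of Eq.~(\ref{EqwUoC}) can be computed as in the following sketch, where boxes are $(x_1, y_1, x_2, y_2)$ tuples and $\mathbf{p}$, $\mathbf{g}$, $\mathbf{a}$ denote box areas, following the definitions above:

```python
def wuoc(p, g):
    """Size-similarity weight times union-over-closure (Eq. for wUoC)."""
    def area(b):
        return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

    # intersection and union of the two boxes
    inter = max(0.0, min(p[2], g[2]) - max(p[0], g[0])) * \
            max(0.0, min(p[3], g[3]) - max(p[1], g[1]))
    union = area(p) + area(g) - inter

    # minimum closure: smallest box covering both p and g
    closure = (max(p[2], g[2]) - min(p[0], g[0])) * \
              (max(p[3], g[3]) - min(p[1], g[1]))

    w = min(area(p) / area(g), area(g) / area(p))   # size similarity weight
    return w * union / closure
```

Identical boxes yield wUoC = 1; disjoint boxes yield a value that decays with their separation, matching the behavior illustrated in Fig.~\ref{figWUoC}.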
The GOO dataset exhibits multiple challenges for the gaze object prediction task, such as small object sizes and a large number of objects. Images in the GOO dataset contain 80 objects on average. \noindent \textbf{Implementation details.} In the proposed GaTector, the sharing backbone network adopts ResNet-50 \cite{he2016deep}. As for object detection, we set $r=2$ in the \textit{Defocus} layer, use NMS with a threshold of 0.3 to remove redundant boxes, and keep the top 100 boxes for each image. As for gaze estimation, we apply Gaussian blur with kernel 3 to process the ground-truth gaze point. The network is optimized by the Adam \cite{kingma2014adam} algorithm for 100 epochs. We set the batch size to 32 and the initial learning rate to $10^{-4}$. All experiments are implemented with PyTorch~\cite{NEURIPS2019_9015} on GeForce RTX 3090 GPUs. \noindent \textbf{Metric.} As for gaze object prediction, we use the wUoC metric as described in Section \ref{secMetric}. For gaze prediction, we follow previous works~\cite{tomas2021goo,chong2020detecting,lian2018believe} and use three metrics, \ie AUC, L2 distance (Dist.), and angular error (Ang.). The AUC metric evaluates whether each cell in the spatially discretized image is correctly classified as the gaze target. Dist. is the L2 distance between the annotated gaze location and the prediction. Ang. is the angular error of the predicted gaze. For object detection, we follow previous works \cite{bochkovskiy2020yolov4,ren2015faster,he2017mask} and use the average precision (AP) as our metric. \subsection{Gaze prediction} \begin{table}[!t] \centering \small \setlength{\tabcolsep}{3.5pt} \caption{Object detection performance on the GOO-Real. The computational cost of each setup is reported in terms of Parameters (M) and FLOPs (G).} \vspace{-0.3cm} \begin{tabular}{l|cc|ccc} \toprule Setups & Para.
& FLOPs & AP & AP$_{50}$ & AP$_{75}$ \\ \midrule \#a YOLOv4 \cite{bochkovskiy2020yolov4} & 64.07 & 8.69 & 43.69 & 84.02 & 43.59 \\ \#b ResNet-50 backbone & 61.65 & 7.85 & 40.87 & 82.04 & 37.48 \\ \#c Large feature map & 40.35 & \textbf{6.17} & 41.07 & 81.67 & 38.46 \\ \#d Interpolation & 40.35 & 12.37 & 54.67 & 94.93 & 58.21 \\ \#e Defocus & \textbf{39.53} & 12.30 & \textbf{56.20} & \textbf{96.90} & \textbf{60.23} \\ \#f GaTector & 60.78 & 18.11 & 52.25 & 91.92 & 55.34 \\ \bottomrule \end{tabular}% \label{tabCmpDetection}% \vspace{-0.4cm} \end{table}% In traditional gaze follow works \cite{recasens2015they, lian2018believe, chong2020detecting}, two separate networks are used to tackle the scene image and the head image, respectively. Under this paradigm, Lian~\etal \cite{lian2018believe} and Chong \etal \cite{chong2020detecting} achieve promising performance, as shown in Table \ref{tabCmpGaze}. Aiming to employ a sharing backbone to perform gaze object detection, we study multiple strategies to integrate the scene and head branches. First of all, we use ResNet-50 \cite{he2016deep} as a sharing backbone and jointly employ the proposed SGS architecture to serve the scene and head branches. Following Chong \etal \cite{chong2020detecting}, we concatenate the scene image and head location map and send the four-channel input to the scene branch. As shown in Table \ref{tabCmpGaze} setup \#a, simply sharing the backbone gets an AUC of 0.905 and leads to a 0.047 performance drop. The drop mainly comes from the random initialization of the first convolutional block in the scene branch. In particular, given a four-channel input, the first convolutional layer cannot use the pre-trained weights in ResNet-50 \cite{he2016deep}, and features extracted from the randomly initialized layer are insufficient to capture sensitive information. Thus, we remove the head location map and only send the scene image to the scene branch.
This experiment achieves an AUC of 0.941 and verifies the necessity of proper initialization for the first convolutional layer. However, without the head location map, the scene branch still shows a performance drop compared with \cite{chong2020detecting}. Consequently, we propose the head-delay strategy, as described in Section \ref{sectionGazePrediction}. The head-delay strategy achieves performance comparable to Chong's method \cite{chong2020detecting}. Moreover, the complete method of our proposed GaTector achieves high-quality gaze estimation performance and makes improvements under all three metrics. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{figure/figure7.pdf} \vspace{-0.4cm} \caption{Visualization results of GaTector. We show the results of object detection (OD), gaze estimation (GE), gaze object prediction (GOD), and ground truth gaze object prediction (GOD GT). We provide five successful cases (left) and one failure case (right).} \vspace{-0.5cm} \label{figvisualization} \end{figure*} \begin{table}[t] \centering \caption{Comparison of gaze object prediction performance, measured by wUoC (\%) on the GOO-Real dataset.} \vspace{-0.3cm} \small \setlength{\tabcolsep}{2.0pt} \begin{tabular}{l|ccc|cc} \toprule \multirow{2}[1]{*}{Setups} & \multirow{2}[1]{*}{Para.} & \multirow{2}[1]{*}{FLOPs} & \multirow{2}[1]{*}{AP$_{50}$} & \multicolumn{2}{c}{GOP} \\ & & & & GT gaze & Pred.
gaze \\ \midrule Faster-RCNN \cite{ren2015faster} & 102.70 & 32.92 & 25.47 & 12.04 & 0.69 \\ PAA \cite{paa-eccv2020} & 93.39 & 19.40 & 54.72 & 13.61 & 1.01 \\ RetinaNet \cite{lin2017focal} & 98.05 & 20.02 & 72.86 & 15.06 & 1.77 \\ Sabl \cite{Wang_2020_ECCV} & 97.64 & 19.58 & 73.10 & 15.14 & 1.21 \\ FCOS \cite{tian2019fcos} & 93.34 & 19.16 & 74.78 & 14.17 & 0.66 \\ YOLOv4 \cite{bochkovskiy2020yolov4} & 125.52 & 18.14 & 84.02 & 16.05 & 1.56 \\ \midrule GaTector & \textbf{60.78} & \textbf{18.11} & \textbf{91.92} & \textbf{20.35} & \textbf{3.31} \\ \bottomrule \end{tabular}% \vspace{-0.5cm} \label{tab:addlabel}% \end{table}% \subsection{Object detection} Table \ref{tabCmpDetection} studies the object detection performance under different setups. First of all, vanilla YOLOv4 \cite{bochkovskiy2020yolov4} with CSPDarknet53 initialization gets an AP of 43.69\%, which requires 64.07M parameters. In the proposed GaTector, the object detection branch and the gaze prediction branch require the same backbone network. Thus, we use ResNet-50 \cite{he2016deep} to replace the CSPDarknet53 backbone, which leads to a performance drop of 2.82\%. Considering that the gaze object prediction dataset contains small objects of consistent size, we believe a large feature map is beneficial to detect gaze objects. Given three feature maps with different spatial sizes from YOLOv4, we only perform object detection on the large feature map and observe slight performance improvements, as shown in Table \ref{tabCmpDetection} setup \#c. The small and dense objects inspire us to use large feature maps. However, due to the constraint of computational burdens and GPU memory, the spatial size of the input image is limited (\eg 224 $\times$ 224). An intuitive way to enlarge the feature map is interpolation, which gets an AP of {54.67\%} at the cost of increased computations.
In contrast to interpolation, the proposed \textit{Defocus} layer improves the performance to 56.2\% while requiring fewer computations. As for the complete method, GaTector gets an AP of 52.25\% when only learning from the GOO-Real subset. The performance gap between 56.20\% and 52.25\% is primarily caused by the simplified data augmentation process. Specifically, when integrating gaze follow and object detection into a unified framework, we only keep transformations suitable for both tasks, \eg random crop and color transformation. Because GOO-Real only contains limited images, the simplified data augmentation would damage the detection performance. \subsection{Gaze object prediction} Table \ref{tab:addlabel} reports the gaze object detection performance under two different paradigms, \ie the separated paradigm and the unified paradigm. Firstly, we can perform gaze object detection via two separate steps, \ie object detection and gaze selection. We carry out experiments with six well-performing methods: Faster-RCNN \cite{ren2015faster}, PAA \cite{paa-eccv2020}, RetinaNet \cite{lin2017focal}, Sabl \cite{Wang_2020_ECCV}, FCOS \cite{tian2019fcos}, and YOLOv4 \cite{bochkovskiy2020yolov4}, including both anchor-based methods \cite{bochkovskiy2020yolov4, ren2015faster, paa-eccv2020, lin2017focal, Wang_2020_ECCV} and anchor-free methods \cite{tian2019fcos}, both one-stage methods \cite{lin2017focal, bochkovskiy2020yolov4, tian2019fcos,paa-eccv2020} and two-stage methods \cite{ren2015faster,Wang_2020_ECCV}. Trained on the GOO-Real dataset, YOLOv4 \cite{bochkovskiy2020yolov4} and FCOS \cite{tian2019fcos} achieve precise performance with limited FLOPs.
\begin{table*}[t] \small \centering \caption{Ablation studies on GOO-Synth. We report the performance of gaze estimation (GE), object detection (OD), and gaze object prediction (GOP).} \vspace{-0.3cm} \begin{tabular}{l|ccc|ccc|c} \toprule \multirow{2}[1]{*}{Setups} & \multicolumn{3}{c|}{GE} & \multicolumn{3}{c|}{OD} & GOP \\ & AUC $\uparrow$ & Dist. $\downarrow$ & Ang. $\downarrow$ & AP & AP$_{50}$ & AP$_{75}$ & wUoC \\ \midrule \#a w/o input-specific blocks & 0.596 & 0.387 & 70.17 & \multicolumn{1}{l}{35.2} & 66.4 & 41.3 & 0.0 \\ \#b w/o gaze-specific blocks & 0.952 & 0.114 & 20.76 & \multicolumn{1}{l}{54.7} & 93.2 & 60.5 & 24.7 \\ \#c w/o object-specific Defocus & 0.951 & 0.075 & 15.20 & 40.9 & 81.6 & 38.3 & 24.9 \\ \#d w/o $\mathcal{L}_{eng}$ & 0.946 & 0.103 & 17.20 & 55.8 & 94.4 & 61.5 & 25.2 \\ GaTector & \textbf{0.957} & \textbf{0.073} & \textbf{14.91} & \textbf{56.8} & \textbf{95.3} & \textbf{62.5} & \textbf{28.5} \\ \bottomrule \end{tabular}% \vspace{-0.5cm} \label{tabAblaComplete}% \end{table*}% Afterward, we employ the complete GOO dataset to train the proposed GaTector model and predict the gaze heatmap. Given object detection boxes and predicted gaze heatmaps, we follow Eq.~(\ref{eqMeanEng}) to select bounding boxes with high mean energy and report results for the gaze object prediction task. As shown in Table \ref{tab:addlabel}, although object detection results are acceptable, the gaze object prediction scores are low. We find the performance bottleneck lies in inaccurate gaze heatmaps. In contrast, if we utilize the ground truth heatmaps to select boxes from the detection results, the gaze object prediction performance would receive apparent gains. Compared with the separated paradigm, our proposed GaTector adopts the unified paradigm, \ie employing a unified network to perform gaze follow and object detection, and then conduct gaze object prediction.
As shown in Table \ref{tab:addlabel}, GaTector consistently improves the object detection performance from 84.02\% to 91.92\%. However, due to the small amount of data in the GOO-Real dataset, current methods exhibit limited performance on the gaze object prediction task. There are two potential reasons: the predicted gaze point is not accurate enough, or our wUoC metric is stricter than the traditional detection metric mAP. \subsection{Ablation studies} \begin{table}[t] \small \centering \caption{Analysis of the bottleneck of GaTector on the GOO-Synth.} \begin{tabular}{cc|cc|c} \toprule \multicolumn{2}{c|}{Gaze Heatmap} & \multicolumn{2}{c|}{Object Box} & GOP \\ \cmidrule{1-4} GT & Pred. & GT & Pred. & wUoC \\ \midrule & \checkmark & & \checkmark & 28.50 \\ & \checkmark & \checkmark & & 29.81 \\ \checkmark & & & \checkmark & 78.79 \\ \bottomrule \end{tabular}% \label{tabAblBottleneck}% \end{table}% In GaTector, we propose an SGS feature extractor, a \textit{Defocus} layer, and an energy aggregation loss to perform gaze object prediction. Table \ref{tabAblaComplete} reports ablation experiments on each component. Firstly, removing the two input-specific convolutional layers $\psi^{\rm s}(\cdot)$ and $\psi^{\rm h}(\cdot)$ causes a dramatic performance drop. This verifies the necessity of transforming inputs into a general space before sending them to the general backbone. Besides, both gaze follow and object detection suffer an apparent performance drop when removing the gaze-specific output blocks $\phi^{\rm s}(\cdot)$ and $\phi^{\rm h}(\cdot)$, which verifies the influence of the task-specific output convolutional layers. In addition, we replace the \textit{Defocus} layer with the traditional interpolation operation and observe performance drops. These three experiments prove the rationality and necessity of our proposed SGS feature extraction mechanism.
Finally, removing the energy aggregation loss would damage the performance of gaze estimation, especially under the L2 distance and angular error metrics. Table~\ref{tabAblBottleneck} analyzes the performance bottleneck of our proposed GaTector. Using the predicted gaze heatmap and object bounding boxes, we select a suitable bounding box from the predictions by Eq.~(\ref{eqMeanEng}), and GaTector gets a wUoC of {28.50\%}. If we use ground truth object boxes, the gaze object prediction performance can only be improved to {29.81\%}, which indicates that the current approach already achieves accurate object detection. In contrast, the performance improves dramatically to 78.79\% if we use the ground truth heatmap to select boxes, which indicates that the current performance bottleneck mainly lies in gaze estimation. The bottleneck analysis provides a promising direction for future research. \subsection{Qualitative results} Fig.~\ref{figvisualization} exhibits qualitative results of the proposed GaTector method. Faced with various retail scenarios (\eg various distances to the retail objects, different shooting views), our proposed GaTector can accurately localize bounding boxes and predict relatively precise gaze heatmaps, finally leading to accurate gaze object prediction results. The last row of Fig.~\ref{figvisualization} shows a failure case, where errors in the gaze heatmap mislead the gaze object prediction process. \vspace{-0.2cm} \section{Conclusion} \vspace{-0.2cm} We make an early exploration to build a unified framework to tackle the gaze object prediction task. To simultaneously conduct object detection and gaze estimation, we propose a novel SGS mechanism to extract two task-specific features with only one backbone, which helps to reduce model parameters and computational burdens.
Then, we introduce two input-specific blocks before the shared backbone and three task-specific blocks after the shared backbone to consider the specificity of inputs. To serve small object detection in retail scenarios, we propose a \textit{Defocus} layer to enlarge the feature map without losing information or bringing extra computations. Also, we design an energy aggregation loss that employs the bounding box of the stared object to guide the gaze heatmap prediction. The promising gains of GaTector in all three tracks of the GOO dataset demonstrate the efficacy of our method, which may inspire related tasks, \eg multi-modality learning \cite{wang2021polo} and small object detection \cite{feng2020tcanet}. \vspace{1mm} \noindent \textbf{Limitations.} Since our GaTector is only verified on the benchmark dataset, it may encounter challenges when applied to practical scenarios, \eg lacking sufficient supervision~\cite{wang2021pfwnet, yang2021background} or requiring online decisions~\cite{yang2022colar}. This indicates a promising research direction for future works. \begin{figure*}[thbp] \centering \includegraphics[width=1\linewidth]{figure/yolonet.pdf} \caption{Detailed network architecture of our object detection branch. Df, CONV, and BN indicate the \textit{Defocus}, convolution, and batch normalization operations, respectively. Under each operation, we present the size of the feature map in the form of (channel, height$\times$width).} \label{figDetection} \end{figure*} \section*{A1. Process to calculate the training loss.} The training process of our GaTector is driven by three loss terms,~\ie the object detection loss $\mathcal{L}_{det}$, the gaze estimation loss $\mathcal{L}_{gaze}$, and our proposed energy aggregation loss $\mathcal{L}_{eng}$. The energy aggregation loss $\mathcal{L}_{eng}$ is illustrated in Subsection 3.4 of our manuscript.
Here, we elaborate on the detailed processes to calculate the object detection loss $\mathcal{L}_{det}$ and the gaze estimation loss $\mathcal{L}_{gaze}$. For a fair comparison, we keep setups identical to YOLOv4 \cite{bochkovskiy2020yolov4} when calculating the object detection loss $\mathcal{L}_{det}$, and identical to Chong \etal~\cite{chong2020detecting} when calculating the gaze estimation loss $\mathcal{L}_{gaze}$. \noindent \textbf{Object detection.} Given a predicted box $(x, y, w, h, p, \mathbf{s})$, $(x,y)$ indicates the central point, $(w,h)$ indicates width and height, $p$ is the predicted overlap, and $\mathbf{s} = [s_{0}, s_{1}, ..., s_{C}]$ is the predicted classification score. The corresponding ground truth can be represented as $(x^{g}, y^{g}, w^{g}, h^{g}, o, \mathbf{y})$, where $o$ indicates the ground truth overlap and $\mathbf{y} = [y_{0}, y_{1}, ..., y_{C}]$, $ y_{c} \in \{0, 1\}$ represents whether this bounding box belongs to the $c^{th}$ category. The object detection loss $\mathcal{L}_{det}$ jointly considers classification, overlapping, and box regression: \begin{equation} \mathcal{L}_{det} = \mathcal{L}_{det}^{cls} + \mathcal{L}_{det}^{o} + \mathcal{L}_{det}^{reg}. \end{equation} The classification term calculates the binary cross-entropy loss: \begin{equation} \mathcal{L}_{det}^{cls} = \frac{1}{C+1} \sum_{c=0}^{C} - [y_{c}\log\hat{s}_{c} + (1 - y_{c})\log(1 - \hat{s}_{c})], \end{equation} where $\hat{s}_{c}$ indicates the classification score after the sigmoid activation. The overlapping loss adopts the binary cross-entropy loss as well: \begin{equation} \mathcal{L}_{det}^{o} = - [o\log\hat{p} + (1 - o)\log(1 - \hat{p})], \end{equation} where $\hat{p}$ indicates the overlap score after the sigmoid activation. As for box regression, we adopt the CIoU loss \cite{zheng2020distance}.
Succinctly, the regression loss can be calculated as follows: \begin{equation} \mathcal{L}_{det}^{reg}=1 - IoU+\frac{\rho^{2}\left((x, y), (x^{g}, y^{g})\right)}{d^{2}}+\alpha v, \end{equation} where $IoU$ indicates the intersection over union between the predicted box and the ground truth box. The term $\frac{\rho^{2}\left((x, y), (x^{g}, y^{g})\right)}{d^{2}}$ aims to minimize the distance between the central points of the two boxes, where $\rho^{2}\left((x, y), (x^{g}, y^{g})\right)$ indicates the squared Euclidean distance and $d$ represents the diagonal length of the smallest enclosing box covering the two boxes. In addition, the term $\alpha v$ measures the consistency of the aspect ratio, where $v$ is defined as: \begin{equation} v=\frac{4}{\pi^{2}}\left(\arctan \frac{w^{g}}{h^{g}}-\arctan \frac{w}{h}\right)^{2}. \end{equation} The coefficient $\alpha$ can be calculated as: \begin{equation} \alpha=\frac{v}{(1-IoU)+v}. \end{equation} Please refer to \cite{zheng2020distance} for more details about the CIoU loss. \begin{figure}[thbp] \centering \includegraphics[width=0.8\linewidth]{figure/UoC.pdf} \caption{Calculation process of the union over closure.} \label{UoC} \end{figure} \noindent \textbf{Gaze estimation.} We follow Chong \etal~\cite{chong2020detecting} to calculate the gaze estimation loss $\mathcal{L}_{gaze}$. Given the annotated gaze point $\mathbf{q} = (q_{x}, q_{y})$, we apply the Gaussian blur to generate the vanilla ground truth heatmap $\mathbf{T}'$: \begin{equation} \begin{split} \mathbf{T}' = \frac{1}{2 \pi \sigma_{x} \sigma_{y} } \exp \left[-\frac{1}{2}\left(\frac{\left(x-q_{x}\right)^{2}}{\sigma_{x}^{2}}+\frac{\left (y-q_{y}\right)^{2}}{\sigma_{y}^{2}}\right)\right]. \end{split} \label{eqGaussian} \end{equation} In Eq.(\ref{eqGaussian}), $\sigma_{x}$ and $\sigma_{y}$ indicate the standard deviations. We follow Chong \etal~\cite{chong2020detecting} and set $\sigma_{x} = 3$, $\sigma_{y} = 3$.
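The ground-truth heatmap construction of Eq.~(\ref{eqGaussian}), together with the peak normalization and MSE loss described next, can be sketched as follows. Since $\sigma_x = \sigma_y$, the Gaussian normalization constant cancels after dividing by the maximum, so the sketch omits it; the function names are our own.

```python
import torch

def gaze_heatmap(qx: float, qy: float, H: int, W: int, sigma: float = 3.0) -> torch.Tensor:
    """Gaussian centered at the gaze point (qx, qy), peak-normalized to 1.

    With sigma_x = sigma_y = sigma, the 1/(2*pi*sigma_x*sigma_y) factor
    cancels after the division by max(T'), so it is omitted here.
    """
    ys = torch.arange(H, dtype=torch.float32).view(H, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W)
    t = torch.exp(-0.5 * ((xs - qx) ** 2 + (ys - qy) ** 2) / sigma ** 2)
    return t / t.max()

def gaze_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # mean square error between the predicted and ground-truth heatmaps
    return ((pred - target) ** 2).mean()
```

Peak normalization makes the supervision signal independent of the Gaussian's absolute scale, so the heatmap value at the annotated gaze point is always 1.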
Afterward, we normalize the heatmap and obtain the ground truth heatmap $\mathbf{T} = \mathbf{T}' / {\rm max}(\mathbf{T}')$. Given a predicted heatmap $\mathbf{M} \in \mathbb{R}^{H \times W}$, we calculate the mean square error to obtain the gaze estimation loss $\mathcal{L}_{gaze}$: \begin{equation} \mathcal{L}_{gaze} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} (M_{i,j} - T_{i,j})^{2}. \end{equation} \section*{A2. Process to calculate the wUoC metric.} In our paper, we propose the wUoC metric to measure the performance of gaze object prediction. Given the predicted box $\mathbf{p}$ and the ground truth box $\mathbf{g}$, we calculate their minimum closure and obtain a bounding box $\mathbf{a}$. Figure \ref{UoC} illustrates the process to calculate the UoC (union over closure). Afterward, we further introduce a size similarity weight into the UoC metric. The size similarity weight considers the areas of the two boxes and can be defined as $\min{(\frac{\mathbf{p}}{\mathbf{g}}, \frac{\mathbf{g}}{\mathbf{p}})}$. Thus, our proposed metric can be formulated as: \begin{equation} \text{wUoC} = \min{(\frac{\mathbf{p}}{\mathbf{g}}, \frac{\mathbf{g}}{\mathbf{p}})} \times \frac{\mathbf{p} \cup \mathbf{g}}{\mathbf{a}}. \label{wUoC} \end{equation} \begin{figure}[thbp] \centering \includegraphics[width=1\linewidth]{figure/backbone.pdf} \caption{Detailed network architecture of the specific-general-specific feature extractor. Under each operation, we present the size of the feature map in the form of (channel, height$\times$width).} \label{figBackbone} \end{figure} \section*{A3. Detailed architecture of each component} \begin{figure}[thbp] \centering \includegraphics[width=1\linewidth]{figure/gazenet.pdf} \caption{Detailed network architecture of the gaze estimation branch.
Under each operation, we present the size of the feature map in the form of (channel, height$\times$width).} \label{figGaze} \end{figure} Figure \ref{figBackbone} illustrates the detailed architecture of our backbone network. Given a scene image and a head image, we first employ two specific convolutional layers to convert these two inputs into the general space. Then, a sharing backbone network with four blocks processes the two inputs and generates features. Finally, we select features from the last three backbone layers~(\ie $C_{3}$, $C_{4}$, $C_{5}$), use the \textit{Defocus} layer to enlarge the feature map, and prepare specific inputs for the object detection branch. Simultaneously, we utilize two convolutional layers to prepare inputs for the gaze estimation branch. Figure \ref{figDetection} presents the detection branch. There are three inputs with different sizes, \ie $C_{5}$, $C_{4}$, $C_{3}$, which are gradually integrated to detect objects. Figure \ref{figGaze} presents the detailed process to estimate the gaze heatmap. Given the head location map, we employ five convolutional layers to extract the location feature, which is concatenated with the scene feature. Simultaneously, we jointly consider the head location and the head feature to predict an attention map, which is used to modulate the fused feature. Afterward, we employ three convolutional layers to extract features, use three deconvolutional layers to enlarge the feature map, and utilize a convolutional layer to estimate the gaze heatmap. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:intro} The concept of primordial black holes (PBHs), associated with comparatively large density fluctuations at the early stages of the evolution of the universe, was introduced by Zel'dovich and Novikov \cite{Zeldovich:1967lct} and later theorized in detail by S. W. Hawking and B. J. Carr in \cite{Hawking:1971ei, Carr:1974nx, 1975ApJ...201....1C}. Subsequently, the hydrodynamics of PBH formation \cite{1978SvA....22..129N}, accretion of matter around PBHs \cite{1979A&A....80..104N} and PBH formation in the contexts of Grand Unified Theories \cite{Khlopov:1980mg} and of double inflation in supergravity \cite{Kawasaki:1997ju} were studied. It is conjectured that these PBHs have either evaporated by Hawking radiation or have evolved into supermassive black holes \cite{Kawasaki:2012wr} or remain as dark matter in the present universe \cite{Kawasaki:2012wr,Carr:2020xqk, Villanueva-Domingo:2021spv,Conzinu:2020cke,MoradinezhadDizgah:2019wjf,Clesse:2016vqa,Kovetz:2017rvv,Inomata:2017okj,Bringmann:2018mxj,Raidal:2018bbj,Green:2020jor,Poulin:2017bwe,Wong:2020yig,Escriva:2021pmf,Calabrese:2021zfq}. The relations between PBHs and primordial gravitational waves have been studied in \cite{Clesse:2016vqa,Choudhury:2013woa, Kovetz:2016kpi,Nakama:2016gzw,Sasaki:2016jop,Kovetz:2017rvv,DeLuca:2020qqa,Domenech:2021wkk,Ozsoy:2020kat}. Formation of PBHs during a first-order phase transition in the inflationary period has also been studied \cite{Khlopov:2008qy,DeLuca:2021mlh}. Studies of the influence of PBHs on the CMB $\mu$ and $y$ distortions \cite{Deng:2020pxo,Tashiro:2008sf} and $\mu T$ \cite{Ozsoy:2021qrg} cross-correlations have been carried out. Detection of signals from the stochastic gravitational wave background, connected with PBH formation, in present and future experiments has been discussed in Refs. \cite{Garcia-Bellido:2016dkw, Braglia:2020taf}.
\par In some of the studies of inflationary scenarios, PBHs have been identified as massive compact halo objects with mass $\sim 0.5 M_{\odot}$ in the work of J. Yokoyama \cite{Yokoyama:1999xi}, who has also examined the formation of PBHs in the framework of a chaotic new inflation \cite{Yokoyama:1998pt}. Josan and Green \cite{Josan:2010cj} studied constraints on the models of inflation through the formation of PBHs, using a modified flow analysis. Harada, Yoo and Kohri \cite{Harada:2013epa} examined the threshold of PBH formation, both analytically and numerically. R. Arya \cite{Arya:2019wck} has considered PBH formation as a result of the enhancement of the power spectrum due to thermal fluctuations in warm inflation. Formation of PBHs in density perturbations was studied in two-field hybrid inflationary models \cite{Garcia-Bellido:1996mdl,Chongchitnan:2006wx}, the Starobinsky model including the dilaton \cite{Gundhi:2020kzm}, multi-field inflation models \cite{Palma:2020ejf}, isocurvature fluctuation and chaotic inflation models \cite{Yokoyama:1999xi}, inflection-point models \cite{Choudhury:2013woa,Ballesteros:2017fsr}, a quantum diffusion model \cite{Biagetti:2018pjj}, a model with smoothed density contrast in the super-horizon limit \cite{Young:2019osy} and with the collapse of large-amplitude metric perturbations \cite{Musco:2018rwt} and large density perturbations \cite{Young:2020xmk} upon horizon re-entry. PBH abundance in the framework of non-perturbative stochastic inflation has been studied by F. K\"uhnel and K. Freese \cite{Kuhnel:2019xes}. The relation between the constraints on the primordial black hole abundance and those on the primordial curvature power spectrum has also been studied \cite{Kalaja:2019uju,Dalianis:2018ymb}.
Recently, PBH solutions have been obtained in the framework of non-linear cosmological perturbations and non-linear effects arising at horizon crossing \cite{Musco:2020jjb}.\par PBH production has also been studied in the framework of $\alpha$-attractor polynomial super-potentials and modulated chaotic inflaton potential models \cite{Dalianis:2018frf}. Mahbub \cite{Mahbub:2019uhl} utilized the superconformal inflationary $\alpha$-attractor potentials with a high level of fine tuning to produce an ultra-slow-roll region, where the enhancement of the curvature power spectrum giving rise to massive PBHs was found at $k \sim 10^{14}$ Mpc$^{-1}$. In a subsequent work \cite{Mahbub:2021qeo}, this author re-examined the earlier work using the optimised peak theory. The ultra-slow-roll process along with a non-Gaussian Cauchy probability distribution has been applied in Ref. \cite{Biagetti:2021eep} to obtain large PBH masses. The constant-rate ultra-slow-roll-like inflation \cite{2021hllNg:} has also been applied to obtain the enhancement in the power spectra, triggered by entropy production, resulting in PBH formation. Ref. \cite{Teimoori:2021pte} has simulated the onset of PBH formation by adding a term to the non-canonical $\alpha$-attractor potential, which enhances the curvature perturbations at some critical values of the field. The enhancement of the power spectrum by a limited period of strongly non-geodesic motion of the inflationary trajectory and the consequent PBH production has been studied by J. Fumagalli \textit{et al.} \cite{Fumagalli:2020adf}. \par In almost all of the above references, PBH formation has been studied in terms of the curvature perturbation $\mathcal{R}$ and the curvature power spectrum $P_{\mathcal{R}} (k)$, which are usually obtained from the Mukhanov-Sasaki equation \cite{1988ZhETF..94....1M,MUKHANOV1992203,mukhanov_2005,10.1143/PTP.76.1036} in the co-moving gauge, characterised by a zero inflaton perturbation ($\delta\phi = 0$).
This way of analysis obviously ignores the role of the inflaton field ($\phi$) in the inflationary scenario of PBH formation. In the present work, we shall take an alternative route. We shall use the spatially flat gauge, thereby including the role of $\phi$ in the mechanism of PBH formation. In this respect, we shall follow the formalism developed in our previous work \cite{Sarkar:2021ird}, comprising a set of linear perturbative evolution equations which could explain the Planck-2018 data \cite{Planck:2018jri} in the low $k$ limit. We shall show here that the same equations can yield PBH-like solutions in the high $k$ regime with the conventional chaotic $T$ and $E$ model potentials without any modifications. Thus, a salient feature of the present approach is that we can explore different regions of $k$-space evolution in the inflationary period under suitable initial conditions for the same differential equations. We believe that this work will open up an avenue for the dynamical origin of PBH formation in the deep sub-horizon $k$-space. In fact, we shall highlight here the important role played by the Bardeen potential ($\Phi_B$) in building up the density contrasts and the associated PBH formation, which, to our knowledge, has not been done so far in the literature.\par The paper is organised as follows. In Sec. \ref{subsec:1} we briefly describe the basic formalism of the linear perturbation theory, leading to the setting up of the three coupled non-linear differential equations which play the central role in our study of PBH formation. In Sec. \ref{subsec:alpha_attractor} we describe the $\alpha$-attractor $T$ and $E$ model potentials which have been used in the present study. The transfer function is written down and plotted in Sec. \ref{subsec:trans_function}. Results and discussion are presented in Sec. \ref{sec:results}. Finally, in Sec. \ref{sec:conclusions} we make some concluding remarks.
\section{Formalism} \label{sec:Formalism} \subsection{Linear perturbations in the metric and the inflaton field} \label{subsec:1} The Einstein-Hilbert action with minimal coupling between quantised inflaton field, \begin{equation} \phi(t,\vec{X})=\int\frac{d^3 k}{(2\pi)^3}[\phi(k,t)\hat{a}(\vec{k})e^{i\vec{k}.\vec{x}}+\phi^* (k,t)\hat{a}^\dagger (\vec{k})e^{-i\vec{k}.\vec{x}}], \end{equation} and the background linearly-perturbed metric, in spatially flat gauge, with no anisotropic stress, \begin{equation} ds^2 =-(1+2\Phi)dt^2 +2a(t)\partial_i B dx^i dt +a^2 (t)\delta_{ij}dx^i dx^j, \end{equation} is \begin{equation} S=\int d^4 x \sqrt{-g}\left(\frac{1}{2}R -\frac{1}{2}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta}\phi -V(\phi)\right). \end{equation} The linear perturbation in the inflaton field is written as \begin{equation} \phi(t,\vec{X})=\phi^{(0)}(t) + \delta\phi(t,\vec{X}). \end{equation} The perturbation can be translated, through the energy-momentum tensor to that in the density as, \begin{equation} \rho(t,\vec{X})=\rho^{(0)}(t)+\delta\rho(t,\vec{X}), \end{equation} where, \begin{equation} \rho^{(0)}(t)=\frac{{\dot{\phi}}^{(0)^2}}{2}+V(\phi^{(0)}) \label{eq:unperturbed_density} \end{equation} and \begin{equation} \delta\rho(t,\vec{X})=\frac{dV(\phi^{(0)})}{d\phi^{(0)}}\delta\phi + \dot{\phi}^{(0)}\delta\dot{\phi}-\Phi{\dot{\phi}}^{(0)^2}. \label{eq:density_perturbation} \end{equation} (Note: The last term in Eq.(\ref{eq:density_perturbation}) was neglected in Ref. \cite{Sarkar:2021ird} as $\Phi{\dot{\phi}}^{(0)^2}$ is small in slow-roll approximation. In the present paper, we retain this term as we expect the metric perturbation $\Phi$ to play a major role in the PBH formation.) 
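As a quick consistency check of Eq. (\ref{eq:density_perturbation}), the term $-\Phi\dot{\phi}^{(0)^2}$ follows from expanding the kinetic part of the energy density with the perturbed lapse $(1+2\Phi)$ to first order, while the term $\frac{dV}{d\phi^{(0)}}\delta\phi$ is just the chain rule on the potential. A minimal symbolic verification of the kinetic part (variable names are ours):

```python
import sympy as sp

# Bookkeeping parameter eps multiplies all first-order quantities.
eps, phid0, dphid, Phi = sp.symbols('epsilon phidot0 deltaphidot Phi')

# Kinetic energy density -(1/2) g^{00} phidot^2 with g^{00} = -(1+2*eps*Phi)^(-1)
# and phidot = phidot0 + eps*deltaphidot.
kin = sp.Rational(1, 2) * (phid0 + eps * dphid)**2 / (1 + 2 * eps * Phi)

# Extract the first-order piece in eps.
first_order = kin.series(eps, 0, 2).removeO().coeff(eps)

# It reproduces the last two terms of the density perturbation.
assert sp.simplify(first_order - (phid0 * dphid - Phi * phid0**2)) == 0
print(first_order)
```

The zeroth-order piece of the same expansion is $\frac{1}{2}\dot{\phi}^{(0)^2}$, matching Eq. (\ref{eq:unperturbed_density}).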
Using the solutions in \cite{Baumann:2009ds} of the unperturbed Einstein's equations, \begin{equation} H^2=\frac{\rho^{(0)}}{3}, \label{eq: equation_1} \end{equation} \begin{equation} \dot{H}+H^2=-\frac{1}{6}(\rho^{(0)}+3p^{(0)}) \label{eq: equation_2} \end{equation} and the perturbed Einstein's equations, \begin{equation} 3H^2\Phi +\frac{k^2}{a^2} \left(-aHB \right) =-\frac{\delta\rho}{2}, \label{eq:equation_3} \end{equation} \begin{equation} H\Phi=-\frac{1}{2}\delta q, \label{eq:equation_4} \end{equation} \begin{equation} H\dot{\Phi}+(3H^2+2\dot{H})\Phi=\frac{\delta p}{2}, \label{eq:equation_5} \end{equation} \begin{equation} (\partial_t +3H)\frac{B}{a}+\frac{\Phi}{a^2}=0 \label{eq:equation_6} \end{equation} and the Bardeen potentials \cite{PhysRevD.22.1882} \begin{equation} \Phi_B = \Phi-\frac{d}{dt}\left[a^2\left(-\frac{B}{a}\right)\right], \label{eq:equation_14} \end{equation} \begin{equation} \Psi_B = a^2H\left(-\frac{B}{a}\right) \label{eq:equation_15} \end{equation} we obtain a relation between $\Phi$ and $\Phi_B$ as, \begin{equation} \Phi=\Phi_B+\partial_t \left(\frac{\Phi_B}{H}\right). \label{eq:equation_20} \end{equation} In Eqs. (\ref{eq:equation_4}) and (\ref{eq:equation_5}) $\delta q$ and $\delta p$ are the magnitudes of the momentum perturbation and pressure perturbation, respectively. Eqs. (\ref{eq:density_perturbation}) and (\ref{eq:equation_20}) show that the density perturbation $\delta\rho$ contains $\phi^{(0)}$, $\delta\phi$ as well as the Bardeen potential $\Phi_B$.\par \par Using the slow roll dynamical horizon crossing condition, $k=aH$, we go from the space of the cosmic time $t$ to that of the mode momentum $k$ and set up three nonlinear coupled differential equations in the $k$-space and solve for the quantities, $\phi^{(0)}(k)$, $\delta\phi(k)$ and $\Phi_B (k)$. 
The equations, similar to those in \cite{Sarkar:2021ird}, are, \begin{equation} \delta\phi (k^2 \phi'' + k^2 G_1 \phi'^2 + 4k\phi'+ 6G_1)+ \delta\phi' (-2k^3 G_1 \phi'^2)=0, \label{eq:17} \end{equation} \begin{multline} \delta\phi(1+12G_1 ^2 + 6G_2)+ \delta\phi' (4k+ k^2 G_1 \phi')+k^2\delta\phi''\\ + \Phi_B (-k\phi'+12G_1 + k^2 G_1 \phi'^2 + k^3 G_1 \phi' \phi'' -12kG_1 ^2 \phi' \\ +k^3 G_2 \phi'^3)+ \Phi'_B (-2k^2\phi' + 12kG_1 +k^3 G_1 \phi'^2) + \Phi''_B (-k^3\phi')\\=0 \label{eq:18} \end{multline} and \begin{widetext} \begin{eqnarray} \Phi''_B \left(-\frac{k^4}{3} \phi'^2\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 + \frac{k^4}{3} G_1 \phi'^3 - k^3 \phi'^2 -\frac{2k^4}{3}\phi'\phi'' + \frac{2k}{3}+ 2k^3\phi'^2 \right)\nonumber\\ + \Phi_B\left (-\frac{2k^2}{3}\phi'^2 - \frac{2k^3}{3}\phi'\phi''\nonumber + k^3 G_1 \phi'^3 +\frac{k^4}{3}G_2 \phi'^4 + k^4 G_1 \phi'^2 \phi'' -2 + 2k^2 \phi'^2 -2k^3 G_1 \phi'^3 \right)\nonumber\\+\delta\phi''\left(\frac{k^3}{3}\phi'\right)+\delta\phi'\left (2kG_1 +\frac{2k^2}{3}\phi' + \frac{k^3}{3}\phi''\right)+\delta\phi \left(2kG_2 \phi' + 2k^2 \phi'' + 2k^2G_1 \phi'^2 +2k\phi'\right)=0, \label{eq:19} \end{eqnarray} \end{widetext} where, $G_n = \frac{\partial^n}{\partial\phi^{(0)^n}}\ln\sqrt{V(\phi^{(0)})}$, $n=1, 2$. 
\par In the slow-roll approximation, the density contrast in $k$-space, which is useful for the study of the PBH formation, is given by, \begin{widetext} \begin{eqnarray} \frac{\delta\rho (k)}{\rho^{(0)}(k)}=\nonumber\frac{\delta V (k) +V^{(0)}(k)\left( \frac{k^2}{3}\phi'\delta\phi' + \Phi_B \left(-\frac{k^2}{3}\phi'^2 + \frac{k^3}{3}G_1 \phi'^3\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 \right)\right)}{V^{(0)}(k)}\\=2G_1 \delta\phi + \frac{k^2}{3}\phi'\delta\phi' + \Phi_B \left(-\frac{k^2}{3}\phi'^2 + \frac{k^3}{3}G_1 \phi'^3\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 \right) \label{eq:density_contrast} \end{eqnarray} \end{widetext} where, $\delta V(k) \equiv 2G_1 V^{(0)}(k)\delta\phi(k)$ \\and $V^{(0)}(k) \equiv V(\phi^{(0)}(k))$. In the first line of Eq. (\ref{eq:density_contrast}), we have used the Fourier-space version of the slow-roll approximation and, therefore, following Eq. (\ref{eq:unperturbed_density}), we have written, $\rho^{(0)}(k)\approx V^{(0)}(k)$. We will show that both $\delta\phi(k)$ and $\Phi_B (k)$ in Eq. (\ref{eq:density_contrast}) will play a significant role in making $\frac{\delta\rho(k)}{\rho^{(0)}(k)}>\delta_c (\approx 0.41)$, \textit{i.e.} in the formation of PBHs in the early universe, $\delta_c$ being the threshold density contrast (see Sec. \ref{sec:results} for details). \subsection{The \texorpdfstring{$\alpha$}{a}-attractor \texorpdfstring{$E$}{E}-model and \texorpdfstring{$T$}{T}-model potentials} \label{subsec:alpha_attractor} We use here the $\alpha$-attractor potentials \cite{Carrasco:2015rva,Kallosh:2013yoa}:\\ (I) the $T$-model potential \begin{equation} V(\phi)=V_{0} \tanh^n\frac{\phi}{\sqrt{6\alpha}}, \label{eq:T model} \end{equation} (II) the $E$-model potential \begin{equation} V(\phi)=V_{0}(1-e^{-\sqrt{\frac{2}{3\alpha}}\phi})^n \end{equation} where $\alpha$ is the inverse curvature of the $SU(1,1)/U(1)$ K\"{a}hler manifold \cite{Kallosh:2013yoa}.
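For reference, the two potentials and the logarithmic derivatives $G_n = \frac{\partial^n}{\partial\phi^{(0)^n}}\ln\sqrt{V}$ entering the evolution equations are easy to evaluate numerically. A sketch with our own function names, using $V_0 = 1$, $n = 2$, $\alpha = 1$ as in the text:

```python
import numpy as np

# T-model and E-model alpha-attractor potentials.
def V_T(phi, V0=1.0, n=2, alpha=1.0):
    return V0 * np.tanh(phi / np.sqrt(6 * alpha))**n

def V_E(phi, V0=1.0, n=2, alpha=1.0):
    return V0 * (1.0 - np.exp(-np.sqrt(2.0 / (3.0 * alpha)) * phi))**n

# G_n = d^n/dphi^n ln sqrt(V), via central finite differences.
def G(V, phi, order=1, h=1e-4):
    f = lambda x: 0.5 * np.log(V(x))
    if order == 1:
        return (f(phi + h) - f(phi - h)) / (2 * h)
    return (f(phi + h) - 2 * f(phi) + f(phi - h)) / h**2

# On the plateau (phi = 6.21, the initial value used below) both G_1 are
# small and positive, reflecting the slow-roll flatness of the potentials.
print(G(V_T, 6.21), G(V_E, 6.21))
```

Both potentials approach the constant $V_0$ at large $\phi$, which is why the $G_n$ are exponentially suppressed on the plateau.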
\subsection{Transfer Function} \label{subsec:trans_function} We have related the quantities at horizon crossing during inflation to those at horizon re-entry in the radiation-dominated era using the transfer function given in Ref. \cite{Musco:2020jjb}: \begin{equation} T(k,\eta)=3\,\frac{\sin(k\eta/\sqrt{3})-(k\eta/\sqrt{3})\cos(k\eta/\sqrt{3})}{(k\eta/\sqrt{3})^3}, \label{eq:transfer function} \end{equation} where $\eta$ is the conformal time. \begin{figure}[H] \centering \includegraphics[width=0.9\linewidth]{"plots/transfer_function.pdf"} \caption{Transfer function for the radiation-dominated period at a very small conformal time $\eta = 10^{-12}$. It has a small oscillating behaviour around $k\sim10^{13}$ Mpc$^{-1}$, where $k\eta> 1$, and dies down in the high $k$ limit, $k\rightarrow\infty$. The momentum scale reflects the range of sub-horizon momenta which will be studied in this paper.} \label{fig:Transfer_function} \end{figure} \section{Results and Discussion} \label{sec:results} At the outset, let us give an overall perspective of the evolution of the modes during the inflationary period. As shown in Ref. \cite{Sarkar:2021ird}, the higher $k$ modes undergo a smaller number of e-folds and thus, while the Hubble sphere shrinks, they exit the horizon at later times. As a consequence, the very high $k$ modes remain in the deep sub-horizon region during the major part of the inflationary period. When the Hubble sphere starts expanding after the end of inflation, the high $k$ modes re-enter the horizon first, at small positive conformal times. The smaller $k$ modes re-enter the horizon at later conformal times.\par Our interest here is mainly in the high $k$ regions, where, it will be shown, the density contrast shoots up at a number of momentum values, signalling the breakdown of the perturbative framework and creating a condition favourable for the formation of PBHs when the modes re-enter the horizon.
Interestingly, this condition is achieved in our study quite naturally by solving the $k$-space evolution equations and not by artificially preparing the inflaton potential to meet this goal.\par \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/unperturbed_potential.pdf"} \caption{Unperturbed part $V^{(0)}(k)$ of the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit.} \label{fig:potential-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/perturbed_potential.pdf"} \caption{Perturbation $\delta V (k)$ of the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit. The perturbation in the potential satisfies $\delta V(k)\gg V^{(0)}(k)$ (see Figure \ref{fig:potential-high-k}). It blows up as $k\rightarrow\infty$, signifying the breakdown of the slow roll in the linear perturbation formalism and creating a situation favourable for the PBH formation.} \label{fig:perturbed-potential-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/unperturbed_inflaton.pdf"} \caption{Unperturbed inflaton field $\phi^{(0)}(k)$ for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit.} \label{fig:inflaton-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{"plots/perturbed_inflaton.pdf"} \caption{Perturbation $\delta \phi (k)$ for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit. The perturbation in the inflaton field satisfies $\delta \phi(k)\gg \phi^{(0)}(k)$ (see Figure \ref{fig:inflaton-high-k}).
It blows up as $k\rightarrow\infty$, signifying the breakdown of the slow roll in the linear perturbation formalism and creating a situation favourable for the PBH formation.} \label{fig:perturbed-inflaton-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/density_contrast.pdf"} \caption{Density contrast $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{rad}\left[=\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}\times T(k,\eta=10^{-12})\right]$ vs. $k$, in the radiation-dominated era, in a high momentum range ($0.43\times 10^{13}$ Mpc$^{-1}$ to $9.25\times 10^{13}$ Mpc$^{-1}$), where signatures of PBHs have been predicted \cite{Green:2020jor}. Here, $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}$ is the density contrast during inflation. Enhancements are observed at $17$ values of $k$. These enhancements suggest the presence of PBHs of different masses (see Table \ref{tab:table_name_parameters}). } \label{fig:Density_1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=1.0\linewidth]{"plots/Bardeen_potential.pdf"} \caption{The Bardeen potential $(\Phi_B)_\mathrm{rad}[=(\Phi_B)_\mathrm{inf}\times T(k,\eta=10^{-12})]$ vs. $k$ in the radiation-dominated era, in the same momentum range as in Figure \ref{fig:Density_1}. Here, $(\Phi_B)_\mathrm{inf}$ is the Bardeen potential during inflation. Large negative values of the potential are observed at the $k$-values where the enhancements of the density contrast occur (see Figure \ref{fig:Density_1}). This demonstrates the crucial role played by the Bardeen potential in the formation of the PBHs.} \label{fig:Bardeen_1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{"plots/power_spectrum.pdf"} \caption{The scalar power spectrum $\Delta_s(k)$ vs. $k$ in the same high momentum range as in Figures \ref{fig:potential-high-k} - \ref{fig:perturbed-inflaton-high-k}.
Extremely high values of the power spectrum in this range indicate very high correlations among the mode momenta belonging to the quantized inflaton field, which is possible in the case of black hole formations.} \label{fig:Power_spectrum} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/comparison_T_model.pdf"} \includegraphics[width=0.9\linewidth]{"plots/power_law.pdf"} \caption{Comparison of the uses of the $\alpha$-attractor chaotic $T$-model potential (upper figure) and a power-law potential, $\phi^2$ (lower figure), in calculating the density contrast $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}$, during inflation, in the high momentum regime. While the density contrast has very large positive values in the former case, it shows large negative values in the latter case. The unphysical nature of the density contrast in the high momentum regime, in the case of the power-law potential, clearly indicates its unsuitability for the formation of PBHs. The same inference can be drawn in the radiation-dominated era.} \label{fig:pic2pic4} \end{figure} In Figures \ref{fig:potential-high-k} through \ref{fig:pic2pic4}, $n = 2$, $\alpha=1$ and $V_0 = 1$ are the values of the parameters of the $\alpha$-attractor $T$-model potential (Eq. (\ref{eq:T model})). These values belong to the range of parameters which has been shown to be efficacious in fitting the Planck data in the ($n_s$ - $r$) plane \cite{Sarkar:2021ird, Planck:2018jri}. \par In Figure \ref{fig:potential-high-k}, we show the $k$-space behaviour of the unperturbed chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit, during inflation. In Figure \ref{fig:perturbed-potential-high-k}, we demonstrate the corresponding behaviour of the perturbation in the potential, where it is shown that $\delta V(k)$ becomes very large in the high $k$ limit, signaling the breakdown of perturbation theory in this limit.
It may be noted that this does not happen in the low $k$ limit, where $\delta V(k)$ remains very small \cite{Sarkar:2021ird}. In Figure \ref{fig:inflaton-high-k}, we have plotted the unperturbed inflaton field for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit and, in Figure \ref{fig:perturbed-inflaton-high-k}, the corresponding perturbation, $\delta \phi(k)$, in the same limit. Here also we observe the breakdown of perturbation theory at high $k$ values. \par In Figures \ref{fig:Density_1} and \ref{fig:Bardeen_1} we have plotted the density contrast and the Bardeen potential, respectively, in the radiation-dominated era. These quantities have been obtained by multiplying their corresponding values during inflation with the transfer function of Eq. (\ref{eq:transfer function}). The solutions of Eqs. (\ref{eq:17}) - (\ref{eq:19}) during inflation have been obtained by considering $\phi^{(0)}(k=10^{8}) = 6.21$, $\delta\phi(k=10^{8})=\Phi_B (k=10^{8})=10^{-14}$, $\phi'^{(0)}(k=10^{8})=\delta\phi'(k=10^{8})=\Phi'_B(k=10^{8})=0$, as initial conditions. \par Looking at Figure \ref{fig:Density_1}, we observe that the values of the density contrast at all the peaks lie at or above the threshold values, \textit{viz.}, $0.41 \lesssim \delta_c \leqslant\frac{2}{3}$, given in the current literature \cite{Musco:2018rwt}. Thus, the peaks satisfy the primary criterion for PBH formation. It may be noted here that these peaks are the results (see the upper figure in Figure \ref{fig:pic2pic4}) of a self-consistent calculation of the $k$-space evolution equations (Eqs. (\ref{eq:17}) - (\ref{eq:19})), smoothed by the transfer function shown in Figure \ref{fig:Transfer_function}.\par Figure \ref{fig:Bardeen_1} highlights the fact that the large peaks in the density contrast correspond to large negative peaks in the Bardeen potential at the same value of $k\equiv k_\mathrm{peak}$.
This result reflects the strong interplay between the quantum-inflaton-fluctuation-induced density perturbation and the metric perturbation, \textit{i.e.}, the Bardeen potential, in the PBH formation.\par Figure \ref{fig:Power_spectrum} shows that the scalar power spectrum, which is a measure of two-point correlations among the fluctuations, increases very rapidly at the $k$ values of the PBH formations. Such $k$-space behaviour of the scalar power spectrum signifies the breakdown of the perturbative framework, as well as a very high quantum correlation, which may be favourable for the PBH formations.\par Figure \ref{fig:pic2pic4} illustrates the fact that the requirement of large positive inflationary density contrasts at high $k$ values for the PBH formations is satisfied only by the $\alpha$-attractor $T$-model potential and not by the power-law type $\phi^2$ potential, for example. We have also found in \cite{Sarkar:2021ird} that such a potential does constitute an experimentally-favourable model for inflation in the low $k$ limit within some specified range of parameters. Therefore, the $\alpha$-attractor potential in its pristine form has the capability of explaining the inflationary paradigm in the low $k$ limit and the PBH formation in the high $k$ limit, simultaneously.\par Although the calculations from Figure \ref{fig:potential-high-k} through Figure \ref{fig:pic2pic4} are with the chaotic $\alpha$-attractor $T$-model potential, the results are similar for the corresponding $E$-model potential. Therefore, we do not display those results here. \par To further endorse the identification of the peaks with the PBH formation events, in Table \ref{tab:table_name_parameters} we have shown our calculations regarding some properties of these PBHs. The momentum-dependent PBH masses in units of the solar mass ($M_\odot$) are presented in column $4$. The calculation of the masses is based on the formula given in Ref. \cite{Teimoori:2021pte}.
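The entries of Table \ref{tab:table_name_parameters} can be cross-checked at the order-of-magnitude level with the standard Hawking relations $T_H = \hbar c^3/(8\pi G M k_B)$ and $t_\mathrm{eva} \propto M^3$ \cite{1975CMaPh..43..199H}. A rough sketch (the photons-only lifetime prefactor below overestimates the lifetimes relative to the formulae of Ref. \cite{Dalianis:2018ymb}, which include additional emitted species):

```python
M_SUN_G = 1.989e33  # solar mass in grams

def hawking_temperature_GeV(M_solar):
    # T_H = hbar c^3 / (8 pi G M k_B)  ~  1.06e13 GeV * (1 g / M)
    return 1.06e13 / (M_solar * M_SUN_G)

def evaporation_time_s(M_solar):
    # t ~ 5120 pi G^2 M^3 / (hbar c^4)  ~  8.4e-26 s * (M / 1 g)^3, photons only
    return 8.4e-26 * (M_solar * M_SUN_G)**3

# Peak 1 of the table: M = 1.35e-13 M_sun.
print(hawking_temperature_GeV(1.35e-13))  # ~ 4e-8 GeV, cf. 3.72e-8 GeV in the table
print(evaporation_time_s(1.35e-13))       # far beyond the age of the universe (~1e17 s)
```

The temperature reproduces column $6$ to within a few percent, while the cubic mass scaling explains why the lightest PBHs in the table have the shortest lifetimes.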
Table \ref{tab:table_name_parameters} suggests that massive PBHs are formed at those values of $k$ where the density contrast shows a large positive peak and the Bardeen potential shows a large negative value, which reflects the efficacy of the Bardeen potential in the PBH formations. Thus, the Bardeen potential acts as a source of an intense gravitational field, where the tiny quantum nuggets are amplified into massive PBHs. \par The mass-dependent evaporation times and the Hawking temperatures \cite{1975CMaPh..43..199H} are presented in columns $5$ and $6$, respectively. The calculations of these quantities are based on the formulae given in Ref. \cite{Dalianis:2018ymb}. The evaporation time scale of the PBHs in our calculations is found to lie in the range $10^{25}-10^{33}$ s, which is very large in comparison to the age of the universe ($\sim 10^{17}$ s). Also, the Hawking temperatures ($\sim 10^{-5} - 10^{-8}$ GeV) are very small, showing that the possibility of extinction of such PBHs by Hawking radiation is negligibly small. Thus, these PBHs may exist in the present universe, thereby making them potential candidates for dark matter. \onecolumngrid \begin{widetext} \begin{table}[H] \captionsetup{justification=centering,width=0.68\textwidth} \caption{PBH properties corresponding to the peaks in Figure \ref{fig:Density_1}. These properties are consistent with the LISA and BBO observations \cite{Green:2020jor}.} \begin{center} \begin{adjustbox}{width=0.7\textwidth} \begin{tabular}{ |c|c|c|c|c|c| } \hline Peak No. & $k_\mathrm{peak}$ (in Mpc$^{-1}$) & $(\frac{\delta\rho}{\rho})_\mathrm{rad}|_{k=k_\mathrm{peak}}$ & $M_\mathrm{PBH}$ (in $M_\odot$) & $t_\mathrm{eva}$ (in sec.)
& $T_H$ (in GeV)\\ \hline 1 & 0.43 $\times$ 10$^{13}$ & 8.77 & 1.35 $\times$ 10$^{-13}$ & 7.72 $\times$ 10$^{33}$ & 3.72 $\times$ 10$^{-8}$ \\ 2 & 1.05 $\times$ 10$^{13}$ & 3.49 & 2.27 $\times$ 10$^{-14}$ & 3.67 $\times$ 10$^{31}$ & 2.21 $\times$ 10$^{-7}$ \\ 3 & 1.60 $\times$ 10$^{13}$ & 2.29 & 9.79 $\times$ 10$^{-15}$ & 2.95 $\times$ 10$^{30}$ & 5.13 $\times$ 10$^{-7}$ \\ 4 & 2.16 $\times$ 10$^{13}$ & 1.70 & 5.37 $\times$ 10$^{-15}$ & 4.86 $\times$ 10$^{29}$ & 9.36 $\times$ 10$^{-7}$ \\ 5 & 2.71 $\times$ 10$^{13}$ & 1.37 & 3.41 $\times$ 10$^{-15}$ & 1.24 $\times$ 10$^{29}$ & 1.47 $\times$ 10$^{-6}$ \\ 6 & 3.25 $\times$ 10$^{13}$ & 1.14 & 2.37 $\times$ 10$^{-15}$ & 4.18 $\times$ 10$^{28}$ & 2.12 $\times$ 10$^{-6}$ \\ 7 & 3.79 $\times$ 10$^{13}$ & 0.97 & 1.74 $\times$ 10$^{-15}$ & 1.65 $\times$ 10$^{28}$ & 2.89 $\times$ 10$^{-6}$ \\ 8 & 4.30 $\times$ 10$^{13}$ & 0.84 & 1.35 $\times$ 10$^{-15}$ & 7.72 $\times$ 10$^{27}$ & 3.72 $\times$ 10$^{-6}$ \\ 9 & 4.80 $\times$ 10$^{13}$ & 0.79 & 1.09 $\times$ 10$^{-15}$ & 4.06 $\times$ 10$^{27}$ & 4.61 $\times$ 10$^{-6}$ \\ 10 & 5.44 $\times$ 10$^{13}$ & 0.71 & 8.47 $\times$ 10$^{-16}$ & 1.91 $\times$ 10$^{27}$ & 5.93 $\times$ 10$^{-6}$ \\ 11 & 5.97 $\times$ 10$^{13}$ & 0.64 & 7.03 $\times$ 10$^{-16}$ & 1.09 $\times$ 10$^{27}$ & 7.15 $\times$ 10$^{-6}$ \\ 12 & 6.50 $\times$ 10$^{13}$ & 0.58 & 5.93 $\times$ 10$^{-16}$ & 6.54 $\times$ 10$^{26}$ & 8.48 $\times$ 10$^{-6}$ \\ 13 & 7.07 $\times$ 10$^{13}$ & 0.52 & 5.01 $\times$ 10$^{-16}$ & 3.95 $\times$ 10$^{26}$ & 1.00 $\times$ 10$^{-5}$ \\ 14 & 7.60 $\times$ 10$^{13}$ & 0.49 & 4.34 $\times$ 10$^{-16}$ & 2.57 $\times$ 10$^{26}$ & 1.15 $\times$ 10$^{-5}$ \\ 15 & 8.16 $\times$ 10$^{13}$ & 0.44 & 3.76 $\times$ 10$^{-16}$ & 1.66 $\times$ 10$^{26}$ & 1.33 $\times$ 10$^{-5}$ \\ 16 & 8.70 $\times$ 10$^{13}$ & 0.42 & 3.31 $\times$ 10$^{-16}$ & 1.14 $\times$ 10$^{26}$ & 1.51 $\times$ 10$^{-5}$ \\ 17 & 9.25 $\times$ 10$^{13}$ & 0.41 & 2.93 $\times$ 10$^{-16}$ & 7.89 $\times$ 10$^{25}$ & 
1.72 $\times$ 10$^{-5}$ \\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:table_name_parameters} \end{table} \end{widetext} \twocolumngrid \section{Conclusions} \label{sec:conclusions} In conclusion, we have examined in this paper the possibility of PBH formation in inflationary perturbation theory in the spatially flat gauge with the $\alpha$-attractor inflaton potentials. The implications of the present study may be summarized in the following points.\\ (i) Despite the various means of enhancing the power spectrum that have come up in the literature, such as the modification and the fine-tuning of the potential \cite{Choudhury:2013woa,Ballesteros:2017fsr,Mahbub:2019uhl}, the introduction of two-field \cite{Garcia-Bellido:1996mdl,Chongchitnan:2006wx} and multi-field \cite{Palma:2020ejf} scenarios (although the Planck data \cite{Planck:2018jri} support single-field inflation), etc., we have shown in the present work that we can obtain PBH solutions in the natural way of $k$-space evolution, if we incorporate the interaction of the classically-perturbed gravitational field with the quantum fluctuations of the inflaton field and solve the resulting non-linear coupled differential equations.
The PBH formations occur whenever the large fluctuations in the inflaton field meet the large negative potentials in the background gravitational field in some regions of the $k$-space.\\ (ii) One of the striking results here is that the Bardeen potential $\Phi_B (k)$ in the spatially flat gauge manifests as a driving force for accumulating mass around the inflaton perturbation, which leads to dynamical PBH formation in the radiation-dominated era, when large sub-horizon modes re-enter the Hubble horizon at small conformal times.\\ (iii) The range of $k$-space, \textit{viz.,} $0.43\times 10^{13} - 9.25\times 10^{13}$ Mpc$^{-1}$, for the PBH formation, where the density contrast exceeds the critical density contrast (see Table \ref{tab:table_name_parameters} and Figure \ref{fig:Density_1}) and the Bardeen potential becomes large and negative (see Figure \ref{fig:Bardeen_1}), is found naturally through self-consistent solutions of Eqs. (\ref{eq:17}) - (\ref{eq:19}), and it is checked that no other ranges of $k$-values meet this requirement. Therefore, we claim that this is an important outcome of the mutual interaction among $\phi^{(0)}(k)$, $\delta\phi(k)$ and $\Phi_B (k)$ in the background of the $\alpha$-attractor potential in the framework of linear cosmological perturbation theory with the spatially flat gauge.\\ (iv) In our formalism, we consider that the PBHs are the result of large over-densities in the inflaton field under a strong gravitational environment, and there are no other ingredients which can influence PBH formation. For example, we are neither taking into account PBH formation from dark matter collapse nor considering PBHs as dark matter; we simply suspect, from the large evaporation times and the small Hawking temperatures, that our PBHs could be dark matter. Therefore, some other prospects regarding PBHs, like abundance and mass fraction, should be considered as outside the scope of the present paper.
\section{Introduction} \label{sec:intro} The concept of primordial black holes (PBHs), associated with comparatively large density fluctuations at the early stages of the evolution of the universe, was introduced by Zel'dovich and Novikov \cite{Zeldovich:1967lct} and later theorized in detail by S. W. Hawking and B. J. Carr in \cite{Hawking:1971ei, Carr:1974nx, 1975ApJ...201....1C}. Subsequently, the hydrodynamics of PBH formation \cite{1978SvA....22..129N}, accretion of matter around PBHs \cite{1979A&A....80..104N} and PBH formation in the contexts of Grand Unified Theories \cite{Khlopov:1980mg} and of a double inflation in supergravity \cite{Kawasaki:1997ju} were studied. It is conjectured that these PBHs have either evaporated by Hawking radiation or have evolved into supermassive black holes \cite{Kawasaki:2012wr} or remain as dark matter in the present universe \cite{Kawasaki:2012wr,Carr:2020xqk, Villanueva-Domingo:2021spv,Conzinu:2020cke,MoradinezhadDizgah:2019wjf,Clesse:2016vqa,Kovetz:2017rvv,Inomata:2017okj,Bringmann:2018mxj,Raidal:2018bbj,Green:2020jor,Poulin:2017bwe,Wong:2020yig,Escriva:2021pmf,Calabrese:2021zfq}.
The relations between PBHs and primordial gravitational waves have been studied in \cite{Clesse:2016vqa,Choudhury:2013woa, Kovetz:2016kpi,Nakama:2016gzw,Sasaki:2016jop,Kovetz:2017rvv,DeLuca:2020qqa,Domenech:2021wkk,Ozsoy:2020kat}. Formation of PBHs during a first-order phase transition in the inflationary period has also been studied \cite{Khlopov:2008qy,DeLuca:2021mlh}. Studies of the influence of PBHs on the CMB $\mu$ and $y$ distortions \cite{Deng:2020pxo,Tashiro:2008sf} and $\mu T$ \cite{Ozsoy:2021qrg} cross-correlations have been carried out. Detection of signals from the stochastic gravitational-wave background connected with PBH formation, in present and future experiments, has been discussed in Refs. \cite{Garcia-Bellido:2016dkw, Braglia:2020taf}. \par In some studies of inflationary scenarios, PBHs have been identified as massive compact halo objects with mass $\sim 0.5 M_{\odot}$ in the work by J. Yokoyama \cite{Yokoyama:1999xi}, who has also examined the formation of PBHs in the framework of a chaotic new inflation \cite{Yokoyama:1998pt}. Josan and Green \cite{Josan:2010cj} studied constraints on models of inflation through the formation of PBHs, using a modified flow analysis. Harada, Yoo and Kohri \cite{Harada:2013epa} examined the threshold of PBH formation, both analytically and numerically. R. Arya \cite{Arya:2019wck} has considered PBH formation as a result of the enhancement of the power spectrum by thermal fluctuations in warm inflation.
Formation of PBHs in density perturbations was studied in two-field hybrid inflationary models \cite{Garcia-Bellido:1996mdl,Chongchitnan:2006wx}, the Starobinsky model including a dilaton \cite{Gundhi:2020kzm}, multi-field inflation models \cite{Palma:2020ejf}, isocurvature-fluctuation and chaotic inflation models \cite{Yokoyama:1999xi}, inflection-point models \cite{Choudhury:2013woa,Ballesteros:2017fsr}, a quantum diffusion model \cite{Biagetti:2018pjj}, a model with smoothed density contrast in the super-horizon limit \cite{Young:2019osy}, and with the collapse of large-amplitude metric perturbations \cite{Musco:2018rwt} and large density perturbations \cite{Young:2020xmk} upon horizon re-entry. PBH abundance in the framework of non-perturbative stochastic inflation has been studied by F. K\"uhnel and K. Freese \cite{Kuhnel:2019xes}. The relation between the constraints on primordial black hole abundance and those on the primordial curvature power spectrum has also been studied \cite{Kalaja:2019uju,Dalianis:2018ymb}. Recently, PBH solutions have been obtained in the framework of non-linear cosmological perturbations, with non-linear effects arising at horizon crossing \cite{Musco:2020jjb}.\par PBH production has recently been studied in the framework of $\alpha$-attractor polynomial superpotential and modulated chaotic inflaton potential models \cite{Dalianis:2018frf}. Mahbub \cite{Mahbub:2019uhl} utilized the superconformal inflationary $\alpha$-attractor potentials with a high level of fine-tuning to produce an ultra-slow-roll region, where the enhancement of the curvature power spectrum giving rise to massive PBHs was found at $k \sim 10^{14}$ Mpc$^{-1}$. In a subsequent work \cite{Mahbub:2021qeo}, this author re-examined the earlier analysis using optimised peak theory. The ultra-slow-roll process, along with a non-Gaussian Cauchy probability distribution, has been applied in Ref. \cite{Biagetti:2021eep} to obtain large PBH masses.
Constant-rate ultra-slow-roll-like inflation \cite{2021hllNg:} has also been applied to obtain an enhancement in the power spectrum, triggered by entropy production, resulting in PBH formation. Ref. \cite{Teimoori:2021pte} has simulated the onset of PBH formation by adding a term to the non-canonical $\alpha$-attractor potential, which enhances the curvature perturbations at some critical values of the field. The enhancement of the power spectrum by a limited period of strongly non-geodesic motion of the inflationary trajectory, and the consequent PBH production, has been studied by J. Fumagalli \textit{et al.} \cite{Fumagalli:2020adf}. \par In almost all of the above references, PBH formation has been studied in terms of the curvature perturbation $\mathcal{R}$ and the curvature power spectrum $P_{\mathcal{R}} (k)$, which are usually obtained from the Mukhanov-Sasaki equation \cite{1988ZhETF..94....1M,MUKHANOV1992203,mukhanov_2005,10.1143/PTP.76.1036} in the co-moving gauge, characterised by a zero inflaton perturbation ($\delta\phi = 0$). This mode of analysis obviously ignores the role of the inflaton field ($\phi$) in the inflationary scenario of PBH formation. In the present work, we shall take an alternative route. We shall use the spatially flat gauge, thereby including the role of $\phi$ in the mechanism of PBH formation. In this respect, we shall follow the formalism developed in our previous work \cite{Sarkar:2021ird}, comprising a set of linear perturbative evolution equations which could explain the Planck-2018 data \cite{Planck:2018jri} in the low $k$ limit. We shall show here that the same equations can yield PBH-like solutions in the high $k$ regime with the conventional chaotic $T$ and $E$ model potentials without any modifications. Thus, a salient feature of the present approach is that we can explore different regions of the $k$-space evolution in the inflationary period under suitable initial conditions for the same differential equations.
We believe that this work will open up an avenue for the dynamical origin of PBH formation in the deep sub-horizon $k$-space. In fact, we shall highlight here the important role played by the Bardeen potential ($\Phi_B$) in building up the density contrasts and the associated PBH formation, which, to our knowledge, has not been done so far in the literature.\par The paper is organised as follows. In Sec. \ref{subsec:1} we briefly describe the basic formalism of the linear perturbation theory, leading to the setting up of the three coupled non-linear differential equations which play the central role in our study of PBH formation. In Sec. \ref{subsec:alpha_attractor} we describe the $\alpha$-attractor $T$ and $E$ model potentials which have been used in the present study. The expression for the transfer function is given and its plot is shown in Sec. \ref{subsec:trans_function}. Results and discussion are presented in Sec. \ref{sec:results}. Finally, in Sec. \ref{sec:conclusions} we make some concluding remarks. \section{Formalism} \label{sec:Formalism} \subsection{Linear perturbations in the metric and the inflaton field} \label{subsec:1} The Einstein-Hilbert action with minimal coupling between the quantised inflaton field, \begin{equation} \phi(t,\vec{X})=\int\frac{d^3 k}{(2\pi)^3}[\phi(k,t)\hat{a}(\vec{k})e^{i\vec{k}.\vec{X}}+\phi^* (k,t)\hat{a}^\dagger (\vec{k})e^{-i\vec{k}.\vec{X}}], \end{equation} and the background linearly-perturbed metric in the spatially flat gauge, with no anisotropic stress, \begin{equation} ds^2 =-(1+2\Phi)dt^2 +2a(t)\partial_i B dx^i dt +a^2 (t)\delta_{ij}dx^i dx^j, \end{equation} is \begin{equation} S=\int d^4 x \sqrt{-g}\left(\frac{1}{2}R -\frac{1}{2}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta}\phi -V(\phi)\right). \end{equation} The linear perturbation in the inflaton field is written as \begin{equation} \phi(t,\vec{X})=\phi^{(0)}(t) + \delta\phi(t,\vec{X}).
\end{equation} The perturbation can be translated, through the energy-momentum tensor to that in the density as, \begin{equation} \rho(t,\vec{X})=\rho^{(0)}(t)+\delta\rho(t,\vec{X}), \end{equation} where, \begin{equation} \rho^{(0)}(t)=\frac{{\dot{\phi}}^{(0)^2}}{2}+V(\phi^{(0)}) \label{eq:unperturbed_density} \end{equation} and \begin{equation} \delta\rho(t,\vec{X})=\frac{dV(\phi^{(0)})}{d\phi^{(0)}}\delta\phi + \dot{\phi}^{(0)}\delta\dot{\phi}-\Phi{\dot{\phi}}^{(0)^2}. \label{eq:density_perturbation} \end{equation} (Note: The last term in Eq.(\ref{eq:density_perturbation}) was neglected in Ref. \cite{Sarkar:2021ird} as $\Phi{\dot{\phi}}^{(0)^2}$ is small in slow-roll approximation. In the present paper, we retain this term as we expect the metric perturbation $\Phi$ to play a major role in the PBH formation.) Using the solutions in \cite{Baumann:2009ds} of the unperturbed Einstein's equations, \begin{equation} H^2=\frac{\rho^{(0)}}{3}, \label{eq: equation_1} \end{equation} \begin{equation} \dot{H}+H^2=-\frac{1}{6}(\rho^{(0)}+3p^{(0)}) \label{eq: equation_2} \end{equation} and the perturbed Einstein's equations, \begin{equation} 3H^2\Phi +\frac{k^2}{a^2} \left(-aHB \right) =-\frac{\delta\rho}{2}, \label{eq:equation_3} \end{equation} \begin{equation} H\Phi=-\frac{1}{2}\delta q, \label{eq:equation_4} \end{equation} \begin{equation} H\dot{\Phi}+(3H^2+2\dot{H})\Phi=\frac{\delta p}{2}, \label{eq:equation_5} \end{equation} \begin{equation} (\partial_t +3H)\frac{B}{a}+\frac{\Phi}{a^2}=0 \label{eq:equation_6} \end{equation} and the Bardeen potentials \cite{PhysRevD.22.1882} \begin{equation} \Phi_B = \Phi-\frac{d}{dt}\left[a^2\left(-\frac{B}{a}\right)\right], \label{eq:equation_14} \end{equation} \begin{equation} \Psi_B = a^2H\left(-\frac{B}{a}\right) \label{eq:equation_15} \end{equation} we obtain a relation between $\Phi$ and $\Phi_B$ as, \begin{equation} \Phi=\Phi_B+\partial_t \left(\frac{\Phi_B}{H}\right). \label{eq:equation_20} \end{equation} In Eqs. 
(\ref{eq:equation_4}) and (\ref{eq:equation_5}) $\delta q$ and $\delta p$ are the magnitudes of the momentum perturbation and pressure perturbation, respectively. Eqs. (\ref{eq:density_perturbation}) and (\ref{eq:equation_20}) show that the density perturbation $\delta\rho$ contains $\phi^{(0)}$, $\delta\phi$ as well as the Bardeen potential $\Phi_B$.\par \par Using the slow roll dynamical horizon crossing condition, $k=aH$, we go from the space of the cosmic time $t$ to that of the mode momentum $k$ and set up three nonlinear coupled differential equations in the $k$-space and solve for the quantities, $\phi^{(0)}(k)$, $\delta\phi(k)$ and $\Phi_B (k)$. The equations, similar to those in \cite{Sarkar:2021ird}, are, \begin{equation} \delta\phi (k^2 \phi'' + k^2 G_1 \phi'^2 + 4k\phi'+ 6G_1)+ \delta\phi' (-2k^3 G_1 \phi'^2)=0, \label{eq:17} \end{equation} \begin{multline} \delta\phi(1+12G_1 ^2 + 6G_2)+ \delta\phi' (4k+ k^2 G_1 \phi')+k^2\delta\phi''\\ + \Phi_B (-k\phi'+12G_1 + k^2 G_1 \phi'^2 + k^3 G_1 \phi' \phi'' -12kG_1 ^2 \phi' \\ +k^3 G_2 \phi'^3)+ \Phi'_B (-2k^2\phi' + 12kG_1 +k^3 G_1 \phi'^2) + \Phi''_B (-k^3\phi')\\=0 \label{eq:18} \end{multline} and \begin{widetext} \begin{eqnarray} \Phi''_B \left(-\frac{k^4}{3} \phi'^2\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 + \frac{k^4}{3} G_1 \phi'^3 - k^3 \phi'^2 -\frac{2k^4}{3}\phi'\phi'' + \frac{2k}{3}+ 2k^3\phi'^2 \right)\nonumber\\ + \Phi_B\left (-\frac{2k^2}{3}\phi'^2 - \frac{2k^3}{3}\phi'\phi''\nonumber + k^3 G_1 \phi'^3 +\frac{k^4}{3}G_2 \phi'^4 + k^4 G_1 \phi'^2 \phi'' -2 + 2k^2 \phi'^2 -2k^3 G_1 \phi'^3 \right)\nonumber\\+\delta\phi''\left(\frac{k^3}{3}\phi'\right)+\delta\phi'\left (2kG_1 +\frac{2k^2}{3}\phi' + \frac{k^3}{3}\phi''\right)+\delta\phi \left(2kG_2 \phi' + 2k^2 \phi'' + 2k^2G_1 \phi'^2 +2k\phi'\right)=0, \label{eq:19} \end{eqnarray} \end{widetext} where, $G_n = \frac{\partial^n}{\partial\phi^{(0)^n}}\ln\sqrt{V(\phi^{(0)})}$, $n=1, 2$. 
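Once Eq. (\ref{eq:17}) is solved for $\phi''$ and Eqs. (\ref{eq:18}) and (\ref{eq:19}) for $\delta\phi''$ and $\Phi''_B$, the system can be marched in $k$ with any standard integrator. As a minimal sketch (not the solver actually used in the paper), the fixed-step fourth-order Runge-Kutta harness below acts on the six-component state $[\phi^{(0)},\phi^{(0)\prime},\delta\phi,\delta\phi',\Phi_B,\Phi'_B]$; the right-hand side shown is only a toy placeholder, since the full coefficient functions, while mechanical to transcribe, are lengthy. The initial values mirror those quoted in Sec. \ref{sec:results}.

```python
def rk4(rhs, y, k0, k1, steps):
    """Classical fixed-step 4th-order Runge-Kutta march from k0 to k1."""
    h = (k1 - k0) / steps
    k = k0
    for _ in range(steps):
        f1 = rhs(k, y)
        f2 = rhs(k + h / 2, [a + h / 2 * b for a, b in zip(y, f1)])
        f3 = rhs(k + h / 2, [a + h / 2 * b for a, b in zip(y, f2)])
        f4 = rhs(k + h, [a + h * b for a, b in zip(y, f3)])
        y = [a + h / 6 * (b + 2 * c + 2 * d + e)
             for a, b, c, d, e in zip(y, f1, f2, f3, f4)]
        k += h
    return y

# State vector: [phi0, phi0', dphi, dphi', PhiB, PhiB'].
# Toy placeholder RHS -- NOT Eqs. (17)-(19); those are obtained by
# solving each equation for its highest derivative and inserting here.
def toy_rhs(k, y):
    phi0, dphi0, dp, ddp, PB, dPB = y
    return [dphi0, -2 * dphi0 / k, ddp, -2 * ddp / k, dPB, -2 * dPB / k]

y0 = [6.21, 0.0, 1e-14, 0.0, 1e-14, 0.0]   # initial data at k = 1e8
y_end = rk4(toy_rhs, y0, 1e8, 2e8, 2000)
print(y_end[0])
```

With vanishing initial derivatives the toy right-hand side keeps the state constant; the point of the sketch is only the shape of the integration loop.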
\par In the slow-roll approximation, the density contrast in $k$-space, which is useful for the study of PBH formation, is given by \begin{widetext} \begin{eqnarray} \frac{\delta\rho (k)}{\rho^{(0)}(k)}=\nonumber\frac{\delta V (k) +V^{(0)}(k)\left( \frac{k^2}{3}\phi'\delta\phi' + \Phi_B \left(-\frac{k^2}{3}\phi'^2 + \frac{k^3}{3}G_1 \phi'^3\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 \right)\right)}{V^{(0)}(k)}\\=2G_1 \delta\phi + \frac{k^2}{3}\phi'\delta\phi' + \Phi_B \left(-\frac{k^2}{3}\phi'^2 + \frac{k^3}{3}G_1 \phi'^3\right) + \Phi'_B \left(-\frac{k^3}{3}\phi'^2 \right) \label{eq:density_contrast} \end{eqnarray} \end{widetext} where $\delta V(k) \equiv 2G_1 V^{(0)}(k)\delta\phi(k)$ \\and $V^{(0)}(k) \equiv V(\phi^{(0)}(k))$. In the first line of Eq. (\ref{eq:density_contrast}), we have used the Fourier-space version of the slow-roll approximation and, therefore, following Eq. (\ref{eq:unperturbed_density}), we have written $\rho^{(0)}(k)\approx V^{(0)}(k)$. We will show that both $\delta\phi(k)$ and $\Phi_B (k)$ in Eq. (\ref{eq:density_contrast}) play a significant role in making $\frac{\delta\rho(k)}{\rho^{(0)}(k)}>\delta_c (\approx 0.41)$, \textit{i.e.} in the formation of PBHs in the early universe, $\delta_c$ being the threshold (critical) density contrast (see Sec. \ref{sec:results} for details). \subsection{The \texorpdfstring{$\alpha$}{a}-attractor \texorpdfstring{$E$}{E}-model and \texorpdfstring{$T$}{T}-model potentials} \label{subsec:alpha_attractor} We use here the $\alpha$-attractor potentials \cite{Carrasco:2015rva,Kallosh:2013yoa}:\\ (I) the $T$-model potential \begin{equation} V(\phi)=V_{0} \tanh^n\frac{\phi}{\sqrt{6\alpha}}, \label{eq:T model} \end{equation} (II) the $E$-model potential \begin{equation} V(\phi)=V_{0}(1-e^{-\sqrt{\frac{2}{3\alpha}}\phi})^n \end{equation} where $\alpha$ is the inverse curvature of the $SU(1,1)/U(1)$ K\"{a}hler manifold \cite{Kallosh:2013yoa}.
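The coefficient functions $G_1$ and $G_2$ entering Eqs. (\ref{eq:17})-(\ref{eq:19}) take simple closed forms for the $T$-model potential of Eq. (\ref{eq:T model}). The short check below (our own derivation, not code from the paper; parameter values $V_0=1$, $n=2$, $\alpha=1$ as adopted later in the paper) verifies the closed forms against finite differences of $\ln\sqrt{V}$.

```python
import math

V0, n, alpha = 1.0, 2.0, 1.0            # parameter choices used in the paper
b = math.sqrt(6.0 * alpha)

def V(phi):
    """alpha-attractor T-model potential, Eq. (eq:T model)."""
    return V0 * math.tanh(phi / b) ** n

def lnsqrtV(phi):
    return 0.5 * math.log(V(phi))

def G1(phi):
    """G_1 = d/dphi ln sqrt(V) = n / (b sinh(2 phi / b)) for the T-model."""
    return n / (b * math.sinh(2.0 * phi / b))

def G2(phi):
    """G_2 = d^2/dphi^2 ln sqrt(V) = -2 n cosh(2 phi/b) / (b sinh(2 phi/b))^2."""
    return -2.0 * n * math.cosh(2.0 * phi / b) / (b * math.sinh(2.0 * phi / b)) ** 2

# Finite-difference cross-check at the initial field value used in Sec. IV.
phi, h = 6.21, 1e-4
g1_num = (lnsqrtV(phi + h) - lnsqrtV(phi - h)) / (2 * h)
g2_num = (lnsqrtV(phi + h) - 2 * lnsqrtV(phi) + lnsqrtV(phi - h)) / h ** 2
print(G1(phi) - g1_num, G2(phi) - g2_num)   # both differences are tiny
```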
\subsection{Transfer Function} \label{subsec:trans_function} We have related the quantities at horizon crossing during inflation to those at horizon re-entry in the radiation-dominated era using the transfer function given in Ref. \cite{Musco:2020jjb}: \begin{equation} T(k,\eta)=3\,\frac{\sin(k\eta/\sqrt{3})-(k\eta/\sqrt{3})\cos(k\eta/\sqrt{3})}{(k\eta/\sqrt{3})^3}, \label{eq:transfer function} \end{equation} where $\eta$ is the conformal time. \begin{figure}[H] \centering \includegraphics[width=0.9\linewidth]{"plots/transfer_function.pdf"} \caption{Transfer function for the radiation-dominated period at a very small conformal time $\eta = 10^{-12}$. It shows a small oscillating behaviour around $k\sim10^{13}$ Mpc$^{-1}$, where $k\eta> 1$, and dies down in the high $k$ limit, $k\rightarrow\infty$. The momentum scale reflects the range of sub-horizon momenta which will be studied in this paper.} \label{fig:Transfer_function} \end{figure} \section{Results and Discussion} \label{sec:results} At the outset, let us give an overall perspective of the evolution of the modes during the inflationary period. As shown in Ref. \cite{Sarkar:2021ird}, the higher $k$ modes undergo fewer e-folds and thus, while the Hubble sphere shrinks, they exit the horizon at later times. As a consequence, the very high $k$ modes remain in the deep sub-horizon region during the major part of the inflationary period. When the Hubble sphere starts expanding after the end of inflation, the high $k$ modes re-enter the horizon first, at small positive conformal times. The smaller $k$ modes re-enter the horizon at later conformal times.\par Our interest here is mainly in the high $k$ region where, as will be shown, the density contrast shoots up at a number of momentum values, signalling the breakdown of the perturbative framework and creating a condition favourable for the formation of PBHs when the modes re-enter the horizon.
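Eq. (\ref{eq:transfer function}) is straightforward to implement; the sketch below (with the small-argument limit handled by its Taylor expansion, $T\to1-x^2/10$ for $x=k\eta/\sqrt{3}\to0$) reproduces the behaviour visible in Figure \ref{fig:Transfer_function}: $T\to1$ for $k\eta\ll1$ and a small, decaying oscillation for $k\eta\gg1$.

```python
import math

def transfer(k, eta):
    """Radiation-era transfer function, Eq. (eq:transfer function):
    T = 3 (sin x - x cos x) / x^3 with x = k * eta / sqrt(3)."""
    x = k * eta / math.sqrt(3.0)
    if abs(x) < 1e-4:                 # Taylor limit avoids 0/0: T ~ 1 - x^2/10
        return 1.0 - x ** 2 / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

# Super-horizon-scale modes are unaffected; deep sub-horizon modes suppressed.
print(transfer(1e8, 1e-12))    # ~ 1
print(transfer(9e13, 1e-12))   # strongly suppressed, |T| << 1
```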
Interestingly, this condition is achieved in our study quite naturally by solving the $k$-space evolution equations and not by artificially preparing the inflaton potential to meet this goal.\par \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/unperturbed_potential.pdf"} \caption{Unperturbed part $V^{(0)}(k)$ of the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit.} \label{fig:potential-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/perturbed_potential.pdf"} \caption{Perturbation $\delta V (k)$ of the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit. The perturbation in the potential, $\delta V(k)\gg V^{(0)}(k)$ (see Figure \ref{fig:potential-high-k}). It blows up as $k\rightarrow\infty$, signifying the breakdown of the slow-roll approximation in the linear perturbation formalism and creating a situation favourable for PBH formation.} \label{fig:perturbed-potential-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/unperturbed_inflaton.pdf"} \caption{Unperturbed inflaton field $\phi^{(0)}(k)$ for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit.} \label{fig:inflaton-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{"plots/perturbed_inflaton.pdf"} \caption{Perturbation $\delta \phi (k)$ for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit. The perturbation in the inflaton field, $\delta \phi(k)\gg \phi^{(0)}(k)$ (see Figure \ref{fig:inflaton-high-k}). It blows up as $k\rightarrow\infty$, signifying the breakdown of the slow-roll approximation in the linear perturbation formalism and creating a situation favourable for PBH formation.} \label{fig:perturbed-inflaton-high-k} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/density_contrast.pdf"} \caption{Density contrast $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{rad}\left[=\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}\times T(k,\eta=10^{-12})\right]$ vs. $k$, in the radiation-dominated era, in a high momentum range ($0.43\times 10^{13}$ Mpc$^{-1}$ to $9.25\times 10^{13}$ Mpc$^{-1}$), where signatures of PBHs have been predicted \cite{Green:2020jor}. Here, $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}$ is the density contrast during inflation. Enhancements are observed at $17$ values of $k$. These enhancements suggest the presence of PBHs of different masses (see Table \ref{tab:table_name_parameters}).} \label{fig:Density_1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=1.0\linewidth]{"plots/Bardeen_potential.pdf"} \caption{The Bardeen potential $(\Phi_B)_\mathrm{rad}[=(\Phi_B)_\mathrm{inf}\times T(k,\eta=10^{-12})]$ vs. $k$ in the radiation-dominated era in the same momentum range as in Figure \ref{fig:Density_1}. Here, $(\Phi_B)_\mathrm{inf}$ is the Bardeen potential during inflation. Large negative values of the potential are observed at the $k$-values where the enhancements of the density contrast occur (see Figure \ref{fig:Density_1}). This demonstrates the crucial role played by the Bardeen potential in the formation of the PBHs.} \label{fig:Bardeen_1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{"plots/power_spectrum.pdf"} \caption{The scalar power spectrum $\Delta_s(k)$ vs. $k$ in the same high momentum range as in Figures \ref{fig:potential-high-k} - \ref{fig:perturbed-inflaton-high-k}. Extremely high values of the power spectrum in this range indicate very high correlations among the mode momenta belonging to the quantized inflaton field, which is possible in the case of black hole formation.} \label{fig:Power_spectrum} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\linewidth]{"plots/comparison_T_model.pdf"} \includegraphics[width=0.9\linewidth]{"plots/power_law.pdf"} \caption{Comparison of the $\alpha$-attractor chaotic $T$-model potential (upper figure) and a power-law potential, $\phi^2$ (lower figure), in calculating the density contrast $\left(\frac{\delta \rho}{\rho}\right)_\mathrm{inf}$ during inflation in the high momentum regime. While the density contrast has very large positive values in the former case, it shows large negative values in the latter case. The unphysical nature of the density contrast in the high momentum regime, in the case of the power-law potential, clearly indicates its unsuitability for the formation of PBHs. The same inference can be drawn for the radiation-dominated era.} \label{fig:pic2pic4} \end{figure} In Figures \ref{fig:potential-high-k} through \ref{fig:pic2pic4}, $n = 2$, $\alpha=1$ and $V_0 = 1$ are the values of the parameters of the $\alpha$-attractor $T$-model potential (Eq. (\ref{eq:T model})). These values belong to the range of parameters which has been shown to be efficacious in fitting the Planck data in the ($n_s$ - $r$) plane \cite{Sarkar:2021ird, Planck:2018jri}. \par In Figure \ref{fig:potential-high-k}, we show the $k$-space behaviour of the unperturbed chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit, during inflation. In Figure \ref{fig:perturbed-potential-high-k}, we demonstrate the corresponding behaviour of the perturbation in the potential, where it is shown that $\delta V(k)$ becomes very large in the high $k$ limit, signalling the breakdown of perturbation theory in this limit.
It may be noted that this does not happen in the low $k$ limit, where $\delta V(k)$ remains very small \cite{Sarkar:2021ird}. In Figure \ref{fig:inflaton-high-k}, we have plotted the unperturbed inflaton field for the chaotic $\alpha$-attractor $T$-model potential in the high $k$ limit and, in Figure \ref{fig:perturbed-inflaton-high-k}, the corresponding perturbation, $\delta \phi(k)$, in the same limit. Here also we observe the breakdown of perturbation theory at high $k$ values. \par In Figures \ref{fig:Density_1} and \ref{fig:Bardeen_1} we have plotted the density contrast and the Bardeen potential, respectively, in the radiation-dominated era. These quantities have been obtained by multiplying their corresponding values during inflation by the transfer function, Eq. (\ref{eq:transfer function}). The solutions of Eqs. (\ref{eq:17}) - (\ref{eq:19}) during inflation have been obtained by taking $\phi^{(0)}(k=10^{8}) = 6.21$, $\delta\phi(k=10^{8})=\Phi_B (k=10^{8})=10^{-14}$, $\phi'^{(0)}(k=10^{8})=\delta\phi'(k=10^{8})=\Phi'_B(k=10^{8})=0$ as initial conditions. \par Looking at Figure \ref{fig:Density_1}, we observe that the values of the density contrast at all the peaks are above the range of threshold values, \textit{viz.} $0.41 \lesssim \delta_c \leqslant\frac{2}{3}$, given in the current literature \cite{Musco:2018rwt}. Thus, the peaks satisfy the primary criterion for PBH formation. It may be noted here that these peaks are the results (see the upper figure in Figure \ref{fig:pic2pic4}) of a self-consistent calculation of the $k$-space evolution equations (Eqs. (\ref{eq:17}) - (\ref{eq:19})), smoothed by the transfer function shown in Figure \ref{fig:Transfer_function}.\par Figure \ref{fig:Bardeen_1} highlights the fact that the large peaks in the density contrast correspond to large negative peaks in the Bardeen potential at the same value of $k\equiv k_\mathrm{peak}$.
This result reflects the strong interplay, in the PBH formation, between the density perturbation induced by the quantum inflaton fluctuations and the metric perturbation, \textit{i.e.} the Bardeen potential.\par Figure \ref{fig:Power_spectrum} shows that the scalar power spectrum, which is a measure of the two-point correlations among the fluctuations, increases very rapidly at the $k$ values of the PBH formations. Such $k$-space behaviour of the scalar power spectrum signifies the breakdown of the perturbative framework, as well as a very high quantum correlation, which may be favourable for PBH formation.\par Figure \ref{fig:pic2pic4} illustrates the fact that the requirement of large positive inflationary density contrasts at high $k$ values for PBH formation is satisfied only by the $\alpha$-attractor $T$-model potential and not, for example, by the power-law type $\phi^2$ potential. We have also found in \cite{Sarkar:2021ird} that such a potential constitutes an experimentally favourable model for inflation in the low $k$ limit within some specified range of parameters. Therefore, the $\alpha$-attractor potential in its pristine form is capable of explaining the inflationary paradigm in the low $k$ limit and PBH formation in the high $k$ limit, simultaneously.\par Although the calculations from Figure \ref{fig:potential-high-k} through Figure \ref{fig:pic2pic4} are with the chaotic $\alpha$-attractor $T$-model potential, the results are similar for the corresponding $E$-model potential. Therefore, we are not displaying those results here. \par To further endorse the identification of the peaks with PBH formation events, in Table \ref{tab:table_name_parameters} we have shown our calculations of some properties of these PBHs. The momentum-dependent PBH masses, in units of the solar mass ($M_\odot$), are presented in column $4$. The calculation of the masses is based on the formula given in Ref. \cite{Teimoori:2021pte}.
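We do not reproduce the exact formulae of Refs. \cite{Teimoori:2021pte,Dalianis:2018ymb} here, but standard order-of-magnitude estimates of the same form come close to Table \ref{tab:table_name_parameters}: with the prefactors assumed below (common literature values, not taken from this paper), peak 1 at $k=0.43\times10^{13}$ Mpc$^{-1}$ gives $M\simeq1.36\times10^{-13}\,M_\odot$, $t_\mathrm{eva}\simeq8\times10^{33}$ s and $T_H\simeq3.9\times10^{-8}$ GeV, each within a few percent of the tabulated values.

```python
MSUN_G = 1.989e33                      # solar mass in grams

def m_pbh_solar(k_mpc_inv):
    """Horizon-mass estimate at re-entry (assumed standard form, with
    collapse efficiency gamma ~ 0.2): M ~ 30 Msun (k / 2.9e5 Mpc^-1)^-2."""
    return 30.0 * (k_mpc_inv / 2.9e5) ** -2

def t_evaporation_s(m_solar):
    """Page-type evaporation-time estimate, t ~ 407 s (M / 10^10 g)^3."""
    return 407.0 * (m_solar * MSUN_G / 1e10) ** 3

def hawking_temp_gev(m_solar):
    """Hawking temperature, T_H ~ 1.06 GeV (10^13 g / M)."""
    return 1.06e13 / (m_solar * MSUN_G)

# Cross-check against peak 1 of the table (k = 0.43e13 Mpc^-1):
m = m_pbh_solar(0.43e13)
print(m, t_evaporation_s(m), hawking_temp_gev(m))
```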
Table \ref{tab:table_name_parameters} suggests that massive PBHs are formed at those values of $k$ where the density contrast shows a large positive peak and the Bardeen potential shows a large negative value, which reflects the efficacy of the Bardeen potential in PBH formation. Thus it is fair to say that the Bardeen potential acts as the source of an intense gravitational field in which tiny quantum nuggets are amplified into massive PBHs. \par The mass-dependent evaporation times and the Hawking temperatures \cite{1975CMaPh..43..199H} are presented in columns $5$ and $6$, respectively. The calculations of these quantities are based on the formulae given in Ref. \cite{Dalianis:2018ymb}. The evaporation time scale of the PBHs in our calculations is found to lie in the range $10^{25}-10^{33}$ s, which is very large in comparison to the age of the universe ($\sim 10^{17}$ s). Also, the Hawking temperatures ($\sim 10^{-5} - 10^{-8}$ GeV) are very small, showing that the possibility of extinction of such PBHs by Hawking radiation is negligibly small. Thus, these PBHs may exist in the present universe, thereby making themselves potential candidates for dark matter. \onecolumngrid \begin{widetext} \begin{table}[H] \captionsetup{justification=centering,width=0.68\textwidth} \caption{PBH properties corresponding to the peaks in Figure \ref{fig:Density_1}. These properties are consistent with the LISA and BBO observations \cite{Green:2020jor}.} \begin{center} \begin{adjustbox}{width=0.7\textwidth} \begin{tabular}{ |c|c|c|c|c|c| } \hline Peak No. & $k_\mathrm{peak}$ (in Mpc$^{-1}$) & $(\frac{\delta\rho}{\rho})_\mathrm{rad}|_{k=k_\mathrm{peak}}$ & $M_\mathrm{PBH}$ (in $M_\odot$) & $t_\mathrm{eva}$ (in sec.)
& $T_H$ (in GeV)\\ \hline 1 & 0.43 $\times$ 10$^{13}$ & 8.77 & 1.35 $\times$ 10$^{-13}$ & 7.72 $\times$ 10$^{33}$ & 3.72 $\times$ 10$^{-8}$ \\ 2 & 1.05 $\times$ 10$^{13}$ & 3.49 & 2.27 $\times$ 10$^{-14}$ & 3.67 $\times$ 10$^{31}$ & 2.21 $\times$ 10$^{-7}$ \\ 3 & 1.60 $\times$ 10$^{13}$ & 2.29 & 9.79 $\times$ 10$^{-15}$ & 2.95 $\times$ 10$^{30}$ & 5.13 $\times$ 10$^{-7}$ \\ 4 & 2.16 $\times$ 10$^{13}$ & 1.70 & 5.37 $\times$ 10$^{-15}$ & 4.86 $\times$ 10$^{29}$ & 9.36 $\times$ 10$^{-7}$ \\ 5 & 2.71 $\times$ 10$^{13}$ & 1.37 & 3.41 $\times$ 10$^{-15}$ & 1.24 $\times$ 10$^{29}$ & 1.47 $\times$ 10$^{-6}$ \\ 6 & 3.25 $\times$ 10$^{13}$ & 1.14 & 2.37 $\times$ 10$^{-15}$ & 4.18 $\times$ 10$^{28}$ & 2.12 $\times$ 10$^{-6}$ \\ 7 & 3.79 $\times$ 10$^{13}$ & 0.97 & 1.74 $\times$ 10$^{-15}$ & 1.65 $\times$ 10$^{28}$ & 2.89 $\times$ 10$^{-6}$ \\ 8 & 4.30 $\times$ 10$^{13}$ & 0.84 & 1.35 $\times$ 10$^{-15}$ & 7.72 $\times$ 10$^{27}$ & 3.72 $\times$ 10$^{-6}$ \\ 9 & 4.80 $\times$ 10$^{13}$ & 0.79 & 1.09 $\times$ 10$^{-15}$ & 4.06 $\times$ 10$^{27}$ & 4.61 $\times$ 10$^{-6}$ \\ 10 & 5.44 $\times$ 10$^{13}$ & 0.71 & 8.47 $\times$ 10$^{-16}$ & 1.91 $\times$ 10$^{27}$ & 5.93 $\times$ 10$^{-6}$ \\ 11 & 5.97 $\times$ 10$^{13}$ & 0.64 & 7.03 $\times$ 10$^{-16}$ & 1.09 $\times$ 10$^{27}$ & 7.15 $\times$ 10$^{-6}$ \\ 12 & 6.50 $\times$ 10$^{13}$ & 0.58 & 5.93 $\times$ 10$^{-16}$ & 6.54 $\times$ 10$^{26}$ & 8.48 $\times$ 10$^{-6}$ \\ 13 & 7.07 $\times$ 10$^{13}$ & 0.52 & 5.01 $\times$ 10$^{-16}$ & 3.95 $\times$ 10$^{26}$ & 1.00 $\times$ 10$^{-5}$ \\ 14 & 7.60 $\times$ 10$^{13}$ & 0.49 & 4.34 $\times$ 10$^{-16}$ & 2.57 $\times$ 10$^{26}$ & 1.15 $\times$ 10$^{-5}$ \\ 15 & 8.16 $\times$ 10$^{13}$ & 0.44 & 3.76 $\times$ 10$^{-16}$ & 1.66 $\times$ 10$^{26}$ & 1.33 $\times$ 10$^{-5}$ \\ 16 & 8.70 $\times$ 10$^{13}$ & 0.42 & 3.31 $\times$ 10$^{-16}$ & 1.14 $\times$ 10$^{26}$ & 1.51 $\times$ 10$^{-5}$ \\ 17 & 9.25 $\times$ 10$^{13}$ & 0.41 & 2.93 $\times$ 10$^{-16}$ & 7.89 $\times$ 10$^{25}$ & 
1.72 $\times$ 10$^{-5}$ \\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:table_name_parameters} \end{table} \end{widetext} \twocolumngrid \section{Conclusions} \label{sec:conclusions} In conclusion, we have examined in this paper the possibility of PBH formation in inflationary perturbation theory in the spatially flat gauge with the $\alpha$-attractor inflaton potentials. The implications of the present study may be summarized in the following points.\\ (i) Various means of enhancing the power spectrum have come up in the literature, such as the modification and fine-tuning of the potential \cite{Choudhury:2013woa,Ballesteros:2017fsr,Mahbub:2019uhl} and the introduction of two-field \cite{Garcia-Bellido:1996mdl,Chongchitnan:2006wx} and multi-field \cite{Palma:2020ejf} scenarios (although the Planck data \cite{Planck:2018jri} support single-field inflation). In contrast, we have shown in the present work that we can obtain PBH solutions in the natural course of the $k$-space evolution, if we incorporate the interaction of the classically-perturbed gravitational field with the quantum fluctuations of the inflaton field and solve the resulting non-linear coupled differential equations.
PBH formation occurs whenever large fluctuations in the inflaton field meet large negative potentials in the background gravitational field in some regions of the $k$-space.\\ (ii) One of the striking results here is that the Bardeen potential $\Phi_B (k)$ in the spatially flat gauge manifests itself as a driving force for accumulating mass around the inflaton perturbation, which leads to dynamical PBH formation in the radiation-dominated era, when large sub-horizon modes re-enter the Hubble horizon at small conformal times.\\ (iii) The range of $k$-space for PBH formation, \textit{viz.} $0.43\times 10^{13} - 9.25\times 10^{13}$ Mpc$^{-1}$, where the density contrast exceeds the critical density contrast (see Table \ref{tab:table_name_parameters} and Figure \ref{fig:Density_1}) and the Bardeen potential becomes largely negative (see Figure \ref{fig:Bardeen_1}), is found naturally through self-consistent solutions of Eqs. (\ref{eq:17}) - (\ref{eq:19}), and we have checked that no other range of $k$-values meets this requirement. Therefore, we claim that this is an important outcome of the mutual interaction among $\phi^{(0)}(k)$, $\delta\phi(k)$ and $\Phi_B (k)$ in the background of the $\alpha$-attractor potential in the framework of linear cosmological perturbation theory in the spatially flat gauge.\\ (iv) In our formalism, we consider the PBHs to be the result of large over-densities in the inflaton field in a strong gravitational environment, with no other ingredients influencing PBH formation. For example, we are neither taking into account PBH formation from dark matter collapse nor assuming PBHs to be dark matter; we merely suspect, from the large evaporation times and the small Hawking temperatures, that our PBHs could be dark matter. Therefore, other aspects of PBHs, such as their abundance and mass fraction, are outside the scope of the present paper.
Our sole aim here is to show how a single formalism can explain both inflation and PBH formation from a single, unmodified $\alpha$-attractor inflaton perturbation. Related future work under this scheme could include incorporating dark matter and dark energy (or quintessence) into our formalism, which may tighten the $k$-range or constrain the PBH properties more stringently, and would thus be useful for upcoming experiments. \section*{Acknowledgements} The present work has been carried out using some of the facilities provided by the University Grants Commission to the Center of Advanced Studies under the CAS-II program. CS and AS acknowledge the government of West Bengal for granting them the Swami Vivekananda fellowship. Bansundhara Ghosh acknowledges the Department of Science and Technology for providing her the DST-Inspire Faculty Fellowship.
\section{Introduction}\label{s1} The phenomenon of Anderson localization of a quantum particle or a classical wave in a random environment is one of the central discoveries made by condensed matter physics in the second half of the last century. \cite{Anderson58} Although more than fifty years have passed since Anderson's original paper, Anderson localization remains a vibrant research field. \cite{AL50} One of its central research directions is the physics of Anderson transitions, \cite{evers08} including metal-insulator transitions and transitions of quantum-Hall type (i.e.\ between different phases of topological insulators). While such transitions are conventionally observed in electronic conductor and semiconductor structures, there is also a considerable number of other experimental realizations actively studied in recent and current works. These include localization of light, \cite{wiersma97} cold atoms, \cite{BEC-localization} ultrasound, \cite{faez09} and optically driven atomic systems. \cite{lemarie10} On the theory side, the field received a strong boost through the discovery of unconventional symmetry classes and the development of a complete symmetry classification of disordered systems. \cite{altland97, zirnbauer96, evers08, heinzner05} The unconventional classes emerge due to additional particle-hole and/or chiral symmetries that are, in particular, characteristic for models of disordered superconductors and disordered Dirac fermions (e.g.\ in graphene). In total one has 10 symmetry classes, including three standard (Wigner-Dyson) classes, three chiral, and four Bogoliubov-de Gennes (``superconducting'') classes. This multitude is further supplemented by the possibility for the underlying field theories to have a non-trivial topology ($\theta$ and Wess-Zumino terms), leading to a rich ``zoo'' of Anderson-transition critical points. 
The recent advent of graphene \cite{graphene-revmodphys} and of topological insulators and superconductors \cite{topins} reinforced the experimental relevance of these theoretical concepts. In analogy with more conventional second-order phase transitions, Anderson transitions fall into different universality classes according to the spatial dimension, symmetry, and topology. In each of the universality classes, the behavior of physical observables near the transition is characterized by critical exponents determined by the scaling dimensions of the corresponding operators. A remarkable property of Anderson transitions is that the critical wave functions are multifractal due to their strong fluctuations. Specifically, the wave-function moments show anomalous multifractal scaling with respect to the system size $L$, \begin{equation}\label{e1.1} L^d \langle |\psi({\bf r})|^{2q} \rangle \propto L^{-\tau_q}, \qquad \tau_q = d(q-1) + \Delta_q, \end{equation} where $d$ is the spatial dimension, $\langle \ldots \rangle$ denotes the operation of disorder averaging and $\Delta_q$ are anomalous multifractal exponents that distinguish the critical point from a simple metallic phase, where $\Delta_q \equiv 0$. Closely related is the scaling of moments of the local density of states (LDOS) $\nu(r)$, \begin{equation}\label{e1.2} \langle \nu^q \rangle \propto L^{-x_q} , \qquad x_q = \Delta_q + qx_\nu, \end{equation} where $x_\nu \equiv x_1$ controls the scaling of the average LDOS, $\langle \nu \rangle \propto L^{-x_\nu}$. Multifractality implies the presence of infinitely many relevant (in the renormalization-group (RG) sense) operators at the Anderson-transition critical point. First steps towards the experimental determination of multifractal spectra have been made recently. \cite{faez09, lemarie10, richardella10} Let us emphasize that when we speak about a $q$-th moment, we neither require that $q$ is an integer nor that it is positive. 
Throughout the paper, the term ``moment'' is understood in this broad sense. In Refs.~[\onlinecite{mirlin94},\onlinecite{fyodorov04}] a symmetry relation for the LDOS distribution function (and thus, for the LDOS moments) in the Wigner-Dyson symmetry classes was derived: \begin{equation}\label{e1.4} {\cal P}(\nu) = \nu^{-q_*-2}{\cal P}(\nu^{-1}), \qquad \langle \nu^q \rangle = \langle \nu^{q_*-q} \rangle, \end{equation} with $q_*=1$. Equation (\ref{e1.4}) is obtained in the framework of the non-linear $\sigma$-model and is fully general otherwise, i.e., it is equally applicable to metallic, localized, and critical systems. An important consequence of Eq.~(\ref{e1.4}) is an exact symmetry relation for Anderson-transition multifractal exponents \cite{mirlin06} \begin{equation}\label{e1.3} x_q = x_{q_*-q} . \end{equation} While $\sigma$-models in general are approximations to particular microscopic systems, Eq.~(\ref{e1.3}) is exact in view of the universality of critical behavior. In a recent paper, \cite{gruzberg11} the three of us and A.~W.~W.~Ludwig uncovered the group-theoretical origin of the symmetry relations (\ref{e1.4}), (\ref{e1.3}). Specifically, we showed that these relations are manifestations of a Weyl symmetry group acting on the $\sigma$-model manifold. This approach was further used to generalize these relations to the unconventional (Bogoliubov-de Gennes) classes CI and C, with $q_*=2$ and $q_*=3$, respectively. The operators representing the averaged LDOS moments (\ref{e1.2}) by no means exhaust the composite operators characterizing LDOS (or wave-function) correlations in a disordered system. They are distinguished in that they are the dominant (or most relevant) operators for each $q$, but they only represent ``the tip of the iceberg'' of a much larger family of gradientless composite operators. Often, the subleading operators are also very important physically. 
An obvious example is the two-point correlation function \begin{eqnarray}\label{e1.5} K_{\alpha\beta}({\bf r}_1,{\bf r}_2) &=& |\psi_\alpha({\bf r}_1)|^2 |\psi_\beta({\bf r}_2)|^2 \nonumber \\ &-& \psi_\alpha({\bf r}_1) \psi_\beta({\bf r}_2)\psi_\alpha^*({\bf r}_2)\psi_\beta^*({\bf r}_1), \end{eqnarray} which enters the Hartree-Fock matrix element of a two-body interaction, \begin{equation}\label{e1.6} M_{\alpha\beta} = \int dr_1 dr_2 K_{\alpha\beta}({\bf r}_1,{\bf r}_2) U({\bf r}_1 - {\bf r}_2). \end{equation} Questions about the scaling of the disorder-averaged function $K_{\alpha \beta} ({\bf r}_1,{\bf r}_2)$, its moments, and the correlations of such objects arise naturally when one studies, e.g., interaction-induced dephasing at the Anderson-transition critical point. \cite{Lee96,Wang00,burmistrov11} The goals and the results of this paper are threefold: \begin{enumerate} \item We develop a systematic and complete classification of gradientless composite operators in the supersymmetric non-linear $\sigma$-models of Anderson localization. Our approach here differs from that of H\"of and Wegner \cite{Hoef86} and Wegner \cite{Wegner1987a, Wegner1987b} in two respects. Firstly, we work directly with the supersymmetric (SUSY) theories rather than with their compact replica versions as in Refs.~[\onlinecite{Hoef86, Wegner1987a, Wegner1987b}]. Secondly, we employ (a superization of) the Iwasawa decomposition and the Harish-Chandra isomorphism, which allow us to explicitly construct ``radial plane waves'' that are eigenfunctions of the Laplace-Casimir operators of the $\sigma$-model symmetry group, for arbitrary (also non-integer, negative, and even complex) values of a set of parameters $q_i$ [generalizing the order $q$ of the moment in Eqs.~(\ref{e1.1}), (\ref{e1.2})]. We also develop a more basic construction of scaling operators as highest-weight vectors (and explain the link with the Iwasawa-decomposition formalism).
\item We establish a connection between these composite operators and the physical observables of LDOS and wave-function correlators, as well as with some transport observables. \item Furthermore, the Iwasawa-decomposition formalism allows us to exploit a certain Weyl-group invariance and deduce a large number of relations between the scaling dimensions of various composite operators at criticality. These symmetry relations generalize Eq.~(\ref{e1.3}) obtained earlier for the most relevant operators (LDOS moments). \end{enumerate} It should be emphasized that we do not attempt to generalize Eq.~(\ref{e1.4}), which is also valid away from criticality, but rather focus on Anderson-transition critical points. The reason is as follows. The derivation of Eq.~(\ref{e1.4}) in Ref.~[\onlinecite{gruzberg11}] was based on a (super-)generalization of a theorem due to Harish-Chandra. We are not able to further generalize this theorem to the non-minimal $\sigma$-models needed for the generalization of Eq.~(\ref{e1.4}) to subleading operators. For this reason, we use a weaker version of the Weyl-invariance argument which is applicable only at criticality. This argument is sufficient to get exact relations between the critical exponents. In the main part of the paper we focus on the unitary Wigner-Dyson class A. Generalizations to other symmetry classes, as well as some of their peculiarities, are discussed at the end of the paper. The structure of the paper is as follows. In Sec.~\ref{s2} we briefly review Wegner's classification of composite operators in replica $\sigma$-models by Young diagrams. In Sec.~\ref{s3} we introduce the Iwasawa decomposition for supersymmetric $\sigma$-models of Anderson localization and, on its basis, develop a classification of the composite operators. The correspondence between the replica and SUSY formulations is established in Sec.~\ref{s4} for the case of the minimal SUSY model. 
Section~\ref{s5} is devoted to the connection between the physical observables (wave-function correlation functions) and the $\sigma$-model composite operators. This subject is further developed in Secs.~\ref{s6a}-\ref{s7}, where we identify observables that correspond to exact scaling operators and thus exhibit pure power scaling (without any admixture of subleading power-law contributions). In Sec.~\ref{s6e} we formulate a complete version (going beyond the minimal-SUSY model considered in Sec.~\ref{s4}) of the correspondence between the full set of operators of our SUSY classification and the physical observables (wave-function and LDOS correlation functions). An alternative and more basic approach to scaling operators via the notion of highest-weight vector is explained in Sec.~\ref{s8}. We also indicate how this approach is related to the one based on the Iwasawa decomposition. In Sec.~\ref{s9} we employ the Weyl-group invariance and deduce symmetry relations among the anomalous dimensions of various composite operators at criticality. The generalization of these results to other symmetry classes is discussed in Sec.~\ref{s10}. In Sec.~\ref{s11} we analyze the implications of our findings for transport observables defined within the Dorokhov-Mello-Pereyra-Kumar (DMPK) formalism. In Sec.~\ref{s12} we discuss peculiarities of symmetry classes whose $\sigma$-models possess additional O(1) [classes D and DIII] or U(1) [classes BDI, CII, DIII] degrees of freedom. Section \ref{s13} contains a summary of our results, as well as a discussion of open questions and directions for further research. \section{Replica $\sigma$-models and\\ Wegner's results}\label{s2} The replica method leads to the reformulation of the localization problem as a theory of fields taking values in a symmetric space $G/K$ -- a \emph{non-linear $\sigma$-model}. 
\cite{Wegner79,evers08} If one uses fermionic replicas, the resulting $\sigma$-model target spaces are compact, and for the Wigner-Dyson unitary class (a.k.a.\ class A) they are of the type $G/K$ with $G = \text{U}(m_1 + m_2)$ and $K = \text{U}(m_1) \times \text{U}(m_2)$. Bosonic replicas lead to the non-compact counterpart $G^\prime /K$ where $G^\prime = \text{U}(m_1, m_2)$. The total number of replicas $m = m_1 + m_2$ is taken to zero at the end of any calculation, but at intermediate stages it has to be sufficiently large in order for the $\sigma$-model to describe high enough moments of the observables of interest. The $\sigma$-model field $Q$ is a matrix, $Q = g\Lambda g^{-1}$, where $\Lambda = {\rm diag} (\mathbbm{1}_{m_1}, -\mathbbm{1}_{m_2})$ and $g \in G$. Since $Q$ does not change when $g$ is multiplied on the right ($g \to gk$) by any element $k \in K$, the set of matrices $Q$ realizes the symmetric space $G/K$. Clearly, $Q$ satisfies the constraint $Q^2 = 1$. Roughly speaking, one may think of the symmetric space $G/K$ as a ``generalized sphere''. The action functional of the $\sigma$-model has the following structure: \begin{equation}\label{e2.1} S[Q] = \frac{1}{16\pi t} \int \! d^d r \, \text{Tr} (\nabla Q)^2 + h \int \! d^d r \, \text{Tr} Q\Lambda . \end{equation} Here $Q({\bf r})$ is the $Q$-matrix field depending on the spatial coordinates $\bf r$. The parameter $1/16\pi t$ in front of the first term is 1/8 times the conductivity (in natural units). In the renormalization-group (RG) framework, $t$ serves as a running coupling constant of the theory. While the first term is invariant under conjugation $Q({\bf r}) \to g_0 Q({\bf r}) g_0^{-1}$ of the $Q$-matrix field by any (spatially uniform) element $g_0 \in G$, the second term causes a reduction of the symmetry from $G$ to $K$, i.e.\ only conjugation $Q({\bf r}) \to k_0 Q({\bf r}) k_0^{-1}$ by $k_0 \in K$ leaves the full action invariant. 
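As an elementary illustration of these structures (a toy numerical check for the smallest compact case $m_1 = m_2 = 1$, not part of the replica calculus itself), one may verify that $Q = g \Lambda g^{-1}$ obeys the constraint $Q^2 = 1$ and that $\mathrm{Tr}\, Q\Lambda$ is invariant under conjugation by $K$ but not under conjugation by a generic element of $G$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # Haar-distributed unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))    # fix column phases

Lam = np.diag([1.0, -1.0])        # Lambda for m1 = m2 = 1
g = random_unitary(2)
Q = g @ Lam @ g.conj().T          # Q = g Lambda g^{-1} (g unitary)

# Constraint Q^2 = 1
assert np.allclose(Q @ Q, np.eye(2))

# Conjugation by k0 in K = U(1) x U(1) (diagonal, commutes with Lambda)
# leaves Tr(Q Lambda) unchanged ...
k0 = np.diag(np.exp(1j * rng.normal(size=2)))
Qk = k0 @ Q @ k0.conj().T
assert np.isclose(np.trace(Qk @ Lam), np.trace(Q @ Lam))

# ... while a generic g0 in G = U(2) changes it
g0 = random_unitary(2)
Qg = g0 @ Q @ g0.conj().T
print(np.trace(Q @ Lam).real, "vs", np.trace(Qg @ Lam).real)
```

This makes explicit the symmetry reduction from $G$ to $K$ caused by the second term of (\ref{e2.1}).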
The second term provides an infrared regularization of the theory in infinite volume; physically, $h$ is proportional to the frequency. When studying scaling properties, it is usually convenient to work at an imaginary frequency, which gives a non-zero width to the energy levels. If the physical system has a spatial boundary with coupling to metallic leads, then boundary terms arise which are $K$-invariant like the second term in (\ref{e2.1}). Quite generally, physical observables are represented by composite operators of the corresponding field theory. For the case of the compact $\sigma$-model resulting from fermionic replicas, a classification of composite operators without spatial derivatives was developed by H\"of and Wegner \cite{Hoef86} and Wegner. \cite{Wegner1987a,Wegner1987b} It goes roughly as follows. The composite operators were constructed as polynomials in the matrix elements of $Q$, \begin{align}\label{e2.2} P = \sum_{i_1, \ldots, i_{2k}} T_{i_1,\ldots,i_{2k}} Q_{i_1, i_2} \cdots Q_{i_{2k-1},i_{2k}} . \end{align} Such polynomials transform as tensors under the action ($Q \to g Q g^{-1}$) of the group $G = \mathrm{U}(m_1 + m_2)$. They decompose into polynomials that transform irreducibly under $G$, and composite operators (or polynomials in $Q$) belonging to different irreducible representations of the symmetry group $G$ do not mix under the RG flow. The renormalization within each irreducible representation is characterized by a single renormalization constant. Therefore, fixing any irreducible representation it is sufficient to focus on operators in a suitable one-dimensional subspace. In view of the $K$-symmetry of the action, a natural choice of subspace is given by $K$-invariant operators, i.e. those polynomials that satisfy $P(Q) = P(k_0 Q k_0^{-1})$ for all $k_0 \in K$. It can be shown \cite{Hoef86} that each irreducible representation occurring in (\ref{e2.2}) contains exactly one such operator. 
We may therefore restrict our attention to $K$-invariant operators. By their $K$-invariance, such operators can be represented as linear combinations of operators of the form \begin{equation}\label{e2.3} P_\lambda = \text{Tr} (\Lambda Q)^{k_1} \cdots \text{Tr} (\Lambda Q)^{k_\ell} \end{equation} where $\ell = \mathrm{min} \{ m_1 , m_2 \}$, and $\lambda \equiv \{ k_1, \ldots, k_\ell \}$ is a partition $k = k_1 + \ldots + k_\ell$ such that $k_1\ge \ldots \ge k_\ell \geq 0$. In particular, for $k = 1$ we have one such operator, $\{1\}$, for $k = 2 \leq \ell$ two operators, $\{2\}$ and $\{1, 1\}$, for $k = 3 \leq \ell$ three operators $\{3\}$, $\{2, 1\}$, and $\{1, 1, 1\}$, for $k = 4 \leq \ell$ five operators $\{4\}$, $\{3, 1\}$, $\{2, 2\}$, $\{2, 1, 1\}$, and $\{1, 1, 1, 1\}$, and so on. \cite{burmistrov11} As described below, operators of order $k$ correspond to observables of order $k$ in the LDOS or, equivalently, of order $2k$ in the wave-function amplitudes. It turns out that the counting of partitions yields the number of different irreducible representations that occur for each order $k$ of the operator. More precisely, there is a one-to-one correspondence between the irreducible representations of $G = \mathrm{U}(m_1 + m_2)$ which occur in (\ref{e2.2}) and the set of irreducible representations of $\mathrm{U} (\ell)$, $\ell = \mathrm{min}(m_1,m_2)$, as given by partitions $\lambda = (k_1, \ldots, k_\ell)$. We may also think of the partition $\lambda$ as a Young diagram for $\mathrm{U}(\ell)$. (Please note that Young diagrams and the corresponding partitions are commonly denoted by using parentheses as opposed to the curly braces of the above discussion. An introduction to Young diagrams and their use in our context is given in Appendix \ref{app:notation}.) 
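The counting just described (one operator at $k=1$, two at $k=2$, three at $k=3$, five at $k=4$) is simply the number of partitions of $k$; a short script (an illustrative aside, not part of the formalism) reproduces it:

```python
def partitions(k, max_part=None):
    """Yield all partitions of k as weakly decreasing tuples (k1, ..., kl)."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

for k in range(1, 6):
    parts = list(partitions(k))
    print(k, len(parts), parts)
# k = 1..5 give 1, 2, 3, 5, 7 partitions, matching the operators
# {1}; {2},{1,1}; {3},{2,1},{1,1,1}; ... listed above (for l >= k).
```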
The claimed relation with the representation theory of $\mathrm{U}(\ell)$ becomes plausible if one uses the Cartan decomposition $G = KAK$, by which each element of $G$ is represented as $g = k a k'$, where $k,k'\in K$, $a\in A$, and $A \simeq \mathrm{U}(1)^\ell$ is a maximal abelian subgroup of $G$ with Lie algebra contained in the tangent space of $G/K$ at the origin. In this decomposition one has $Q = k a \Lambda a^{-1} k^{-1}$. A $K$-invariant operator $P$ satisfies $P(Q) = P(k a \Lambda a^{-1} k^{-1}) = P(a \Lambda a^{-1})$. In other words, $P$ depends only on a set of $\ell$ ``$K$-radial'' coordinates for $a \in A \simeq \mathrm{U}(1)^\ell$ -- this is ultimately responsible for the one-to-one correspondence with the irreducible representations of $\mathrm{U}(\ell)$. The $K$-invariant operators associated with irreducible representations are known as zonal spherical functions. For the case of $G/K = \text{U}(2) / \text{U}(1)\times \text{U}(1) = S^2$, which is the usual two-sphere, they are just the Legendre polynomials, i.e.\ the usual spherical harmonics with magnetic quantum number zero; see also Appendix \ref{appendix-hwv}. Please note that here and throughout the paper we use the convention that the symbol for the direct product takes precedence over the symbol for the quotient operation. Thus \begin{displaymath} G / K_1 \times K_2 \equiv G / (K_1 \times K_2). \end{displaymath} {}From the work of Harish-Chandra \cite{helgason84} it is known that the zonal spherical functions have a very simple form when expressed by $N$-radial coordinates that originate from the Iwasawa decomposition $G = NAK$; see Sec.~\ref{s3} below. This will make it possible to connect Wegner's classification of composite operators with our SUSY classification, where we use the Iwasawa decomposition. 
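Before proceeding, it may help to see the Iwasawa factorization in the simplest classical setting (an illustrative sketch for $G = \mathrm{SL}(2,\mathbb{R})$ with $K = \mathrm{SO}(2)$, not the supersymmetric case treated below): here $g = nak$ can be computed in closed form from the second row of $g$.

```python
import numpy as np

def iwasawa_sl2(g):
    """Iwasawa factors g = n @ a @ k for g in SL(2, R):
    n unipotent upper triangular, a positive diagonal, k in SO(2)."""
    c, d = g[1]
    r = np.hypot(c, d)                 # length of the second row of g
    k = np.array([[d / r, -c / r],
                  [c / r,  d / r]])    # rotation matrix, det k = 1
    t = g @ k.T                        # upper triangular, positive diagonal
    a = np.diag(np.diag(t))
    n = t @ np.diag(1.0 / np.diag(t))  # strip the diagonal -> unipotent
    return n, a, k

g = np.array([[2.0, 3.0],
              [1.0, 2.0]])             # det g = 1
n, a, k = iwasawa_sl2(g)
assert np.allclose(n @ a @ k, g)       # factorization reproduces g
assert np.allclose(k @ k.T, np.eye(2))
assert np.isclose(n[1, 0], 0) and np.allclose(np.diag(n), 1)
```

In the SUSY setting the analogous factors $n(g)$, $a(g)$, $k(g)$ are defined by analytic continuation, as explained in Sec.~\ref{s3}.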
H\"of and Wegner \cite{Hoef86} calculated the anomalous dimensions of the polynomial composite operators (\ref{e2.2}) for $\sigma$-models on the target spaces $G(m_1 + m_2)/ G(m_1) \times G(m_2)$ for $G = \mathrm{O}$, $\mathrm{U}$, and $\mathrm{Sp}$ (whose replica limits correspond to the Anderson localization problem in the Wigner-Dyson classes A, AI, and AII, respectively) in $2 + \epsilon$ dimensions up to three-loop order. Wegner\cite{Wegner1987a,Wegner1987b} extended this calculation up to four-loop order. The results of Wegner for the anomalous dimensions are summarized in the $\zeta$-function for each composite operator: \begin{align}\label{e2.4} \zeta_\lambda(t) = a_2(\lambda) \rho(t) + \zeta(3) c_3(\lambda) t^4 + O(t^5), \end{align} where $t$, serving as a small parameter of the expansion, is the renormalized coupling constant of the $\sigma$-model. The coefficients $a_2$ and $c_3$ depend on the operator $P_\lambda$ (defined by the Young diagram $\lambda$) and on the type of model (O, U, or Sp, as well as $m_1$ and $m_2$). The function $\rho(t)$ depends on the model only and not on $\lambda$. The coefficient $a_2$ happens to be the quadratic Casimir eigenvalue associated to the representation with Young diagram $\lambda$ (\ref{a2}). For the case of unitary symmetry (class A), on which we focus, the coefficients satisfy the following symmetry relations: \begin{eqnarray}\label{e2.5} a_2(\lambda, m) &=& - a_2(\tilde\lambda, - m), \\ c_3(\lambda, m) &=& c_3(\tilde\lambda, -m), \label{e2.6} \end{eqnarray} where $\tilde{\lambda}$ is the Young diagram conjugate to $\lambda$, i.e., $\tilde\lambda$ is obtained by reflection of $\lambda$ with respect to the main diagonal. By using the results of Table 2 from Ref.\ [\onlinecite{Wegner1987b}], complementing them with these symmetry relations, and taking the replica limit $m_1 = m_2 = 0$, we can obtain the values of the coefficients $a_2$ and $c_3$ for all polynomial composite operators up to order $k=5$. 
These values are presented in Table \ref{unitary-table}. The function $\rho(t)$ is given for this model (unitary case, replica limit) by \begin{equation}\label{e2.7} \rho(t) = t + {\textstyle{\frac{3}{2}}}\, t^3 . \end{equation} \begin{table}[t] \begin{tabular}{|c|c|c|c|} \hline $|\lambda|$ & $\lambda$ & $a_2$ & $c_3$ \\ \hline 1 & (1) & 0 & 0 \\ \hline 2 & (2) & 2 & 6 \\ & (1,1) & -2 & 6 \\ \hline 3 & (3) & 6 & 54 \\ & (2,1) & 0 & 0 \\ & (1,1,1) & -6 & 54 \\ \hline 4 & (4) & 12 & 216 \\ & (3,1) & 4 & 24 \\ & (2,2) & 0 & 0 \\ & (2,1,1) & -4 & 24 \\ & (1,1,1,1) & -12 & 216 \\ \hline 5 & (5) & 20 & 600 \\ & (4,1) & 10 & 150 \\ & (3,2) & 4 & 24 \\ & (3,1,1) & 0 & 0 \\ & (2,2,1) & -4 & 24 \\ & (2,1,1,1) & -10 & 150 \\ & (1,1,1,1,1) & -20 & 600 \\ \hline \end{tabular} \caption[]{Coefficients $a_2$ and $c_3$ of the $\zeta$-function for class A in the replica limit. Results for composite operators characterized by Young diagrams up to size $|\lambda| = k = 5$ are shown.} \label{unitary-table} \end{table} A note on conventions and nomenclature is in order here. The way we draw Young diagrams (see Appendix \ref{app:notation}) is the standard way. Thus the horizontal direction corresponds to symmetrization and the vertical one to antisymmetrization. In Wegner's approach fermionic replicas are used, hence his natural observables are antisymmetrized products of wave functions, whereas the description of symmetrized products (like LDOS moments) requires the symmetry group to be enlarged. Wegner uses a different convention for labeling the invariant scaling operators, employing the Young diagrams conjugate to the usual ones used here. Thus, for example, the LDOS moment $\langle \nu^q \rangle$ corresponds in our convention to the Young diagram $(q)$, while it is labeled by $(1^q)$ in Wegner's works. This has to be kept in mind when comparing our Table \ref{unitary-table} with Table 2 of Ref.\ [\onlinecite{Wegner1987b}]. 
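The $a_2$ column of Table~\ref{unitary-table} can be cross-checked against the standard quadratic Casimir eigenvalue of a $\mathrm{U}(m)$ irreducible representation with Young diagram $\lambda = (k_1, k_2, \ldots)$, namely $C_2(\lambda) = \sum_j k_j (k_j + m + 1 - 2j)$, taken in the replica limit $m = 0$ (a consistency sketch; we assume here that the normalization matches that of the table):

```python
def a2(diagram, m=0):
    """Quadratic Casimir eigenvalue sum_j k_j (k_j + m + 1 - 2j)."""
    return sum(k * (k + m + 1 - 2 * j) for j, k in enumerate(diagram, start=1))

def conjugate(diagram):
    """Reflect a Young diagram about its main diagonal."""
    return tuple(sum(1 for k in diagram if k >= i)
                 for i in range(1, diagram[0] + 1))

# Entries (lambda, a_2) taken from the table, replica limit m = 0
table = [((1,), 0), ((2,), 2), ((1, 1), -2), ((3,), 6), ((2, 1), 0),
         ((4,), 12), ((3, 1), 4), ((2, 2), 0),
         ((5,), 20), ((4, 1), 10), ((3, 2), 4)]
for lam, val in table:
    assert a2(lam) == val                  # reproduces the a_2 column
    assert a2(conjugate(lam)) == -a2(lam)  # symmetry a_2(conj) = -a_2 at m = 0
```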
Of course, if one uses bosonic replicas, the situation is reversed: the natural objects then are symmetrized products and the roles of the horizontal and vertical directions get switched. While the works [\onlinecite{Hoef86,Wegner1987a,Wegner1987b}] signified a very important advance in the theory of critical phenomena described by non-linear $\sigma$-models, the classification of gradientless composite operators developed there is complete only for compact models. This can be understood already by inspecting the simple example of $\text{U}(2) / \text{U}(1) \times \text{U}(1) = S^2$ (two-sphere), which is the target space of the conventional $\mathrm{O}(3)$ non-linear $\sigma$-model. As was mentioned above, the corresponding $K$-invariant composite operators are the usual spherical harmonics $Y_{l0}$ with $l = 0, 1, \ldots$, which are Legendre polynomials in $\cos\theta$. (The polar angle $\theta$ parametrizes the abelian group $A$, which is one-dimensional in this case.) It is well known that the spherical harmonics indeed form a complete system on the sphere. The angular momentum $l$ plays the role of the size $k = |\lambda|$ of the Young diagram. The situation changes, however, when we pass to the non-compact counterpart, $\text{U}(1,1)/ \text{U}(1) \times \text{U}(1)$, which is a hyperboloid $H^2$. The difference is that now the polar direction (parametrized by the coordinate $\theta$) becomes non-compact. For this reason, nothing forces the angular momentum $l$ to be quantized. Indeed, the spherical functions on a hyperboloid $H^2$ are characterized by a continuous parameter (determining the order of an associated Legendre function) which takes the role of the discrete angular momentum on the sphere $S^2$. See Appendix \ref{appendix-hwv} for more details. The above simple example reflects the general situation: for theories defined on non-compact symmetric spaces the polynomial composite operators by no means exhaust the set of all composite operators. 
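To make this contrast explicit, we recall (as a standard fact) the form of the zonal spherical functions in the two cases:
\begin{align*}
S^2:\quad \varphi_l(\theta) &= P_l(\cos\theta), & l &\in \{0, 1, 2, \ldots\}, \\
H^2:\quad \varphi_s(\theta) &= P_{-\frac{1}{2}+is}(\cosh\theta), & s &\in \mathbb{R} .
\end{align*}
On $S^2$ single-valuedness forces the degree $l$ to be a non-negative integer, while on $H^2$ the degree $-\frac{1}{2}+is$ of the Legendre function (written here for the principal series) varies continuously.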
In the field theory of Anderson localization, we are thus facing the following conundrum: two theories, a compact and a non-compact one [$\text{U}(m_1 + m_2) / \text{U} (m_1) \times \text{U}(m_2)$ resp.\ $\text{U}(m_1, m_2) / \text{U}(m_1) \times \text{U}(m_2)$ for class A], which should describe in the replica limit $m_1 = m_2 = 0$ the same Anderson localization problem, have essentially different operator content. This is a manifestation of the fact that the replica trick has a very tricky character indeed. In this paper we resolve this ambiguity by using an alternative, well-defined approach based on supersymmetry. \vfill\eject \section{SUSY $\sigma$-models: Iwasawa decomposition and classification of composite operators} \label{s3} In the SUSY formalism the $\sigma$-model target space is the coset space \begin{align}\label{sigma-model-class-A} G/K = \text{U}(n,n|2n)/\text{U}(n|n) \times \text{U}(n|n) . \end{align} This manifold combines compact and non-compact features ``dressed'' by anticommuting (Grassmann) variables. Its base manifold $M_0 \times M_1$ is a product of non-compact and compact symmetric spaces: $M_0 = \text{U}(n, n) / \text{U}(n) \times \text{U}(n)$ and $M_1 = \text{U}(2n) / \text{U}(n) \times \text{U}(n)$. The action functional of the SUSY theory still has the same form (\ref{e2.1}), except that the trace Tr is now replaced by the supertrace STr. It is often useful to consider a lattice version of the model (i.e.\ with discrete rather than continuous spatial coordinates); our analysis based solely on symmetry considerations remains valid in this case as well. Furthermore, it also applies to models with a topological term (e.g., for quantum Hall systems in 2D). The size parameter $n$ of the supergroups involved needs to be sufficiently large in order for the model to contain the observables of interest; this will be discussed in detail in Sec.~\ref{s5}. 
The minimal variant of the model with $n = 1$ can accommodate arbitrary moments $\langle \nu^q \rangle$ of the local density of states (LDOS) $\nu$, \cite{gruzberg11} but is in general insufficient to give more complex observables, e.g.\ moments of the Hartree-Fock matrix element (\ref{e1.5}). We will first describe the construction of operators for the $n = 1$ model \cite{gruzberg11} and then the generalization for arbitrary $n$. Our approach is based on the Iwasawa decomposition for symmetric superspaces, \cite{mmz94,alldridge10} generalizing the corresponding construction for non-compact classical symmetric spaces. \cite{helgason78} The Iwasawa decomposition factorizes $G$ as $G = NAK$, where $A$ is (as above) a maximal abelian subgroup for $G/K$, and $N$ is a nilpotent group defined as follows. One considers the adjoint action (i.e.\ the action by the commutator) of elements of the Lie algebra $\mathfrak{a}$ of $A$ on the Lie algebra $\mathfrak{g}$ of $G$. Since $\mathfrak{a}$ is abelian, all its elements can be diagonalized simultaneously. The corresponding eigenvectors in the adjoint representation are called root vectors, and the eigenvalues are called roots. Viewed as linear functions on $\mathfrak{a}$, roots lie in the space $\mathfrak{a}^*$ dual to $\mathfrak{a}$. A system of positive roots is defined by choosing some hyperplane through the origin of $\mathfrak{a}^*$ which divides $\mathfrak{a}^*$ in two halves, and then defining one of these halves as positive. All roots that lie on the positive side of the hyperplane are considered as positive. The nilpotent Lie algebra $\mathfrak{n}$ is generated by the set of root vectors associated with positive roots; its exponentiation yields the group $N$. The Iwasawa decomposition $G = NAK$ represents any element $g\in G$ in the form $g = nak$, with $n\in N$, $a\in A$, and $k\in K$. This factorization is unique once the system of positive roots is fixed. An explanation is in order here. 
The Iwasawa decomposition $G = N A K$ is defined as such only for the case of a non-compact group $G$ with maximal compact subgroup $K$. Now the latter condition appears to exclude the symmetric spaces $G/K$ that arise in the SUSY context, as their subgroups $K$ fail to be maximal compact in general. This apparent difficulty, however, can be circumvented by a process of analytic continuation. \cite{mmz94} Indeed, the classical Iwasawa decomposition $G = N A K$ determines a triple of functions $n : G \to N$, $a : G \to A$, $k : G \to K$ by the uniqueness of the factorization $g = n(g) a(g) k(g)$. In our SUSY context, where $K$ is not maximal compact and the Iwasawa decomposition does not exist, the functions $n(g)$, $a(g)$, and $k(g)$ still exist, but they do as functions on $G$ with values in the complexified groups $N_\mathbb{C}$, $A_\mathbb{C}$, and $K_\mathbb{C}$, respectively. In particular, the Iwasawa decomposition gives us a multi-valued function $\ln a$ which assigns to every group element $g \in G$ an element $\ln a(g)$ of the (complexification of the) abelian Lie algebra $\mathfrak{a}$. Note that for $n_0 \in N$, $k_0 \in K$ one has $a(n_0 g k_0) = a(g)$ by construction. Thus one gets a function $\tilde{a}(Q)$ on $G/K$ by defining $\tilde{a}(g \Lambda g^{-1}) \equiv a(g)$. This function is $N$-radial, i.e., it depends only on the ``radial'' factor $A$ in the parametrization $G/K \simeq NA$ and is constant along the nilpotent group $N$: $\tilde{a} (n_0 Q n_0^{-1}) = \tilde{a}(Q)$. Its multi-valued logarithm $\ln \tilde{a}(Q)$ will play some role in what follows. In the case $n = 1$, which was considered in Ref.\ [\onlinecite{gruzberg11}], the space $\mathfrak{a}^*$ is two-dimensional, and we denote its basis of linear coordinate functions by $x$ and $y$, with $x$ corresponding to the boson-boson and $y$ to the fermion-fermion sector of the theory. 
In terms of this basis we can choose the positive roots to be \begin{align} 2x \,\, (1), && 2iy \,\, (1), && x+iy \,\, (-2), && x-iy \,\, (-2), \label{positive-roots-class-A-n-1} \end{align} where the multiplicities of the roots are shown in parentheses; note that odd roots are counted with negative multiplicity. (A root is called even or odd depending on whether the corresponding eigenspace is in the even or odd part of the Lie superalgebra. Even root vectors are located within the boson-boson and fermion-fermion supermatrix blocks, whereas odd root vectors belong to the boson-fermion and fermion-boson blocks.) For this choice of positive root system, the Weyl co-vector $\rho$ (or half the sum of the positive roots with multiplicities) is \begin{align}\label{rho-first-choice} \rho = -x + iy. \end{align} The crucial advantage of using the symmetric-space parametrization generated by the Iwasawa decomposition is that the $N$-radial spherical functions $\varphi_\mu$ have the very simple form of exponentials (or ``plane waves''), \begin{align}\label{e3.2} \varphi_\mu (Q) &= e^{(\rho + \mu)(\ln \tilde{a}(Q))} \cr &= e^{(-1+\mu_0) x(\ln \tilde{a}(Q)) + (1 + \mu_1)i y(\ln \tilde{a}(Q))}, \end{align} labeled by a weight vector $\mu = \mu_0 x + \mu_1 i y$ in $\mathfrak{a}^*$. The boson-boson component $\mu_0$ of the weight $\mu$ can be any complex number, while the fermion-fermion component is constrained by \begin{align}\label{e3.3} \mu_1 & \in \{-1, -3, -5,\ldots\} \end{align} to ensure that $e^{i (1 + \mu_1)(\ln \tilde{a}(Q))}$ is single-valued in spite of the presence of the logarithm. {}From here on we adopt a simplified notation where we use the same symbol $x$ also for the composition of $x$ with $\ln \tilde{a}$ (and similar for $y$). Thus $x$ may now have two different meanings: either its old meaning as a linear function on $\mathfrak{a}$, or the new one as an $N$-radial function $x \circ \ln \tilde{a}$ on $G/K$. 
It should always be clear from the context which of the two functions $x$ we mean. With this convention, Eq.\ (\ref{e3.2}) reads $\varphi_\mu = e^{\rho + \mu} = e^{(-1+\mu_0) x + (1 + \mu_1)i y}$. We will also use the notation \begin{align}\label{e3.4} q &= \frac{1 - \mu_0}{2}, & p = -\frac{1 + \mu_1}{2} \in {\mathbb Z}_+, \end{align} where ${\mathbb Z}_+$ means the set of non-negative integers. In this notation the exponential functions (\ref{e3.2}) take the form \begin{align}\label{plane-wave} \varphi_\mu \equiv \varphi_{q,p} = e^{-2qx - 2ipy} . \end{align} We mention in passing that the quantization of $p$ is nothing but the familiar quantization of the angular momentum $l$ for the well-known spherical functions on $S^2$. Indeed, the ``momentum'' $p$ is conjugate to the ``radial variable'' $y$ corresponding to the compact (fermion-fermion) sector. The absence of any quantization for $q$ should also be clear from the discussion at the end of Sec.~\ref{s2}. In fact, $q$ is conjugate to the radial variable $x$ of the non-compact (boson-boson) sector, which is a hyperboloid $H^2$. By simple reasoning based on the observation that $A$ normalizes $N$ (i.e., for any $a_0 \in A$ and $n_0 \in N$ one has $a_0^{-1} n_0 a_0 \in N$), each plane wave $\varphi_\mu$ is an eigenfunction of the Laplace-Beltrami operator and all other invariant differential operators on $G/K$. \cite{eigenfunction} [The same conclusion follows from more general considerations based on highest-weight vectors (see Sec. \ref{s8}).] The eigenvalue of the Laplace-Beltrami operator is \begin{align} \label{SUSY-Casimir} \mu_0^2 - \mu_1^2 = 4 q(q-1) - 4 p(p+1) , \end{align} up to a constant factor. 
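The algebra behind Eqs.\ (\ref{rho-first-choice}) and (\ref{SUSY-Casimir}) is elementary and can be verified with a few lines of computer algebra. The sketch below checks that the half-sum of the roots (\ref{positive-roots-class-A-n-1}), weighted by their multiplicities, reproduces $\rho = -x + iy$, and that the substitution (\ref{e3.4}) turns $\mu_0^2 - \mu_1^2$ into $4q(q-1) - 4p(p+1)$.

```python
import sympy as sp

x, y, q, p = sp.symbols('x y q p')

# positive roots with multiplicities as in Eq. (positive-roots-class-A-n-1);
# the odd roots carry negative multiplicity
roots = [(2*x, 1), (2*sp.I*y, 1), (x + sp.I*y, -2), (x - sp.I*y, -2)]
rho = sp.expand(sum(m*r for r, m in roots) / 2)

# weight components from Eq. (e3.4): mu_0 = 1 - 2q, mu_1 = -(1 + 2p)
mu0, mu1 = 1 - 2*q, -(1 + 2*p)
eigenvalue = sp.expand(mu0**2 - mu1**2)
```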
It should be stressed that the $N$-radial spherical functions, which depend only on $a$ in the Iwasawa decomposition $g = nak$, differ from the $K$-radial spherical functions (depending only on $a'$ in the Cartan decomposition $g = k'a'k''$, see Sec.~\ref{s2}), since for a given element $Q = g \Lambda g^{-1}$ or $gK$ of $G/K$ the radial elements $a$ and $a'$ of these two factorizations are different. However, a link between the two types of radial spherical function can easily be established. Indeed, if $\varphi(Q)$ is a spherical function, then for any element $k \in K$ the transformed function $\varphi(k g k^{-1})$ is still a spherical function from the same representation. Therefore, we can construct a $K$-invariant spherical function $\tilde{\varphi}_\mu$ by simply averaging $\varphi_\mu(k^{-1} Q k)$ over $K$, \begin{equation}\label{e3.5} \tilde{\varphi}_\mu(Q) = \int_K dk \, \varphi_\mu(k^{-1} Q k) , \end{equation} provided, of course, that the integral does not vanish. For $n \geq 1$ the space $\mathfrak{a}^*$ has dimension $2n$. We label the linear coordinates as $x_j$, $y_j$ with $j = 1, \ldots, n$; following the notation above, the $x_j$ and $y_j$ correspond to the non-compact and compact sectors, respectively. The positive root system can be chosen as follows: \begin{align} & x_j - x_k\,\, (2), && x_j + x_k \,\, (2), && 2x_j \,\, (1), \nonumber \\ & i(y_l - y_m) \,\, (2), && i(y_l + y_m)\,\, (2), && 2iy_l \,\, (1), \nonumber \\ & x_j + iy_l \,\, (-2), && x_j - iy_l \,\, (-2), \label{positive-roots-class-A-n} \end{align} where $1 \leq j < k \leq n$ and $1 \leq m < l \leq n$. As before, the multiplicities of the roots are given in parentheses, and a negative multiplicity means that the corresponding root is odd, or fermionic. The half-sum of these roots (still weighted by multiplicities) now is \begin{align}\label{half-sum} \rho = \sum_{j=1}^n c_j x_j + i \sum_{l=1}^n b_l y_l \end{align} with \begin{align} \label{e3.6} c_j &= 1 - 2j, & b_l &= 2l-1. 
\end{align} The $N$-radial spherical functions are constructed just like for $n = 1$. They are still ``plane waves'' $\varphi_\mu = e^{\rho + \mu}$, but now the weight vector $\mu$ has $2n$ components $\mu^0_j$ and $\mu^1_l$, the latter of which take values \begin{align}\label{e3.7} \mu^1_l & \in \{-b_l, -b_l - 2, -b_l - 4, \ldots\}. \end{align} We will also write \begin{align}\label{e3.8} q_j &= -\frac{\mu^0_j + c_j}{2}, & p_l = -\frac{\mu^1_l + b_l}{2} \in {\mathbb Z}_+ . \end{align} In this notation our $N$-radial spherical functions are \begin{align}\label{e3.9} \varphi_\mu \equiv \varphi_{q,p} = \exp\Big(-2 \sum_{j=1}^n q_j x_j - 2i \sum_{l=1}^n p_l y_l \Big). \end{align} On general grounds, these are eigenfunctions of the Laplace-Beltrami operator (and all other invariant differential operators) on $G/K$, with the eigenvalue being \begin{align}\label{e3.10} &{\textstyle{\frac{1}{4}}} \sum_{j=1}^n (\mu^0_j)^2 -{\textstyle{\frac{1}{4}}} \sum_{l=1}^n (\mu^1_l)^2 \cr &= \sum_{j=1}^n q_j(q_j + c_j) - \sum_{l=1}^n p_l(p_l + b_l) \\ &= q_1(q_1 - 1) + q_2(q_2 - 3) + \ldots + q_n(q_n - 2n + 1) \cr &- p_1(p_1 + 1) - p_2(p_2 + 3) - \ldots - p_n(p_n + 2n - 1) , \nonumber \end{align} up to a constant factor. \section{SUSY--replica correspondence for the $n=1$ supersymmetric model}\label{s4} Let us summarize the results of the two preceding sections. In Sec.~\ref{s2} we reviewed Wegner's classification of polynomial spherical functions for compact replica models, with irreducible representations labeled by Young diagrams (or sets of non-increasing positive integers giving the length of each row of the diagram). In Sec.~\ref{s3} we presented an alternative classification based on the Iwasawa decomposition of the SUSY $\sigma$-model field. There, the $N$-radial spherical functions are labeled by a set of non-negative integers $p_l$ and a set of parameters $q_j$ that are not restricted to integer or non-negative values.
Obviously, the second classification is broader, in view of the continuous nature of the $q_j$. Furthermore, since the SUSY scheme is expected to give, in some sense, a complete set of spherical functions, it should contain Wegner's classification, i.e., each Young diagram of Sec.~\ref{s2} should occur as some $N$-radial plane wave with a certain set of $p_l$ and $q_j$. We are now going to establish this correspondence explicitly. We begin with the case of minimal SUSY, $n = 1$. The starting point is a representation of Green functions as functional integrals over a supervector field containing one bosonic and one fermionic component in both the retarded and advanced sectors. Correlation functions of bosonic fields are symmetric with respect to the spatial coordinates, whereas correlation functions of fermionic fields are antisymmetric. Thus, within the minimal SUSY model one can represent correlation functions involving symmetrization over one set of variables and/or antisymmetrization over another set. On simple representation-theoretic grounds, it follows that the $n=1$ model is sufficient to account for representations with Young diagrams of the type shown in Fig. \ref{hook}. We refer to such diagrams as {\it hooks} or {\it hook-shaped} for obvious reasons. We introduce two ``dual'' notations (see Appendix \ref{app:notation} for detailed definitions) for Young diagrams by counting the number of boxes either in rows or in columns; in the first case we put the numbers in round brackets and in the second case in square brackets. In particular, the hook diagram of Fig.\ \ref{hook} is denoted either as $(q,1^p)$ or as $[p+1, 1^{q-1}]$. Below, we point out an explicit correspondence between the spherical functions of the $n=1$ SUSY model and these hook diagrams, by computing the values of the quadratic Casimir operators and identifying the relevant physical observables.
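The two dual notations are related by transposition of the Young diagram (exchanging rows and columns). A minimal sketch, with illustrative helper functions `hook` and `conjugate` of our own naming, verifies that the hook $(q, 1^p)$ indeed transposes to $[p+1, 1^{q-1}]$:

```python
def conjugate(rows):
    """Column lengths of a Young diagram given by its row lengths."""
    return [sum(1 for r in rows if r >= i) for i in range(1, max(rows) + 1)]

def hook(q, p):
    """Row lengths of the hook diagram (q, 1^p)."""
    return [q] + [1] * p

rows = hook(6, 4)        # the example of Fig. \ref{hook}: q = 6, p = 4
cols = conjugate(rows)   # expected: [p+1, 1^(q-1)] = [5, 1, 1, 1, 1, 1]
```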
\begin{figure}[t] \ytableausetup{mathmode, boxsize=1.5em} \begin{ytableau} \none & \none[\scriptstyle p+1] & \none[1] & \none[1] & \none[1] & \none[1] & \none[1] \\ \none[q] & & & & & & \\ \none[1] & \\ \none[1] & \\ \none[1] & \\ \none[1] & \end{ytableau} \caption{Hook-shaped Young diagrams $\lambda = (q, 1^p)$ label the scaling operators that can be described within the minimal ($n=1$) SUSY model. The numbers of boxes in each row and in each column are indicated to the left and above the diagram. The example shown in the figure corresponds to $q = 6$, $p = 4$.} \label{hook} \end{figure} Evaluating the quadratic Casimir (\ref{a2}) for the hook diagrams $(q, 1^p) = [p+1,1^{q-1}]$, and taking the replica limit $m = 0$, we get \begin{align}\label{e4.1} a_2\big((q, 1^p); 0\big) &= q(q-1) - 2 - 4 - \ldots - 2p \nonumber \\ &= q(q-1) - p(p+1) , \end{align} which is the same (up to a constant factor) as the eigenvalue of the Laplace-Beltrami operator (\ref{SUSY-Casimir}) associated with the plane wave $\varphi_{q,p}$ (\ref{plane-wave}) in the SUSY formalism. This fully agrees with our expectations and indicates the required correspondence: the Young diagram of the type $(q, 1^p)$ of the replica formalism corresponds to the plane wave $\varphi_{q,p}$ (or, more precisely, to the corresponding representation) of the SUSY formalism. While a full proof of the correspondence follows from our arguments in Sec.~\ref{s6} (see especially Sec.~\ref{s7}), we feel that the agreement between the quadratic Casimir eigenvalues is already convincing enough for our immediate purposes. At this point, it is worth commenting on an apparent ``asymmetry'' between $p$ and $q$ in the above correspondence: the spherical function $\varphi_{q,p}$ corresponds to the Young diagram that has $q$ boxes in its first row but $p+1$ boxes in the first column. The reason for this asymmetry is the specific choice of positive roots (\ref{positive-roots-class-A-n-1}).
If instead of $x - iy$ we chose $-x +iy$ to be a positive root (keeping the other three roots), the half-sum $\rho$ would change to \begin{align}\label{e4.2} \tilde \rho = x-iy. \end{align} This corresponds to a different choice of nilpotent subgroup $\tilde N$ (generated by the root vectors corresponding to the positive roots) in the Iwasawa decomposition, and thus, to another choice of ($\tilde{N}$-)radial coordinates $\tilde x$ and $\tilde y$ on the superspace $G/K$. As a result, the plane wave \begin{align}\label{e4.3} \varphi_{\tilde q, \tilde p}(\tilde x, \tilde y) = e^{-2{\tilde q}{\tilde x} - 2i{\tilde p}{\tilde y}} \end{align} characterized by quantum numbers $\tilde p, \tilde q$ in these new coordinates, is an eigenfunction of the Laplace-Beltrami operator with eigenvalue \begin{align}\label{e4.4} (2{\tilde q} + 1)^2 - (2{\tilde p}-1)^2 &= 4 {\tilde q}({\tilde q} + 1) - 4 {\tilde p}({\tilde p} - 1) . \end{align} This is the same eigenvalue as (\ref{SUSY-Casimir}) if one makes the identifications $\tilde{q} = q - 1$ and $\tilde{p} = p + 1$. We thus see that in the new coordinates the asymmetry between $\tilde p$ and $\tilde q$ is reversed: the function $e^{-2{\tilde q}{\tilde x} - 2i{\tilde p}{\tilde y}}$ corresponds to a hook Young diagram with $\tilde{q}+1$ boxes in the first row and $\tilde{p}$ boxes in the first column. Of course the functions $e^{-2qx - 2ipy}$ and $e^{-2{\tilde q}{\tilde x} - 2i{\tilde p}{\tilde y}}$ with $\tilde q = q-1$ and $\tilde p = p+1$ are not identical (since an $N$-radial function is not $\tilde N$-radial in general), but they belong to the same representation. Choosing a system of positive roots for the Iwasawa decomposition is just a matter of convenience; it is simply a choice of coordinate frame. The positive root system (\ref{positive-roots-class-A-n-1}) is particularly convenient, since with this choice the plane waves corresponding to the most relevant operators (the LDOS moments) depend on $x$ only (and not on $y$). 
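That the two coordinate frames describe the same representations can be checked symbolically: substituting the identifications $\tilde{q} = q - 1$, $\tilde{p} = p + 1$ into the eigenvalue (\ref{e4.4}) reproduces (\ref{SUSY-Casimir}). A sketch:

```python
import sympy as sp

q, p = sp.symbols('q p')
qt, pt = q - 1, p + 1      # identifications between the two frames

ev_old = 4*q*(q - 1) - 4*p*(p + 1)        # Eq. (SUSY-Casimir)
ev_new = 4*qt*(qt + 1) - 4*pt*(pt - 1)    # Eq. (e4.4), right-hand side
```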
Of course, our final results do not depend on this choice. We are now going to identify the physical observables that correspond to the operators of the $n=1$ SUSY model. For $p=0$ the Young diagrams of the type $(q, 1^p)$ reduce to a single row with $q$ boxes, i.e.\ $(q) = [1^q]$, which in our SUSY approach represents the spherical function $e^{-2qx}$. As we have already mentioned, this function corresponds to the moment $\langle \nu^q \rangle$ of the LDOS. Note that for symmetry class A, where the global density of states is non-critical, the moment $\langle \nu^q \rangle$ has the same scaling as the expectation value of the $q$-th power of a critical wave-function intensity, \begin{align}\label{e4.5} A_1({\bf r}) = |\psi({\bf r})|^2 . \end{align} For the unconventional symmetry classes there is a similarly simple relation; one just has to take care of the exponent $x_\rho$ controlling the scaling of the average density of states, see Eqs.~(\ref{e1.1}) and (\ref{e2.2}). The meaning of the subscript in the notation $A_1$ introduced in Eq.~(\ref{e4.5}) will become clear momentarily. We express the equivalence in the scaling behavior by \begin{align} \langle \nu^q \rangle \sim \langle A_1^q({\bf r}) \rangle. \end{align} We now sketch the derivation \cite{gruzberg11} that links $\langle \nu^q \rangle$ with the spherical function $e^{-2qx}$ of the SUSY $\sigma$-model. The calculation of an observable (i.e.\ some correlation function of the LDOS or of wave functions) in the SUSY approach begins with the relevant combination of Green functions being expressed as an integral over a supervector field. 
\cite{mirlin94, efetov-book, mirlin-physrep, zirnbauer04} In particular, retarded and advanced Green functions \begin{equation}\label{e4.5a} G_{R,A} ({\bf r}, {\bf r}') = (E\pm i\eta - \hat{H})^{-1} ({\bf r}, {\bf r}') \end{equation} (where $\eta$ is the level broadening, which for our purposes can be chosen to be of the order of several mean level spacings) are represented as \begin{eqnarray}\label{e4.6} G_R ({\bf r}, {\bf r}') &=& - i \langle S_R({\bf r}) S_R^\ast ({\bf r}')\rangle, \cr G_A ({\bf r}, {\bf r}') &=& i \langle S_A({\bf r}) S_A^\ast ({\bf r}')\rangle . \end{eqnarray} Here $S_{R,A}$ are the bosonic components of the supervector field $\Phi = (S_R,\xi_R,S_A,\xi_A)$ (with subscripts $R,A$ referring to the retarded and advanced subspaces, respectively), and $\langle \ldots \rangle$ on the r.h.s.\ of Eq.~(\ref{e4.6}) denotes the integration over $\Phi$ with the corresponding Gaussian action of $\Phi$. Alternatively, the Green functions can be represented by using the fermionic (anticommuting) components $\xi_{R,A}$; we will return to this possibility below. In order to obtain the $q$-th power $\nu^q$ of the density of states \begin{equation}\label{e4.7} \nu({\bf r}_0) = \frac{1}{2\pi i} \left( G_A({\bf r}_0, {\bf r}_0) - G_R({\bf r}_0, {\bf r}_0) \right) , \end{equation} one has to take the corresponding combination of the bosonic components $S_i$ as a pre-exponential in the $\Phi$ integral: \begin{align}\label{e4.8} \nu^q({\bf r}_0) = \frac{1}{(2\pi)^q q!} \big\langle &\left( S_R({\bf r}_0) - e^{i\alpha} S_A({\bf r}_0) \right)^q \cr \times &\left( S_R^\ast ({\bf r}_0) - e^{-i\alpha} S_A^\ast ({\bf r}_0)\right)^q \big\rangle , \end{align} where $e^{i\alpha}$ is any unitary number. The next steps are to take the average over the disorder and reduce the theory to the non-linear $\sigma$-model form. 
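The definition (\ref{e4.7}) can be made concrete by a small numerical experiment (illustrative only, with a random Hermitian matrix standing in for $\hat H$): at coinciding points, $(G_A - G_R)/2\pi i$ equals the Lorentzian-broadened sum $\sum_n |\psi_n({\bf r}_0)|^2 \, (\eta/\pi)/[(E - E_n)^2 + \eta^2]$ over exact eigenstates, and is manifestly real and positive.

```python
import numpy as np

rng = np.random.default_rng(1)
N, E, eta, r0 = 50, 0.1, 0.05, 7   # illustrative parameters

# random Hermitian "Hamiltonian"
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2

GR = np.linalg.inv((E + 1j * eta) * np.eye(N) - H)   # retarded Green function
GA = np.linalg.inv((E - 1j * eta) * np.eye(N) - H)   # advanced Green function
nu = (GA[r0, r0] - GR[r0, r0]) / (2j * np.pi)        # Eq. (e4.7)

# the same quantity from the spectral decomposition of H
En, psi = np.linalg.eigh(H)   # eigenvectors are the columns of psi
nu_spec = np.sum(np.abs(psi[r0, :])**2 * (eta / np.pi) / ((E - En)**2 + eta**2))
```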
The contractions on the r.h.s.\ of Eq.~(\ref{e4.8}) then generate the corresponding pre-exponential expression in the $\sigma$-model integral: \begin{equation}\label{e4.9} \langle \nu^q \rangle = 2^{-q} \left\langle \big( Q_{RR} - Q_{AA} + e^{-i\alpha} Q_{RA} - e^{i\alpha} Q_{AR} \big)_{bb}^q \right\rangle, \end{equation} where $Q\equiv Q({\bf r}_0)$. The indices $b, f$ refer to the boson-fermion decomposition. Although the following goes through for any value of $\alpha$, we now take $e^{i\alpha} = 1$ for brevity. It is then convenient to switch to ${\mathcal Q} = Q\Lambda \equiv Q\sigma_3$; here we introduce Pauli matrices $\sigma_j$ in the $RA$ space, with $\sigma_3 = \Lambda$. It is also convenient to perform a unitary transformation ${\mathcal Q} \to \tilde{\mathcal Q} \equiv U \mathcal{Q} U^{-1}$ in the $RA$ space by the matrix $U = (1 + i\sigma_1 + i\sigma_2 + i \sigma_3 ) / 2$, which cyclically permutes the Pauli matrices: $U \sigma_j U^{-1} = \sigma_{j-1}$. The combination of $Q_{ij}$ entering Eq.~(\ref{e4.9}) then becomes \begin{equation}\label{e4.10} (1/2)(Q_{RR} - Q_{AA} + Q_{RA} - Q_{AR})_{bb} = \tilde{\mathcal Q}_{AA,bb} . \end{equation} The Iwasawa decomposition $g = nak$ leads to ${\mathcal Q} = n a^2 \sigma_3 n^{-1} \sigma_3$, where we used $k \sigma_3 k^{-1} = \sigma_3$ and $a \sigma_3 a^{-1} = a^2 \sigma_3$. Upon making the transformation ${\mathcal Q} \to \tilde{\mathcal Q}$, this takes the form $\tilde {\mathcal Q} = \tilde{n}\tilde{a}^2\sigma_2\tilde{n}^{-1}\sigma_2$, or explicitly \begin{equation}\label{e4.11} \tilde{\mathcal Q} = \begin{pmatrix} 1 & * & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 1 \end{pmatrix} \!\! \begin{pmatrix} e^{2x} & 0 & 0 & 0 \\ 0 & e^{2iy} & 0 & 0 \\ 0 & 0 & \!e^{-2iy}\! & 0 \\ 0 & 0 & 0 & e^{-2x} \end{pmatrix} \!\! 
\begin{pmatrix} 1 & 0 & 0 & 0 \\ * & 1 & 0 & 0 \\ * & * & 1 & 0 \\ * & * & * & 1 \end{pmatrix}, \end{equation} where the symbol $*$ denotes some non-zero matrix elements of nilpotent matrices, and we have reversed the boson-fermion order in the advanced sector in order to reveal the meaning of the Iwasawa decomposition in the best possible way. As explained above, the variables $x$ and $y$ parametrize the abelian group $A$, which is non-compact in the $x$-direction and compact in the $y$-direction. By observing that the 44-element of the product of matrices on the r.h.s.\ of (\ref{e4.11}) is $e^{-2x}$, it follows that the matrix element (\ref{e4.10}) is equal to \begin{align}\label{e4.12} \tilde{\mathcal Q}_{AA,bb} = e^{-2x}. \end{align} This completes our review of the correspondence between LDOS or wave-function moments and the spherical functions of the SUSY formalism: \cite{gruzberg11} \begin{align}\label{e4.13} \langle A_1^q \rangle \sim \langle \nu^q\rangle \longleftrightarrow \varphi_{q,0} = e^{-2qx} . \end{align} Let us emphasize that, although our derivation assumes $q$ to be a non-negative integer, the correspondence (\ref{e4.13}) actually holds for any complex value of $q$. Indeed, both sides of (\ref{e4.13}) are defined for all $q \in \mathbb{C}$, and by Carlson's theorem the complex-analytic function $q \mapsto \langle \nu^q \rangle$ is uniquely determined by its values for $q \in \mathbb{N}$. At this point the unwary reader might worry that the positivity $\langle \nu^q \rangle > 0$ could be in contradiction with the pure-scaling nature of the operator $\varphi_{q,0} = e^{-2qx}$. Indeed, one might argue that if the symmetry group $G$ were compact, then every observable $A$ that transforms according to a non-trivial irreducible representation of $G$ would have to have zero expectation value with respect to any $G$-invariant distribution.
This apparent paradox is resolved by observing that our symmetry group $G$ is \emph{not} compact (or, if fermionic replicas are used, that the replica trick is very tricky). In fact, the SUSY $\sigma$-model has a non-compact sector which requires regularization by the second term (or similar) in the action functional (\ref{e2.1}). Removing the $G$-symmetry breaking regularization ($h \to 0$) to evaluate observables such as $\langle \nu^q \rangle$, one is faced with a limit of the type $0 \times \infty$ which does lead to a non-zero expectation value $\langle \varphi_{q,0} \rangle \not= 0$. We now turn to Young diagrams $(1^{\tilde p}) = [\tilde p]$ (where we use the notation ${\tilde p} = p+1$ as before), which encode total antisymmetrization by the permutation group. These correspond to the maximally antisymmetrized correlation function of wave functions. In fact, such a diagram gives the scaling of the expectation value of the modulus squared of the Slater determinant, \begin{align}\label{e4.14} A_{\tilde p}({\bf r}_1, \ldots, {\bf r}_{\tilde p}) &= |D_{\tilde p}({\bf r}_1, \ldots, {\bf r}_{\tilde p})|^2, \\ D_{\tilde p}({\bf r}_1, \ldots, {\bf r}_{\tilde p}) &= \text{Det} \begin{pmatrix} \psi_1({\bf r}_1) & \cdots & \psi_1({\bf r}_{\tilde p}) \\ \vdots & \ddots & \vdots \\ \psi_{\tilde p}({\bf r}_1) & \cdots & \psi_{\tilde p}({\bf r}_{\tilde p}) \end{pmatrix}. \end{align} Here all points ${\bf r}_i$ are assumed to be close to each other (on a distance scale given by the mean free path $l$), so that after the mapping to the $\sigma$-model they become a single point. Actually, the scaling of the average $\langle A_{\tilde p} \rangle$ with system size $L$ does not depend on the distances $|{\bf r}_i-{\bf r}_j|$ as long as all of them are kept fixed when the limit $L\to\infty$ is taken. However, we prefer to keep the distances sufficiently small, so that our observables reduce to local operators of the $\sigma$-model. 
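To make the definition (\ref{e4.14}) concrete, the following sketch evaluates $D_{\tilde p}$ on illustrative (random) wave-function data and checks the antisymmetry under exchange of two points, which leaves $A_{\tilde p}$ unchanged and makes $D_{\tilde p}$ vanish whenever two points or two wave functions coincide.

```python
import numpy as np

rng = np.random.default_rng(2)
p_t = 3   # plays the role of \tilde{p}

# psi[i, j] stands in for psi_i(r_j): illustrative random complex data
psi = rng.standard_normal((p_t, p_t)) + 1j * rng.standard_normal((p_t, p_t))

D = np.linalg.det(psi)    # Slater determinant D_{p_t}, Eq. (e4.14)
A = np.abs(D)**2          # A_{p_t}

# exchanging two points (columns) flips the sign of D but leaves A unchanged
D_swapped = np.linalg.det(psi[:, [1, 0, 2]])
```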
Moreover, all of the wave functions $\psi_i$ are supposed to be close to each other in energy (say, within several level spacings). Again, larger energy differences will not affect the scaling exponent; they only set an infrared cutoff that determines the size of the largest system displaying critical behavior. Clearly, $A_{\tilde p}$ reduces to $A_1 = |\psi({\bf r})|^2$ when $\tilde p = 1$ (which was the reason for introducing the notation $A_1$ above). In the SUSY formalism the average $\langle A_{\tilde p}\rangle$ can be represented in the following way. We start with \begin{align}\label{e4.15} A_{\tilde p} \sim \big\langle &[\xi_R^*({\bf r}_1)-\xi_A^*({\bf r}_1)] [\xi_R({\bf r}_1)-\xi_A({\bf r}_1)] \cr \times &[\xi_R^*({\bf r}_2)-\xi_A^*({\bf r}_2)] [\xi_R({\bf r}_2)-\xi_A({\bf r}_2)] \ldots \cr \times &[\xi_R^*({\bf r}_{\tilde{p}})-\xi_A^*({\bf r}_{\tilde{p}})] [\xi_R({\bf r}_{\tilde{p}})-\xi_A({\bf r}_{\tilde{p}})] \big\rangle, \end{align} where $\xi, \xi^*$ are the fermionic components of the supervector $\Phi$ used to represent electron Green functions. This expression can now be disorder averaged and reduced to a $\sigma$-model correlation function. By a calculation similar to that for the moment $\langle \nu^q \rangle$ we now end up with the average of the $\tilde{p}^\mathrm{th}$ moment of the fermion-fermion matrix element $\tilde{\mathcal Q}_{AA,ff}$ of the $\tilde{\mathcal Q}$ matrix. Here the alternative choice of positive root system mentioned above (for which the radial coordinates were denoted by $\tilde x$, $\tilde y$) is more convenient since, in a sense, it interchanges the roles of $x$ and $y$ in the process of fixing the system of positive roots. As a result, we get the correspondence \begin{align}\label{e4.16} A_{\tilde p} \longleftrightarrow \varphi_{0,\tilde p} = e^{-2i{\tilde p} {\tilde y}}.
\end{align} Combining the two examples above [a single-row Young diagram $(q)$ and a single-column Young diagram $(1^{p+1})$], one might guess that a general hook-shaped diagram $(q, 1^p)$ would correspond to the correlator \begin{align}\label{e4.17} \langle A_1^{q-1} A_{p+1} \rangle . \end{align} This turns out to be almost correct: the hook diagram $(q, 1^p)$ indeed gives the leading scaling behavior of (\ref{e4.17}). However, the correlation function (\ref{e4.17}) is in general not a pure scaling operator but contains subleading corrections to the scaling for $(q, 1^p)$. We note in this connection that in the two examples above each of the wave-function combinations $A_1^q$ and $A_{\tilde p}$ corresponds to a single exponential function on the $\sigma$-model target space, and thus to a single $G$-representation. Therefore, at the level of the $\sigma$-model these combinations do correspond to pure scaling operators. (For the LDOS moments $A_1^q$ this was evident from the results of Ref.\ [\onlinecite{gruzberg11}] but we did not stress it there.) We will show below how to construct more complicated wave-function correlators that correspond to pure scaling operators of the $\sigma$-model. \section{General wave-function correlators}\label{s5} Clearly, one can construct a variety of wave-function correlators that are different from the totally symmetric ($A_1^q$) and totally antisymmetric ($A_{\tilde p}$) correlators considered in Sec.~\ref{s4}. One example is provided by correlation functions that arise when one studies the influence of interactions on Anderson and quantum Hall transitions. \cite{burmistrov11} In that context, one is led to consider moments of the Hartree-Fock matrix element (\ref{e1.6}), which involves the antisymmetrized combination (\ref{e1.5}) of two critical wave functions.
In terms of the quantities $A_p$ introduced above, Ref.~[\onlinecite{burmistrov11}] calculated \begin{equation}\label{e5.a1} \big\langle A_2(\psi_1,\psi_2; {\bf r}_1, {\bf r}_2) A_2(\psi_3,\psi_4; {\bf r}_3, {\bf r}_4) \big\rangle, \end{equation} where the expanded notation indicates the wave functions and corresponding coordinates on which the $A_2$ are constructed. Thus all four points and all four wave functions were taken to be different (although all points and all energies were still close to each other). To leading order, the correlator (\ref{e5.a1}) scales in the same way as $\langle A_2^2 \rangle$ (where we take $\psi_1 = \psi_3$, $\psi_2 = \psi_4$, ${\bf r}_1 = {\bf r}_3$, ${\bf r}_2 = {\bf r}_4$). The importance of the phrase ``to leading order'' will become clear in Sec.~\ref{s6}. As we discussed in Sec.~\ref{s4}, the scaling of the average $\langle A_2 \rangle$ is given by the representation with Young diagram $(1^2) = [2]$. The analysis \cite{burmistrov11} of the second moment $\langle A_2^2 \rangle$ shows that its leading behavior is given by the diagram $(2^2)= [2^2]$. A natural generalization of this is the following proposition: the Young diagram \begin{align}\label{eq:general-YD-p} \lambda = [p_1, p_2, \ldots, p_m] \end{align} relates to the replica $\sigma$-model operator that describes the leading scaling behavior of the correlation function \begin{align}\label{e5.1} \langle A_{p_1} A_{p_2} \cdots A_{p_m} \rangle . \end{align} We will argue in Sec.\ \ref{s6} below that this is indeed correct. Here we wish to add a few comments. In general, all combinations $A_{p_i}$ may contain different points and different wave functions (as long as the points and the energies are close) without changing the leading scaling behavior. Thus, a general correlator corresponding to a Young diagram $\lambda$ will involve $|\lambda|$ points and the same number of wave functions. 
However, if \begin{equation} \lambda = [p_1, p_2, \ldots, p_m] = [k_1^{a_1}, \dots, k_s^{a_s}] \end{equation} we may choose to use the same points and wave functions for all $a_i$ combinations $A_{k_i}$ of a given size $k_i$. This yields a somewhat simpler correlator \begin{align}\label{eq:general-corr-p} K_\lambda = \langle A_{k_1}^{a_1} \cdots A_{k_s}^{a_s} \rangle \end{align} with the same leading scaling. If we use the alternative notation \begin{align}\label{eq:general-YD-q} \lambda = (q_1, q_2, \ldots, q_n) = (l_1^{b_1}, \dots, l_s^{b_s}) \end{align} for the Young diagram (\ref{eq:general-YD-p}), the correlator (\ref{eq:general-corr-p}) can also be written as \begin{align}\label{eq:general-corr-q} K_\lambda = \langle A_{b_1}^{l_1 - l_2} A_{b_1 + b_2}^{l_2 - l_3} \cdots A_{b_1 + \ldots + b_{s-1}}^{l_{s-1} - l_s} A_{b_1 + \ldots + b_s}^{l_s} \rangle, \end{align} see Eqs. (\ref{a-l}) and (\ref{k-b}) in Appendix \ref{app:notation}. In fact, as is easy to see, this can also be rewritten in a natural way as \begin{align}\label{e1} K_{(q_1,\ldots, q_n)} = \langle A_1^{q_1 - q_2} A_2^{q_2 - q_3} \cdots A_{n-1}^{q_{n-1} - q_n} A_n^{q_n} \rangle. \end{align} If we introduce the notation \begin{align}\label{e5.2} \nu_1 &= A_1, & \nu_i &= \frac{A_i}{A_{i-1}}, & 2 \leq i \leq n, \end{align} then the correlator $K_\lambda$ can also be cast in the following form: \begin{align}\label{e5.3} K_{(q_1,\ldots, q_n)} = \langle \nu_1^{q_1} \nu_2^{q_2} \cdots \nu_{n-1}^{q_{n-1}} \nu_n^{q_n} \rangle. \end{align} Below we will establish the correspondence of the correlation functions (\ref{e5.3}) with Young diagrams that was stated in this section. We will also show how to build pure-scaling correlation functions and establish a connection with the Fourier analysis on the symmetric space of the SUSY $\sigma$-model. \section{Exact scaling operators}\label{s6} Let us now come back to the issue of exact scaling operators. 
In the preceding section we wrote down a large family (\ref{e5.1}) of wave-function correlators. In general, the members of this family do not show pure scaling. We are now going to argue, however, that if we appropriately symmetrize (or appropriately choose) the points or wave functions that enter the correlation function, then pure power-law scaling does hold. \subsection{An example}\label{s6a} Let us begin with the simplest example illustrating the fact stated above. This example is worked out in detail in Sec.~3.3.3 of Ref.~[\onlinecite{mirlin-physrep}] and is provided by the correlation function \begin{equation}\label{e3} \big\langle |\psi_1({\bf r}_1) \psi_2({\bf r}_2)|^2 \big\rangle . \end{equation} When the two points and the two wave functions are different, this yields \begin{equation}\label{e6.1} \big\langle (Q_{RR,bb} - Q_{AA,bb})^2 \big\rangle = 2 - 2 \big\langle Q_{RR,bb} Q_{AA,bb} \big\rangle \end{equation} after the transformation to the $\sigma$-model. Now the expression $1 - Q_{RR,bb} Q_{AA,bb}$ is not a pure-scaling $\sigma$-model operator: by decomposing it according to representations, one finds that it contains not only the leading term with Young diagram $(2)$, but also the subleading one, $(1,1)$. To get the exact scaling operator for $(2)$, which is \begin{equation}\label{e4} 1 - \big\langle Q_{RR,bb} Q_{AA,bb} + Q_{RA,bb} Q_{AR,bb} \big\rangle, \end{equation} one has to symmetrize the product of wave functions in (\ref{e3}) with respect to points (or wave function indices): the correlator that does exhibit pure scaling is \begin{align} \big\langle |\psi_1({\bf r}_1) \psi_2({\bf r}_2) + \psi_1({\bf r}_2) \psi_2({\bf r}_1)|^2 \big\rangle . \end{align} Alternatively, one can take the points to be equal and consider the correlation function \begin{equation}\label{e5} \big\langle |\psi_1({\bf r}_1)\psi_2({\bf r}_1)|^2 \big\rangle. 
\end{equation} Then one gets the exact scaling operator right away, since the correlation function (\ref{e5}) already has the required symmetry. One can also take the same wave function: \begin{equation}\label{e6} \big\langle |\psi_1({\bf r}_1)\psi_1({\bf r}_2)|^2 \big\rangle, \qquad \big\langle |\psi_1({\bf r}_1)|^4 \big\rangle. \end{equation} All of these reduce to the same exact scaling operator (\ref{e4}) in the $\sigma$-model approximation. \subsection{Statement of result}\label{s6b} The example above gives us a good indication of how to get wave-function correlators corresponding to pure-scaling operators: the product of wave functions should be appropriately (anti)symmetrized before the square of the absolute value is taken. To be precise, in order to get a pure-scaling correlation function for the diagram (\ref{eq:general-YD-p}) [giving the leading scaling contribution to Eq.~(\ref{e5.1})], one should proceed in the following way: \begin{itemize} \item[(i)] View the points and wave functions as filling the Young diagram (\ref{eq:general-YD-p}) by forming the normal Young tableau $T_0$ (see Appendix \ref{app:notation} for definitions); \item[(ii)] Consider the product of wave-function amplitudes \begin{equation}\label{e6.2} \psi_1({\bf r}_1) \psi_2 ({\bf r}_2) \cdots \psi_N({\bf r}_N), \quad N = p_1 + \ldots + p_m. \end{equation} In the notation of Appendix \ref{app:notation} this is $\Psi_\lambda (T_0, T_0)$. \item[(iii)] Perform the Young symmetrization $c_\lambda = b_\lambda a_\lambda$ according to the rules described in Appendix~\ref{app:notation} (symmetrization $a_\lambda$ with respect to all points in each row followed by antisymmetrization $b_\lambda$ with respect to all points in each column). In this way we obtain \begin{equation}\label{e6.2'} \Psi_\lambda(T_0, c_\lambda T_0). \end{equation} \item[(iv)] Take the absolute value squared of the resulting expression: \begin{equation}\label{e6.2''} \big|\Psi_\lambda(T_0, c_\lambda T_0)\big|^2. 
\end{equation} \end{itemize} Several comments are in order here. First, one can define several slightly different procedures of Young symmetrization. Specifically, one can perform it with respect to points (as described above) or, alternatively, with respect to wave functions (obtaining $|\Psi_\lambda (c_\lambda T_0, T_0)|^2$). Also, one can perform the Young symmetrization in the opposite order (first antisymmetrization along the columns, then symmetrization along the rows: $\tilde{c}_\lambda = a_\lambda b_\lambda$). In fact, it is not difficult to see that carrying out $\tilde{c}_\lambda$ with respect to wave functions is the same as performing $c_\lambda$ with respect to points, and vice versa, see Eqs.\ (\ref{YS-action-1}), (\ref{YS-action-2}). While for different schemes one will in general obtain from (\ref{e6.2}) slightly different expressions, they will scale in the same way upon averaging, as they belong to the same irreducible representation. Furthermore, once a Young symmetrization of the product (\ref{e6.2}) has been performed, one can, instead of taking the absolute value squared, simply multiply it with the product $\psi_1^*({\bf r}_1) \psi_2^* ({\bf r}_2) \ldots \psi_N^*({\bf r}_N)$. Finally, the symmetrization with respect to points is redundant if the corresponding points (or wave functions) are taken to be the same, see Eq.\ (\ref{Psi-minimal}). To illustrate the procedure, let us return again to the correlation function (\ref{e5.a1}) \begin{equation}\label{e2} \big\langle A_2(\psi_1,\psi_2; {\bf r}_1, {\bf r}_2) A_2(\psi_3,\psi_4; {\bf r}_3, {\bf r}_4) \big\rangle, \end{equation} considered in Ref.~[\onlinecite{burmistrov11}]. As we have already discussed, its leading scaling is that of the Young diagram $(2^2)$; however, Eq.~(\ref{e2}) also includes corrections due to subleading operators.
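The symmetrization steps (i)-(iv) of Sec.~\ref{s6b} can be sketched in code for the diagram $(2^2)$, whose normal tableau has rows $\{1,2\}$, $\{3,4\}$ and columns $\{1,3\}$, $\{2,4\}$. In the sketch below, random numbers stand in for wave-function values and the slot conventions are our own; it verifies the antisymmetry of the Young-symmetrized product under column exchanges, and that with coincident row points the column antisymmetrization alone factorizes into a product of $2 \times 2$ determinants.

```python
import numpy as np

rng = np.random.default_rng(3)
psi = rng.standard_normal((4, 4))    # psi[i, r]: stand-in for psi_{i+1}(r_r)

def F(r):
    """Product psi_1(r[0]) psi_2(r[1]) psi_3(r[2]) psi_4(r[3]), cf. Eq. (e6.2)."""
    return psi[0, r[0]] * psi[1, r[1]] * psi[2, r[2]] * psi[3, r[3]]

def a_sym(f):
    """Symmetrize over the rows {slots 0,1} and {slots 2,3} of the tableau."""
    return lambda r: (f((r[0], r[1], r[2], r[3])) + f((r[1], r[0], r[2], r[3]))
                      + f((r[0], r[1], r[3], r[2])) + f((r[1], r[0], r[3], r[2])))

def b_antisym(f):
    """Antisymmetrize over the columns {slots 0,2} and {slots 1,3}."""
    return lambda r: (f((r[0], r[1], r[2], r[3])) - f((r[2], r[1], r[0], r[3]))
                      - f((r[0], r[3], r[2], r[1])) + f((r[2], r[3], r[0], r[1])))

Y = b_antisym(a_sym(F))    # c_lambda F = b_lambda a_lambda F for lambda = (2^2)

def det2(i, j, ra, rb):
    """2x2 determinant D_2 built from psi_{i+1}, psi_{j+1} at points ra, rb."""
    return psi[i, ra] * psi[j, rb] - psi[i, rb] * psi[j, ra]
```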
In order to get the corresponding pure-scaling correlation function, we should start from the product $\psi_1({\bf r}_1)\psi_2({\bf r}_2)\psi_3({\bf r}_3)\psi_4({\bf r}_4)$ and apply the Young symmetrization rules corresponding to the diagram $(2^2)$. This will lead to the expression \begin{align}\label{e7} & [\psi_1({\bf r}_1)\psi_2({\bf r}_2) + \psi_1({\bf r}_2)\psi_2({\bf r}_1)] \cr & \times [\psi_3({\bf r}_3)\psi_4({\bf r}_4) + \psi_3({\bf r}_4)\psi_4({\bf r}_3)], \end{align} further anti-symmetrized with respect to the interchange of ${\bf r}_1$ with ${\bf r}_3$ and with respect to the interchange of ${\bf r}_2$ with ${\bf r}_4$. Finally, one should take the absolute value squared. As an alternative to the symmetrization, one can simply set ${\bf r}_1={\bf r}_2$ and ${\bf r}_3={\bf r}_4$ (which means choosing the minimal Young tableau $T_{\text{min}}$ for $T_r$), in which case there is no need to symmetrize. This results in \begin{align} & \big| [\psi_1({\bf r}_1)\psi_3({\bf r}_3) - \psi_1({\bf r}_3)\psi_3({\bf r}_1)] \cr &\times [\psi_2({\bf r}_1)\psi_4({\bf r}_3) - \psi_2({\bf r}_3)\psi_4({\bf r}_1)]\big|^2 . \end{align} A similar expression can be obtained by setting $\psi_1 = \psi_2$ and $\psi_3 = \psi_4$. Finally, one can do both, keeping only two points and two wave functions. This results exactly in \begin{align} \big|\Psi_{(2^2)}\big(T_{\text{min}}, b_{(2^2)}T_{\text{min}} \big)\big|^2 = \big| D_2^2 \big|^2 = A_2^2, \end{align} which is thus a pure scaling operator. This has a natural generalization to the higher-order correlation functions (\ref{e1}) as follows. Let ${\bf r}_1, \ldots, {\bf r}_n$ be a set of $n$ distinct points. For each $m \leq n$ evaluate $A_m$ at the point ${\bf r}_m$ on a set of wave functions $\psi^{(m)}_1, \ldots, \psi^{(m)}_m$. The coincidence of evaluation points takes care of the symmetrization along all rows of the Young diagram. Moreover, the antisymmetrization is included in the definition of the $A_i$.
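The factorization into two $2\times 2$ determinants is easy to confirm numerically. The following pure-Python sketch (representing each wave function by the list of its complex values at the two points ${\bf r}_1$ and ${\bf r}_3$ is merely an illustration device, not part of the formalism) verifies that, with the coincident points chosen as above, the column antisymmetrization alone produces the quoted product of determinants:

```python
import random

random.seed(7)

# Two sampling points r1 and r3, labelled 0 and 1. Each wave function
# is represented by the list of its (complex) values at these points.
def rand_psi():
    return [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]

psi = [rand_psi() for _ in range(4)]      # psi_1 ... psi_4

# Diagram (2^2), minimal tableau: slots 1,2 sit at r1 and slots 3,4 at r3,
# so the row symmetrization is redundant; only the antisymmetrization b
# over the columns {1,3} and {2,4} remains.
b_psi = 0
for t1 in (0, 1):                 # exchange the points of column {1,3}?
    for t2 in (0, 1):             # exchange the points of column {2,4}?
        p = [0, 0, 1, 1]
        if t1:
            p[0], p[2] = p[2], p[0]
        if t2:
            p[1], p[3] = p[3], p[1]
        sign = (-1) ** (t1 + t2)
        b_psi += sign * psi[0][p[0]] * psi[1][p[1]] * psi[2][p[2]] * psi[3][p[3]]

# The factorized form quoted in the text: a product of two 2x2 determinants.
D13 = psi[0][0] * psi[2][1] - psi[0][1] * psi[2][0]
D24 = psi[1][0] * psi[3][1] - psi[1][1] * psi[3][0]
assert abs(b_psi - D13 * D24) < 1e-12
```

Taking the absolute value squared of `b_psi` then gives the pure-scaling observable discussed above.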
Therefore, with such a choice of points the correlation function (\ref{e1}) will show pure scaling. This statement is independent of the choice of wave functions $\psi_j^{(m)}$: all of them can be different, or some of them corresponding to different $m$ can be taken to be equal. The most ``economical'' choice is to take only $n$ different wave functions $\psi_1, \ldots, \psi_n$ and for each $m$ set $\psi_j^{(m)} = \psi_j$ (independent of $m$) for $j = 1, \ldots, m$. This is the choice made by the minimal tableau (see Eq.\ (\ref{Psi-minimal-1})): \begin{align}\label{e6.2'''} \Psi_\lambda(T_{\text{min}}, b_\lambda T_{\text{min}}) = D_1^{q_1 - q_2} D_2^{q_2 - q_3} \cdots D_n^{q_n}, \end{align} where the numbers $(q_1, \ldots, q_n)$ specify the representation with Young diagram $\lambda$ as in Eq.\ (\ref{eq:general-YD-q}). \subsection{Sketch of proof}\label{s6c} We now sketch the proof of the relation between the wave-function correlation functions with the proper symmetry and the $\sigma$-model operators from the corresponding representation. In accordance with Eq.~(\ref{e4.8}), we begin with an integral over a supervector field $S$, \begin{equation}\label{e6.3} \big\langle c_\lambda\{S^-_1({\bf r}_1) \cdots S^-_N({\bf r}_N)\}\, c_\lambda\{S^{*-}_1({\bf r}_1)\cdots S^{*-}_N({\bf r}_N)\}\big\rangle. \end{equation} As before, $S$ denotes the bosonic components of the superfield; the superscript in $S^-$ reflects the structure in the advanced-retarded space: $S^- = S_R - S_A$. This structure ensures that, upon performing contractions, we get the required combinations of Green functions, $G_R - G_A$. We emphasize, however, that one could equally well choose $S_R + S_A$ or $S_R - e^{i\alpha} S_A$ for any $\alpha$, as was done in Eq.\ (\ref{e4.8}). Indeed, by Eq.\ (\ref{e4.6}) all that matters is that the coefficients of $S_R$ and $S_A$ have the same absolute value.
We also mention that the freedom in choosing $\alpha$ is elucidated in more detail in Sec.\ \ref{s8} and Appendix \ref{subsec:hyperboloid}. The subscript of the $S$-fields in Eq.~(\ref{e6.3}) is the replica index. (Recall that we consider an enlarged number of field components.) The symbol $c_\lambda\{\ldots\}$ denotes the Young symmetrization of the replica indices according to the chosen Young diagram $\lambda = (q_1,q_2, \ldots) = [p_1,p_2, \ldots]$, and $N = |\lambda| = \sum p_i = \sum q_j$. It is given by the product $c_\lambda = b_\lambda a_\lambda$ of the corresponding symmetrization and antisymmetrization operators. (Although in Eq.~(\ref{e6.3}) we put $c_\lambda$ twice, it would actually be sufficient to Young-symmetrize only $S$ fields, or only $S^*$ fields.) It is possible to express the correlation function (\ref{e6.3}) in a more economical way (i.e., by introducing fewer field components), without changing the scaling operator that results on passing to the $\sigma$-model. This economy of description is achieved by observing that symmetrization is provided simply by the repeated use of the same replica index: \begin{eqnarray}\label{e6.4} && \big\langle b_\lambda \{S^-_1({\bf r}_1^{(1)})\cdots S^-_1({\bf r}_{q_1}^{(1)}) S^-_2({\bf r}_1^{(2)})\cdots S^-_2({\bf r}_{q_2}^{(2)}) \cdots \cr && \times S^-_n({\bf r}_1^{(n)})\cdots S^-_n({\bf r}_{q_n}^{(n)})\} \cr && \times b_\lambda \{S^{*-}_1({\bf r}_1^{(1)}) \cdots S^{*-}_1({\bf r}_{q_1}^{(1)}) S^{*-}_2({\bf r}_1^{(2)})\cdots S^{*-}_2({\bf r}_{q_2}^{(2)}) \cdots \cr && \times S^{*-}_n({\bf r}_1^{(n)})\cdots S^{*-}_n({\bf r}_{q_n}^{(n)})\} \big\rangle. \end{eqnarray} Here we denoted by ${\bf r}_1^{(j)}, \ldots, {\bf r}_{q_j}^{(j)}$ the points filling the $j$-th row of the Young diagram $(q_1,\ldots q_n) = [p_1, \ldots, p_m]$, and $b_\lambda\{\ldots\}$ still denotes the operation of antisymmetrization along the columns of the Young diagram. 
Performing all Wick contractions and writing Green functions as sums over wave functions, one sees that Eqs.~(\ref{e6.3}) and (\ref{e6.4}) give (up to an irrelevant overall factor) exactly the Young-symmetrized correlation function of wave functions that was described in Sec.~\ref{s6}. Specifically, the obtained correlation function yields the average of Eq.\ (\ref{e6.2''}). By the process of transforming to the $\sigma$-model, the $2N$ field values of $S$ and $S^\ast$ in (\ref{e6.3}), (\ref{e6.4}) get paired up in all possible ways to form a polynomial of $N$-th order in the matrix elements of $Q$. The general rule for this is \cite{mirlin-physrep} \begin{equation} S_{p_1}^{-}({\bf r}_1) S^{*-}_{p_2} ({\bf r}_2) \to f(|{\bf r}_1 - {\bf r}_2|) \widehat{\mathcal Q}_{p_1 p_2} \left( {\textstyle{\frac{1}{2}}} ({\bf r}_1 + {\bf r}_2) \right) , \end{equation} where the prefactor $f(|{\bf r}_1 - {\bf r}_2|) = (\pi\nu)^{-1} {\rm Im} \langle G_A({\bf r}_1, {\bf r}_2)\rangle$ depends on the distance between the two points, and $\widehat{\mathcal{Q}} \equiv \tilde{\mathcal{Q}}_{ AA,bb} = \frac{1}{2} (Q_{RR} - Q_{AA} + Q_{RA} - Q_{AR})_{bb}$ was introduced in Eq.~(\ref{e4.10}). In a 2D system for example, $f(r) = e^{-r/2l} J_0 (k_F r)$. The key properties of the function $f(r)$ are $f(0) = 1$ (more generally, $f(r) \simeq 1$ as long as the distance is much smaller than the Fermi wave length, $r\ll \lambda_F$) and $f(r) \ll 1$ for $r\gg \lambda_F$. In the latter case the corresponding pairing between the fields $S$ and $S^\ast$ can be neglected. 
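The two stated properties of $f(r)$ are easy to confirm numerically. The following pure-Python sketch does so for the 2D example; the values of $k_F$ and $l$ are arbitrary illustrative choices, and $J_0$ is evaluated from its integral representation $J_0(z) = \frac{1}{\pi}\int_0^\pi \cos(z\sin\theta)\,d\theta$:

```python
import math

def J0(z, m=2000):
    # Bessel J0 from its integral representation, midpoint rule.
    return sum(math.cos(z * math.sin(math.pi * (k + 0.5) / m))
               for k in range(m)) / m

k_F, ell = 1.0, 10.0            # illustrative Fermi momentum and mean free path
lam_F = 2 * math.pi / k_F       # Fermi wavelength

def f(r):                       # f(r) = exp(-r/2l) J0(k_F r) in 2D
    return math.exp(-r / (2 * ell)) * J0(k_F * r)

assert abs(f(0.0) - 1.0) < 1e-12       # f(0) = 1
assert f(0.01 * lam_F) > 0.99          # r << lambda_F: f stays close to 1
assert abs(f(20 * lam_F)) < 1e-2       # r >> lambda_F: pairing negligible
```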
Assuming that all points in the correlation function (\ref{e6.3}) are separated by distances $r \gg \lambda_F$, we get an expression of the diagonal structure \begin{equation}\label{e6.5} \big\langle (c_\lambda^{(L)} \otimes c_\lambda^{(R)}) (\widehat{\mathcal Q}_{11} \widehat{\mathcal Q}_{22} \cdots \widehat{\mathcal Q}_{NN}) \big\rangle, \end{equation} where $c_\lambda^{(L)} \otimes c_\lambda^{(R)}$ means that we Young-symmetrize separately with respect to \emph{both} sets of indices (left and right). (If in Eq.~(\ref{e6.3}) only one Young symmetrizer is included, then only the corresponding set of indices is Young symmetrized here; this does not change the irreducible representation that Eq.~(\ref{e6.5}) belongs to.) Similarly, starting from Eq.~(\ref{e6.4}) and assuming that all points are sufficiently well separated, we obtain \begin{equation}\label{e6.6} \big\langle (c_\lambda^{(L)} \otimes c_\lambda^{(R)}) (\widehat{\mathcal Q}_{j_1 j_1} \widehat{\mathcal Q}_{j_2 j_2} \cdots \widehat{\mathcal Q}_{j_N j_N}) \big\rangle, \end{equation} where the first $q_1$ indices $j_i$ are equal to 1, the next $q_2$ are equal to 2, and so on, and the last $q_n$ are equal to $n$. In this case the operator $a_\lambda$ for symmetrization is redundant (since symmetrization of equal indices has a trivial effect) and we may simplify the expression by replacing $c_\lambda$ by the operator $b_\lambda$ for antisymmetrization along the columns of the Young diagram. One can also take some points in the original expressions (\ref{e6.3}), (\ref{e6.4}) to coincide (provided that the result does not vanish upon antisymmetrization); this will not influence the symmetry and scaling nature of the resulting correlation functions. 
To complete our (sketch of) proof, we must show that the polynomial \begin{equation} P_\lambda = (c_\lambda^{(L)} \otimes c_\lambda^{(R)}) (\widehat{\mathcal Q}_{j_1 j_1} \widehat{\mathcal Q}_{j_2 j_2} \cdots \widehat{\mathcal Q}_{j_N j_N}) \end{equation} is a pure-scaling operator of the non-linear $\sigma$-model. This will be achieved by showing that $P_\lambda$ is an eigenfunction of all Laplace-Casimir operators for $G/K$. The latter can be done in two different ways. Firstly, one may argue with the help of the Iwasawa decomposition that $P_\lambda$ is an $N$-radial spherical function and thus has the desired eigenfunction property. In subsection \ref{s7} below, we spell out this argument along with its natural generalization to complex powers $q$. Secondly, it is possible to get the desired result directly (without invoking the Iwasawa decomposition) by showing that the function $P_\lambda$ is a highest-weight vector for the action of $G$ on the matrices $Q$. This is done in subsection \ref{s8}. \subsection{Argument via Iwasawa decomposition}\label{s7} We now argue that the polynomial $P_\lambda$ is an eigenfunction of all Laplace-Casimir operators for $G/K$. To this end, our key observation is that $P_\lambda$ can be written as a product of powers of the principal minors (i.e., in our case, the determinants of the right lower square sub-matrices) of the matrix $\widehat{\mathcal{Q}} \equiv \tilde{\mathcal Q}_{AA,bb}$ for the case of $n$ replicas. Indeed, following the derivation of Eqs.\ (\ref{e6.2'''}), (\ref{Psi-minimal-1}), we can associate the left indices of the $n \times n$ matrix $\widehat{\mathcal Q}$ with one minimal Young tableau, and the right indices with another minimal tableau. 
As a result, if we denote by $d_j$ the principal minor of $\widehat{\mathcal Q}$ of size $j \times j$, we see that \begin{align}\label{e6.8} P_\lambda \propto d_1^{q_1 - q_2} d_2^{q_2 - q_3} \cdots d_n^{q_n} , \end{align} since the Young symmetrizer $c_\lambda$ here acts essentially as the antisymmetrizer $b_\lambda$, producing determinants of the principal submatrices of $\widehat{\mathcal Q}$. The final step of the argument is to show that $P_\lambda$ agrees (up to a constant) with the $N$-radial spherical function $\varphi_{q,0}$ of Eq.\ (\ref{e3.9}): \begin{equation} P_\lambda \propto \varphi_{q,0} , \end{equation} which is already known to have the desired property. For that, let us write $\widehat{\mathcal Q}$ in Iwasawa decomposition as \begin{align} \begin{pmatrix} 1 &\ldots &* &* \\ \vdots &\ddots &\vdots &\vdots \\ 0 &\ldots &1 &* \\ 0 &\ldots &0 & 1 \end{pmatrix} \begin{pmatrix} e^{-2x_n}\!\!\! &\ldots &0 &0 \\ \vdots & \ddots & \vdots &\vdots \\ 0 & \ldots &e^{-2x_2}\!\!\! &0 \\ 0 &\ldots &0 & e^{-2x_1} \end{pmatrix} \begin{pmatrix} 1 &\ldots &0 &0 \\ \vdots &\ddots &\vdots &\vdots \\ * &\ldots &1 &0 \\ * &\ldots &* & 1 \end{pmatrix} . \end{align} (Precisely speaking, this is the Iwasawa decomposition of the full matrix $\tilde{\mathcal Q} =\tilde{n}\tilde{a}^2\sigma_2\tilde{n}^{-1}\sigma_2$ projected to the boson-boson part of the right lower block; cf.\ Eq.\ (\ref{e4.11}).) Due to the triangular form of the first and last matrices in this decomposition, the principal minors $d_j$ of this matrix are \begin{align} d_j = \prod_{i=1}^j e^{-2x_i} = \exp\Big(-2\sum_{i=1}^j x_i\Big). \end{align} When this expression is substituted into Eq.\ (\ref{e6.8}), we get exactly the function $\varphi_{q,0}$ of Eq.\ (\ref{e3.9}) for the set $q = (q_1,\ldots,q_n)$ of positive integers $q_j$. Since this function is an eigenfunction of all Laplace-Casimir operators for $G/K$, it follows that $P_\lambda$ has the same property. This completes our proof.
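The statement about the minors holds for any matrix of the above triangular-diagonal-triangular form, not just in the $\sigma$-model context. A minimal pure-Python sketch (with random unit-triangular factors, used here only as a sanity check) builds such a matrix and verifies $d_j = \exp(-2\sum_{i\le j} x_i)$ for the lower-right blocks:

```python
import math
import random

random.seed(0)
n = 3
x = [random.uniform(-1, 1) for _ in range(n)]      # x_1, x_2, x_3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Unit upper-triangular U, diagonal diag(e^{-2x_n}, ..., e^{-2x_1}),
# unit lower-triangular L, with arbitrary off-diagonal entries.
U = [[1.0 if i == j else (random.uniform(-1, 1) if j > i else 0.0)
      for j in range(n)] for i in range(n)]
L = [[1.0 if i == j else (random.uniform(-1, 1) if j < i else 0.0)
      for j in range(n)] for i in range(n)]
D = [[math.exp(-2 * x[n - 1 - i]) if i == j else 0.0
      for j in range(n)] for i in range(n)]
Q = matmul(matmul(U, D), L)

# d_j = det of the lower-right j x j block = exp(-2 (x_1 + ... + x_j)).
for j in range(1, n + 1):
    block = [row[n - j:] for row in Q[n - j:]]
    assert abs(det(block) - math.exp(-2 * sum(x[:j]))) < 1e-9
```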
To summarize, recall that in Sec.\ \ref{s6b} we specified a certain set of wave-function correlators. Our achievement here is that we have related these correlators to pure-scaling operators of the non-linear $\sigma$-model. By doing so, we have arrived at the prediction that our wave-function correlators exhibit the same pure-power scaling. Finally, let us remark that, although the analysis above was formulated in the language of the SUSY $\sigma$-model, it could have been done equally well for the replica $\sigma$-models. (In the presence of a compact sector, where the Iwasawa decomposition is not available without complexification, it would actually be more appropriate to carry out the final step of the argument by the theory of highest-weight vectors as outlined in Sec.\ \ref{s8} and Appendix \ref{appendix-hwv}.) \subsection{Generalization to arbitrary $q_j$}\label{s6e} We now come to a generalization of our correspondence. The correlators considered in Sec.\ \ref{s6} up to now were polynomials (of even order) in wave-function amplitudes $\psi$ and $\psi^*$, and the resulting $\sigma$-model operators were polynomials in $Q$. The important point to emphasize here is that the wave-function correlation functions (\ref{e1}) are perfectly well-defined for all complex values of the exponents $q_j$ $(j = 1, \ldots, n)$. At the same time, while the polynomial $\sigma$-model operators of Wegner's classification clearly require the numbers $q_j$ to be non-negative integers, the $N$-radial spherical functions (\ref{e3.9}) given by the SUSY formalism, \begin{equation}\label{e6.25} \varphi_{q,0} = \exp\Big(-2 \sum_{j=1}^n q_j x_j \Big), \end{equation} do exist for arbitrary quantum numbers $q = (q_1, \ldots, q_n)$. Thus one may suspect that our correspondence extends beyond the integers to all values of $q$. This turns out to be true by uniqueness of analytic continuation, as follows. We gave an indication of the argument in Sec.\ \ref{s4} and will now provide more detail.
Let $n = 1$ for simplicity (the reasoning for higher $n$ is no different), and consider \begin{equation} f(q) \equiv \langle \nu^q \rangle / \langle \nu \rangle^q \qquad (q \in \mathbb{C}). \end{equation} The triangle inequality gives $| f(q) | \leq f(\mathrm{Re}\, q)$. By the definition of $\nu$ and the fact that the total density of states is self-averaging, one has an a priori bound for positive real values of $q$: \begin{equation} 0 \leq f(q) \leq (L/l)^{dq} \qquad (q \geq 0), \end{equation} where $l$ is the lattice spacing (or UV cutoff) of the $d$-dimensional system. In conjunction with the functional relation \cite{gruzberg11} $f(q) = f(1-q)$, this inequality leads to a bound of the form \begin{equation} | f(q) | \leq e^{C_L (1 + |\mathrm{Re}\,q |)} \qquad (q \in \mathbb{C}), \end{equation} where $C_L \propto \ln L$ is a constant. Thus, in finite volume, $f$ is an entire function of exponential type and is also bounded along the imaginary axis. By Carlson's theorem, this implies that $f$ is uniquely determined by its values on the non-negative integers. It follows that the result of our derivation, taking the pure-scaling correlation functions (\ref{e1}) to $\sigma$-model expectation values of the $N$-radial spherical functions $\varphi_{q,0}$ of (\ref{e6.25}), extends from non-negative integer values of $q$ to all complex values of $q$. This relation is expected to persist in the infinite-volume limit $L \to \infty$.
Our goal is to identify gradientless scaling operators of the $\sigma$-model, i.e., operators that reproduce (up to multiplication by a constant) under transformations of the renormalization group. We know that the change of a local $\sigma$-model operator, say $A$, under an infinitesimal RG transformation can be expressed by differential operators acting on $A$ considered as a function on $G/K$. Assuming that the $\sigma$-model Lagrangian is $G$-invariant, the infinitesimal RG action is by differential operators which are $G$-invariant (also known as Laplace-Casimir operators). Thus a gradientless operator of the $\sigma$-model is a pure scaling operator if it is an eigenfunction of the full set of Laplace-Casimir operators on $G/K$. Such eigenfunctions can be constructed by exploiting the notion of highest-weight vector, as follows. Let $\mfg \equiv \mathfrak{g}_\mathbb{C}$ denote the {\it complexified} Lie algebra of the Lie group $G$. The elements $X \in \mfg$ act on functions $f(Q)$ on $G/K$ as first-order differential operators $\widehat{X}$: \begin{align}\label{e8.1} (\widehat{X} f)(Q) = \frac{d}{dt}\Big|_{t=0} f \big(e^{-tX} Q \, e^{tX}\big). \end{align} By definition, this action preserves the commutation relations: $[\widehat{X}, \widehat{Y}] = \widehat{[X,Y]}$. Fixing a Cartan subalgebra $\mfh \subset \mfg$ we get a root-space decomposition \begin{align}\label{e8.2} \mfg = \mfn_+ \oplus \mfh \oplus \mfn_-, \end{align} where the nilpotent Lie algebras $\mfn_\pm$ are generated by positive and negative root vectors. We refer to elements of $\mfn_+$ ($\mfn_-$) as raising (resp.\ lowering) operators. (Comparing with the Iwasawa decomposition of Sec.\ \ref{s3}, we observe that $\mathfrak{n}_+$ is the same as the complexification of $\mathfrak{n}$, and $\mathfrak{a}$ is a subalgebra of $\mathfrak{h}$, with the additional generators of $\mathfrak{h}$ lying in the complexified Lie algebra of $K$.)
Now suppose that $\varphi_\lambda$ is a function on $G/K$ with the properties \begin{align}\label{e8.3} &1. \quad \widehat{X} \varphi_\lambda = 0 && \text{for all } X \in \mfn_+, \cr &2. \quad \widehat{H} \varphi_\lambda = \lambda(H) \varphi_\lambda && \text{for all } H \in \mfh. \end{align} Thus $\varphi_\lambda$ is annihilated by the raising operators from $\mfn_+$ and is an eigenfunction of the Cartan generators from $\mfh$. Such an object $\varphi_\lambda$ is called a highest-weight vector, and the eigenvalue $\lambda$ is called a highest weight. Since the Lie algebra acts on functions on $G/K$ by first-order differential operators, it immediately follows that the product $\varphi_{ \lambda_1 + \lambda_2} = \varphi_{\lambda_1} \varphi_{\lambda_2}$ of two highest-weight vectors, as well as an arbitrary power $\varphi_{q\lambda} = \varphi_\lambda^q$ of a highest-weight vector, are again highest-weight vectors with highest weights $\lambda_1 + \lambda_2$ and $q\lambda$, respectively. In the compact case the power $q$ has to be quantized (a non-negative integer) so that $\varphi_\lambda^q$ is defined globally on the space $G/K$. On the other hand, in the non-compact case, we can find a {\it positive} ($\varphi_\lambda > 0$) highest-weight vector, and then it can be raised to an arbitrary complex power $q$. Now recall that a Casimir invariant $C$ is a polynomial in the generators of $\mfg$ with the property that $[C,X] = 0$ for all $X \in \mfg$. The Laplace-Casimir operator $\widehat{C}$ is the invariant differential operator which corresponds to the Casimir invariant $C$ by the action (\ref{e8.1}). If a function $\varphi_\lambda$ has the highest-weight properties (\ref{e8.3}), then this function is an eigenfunction of all Laplace-Casimir operators of $G$. 
To see this, one observes that on general grounds every Casimir invariant $C$ can be expressed as \begin{align}\label{e8.4} C = C_\mfh + \sum_{\alpha > 0} D_\alpha X_\alpha, \end{align} where every summand in the second term on the right-hand side contains some $X_\alpha \in \mfn_+$ as a right factor. Thus the second term annihilates the highest-weight vector $\varphi_\lambda$. The first term, $C_\mfh$, is a polynomial in the generators of the commutative algebra $\mfh$ and thus has $\varphi_\lambda$ as an eigenfunction by the second relation in (\ref{e8.3}). In summary, gradientless scaling operators can be constructed as functions that have the properties of a highest-weight vector. To generate the whole set of such operators, one uses the fact that powers and products of highest-weight vectors are again highest-weight vectors. Let us discuss how this construction is related to the Iwasawa decomposition $G = NAK$. $N$-radial functions $f(Q)$ on $G/K$ by definition have the invariance property \begin{align}\label{e8.5} f(n Q n^{-1}) = f(Q) && \forall n \in N. \end{align} Any such function is automatically a highest-weight vector if the nilpotent group $N$ is such that its (complexified) Lie algebra coincides with the algebra $\mfn_+$ of raising operators. Indeed, if $X$ is an element of the Lie algebra of $N$, then \begin{align}\label{e8.6} (\widehat{X} f)(Q) = \frac{d}{dt}\Big|_{t=0} f\big(e^{-tX} Q\, e^{tX}\big) = 0, \end{align} since the expression under the $t$ derivative does not depend on $t$ by the invariance (\ref{e8.5}). In Appendix \ref{appendix-hwv} we implement this construction explicitly. We consider certain linear functions of the matrix elements of $Q$, which we write as \begin{align}\label{e8.7} \mu_Y(Q) = \text{Tr} (YQ). \end{align} {}From the definition (\ref{e8.1}) it is easy to see that \begin{align}\label{e8.8} &(\widehat{X} \mu_Y )(Q) = \frac{d}{dt}\Big|_{t=0} \text{Tr} \big(e^{tX} Y e^{-tX} Q\big) = \mu_{[X,Y]}(Q).
\end{align} Then, if $[X,Y] = 0$, the function $\mu_Y(Q)$ is annihilated by $\widehat{X}$. To construct highest-weight vectors, which are annihilated by all $\widehat{X}$ for $X \in \mfn_+$, we then build certain polynomials from these linear functions, and form products of their powers. In this manner we recover exactly the set of scaling operators (\ref{e6.8}), (\ref{e6.25}). \section{Weyl group and symmetry relations between scaling exponents} \label{s9} In the preceding sections we constructed wave-function observables that show pure-power scaling, by establishing their correspondence with scaling operators of the SUSY $\sigma$-model. Now we are ready to explore the impact of Weyl-group invariance on the spectrum of scaling exponents for these operators (and the corresponding observables) at criticality. The Weyl group $W$ is a discrete group acting on the Lie algebra $\mathfrak{a}$ of the group $A$, or equivalently, on its dual $\mathfrak{a}^*$. Acting on $\mathfrak{a}^*$, $W$ is generated by reflections $r_\alpha$ at the hyperplanes orthogonal to the even roots $\alpha$: \begin{equation}\label{e9.1} r_\alpha: \; \mathfrak{a}^* \to \mathfrak{a}^*, \quad \mu \mapsto \mu - 2\alpha \frac{\langle \alpha,\mu\rangle}{\langle\alpha,\alpha\rangle}, \end{equation} where $\langle\cdot,\cdot\rangle$ is the Euclidean scalar product of the Euclidean vector space $\mathfrak{a}^\ast$. Key to the following is the Harish-Chandra isomorphism, see Refs.~[\onlinecite{helgason78}, \onlinecite{helgason84}] for the classical version and Ref.~[\onlinecite{alldridge10}] for the SUSY generalization (which we need). The statement is that there exists a homomorphism (actually, an isomorphism in classical situations) from the algebra of $G$-invariant differential operators on $G/K$ to the algebra of $W$-invariant differential operators on $A$. 
This homomorphism (or isomorphism, as the case may be) is easy to describe: given a $G$-invariant differential operator $D$ on $G/K$, one restricts it to its $N$-radial part, which can be viewed as a differential operator on $A$, and then performs a so-called Harish-Chandra shift $(\lambda \to \lambda - \rho)$ by the half-sum of positive roots $\rho$. The shifted operator turns out to be $W$-invariant. This property of $W$-invariance is what matters to us here, for it has the consequence that if $\chi_\mu(D)$ denotes the eigenvalue of $D$ on the spherical function (or highest-weight vector) $\varphi_\mu$, see Eq.\ (\ref{e3.9}), then \begin{equation}\label{e9.3} \chi_{w\mu} = \chi_\mu \end{equation} for all $w \in W$. In words: if two spherical functions $\varphi_\mu$ and $\varphi_\lambda$ have highest weights $\lambda = w \mu$ related by a Weyl-group element $w \in W$, then their eigenvalues are the same, $\chi_\mu(D) = \chi_\lambda(D)$, for any $D$. To the extent that the $\sigma$-model renormalization group transformation is $G$-invariant (and hence is generated by some $G$-invariant differential operator on $G/K$), we have the following important consequence: the scaling dimensions of the scaling operators (which arise as eigenvalues of the $G$-invariant operator associated with the fixed point of the RG flow) are $W$-invariant. For our purposes it will be sufficient to focus on the subgroup of the Weyl group which is generated by the following transformations on $\mathfrak{a}^\ast$: (i) sign inversion of any one of the $\mu^0$-components: $\mu_i^0 \to - \mu_i^0$ (reflection at the hyperplane $\mu_i^0 = 0$), and (ii) pairwise exchange of $\mu^0$-components: $\mu_i^0 \leftrightarrow \mu_j^0$ (reflection at the hyperplane $\mu_i^0-\mu_j^0 =0 $). 
In view of Eq.~(\ref{e3.8}) these induce the following transformations of the plane-wave numbers $q_j$: \begin{enumerate} \item[(i)] sign inversion of $q_j + \dfrac{c_j}{2}$ for any $j\in \{1, 2, \ldots, n \}$: \begin{equation}\label{e9.4} q_j \to - c_j - q_j, \end{equation} where $c_j$ is the coefficient in front of $x_j$ in the expression for the half-sum $\rho$ of positive roots, see Eqs.~(\ref{half-sum}), (\ref{e3.6}); \item[(ii)] permutation of $q_i + \dfrac{c_i}{2}$ and $q_j + \dfrac{c_j}{2}$ for some pair $i, j \in \{1, 2, \ldots, n \}$: \begin{equation}\label{e9.5} q_i \to q_j + \frac{c_j - c_i}{2}; \quad q_j \to q_i + \frac{c_i - c_j}{2} . \end{equation} \end{enumerate} By combining all such operations, one generates a subgroup $W_0 \subset W$ of the Weyl group. Whenever two scaling operators with quantum numbers $q = (q_1,q_2,\ldots,q_n)$ and $q' = (q_1',q_2',\ldots,q_n')$ are related by a Weyl transformation $w \in W_0$, the scaling dimensions of these scaling operators must be equal. We now present some examples of this general statement. As before, we focus on class A, for which $c_j = 1-2j$, see Eq.~(\ref{e3.6}). Generalizations to the other classes will be discussed below. Consider first the most symmetric representations $(q)$, which are characterized by a single number $q_1 \equiv q$. (Here, for convenience, we continue to use Young-diagram notation, even though $q_1$ need not be a positive integer and does not correspond to a representation of polynomial type.) The invariance under the Weyl group then implies that the following two representations \begin{equation} \label{e9.6} (q), \quad (1-q), \end{equation} (here we used $c_1 = - 1$) give identical scaling dimensions. This is exactly the symmetry statement (\ref{e1.3}) governing the multifractal scaling of the LDOS moments. Next, consider representations of the type $(q_1,q_2)$. 
By applying the Weyl symmetry operations above, we can generate from it a series of 8 representations: \begin{eqnarray}\label{e9.7} && (q_1,q_2),\ \ (1-q_1,q_2),\ \ (q_1,3-q_2),\ \ (1-q_1,3-q_2), \nonumber \\ && (2-q_2,2-q_1),\ \ (-1+q_2,2-q_1), \nonumber \\ && (2-q_2,1+q_1),\ \ (-1+q_2,1+q_1). \end{eqnarray} Again, all of them are predicted to give the same scaling dimension. As an important example, starting from the trivial representation $(0) \equiv (0,0)$ (i.e.\ the unit operator) we generate the following set: \begin{eqnarray}\label{e9.8} && (0,0)\,,\ \ (1,0)\,,\ \ (0,3)\,,\ \ (1,3)\,, \nonumber \\ && (2,2)\,,\ \ (-1,2)\,,\ \ (2,1)\,,\ \ (-1,1)\,. \end{eqnarray} Since $(0,0)$ has scaling dimension zero, we expect the same to hold for all other representations of this list -- as long as the Anderson-transition fixed point is in class A. This is a remarkable statement. In fact, among the set of representations (\ref{e9.8}), four are of polynomial type and ``standard'' in that they are also present in the replica approach of Wegner. Aside from the trivial representation $(0,0)$, these are $(1,0)$, $(2,2)$, and $(2,1)$. The representation $(1,0)$ corresponds to $\langle Q\rangle$ which is well known to be non-critical in the replica limit. However, for the polynomial representations $(2,2)$ and $(2,1)$, our exact result seems to be new. It is worth emphasizing that Wegner's four-loop perturbative $\zeta$-function \cite{Wegner1987b} is fully consistent with our finding: both coefficients $a_2$ and $c_3$ vanish for these operators in the replica limit, see Table~\ref{unitary-table} above. Moreover, a numerical analysis \cite{burmistrov11} of the correlation function (\ref{e5.a1}), whose leading scaling behavior is controlled by $(2,2)$, also yielded a result consistent with $\chi_{(2,2)} = 0$. For our next example of importance, consider the case of $q_1 = q_2$. 
Inspecting Eq.~(\ref{e9.7}) we see, in particular, that the $\sigma$-model operators $(q,q)$ and $(2-q,2-q)$ have the same scaling dimensions. Now we know [see Sec.~\ref{s7} and Eq.~(\ref{e1})] that the operator $(q,q)$ corresponds to the moment $\langle A_2^q \rangle$ of the Hartree-Fock type correlation function $A_2$, Eq.\ (\ref{e1.5}). Thus we learn that the multifractal spectrum of scaling dimensions for the Hartree-Fock moments $A_2^q$ is symmetric under the reflection $q \leftrightarrow 2-q$. One can continue these considerations and look at equivalences between scaling dimensions for $n = 3$, i.e.\ for operators $(q_1,q_2,q_3)$, and so on. In general, the Weyl orbit of an operator $(q_1,\ldots,q_n)$ with $n$ different components consists of $2^n n!$ operators (due to $2^n$ sign inversions and $n!$ permutations) with equal scaling dimensions. We have checked that all results obtained by Wegner, \cite{Wegner1987b} who analyzed operators described by Young diagrams up to size 5 and up to four loops (see Table~\ref{unitary-table}), are fully consistent with this prediction. This includes the aforementioned representations $(1)$, $(2,1)$, and $(2,2)$, as well as $(3,1,1)$: all of them are related to the trivial representation by Weyl-group operations and, indeed, Wegner obtained zero values of $a_2$ and $c_3$ for all of them (in the replica limit). Furthermore, the operators $(3,2) = [2,2,1]$ and $(3,1) = [2,1,1]$ are clearly related to each other by the Weyl reflection $q_2 \to 3-q_2$. Again, as is shown in Table~\ref{unitary-table}, Wegner's four-loop results, $a_2 = 4$ and $c_3 = 24$, are the same for these. \section{Other symmetry classes}\label{s10} In order to apply the Weyl-symmetry argument to the other symmetry classes, we need the expressions for the half-sum of positive roots for them. More specifically, we will now present the ``bosonic'' part $\rho_b$ (which is a linear combination of the basic functions $x_j$) of $\rho$.
By transcription of the above analysis, its coefficients $c_j$ determine the Harish-Chandra shift entering the Weyl transformation rules for the operators $(q_1,\ldots,q_n)$, see Eqs.~(\ref{e9.4}) and (\ref{e9.5}). The root systems for all symmetry classes are listed in Table~\ref{table:root-systems} of Appendix \ref{appendix-B}. The resulting $\rho_b$ are \begin{equation}\label{e10} \rho_b = \sum c_j x_j , \end{equation} where the coefficients $c_j$ ($j = 1, 2, \dots$) read \begin{align} &c_j = 1-2j, && \text{class A}, \label{e9a} \\ &c_j = -j, && \text{class AI}, \label{e9b} \\ &c_j = 3-4j, && \text{class AII}, \label{e9c} \\ &c_j = 1-4j, && \text{class C}, \label{e9d}\\ &c_j = 1-j, && \text{class D}, \label{e9e}\\ &c_j = -2j, && \text{class CI}, \label{e9f}\\ &c_j = 2-2j, && \text{class DIII}, \label{e9g} \\ &c_j = \frac{1}{2}-j, && \text{class BDI}, \label{e9h} \\ &c_j = 2-4j, && \text{class CII}, \label{e9i} \\ &c_j = 1-2j, && \text{class AIII}. \label{e9j} \end{align} The results obtained above for class A generalize in a straightforward manner to four of the other classes, which comprise the two remaining Wigner-Dyson classes, AI and AII, and two of the Bogoliubov-de Gennes classes, C and CI. The Weyl-symmetry operations involve the pertinent values of $c_j$ in each case. For example, for the most symmetric operators $(q)$ (characterizing the LDOS moments) we obtain the correspondence $(q) \leftrightarrow (-c_1-q)$, where $-c_1$ has value 1 for the classes A, AI, and AII, value 2 for class CI, and value 3 for class C. This is exactly the symmetry (\ref{e1.3}) obtained in Ref.~[\onlinecite{gruzberg11}], with $q_* = - c_1$. Correspondences between the representations with two or more numbers $(q_1, \ldots, q_n)$ are obtained in exactly the same way as described in Sec.~\ref{s9} for class A. 
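As a concrete illustration, the Weyl operations (\ref{e9.4}) and (\ref{e9.5}) are straightforward to implement once the coefficients $c_j$ of a class are given. The following pure-Python sketch (using exact rational arithmetic, since the shifts can be half-integer) regenerates the class-A orbit (\ref{e9.8}) of the trivial representation and the values $q_* = -c_1$ quoted above; the dictionary covers only the five classes discussed in this paragraph:

```python
from fractions import Fraction

c_of = {
    "A":   lambda j: 1 - 2 * j,
    "AI":  lambda j: -j,
    "AII": lambda j: 3 - 4 * j,
    "C":   lambda j: 1 - 4 * j,
    "CI":  lambda j: -2 * j,
}

def reflections(q, cls):
    """Images of q = (q_1, ..., q_n) under the generating Weyl operations."""
    n = len(q)
    c = [Fraction(c_of[cls](j)) for j in range(1, n + 1)]
    out = []
    for i in range(n):                   # (i) sign inversion of q_i + c_i/2
        r = list(q)
        r[i] = -c[i] - q[i]
        out.append(tuple(r))
    for i in range(n):                   # (ii) exchange of q_i + c_i/2
        for j in range(i + 1, n):        #      and q_j + c_j/2
            r = list(q)
            r[i] = q[j] + (c[j] - c[i]) / 2
            r[j] = q[i] + (c[i] - c[j]) / 2
            out.append(tuple(r))
    return out

def weyl_orbit(q, cls):
    seen = {tuple(Fraction(v) for v in q)}
    frontier = list(seen)
    while frontier:
        new = [p for r in frontier for p in reflections(r, cls)
               if p not in seen]
        seen.update(new)
        frontier = new
    return seen

# Class A: the orbit of (0,0) consists of exactly the eight operators (e9.8).
assert weyl_orbit((0, 0), "A") == \
    {(0, 0), (1, 0), (0, 3), (1, 3), (2, 2), (-1, 2), (2, 1), (-1, 1)}

# LDOS symmetry point q_* = -c_1 for each class:
assert {k: -c_of[k](1) for k in c_of} == \
    {"A": 1, "AI": 1, "AII": 1, "C": 3, "CI": 2}
```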
Again, we have checked that the four-loop results of Wegner \cite{Wegner1987b} for the orthogonal and symplectic classes (AI and AII) conform with our exact symmetry relations. Specifically, for class AI, our results imply the following Weyl-symmetry relations (and thus equal values of the scaling dimensions): (i) $(2,2) \leftrightarrow (2)$; (ii) $(1,1) \leftrightarrow (2,1,1)$; (iii) $(3,2) \leftrightarrow (3)$; (iv) $(2,2,1) \leftrightarrow (1) \leftrightarrow (0)$ (scaling exponent equal to zero); these are the Young diagrams up to size $5$ studied in Ref.~[\onlinecite{Wegner1987b}]. For class AII the dual correspondences hold: (i) $(2,2) \leftrightarrow (1,1)$; (ii) $(2) \leftrightarrow (3,1)$; (iii) $(2,2,1) \leftrightarrow (1,1,1)$; (iv) $(3,2) \leftrightarrow (1) \leftrightarrow (0)$. Needless to say, the results of Ref.~[\onlinecite{Wegner1987b}] for the coefficients $a_2$, $c_3$ for these operators do conform with the predicted relations. Generalization to the remaining five classes (D, DIII, BDI, CII, and AIII) is more subtle due to peculiarities of their $\sigma$-model manifolds. We defer this issue to Sec.~\ref{s12}. \section{Transport observables}\label{s11} We now address the question whether the classification and symmetry analysis of the present paper are also reflected in transport observables. To begin, we remind the reader that such a correspondence between wave-function and transport observables has previously been found for the case of the $(q)$ operators. Specifically, one can consider the scaling of moments of the two-point conductance at criticality, \cite{janssen99,klesse01} \begin{equation}\label{e11.1} \langle g^q({\bf r},{\bf r}')\rangle \sim |{\bf r}-{\bf r}'|^{ -\Gamma_q}. 
\end{equation} Actually, $g^q({\bf r},{\bf r}')$ is not a pure-scaling operator \cite{janssen99} (unlike the LDOS moments considered above), thus Eq.~(\ref{e11.1}) should be understood as characterizing the leading long-distance behavior of $\langle g^q({\bf r},{\bf r}')\rangle$. Nevertheless, it turned out that the transport exponents $\Gamma_q$ and the LDOS exponents $x_q = \Delta_q + qx_\rho$ are related as \cite{evers08} \begin{equation}\label{e17} \Gamma_q = \left \{ \begin{array}{ll} 2x_q, & \qquad q\le q_*/2, \\ 2x_{q_*/2}, & \qquad q \ge q_*/2 . \end{array} \right. \end{equation} Notice that while the LDOS spectrum $x_q$ is symmetric with respect to the point $q_*/2 = -c_1/2$, the two-point conductance spectrum $\Gamma_q$ ``terminates'' (i.e., has a non-analyticity and becomes constant) at this point. Yet, the spectrum $\Gamma_q$ clearly carries information about the Weyl symmetry: if one performs its analytic continuation (starting from the region below $q_*/2$), one gets the spectrum $2x_q = 2x_{q_* - q}$. A physically intuitive argument explaining Eq.~(\ref{e17}) is as follows. For sufficiently low $q$, the moments of $g({\bf r},{\bf r}')$ are controlled by small values of the conductance. When $g({\bf r},{\bf r}')$ is small, one can think of it as a tunneling conductance that is proportional to the product of the LDOS at the points ${\bf r}$ and ${\bf r'}$. The corresponding correlation function $\langle \nu^q({\bf r}) \nu^q({\bf r'}) \rangle$ scales with $|{\bf r-r'}|$ with the exponent $2x_q$, in agreement with the first line of Eq.~(\ref{e17}). (This argument can also be cast in the RG language, see the end of Sec.~\ref{s11.2}.) On the other hand, the two-point conductance cannot be larger than unity. For this reason the relation $\Gamma_q = 2 x_q$ does not hold beyond the symmetry point $q = q_*/2$. The moments with $q\ge q_*/2$ are controlled by the probability to have $g({\bf r},{\bf r}')$ of order unity.
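A minimal numerical sketch makes this termination explicit. The parabolic form of $x_q$ and the parameter values below are illustrative assumptions (not results of this paper); the point is only that $\Gamma_q$ tracks $2x_q$ up to the symmetry point and then stays at the value attained there, while the underlying spectrum retains the Weyl symmetry $x_q = x_{q_*-q}$.

```python
import numpy as np

GAMMA, QSTAR = 0.25, 1.0   # illustrative parabolic spectrum (class A has q_* = 1)

def x(q):
    """Toy LDOS spectrum x_q = gamma q (q_* - q), symmetric about q_*/2."""
    return GAMMA * q * (QSTAR - q)

def Gamma(q):
    """Two-point-conductance exponent: follows 2 x_q up to the symmetry
    point q_*/2 and stays constant (at its value there) beyond it."""
    return 2 * x(np.minimum(q, QSTAR / 2))

q = np.linspace(-1.0, 3.0, 401)
assert np.allclose(x(q), x(QSTAR - q))                          # Weyl symmetry
assert np.allclose(Gamma(q[q >= QSTAR / 2]), 2 * x(QSTAR / 2))  # termination
```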
In view of the relation (\ref{e17}), a natural question is whether there are any transport observables corresponding to the composite operators $(q_1,q_2,\ldots)$ beyond the dominant one, $(q)$. We argue below that this is indeed the case, construct explicitly these transport observables, and conjecture a relation between the critical exponents. In order to get some insight into this problem, it is instructive to look first at quasi-1D metallic systems, whose transport properties can be described within the DMPK formalism. \cite{beenakker97, evers08} The rationale behind this is as follows. First, the classification of transport observables that we are aiming at is based (in analogy with the classification of wave-function observables as developed above) purely on symmetry considerations and, therefore, should be equally applicable to metallic systems. Second, a 2D metallic system is ``weakly critical'' (at distances shorter than the localization length), and the corresponding anomalous dimensions can be studied within the perturbative RG (which is essentially the same as Wegner's RG analysis in $2 + \epsilon$ dimensions). By a conformal mapping, a 2D system can be related to the same problem in a quasi-1D geometry (with a power-law behavior translating into an exponential decay). Therefore, if some symmetry properties of spectra of transport observables generically hold at criticality, we may expect to see some manifestations of them already in the solution of the DMPK equation. \subsection{DMPK, localized regime}\label{s11.1} In the DMPK approach, the transfer matrix of a quasi-1D system is described by ``radial'' coordinates (w.r.t.\ a Cartan decomposition) $X_j$, $j = 1, 2, \ldots, N$, where $N$ is the number of channels. All transport properties of the wire are expressed in terms of these radial coordinates. 
In particular, the dimensionless conductance is \begin{equation}\label{e11.2} g = \sum_{j=1}^N T_j = {\rm Tr} \: T = {\rm Tr}\: tt^\dagger\,, \end{equation} where \begin{equation}\label{e11.3} T_j = \frac{1}{\cosh^2 X_j} \end{equation} are the transmission eigenvalues, i.e.\ the eigenvalues of $T = tt^\dagger$ (and $t^\dagger t$), where $t$ is the transmission matrix. The DMPK equations describe the evolution with system length (playing the role of a fictitious time) of the joint distribution function for the transmission eigenvalues (or the coordinates $X_j$), and they have the form of diffusion equations on the symmetric space associated with the noncompact group of transfer matrices. In the localized regime, where the wire length $L$ is much larger than the localization length $\xi$, the typical value of each transmission eigenvalue becomes exponentially large relative to the next one: $1 \gg T_1 \gg T_2 \gg \ldots \gg T_N$. As a result, the equations for the random variables $X_j$ decouple, yielding an advection-diffusion equation for each $X_j$. The solution has a Gaussian form, with both the average $\langle X_j \rangle$ and the variance ${\rm var}(X_j)$ proportional to $L/\xi$ and with ${\rm var}(X_j)$ independent of $j$. Each of the symmetry classes therefore gives rise to a set of numbers $\langle X_j \rangle / {\rm var}(X_j)$ (which depend solely on the corresponding symmetric spaces). Remarkably, comparing the above results (\ref{e9a})--(\ref{e9j}) with the known DMPK results, we observe that for all symmetry classes one has \begin{equation}\label{e11.4} - c_j = \frac{\langle X_j \rangle}{{\rm var}(X_j)} . \end{equation} (In the case of the chiral classes, we note that Eq.~(\ref{e11.4}) holds when the $X_j$ evolve according to the DMPK equations with an even number of channels.) This result allows us to draw a link between the transport quantity $T_j$ and the LDOS observable $\nu_j$ defined in Eq.~(\ref{e5.2}). 
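Relation (\ref{e11.4}) can be probed in a self-contained way (the symmetry class, the variance, and the quadrature grid below are arbitrary illustrative choices of ours): for Gaussian $X_j$ with drift-to-variance ratio $-c_j$, the exponential moments $\langle e^{-2qX_j}\rangle$ come out symmetric under $q \to -c_j - q$.

```python
import numpy as np

def exp_moment(q, mu, v):
    """<exp(-2 q X)> for X ~ N(mu, v), computed by brute-force quadrature."""
    x = np.linspace(mu - 40 * np.sqrt(v), mu + 40 * np.sqrt(v), 400001)
    pdf = np.exp(-(x - mu) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    return np.sum(pdf * np.exp(-2 * q * x)) * (x[1] - x[0])

c1, v = -3, 0.4     # class C, j = 1; the variance is an arbitrary choice
mu = -c1 * v        # Eq. (11.4): <X_j> / var(X_j) = -c_j
for q in (0.3, 0.7, 1.1):
    sym = exp_moment(-c1 - q, mu, v)        # reflected moment
    exact = np.exp(2 * v * q * (q + c1))    # Gaussian closed form
    assert abs(exp_moment(q, mu, v) / sym - 1) < 1e-6
    assert abs(exp_moment(q, mu, v) / exact - 1) < 1e-6
```

The closed form $\exp\{2vq(q+c_j)\}$ used for comparison is simply the Gaussian moment formula, whose reflection point $q = -c_j/2$ is manifest.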
Indeed, if we use the approximation $T_j \approx 4e^{-2X_j}$, which is valid in the localized regime, we get \begin{equation}\label{e11.5} \langle T_j^q \rangle \sim \exp\{2v q(q+c_j)\}, \qquad v = {\rm var}(X_j). \end{equation} This expression for $\langle T_j^q \rangle$ has a point $q = -c_j/2$ of reflection symmetry. [We should add that this requires a continuation of Eq.~(\ref{e11.5}) from its range of actual validity to a region of larger $q$, see the discussion below Eq.~(\ref{e17}).] Now we recall that the scaling of $\langle \nu_j^q \rangle$ is determined by the representation $(0,\ldots,0,q,0,\ldots)$, with $q$ at the $j$-th position, see Eq.~(\ref{e5.3}). Hence $\langle \nu_j^q\rangle \sim \langle \nu_j^{ -c_j-q}\rangle$, i.e.\ the symmetry point of the multifractal spectrum for $\nu_j$ is exactly $-c_j/2$. This links $T_j$ with $\nu_j$, as stated above. We now write \begin{align} \label{e15} T_m &= \frac{T_1 T_2 \cdots T_m}{T_1 T_2 \cdots T_{m-1}} = \frac{S_m}{S_{m-1}}, & S_m =T_1 \cdots T_m, \end{align} and draw an analogy between Eq.~(\ref{e15}) and Eq.~(\ref{e5.2}). Specifically, $T_m$ corresponds to $\nu_m$ (as we have already seen earlier) and $S_m$ to $A_m$. To further strengthen the analogy, we point out that $S_m$ can be presented in the form of the absolute value squared of a determinant. Indeed, consider first $m=2$. Choose two incoming ($p,q$) and two outgoing ($r,s$) channels and consider the $2 \times 2$ matrix $t^{(2)}$ formed by the elements $t_{ij}$, $i=p,q$, $j=r,s$, of the transmission matrix. Then calculate the absolute value squared of the determinant of this matrix, and sum over the channel indices $p,q,r,s$: \begin{align} \label{e16} &\sum_{p,q,r,s} \big|{\rm det}\:t^{(2)}_{ij} \big|^2 = \sum_{p,q,r,s} |t_{pr}t_{qs} - t_{ps}t_{qr}|^2 \nonumber \\ &= 2 ({\rm Tr} \, tt^\dagger)^2 - 2 {\rm Tr} (tt^\dagger)^2 = 2 ({\rm Tr} \, T)^2 - 2 {\rm Tr} \, T^2 \nonumber \\ &= 2\big[(T_1 + T_2)^2 - (T_1^2 + T_2^2)\big] = 4 T_1 T_2 = 4S_2. 
\end{align} The same applies to higher correlation functions: by considering the determinant of an $m\times m$ matrix $t^{(m)}$ and taking its absolute value squared, we get $S_m$ (up to a factor). If the total number of channels is $m$, this is straightforward (the modulus squared of the determinant then equals $T_1 T_2 \cdots T_m$); if the total number of channels is larger than $m$, then, strictly speaking, averaging over the choice of $m$ channels is required. We expect, however, that the determinant will typically behave in the same way for any choice of the channels. To summarize, the transmission eigenvalues $T_m$ of the DMPK model characterize the leading contribution to the decay of transport quantities $S_m/S_{m-1}$, where the $S_m$ are given by the absolute values squared of the determinants of $m\times m$ transmission matrices. There is a clear correspondence between the wave-function observables $\nu_i = A_i/A_{i-1}$ and the transport observables $T_m = S_m/S_{m-1}$. In the next subsection we generalize this construction to critical systems. \subsection{Transport observables at criticality}\label{s11.2} We are now ready to formulate a conjecture about the scaling of subleading transport quantities at criticality. It generalizes the relation (\ref{e17}) between the scaling exponents of the moments of the conductance ($\Gamma_q$) and of the LDOS ($x_q$). Consider a system at criticality and take two points ${\bf r}_1$ and ${\bf r}_2$ separated by a (large) distance $R$. Attach $N$ incoming and $N$ outgoing transport channels near each of these two points. This yields an $N\times N$ transmission matrix $t$. Define $B_m$ as the absolute value squared of the determinant of its upper-left $m\times m$ corner (i.e., of the transmission matrix $t^{(m)}$ for the first $m$ incoming and first $m$ outgoing channels). 
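The determinant manipulations above are easy to confirm numerically. The following is a standalone check with generic random matrices (which need not be subunitary, since the identities are purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(0)

def minor_sum(t):
    """Sum over channel indices of the squared 2x2 minors of t, cf. Eq. (16)."""
    N = t.shape[0]
    return sum(abs(t[p, r] * t[q, s] - t[p, s] * t[q, r]) ** 2
               for p in range(N) for q in range(N)
               for r in range(N) for s in range(N))

# Two channels: the minor sum equals 4 T_1 T_2 = 4 S_2, as in Eq. (16)
t2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
T1, T2 = np.linalg.eigvalsh(t2 @ t2.conj().T)
assert np.isclose(minor_sum(t2), 4 * T1 * T2)

# For any N the sum reduces to the trace formula 2(Tr T)^2 - 2 Tr T^2
t4 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
T = t4 @ t4.conj().T
assert np.isclose(minor_sum(t4),
                  2 * np.trace(T).real ** 2 - 2 * np.trace(T @ T).real)

# With m equal to the total number of channels, |det t|^2 = T_1 T_2 ... T_N
assert np.isclose(abs(np.linalg.det(t4)) ** 2,
                  np.prod(np.linalg.eigvalsh(T)))
```

The quantities $B_m$ just defined are single such squared minors, without the channel sum.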
This lets us build a family of transport correlation functions ($n\le N$): \begin{eqnarray}\label{e18} M_{q_1 q_2 \ldots q_n}(R) &=& \big\langle B_1^{q_1-q_2} B_2^{q_2-q_3} \cdots B_{n-1}^{q_{n-1}-q_n}B_n^{q_n} \big\rangle \nonumber \\ & = & \langle \tau_1^{q_1} \cdots \tau_n^{q_n}\rangle, \end{eqnarray} where $\tau_n = B_n / B_{n-1}$. The conjecture is that the critical index $\Gamma_{q_1 q_2 \ldots q_n}$ determining the leading dependence on $R$ of $M_{q_1 q_2\ldots q_n}(R)$ is \begin{equation}\label{e19} \Gamma_{q_1q_2\ldots q_n} = 2 x_{q_1q_2\ldots q_n}, \end{equation} where $x_{q_1 q_2 \ldots q_n}$ is the scaling exponent of the $\sigma$-model operator $(q_1,\ldots, q_n)$ for the correlator (\ref{e5.3}). This is a generalization of Eq.~(\ref{e17}). As with Eq.\ (\ref{e17}), the relation~(\ref{e19}) is expected to be valid only for $q_i$ not too large; probably, the condition is $q_i \le - c_i/2$ for all $i$. Let us sketch an RG argument in favor of Eq.~(\ref{e19}). We expect that the quantity (\ref{e18}) is represented in field-theory language as a correlation function of two local operators (at points ${\bf r}_1$ and ${\bf r}_2$, respectively), each of which has the same scaling properties as $\nu_1^{q_1} \cdots \nu_n^{q_n}$. Performing an RG transformation that reduces the scale $R$ down to a microscopic scale, we will then get a factor $R^{-2x_{q_1 q_2 \ldots q_n}}$. After this the correlation function becomes of the order of unity; thus, we obtain (\ref{e19}). Possibly, a rigorous proof may be constructed for class A by a generalization of the formula of Ref.~[\onlinecite{klesse01}]. It should be stressed that we do not expect the correlation functions Eq.~(\ref{e18}) to show pure scaling: as we pointed out, not even the moments of the conductance show it. 
\cite{janssen99} \section{Classes with O(1) and U(1) additional degrees of freedom} \label{s12} There are five symmetry classes with $\sigma$-model target spaces that either have two connected components and thus an associated $\mathbb{Z}_2 = \text{O}(1)$ degree of freedom [classes D and DIII], or have $\mathbb{Z}$ for their fundamental group due to the presence of a U(1) degree of freedom [classes BDI, CII, AIII]. These degrees of freedom complicate the application of our Weyl-symmetry argument. We mention in passing that the classes at hand are the five symmetry classes that feature topological insulators in 1D (precisely because, owing to the O(1) and U(1) degrees of freedom, their $\sigma$-model spaces have the said topological properties). Below we briefly outline our present understanding of the Weyl-symmetry issue for these classes and the open questions. \subsection{Classes D and DIII}\label{s12.1} The target manifolds of the $\sigma$-models for these symmetry classes consist of two disjoint parts [${\text O}(1) = \mathbb{Z}_2$ degree of freedom]. In general, the $\sigma$-model field can ``jump'' between the two components, thereby creating domain walls. The arguments based on the Weyl symmetry in the form presented above apply directly if such domain walls are prohibited (i.e.\ if the $\sigma$-model field stays within a single component of the manifold). There are several situations when this is the case: \begin{itemize} \item The DMPK model of a quasi-1D wire does not include domain walls. \cite{gruzberg05} This explains the agreement between our symmetry result and the DMPK results for these two classes; \item The O(1) version of the Chalker-Coddington network model in 2D; \cite{chalker02} \item A good metal in 2D. (In this case domain walls are, strictly speaking, present but their effect is exponentially small and thus expected to be negligible.) 
\end{itemize} Note that the Weyl-group invariance of the LDOS moments for the classes D and DIII yields the symmetry point $q_* / 2 = - c_1 / 2 = 0$. This implies that the distribution function $P(\ln \nu)$ is symmetric under $\ln \nu \to - \ln \nu$ [see Eq.~(\ref{e1.4}) with $q_* = 0$], i.e.\ $\ln\nu = 0$ is the most probable (or typical) value. This result is incompatible with exponential localization of the eigenstates, which would imply exponentially small typical LDOS values. We thus arrive at the conclusion that, in the absence of domain walls, systems described by the $\sigma$-model for class D or DIII cannot have a localized phase. The models listed in the previous paragraph exemplify this general statement. In the case of a good 2D metal in class D or DIII, the scaling behavior can be found by perturbative RG, with the smallness of the inverse conductance $1/g \ll 1$ ensuring the validity of the loop expansion. In particular, the one-loop RG calculation of the average DOS scaling yields \cite{bocquet00} $\langle\nu\rangle \propto \ln L \propto g(L)$. We know that the scaling exponents for the LDOS moments depend quadratically on $q$ in one-loop approximation (which is governed by the quadratic Laplace-Casimir operator). Therefore, in view of the $q \to -q$ Weyl symmetry (which eliminates any term linear in $q$ from the exponent) and the normalization $\langle\nu\rangle \propto \ln L$ at $q = 1$, we expect the LDOS moments to behave as \begin{equation}\label{e20} \langle\nu^q\rangle \propto (\ln L)^{q^2}. \end{equation} It should of course be possible to check this directly by a numerical calculation. \subsection{Chiral classes}\label{s12.2} For the chiral classes, the situation is even more subtle. We expect that the Weyl-group invariance should show up most explicitly in operators that are scalars with respect to the additional U(1) degree of freedom. The LDOS moments, however, do not belong to this category. We leave the SUSY-based classification of operators and the investigation of the impact of the Weyl-group invariance to future work.
\vfill\eject \section{Summary and outlook}\label{s13} In this paper we have developed a classification of composite operators without spatial derivatives at Anderson-transition critical points in disordered systems. These operators represent observables describing correlations of the local density of states (or wave-function amplitudes). Our classification is motivated by the Iwasawa decomposition for the (complexification of the) supersymmetric $\sigma$-model field. The Iwasawa decomposition has the attractive feature that it gives rise to spherical functions which have the form of ``plane waves'' when expressed in terms of the corresponding radial coordinates. Viewed as composite operators of the $\sigma$-model, these functions exhibit pure-power scaling at criticality. Alternatively, and in fact more appropriately, the same operators can be constructed as highest-weight vectors. We further showed that a certain Weyl-group invariance (due to the Harish-Chandra isomorphism) leads to numerous exact symmetry relations among the scaling dimensions of the composite operators. Our symmetry relations generalize those derived earlier for the multifractal exponents of the leading operators. While we focused on the Wigner-Dyson unitary symmetry class (A) in most of the paper, we have also sketched the generalization of our results to some other symmetry classes. More precisely, our results are directly applicable to five (out of the ten) symmetry classes: the three Wigner-Dyson classes (A, AI, AII) and two of the Bogoliubov-de Gennes classes (C and CI). Moreover, they should also be valid for the remaining two Bogoliubov-de Gennes classes (D and DIII), as long as $\sigma$-model domain walls are suppressed (i.e.\ the $\sigma$-model field stays within a single component of the manifold). Our results imply that in this situation the system is protected from Anderson localization. 
In other words, localization in the symmetry classes D and DIII may take place only due to the appearance of domain walls. We have further explored the relation of our results for the LDOS (or wave-function) correlators to transport characteristics. We have constructed transport observables that are counterparts of the composite operators for wave-function correlators and conjectured a relation between the scaling exponents. Our work opens a number of further research directions; here we list some of them. \begin{enumerate} \item[(i)] Verification of our predictions by numerical simulation of systems housing critical points of various dimensionalities, symmetries, and topologies would be highly desirable. While the LDOS multifractal spectra have been studied for a considerable number of critical points, the numerical investigation of the scaling of subleading operators is still in its infancy. Preliminary numerical results for the spectra of scaling exponents of the moments $\langle A_2^q\rangle$ and $\langle A_3^q\rangle$ at the quantum Hall critical point \cite{bera-evers-unpublished} do support our predictions. Furthermore, it would be very interesting to check numerically our predictions for the scaling of transport observables. \item[(ii)] As mentioned in Sec.~\ref{s12.2}, it remains to be seen to what extent our results can be generalized to the chiral symmetry classes, and what their implications for observables will be. \item[(iii)] In this work, we have studied critical points of non-interacting fermions. In some cases the electron-electron interaction is RG-irrelevant at the fixed point in question, so that the classification remains valid in the presence of the interaction. An example of such a situation is provided by the integer quantum Hall critical point with a short-range electron-electron interaction. \cite{Lee96,Wang00,burmistrov11} However, if the interaction is of long-range (Coulomb) character, the system is driven to another fixed point. 
(This also happens in the presence of short-range interactions for fixed points with spin-rotation symmetry: in this case, the Hartree-Fock cancelation of the leading term in the two-point function (\ref{e1.5}) does not take place.) The classification of operators and relevant observables at such interacting fixed points, as well as the analysis of possible implications of the Weyl-group invariance, remain challenging problems for future research. \end{enumerate} \section{Acknowledgments} We thank V.~Serganova for useful discussions, and S.~Bera and F.~Evers for informing us of unpublished numerical results. \cite{bera-evers-unpublished} This work was supported by DFG SPP 1459 ``Graphene'' and SPP 1285 ``Semiconductor spintronics'' (ADM). IAG acknowledges the DFG Center for Functional Nanostructures for financial support of his stay in Karlsruhe, and the NSF Grants No. DMR-1105509 and No. DMR-0820054. ADM acknowledges support by the Kavli Institute for Theoretical Physics (University of California, Santa Barbara) during the completion of this work. MRZ acknowledges financial support by the Leibniz program of the DFG.
\section{Introduction} Pulsars are rotating neutron stars, spinning down as they lose their rotational energy in the form of a pair-plasma wind, $\gamma$- and X-rays, and radio waves. Millisecond pulsars (MSPs), with periods $\lesssim 20$ ms and fields $\sim 10^8$ G, have long been thought to be old pulsars spun up by a ``recycling'' process \citep{smarr76a,alpar82a,radhakrishnan82a} in low-mass X-ray binaries (LMXBs). During this process, material from a lighter companion is transferred to the pulsar via an accretion disk, spinning it up. This general model is well-supported by observations of neutron star LMXBs, which in a few cases actually show millisecond X-ray pulsations as the accreting material forms hotspots at the magnetic poles of the neutron star \citep[the so-called accreting millisecond X-ray pulsars or AMXPs;][and references therein]{wijnands98a,patruno12a}. That said, the end of this recycling process, when accretion stops and the radio pulsar becomes active, remains mysterious: why and when does accretion stop? How does the episodic nature of LMXB accretion affect the spin frequency of the neutron star? What happens when the accretion rates are too low to overcome the ``centrifugal barrier'' imposed by the pulsar's magnetic field and spin? What happens to the accretion stream when the pulsar becomes active, producing a powerful pair-plasma wind? What is the ultimate fate of the companion in such a system? A range of ideas have been put forward to answer these questions, but they have been very difficult to test against observations due to the lack of systems known to be in such transitional states. 
Recently, however, three systems have been observed to transition between an (eclipsing) radio pulsar state and a LMXB state where an accretion disk is present: PSR~J1023+0038 \citep[hereafter J1023;][]{archibald09a,patruno14a,stappers14a}, PSR J1824$-$2452I \citep[M28I;][]{papitto13a,linares14a}, and XSS~J12270$-$4859 \citep[hereafter XSS~J12270;][]{bassa14a,roy15a}. A fourth system, 1RXS~J154439.4$-$112820, is also suspected to be a member of this same class of transitional systems, but has not yet been seen as a radio pulsar \citep{bogdanov15a}. It is already clear that the end of accretion is an episodic process. During the episodes where an accretion disk is present, several possible inner-disk behaviours are suggested by X-ray observations. M28I exhibited millisecond X-ray pulsations and thermonuclear bursts at relatively high X-ray luminosities ($L_X \sim 10^{36}$ erg s$^{-1}$), attributes consistent with AMXPs during outburst. J1023 and XSS~J12270, however, have never been observed by X-ray all-sky monitors to exceed $L_X \sim 10^{34.5}$ erg s$^{-1}$, and the nature of these relatively faint X-ray states is unclear. One possibility would be so-called ``propeller-mode accretion'', in which material from the companion forms an accretion disk extending down into the pulsar's light cylinder and shorting out the radio pulsar activity \citep{illarionov75a,ustyugova06a}. In propeller-mode accretion, the pressure of infalling material is balanced by the magnetic field of the neutron star outside the corotation radius; as a result, the neutron star's rotation accelerates the inner edge of the disk and material cannot fall further inward (the ``centrifugal barrier''), instead being ejected in polar outflows.
If the gas cannot be expelled from the system \citep[for example, because the centrifugal barrier is insufficient to bring the gas to the escape velocity;][]{spruit93a}, then the inner accretion disk regions can become a ``dead disk'' \citep{syunyaev77a} and remain trapped near co-rotation \citep{dangelo10a,dangelo12a}. In this case episodic accretion onto the neutron star is still possible. \citet{eksi05a} suggest that there may even be a narrow window in which a disk can remain stably balanced outside the pulsar's light cylinder, in which case excess material might be expelled from the system by the wind from the active pulsar. Very recently, the detection of X-ray pulsations in J1023 has shown that a trickle of matter is reaching the neutron star surface during the LMXB state \citep{archibald15a}, ruling out models in which none of the accreting material manages to pass the centrifugal barrier. After submission of this manuscript, coherent X-ray pulsations from XSS J12270 were also found \citep{papitto15a}, highlighting the remarkable similarity between these two systems. Radio imaging observations provide a complementary viewpoint on the accretion processes in LMXBs, since the radio emission is thought to be primarily generated in outflowing material. The most luminous (as measured from their X-ray emission) neutron star LMXBs, known as the ``Z-sources'', have all shown radio continuum emission, which in a number of cases has been resolved into collimated jets \citep[e.g.][]{fender98a,fomalont01a,spencer13a}. Several of the more numerous, lower-luminosity ``atoll sources'' have also shown continuum radio emission, although other than a marginal detection in Aql X-1 \citep{miller-jones10a}, these have not been spatially resolved, and their intrinsic radio faintness has typically precluded detailed study.
Black hole LMXBs show significantly brighter radio continuum emission than their neutron star counterparts \citep{migliari06a}, which at high luminosities has been spatially resolved into extended jets \citep{dhawan00a,stirling01a}. At lower luminosities, the continuum radio emission from black hole systems is attributed to conical, partially self-absorbed jets \citep{blandford79a}, owing to the flat radio spectra, the high radio brightness temperatures, and the unbroken correlation between radio and X-ray emission \citep[$L_{\rm r}\propto L_{\rm X}^{0.7}$; e.g.][]{corbel00a,gallo03a}. While this correlation is seen to hold down to the lowest detectable X-ray luminosities in the black hole systems \citep{gallo06a,gallo14a}, the radio behaviour of neutron stars has not to date been studied at X-ray luminosities $\lesssim10^{35}$\,erg\,s$^{-1}$. The solid surface of a neutron star should make the accretion flow radiatively efficient, in contrast to the radiatively inefficient accretion flows around quiescent black holes, and simple accretion theory then predicts a steeper radio/X-ray correlation index for neutron star systems, of $L_{\rm r}\propto L_{\rm X}^{1.4}$ \citep[e.g.][]{migliari06a}. While possible correlations between the radio and X-ray emission have been reported for two systems \citep{migliari03a,tudose09a}, they were only derived over a small range in X-ray luminosity, and differ markedly in the reported correlation indices. Deep radio imaging observations of low-luminosity neutron star systems would ascertain whether they are also able to launch jets when accreting far below the Eddington luminosity, and via comparisons with the behaviour of their black hole counterparts, could determine the role played by the event horizon, the depth of the gravitational potential well, and the stellar magnetic field, in launching jets. 
A study of J1023 has the potential to answer several of the outstanding questions concerning pulsar recycling and neutron star accretion physics posed above. The system consists of PSR~J1023+0038, a ``fully'' recycled MSP (where fully recycled is usually defined as $P\lesssim10$ ms) with $P=1.69$ ms, plus a stellar companion with a mass of 0.24\ensuremath{M_{\odot}}\ \citep{archibald09a,deller12b}. Multiple lines of evidence point to the fact that J1023 is a transitional object; the companion appears to be Roche-lobe-filling, and the millisecond pulsar exhibits timing variability consistent with the presence of substantial amounts of ionized material in the system \citep{archibald09a,archibald13a}. Moreover, archival optical and ultraviolet data from the period May 2000 to January 2001 combined with the knowledge that J1023 harbours a neutron star showed that J1023 possessed an accretion disk during this time \citep{thorstensen05a,archibald09a}. Recently, the transitional nature was spectacularly confirmed when J1023 abruptly returned to a LMXB-like state, complete with double-peaked emission lines and optical and X-ray flickering indicating an accretion disk, and the disappearance of radio pulsations \citep{stappers14a,halpern13a,patruno14a}. \citet{archibald15a} show that coherent X-ray pulsations are (intermittently) present in the LMXB state, indicating that material is being accreted onto the neutron star surface, in contrast to most theoretical models. These coherent X-ray pulsations are present while the X-ray luminosity is a factor of 10--100 lower than that at which pulsations have been seen in other AMXPs \citep[][particularly Figure~5]{archibald15a}.
This implies that channeled accretion similar to that seen in higher-luminosity AMXPs is occurring at a much lower accretion rate in J1023 \citep[the amount of material reaching the stellar surface must be in the range $10^{-13} - 10^{-11}$ \ensuremath{M_{\odot}}\ yr$^{-1}$;][]{archibald15a}, offering the opportunity to develop a more complete understanding of the accretion process across the whole population of LMXBs. Since J1023 is relatively nearby \citep[$d$ = 1.37 kpc;][]{deller12b}, it is an excellent testbed for studying the accretion process at very low mass-transfer rates, with the fact that the system parameters are precisely known from its time as a radio pulsar \citep{archibald13a} an added bonus. The X-ray luminosity (measured in the 1--10 keV band\footnote{Throughout this paper, we use the 1--10 keV band to compare X-ray luminosities between different systems \citep[as used by, e.g.,][]{gallo14a}.}) of J1023 in the LMXB state is $\sim2.3\times10^{33}$ erg/s \citep[][converted from the reported 0.3--10 keV value using webPIMMS\footnote{\url{https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl}}]{bogdanov15b}, substantially brighter than the $9\times10^{31}$ erg/s observed in the radio pulsar state \citep[][converted from the reported 0.5--10 keV value using webPIMMS]{archibald10a}. Optical observations \citep{halpern13a} make it clear that a disk is present, but the details are unclear. Particularly puzzling is a five-fold increase in the system's $\gamma$-ray luminosity \citep[measured in the 0.1--300~GeV range;][]{stappers14a} relative to its radio pulsar state \citep[when the $\gamma$-rays are thought to be produced directly in the pulsar magnetosphere;][]{archibald13a}. No radio pulsations have been detected from the system in the LMXB state \citep{stappers14a}, but even in its radio pulsar state the signal is eclipsed for $\sim$50\% of the orbit at observing frequencies $\lesssim$1.4~GHz by ionized material in the system \citep{archibald13a}.
Given the clear evidence for accreted material reaching the neutron star surface provided by the X-ray pulsations, it seems inescapable that the radio pulsar mechanism is unable to function, at least a majority of the time, during the LMXB state \citep{archibald15a}. However, the balance between accreted and ejected material remains unknown, and it is still possible that the radio pulsar mechanism and its associated pulsar wind reactivate during the brief cessations of X-ray pulsations -- although only if the radio emission is then scattered or absorbed by intervening ionized material, to remain consistent with the non-detection of radio pulsations. To investigate the scenarios presented above, radio imaging observations can be employed. A compact jet launched by the accretion disk would exhibit a flat to slightly inverted radio spectrum at centimeter wavelengths ($0 \lesssim \alpha \lesssim 0.5$, where $\alpha$ is the spectral index and flux density $\propto \nu^\alpha$), due to the superposition of self-absorbed synchrotron components originating in different regions along the jet \citep{blandford79a}. Depending on the conditions in the jet, optically thin synchrotron emission could also be (intermittently) dominant, with a spectral index $-1 \lesssim \alpha \lesssim -0.5$. A typical example is the black hole LMXB GX 339--4, whose radio spectral index, monitored over 10 years, remained within the range $-0.2 \lesssim \alpha \lesssim 0.5$ apart from a single excursion into an optically thin regime with $\alpha = -0.5$ \citep{corbel13a}. If the radio pulsar mechanism remained active, its emission could be clearly differentiated from jet emission by its much steeper spectrum ($\alpha = -2.8$; \citealp{archibald09a}). A potential complication is the presence of free-free absorption in intervening material, such as that supplied by the companion's Roche-lobe overflow.
This would lead to a suppression of the lower-frequency emission, and could make the interpretation of spectra with a limited fractional bandwidth difficult. To determine the presence and nature of any jet launched by accretion onto J1023, and to establish whether the radio pulsar remains (intermittently) active while the disk is present, we made observations with the Karl G. Jansky Very Large Array (VLA), the European VLBI Network (EVN), and the Low Frequency Array (LOFAR) to cover a wide range of timescales, frequencies, and angular resolutions. We support these radio observations with X-ray monitoring from the \emph{Swift} telescope, and an analysis of the publicly available \emph{Fermi} $\gamma$-ray data. These observations are described in Section~\ref{sec:obs}, and the results are presented in Section~\ref{sec:results}. Our interpretation of the nature of the radio emission from the system, as well as the implications for our understanding of accreting compact objects generally, are presented in Section~\ref{sec:discussion}. \section{Observations and data processing} \label{sec:obs} We observed J1023 in the radio band a total of 15 times between 2013 November and 2014 April. These observations are summarized in Table~\ref{tab:observations}. For all observations, the position for J1023 was taken from the VLBI ephemeris presented in \citet{deller12b}. During this period, J1023 was also monitored in the X-ray band with the {\em Swift} satellite. The X-ray observations are described in Section~\ref{sec:xray} below.
\begin{deluxetable*}{cccc} \tabletypesize{\small} \tablecaption{Radio imaging observations of J1023} \tablewidth{0pt} \tablehead{ \colhead{Start MJD} & \colhead{Duration (hours)} & \colhead{Instrument\tablenotemark{A}} & \colhead{Frequency range (GHz)} } \startdata 56606.68 & 2.0 & VLA (B) & 4.5 -- 5.5, 6.5 -- 7.5 \\ 56607.18 & 5.0 & LOFAR & 0.115 -- 0.162 \\ 56607.57 & 1.0 & VLA (B) & 2.0 -- 4.0 \\ 56609.09 & 7.5 & EVN & 4.93 -- 5.05 \\ 56635.52 & 1.0 & VLA (B) & 8.0 -- 12.0 \\ 56650.39 & 1.0 & VLA (B) & 8.0 -- 12.0 \\ 56664.31 & 1.0 & VLA (B) & 8.0 -- 12.0 \\ 56674.40 & 1.0 & VLA (B) & 8.0 -- 12.0 \\ 56679.46 & 2.5 & VLA (BnA) & 1.0 -- 2.0, 4.0 -- 8.0, 12.0 -- 18.0 \\ 56688.38 & 1.0 & VLA (BnA) & 8.0 -- 12.0 \\ 56701.21 & 1.0 & VLA (BnA)\tablenotemark{B} & 8.0 -- 12.0 \\ 56723.35 & 1.0 & VLA (A) & 8.0 -- 12.0 \\ 56735.28 & 1.0 & VLA (A) & 8.0 -- 12.0 \\ 56748.06 & 1.0 & VLA (A) & 8.0 -- 12.0 \\ 56775.20 & 1.0 & VLA (A) & 8.0 -- 12.0 \enddata \tablenotetext{A}{VLA = The Karl G. Jansky Very Large Array, configuration indicated in parentheses; EVN = The European VLBI Network; LOFAR = The Low Frequency Array.} \tablenotetext{B}{Observation made during the move from BnA to A array, and hence had reduced sensitivity and sub-optimal $uv$ coverage.} \label{tab:observations} \end{deluxetable*} \subsection{VLA observations} Our VLA observations were made using Director's Discretionary Time (DDT) under the project codes 13B-439 and 13B-445. All VLA observations made use of the source J1024--0052 as a gain calibrator, with a calibration cycle time ranging from 10 minutes at the lower frequencies to 2.5 minutes at 12--18 GHz. Pointing solutions were performed every hour during the observation on MJD 56679 (which included 12--18 GHz scans). The source 1331+305 (3C286) was used as both bandpass and primary flux reference calibrator at all frequencies. 
Additionally, from the third VLA epoch onwards, a scan on J1407+2827 was included to calibrate the instrumental polarization leakage. The VLA observations were reduced in CASA \citep{mcmullin07a}, using a modified version of the standard VLA pipeline\footnote{\url{https://science.nrao.edu/facilities/vla/data-processing/pipeline}} which incorporated additional steps for calibration of instrumental polarization, following standard procedures and using J1331+3030 as the polarization angle calibrator. After an initial pipeline run, the data were inspected and additional manual editing was performed to remove visibilities corrupted by radio frequency interference, after which the basic calibration was re-derived. For each frequency, a concatenated dataset including all data at that band was produced in order to make the most accurate possible model of the field. The brightest source in the field surrounding J1023 is J102358.2+003826, a galaxy at redshift 0.449 \citep[obtained with SDSS spectroscopy;][]{ahn14a} with a flux density of 30 mJy at 5 GHz and a steep spectral index ($\alpha = -0.8$). At frequencies above 2 GHz it dominates the field model we construct for self-calibration. It is resolved by the VLA at all frequencies and in all configurations, and the use of a multi-frequency, multi-scale clean (nterms = 2 in the {\ttfamily clean} task of CASA) was essential to obtain an acceptable Stokes I model with the wide bandwidths available to the VLA. This model reveals that the bright core has a less steep spectrum ($\alpha \sim -0.4$) than the surrounding arcsecond-scale extended emission ($\alpha \lesssim -1.0$), as shown in Figure~\ref{fig:inbeam}. An iterative procedure of self-calibration and imaging was used at all bands.
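The nterms = 2 frequency parameterization can be made concrete: each model component is expanded to first order in frequency about a reference frequency $\nu_0$, and the spectral index follows from the ratio of the two Taylor coefficients. The sketch below is purely illustrative (synthetic values, not the CASA implementation itself):

```python
# Two-term multi-frequency synthesis models each component as
#   I(nu) ~= I_tt0 + I_tt1 * (nu - nu0) / nu0,
# the first-order Taylor expansion of the power law
#   I(nu) = I_tt0 * (nu / nu0)**alpha,  so that  alpha = I_tt1 / I_tt0.

def power_law(nu, nu0, i0, alpha):
    return i0 * (nu / nu0) ** alpha

def taylor2(nu, nu0, i_tt0, i_tt1):
    return i_tt0 + i_tt1 * (nu - nu0) / nu0

nu0 = 10.0             # GHz, reference frequency of an 8-12 GHz band
i0, alpha = 0.2, -0.8  # a 200 uJy component with a steep spectrum
i_tt1 = i0 * alpha     # Taylor coefficient implied by the power law

# Across the band, the two-term model tracks the power law to within a few %:
worst = max(
    abs(taylor2(nu, nu0, i0, i_tt1) - power_law(nu, nu0, i0, alpha))
    / power_law(nu, nu0, i0, alpha)
    for nu in (8.0, 9.0, 11.0, 12.0)
)
print(worst)
```

Recovering the spectral index as a ratio of Taylor-term images is what makes a wide-band spectral index available directly from the deconvolution.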
Depending on the frequency (and hence field of view), a number of other background sources are visible; the source J102348.2+004017 (intrinsic flux density $\sim$200 $\mu$Jy at 10 GHz, intrinsic spectral index $-0.7$) is separated from J1023 by 2 arcminutes and is visible in all epochs. We refer to this source hereafter as the ``check'' source; the fitted brightness of this source (which should not vary in time during an observation) is used to ensure that our absolute flux calibration is adequate. \begin{figure*} \includegraphics[width=1.0\textwidth]{ds9-inbeam.eps} \caption{VLA image of the source J102358.2+003826, which dominates the field at frequencies above 2 GHz. The greyscale shows 4--8 GHz emission on a logarithmic scale, while the contours show the emission at 8--12 GHz. The first contour is at 3$\sigma$ (8 $\mu$Jy beam$^{-1}$) and the contours increase by factors of 2. Much of the extended emission is lost in the 8--12 GHz image, due to a combination of higher resolution and the steep spectrum ($\alpha \lesssim -1$) of the extended emission. An accurate model of this source was necessary to avoid sidelobe contamination at the position of J1023. } \label{fig:inbeam} \end{figure*} \begin{figure} \includegraphics[width=0.48\textwidth]{J1023.pointing.eps} \caption{Location of the target source, the brightest source in the field (J102358.2+003826) and the ``check'' source, for the 8--12 GHz VLA observations. The pointing center was slightly offset from J1023 to ensure that the response in the direction of J102358.2+003826 was not too greatly attenuated, whilst maintaining near-optimal sensitivity at the position of J1023. The dashed and dotted lines show the half power point of the VLA primary beam at 8 GHz and 12 GHz respectively. Both J102358.2+003826 and the check source sit close to the edge of the primary beam, and hence their measured flux density can be substantially affected by pointing errors (see text for details).
} \label{fig:pointinglayout} \end{figure} With the Stokes I field model and calibrated datasets in hand, the following self-calibration and imaging procedure was followed for each epoch: \begin{enumerate} \item The data were self-calibrated and all sources except J1023 and the check source were subtracted. At frequencies of 8 GHz and above, a solution interval of 75 seconds for phase calibration and 45 minutes for amplitude + phase calibration was used; at 4--8 GHz, the phase self-calibration interval was 30 seconds and the amplitude self-calibration interval was 10 minutes; and below 4 GHz the phase self-calibration interval was 15 seconds and the amplitude self-calibration interval was 5 minutes. \item In the 8--12 GHz epochs, the amplitude self-calibration was inverted and the inverted solutions applied to the subtracted dataset, to undo the effects of the amplitude self-calibration. This was necessary due to the location of the source J102358.2+003826 at or beyond the half-power point of the antenna primary beam, meaning that pointing errors dominated the amplitude self-calibration solutions. Application of these solutions to the target or check source direction would be counter-productive, so they were only used to obtain an accurate subtraction of J102358.2+003826. At lower frequencies, J102358.2+003826 is well inside the primary beam and this inversion is unnecessary. \item Multi-frequency clean with nterms=2 was used to derive the average brightness and spectral index of J1023 and the check source over the entire observation in Stokes I. When cleaning, a clean box with the same size as the synthesized beam was placed on the known source position. The reference frequency was set to 1.5 GHz, 3 GHz, 6 GHz, 10 GHz or 15 GHz, depending on the recorded band. \item If J1023 was not detected, an estimate of the noise $\sigma$ was made using a 60$\times$60 pixel area in the center of the residual image, and a 3$\sigma$ upper limit was recorded.
\item If J1023 was detected, then a point source model was fit to the image using the {\ttfamily imfit} task, and the flux density (taken from the fitted peak brightness, since the source is unresolved), spectral index, and spectral index error were recorded. This step was also always performed for the check source. \item For the epochs when J1023 was brightest, we reimaged the data in Stokes I, Q, U and V, using nterms=1 since multi-frequency deconvolution with multiple Taylor terms cannot be performed on multiple Stokes parameters. \item Additionally, a light curve was made in Stokes I by cleaning short segments of time. The granularity of the light curve depended on the flux density during the epoch, and ranged from 30 seconds during the brightest epochs to $\sim$8 minutes (the scan duration) during the faintest epochs. As will be shown below, the spectral index of J1023 was typically very flat, and accordingly this step was performed with a simple clean (nterms=1). \item For both J1023 and the check source, the flux density was estimated for each time slice using the {\ttfamily imfit} task as in point 5 above. The error in the flux density was estimated from the rms in an off-source box, as the brightness error reported by {\ttfamily imfit} is typically underestimated unless a large region is included in the fit (in which case the convergence of the fit was often unreliable). \item The measured flux densities and spectral indices were corrected for the effects of the primary beam, using a simple Airy disk model of the beam which assumed an effective diameter of 25~m. \end{enumerate} Using the check source, it was possible to examine the consistency of both the overall amplitude scale between epochs and the amplitude scale within an epoch. For every epoch, we compared the fitted flux density of the check source (with errors) at each time slice to the average value over the whole epoch, and computed the reduced $\chi^2$ assuming a constant value.
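This constancy test amounts to comparing each time slice against the weighted mean; a minimal sketch with synthetic values (the actual test uses the per-slice {\ttfamily imfit} flux densities and their rms-based errors):

```python
def reduced_chi2_constant(fluxes, errors):
    """Reduced chi^2 of a set of measurements against their weighted mean,
    i.e. a test of consistency with a constant flux density."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return chi2 / (len(fluxes) - 1)  # one degree of freedom used by the mean

# A steady ~120 uJy check source measured ten times with 15 uJy errors:
fluxes = [118, 131, 109, 125, 122, 140, 112, 119, 127, 115]
errors = [15.0] * 10
print(reduced_chi2_constant(fluxes, errors))
```

Values near unity indicate consistency with a constant source given the errors; values well above unity flag either intrinsic variability or underestimated errors.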
The reduced $\chi^2$ obtained in this manner ranged between 0.7 and 1.6 for the 10 GHz observations, consistent with a constant flux density to within the measurement error given the small sample size of 10--35 measurements per epoch. Accordingly, the amplitude scale within a single observation at 10 GHz is reliable. For the single observation on MJD 56606 at 5 and 7 GHz, the reduced $\chi^2$ of the check source amplitude is 2.1, which may be indicative of an incomplete model for J102348.2+004017 at this frequency, and means that the flux density errors in this observation are possibly slightly underestimated. However, this does not affect the conclusions that follow. Between epochs, however, the measured flux density of the check source at 10 GHz varied between 90 and 160 $\mu$Jy, and is inconsistent with a constant value, indicating an epoch-dependent absolute flux calibration error of up to $\sim$25\% in several epochs (although 8 of the 10 epochs fall within the range 110--130 $\mu$Jy). This can be readily understood by inspecting Figure~\ref{fig:pointinglayout}, which shows that the check source is close to the half-power point of the antenna primary beam, and so pointing errors of tens of arcseconds (typical for the VLA without pointing calibration) will lead to considerable amplitude variations (25\% for a pointing error of 30\arcsec). However, these pointing errors would not lead to similarly dramatic amplitude calibration uncertainties at the position of J1023, since it is much closer to the pointing center (up to a maximum of 10\% for a pointing error of 30\arcsec). We estimate that the absolute calibration of the J1023 flux values varies between epochs by 5--10\%, and include a 10\% error contribution in quadrature when quoting average flux density values for an epoch.
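The size of these pointing-induced amplitude variations follows directly from the Airy-disk beam model used for the primary beam corrections; a sketch assuming an unblocked 25~m circular aperture (the real VLA response differs in detail), evaluated at the check source's $\sim$2 arcmin offset:

```python
from math import pi, factorial

def j1(x, terms=30):
    """Bessel function J1(x) via its power series (adequate for x < ~10)."""
    return sum((-1) ** k / (factorial(k) * factorial(k + 1)) * (x / 2) ** (2 * k + 1)
               for k in range(terms))

def airy_attenuation(offset_arcsec, freq_ghz, dish_m=25.0):
    """Power response of an unblocked circular aperture at a given offset;
    measured flux densities are divided by this factor to correct them."""
    lam_m = 0.299792458 / freq_ghz        # wavelength in metres
    theta = offset_arcsec / 206265.0      # offset in radians
    x = pi * dish_m * theta / lam_m
    if x == 0.0:
        return 1.0
    return (2.0 * j1(x) / x) ** 2

# Check source ~120 arcsec from the pointing center, observed at 10 GHz:
nominal = airy_attenuation(120.0, 10.0)           # close to half power
swing = airy_attenuation(150.0, 10.0) / nominal   # a 30 arcsec pointing error
print(nominal, swing)
```

Near the half-power point the beam is steep, so a 30\arcsec\ pointing error changes the apparent flux density by tens of per cent, while the same error matters far less close to the pointing center.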
\subsection{EVN observations} J1023 was observed using very long baseline interferometry (VLBI) with the European VLBI Network (EVN) in real-time e-VLBI mode on 2013 November 13 between 2:00 and 10:00 UT. The following telescopes of the e-EVN array participated: Jodrell Bank (MkII), Effelsberg, Hartebeesthoek, Medicina, Noto, Onsala (25m), Shanghai (25m), Toru\'n, Yebes, and the phased-array Westerbork Synthesis Radio Telescope (WSRT). The total data rate per telescope was 1024~Mbit/s, resulting in 128~MHz total bandwidth in both left- and right-hand circular polarization spanning 4.93 -- 5.05 GHz, using 2-bit sampling. Medicina and Shanghai produced the same bandwidth using 1-bit sampling and a total data rate of 512~Mbit/s per telescope. The target was phase-referenced to the nearby calibrator J1023+0024 in cycles of 1.5 min (calibrator) -- 3.5 min (target). The position of J1023+0024 was derived from the astrometric solution of \citet{deller12b}. Check scans on the calibrator J1015+0109, with coordinates taken from the NASA Goddard Space Flight Center (GSFC) 2011A astrometric solution, were also included. The data were correlated with the EVN Software Correlator \citep[SFXC;][]{pidopryhora09a} at the Joint Institute for VLBI in Europe (JIVE) in Dwingeloo, the Netherlands. The data were analyzed in the 31DEC11 version of AIPS \citep{greisen03a} using the ParselTongue adaptation of the EVN data calibration pipeline \citep{reynolds02a, kettenis06a}. Imaging was performed in version 2.4e of Difmap \citep{shepherd94a} using natural weighting. J1023 was tentatively detected with a peak brightness of 50$\pm$13~$\mu$Jy beam$^{-1}$ (3.8$\sigma$ significance) at the expected position (which is known to an accuracy much better than the synthesized beam size \citep{deller12b}, making a 3.8$\sigma$ result significant). To assess the quality of phase-referencing in our VLBI experiment, we compared the phase-referenced and phase self-calibrated e-EVN data on J1015+0109.
This revealed that the amplitude losses due to residual errors in phase-referencing did not exceed the $10\%$ level. The differences between the phase-referenced and the a priori assumed coordinates were 0.25~mas in right ascension and 2.5~mas in declination; the latter slightly exceeds the 3$\sigma$ error quoted in the GSFC astrometric solution, probably because of the difference between the true and the assumed tropospheric zenith delay during the correlation. This source position shift is nevertheless well within the naturally weighted beamsize of 6.1$\times$3.1~mas, with a major axis position angle of 81~degrees. Finally, in addition to the e-EVN data, we also processed the WSRT local interferometer data in AIPS. Comparison of WSRT and e-EVN flux densities of compact calibrators showed that the absolute calibration of the VLBI amplitudes was accurate to $\sim4\%$. \subsection{LOFAR observations} \label{sec:lofar} J1023 was observed by LOFAR \citep{van-haarlem13a} for 5 hours on 2013 November 13, between 04:22 and 09:19 UT (and hence spanning more than one full orbit), in a Director's Discretionary Time observation. The observing bandwidth was 48 MHz spanning the frequency range 114--162 MHz. All Dutch stations were included, for a total of 60 correlated elements. 3C196 was used as a calibrator source and was observed for 1 minute every 15 minutes. The field centered on J1023 was iteratively self-calibrated, beginning with a model built from the revised VLSS catalog \citep{lane14a}, followed by imaging at low resolution, further self-calibration, and finally imaging at high resolution (using a maximum baseline length of 25 km). For the results described below, only the bandwidth range 138.5--161.7 MHz was used.
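Any residual rotation-powered radio emission would be steep-spectrum ($\alpha = -2.8$; \citealp{archibald09a}) and therefore brightest in the LOFAR band. A minimal power-law extrapolation sketch, normalized to the $\sim$40 mJy period-averaged 150 MHz flux density of the radio-pulsar state, illustrates why the low-frequency band is the most constraining:

```python
def extrapolate_flux(s_ref_mjy, nu_ref_ghz, nu_ghz, alpha):
    """Extrapolate a power-law radio spectrum S proportional to nu**alpha."""
    return s_ref_mjy * (nu_ghz / nu_ref_ghz) ** alpha

# Radio-pulsar state: ~40 mJy period-averaged at 150 MHz, alpha = -2.8.
s_150mhz = extrapolate_flux(40.0, 0.150, 0.150, -2.8)  # 40 mJy by construction
s_5ghz = extrapolate_flux(40.0, 0.150, 5.0, -2.8)      # ~a few uJy
s_10ghz = extrapolate_flux(40.0, 0.150, 10.0, -2.8)    # sub-uJy
```

The same spectrum that gives tens of mJy at 150 MHz falls to the $\mu$Jy level at 5--10 GHz, so steep-spectrum pulsar emission is effectively untestable at the VLA frequencies but readily constrained by LOFAR.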
When in the radio pulsar state, J1023's rotation-powered radio pulsations were easily detectable using LOFAR's high-time-resolution beam-formed modes \citep{stappers11a} and would have been similarly obvious in imaging observations, with a period-averaged 150 MHz flux density of $\sim$40 mJy (V.~Kondratiev et al., in prep). In our observations, J1023 was not detected, with a 3$\sigma$ upper limit of 5.4 mJy beam$^{-1}$. The attained image rms is a factor of 10 higher than the predicted thermal rms, and is limited by sidelobes from bright sources in the field which are corrupted by direction-dependent gain errors. The observation spanned local sunrise, a time of rapid ionospheric variability and hence of worse-than-usual direction-dependent errors. Improvements in LOFAR data reduction techniques since the time of these observations, including direction-dependent gain calibration, mean that images which closely approach the thermal noise are now becoming possible, and we therefore anticipate being able to improve upon these limits in a future analysis. \subsection{\emph{Swift} X-ray observations} \label{sec:xray} We analyzed 52 targeted \emph{Swift} observations taken between 2013 October 31 and 2014 June 11, most of which were also presented in \citet{coti-zelati14a}. We used all data recorded by the \emph{Swift}/X-ray-Telescope (XRT), which always operated in photon-counting mode (PC mode), with a time resolution of 2.5 s. The cumulative exposure time was $\approx 87$ ks. The data were analyzed with {\ttfamily HEASoft} v.6.14 using {\ttfamily xrtpipeline}, and we applied standard event screening criteria (grade 0--12 for PC data). We extracted the photon counts from a circular region of size between 20 and 40 arcsec (depending on the source luminosity). The background was calculated by averaging the counts detected in three circular regions of the same size scattered across the field of view, after verifying that they did not fall close to bright sources.
The 1--10 keV X-ray counts were then extracted and binned in 10-s intervals with the software {\ttfamily xselect}. The X-ray counts were then corrected with the task {\ttfamily xrtlccorr}, which accounts for telescope vignetting, point-spread function corrections and bad pixels/columns, and the counts were rebinned in intervals of 50 seconds. \subsection{\emph{Fermi} $\gamma$-ray monitoring} We examined the \emph{Fermi} data on J1023 up to 2014 October 20, extending the light curve shown in \citet{stappers14a} to a time baseline in the LMXB state of almost 16 months. As in \citet{stappers14a}, we carried out the analysis in two ways. First we selected a spectral model for J1023 and all sources within 35 degrees. We then divided the time of the \emph{Fermi} mission into segments of length $5\times10^6$ seconds, and using the $0.1$--$300\;\textrm{GeV}$ photons within 30 degrees of the position of J1023 from each time segment, we fit for the normalizations of all sources within 7 degrees of J1023 (we followed the binned likelihood tutorial on the \emph{Fermi} Cicerone\footnote{\url{http://tinyurl.com/fermibinnedtutorial}}). The normalization of J1023 serves as an estimate of the background-subtracted $\gamma$-ray flux. Because this method depends on complex spectral fitting procedures, we also estimated the flux using a simple aperture photometry method: we selected all photons within one degree of J1023 in the $1$--$300\;\textrm{GeV}$ range (since photons with lower energies have a localization uncertainty greater than a degree), then computed their exposure-corrected flux. \section{Results} \label{sec:results} \subsection{Radio spectral index} \label{sec:specindex} Figure~\ref{fig:radiospectrum} illustrates the instantaneous radio spectrum obtained for J1023 in three different observations: the 4.5 -- 7.5 GHz observation on MJD 56606, the 8--12 GHz observation on MJD 56674 (when the source was brightest), and the three-frequency observation on MJD 56679.
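With the convention $S_\nu \propto \nu^{\alpha}$ adopted above, a two-point spectral index and its uncertainty follow directly from flux density measurements in two sub-bands; a sketch with illustrative numbers (the pipeline itself fits $\alpha$ within the multi-frequency clean):

```python
from math import log, sqrt

def spectral_index(s1, e1, nu1, s2, e2, nu2):
    """Two-point spectral index alpha (S proportional to nu**alpha),
    with the uncertainty propagated from the flux density errors."""
    alpha = log(s2 / s1) / log(nu2 / nu1)
    err = sqrt((e1 / s1) ** 2 + (e2 / s2) ** 2) / abs(log(nu2 / nu1))
    return alpha, err

# Illustrative faint-epoch values: ~120 uJy in each half of an 8-12 GHz band
alpha, err = spectral_index(120.0, 12.0, 9.0, 125.0, 12.0, 11.0)
print(alpha, err)
```

With $\sim$10\% flux density errors over a lever arm of only $\ln(11/9) \approx 0.2$, the propagated uncertainty on $\alpha$ is of order $\pm0.7$, which is why the fainter epochs have poorly constrained spectral indices.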
Table~\ref{tab:averagevals} lists the spectral indices obtained from all of the VLA observations where J1023 was detected. The median value of $\alpha$ from the 12 observations with a spectral index determination is $0.04$. The limited precision of the spectral index determination during the fainter epochs makes it difficult to say with certainty within what range the spectral index varies; considering only the reasonably precise measurements (with an error $<$ 0.3) yields a range $-0.3 < \alpha < 0.3$. In any case, it is certain that there is some variability: the reduced $\chi^2$ obtained when assuming a constant spectral index across all 12 epochs is 3.0, and some of the better-constrained epochs differ at the 3$\sigma$ level. Within a single epoch, we were able to test for spectral index variability on timescales down to $\sim$5 minutes only for the few observations where J1023 was brightest; here, variation at the 2$\sigma$ level ($\Delta\alpha \simeq \pm0.3$) was seen on timescales of $\sim$30 minutes. This variability on timescales of minutes to months would be expected if the emission originated in a compact jet, as the observed spectrum is made up of the sum of individual components along the jet (which have different spectra) whose presence and prominence changes with time. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=0.48\textwidth]{psr.cband.spectralindex.eps} \\ \includegraphics[width=0.48\textwidth]{psr.epoch4.spectralindex.eps} \\ \includegraphics[width=0.48\textwidth]{psr.wideband.spectralindex.eps} \end{tabular} \caption{The radio spectrum of J1023, as seen at 4.5 -- 7.5 GHz during a period of typical flux density (MJD 56606, top), 8 -- 12 GHz during a period of enhanced flux density (MJD 56674, middle), and 1 -- 18 GHz during a period of relatively low activity (MJD 56679, bottom). The gray shaded region shows the 1$\sigma$ error region (including the fitted errors on reference flux density and spectral index). 
} \label{fig:radiospectrum} \end{center} \end{figure} We examined the possibility of a spectral turnover of J1023 at frequencies below 4 GHz, as is expected theoretically for a partially self-absorbed compact jet \citep[e.g.][]{markoff01a}, but not unambiguously detected to date in any compact jet from an LMXB. Although relatively few deep, low-frequency observations have been made of LMXBs, we note that the bright black hole high-mass X-ray binary Cygnus X-1 has been detected at frequencies as low as 350 MHz (G. de Bruyn, priv. comm.). Only two observations of around 40 minutes on-source each were made of J1023 below 4 GHz, and the sensitivity is somewhat poorer than at the other frequencies due to reduced bandwidth. In the three-frequency VLA observation of MJD 56679, the flux density of J1023 was relatively low, and the 1--2 GHz upper limit of 72 $\mu$Jy (3$\sigma$) is consistent with the extrapolated flux density from the higher frequencies (30--50 $\mu$Jy). In the 2--4 GHz observation on MJD 56607, J1023 was not detected in the average image, with a 3$\sigma$ upper limit of 30 $\mu$Jy beam$^{-1}$. However, in the corresponding light curve of the 2--4 GHz observations, $\sim$3$\sigma$ peaks at the $\sim$60 $\mu$Jy beam$^{-1}$ level are seen at the position of J1023 in two of the eight scans, and we treat these as possible detections. In any case, the average flux density during this observation must be considerably lower than the minimum value of $\sim$45 $\mu$Jy seen at higher frequencies in any 40 minute period; if there is no spectral turnover, then J1023 must have been a factor of $\sim$2 fainter in this epoch than in any other observation. More low-frequency (preferably simultaneous) observations would be needed to definitively determine whether a spectral turnover is present. \begin{deluxetable*}{cccc} \tabletypesize{\small} \tablecaption{Per-epoch average flux density and spectral index values for J1023.
The quoted uncertainties for the average flux density include statistical fit errors and the estimated 10\% uncertainty in the absolute flux density scale; the latter dominates.} \tablewidth{0pt} \tablehead{ \colhead{Start MJD} & \colhead{Reference frequency (GHz)} & \colhead{Flux density ($\mu$Jy)} & \colhead{Spectral index} } \startdata 56606.68 & 6 & 122 $\pm$ 12 & $0.09\pm0.18$ \\ 56607.18 & 3 & $<$30 & -- \\ 56635.54 & 10 & 85 $\pm$ 9 & $0.93 \pm 0.48$ \\ 56650.41 & 10 & 428 $\pm$ 43 & $-0.18 \pm 0.10$ \\ 56664.34 & 10 & 160 $\pm$ 16 & $-0.02 \pm 0.24$ \\ 56674.42 & 10 & 533 $\pm$ 53 & $-0.27 \pm 0.07$ \\ 56679.50 & 10 & 70 $\pm$ 7 & $0.27 \pm 0.12$ \\ 56688.41 & 10 & 101 $\pm$ 10 & $0.15 \pm 0.34$ \\ 56701.23 & 10 & 78 $\pm$ 9 & $0.82 \pm 0.64$ \\ 56723.37 & 10 & 76 $\pm$ 8 & $0.49 \pm 0.47$ \\ 56735.30 & 10 & 100 $\pm$ 10 & $-0.23 \pm 0.35$ \\ 56748.09 & 10 & 54 $\pm$ 6 & $-0.31 \pm 0.61$ \\ 56775.22 & 10 & 45 $\pm$ 6 & $-0.93 \pm 0.73$ \enddata \label{tab:averagevals} \end{deluxetable*} \subsection{Radio variability} In the 15 radio observations made in the period 2013 November to 2014 April, the flux density of J1023 varied by almost two orders of magnitude, with factor-of-two variability within 2 minutes and order-of-magnitude variability on timescales of 30 minutes. Figures~\ref{fig:lightcurvesummary} and~\ref{fig:lightcurvezooms} show the radio light curve over this six-month period; note the rapid variability and high flux density during the observation on MJD 56674. From light travel time arguments, the factor-of-two variability within 2 minutes constrains the maximum source size to be 120 light-seconds, $\sim$30 times the binary separation of 4.3 light-seconds \citep{archibald13a}, although it could of course be considerably smaller. Given the observed peak flux density of 1 mJy, the brightness temperature must reach at least $3\times10^{8}$ K.
If we neglect the effects of relativistic beaming \citep[the system inclination is known to be 42 degrees;][]{archibald13a}, then under the typical assumption of a maximum brightness temperature of $10^{12}$\,K (as expected for unbeamed, incoherent, steady-state synchrotron emission) we can calculate the minimum source size to be $\sim$2 light-seconds: around half the binary separation, and 7500 times larger than the pulsar light cylinder \citep[81 km;][]{bogdanov15b}. We also searched for a frequency-dependent time delay in our VLA light curves. For the observation on MJD 56674, where the source was brightest and shows the most rapid variability, we imaged the data from 8--10 GHz and 10--12 GHz separately with a time resolution of 30 seconds. We then performed a cross-correlation analysis to determine whether any time lag was present at the lower frequency, as might be expected for a jet in which the lower frequency emission primarily originates further from the origin. No significant offset was detected, which is unsurprising given the 30 second time resolution (the shortest timescale at which we still obtain detections of a reasonable significance) and the source size limits derived above. The radio flares seen with the VLA (Figure~\ref{fig:lightcurvezooms}) could be related to the observed X-ray flaring activity, but the radio emission from J1023 does not appear to exhibit the short, sharply defined excursions to a lower luminosity mode that are seen in X-ray observations \citep{bogdanov15b}. Although we lack the sensitivity to probe the radio emission to very faint levels on minute timescales, in the periods of brightest radio emission (MJD 56650 and 56674), we can exclude sharp ``dips'' in flux density on timescales of two minutes. In general, the variation appears to be slower in radio than in X-rays, and this slower variation implies that the radio emission region is more extended than the region in which the X-rays are produced.
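The brightness-temperature and source-size limits quoted above can be reproduced numerically; a sketch in cgs units (the exact prefactor depends on the assumed source geometry, here a uniform disk):

```python
from math import pi, sqrt

C = 2.99792458e10    # speed of light, cm/s
K_B = 1.380649e-16   # Boltzmann constant, erg/K
KPC = 3.0857e21      # cm per kiloparsec

def brightness_temp(s_mjy, nu_ghz, size_lt_s, d_kpc):
    """Rayleigh-Jeans brightness temperature of a uniform disk of
    diameter size_lt_s light-seconds at distance d_kpc."""
    s = s_mjy * 1e-26                        # erg/s/cm^2/Hz
    lam = C / (nu_ghz * 1e9)                 # wavelength, cm
    theta = size_lt_s * C / (d_kpc * KPC)    # angular diameter, rad
    omega = pi * theta ** 2 / 4              # solid angle, sr
    return s * lam ** 2 / (2 * K_B * omega)

# 1 mJy at 10 GHz, size <= 120 lt-s (2-minute variability), d = 1.37 kpc:
tb_min = brightness_temp(1.0, 10.0, 120.0, 1.37)

# Minimum size if Tb is capped at 1e12 K (Tb scales as size**-2):
size_min = 120.0 * sqrt(tb_min / 1e12)   # light-seconds, of order a few
```

The inferred lower limit on the brightness temperature is a few $\times10^{8}$ K, and capping it at $10^{12}$ K yields a minimum size of a few light-seconds, consistent with the values in the text.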
\begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{allradio.eps} \caption{The radio light curve of J1023 from the VLA and EVN observations. The points show the median values at each epoch, with the error bars encompassing the maximum and minimum values seen within the epoch. The short-timescale variability within an epoch is shown in Figure~\ref{fig:lightcurvezooms}. } \label{fig:lightcurvesummary} \end{center} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{lightcurve.56606.7.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56607.6.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56635.5.PSR_J1023.eps} \\ \includegraphics[width=0.3\textwidth]{lightcurve.56650.4.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56664.3.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56674.4.PSR_J1023.eps} \\ \includegraphics[width=0.3\textwidth]{lightcurve.56679.5.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56688.4.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56701.2.PSR_J1023.eps} \\ \includegraphics[width=0.3\textwidth]{lightcurve.56723.4.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56735.3.PSR_J1023.eps} & \includegraphics[width=0.3\textwidth]{lightcurve.56748.1.PSR_J1023.eps} \\ \includegraphics[width=0.3\textwidth]{lightcurve.56775.2.PSR_J1023.eps} \end{tabular} \caption{The light curves from the 13 VLA observations (where J1023 could be detected on a sub-observation timescale). The colors representing different frequencies are the same as those used for Figure~\ref{fig:lightcurvesummary}. Factor-of-two variations are common within minutes, and the flux density increases by an order of magnitude within half an hour in one case (MJD 56650).
} \label{fig:lightcurvezooms} \end{center} \end{figure*} Table~\ref{tab:averagevals} shows the mean flux density (at the central frequency of 3, 6, or 10~GHz as appropriate) and spectral index observed for J1023 at each epoch. \subsection{Radio polarization} Our best constraints on the fractional polarization of the radio continuum emission come from the two epochs when J1023 was brightest (MJD 56650 and 56674). Although linear polarization was detected from J102358.2+003826 (which, given its location beyond the half-power point of the primary beam, could be instrumental), J1023 showed no evidence for significant linear polarization in either epoch, down to a $3\sigma$ upper limit on the fractional polarization of 3.9 per cent. To get the best possible limit, we selected only the time periods when the Stokes I emission from J1023 exceeded 500\,$\mu$Jy\,beam$^{-1}$ in each epoch, stacked those data from the two epochs, and made a combined image in Stokes I, Q, U and V. This allowed us to reduce our $3\sigma$ limit on the linearly polarized flux density to 24\,$\mu$Jy\,beam$^{-1}$, corresponding to a fractional polarization of 3.4 per cent. While the data from MJD 56674 showed possible evidence for Stokes V emission at 23\,$\mu$Jy\,beam$^{-1}$ (4.1 per cent), the check source showed a consistent level of Stokes V emission at that epoch (9.1 $\mu$Jy\,beam$^{-1} = 4.6$ per cent), strongly suggesting that this could be caused by errors in the gain calibration or instrumental leakage correction. Linear polarization has been detected in compact jets from hard-state black hole X-ray binaries \citep[e.g.][]{corbel00a,russell14a}, at levels of up to a few per cent. Therefore, while our limits on the polarization of J1023 are consistent with the levels expected from a partially self-absorbed compact jet, they can neither definitively confirm nor rule out this scenario. \subsection{X-ray variability} Figure~\ref{fig:xraylightcurvesummary} shows the X-ray light curve of J1023 as observed by \emph{Swift}.
Figure~\ref{fig:xraylightcurvezooms} shows two epochs, each of duration $\sim$20 minutes, to highlight the variability seen on short timescales. We note that higher sensitivity X-ray observations of J1023 made with \emph{XMM-Newton} reveal much shorter-timescale structure in the X-ray lightcurve, showing that J1023 spends most of its time in a stable ``high" mode, with brief drops down to a lower-luminosity ``low" mode and infrequent higher-luminosity ``flares" \citep{bogdanov15b}. The switches between the modes occur on a timescale of tens of seconds, and the duration of the modes typically ranges from tens of seconds to tens of minutes. This trimodal X-ray behavior seen in J1023 \citep[and also, very recently, in XSS J12270;][]{papitto15a} differs markedly from the behavior of black hole LMXBs, where variability of comparable magnitude and timescale has been seen in quiescence but without the quantization into distinct modes \citep[e.g., V404 Cygni,][]{bradley07a}. While some other black hole LMXBs such as A0620--00 and V4641 Sagittarii have shown distinct optical modes which indicate changing accretion properties \citep[e.g.,][]{cantrell08a,macdonald14a}, the timescales over which these persist are far longer -- weeks to months. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{allxray.eps} \caption{The X-ray light curve of J1023. The points show the median value at each epoch, with the error bars showing the range encompassing 67\% of the points in the observation. Examples of the short-term variability within an observation are shown in Figure~\ref{fig:xraylightcurvezooms}.
} \label{fig:xraylightcurvesummary} \end{center} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{xray.56684.7.eps} & \includegraphics[width=0.48\textwidth]{xray.56774.7.eps} \end{tabular} \caption{The light curves from two arbitrarily selected \emph{Swift} observations, showing the variation in count rate on minute timescales.} \label{fig:xraylightcurvezooms} \end{center} \end{figure*} We converted the \emph{Swift} XRT count rate to 1--10 keV luminosity using webPIMMS and a power-law model with $N_\mathrm{H} = 5\times10^{20}$ cm$^{-2}$ and photon index $\Gamma = 1.56$, as obtained by an analysis of {\em Swift} data on J1023 by \citet{coti-zelati14a}. The median luminosity seen in the {\em Swift} observations is $2.0\times10^{33}$ erg s$^{-1}$, in agreement with the ``high" mode luminosity of $2.3\times10^{33}$ erg s$^{-1}$ measured by \citet{bogdanov15b} and the mean luminosity of $2.4\times10^{33}$ erg s$^{-1}$ measured by \citet{coti-zelati14a}, where the luminosity was converted to 1--10 keV in both cases. \subsection{\emph{Fermi} $\gamma$-ray increase} \citet{stappers14a} showed that the $\gamma$-ray flux of J1023 increased by a factor of $\sim$5 in the LMXB state compared to the radio pulsar state, but were able to include only a few months of LMXB-state data. Our analysis of the longer span of data now available confirms their result, showing a sustained increase in the 0.1--300 GeV flux by a factor of $6.5\pm 0.7$. Figure~\ref{fig:fermi} shows the $\gamma$-ray photon flux of J1023 in both the MSP and LMXB state; no trend or variability within the LMXB state is obvious. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{1023-flux.eps} \caption{The \emph{Fermi} $\gamma$-ray light curve of J1023, extended from that presented in \citet{stappers14a}. Top panel: $\gamma$-ray photon fluxes obtained by fitting for the normalization of a spectral model.
Bottom panel: $\gamma$-ray photon fluxes obtained by measuring numbers of counts within a one-degree aperture; this panel considers only photons of $>1\,\textrm{GeV}$ since lower-energy photons are too poorly localized. The vertical line marks the disappearance of radio pulsations from J1023 in 2013 June \citep{stappers14a}. In both panels, the red line shows the average $\gamma$-ray flux (with the dotted red line showing the 1$\sigma$ errors) before and after the 2013 June state transition. } \label{fig:fermi} \end{center} \end{figure*} \subsection{Comparison of the X-ray and radio light curves} In Figure~\ref{fig:mediancomparison}, we show the median luminosity of J1023 at each epoch in the radio and X-ray bands. The median luminosity in both the X-ray and radio bands remains remarkably constant over time, indicating that the system is in a relatively stable configuration during the LMXB state (on timescales of months to over a year), as noted by \citet{archibald15a} and \citet{bogdanov15b}. The lack of sufficient simultaneity between the radio and X-ray observations prevents us from drawing conclusions as to whether the variability is correlated. There are no simultaneous VLA and \emph{Swift} data, and only a 15 minute overlap between the first VLA observation and the aforementioned \emph{XMM-Newton} observation \citep[discussed separately by][]{bogdanov15b}. For the purposes of the discussion below, we do not assume any correlation between the radio and X-ray variations and simply use the median value of radio luminosity and the median value of X-ray luminosity, along with error bars encompassing the most compact 67\% interval for both quantities. \begin{figure*} \begin{center} \includegraphics[width=0.85\textwidth]{xray-radio-median.eps} \caption{The X-ray and radio light curves from the VLA and \emph{Swift}, where the median value of each observation is plotted as a single point.
\emph{Swift} observations that were shorter than 15 minutes and hence unlikely to sample a representative range of the X-ray emission were excluded from the plot. For both the radio and X-ray panels, the central 67\% range is shown as the gray region. A few outlier points notwithstanding, the median X-ray luminosity remains exceedingly stable, within approximately $\pm20$\%. The radio is somewhat more variable, which may be due to the fact that the individual radio measurements generally required longer averages (up to 480 seconds, compared to 50 seconds for the {\em Swift} samples). This yields a smaller number of time samples in the radio lightcurve, a considerable fraction of which would include times corresponding to two different X-ray modes averaged together, whereas the X-ray observations have more time points (with a cleaner separation between modes). Since a 45-minute radio observation would sample a number of mode changes, this could lead to an increased scatter for the median radio points.} \label{fig:mediancomparison} \end{center} \end{figure*} \section{Discussion} \label{sec:discussion} \subsection{A jet origin for the radio emission of J1023} \label{sec:disc:jet} The radio flux density and spectral index, and the time variability thereof, seen for J1023 are indicative of synchrotron emission originating in material outflowing from the system. No evidence is seen from the radio spectral index (down to timescales of 5 minutes) of steep spectrum emission that would be indicative of radio pulsar emission. We cannot categorically rule out intermittent bursts of radio pulsar activity, but such periods would have to be both sparse and brief in order to remain consistent with our results, or else completely absorbed by free-free absorption by dense ionized material surrounding the neutron star. 
Taken together with X-ray pulsations reported by \citet{archibald15a} that indicate near-continual accretion onto the neutron star surface, and the lack of radio pulsation detections reported in \citet{bogdanov15b}, the ubiquitous flat-spectrum radio emission makes an (even intermittently) active radio pulsar in J1023 in the present LMXB state appear unlikely. In analogy to other LMXB systems, the most obvious interpretation of our data is that J1023 in its present state hosts a compact, partially self-absorbed jet powered by the accretion process. The presence of collimated outflows has previously been directly confirmed via high-resolution imaging in a number of cases for both neutron star and black hole LMXBs \citep{dhawan00a,stirling01a,fomalont01a,fender04a}. In the case of black hole systems, the evidence for jets is compelling even down to extremely low accretion rates: V404 Cygni is believed to host a jet in quiescence with $L_X \sim 10^{32.5}$ erg s$^{-1}$ \citep[e.g.,][]{hynes09a,miller-jones08a}, while A0620-00 \citep{gallo06a,russell13a} and XTE J1118+480 \citep{gallo14a,russell13a} also show radio emission at $L_X < 10^{32}$ erg s$^{-1}$ and evidence supportive of a jet break between the radio and optical bands \citep{gallo07a}. For neutron star LMXBs, however, direct evidence of jets \citep[and other supporting evidence such as optical/near-infrared excesses that are most consistent with jet emission;][]{russell07a} has come only from systems that are undergoing accretion at much higher rates. The X-ray luminosity of J1023 is more than two orders of magnitude lower than for any other neutron star system for which a jet had previously been inferred \citep{migliari11a}. Alternative explanations for the flat spectral index ($-0.3 \lesssim \alpha \lesssim 0.3$) and high brightness temperature limit ($\mathrm{T}_{\mathrm{b}} \simeq 3 \times 10^{8}$\,K) for J1023 are, however, difficult to imagine.
Optically thin free-free emission could provide $\alpha\sim-0.1$, but would require an extremely hot population of electrons to match the observed brightness temperature, as $T_e = T_b/\tau$ and in the optically thin regime $\tau \ll 1$. For optically thin synchrotron emission, obtaining a flat spectral index $\alpha\sim0$ requires an extremely hard electron population, with a power-law distribution index $p \sim 1$. Such a hard distribution of electrons would be strongly at odds with the commonly assumed process of shock acceleration, which produces $2 < p < 2.5$ \citep{jones91a}. We consider both of these scenarios (optically thin free-free emission and optically thin synchrotron emission) to be unlikely, and note that definitive proof of a jet in the J1023 system could be provided in the future by resolved VLBI imaging with a very sensitive global VLBI array. Other strong, albeit not definitive, supporting evidence for the jet interpretation could come from the detection of a frequency dependent time delay in the J1023 lightcurve (where the magnitude of the delay implied a relativistic velocity), the detection of linear polarization (implying synchrotron emission), or the detection of a spectral break. \subsection{A possible radio/X-ray correlation for transitional MSP systems} \begin{figure*} \begin{center} \includegraphics[width=0.85\textwidth]{lrlx.eps} \caption{Radio and X-ray luminosity of known LMXB systems, including black hole systems (small black points) and neutron star systems (small magenta stars for AMXPs, small cyan squares for hard-state LXMBs, and large blue symbols for transitional MSP systems). The best-fit correlation for the black hole systems from \citet{gallo14a} is shown with a black dashed line, and the ``atoll/Z-source" and ``hard-state" correlations for the neutron star systems presented in \citet{migliari06a} and \citet{migliari11a} are shown in cyan dotted and cyan dash-dot respectively. 
J1023 (blue circle symbol) and the other two known transitional MSP systems XSS J12270 \citep[blue square symbol;][]{de-martino13a,bassa14a} and M28I \citep[blue triangle symbol;][]{papitto13a} are highlighted. For J1023, we show an error box that spans the 67\% range of luminosity seen in our radio and X-ray monitoring (taking the median value from each observation, as shown in Figure~\ref{fig:mediancomparison}); the limited radio monitoring for XSS J12270 and M28I precludes such error bars for these sources, but we note that for M28I at least variability was seen within the single radio observation \citep{papitto13a}, and so similar uncertainties could be expected for these sources. The transitional MSP systems appear to exhibit a similar scaling of radio luminosity with X-ray luminosity to the black hole systems, but sit in between the black hole systems and the neutron star systems.} \label{fig:lrlx} \end{center} \end{figure*} In Figure~\ref{fig:lrlx}, we show the radio and X-ray luminosity of J1023 along with the two other known transitional MSP systems XSS J12270 \citep{de-martino13a,bassa14a} and M28I \citep{papitto13a} and most of the published black hole and neutron star LMXB systems \citep{gallo06a,corbel08a,brocksopp10a,soleri10a,coriat11a,migliari11a,jonker12a,ratti12a,russell12a,corbel13a,gallo14a}. The radio luminosity shown in Figure~\ref{fig:lrlx} is defined as $L_R = 4 \pi d^2 \nu S_\nu$, where $d$ is the distance, $\nu$ is the reference radio frequency (in this case, 5 GHz) and $S_\nu$ is the flux density at this frequency. Where necessary, a flat spectral index is assumed to convert measured flux densities to the reference frequency of 5 GHz; for J1023, we take the median 10 GHz flux density of 87 $\mu$Jy and use a flat spectral index (as noted in Section~\ref{sec:specindex}, the median spectral index is very close to 0). 
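As an illustration, the monochromatic radio luminosity definition above can be evaluated directly for J1023. The sketch below is a minimal check under stated assumptions: the distance of $\sim$1.37\,kpc is our assumed value (it is not quoted in this section), while the 87\,$\mu$Jy median flux density and flat spectral index are taken from the text.

```python
import math

# L_R = 4*pi*d**2 * nu * S_nu, evaluated in cgs units.
kpc_cm = 3.086e21        # centimetres per kiloparsec
d = 1.37 * kpc_cm        # assumed distance to J1023 (illustrative)
nu_ref = 5e9             # 5 GHz reference frequency, Hz
S_nu = 87e-6 * 1e-23     # 87 uJy -> erg s^-1 cm^-2 Hz^-1
# With a flat spectral index (alpha ~ 0), the median 10 GHz flux density
# carries over unchanged to the 5 GHz reference frequency.
L_R = 4 * math.pi * d**2 * nu_ref * S_nu
print(f"L_R ~ {L_R:.1e} erg/s")   # of order 1e27 erg/s
```

This places J1023 at a radio luminosity of order $10^{27}$ erg s$^{-1}$, consistent with its position in Figure~\ref{fig:lrlx}.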
The dashed line shows the well-established correlation between radio and X-ray luminosity for black holes \citep[$L_R \propto L_X^{0.61}$,][black line]{gallo14a}, as well as the two postulated correlations for neutron star systems ($L_R \propto L_X^{0.7}$, ``atoll/Z-source" correlation [cyan dotted line]; \citealp{migliari06a}, and $L_R \propto L_X^{1.4}$, ``hard-state" correlation [cyan dash-dotted line]; \citealp{migliari11a}). We note that studies have differed on the exact correlation index for black hole LMXBs, finding values ranging between 0.6 and 0.7 \citep{gallo03a,gallo12a,gallo14a} and that a tighter correlation between radio and X-ray luminosity is generally seen for individual sources compared to the population as a whole \citep{gallo14a}. Further, some black hole systems on the so-called ``outlier track'' (visible in Figure~\ref{fig:lrlx} as the cluster of X-ray underluminous points at $10^{36} \lesssim L_X \lesssim 10^{37.5}$\, erg s$^{-1}$, which extend down towards the neutron star systems) have been seen to exhibit a correlation index of 1.4 in some cases, which could be taken as evidence that some black hole LMXBs enter a radiatively efficient accretion regime under certain circumstances \citep{coriat11a}. The discussion that follows focuses on lower accretion rates, where no evidence for radiatively efficient accretion in black hole systems has been seen, and is insensitive to changes in the correlation index for black hole LMXBs in the range 0.6 -- 0.7. The three transitional MSP systems appear to be overluminous in the radio band (or underluminous in the X-ray band) compared to either postulated correlation for the neutron star systems; they appear to follow the same relationship as the black hole sources and the ``atoll/Z" neutron star sources, but with a radio luminosity approximately a factor of 5 fainter than the black hole sources, and an order of magnitude brighter than the ``atoll/Z" neutron star sources.
M28I occupies a point on the $L_R / L_X$ space that is encompassed by the AMXP population (the magenta stars in Figure~\ref{fig:lrlx}); AMXPs appear to be on average radio-overluminous when compared to other neutron star LMXBs of similar X-ray luminosity \citep{migliari11a}. Thus, the transitional MSP sources are clearly distinct from the hard-state neutron star LMXBs, and the AMXPs appear to straddle the two classes. Below, we examine what processes could lead to this observational differentiation. \subsection{Jet-dominated states in LMXB systems} A variety of jet production mechanisms could potentially operate in LMXB systems. \citet{blandford82a} and \citet{blandford77a} describe two models which have been widely applied to black hole systems, in which the disk magnetic field enables the collimation of the material emitted in the jet. The former model, in which the jet is collimated by magnetic field lines anchored in the accretion disk, can be applied directly to neutron star systems. The latter model extracts energy from the black hole spin and requires that disk magnetic field lines be anchored to the black hole ergosphere; as such it cannot be directly applied to neutron star systems, but the coupling of material in an accretion disk to a neutron star magnetosphere can still drive collimated outflows via a ``propeller" mode \citep[e.g.,][]{romanova05a, ustyugova06a,romanova09a}. Where radio emission in an LMXB system is thought to originate in a jet, it can be used as a proxy for the jet power. For typical jet models, the observed radio luminosity $L_R$ is related to the total jet power $L_j$ by $L_R \propto L_j^{1.4}$ \citep[e.g.][]{blandford79a,markoff01a,falcke96a}. The observed scaling of $L_R \propto L_X^{0.7}$ highlighted in Figure~\ref{fig:lrlx} for black hole LMXBs and (potentially) for transitional MSPs then implies that $L_j \propto L_X^{0.5}$. 
If the accretion power $L_{\mathrm{tot}}$ is assumed to be liberated only via radiation (traced by $L_X$) or the jet ($L_j$), then we have $L_{\mathrm{tot}} = L_X + L_j$, and, by substitution, $L_{\mathrm{tot}} = L_X + A L_X^{0.5}$ \citep{fender03a}. A boundary between two regimes occurs when $L_X = L_j = A L_X^{0.5}$; at higher accretion rates, the accretion is radiatively efficient and dominated by the X-ray output ($L_X \sim L_{\mathrm{tot}}$, $L_X \propto \dot{M}$, and $L_j \propto \dot{M}^{0.5}$), while below this value, the accretion is radiatively inefficient and the system is jet-dominated ($L_X \ll L_{\mathrm{tot}}$, $L_X \propto \dot{M}^2$, and $L_j \propto \dot{M}$). For a number of black hole LMXBs, the coefficient $A$ has been estimated to range between 0.006 and 0.1 (where all luminosities are given in Eddington units), based on quasi-simultaneous measurements of $L_j$ and $L_X$ at a given accretion rate \citep{fender01a,corbel02a,fender03a,migliari06a}. A key reason for this wide range of allowed values for $A$ is uncertainty in the ratio of radiated luminosity to total luminosity in the jet, as only the radiated component can be directly measured -- this ratio is generally assumed to be $\lesssim$0.05 \citep[e.g.][]{fender01a}, but remains poorly constrained. Proceeding under the assumption that these values of $A$ are representative for all black hole LMXB systems leads to the determination that all quiescent black hole LMXBs are accreting below the threshold level and hence in a jet-dominated state \citep{fender03a,migliari06a}, although with a sufficiently low value of $A$ it would be possible for a system to remain radiatively efficient even down to quiescent luminosities.
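The energy-budget argument above can be sketched numerically. The crossover $L_X = L_j$ occurs at $L_X = A^2$ in Eddington units, with the jet dominating below it. The sketch below is an illustration of the scaling only, not a fit to any data; the value of $A$ used is an arbitrary choice within the quoted 0.006--0.1 range.

```python
# Energy budget (Eddington units): L_tot = L_X + L_j, with the jet power
# following L_j = A * L_X**0.5 as implied by the observed L_R-L_X scaling.
def jet_power(L_X, A):
    """Jet power in Eddington units for X-ray luminosity L_X."""
    return A * L_X**0.5

A = 0.01                 # illustrative value within the quoted 0.006-0.1 range
L_X_cross = A**2         # crossover where L_X = L_j

# Below the crossover the jet dominates the accretion power...
assert jet_power(L_X_cross / 100, A) > L_X_cross / 100
# ...and above it the radiated (X-ray) output dominates.
assert jet_power(L_X_cross * 100, A) < L_X_cross * 100

print(f"jet-dominated below L_X = A**2 = {L_X_cross:.0e} L_Edd")
```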
The potential for advection of energy across the black hole event horizon complicates this simple interpretation, adding a new ``sink" for some fraction of $L_{\mathrm{tot}}$ and meaning that a system could be radiatively inefficient and yet not jet-dominated; however, this process cannot operate in the neutron star LMXBs. If the difference in radio luminosity between neutron star LMXBs and black hole LMXBs is due purely to a difference in jet power, then the representative values of $A$ calculated for black hole LMXBs can be scaled to calculate the accretion rate at which neutron star LMXBs will transition into a radiatively inefficient, jet-dominated regime. Measurements of neutron star LMXBs at higher accretion rates (small cyan squares in Figure~\ref{fig:lrlx}) show an average radio luminosity 30 times below that given by the black hole $L_R / L_X$ relationship, meaning this transition would occur around or below the lowest accretion rates inferred based on X-ray measurements \citep{fender03a}. Such an interpretation {\em assumes} a scaling of $L_R \propto L_X^{0.7}$ that holds down to quiescence, since previous observations have not been sensitive enough to make radio detections at lower accretion rates. For systems which exhibit $L_R \propto L_X^{1.4}$, as has been suggested by \citet{migliari03a} for 4U 1728--34 and shown by the cyan dash-dot line in Figure~\ref{fig:lrlx}, then the system will remain radiatively efficient at all accretion rates ($L_j \propto L_X \propto \dot{M}$; $L_j < L_X$). The higher-than-expected radio luminosity of the three transitional MSP systems indicates that they become radiatively inefficient and jet-dominated at low (but observable) accretion rates.
The factor-of-5 difference in radio luminosity compared to the black hole LMXBs implies a coefficient $A$ for the transitional MSP systems that is only 3 times lower than that of the black hole LMXBs, meaning that the transition to radiatively inefficient accretion occurs at an accretion rate 10 times lower (in Eddington units). Assuming that for the black hole systems $A$ lies somewhere in the range 0.006 to 0.1 as described above, this corresponds to an X-ray luminosity of $7\times10^{33}$--$2\times10^{36}$ erg s$^{-1}$ for a 1.4 M$_{\odot}$ neutron star. J1023, with a 1--10 keV luminosity of $2\times10^{33}$ erg s$^{-1}$, appears therefore to be jet-dominated. XSS J12270 is likewise very probably jet-dominated, but in contrast, the observations of M28I at $\gtrsim1\times10^{36}$ erg s$^{-1}$ (around 1\% of the Eddington luminosity) are more likely to be in the radiation-dominated, radiatively efficient regime. There are a number of influences on the radiative efficiency of the accretion flow and on the efficiency and power of the jet that could lead to the observed offset between the transitional MSPs and the black hole LMXBs in the $L_R - L_X$ correlation. First, the mass of the compact object is likely to affect the jet properties. The ``fundamental plane" relationship for accreting black holes shows, for a fixed X-ray luminosity, a dependence of radio luminosity on the black hole mass $M_{\mathrm{BH}}$ of the form $L_R \propto M_{\mathrm{BH}}^{\beta}$, where $\beta \simeq 0.6-0.8$ \citep{merloni03a,falcke04a,plotkin12a}. Based on this relationship, a ``typical" stellar-mass black hole of $\sim$8 M$_{\odot}$ \citep{kreidberg12a} would already be 3--4 times more radio-luminous than a neutron-star mass (1.4 M$_{\odot}$) black hole with the same X-ray luminosity. Interaction with the neutron star magnetic field could also affect the efficiency of jet formation.
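The bracketing transition luminosities quoted earlier in this paragraph follow from evaluating the crossover condition $L_X = A^2$ (in Eddington units) in physical units. In the sketch below, the Eddington luminosity of $\sim$$2\times10^{38}$ erg s$^{-1}$ for a 1.4 M$_{\odot}$ neutron star is our assumption, and the black-hole-derived range of $A$ is taken as representative.

```python
# Transition (L_X = L_j) luminosity: L_X = A**2 in Eddington units.
L_Edd = 2e38        # assumed Eddington luminosity for 1.4 Msun, erg/s

for A in (0.006, 0.1):
    L_X_trans = A**2 * L_Edd
    print(f"A = {A}: transition at L_X ~ {L_X_trans:.1e} erg/s")
# Bracketing values ~7e33 and ~2e36 erg/s, matching the range in the text.
```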
The radius at which the accretion disk transitions to a radiatively inefficient flow (at a given accretion rate) could differ, altering the radiative efficiency of the disk. Finally, the small amount of material that reaches the neutron star surface will also contribute to the radiated energy in the transitional MSP case, whereas this energy could be removed by advection in the case of a black hole LMXB. The exact contribution of material accreting on to the neutron star surface to the X-ray luminosity of J1023 is unclear; only in the high mode do we have a lower limit of 8\% from the detection of coherent pulsations \citep{archibald15a,bogdanov15b}. This value is a lower limit because the total X-ray emission originating from the hotspots will exceed the observed 8\% pulsed fraction, due to the finite hotspot size and propagation effects such as gravitational bending \citep{beloborodov02a}. \subsection{The influence of propeller-mode accretion} We speculate that the (at least intermittent) activation of a propeller-style accretion mode is a key reason for the apparently higher-efficiency jet formation in the transitional MSP systems, compared to other neutron star LMXBs. This is an attractive proposition in that 1) it naturally explains the transition to a jet-dominated regime at lower accretion rates (and hence X-ray luminosities), since the propeller-mode can only operate at these lower accretion rates \citep{ustyugova06a}, 2) simulations have shown that the high-velocity, low-density, collimated axial outflow dominates the energetics of the expelled material \citep{romanova09a}, and 3) a large majority, but not all, of the material is ejected, leaving a small fraction able to reach the neutron star surface \citep{romanova09a,ustyugova06a}. This offers the possibility of explaining the X-ray pulsations observed by \citet{archibald15a} and the radio emission reported here, as well as the similar results for XSS J12270 \citep{papitto15a}.
However, we caution that the results of \citet{romanova09a} are based exclusively on magnetohydrodynamic modeling, and so the radiative processes that could lead to the observed radio emission still need to be carefully modeled. Finally, propeller-mode accretion could also explain the high-energy properties of J1023. The marked increase in $\gamma$-ray luminosity in the LMXB state seen by \citet{stappers14a}, and confirmed here to be roughly constant throughout the LMXB state, has previously been hypothesized to be powered by a pulsar wind interaction \citep{stappers14a,coti-zelati14a}. Based on our radio imaging data, however, we judge it unlikely that the radio pulsar ever becomes active during the current LMXB state. In the propeller-mode accretion scenario, the $\gamma$-ray luminosity increase can instead be explained by synchrotron self-Compton emission from electrons accelerated at the inner edge of the accretion disk by the pulsar's magnetic field. Such a model was developed by \citet{papitto14a} to explain the high-energy emission of XSS J12270; if it is correct, and if (as we predict) all transitional MSP objects undergo propeller-mode accretion at low accretion rates, then it is to be expected that J1023 would produce $\gamma$-ray emission via the same mechanism. In this case too, however, there are a few caveats to consider. The model proposed by \citet{papitto14a} assumes the presence of a Fermi acceleration process occurring at the disk/magnetosphere boundary, which, although plausible, has not been previously demonstrated to occur in LMXB systems. Furthermore, as noted by \citet{papitto14a}, the emission region responsible for the high-energy emission cannot also produce the observed radio emission; as we have shown, the size of the radio-emitting region in J1023 must be of order 0.5--30 times the binary separation of 4.3 light-seconds.
Finally, the \citet{papitto14a} model does not include accretion onto the neutron star surface, although they note that this is a possibility if the amount of inflowing material is very small. The X-ray pulsations seen by \citet{archibald15a} for J1023 show that at least some of the X-ray luminosity of J1023 \emph{is} powered by accretion onto the neutron star surface. After submission of this manuscript, the discovery of X-ray pulsations with similar properties for XSS J12270 was also published \citep{papitto15a}, reinforcing the necessity of considering the flow of material to the neutron star surface. Accordingly, the model of \citet{papitto14a} would need further development to reach a self-consistent description of the properties of J1023 from optical to $\gamma$-rays, and further extension (ideally incorporating radiative modeling based on the magnetohydrodynamic models of \citealp{romanova09a}) to cover the radio emission. Given the wealth of multiwavelength data that has been assembled for J1023 \citep{bogdanov15b}, such a detailed analysis should be feasible in a future work. An analysis of the accretion physics of J1023 is complicated by the fact that J1023 has been seen to switch between three distinct X-ray luminosity modes while in its current LMXB state \citep[the ``low", ``high", and ``flare" modes;][]{archibald15a,bogdanov15b} on timescales ranging from tens of seconds to hours. Very similar X-ray modality has also recently been seen in XSS J12270 \citep{papitto15a}. The jet production mechanism in J1023 might operate in only one of the three X-ray modes, or in several or indeed all modes; alternatively, a jet may be present in all modes but with substantially changing properties. As with the radio emission, the $\gamma$-ray production could differ between the observed modes.
Useful extra information could be provided by truly contemporaneous and high-sensitivity radio and X-ray observations, allowing a cross-correlation analysis of the radio and X-ray light curves. If a time delay and smoothing time for the radio emission could be measured, this could provide useful information on the jet velocity, the size of the radio emitting region, and its separation from the neutron star. Resolving the jet and tracking the motion of jet components using VLBI imaging (ideally in conjunction with X-ray observations) would offer a different, similarly powerful handle on the jet physics; however, as our EVN observations showed, this would require an extremely sensitive VLBI array, as well as the good fortune to observe during a period of above-average radio luminosity. \subsection{Expanding the study of jets in neutron star LMXBs at low accretion rates} In addition to ongoing investigation of J1023 in the LMXB state, the identification of additional members of this apparent source class of jet-dominated neutron star LMXBs would be extremely helpful in furthering our understanding of the accretion process(es) in this regime. This is especially true since the current single $L_R/L_X$ data points for J1023 and XSS J12270 cannot reveal the correlation index for the individual sources, whereas several black hole LMXBs have had measurements made over several orders of magnitude in X-ray luminosity (and hence accretion rate) which allow this estimation to be made for individual sources in addition to the population as a whole \citep{gallo14a}. Several other known LMXBs display X-ray properties more or less similar to the transitional MSP systems: variable X-ray luminosities, which cluster above their quiescent values at around $10^{33}$ erg s$^{-1}$ for extended periods of time, and a spectrum with a hard power-law component ($\Gamma < 2$). A summary of such known systems is given in \citet{degenaar14a}.
However, with one exception, all are considerably more distant than J1023, making the task of identifying any flat-spectrum radio emission challenging. The nearest and most promising source is Cen X$-$4 \citep[$d \sim 1.2$\,kpc;][]{chevalier89a}. Whilst the X-ray luminosity and variability are similar to those of J1023, \citet{chakrabarty14a} and \citet{dangelo15a} show that the X-ray spectrum is considerably different, with a significant thermal component and a cutoff of the power-law component above 10 keV. In contrast, \citet{tendulkar14a} find no such cutoff for J1023. \citet{bernardini13a} suggest that the X-ray emission of Cen X$-$4 is derived primarily from accretion onto the neutron-star surface and find it unlikely that a strong, collimated outflow would be present. Given the proximity of Cen X-4, confirming or denying the presence of an outflow should be relatively straightforward with a combined radio/X-ray observing campaign. Other sources include XMM J174457-2850.3 \citep[$d = 6.5$\,kpc;][]{degenaar14a}, SAX J1808.4-3658 \citep[a known AMXP at $d=2.5$--$3.5$\,kpc that has been detected several times in the radio during outburst;][]{gaensler99a,rupen05a,galloway06a,campana08a}, EXO 1745$-$248 \citep[$d\sim8.7$\,kpc;][]{wijnands05a}, and Aql X$-$1 \citep[an atoll source at $d\sim5$\,kpc which has once shown coherent X-ray pulsations and has been detected in radio continuum at higher luminosities;][]{rutledge02a,casella08a,tudose09a}. However, whilst all of these are strong candidates for propeller-mode accretion, their distance means that the expected average radio flux density would be of the order of 3--25 $\mu$Jy, making a detection challenging to near-impossible, even with very deep VLA imaging.
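The expected flux densities quoted above follow, to order of magnitude, from inverse-square scaling of the median J1023 flux density to the distances of these candidates, assuming a comparable intrinsic radio luminosity. In the sketch below, the J1023 distance of $\sim$1.37\,kpc is our assumed value (it is not quoted in this section).

```python
# Inverse-square scaling of the ~87 uJy median flux density of J1023
# to the distances of the candidate sources quoted in the text.
S_J1023 = 87.0        # uJy, median 10 GHz flux density of J1023
d_J1023 = 1.37        # kpc; assumed distance (not quoted in this section)

candidates = {        # kpc, distances as quoted in the text
    "SAX J1808.4-3658": 2.5,
    "Aql X-1": 5.0,
    "XMM J174457-2850.3": 6.5,
    "EXO 1745-248": 8.7,
}
for name, d in candidates.items():
    S_expected = S_J1023 * (d_J1023 / d)**2
    print(f"{name}: ~{S_expected:.0f} uJy")
```

With these inputs the predictions span roughly 2--26 $\mu$Jy, of the same order as the 3--25 $\mu$Jy range quoted above; the residual differences reflect the assumed J1023 distance and the neglect of any scaling with X-ray luminosity.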
The identification of new transitional MSP systems (either found in the LMXB state, such as the system described by \citealt{bogdanov15a}, or objects from the known population of ``redback" or ``black widow" pulsars which transition to the LMXB state) is an alternate possibility for providing new data points on the radio/X-ray correlation. A third option is a radio survey of other neutron star LMXB systems which display a hard X-ray spectrum at intermediate X-ray luminosities ($10^{34} < L_X < 10^{35}$ erg s$^{-1}$), as the radio emission should also be brighter and hence easier to detect out to large distances in a reasonable observing time. Some candidate intermediate-luminosity sources are listed in \citet{degenaar14a}. \subsection{Flat-spectrum radio sources in globular clusters} Finally, we consider the implications of this result for observations of accreting sources in globular clusters. As shown in Figure~\ref{fig:lrlx}, the three known transitional MSP systems all exhibit higher radio luminosity than models based on neutron star binaries with higher accretion rates had predicted. Rather than being two or more orders of magnitude fainter in the radio band than black hole systems with an equivalent X-ray luminosity, the difference appears to be closer to a factor of five, which is comparable to the scatter around the black hole correlation (the transitional MSP sample is too small for its scatter to be estimated). This relative radio brightness invites a re-examination of globular cluster black hole candidates in M22 \citep{strader12a} and M62 \citep{chomiuk13a}, where the ratio of X-ray to radio luminosity was a key element in the interpretation. The M62 candidate source falls exactly on the $L_R - L_X$ correlation for black holes, and so a black hole interpretation must still be favoured.
However, if the scatter around the transitional MSP correlation is comparable to that of the black hole correlation, then from the point of view of the X-ray and radio luminosity it is certainly plausible that the M62 source is actually a transitional MSP. The likely association of a giant star as the companion object \citep{chomiuk13a} would, however, mean that the orbital period of the system would be longer than that of a typical ``redback" MSP. Of order 10 redback MSPs are known in globular clusters\footnote{A list of known redback pulsars is maintained at \url{http://www.naic.edu/\~{}pfreire/GCpsr.html}}, indicating the potential for transitional systems to be found. In light of the rapid variability demonstrated by J1023, the fact that the radio and X-ray observations of M62 were not contemporaneous is particularly important, as it suggests that a single measurement of radio luminosity and a single measurement of X-ray luminosity may not be robust. The two M22 sources fall well above the $L_R - L_X$ correlation for black holes (that is, they are even more radio bright than predicted by the best-fit correlation for black hole systems). At face value, that argues against a transitional MSP interpretation. However, as with the M62 candidate, the observations were not simultaneous, which allows for either ``normal" variability or a change in the state of the system. In the latter scenario, the radio observations of the source would have been carried out during the LMXB state, and the X-ray observations during the radio pulsar state. While it is unlikely that both sources changed state between the X-ray and radio observations (with the caveat that our knowledge of how long transitional MSPs spend in the LMXB state per transition is limited), the fact that the sources fall quite far from the $L_R - L_X$ correlation for black hole LMXBs also requires explanation.
For both M22 and M62, truly simultaneous radio and X-ray observations (or a quasi-simultaneous, extended monitoring campaign such as that undertaken for J1023) would give much more certainty about whether a transitional MSP system could be responsible. \section{Conclusions} We have monitored the transitional MSP J1023 in its current LMXB state over a period of 6 months, detecting variable X-ray emission and variable, flat-spectrum radio emission that is suggestive of a compact jet. In both the radio and X-ray bands, the average luminosity over months to a year is quite stable, despite variability of two orders of magnitude in luminosity on timescales of seconds to hours. When J1023 is compared to the other two known transitional MSPs and other black hole and neutron star LMXBs, it appears that the transitional MSPs follow a correlation between radio and X-ray luminosity that differs both from that of the black hole LMXB systems and from the relationships previously proposed for neutron star LMXB systems accreting at higher rates \citep{migliari06a,migliari11a}. The apparent correlation seen for the transitional MSP systems does, however, extend to a subset of the AMXP sources, which were already known to be the most radio-loud (for a given X-ray luminosity) of the neutron star LMXB systems \citep{migliari11a}. We hypothesize that the transitional MSPs are jet-dominated accretion systems operating in a propeller mode, where only a fraction of the mass transferred from the secondary reaches the neutron star surface and the majority is ejected in a jet. The similarity to AMXP systems implies that they, too, may become jet-dominated in some cases. In this scenario, the $\gamma$-ray emission seen from J1023 in the LMXB state originates from the acceleration of particles at the inner edge of the accretion disk by the pulsar magnetic field, and the radio emission is generated in the collimated outflow.
We predict that neutron star systems accreting in propeller mode will generically show radio jets and $\gamma$-ray emission, and will follow a radio -- X-ray correlation of $L_R \propto L_X^{0.7}$, which will break down at a sufficiently high mass accretion rate (and hence X-ray luminosity), when accretion can no longer proceed via a mechanism where the majority of the material is expelled. It remains unclear whether the transition to a jet-dominated regime is a common occurrence for neutron star LMXBs at sufficiently low mass transfer rates, or if a property intrinsic to the transitional MSPs / AMXPs such as magnetic field strength and/or spin period is important in enabling jet formation. The \citet{papitto14a} model of propeller-mode accretion which predicts $\gamma$-ray emission is strongly dependent on the neutron star period and magnetic field strength, and it is likely that these parameters are also important for the generation of the jet which powers the radio emission. Deep radio observations of a wider range of neutron star LMXB systems in quiescence would be desirable to answer these questions. Finally, we note that future searches for black holes in globular clusters should make use of contemporaneous radio and X-ray observations to distinguish black hole systems from transitional MSP systems similar to J1023. \acknowledgements We thank the referee, Rob Fender, for constructive suggestions which improved this manuscript. ATD is supported by an NWO Veni Fellowship. JCAMJ is supported by an Australian Research Council (ARC) Future Fellowship (FT140101082) and also acknowledges support from an ARC Discovery Grant (DP120102393). A.P. acknowledges support from an NWO Vidi fellowship. J.W.T.H. acknowledges funding from an NWO Vidi fellowship and ERC Starting Grant ``DRAGNET" (337062). A.M.A. was funded for this work via an NWO Vrije Competitie grant (PI Hessels).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The EVN (\url{http://www.evlbi.org}) is a joint facility of European, Chinese, South African, and other radio astronomy institutes funded by their national research councils. The WSRT is operated by ASTRON (Netherlands Institute for Radio Astronomy) with support from the Netherlands Foundation for Scientific Research. LOFAR, the Low Frequency Array designed and constructed by ASTRON, has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope (ILT) foundation under a joint scientific policy. The research leading to these results has received funding from the European Commission Seventh Framework Programme (FP/2007-2013) under grant agreement No. 283393 (RadioNet3). e-VLBI research infrastructure in Europe was supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement number RI-261525 NEXPReS. This research has made use of NASA's Astrophysics Data System. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. \bibliographystyle{apj}
\section{Introduction} A \emph{cluster modular group}, defined in ~\cite{FG09}, is a group associated with combinatorial data called a \emph{seed}. An element of the cluster modular group is a finite composition of permutations of vertices and \emph{mutations}, which preserves the \emph{exchange matrix} and induces non-trivial ($\A$- and $\X$-)\emph{cluster transformations}. The cluster modular group acts on the \emph{cluster algebra} as automorphisms (only using the $\A$-cluster transformations). A closely related notion of an automorphism group of the cluster algebra, called the \emph{cluster automorphism group}, was introduced in ~\cite{ASS} and further investigated by several authors ~\cite{Blanc-Dolgachev,CZ16,CZ16coeff,Lawson16}. Relations between the cluster modular group and the cluster automorphism group are investigated in ~\cite{Fraser}. It is known that, for each marked hyperbolic surface $F$, the cluster modular group of the seed associated with an ideal triangulation of $F$ includes the \emph{mapping class group} of $F$ as a subgroup of finite index ~\cite{BS15}. Therefore it seems natural to ask whether a property known for mapping class groups holds for general cluster modular groups. In this paper we attempt to provide an analogue of the \emph{Nielsen-Thurston theory} ~\cite{Thurston,FLP} on mapping class groups, which classifies mapping classes into three types in terms of the fixed point property of the action on the \emph{Thurston compactification} of the \Teich space. Not only is this an attempt at generalization, but it is also expected to deepen our understanding of mapping classes as cluster transformations. A problem equivalent to classifying mapping classes in terms of cluster transformations was originally raised in ~\cite{PP93}. The \emph{cluster ensemble} associated with a seed, defined in ~\cite{FG09}, plays a similar role to the \Teich space when we study cluster modular groups.
It can be thought of as a pair of spaces on which the cluster modular group acts. Technically, it consists of two functors $\psi_\A$, $\psi_\X: \G \to \Pos(\mathbb{R})$, called $\A$- and $\X$-spaces respectively, which are related by a natural transformation $p: \psi_\A \to \psi_\X$. Here the objects of the target category are split algebraic tori over $\mathbb{R}$, and the values of these functors patch together to form a pair of contractible manifolds $\A(\pos)$ and $\X(\pos)$, on which the cluster modular group acts analytically. These manifolds are naturally compactified to a pair of topological closed disks $\overline{\A}=\A(\pos) \sqcup P\A(\trop)$ and $\overline{\X}=\X(\pos) \sqcup P\X(\trop)$, called the \emph{tropical compactifications} ~\cite{FG16,Le16}, on which the actions of the cluster modular group extend continuously. These are algebraic generalizations of the \emph{Thurston compactifications} of \Teich spaces. In the case of the seed associated with a triangulated surface, $\U(\pos)=p(\A(\pos))$ is identified with the \Teich space, while $\A(\pos)$ and $\X(\pos)$ are the \emph{decorated \Teich space} and the \emph{enhanced \Teich space} introduced by Penner ~\cite{Penner87} and Fock-Goncharov ~\cite{FG07}, respectively. The tropical compactification $\overline{\U}$ is identified with the Thurston compactification of the \Teich space ~\cite{FG16}. For an investigation of the action of the cluster modular group on $\U(\mathbb{Z}^t)$, see ~\cite{Mandel14}.
They constitute an analogue of the classification of mapping classes. \begin{dfn}[Nielsen-Thurston types: \cref{dfn; NT types}] Let $\bi$ be a seed, $\C=\C_{|\bi|}$ the corresponding cluster complex and $\Gamma=\Gamma_{|\bi|}$ the corresponding cluster modular group. An element $\phi \in \Gamma$ is said to be \begin{enumerate} \item periodic if $\phi$ has finite order, \item cluster-reducible if $\phi$ has a fixed point in the geometric realization $|\C|$ of the cluster complex, and \item cluster-pseudo-Anosov (cluster-pA) if no power of $\phi$ is cluster-reducible. \end{enumerate} \end{dfn} These types give a classification of elements of the cluster modular group in the sense that the cyclic group generated by any element intersects at least one of these types. We have the following analogue of the classical Nielsen-Thurston theory for general cluster modular groups, which is the main theorem of this paper. \begin{thm}[\cref{thm; main thm}]\label{thm: cluster modular} Let $\bi$ be a seed of \Teich type (see \cref{dfn; Teich type}) and $\phi \in \Gamma_{|\bi|}$ an element. Then the following hold. \begin{enumerate} \item The element $\phi \in \Gamma$ is periodic if and only if it has fixed points in $\A(\pos)$ and $\X(\pos)$. \item The element $\phi \in \Gamma$ is cluster-reducible if and only if there exists a point $L \in \X(\trop)_+$ such that $\phi[L]=[L]$. \item If the element $\phi \in \Gamma$ is cluster-pA, there exists a point $L \in \X(\trop) \backslash \X(\trop)_+$ such that $\phi[L]=[L]$. \end{enumerate} \end{thm} We will show that the seeds of \Teich type include seeds of finite type, the seeds associated with triangulated surfaces, and the rank $2$ seeds of finite mutation type. In the theorem above, we neither characterize cluster-pA elements in terms of a fixed point property, nor describe the asymptotic behavior of the orbits as we do in the original Nielsen-Thurston classification (see \cref{classical NT}).
However, we can show the following asymptotic behavior of orbits, similar to that of pA classes in the mapping class groups, for certain classes of cluster-pA elements. \begin{thm}[cluster reductions and cluster Dehn twists: \cref{thm; cluster Dehn twists}]\label{thm; generalized Dehn}\ {} \begin{enumerate} \item Let $\bi$ be a seed and $\phi \in \Gamma_{|\bi|}$ a cluster-reducible element. Then some power $\phi^l$ induces a new element in the cluster modular group associated with a seed which has smaller mutable rank. We call this process the \emph{cluster reduction}. \item After a finite number of cluster reductions, the element $\phi^l$ induces a cluster-pA element. \item Let $\bi$ be a skew-symmetric connected seed which has mutable rank $n \geq 3$, and $\phi \in \Gamma_{|\bi|}$ an element of infinite order. If some power of the element $\phi$ is cluster-reducible to rank $2$, then there exists a point $[G] \in P\A(\trop)$ such that we have \[ \lim_{n \to \infty}\phi^{\pm n}(g)=[G] \text{ in $\overline{\A}$} \] for all $g\in \A(\pos)$. \end{enumerate} \end{thm} We call a mapping class which satisfies the assumption of \cref{thm; generalized Dehn}(3) a \emph{cluster Dehn twist}. Dehn twists in the mapping class groups are cluster Dehn twists. The above theorem says that cluster Dehn twists have the same asymptotic behavior of orbits on $\overline{\A}$ as Dehn twists. We expect that cluster Dehn twists together with seed isomorphisms generate cluster modular groups, as Dehn twists do in the case of mapping class groups. The generation of cluster modular groups by cluster Dehn twists and seed isomorphisms will be discussed elsewhere. This paper is organized as follows. In \cref{section: definition}, we recall some basic definitions from ~\cite{FG09}. Here we adopt a slightly different treatment of the frozen vertices and definition of the cluster complex from those of ~\cite{FZ03,FG09}.
In \cref{section: NT types}, we define the Nielsen-Thurston types for elements of cluster modular groups and study the fixed point property of the actions on the tropical compactifications. Our basic examples are the seeds associated with triangulated surfaces, studied in \cref{section: Teich}. Most of the contents of this section seem to be well-known to specialists, but they are scattered in the literature; we have therefore gathered the relevant results and given a precise description of these seeds. Other examples are studied in \cref{examples}. \bigskip \noindent \textbf{Acknowledgement.} I would like to express my gratitude to my advisor, Nariya Kawazumi, for helpful guidance and careful instruction. Also I would like to thank Toshiyuki Akita, Vladimir Fock, Rinat Kashaev, and Ken'ichi Ohshika for valuable advice and discussion. This work is partially supported by the program for Leading Graduate School, MEXT, Japan. \section{Definition of the cluster modular groups}\label{section: definition} \subsection{The cluster modular groups and the cluster ensembles} We collect here the basic definitions on cluster ensembles and cluster modular groups. This section is based on Fock-Goncharov's seminal paper ~\cite{FG09}, while the treatment of frozen variables here is slightly different from theirs. In particular, the dimensions of the $\A$- and $\X$-spaces equal the rank and the mutable rank of the seed, respectively. See \cref{def: ensembles}. \begin{dfn}[seeds] A \emph{seed} consists of the following data ${\bi} =(I, I_0, \epsilon, d)$: \begin{enumerate} \item $I$ is a finite set and $I_0$ is a subset of $I$ called the \emph{frozen subset}. An element of $I-I_0$ is called a \emph{mutable vertex}. \item $\epsilon=(\epsilon_{ij})$ is a $\mathbb{Q}$-valued function on $I \times I$ such that $\epsilon_{ij} \in \mathbb{Z}$ for $(i, j) \notin I_0 \times I_0$, which is called the \emph{exchange matrix}.
\item $d = (d_i) \in \mathbb{Z}_{>0}^{I}$ such that $\mathrm{gcd}(d_i)=1$ and the matrix $\hat{\epsilon}_{ij}:= \epsilon_{ij}d_j$ is skew-symmetric. \end{enumerate} The seed $\bi$ is said to be \emph{skew-symmetric} if $d_i=1$ for all $i \in I$. In this case the exchange matrix $\epsilon$ is a skew-symmetric matrix. We simply write ${\bi} =(I, I_0, \epsilon)$ if $\bi$ is skew-symmetric. We call the numbers $N:=|I|$, $n:=|I-I_0|$ the \emph{rank} and the \emph{mutable rank} of the seed $\bi$, respectively. \end{dfn} \begin{rem} Note that unlike Fomin-Zelevinsky's definition of seeds (e.g. ~\cite{FZ03}), our definition does not include the notion of \emph{cluster variables}. A corresponding notion, which we call the \emph{cluster coordinate}, is given in \cref{seed tori} below. \end{rem} Skew-symmetric seeds are in one-to-one correspondence with quivers without loops and 2-cycles. Here a loop is an arrow whose endpoints are the same vertex, and a 2-cycle is a pair of arrows sharing both endpoints and having different orientations. Given a skew-symmetric seed ${\bi} =(I, I_0, \epsilon)$, the corresponding quiver is given by taking $I$ as the set of vertices, and drawing $|\epsilon_{ij}|$ arrows from the vertex $i$ to the vertex $j$ (resp. $j$ to $i$) if $\epsilon_{ij}>0$ (resp. $\epsilon_{ij}<0$). \begin{dfn}[seed mutations] For a seed ${\bi} =(I, I_0, \epsilon, d)$ and a vertex $k \in I - I_0$, we define a new seed ${\bi'} =(I', I_0', \epsilon', d')$ as follows: \begin{itemize} \item $I':=I, I_0':= I_0, d':=d$, \item $\epsilon'_{ij}:= \begin{cases} -\epsilon_{ij} & \text{if $k \in \{ i, j\}$}, \vspace{2mm} \\ \epsilon_{ij} + \displaystyle \frac{|\epsilon_{ik}|\epsilon_{kj}+ \epsilon_{ik}|\epsilon_{kj}|}{2} & \text{ otherwise}. \end{cases}$ \end{itemize} We write $\bi' = \mu_k(\bi)$ and refer to this transformation of seeds as the \emph{mutation directed to the vertex $k$}. \end{dfn} Next we associate \emph{cluster transformation} with each seed mutation.
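For concreteness, the mutation rule for the exchange matrix can be implemented in a few lines (an illustrative sketch added here, not part of the original text). A basic consistency check is that mutation in a fixed direction is an involution on exchange matrices:

```python
def mutate(eps, k):
    """Mutate the exchange matrix eps = (eps_ij) at the mutable vertex k."""
    n = len(eps)
    new = [row[:] for row in eps]
    for i in range(n):
        for j in range(n):
            if k in (i, j):
                new[i][j] = -eps[i][j]
            else:
                # eps'_ij = eps_ij + (|eps_ik| eps_kj + eps_ik |eps_kj|) / 2
                new[i][j] = eps[i][j] + (abs(eps[i][k]) * eps[k][j]
                                         + eps[i][k] * abs(eps[k][j])) // 2
    return new

eps_a2 = [[0, 1], [-1, 0]]                    # exchange matrix of type A_2
assert mutate(eps_a2, 0) == [[0, -1], [1, 0]]
assert mutate(mutate(eps_a2, 0), 0) == eps_a2  # mu_k is an involution
```

Here a skew-symmetric seed is represented simply by its integer exchange matrix, with all vertices mutable; the combination $|\epsilon_{ik}|\epsilon_{kj}+\epsilon_{ik}|\epsilon_{kj}|$ is always even, so integer division is exact.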
For a field $k$, let $k^*$ denote the multiplicative group. Our main interest is the case $k=\mathbb{R}$. A direct product $(k^*)^n$ is called a \emph{split algebraic torus} over $k$. \begin{dfn}[seed tori]\label{seed tori} Let $\bi=(I,I_0,\epsilon,d)$ be a seed and $\Lambda:= \mathbb{Z}[I ]$, $\Lambda':=\mathbb{Z}[I-I_0]$ be the lattices generated by $I$ and $I-I_0$, respectively. \begin{enumerate} \item $\X_{\bi}(k):= \mathrm{Hom}_\mathbb{Z}( \Lambda', k^*)$ is called the \emph{seed $\X$-torus} associated with $\bi$. For $i \in I - I_0$, the character $X_i: \X_{\bi} \to k^*$ defined by $\phi \mapsto \phi(e_i)$ is called the \emph{cluster $\X$-coordinate}, where $(e_i)$ denotes the natural basis of $\Lambda'$. \item Let $f_i:=d_i^{-1} e_i^* \in \Lambda^*\otimes_{\mathbb{Z}}\mathbb{Q}$ and $\Lambda^\circ:=\oplus_{i \in I}\mathbb{Z}f_i \subset \Lambda^*\otimes_{\mathbb{Z}}\mathbb{Q}$ another lattice, where $\Lambda^*$ denotes the dual lattice of $\Lambda$ and $(e_i^*)$ denotes the dual basis of $(e_i)$. Then $\A_{\bi}(k):= \mathrm{Hom}_\mathbb{Z}( \Lambda^\circ, k^*)$ is called the \emph{seed $\A$-torus} associated with $\bi$. For $i \in I $, the character $A_i: \A_{\bi} \to k^*$ defined by $\psi \mapsto \psi(f_i)$ is called the \emph{cluster $\A$-coordinate}. The coordinates $A_i$ ($i \in I_0$) are called \emph{frozen variables}. \end{enumerate} \end{dfn} Note that $\X_\bi(k) = (k^*)^n$ and $\A_\bi(k) = (k^*)^N$ as split algebraic tori. These two tori are related as follows. Let $p^*: \Lambda' \to \Lambda^\circ$ be the linear map defined by \[ p^*(v) = \sum_{\substack{i \in I-I_0 \\ k \in I}} v_i \epsilon_{ik} f_k \] for $v=\sum_{i \in I-I_0} v_i e_i \in \Lambda'$. By taking $\mathrm{Hom}_\mathbb{Z}( -, k^*)$, it induces a monomial map $p_{\bi}: \A_{\bi} \to \X_{\bi}$, which is represented in cluster coordinates as $p_{\bi}^* X_i = \prod_{k \in I} A_k^{\epsilon_{ik}}$. 
\begin{rem} Note that we assign cluster $\X$-coordinates only on mutable vertices, which is a different convention from that of ~\cite{FG09}. It seems to be natural to adopt our convention from the point of view of the \Teich theory (see \cref{section: Teich}). \end{rem} \begin{dfn}[cluster transformations]\label{cluster transf} For a mutation $\mu_k:\bi \to \bi'$, we define transformations on seed tori called the \emph{cluster transformations} as follows; \begin{enumerate} \item $\mu_k^{x}: \X_{\bi} \to \X_{\bi'}$, \\ $(\mu_k^{x})^* X_i':= \begin{cases} X_k^{-1} & \text{if $i=k$}, \\ X_i(1+ X_k^{\mathrm{sgn}\epsilon_{ki}})^{\epsilon_{ki}} & \text{otherwise}, \end{cases}$ \item $\mu_k^{a}: \A_{\bi} \to \A_{\bi'}$, \\ $(\mu_k^{a})^* A_i':= \begin{cases} A_i^{-1}(\prod_{\epsilon_{kj}>0} A_j^{\epsilon_{kj}} + \prod_{\epsilon_{kj}<0} A_j^{-\epsilon_{kj}}) & \text{if $i=k$}, \\ A_i & \text{otherwise}. \end{cases}$ \end{enumerate} \end{dfn} Note that the frozen $\A$-variables are not transformed by mutations, while they have an influence on the transformations of the mutable $\A$-variables. \begin{dfn}[the cluster modular group] Let $\bi=(I,I_0,\epsilon,d)$ be a seed. Recall that a \emph{groupoid} is a small category whose morphisms are all invertible. \begin{enumerate} \item A \emph{seed isomorphism} is a permutation $\sigma$ of $I$ such that $\sigma(i)=i$ for all $i \in I_0$ and $\epsilon_{\sigma(i) \sigma(j)}=\epsilon_{ij}$ for all $i,j \in I$. A \emph{seed cluster transformation} is a finite composition of mutations and seed isomorphisms. A seed cluster transformation is said to be \emph{trivial} if the induced cluster $\A$- and $\X$- transformations are both identity. Two seeds are called \emph{equivalent} if they are connected by a seed cluster transformation. Let $|\bi|$ denote the equivalence class containing the seed $\bi$. 
\item Let $\G_{|\bi|}$ be the groupoid whose objects are seeds in $|\bi|$, and morphisms are seed cluster transformations, modulo trivial ones. The automorphism group $\Gamma= \Gamma_{|\bi|}:= \mathrm{Aut}_{\G_{|\bi|}}(\bi)$ is called the \emph{cluster modular group} associated with the seed $\bi$. We call elements of the cluster modular group \emph{mapping classes} in analogy with the case in which the seed is coming from an ideal triangulation of a surface (see \cref{section: Teich}). \end{enumerate} \end{dfn} \begin{ex}\label{example: cluster modular group} We give some examples of cluster modular groups. \begin{enumerate} \item (Type $A_2$). Let ${\bi}:= (\{0,1\}, \emptyset, \epsilon)$ be the skew-symmetric seed defined by $\epsilon:= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, which is called \emph{type $A_2$}. Let $\phi:=(0\ 1)\circ \mu_0 \in \Gamma_{A_2}$. It is the generator of the cluster modular group. The associated cluster transformations are described as follows: \begin{align*} \phi^*(A_0, A_1)&=\left(A_1, \frac{1+A_1}{A_0}\right), \\ \phi^*(X_0, X_1)&=(X_1(1+X_0), X_0^{-1}). \end{align*} Then one can check that $\phi$ has order $5$ by a direct calculation. See ~\cite{FG09}, Section 2.5, for instance. In particular we have $\Gamma_{A_2} \cong \mathbb{Z}\slash 5$. \item (Type $L_k$ for $k \geq 2$). For an integer $k \geq 2$, let ${\bi}_k:= (\{0,1\}, \emptyset, \epsilon_k)$ be the skew-symmetric seed defined by $\epsilon_k:= \begin{pmatrix} 0 & k \\ -k & 0 \end{pmatrix}$. Let us refer to this seed as the \emph{type $L_k$}. The quiver associated with the seed ${\bi}_k$ is shown in \cref{fig; Lk}. Let $\phi:=(0\ 1)\circ \mu_0 \in \Gamma_{L_k}$. It is the generator of the cluster modular group. In this case, the associated cluster transformations are described as follows: \begin{align*} \phi^*(A_0, A_1)&=\left(A_1, \frac{1+A_1^k}{A_0}\right), \\ \phi^*(X_0, X_1)&=(X_1(1+X_0)^k, X_0^{-1}).
\end{align*} It turns out that in this case the element $\phi$ has infinite order ~\cite{FZ03}. See \cref{example: periodic}. \begin{figure} \[ \begin{xy}0;<1pt,0pt>:<0pt,-1pt>:: (50,0) *+[o][F]{0} ="0", (100,0) *+[o][F]{1} ="1", "0", {\ar|*+{\scriptstyle k}"1"}, \end{xy} \] \caption{quiver $L_k$} \label{fig; Lk} \end{figure} \end{enumerate} \end{ex} Next we define the concept of a \emph{cluster ensemble}, which is defined to be a pair of functors related by a natural transformation. A cluster ensemble, in particular, produces a pair of real-analytic manifolds, on which the cluster modular group acts analytically. Let us recall some basic concepts from algebraic geometry. For a split algebraic torus $H$, let $X_1,\dots,X_n$ be its coordinates. A rational function $f$ on $H$ is said to be \emph{positive} if it can be represented as $f=f_1/f_2$, where $f_i=\sum_{\alpha \in \mathbb{N}^{n}} a_\alpha X^\alpha$ and $a_\alpha \in \mathbb{Z}_{\geq0}$. Here we write $X^\alpha:=X_1^{\alpha_1}\dots X_n^{\alpha_n}$ for a multi-index $\alpha \in \mathbb{N}^n$. Note that the set of positive rational functions on a split algebraic torus forms a semifield under the usual operations. A rational map between two split algebraic tori $f: H_1 \to H_2$ is said to be \emph{positive} if the induced map $f^*$ preserves the semifields of positive rational functions. \begin{dfn}[positive spaces]\ {} \begin{enumerate} \item Let $\Pos(k)$ be the category whose objects are split algebraic tori over $k$ and morphisms are positive rational maps. A functor $\psi: \G \to \Pos(k)$ from a groupoid $\G$ is called a \emph{positive space}. \item A \emph{morphism} $\psi_1 \to \psi_2 $ between two positive spaces $ \psi_i: \G _i \to \Pos(k)$ (i=1,2) consists of the data $(\iota, p)$, where $\iota: \G_1 \to \G_2$ is a functor and $p: \psi_1 \Rightarrow \psi_2 \circ \iota$ is a natural transformation.
A morphism of positive spaces $(\iota, p): \psi_1 \to \psi_2$ is said to be \emph{monomial} if the map between split algebraic tori $p_\alpha: \psi_1(\alpha) \to \psi_2(\iota(\alpha))$ preserves the set of monomials for each object $\alpha \in \G_1$. \end{enumerate} \end{dfn} \begin{dfn}[cluster ensembles]\label{def: ensembles}\ {} \begin{enumerate} \item From \cref{cluster transf} we get a pair of positive spaces $\psi_{\X}, \psi_{\A}: \G_{|\bi|} \to \Pos(k)$, and we have a monomial morphism $p=p_{|\bi|}: \psi_{\A} \to \psi_{\X}$ (with $\iota=\mathrm{id}$), given by $p_{\bi}^* X_i = \prod_{k \in I} A_k^{\epsilon_{ik}}$ on each seed $\A$- and $\X$-tori. We call these data the \emph{cluster ensemble} associated with the seed $\bi$, and simply write as $p: \A \to \X$. The groupoid $\G=\G_{|\bi|}$ is called the \emph{coordinate groupoid} of the cluster ensemble. \item Let $\U=p(\A)$ be the positive space obtained by assigning the restriction $\psi_{\X}(\mu): p_{\bi}(\A_{\bi}) \to p_{\bi'}(\A_{\bi'})$ for each mutation $\mu: {\bi} \to {\bi'}$. \end{enumerate} \end{dfn} \begin{dfn}[the positive real part] For a cluster ensemble $p: \A \to \X$ and $\Z = \A,$ $\U$ or $\X$, define the \emph{positive real part} to be the real-analytic manifold obtained by gluing seed tori by corresponding cluster transformations, as follows: \[ \left. \Z(\pos):=\bigsqcup_{\bi \in \G} \Z_{\bi}(\mathbb{R}_{>0}) \middle\slash (\mu_k^z) \right., \] where $\Z_\bi(\mathbb{R}_{>0})$ denotes the subset of $\Z_\bi(\mathbb{R})$ defined by the condition that all cluster coordinates are positive. Note that it is well-defined since positive rational maps preserve positive real parts. Similarly we define $\Z(\mathbb{Q}_{>0})$ and $\Z(\mathbb{Z}_{>0})$. \end{dfn} Note that we have a natural diffeomorphism $\Z_\bi(\pos) \to \Z(\pos)$ for each $\bi \in \G$. The inverse map $\psi_\bi^z : \Z(\pos) \to \Z_\bi(\pos)$ gives a chart of the manifold.
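As a concrete sanity check (an illustrative computation added here, not part of the original text), the type $A_2$ cluster transformations from \cref{example: cluster modular group} can be iterated with exact rational arithmetic: positivity is preserved, the monomial map $p$ intertwines the $\A$- and $\X$-transformations, and the generator $\phi$ has order $5$ on the positive real part:

```python
from fractions import Fraction

# Type A_2: skew-symmetric seed, no frozen vertices, eps_{01} = 1.
def phi_a(a0, a1):
    # phi^*(A_0, A_1) = (A_1, (1 + A_1)/A_0)
    return a1, (1 + a1) / a0

def phi_x(x0, x1):
    # phi^*(X_0, X_1) = (X_1 (1 + X_0), X_0^{-1})
    return x1 * (1 + x0), 1 / x0

def p(a0, a1):
    # p^* X_i = prod_k A_k^{eps_ik}; for A_2 this reads X_0 = A_1, X_1 = A_0^{-1}
    return a1, 1 / a0

a = (Fraction(2), Fraction(5))   # a generic point of A(R_{>0})
x = p(*a)
for _ in range(5):
    assert all(t > 0 for t in a + x)   # positivity is preserved
    assert p(*a) == x                  # p commutes with the action of phi
    a, x = phi_a(*a), phi_x(*x)

assert a == (Fraction(2), Fraction(5))  # phi has order 5, as stated above
```

The intermediate assertion checks the naturality $p \circ \phi^a = \phi^x \circ p$ at each step of the orbit, which is exactly the compatibility of the cluster ensemble with mutations.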
The cluster modular group acts on positive real parts $\Z(\mathbb{R}_{>0})$ as follows: \begin{equation}\label{action} \xymatrix{ \Z(\pos) \ar[d]_{\phi} \ar[r]^{\psi_\bi^z} & \Z_\bi(\pos) \ar[d]^{\mu_{i_1}^z \dots \mu_{i_k}^z \sigma^*} \\ \Z(\pos) \ar[r]^{\psi_\bi^z} & \Z_\bi(\pos) \\ } \end{equation} Here $\phi = \sigma \circ \mu_{i_k} \dots \mu_{i_1} \in \Gamma$ is a mapping class, and $\sigma^*$ is the permutation of coordinates induced by the seed isomorphism $\sigma$. The fixed point property of this action is the main subject of the present paper. \subsection{Cluster complexes} We define a simplicial complex called the \emph{cluster complex}, on which the cluster modular group acts simplicially. In terms of the action on the cluster complex, we will define the Nielsen-Thurston types of mapping classes in \cref{section: NT types}. We propose here an intermediate definition between that of ~\cite{FZ03} and ~\cite{FG09}. Let ${\bi}=(I, I_0, \epsilon, d)$ be a seed. A decorated simplex is an ($n-1$)-dimensional simplex $S$ with a fixed bijection, called a \emph{decoration}, between the set of facets of S and $I-I_0$. Let $\bS$ be the simplicial complex obtained by gluing (an infinite number of) decorated $(n-1)$-dimensional simplices along mutable facets using the decoration. Note that the dual graph $\bS^{\vee}$ is a tree, and there is a natural covering from the set of vertices $V(\bS^{\vee})$ to the set of seeds. An edge of $\bS^{\vee}$ is projected to a mutation under this covering. Assign mutable $\A$-variables to vertices of $\bS$ in such a manner that: \begin{enumerate} \item the reflection with respect to a mutable facet takes the $\A$-variables to the $\A$-variables which are obtained by the corresponding mutation. \item the labels of variables coincide with the decoration assigned to the facet in the opposite side. \item the initial $\A$-coordinates are assigned to the initial simplex.
\end{enumerate} Note that the assignment is well-defined since the dual graph $\bS^{\vee}$ is a tree. Similarly we assign $\X$-variables to co-oriented facets of $\bS$ (see \cref{fig: assign}). Let $\Delta$ be the subgroup of $\mathrm{Aut}(\bS)$ which consists of elements that preserve all cluster variables. \begin{figure} \begin{tikzpicture} \fill (0,0) circle(2pt) coordinate(A); \fill (0,4) circle(2pt) coordinate(B); \fill (A) ++(150: 4) circle(2pt) coordinate(C); \fill (A) ++(30: 4) circle(2pt) coordinate(D); \draw (A) -- (B); \draw (A) -- (C); \draw (A) -- (D); \draw (C) -- (B); \draw (D) -- (B); \path (A) node[below]{$A_1$}; \path (B) node[above]{$A_2$}; \path (C) node[left]{$A_3$}; \path (D) node[right]{$\mu_1^*A_1$}; \draw[->] (0,2) -- (-0.5,2) [thick] node[above]{$X_3$}; \draw[->] (0,2) -- (0.5,2) [thick] node[below]{$\mu_1^*X_3$}; \draw[->, thick] ($(C) !0.5! (B)$) -- ++(315: 0.5) node[right]{$X_1$}; \draw[->, thick] ($(D) !0.5! (B)$) -- ++(225: 0.5) node[below]{$\mu_1^*X_1$}; \draw[->, thick] ($(C) !0.5! (A)$) -- ++(45: 0.5) node[right]{$X_2$}; \draw[->, thick] ($(D) !0.5! (A)$) -- ++(135: 0.5) node[above]{$\mu_1^*X_2$}; \end{tikzpicture} \caption{assignment of variables} \label{fig: assign} \end{figure} \begin{dfn}[the cluster complex]\label{cluster complex} The simplicial complex $\C=\C_{|\bi|}:= \bS \slash \Delta$ is called the \emph{cluster complex}. A set of vertices $\{\alpha_1, \cdots, \alpha_n\} \subset V(\C)$ is called a \emph{cluster} if it spans a maximal simplex. \end{dfn} Let $\C^\vee$ denote the dual graph of the cluster complex. Note that the clusters, equivalently, the vertices of $\C^\vee$, are in one-to-one correspondence with seeds together with tuples of mutable variables $((A_i), (X_i))$. For a vertex $v \in V(\C^\vee)$, let $[v]$ denote the underlying seed. 
Then we get coordinate systems on the positive real parts for each vertex $v \in V(\C^\vee)$, as follows: \begin{align*}\xymatrix{ \psi_v^x: \X(\pos) \ar[r]^{\psi_{[v]}^x} & \X_\bi(\pos) \ar[r]^{(X_i)} & \mathbb{R}^n_{>0} } \\ \xymatrix{ \psi_v^a: \A(\pos) \ar[r]^{\psi_{[v]}^a} & \A_\bi(\pos) \ar[r]^{(A_i)} & \mathbb{R}^N_{>0} } \end{align*} The edges of $\C^\vee$ correspond to seed mutations, and the associated coordinate transformations are described by cluster transformations. \begin{rem} In~\cite{FZ03}, the cluster complex is defined to be a simplicial complex whose ground set is the set of mutable $\A$-coordinates, while the definition in~\cite{FG09} uses all (mutable and frozen) coordinates. In our definition, the frozen $\A$-variables have no corresponding vertices. The existence of the frozen variables does not change the structure of the cluster complex; see Theorem 4.8 of~\cite{CKLP}. \end{rem} \begin{prop}[\cite{FG09}, Lemma 2.15]\label{action on cluster complex} Let $D$ be the subgroup of $\mathrm{Aut}(\bS)$ which consists of the elements that preserve the exchange matrix. Namely, an automorphism $\gamma$ belongs to $D$ if it satisfies $\epsilon^{[\gamma(v)]}_{\gamma(i), \gamma(j)}=\epsilon^{[v]}_{ij}$ for all $v \in V(\C^\vee)$ and $i,j \in [v]$. Then \begin{enumerate} \item $\Delta$ is a normal subgroup of $D$, and \item the quotient group $D \slash \Delta$ is naturally isomorphic to the cluster modular group $\Gamma$. \end{enumerate} In particular, the cluster modular group acts on the cluster complex simplicially. \end{prop} \begin{ex}\label{example: cluster complex} The cluster complexes associated with the seeds defined in \cref{example: cluster modular group} are as follows: \begin{enumerate} \item (Type $A_2$). Let ${\bi}$ be the seed of type $A_2$. The cluster complex is a pentagon. The generator $\phi=(0\ 1)\circ \mu_0 \in \Gamma_{A_2}$ acts on the pentagon by the cyclic shift. \item (Type $L_k$ for $k \geq 2$). Let $\bi$ be the seed of type $L_k$.
The cluster complex is 1-dimensional, and the generator $\phi=(0\ 1)\circ \mu_0 \in \Gamma_{L_k}$ acts by the shift of length $1$. The fact that $\phi$ has infinite order implies that the cluster complex is the line of infinite length. See \cref{example: periodic}. \end{enumerate} \end{ex} \subsection{Tropical compactifications of positive spaces} Next we define tropical compactifications of positive spaces, which are described in~\cite{FG16} and~\cite{Le16}. \begin{dfn}[the tropical limit] For a positive rational map $f(X_1, \cdots , X_N)$ over $\mathbb{R}$, we define the \emph{tropical limit} $\Trop (f)$ of $f$ by \[ \Trop (f)(x_1, \cdots, x_N):= \lim_{\epsilon \to 0} \epsilon \log f(e^{x_1/\epsilon}, \cdots, e^{x_N/\epsilon}), \] which defines a piecewise-linear function on $\mathbb{R}^N$. \end{dfn} \begin{dfn}[the tropical space] Let $\psi_{\Z}: \G \to \Pos(\mathbb{R})$ be a positive space. Then let $\Trop(\psi_{\Z}): \G \to \mathrm{PL}$ be the functor given by the tropical limits of the positive rational maps given by $\psi_{\Z}$, where $\mathrm{PL}$ denotes the category whose objects are Euclidean spaces and whose morphisms are piecewise-linear (PL) maps. Let $\Z(\trop)$ be the PL manifold obtained by gluing coordinate Euclidean spaces by the PL maps given by $\Trop(\psi_{\Z})$, which is called the \emph{tropical space}. \end{dfn} Note that since PL maps given by tropical limits are homogeneous, $\mathbb{R}_{>0}$ naturally acts on $\Z(\trop)$. The quotient $P\Z(\trop):=(\Z(\trop) \backslash \{0\})\slash \mathbb{R}_{>0}$ is PL homeomorphic to a sphere. Let us denote the image of $G \in \Z(\trop) \backslash \{0\}$ under the natural projection by $[G] \in P\Z(\trop)$. The cluster modular group acts on $\Z(\trop)$ and $P\Z(\trop)$ by PL homeomorphisms, as in \cref{action}.
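As a sanity check of the definition, let us tropicalize by hand the positive rational map $f(X_0, X_1) = X_1(1+X_0)$, which appears in the rank-2 cluster $\X$-transformations of \cref{example: periodic}; this is a direct computation from the definition, not taken from the references:

```latex
\begin{align*}
  \Trop(f)(x_0, x_1)
  &= \lim_{\epsilon \to 0} \epsilon \log \left( e^{x_1/\epsilon}
       \left( 1 + e^{x_0/\epsilon} \right) \right) \\
  &= x_1 + \lim_{\epsilon \to 0} \epsilon \log \left( 1 + e^{x_0/\epsilon} \right)
   = x_1 + \max(0, x_0).
\end{align*}
% The last limit equals 0 for x_0 <= 0 and x_0 for x_0 > 0. In general,
% monomials tropicalize to linear maps and sums to maxima, so Trop(f) is
% piecewise-linear and homogeneous of degree 1.
```

The homogeneity visible here is what allows the $\mathbb{R}_{>0}$-action on $\Z(\trop)$ and the passage to the projectivization $P\Z(\trop)$.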
\begin{dfn}[a divergent sequence]\label{divergent} For a positive space $\psi_{\Z}: \G \to \Pos(\mathbb{R})$, we say that a sequence $(g_m)$ in $\Z(\pos)$ is \emph{divergent} if for each compact set $K \subset \Z(\pos)$ there is a number $M$ such that $g_m \not\in K$ for all $m \geq M$. \end{dfn} \begin{dfn}[the tropical compactification] Let $\psi_\X: \G \to \Pos(\mathbb{R})$ be the $\X$-space associated to a seed. For a vertex $v \in V(\C^\vee)$, let $\bi=[v]=(I,I_0, \epsilon, d)$ be the underlying seed, and $\psi_v^x$ and $\mathrm{Trop}(\psi_v^x)$ the associated positive and tropical coordinates, respectively. Then we define a homeomorphism $\F_v: \X(\pos) \to \X(\trop)$ by the following commutative diagram: \[ \xymatrix{ \X(\pos) \ar[d]_{\F_v} \ar[r]^{\psi_v^x} & \mathbb{R}_{>0}^n \ar[d]^{\log} \\ \X(\trop) \ar[r]^{\mathrm{Trop}(\psi_v^x)} & \mathbb{R}^n \\ }. \] Fixing a vertex $v \in V(\C^\vee)$, we define the \emph{tropical compactification} by $\overline{\X}:= \X(\pos) \sqcup P\X(\trop)$, and endow it with the topology of the spherical compactification. Namely, a divergent sequence $(g_m)$ in $\X(\pos)$ converges to $[G] \in P\X(\trop)$ in $\overline{\X}$ if and only if $[\F_v(g_m)]$ converges to $[G]$ in $P\X(\trop)$. Similarly we can consider the tropical compactifications of the $\A$- and $\U$-spaces. \end{dfn} \begin{thm}[Le,~\cite{Le16}, Section 7]\label{Le} Let $p:\A \to \X$ be a cluster ensemble, and $\Z=\A$, $\U$ or $\X$. If we have $[\F_v(g_m)] \to [G]$ in $P\Z(\trop)$ for some $v \in V(\C^\vee)$, then we have $[\F_{v'}(g_m)] \to [G]$ in $P\Z(\trop)$ for all $v' \in V(\C^\vee)$. In particular the definition of the tropical compactification is independent of the choice of the vertex $v \in V(\C^\vee)$. \end{thm} \begin{cor} Let $p: \A \to \X$ be a cluster ensemble, and $\Z=\A$, $\U$ or $\X$.
Then the action of the cluster modular group on the positive real part $\Z(\pos)$ continuously extends to the tropical compactification $\overline{\Z}$. \end{cor} \begin{proof} We need to show that $\phi_*(g_m) \to \phi_*([G])$ in $\overline{\Z}$ for each mapping class $\phi \in \Gamma$ and each divergent sequence $(g_m)$ such that $g_m \to [G]$ in $\overline{\Z}$. Here the action on the left-hand side is given by a composition of a finite number of cluster transformations and a permutation, while the action on the right-hand side is given by its tropical limit. Then the assertion follows from \cref{Le}. \end{proof} Note that each tropical compactification is homeomorphic to a closed disk of an appropriate dimension. \section{Nielsen-Thurston types on cluster modular groups}\label{section: NT types} In this section we define three types of elements of cluster modular groups in analogy with the classical Nielsen-Thurston types (see \cref{comparison}). Recall that the cluster modular group acts on the cluster complex simplicially. \begin{dfn}[Nielsen-Thurston type]\label{dfn; NT types} Let $\bi$ be a seed, $\C=\C_{|\bi|}$ the corresponding cluster complex and $\Gamma=\Gamma_{|\bi|}$ the corresponding cluster modular group. An element $\phi \in \Gamma$ is called \begin{enumerate} \item \emph{periodic} if $\phi$ has finite order, \item \emph{cluster-reducible} if $\phi$ has a fixed point in the geometric realization $|\C|$ of the cluster complex, and \item \emph{cluster-pseudo-Anosov} (\emph{cluster-pA}) if no power of $\phi$ is cluster-reducible. \end{enumerate} \end{dfn} Recall that the cluster modular group acts on the tropical compactifications $\overline{\A}=\A(\pos) \sqcup P\A(\trop)$ and $\overline{\X}=\X(\pos) \sqcup P\X(\trop)$, which are closed disks of dimension $N$ and $n$, respectively. Hence Brouwer's fixed point theorem says that each mapping class has at least one fixed point on each of the tropical compactifications.
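To make such boundary fixed points concrete, here is a sketch for the seed of type $L_k$ ($k \geq 2$), tropicalizing the action $\phi^*(X_0, X_1)=(X_1(1+X_0)^k, X_0^{-1})$ recalled in \cref{example: periodic}; the eigenvalue $\lambda$ is our auxiliary notation, and we work up to the convention for the direction of the action (fixed projective classes agree for a map and its inverse):

```latex
% The tropicalized action on \X(\trop) \cong \mathbb{R}^2 is the PL map
%   F(x_0, x_1) = (x_1 + k\max(0, x_0),\; -x_0).
% Seek L = (x_0, x_1) with x_0 > 0 and F(L) = \lambda L for some \lambda > 0:
\[
  x_1 + k x_0 = \lambda x_0, \qquad -x_0 = \lambda x_1
  \quad\Longrightarrow\quad \lambda^2 - k\lambda + 1 = 0,
  \qquad \lambda = \frac{k+\sqrt{k^2-4}}{2}.
\]
% Thus L = (1, -1/\lambda) satisfies F(L) = \lambda L, so the projective class
% [L] \in P\X(\trop) is fixed by the induced boundary action, even though
% \phi has no fixed point in \X(\pos).
```

For $k=2$ this gives $\lambda = 1$ and the honestly fixed point $L=(1,-1)$, which one checks directly: $F(1,-1)=(-1+2,-1)=(1,-1)$.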
The following is the main theorem of the present paper, which is an analogue of the classical Nielsen-Thurston classification theory. \begin{thm}\label{thm; main thm} Let $\bi$ be a seed and $\phi \in \Gamma_{|\bi|}$ a mapping class. Then the following hold. \begin{enumerate} \item If the mapping class $\phi \in \Gamma$ is periodic, then it has fixed points in $\A(\pos)$ and $\X(\pos)$. \item If the mapping class $\phi \in \Gamma$ is cluster-reducible, then there exists a point $L \in \X(\trop)_+$ such that $\phi[L]=[L]$. \end{enumerate} If the seed $\bi$ is of \Teich type (see \cref{dfn; Teich type}), the following also hold: \begin{enumerate} \item[(1)$'$] If $\phi$ has a fixed point in $\A(\pos)$ or $\X(\pos)$, then $\phi$ is periodic. \item[(2)$'$] If there exists a point $L \in \X(\trop)_+$ such that $\phi[L]=[L]$, then $\phi$ is cluster-reducible. \end{enumerate} \end{thm} We prove the theorem in the following subsections. The asymptotic behavior of orbits of a certain type of cluster-pA classes on the tropical compactification of the $\A$-space will be discussed in \cref{sub: cluster Dehn twists}. \subsection{Periodic classes} Let us start by studying the fixed point property of periodic classes. Let $\Z=\A$ or $\X$. \begin{prop}\label{periodic} Let $\bi$ be a seed, and $\Gamma=\Gamma_{|\bi|}$ the associated cluster modular group. For any $\phi \in \Gamma$, consider the following conditions: \begin{enumerate} {\renewcommand{\labelenumi}{(\roman{enumi})} \item $\phi$ fixes a cell $C \in \C$ of finite type, \item $\phi$ is periodic, and \item $\phi$ has fixed points in $\Z(\pos)$}. \end{enumerate} Then we have $\mathrm{(i)} \Rightarrow (\mathrm{ii}) \Rightarrow (\mathrm{iii})$. Here a cell $C$ in the cluster complex is \emph{of finite type} if the set of supercells of $C$ is a finite set. \end{prop} \begin{rem} The converse assertion $\mathrm{(iii)} \Rightarrow (\mathrm{ii})$ holds under the condition (T1) on the seed. See \cref{condition(T1)}.
\end{rem} \begin{proof} $(\mathrm{i}) \Rightarrow (\mathrm{ii})$. Suppose we have $\phi(C)=C$ for some cell $C \in \C$ of finite type. Then from the definition, the set of supercells of $C$ is a finite set, and $\phi$ preserves this set. Since this set contains a maximal-dimensional cell and the cluster complex $\C$ is connected, the action of $\phi$ on $\C$ is determined by its action on this finite set. Hence $\phi$ has finite order. $(\mathrm{ii}) \Rightarrow (\mathrm{iii})$. The proof is purely topological. Assume that $\phi$ has finite order. By Brouwer's fixed point theorem, $\phi$ has a fixed point on the disk $\overline{\Z} \approx D^N$. We need to show that there exists a fixed point in the interior $\Z(\pos)$. Suppose $\phi$ has no fixed points in the interior. Then $\phi$ induces a homeomorphism $\tilde{\phi}$ on the sphere $S^N=D^N \slash \partial D^N$ obtained by collapsing the boundary to a point, and $\tilde\phi$ has no fixed points other than the point corresponding to the image of $\partial D^N$. Now we use the following theorem. \begin{thm}[Brown,~\cite{Brown82}, Theorem 5.1]\label{Brown} Let $X$ be a paracompact space of finite cohomological dimension, and $s$ a finite-order homeomorphism of $X$. If $H_{*}(\mathrm{Fix}(s^k); \mathbb{Z})$ is finitely generated for each $k$, then the Lefschetz number of $s$ equals the Euler characteristic of the fixed point set: \begin{equation} \mathrm{Lef}(s):=\sum_{i} \mathrm{Tr}(s: H_i(X) \to H_i(X)) =\chi(\mathrm{Fix}(s)). \nonumber \end{equation} \end{thm} Applying Brown's theorem to $X=S^N$ and $s=\tilde\phi$, we get a contradiction, since the Lefschetz number of $\tilde\phi$ is an even number in this case, while the Euler characteristic of a point is $1$. Indeed, the homology is non-trivial only for $i=0$ or $N$, and the trace equals $\pm 1$ on each of these homology groups. Hence $\phi$ has a fixed point in the interior $\Z(\pos)$.
\end{proof} To get the converse implication $(\mathrm{iii}) \Rightarrow (\mathrm{ii})$, we need a condition on the seed, which can be thought of as an algebraic formulation of the proper discontinuity of the action of the cluster modular group on positive spaces. \begin{prop}[Growth property $(\mathrm{T}1)$]\label{condition(T1)} Suppose that a seed $\bi$ satisfies the following condition. \begin{enumerate} \item[(T1)] For each vertex $v_0 \in V(\C^\vee)$, $g \in \Z(\pos)$ and a number $M>0$, there exists a number $B>0$ such that $\max_{\alpha \in v} |\log Z_\alpha (g)| \geq M$ for all vertices $v \in V(\C^\vee)$ such that $[v]=[v_0]$ and $d_{\C^\vee}(v, v_0) \geq B$. \end{enumerate} Then the conditions $(\mathrm{ii})$ and $(\mathrm{iii})$ in \cref{periodic} are equivalent. Here $d_{\C^\vee}$ denotes the graph metric on the 1-skeleton of $\C^\vee$. \end{prop} Roughly speaking, the condition (T1) says that the values of the cluster coordinates evaluated at a point $g$ diverge as we perform a sequence of mutations that increase the distance $d_{\C^\vee}$. \begin{proof} Let $\phi \in \Gamma_{|\bi|}$ be an element of infinite order. We need to show that $\phi$ has no fixed points in $\Z(\pos)$. It suffices to show that each orbit is divergent. Let $g \in \Z(\pos)$ and $K \subset \Z(\pos)$ a compact set. We claim that there exists a number $M$ such that $\phi^m(g) \notin K$ for all $m \geq M$. Fix a vertex $v_0 \in V(\C^\vee)$, and take a number $L>0$ so that $L > \max_{i=1,\dots,N}\max_{h \in K}|\log Z_i(h)|$. Note that since the 1-skeleton of $\C^\vee$ has valency $n$ at any vertex, the graph metric $d_{\C^\vee}$ is proper. Namely, the number of vertices $v$ such that $d_{\C^\vee}(v, v_0) \leq B$ is finite for any $B >0$. Hence, for the number $B>0$ given by the assumption (T1) applied to $v_0$, $g$ and the number $L$, there exists a number $M$ such that $d_{\C^\vee}(\phi^{-m}(v_0), v_0) \geq B$ for all $m \geq M$, since $\phi$ has infinite order. Also note that $[\phi^{-m}(v_0)]=[v_0]$ by \cref{action on cluster complex}.
Then we have \[ \max_{i=1, \dots, N}|\log Z_i(\phi^m(g))|= \max_{\alpha \in \phi^{-m}(v_0)} |\log Z_\alpha(g)| \geq L \] for all $m \geq M$, where $(Z_1, \dots, Z_N)$ is the coordinate system associated with the vertex $v_0$. Here we used the equivariance of the coordinates $Z_{\phi^{-1}(\alpha)}(g)=Z_\alpha(\phi(g))$. Hence we have $\phi^m(g) \not\in K$ for all $m\geq M$. \end{proof} \begin{prop}\label{growth} Assume that the cluster modular group $\Gamma_{|\bi|}$ acts on $\Z_{|\bi|}(\pos)$ properly discontinuously. Then the condition $(\mathrm{T1})$ holds. \end{prop} \begin{proof} Suppose that the condition (T1) does not hold. Then there exist a vertex $v_0 \in V(\C^\vee)$, a point $g \in \Z(\pos)$, a number $M>0$, and a sequence $(v_m) \subset V(\C^\vee)$ such that $[v_m]=[v_0]$, $d_{\C^\vee}(v_m, v_0) \geq m$ and $\max_{\alpha \in v_m} |\log Z_\alpha (g)| \leq M$. Take a mapping class $\psi_m \in \Gamma$ so that $\psi_m(v_m)=v_0$. This is possible since $[v_m]=[v_0]$. Then we have \[ \max_{i=1, \dots, N}|\log Z_i(\psi_m(g))|= \max_{\alpha \in \psi_m^{-1}(v_0)} |\log Z_\alpha(g)| = \max_{\alpha \in v_m} |\log Z_\alpha(g)| \leq M, \] which implies that there exists a compact set $K \subset \Z(\pos)$ such that $\psi_m(g) \in K$ for all $m$. Note that the mapping classes $(\psi_m)$ are distinct, since the vertices $(v_m)$ are distinct. Enlarging $K$ so that it contains $g$, we have $\psi_m(K)\cap K \neq \emptyset$ for all $m$; consequently the action is not properly discontinuous. \end{proof} We will verify the condition (T1) for a seed associated with a triangulated surface using \cref{growth} in \cref{Teich are Teich}, and for the simplest case $L_k$ ($k \geq 2$) of infinite type in \cref{examples}. \begin{ex}\label{example: periodic}\ {} \begin{enumerate} \item (Type $A_2$). Let ${\bi}$ be the seed of type $A_2$ and $\phi=(0\ 1)\circ \mu_0 \in \Gamma_{A_2}$ the generator. See \cref{example: cluster modular group}.
Recall that the two actions on the positive spaces $\A(\pos)$ and $\X(\pos)$ are described as follows: \begin{align*} \phi^*(A_0, A_1)&=\left(A_1,\frac{1+A_1}{A_0}\right), \\ \phi^*(X_0, X_1)&=(X_1(1+X_0), X_0^{-1}). \end{align*} The fixed points are given by $(A_0,A_1)=((1+\sqrt{5})/2, (1+\sqrt{5})/2)$ and $(X_0,X_1)=((1+\sqrt{5})/2, (-1+\sqrt{5})/2)$, respectively. \item (Type $L_k$ for $k \geq 2$). Let $\bi$ be the seed of type $L_k$ and $\phi=(0\ 1)\circ \mu_0 \in \Gamma_{L_k}$ the generator. See \cref{example: cluster modular group}. Recall that the two actions on the positive spaces $\A(\pos)$ and $\X(\pos)$ are described as follows: \begin{align*} \phi^*(A_0, A_1)&=\left(A_1,\frac{1+A_1^k}{A_0}\right), \\ \phi^*(X_0, X_1)&=(X_1(1+X_0)^k, X_0^{-1}). \end{align*} The fixed point equations have no positive solutions. Indeed, the $\X$-equation implies $X_0^2=(1+X_0)^k$, which has no positive solution since $(1+X_0)^k \geq 1+kX_0+\binom{k}{2}X_0^2 > X_0^2$ for $X_0>0$ and $k \geq 2$. Similarly for the $\A$-variables. Hence we can conclude that $\phi$ has infinite order by \cref{periodic}. In particular we have $\Gamma_{L_k} \cong \mathbb{Z}$. \end{enumerate} \end{ex} \subsection{Cluster-reducible classes} In this subsection, we study the fixed point property of cluster-reducible classes. Before proceeding, let us mention the basic idea behind the constructions. Consider the seed associated with an ideal triangulation of a marked hyperbolic surface $F$. Here we assume $F$ is a closed surface with exactly one puncture or a compact surface without punctures (with marked points on its boundary). Then the vertices of the cluster complex $\C$ are represented by \emph{ideal arcs} on $F$. See \cref{Arc}. In particular each point in the geometric realization $|\C|$ of the cluster complex is represented by the projective class of a linear combination of ideal arcs.
On the other hand, the Fock-Goncharov boundary $P\X(\trop)$, which is identified with the space of \emph{measured laminations} on $F$, contains all such projective classes. Hence the cluster complex is embedded into the Fock-Goncharov boundary of the $\X$-space in this case. In \cref{subsub: redX} we show that this picture is valid for a general seed satisfying some conditions. See \cref{isomorphism}. \subsubsection{Fixed points in the tropical $\X$-space}\label{subsub: redX} \begin{dfn}[the non-negative part] Let $\bi$ be a seed. For each vertex $v \in V(\C^\vee)$, let $K_{v}:=\{ L \in \X(\trop) \mid L \geq 0\text{ in } v\}$ be a cone in the tropical space, where $L \geq 0$ in $v$ means that $x_\alpha(L)\geq 0$ for all $\alpha \in v$. Then the union $\X(\trop)_+:= \bigcup_{v \in V(\C^\vee)} K_v \subseteq \X(\trop)$ is called the \emph{non-negative part} of $\X(\trop)$. \end{dfn} Let us define a $\Gamma$-equivariant map $\Psi: \C \to P\X(\trop)_+$ as follows. The construction contains reformulations of some conjectures stated in~\cite{FG09}, Section 5, for later use. For each maximal simplex $S$ of $\bS$, let $[S]$ denote the image of $S$ under the projection $\bS \to \C$, and let $v \in V(\C^\vee)$ be the dual vertex of $[S]$. By using the barycentric coordinates of the simplex $S$, we get an identification $S\cong P\mathbb{R}^{n}_{\geq 0}$. Then we have the following map: \[\xymatrix{ \Psi_v: S \cong P\mathbb{R}^{n}_{\geq 0} \ar[r]^{\xi_v^{-1}} & PK_v \subseteq P\X(\trop)_+, } \] where $\xi_v:=\Trop(\psi_v^x): \X(\trop) \to \mathbb{R}^n $ is the tropical coordinate associated with the vertex $v$, whose restriction gives a bijection $K_v \to \mathbb{R}_{\geq 0}^n$. Since the tropical $\X$-transformation associated to a mutation $\mu_k: v \to v'$ preserves the set $\{ x_k=0\}$ and the dual graph $\bS^{\vee}$ is a tree, these maps combine to give a map \[ \Psi:=\bigcup_{v \in V(\C^\vee)} \Psi_v: \bS \to P\X(\trop)_+, \] which is clearly surjective.
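The key compatibility used here, namely that the tropical $\X$-transformation of $\mu_k$ preserves the hyperplane $\{x_k = 0\}$, can be checked by hand in rank $2$; the following sketch uses the mutation rule $\mu_0^* X_0 = X_0^{-1}$, $\mu_0^* X_1 = X_1(1+X_0)^k$ of the $L_k$ seed (\cref{example: periodic}):

```latex
\[
  \Trop(\mu_0^x)(x_0, x_1) = \left( -x_0,\ x_1 + k \max(0, x_0) \right).
\]
% On the hyperplane {x_0 = 0} this map sends (0, x_1) to (0, x_1), i.e. it
% restricts to the identity there. Hence the maps on the two simplices glued
% along the facet dual to the direction 0 agree on their common face.
```

This is the reason the locally defined maps assemble into a single map on all of $\bS$.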
Assume we have $S'=\gamma(S)$ for some $\gamma \in \Delta$. Then from the definition of $\Delta$, $\gamma$ preserves all the tropical $\X$-coordinates. Hence we have $\Psi_{v'}(\gamma x)=\Psi_v(x)$ for all $x \in S$, and the map descends to \[ \Psi : \C=\bS \slash \Delta \to P\X(\trop)_+. \] \begin{lem}\label{Psi equivariance} The surjective map $\Psi$ defined above is $\Gamma$-equivariant. \end{lem} \begin{proof} It follows from the following commutative diagram for $\phi \in \Gamma$: \[ \xymatrix{ S \ar[d]_{\phi} \ar[r]^{\cong} & P\mathbb{R}^n_{ \geq 0} \ar[d]^{\phi^*} \ar[r]^{\xi_v^{-1}} & PK_v \ar[d]^{\phi^x} \\ \phi(S) \ar[r]^{\cong} & P\mathbb{R}^n_{ \geq 0} \ar[r]^{\xi_{\phi^{-1}(v)}^{-1}} & PK_{\phi^{-1}(v)} \\ } \] Here $v$ is the dual vertex of $[S]$, $\phi^*$ is the permutation of coordinates induced by $\phi$, and $\phi^x$ is the induced tropical $\X$-transformation on $\X(\trop)$. \end{proof} Next we introduce a sufficient condition for $\Psi$ to be injective. For a point $L \in \X(\trop)$, a cluster $C$ in $\C$ is called a \emph{non-negative cluster} for $L$ if $L \in K_v$, where $v \in V(\C^\vee)$ is the dual vertex of $C$. The subset $Z(L):=\{ \alpha\in V(C) \mid \xi_v(L; \alpha)=0\} \subset V(C)$ is called the \emph{zero subcluster} of $L$. Here $\xi_v(-; \alpha)$ denotes the component of the chart $\xi_v$ corresponding to the vertex $\alpha$. Since the mutation directed to a vertex $k \in Z(L)$ preserves the signs of coordinates, the cluster $\mu_k(C)$ inherits the zero subcluster $Z(L)$. Two non-negative clusters $C$ and $C'$ are called \emph{$Z(L)$-equivalent} if they are connected by a finite sequence of mutations directed to the vertices in $Z(L)$. \begin{lem}\label{isomorphism} Assume that a seed $\bi$ satisfies the following condition: \begin{enumerate} \item[(T2)] For each $L \in \X(\trop)_+$, any two non-negative clusters for $L$ are $Z(L)$-equivalent. \end{enumerate} Then the map $\Psi: \C \to P\X(\trop)_+$ is a $\Gamma$-equivariant isomorphism.
\end{lem} Compare the condition $(\mathrm{T2})$ with Conjecture 5.10 in~\cite{FG09}. \begin{proof} We need to prove the injectivity of $\Psi$. Note that $\Psi$ is injective on each simplex. Also note that, by the construction of the map $\Psi$, a point $[L]=\Psi(x)$ ($x \in C$) satisfies $Z(L) \neq \emptyset$ if and only if $x$ is contained in the boundary of the simplex $C$. Assume that $C$, $C'$ are distinct clusters, $x \in C$, $x' \in C'$ and $\Psi(x)=\Psi(x')=: [L] \in P\X(\trop)_+$. If $x$ lies in the interior of the cluster $C$, then $Z(L)= \emptyset$, and the condition (T2) implies that $C=C'$, which is a contradiction. Hence $Z(L) \neq \emptyset$. Then the condition (T2) implies that $C'$ is $Z(L)$-equivalent to $C$. On the other hand, the point $x$ (resp. $x'$) must be contained in the face of $C$ (resp. $C'$) spanned by the vertices in $Z(L)$, which is a common face of $C$ and $C'$. In particular $x$ and $x'$ are contained in the same simplex, and hence $x=x'$ by the injectivity of $\Psi$ on each simplex. Therefore $\Psi$ is injective. \end{proof} \begin{ex}Seeds of finite type satisfy the equivalence property $(\mathrm{T2})$; see~\cite{FG09}, Theorem 5.8. \end{ex} \begin{prop}[fixed points in $\X$-space]\label{redX} Let $\bi$ be a seed, and $\phi \in \Gamma_{|\bi|}$ a mapping class. Then the following hold. \begin{enumerate} \item If $\phi$ is cluster-reducible, then there is a point $L \in \X(\trop)_+ \backslash \{0\}$ such that $\phi[L]=[L]$. \item If $\bi$ satisfies the condition $(\mathrm{T2})$, then the converse of $(1)$ is also true. \end{enumerate} \end{prop} \begin{proof} The assertions follow from \cref{Psi equivariance} and \cref{isomorphism}, respectively. \end{proof} \begin{dfn}[seeds of definite type] A seed $\bi$ is \emph{of definite type} if $\X_{|\bi|}(\trop)_+ = \X_{|\bi|}(\trop)$. \end{dfn} \begin{prop}\label{prop: definite} Assume that a seed $\bi$ satisfies the equivalence property $(\mathrm{T2})$. Then $\bi$ is of definite type if and only if it is of finite type.
\end{prop} \begin{proof} The fact that finite type seeds are definite is due to Fock-Goncharov~\cite{FG09}. Let us prove the converse implication. Assume that $\bi$ is of definite type. Then by \cref{isomorphism} we have a homeomorphism $\Psi: \C \to P\X(\trop)$, and the latter is homeomorphic to a sphere. In particular $\C$ is a compact simplicial complex, hence it possesses only finitely many cells. \end{proof} \begin{rem} The conclusion part in \cref{prop: definite} is Conjecture 5.7 in~\cite{FG09}. \end{rem} \begin{dfn}[seeds of \Teich type]\label{dfn; Teich type} A seed $\bi$ is of \Teich type if it satisfies the growth property $(\mathrm{T1})$ and the equivalence property $(\mathrm{T2})$, defined in \cref{condition(T1)} and \cref{isomorphism}, respectively. \end{dfn} \begin{ex}\ {} \begin{enumerate} \item Seeds of finite type are of \Teich type. See~\cite{FG09}, Theorem 5.8. \item Seeds associated with triangulated surfaces are of \Teich type. See \cref{Teich are Teich}. \item The seed of type $L_k$ ($k \geq 1$) is of \Teich type. See \cref{examples}. \end{enumerate} \end{ex} \begin{cor}\label{cor; cluster-pA} Let $\bi$ be a seed of \Teich type, and $\phi \in \Gamma$ a cluster-pA class. Then there exists a point $L \in \X(\trop) \backslash \X(\trop)_+$ such that $\phi[L]=[L]$. \end{cor} \begin{proof} Since the tropical compactification $\overline{\X}$ is a closed disk, Brouwer's fixed point theorem says that there exists a point $x \in \overline{\X}$ such that $\phi(x)=x$. If $x \in \X(\pos)$, then by assumption $\phi$ has finite order, which is a contradiction. If $x \in P\X(\trop)_+$, then by \cref{redX}(2), $\phi$ is cluster-reducible, which is a contradiction. Hence $x \in P(\X(\trop) \backslash \X(\trop)_+)$.
\end{proof} \subsubsection{Cluster reduction and fixed points in the tropical $\A$-space}\label{subsub: redA} Here we define an operation, called the \emph{cluster reduction}, which produces a new seed from a given seed and a certain set of vertices of the cluster complex. At the end of Section \ref{subsub: redA} we study the fixed point property of a cluster-reducible class on the tropical $\A$-space. Let $\{\alpha_1, \dots, \alpha_k\} \subset V(\C)$ be a subset of vertices, which is contained in a cluster. \begin{lem}[the cluster reduction of a seed] Take a cluster containing $\{\alpha_1, \dots, \alpha_k\}$. Let ${\bi}=(I, I_0, \epsilon, d)$ be the underlying seed and $i_j:=[\alpha_j] \in I$ the corresponding vertex for $j=1,\dots, k$ under the projection $[\ ]: \{\text{clusters}\} \to \{\text{seeds}\}$ (see \cref{cluster complex}). Then we define a new seed by ${\bi}':= (I, I_0 \sqcup \{i_1, \dots, i_k\}, \epsilon, d)$, namely, by ``freezing'' the vertices $\{i_1, \dots, i_k\}$. Then the corresponding cluster complex $\C':=\C_{|\bi'|}$ is naturally identified with the link of $\{\alpha_1, \dots, \alpha_k\}$ in $\C$. In particular the equivalence class $|\bi'|$ does not depend on the choice of the cluster containing $\{\alpha_1, \dots, \alpha_k\}$. \end{lem} \begin{proof} Let $C \subset \C$ be a cluster containing $\{\alpha_1, \dots, \alpha_k\}$, and $\bi=(I,I_0,\epsilon,d)$ the corresponding seed. For a mutation directed to a mutable vertex $l \in I-(I_0 \sqcup \{i_1, \dots, i_k\})$, the cluster $C'=\mu_l(C)$ also contains $\{\alpha_1, \dots, \alpha_k\}$. Conversely, any cluster $C'$ containing $\{\alpha_1, \dots, \alpha_k\}$ is obtained by such a sequence of mutations. Hence each cluster in the cluster complex $\C'$ has the form $C \backslash \{\alpha_1, \dots, \alpha_k\}$, for some cluster $C \subset \C$ containing $\{\alpha_1, \dots, \alpha_k\}$.
\end{proof} We say that the corresponding object, such as the cluster ensemble $p_{|\bi'|}:\A_{|\bi'|} \to \X_{|\bi'|}$ or the cluster modular group $\Gamma_{|\bi'|}$, is obtained by the \emph{cluster reduction} with respect to the invariant set $\{\alpha_1, \dots, \alpha_k\}$ from the original one. Next we show that some power of a cluster-reducible class induces a new mapping class by the cluster reduction. \begin{lem} Let $\bi$ be a seed and $\phi \in \Gamma_{|\bi|}$ a mapping class. Then $\phi$ is cluster-reducible if and only if it has an invariant set of vertices $\{\alpha_1, \dots, \alpha_k\} \subset V(\C)$ contained in a cluster. \end{lem} \begin{proof} Suppose $\phi$ is cluster-reducible. Then $\phi$ has a fixed point $c \in |\C|$. Since the action is simplicial, $\phi$ preserves the cell of $\C$ containing the point $c$ in its interior. Hence $\phi$ permutes the vertices of this cell, which gives an invariant set contained in a cluster. The converse is also true, since $\phi$ fixes the point given by the barycenter of the vertices $\{\alpha_1, \dots, \alpha_k\}$. \end{proof} \begin{dfn}[proper reducible classes]\label{def: proper reducible} A mapping class $\phi \in \Gamma_{|\bi|}$ is called \emph{proper reducible} if it has a fixed point in $V(\C)$. \end{dfn} \begin{lem} Let $\phi \in \Gamma_{|\bi|}$ be a mapping class. \begin{enumerate} \item If $\phi$ is proper reducible, then $\phi$ is cluster-reducible. \item If $\phi$ is cluster-reducible, then some power of $\phi$ is proper reducible. \end{enumerate} \end{lem} \begin{proof} Clear from the previous lemma. \end{proof} \begin{lem}[the cluster reduction of a proper reducible class] Let $\phi \in \Gamma_{|\bi|}$ be a proper reducible class, and $\{\alpha_1, \dots, \alpha_k\}$ a fixed point set of vertices contained in a cluster. Then $\phi$ induces a new mapping class $\phi' \in \Gamma_{|\bi'|}$ in the cluster modular group obtained by the cluster reduction with respect to $\{\alpha_1, \dots, \alpha_k\}$.
\end{lem} \begin{proof} The identification of $\C'$ with the link of the invariant set $\{\alpha_1, \dots, \alpha_k\}$ in $\C$ induces a group isomorphism \[ \Gamma_{|\bi'|} \cong \{ \psi \in \Gamma_{|\bi|} \mid \psi(\C')=\C'\}, \] and the right-hand side contains $\phi$. Let $\phi' \in \Gamma_{|\bi'|}$ be the corresponding element. Note that $\phi$ fixes all frozen vertices in $\bi'$, since it is proper reducible. \end{proof} We say that the mapping class $\phi'$ is obtained by the \emph{cluster reduction} with respect to the fixed point set $\{\alpha_1, \dots, \alpha_k\}$ from $\phi$. \begin{lem} A proper reducible class of infinite order induces a cluster-pA class in the cluster modular group corresponding to the seed obtained by a finite number of cluster reductions. \end{lem} \begin{proof} Clear from the definition of the cluster-pA classes. \end{proof} \begin{ex}[Type $X_7$] Let $\bi=(\{0,1,2,3,4,5,6\}, \emptyset,\epsilon)$ be the skew-symmetric seed defined by the quiver described in \cref{fig; X7}. We call this seed \emph{type $X_7$}, following~\cite{Derksen-Owen}. See also~\cite{FeST12}. The mapping class $\phi_1:=(1\ 2)\circ \mu_1 \in \Gamma_{X_7}$ is proper reducible and fixes the vertices $A_i \in V(\C)$ ($i=0,3,4,5,6$), where $A_i$ denotes the $i$-th coordinate in the initial cluster. The cluster reduction with respect to the invariant set $\{A_0,A_3,A_4,A_5,A_6\}$ produces a seed $\bi'=(\{0,1,2,3,4,5,6\}, \{0,3,4,5,6\},\epsilon)$ of type $L_2$, except for some non-trivial coefficients. The cluster complex $\C_{|\bi'|}$ is identified with the link of $\{A_0,A_3,A_4,A_5,A_6\}$, which is the line of infinite length. The cluster reduction $\phi'$ of $\phi_1$ is cluster-pA, and acts on this line by the shift of length $1$. Compare with \cref{example: cluster complex}. The mapping class $\psi_1:=(0\ 1\ 2)(3\ 4\ 5\ 6)\circ \mu_2\mu_1\mu_0 \in \Gamma_{X_7}$ is cluster-reducible, since it has an invariant set $\{A_3,A_4,A_5,A_6\}$ contained in the initial cluster.
Note that the power $\psi_1^2$ is proper reducible, since it fixes the vertex $A_0$. \end{ex} \begin{figure} \unitlength 0.5mm \begin{center} \[ \begin{xy} 0;<0.5pt,0pt>:<0pt,-0.5pt>:: (140,100) *+[o][F]{0} ="0", (0,50) *+[o][F]{1} ="1", (75,0) *+[o][F]{2} ="2", (225,0) *+[o][F]{3} ="3", (280,50) *+[o][F]{4} ="4", (175,240) *+[o][F]{5} ="5", (105,240) *+[o][F]{6} ="6", "0", {\ar"1"}, "2", {\ar"0"}, "0", {\ar"3"}, "4", {\ar"0"}, "0", {\ar"5"}, "6", {\ar"0"}, "1", {\ar|*+{\scriptstyle 2}"2"}, "3", {\ar|*+{\scriptstyle 2}"4"}, "5", {\ar|*+{\scriptstyle 2}"6"}, \end{xy} \] \caption{The quiver $X_7$} \label{fig; X7} \end{center} \end{figure} \begin{lem}\label{embedding} Let $\bi$ be a seed, and $\bi'$ the seed obtained by a cluster reduction. Let $\psi_\A : \G_{|\bi|} \to \Pos(\mathbb{R})$ and $\psi'_\A : \G_{|\bi'|} \to \Pos(\mathbb{R})$ be the positive $\A$-spaces associated with the seeds $\bi$ and $\bi'$, respectively. Then there is a natural morphism of the positive spaces $(\iota, q): \psi'_\A \to \psi_\A$ which induces $\Gamma_{|\bi'|}$-equivariant homeomorphisms $\A'(\pos) \cong \A(\pos)$ and $\A'(\trop) \cong \A(\trop)$. \end{lem} \begin{proof} Note that the only difference between the two positive $\A$-spaces is the admissible directions of mutations. The functor $\iota: \G_{|\bi'|} \to \G_{|\bi|}$ between the coordinate groupoids is defined on objects by $(I, I_0 \sqcup \{i_1, \dots, i_k\}, \epsilon, d) \mapsto (I, I_0, \epsilon, d)$, sending the morphisms naturally. The identity maps $\A_{\bi'}(k)=\A_\bi(k)$ for the $\A$-tori combine to give a natural transformation $q: \psi'_\A \Rightarrow \psi_\A \circ \iota$. The latter assertion is clear. \end{proof} \begin{rem} We have no natural embedding of the $\X$-space in general, since $\X$-coordinates assigned to the vertices in $\{i_1, \dots, i_k\}$ may be changed by cluster $\X$-transformations directed to the vertices in $I-(I_0 \sqcup \{i_1, \dots, i_k\})$.
\end{rem} \begin{dfn} A tropical point $G \in \A(\trop)$ is said to be \emph{cluster-filling} if it satisfies $a_\alpha(G)\neq0$ for all $\alpha \in V(\C)$. \end{dfn} Note that the definition depends only on the projective class of $G$. \begin{prop}[fixed points in $\A$-space]\label{redA} Let $\bi$ be a seed satisfying the condition $(\mathrm{T1})$, and $\Gamma=\Gamma_{|\bi|}$ the corresponding cluster modular group. For a proper reducible class $\phi \in \Gamma$ of infinite order, there exists a non-cluster-filling point $G \in \A(\trop)$ such that $\phi[G]=[G]$. \end{prop} \begin{proof} Let $\{\alpha_1,\dots, \alpha_k\}$ be a fixed point set of $\phi$ contained in a cluster, and $\phi' \in \Gamma_{|\bi'|}$ the corresponding cluster reduction. Since the tropical compactification $\overline{\A'}$ is a closed disk, $\phi'$ has a fixed point $x' \in \overline{\A'}$ by Brouwer's fixed point theorem. By \cref{condition(T1)}, $x'$ must be a point on the boundary $P\A'(\trop)$. Then $\phi$ fixes the image $x \in \overline{\A}$ of $x'$ under the homeomorphism given by \cref{embedding}. \end{proof} \subsection{Cluster-pA classes of special type: cluster Dehn twists}\label{sub: cluster Dehn twists} Using the cluster reduction, we define a special type of cluster-pA mapping classes, called \emph{cluster Dehn twists}, and prove that they have an asymptotic behavior of orbits on the tropical compactification of the $\A$-space analogous to that of Dehn twists in the mapping class group. \begin{dfn}[cluster Dehn twists] Let $\bi$ be a skew-symmetric seed of mutable rank $n$. A cluster-reducible class $\phi \in \Gamma_{|\bi|}$ is said to be \emph{cluster-reducible to rank $m$} if the following conditions hold. \begin{enumerate} \item There exists a number $l \in \mathbb{Z}$ such that $\psi=\phi^l$ is proper reducible.
\item The mapping class $\psi$ induces a mapping class in the cluster modular group associated with the seed of mutable rank $m$ obtained by the cluster reduction with respect to a fixed point set $\{\alpha_1, \dots, \alpha_{n-m}\}$ of $\psi$. \end{enumerate} A cluster-reducible class $\phi$ of infinite order is called a \emph{cluster Dehn twist} if it is cluster-reducible to rank 2. Namely, there exists a number $l \in \mathbb{Z}$ and a subset $\{\alpha_1, \dots, \alpha_{n-2}\} \subset V(\C_{|\bi|})$ of vertices which is fixed by $\phi^l$ and contained in a cluster, where $n$ is the mutable rank of $\bi$. \end{dfn} A skew-symmetric seed is said to be \emph{connected} if the corresponding quiver is connected. \begin{lem}\label{lem: cluster Dehn twists} Let $\bi$ be a skew-symmetric connected seed of mutable rank $n \geq 3$. Suppose that a proper reducible class $\psi \in \Gamma_{|\bi|}$ has infinite order and there exists a subset $\{\alpha_1, \dots, \alpha_{n-2}\} \subset V(\C_{|\bi|})$ of vertices which is fixed by $\psi$ and contained in a cluster. Then the action of the cluster reduction $\psi' \in \Gamma_{|\bi'|}$ with respect to the invariant set $\{\alpha_1, \dots, \alpha_{n-2}\}$ on the $\A$-space is represented as follows: \begin{equation}\label{eq: cluster Dehn twists} (\psi')^*(A_0, A_1)=\left(A_1, \frac{C+A_1^2}{A_0} \right). \end{equation} Here $(A_0,A_1)$ denotes the remaining cluster coordinates of the $\A$-space under the cluster reduction, and $C$ is a product of frozen variables. \end{lem} \begin{proof} Take a cluster containing $\{\alpha_1, \dots, \alpha_{n-2}\}$. Let ${\bi}=(I, I_0, \epsilon)$ be the corresponding seed, and $i_j:=[\alpha_j] \in I$ the corresponding vertex for $j=1,\dots, n-2$. Then the cluster reduction produces a new seed ${\bi}':= (I, I_0 \sqcup \{i_1, \dots, i_{n-2}\}, \epsilon)$, whose mutable rank is 2.
Label the vertices so that $I-I'_0=\{0,1\}$ and $I'_0=\{2, \dots, N-1\}$, where $I'_0:=I_0 \sqcup \{i_1, \dots, i_{n-2}\}$ and $N$ is the rank of the seed $\bi$. Note that $\psi=(0\ 1)\circ \mu_0 \in \Gamma_{|\bi|}$. Relabeling the vertices $0$ and $1$ if necessary, we may assume that $k:=\epsilon_{01} >0$. We claim that $k=2$. Since $\bi$ is connected, there exists a vertex $i \in I'_0$ such that $a:=\epsilon_{i0}\neq 0$ or $b:=\epsilon_{1i}\neq 0$. Since $\psi$ preserves the quiver, we compute that $a=b$ and $b-ak=-a$; substituting $b=a$ gives $2a=ak$ with $a\neq 0$, hence $k=2$. Then from the definition of the cluster $\A$-transformation we have \[ \psi^*(A_0, A_1)=\left(A_1, \frac{\prod_{i \in I'_0}A_i^{\epsilon_{i0}}+A_1^2}{A_0} \right) \] and $\psi^*(A_i)=A_i$ for all $i \in I'_0$, as desired. \end{proof} \begin{thm}\label{thm; cluster Dehn twists} Let $\bi$ be a skew-symmetric connected seed of mutable rank $n \geq 3$ or the seed of type $L_2$. Then for each cluster Dehn twist $\phi \in \Gamma_{|\bi|}$, there exists a cluster-filling point $[G] \in P\A(\trop)$ such that we have \[ \lim_{m \to \infty}\phi^{\pm m}(g)=[G] \text{\ \ in $\overline{\A}$} \] for all $g \in \A(\pos)$. \end{thm} \begin{proof} Assume that $n \geq 3$. There exists a number $l$ such that $\psi:=\phi^l$ satisfies the assumption of \cref{lem: cluster Dehn twists}. Let us consider the following recurrence relation: \[ \begin{cases} a_0^{(m)}&=a_1^{(m-1)}, \\ a_1^{(m)}&=-a_0^{(m-1)}+\log(C+e^{2a_1^{(m-1)}}), \end{cases} \] where $C>0$ is a positive constant. This is the log-dynamics of \cref{eq: cluster Dehn twists}. Then one can directly compute that $a_0^{(m)}$ and $a_1^{(m)}$ tend to infinity and $a_0^{(m)}\slash a_1^{(m)} \to 1$ as $m \to \infty$ for arbitrary initial real values. Hence we conclude that $\psi^m(g) \to [G]$ in $\overline{\A}$ for all $g \in \A(\pos)$, where $G \in \A(\trop)$ is the point whose coordinates are $a_0=a_1=1$, $a_i=0$ for all $i \in I'_0$. The proof for the negative direction is similar.
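The asymptotics used here can be checked by the following elementary estimate, which we sketch; we only use \cref{eq: cluster Dehn twists}. Write $x_m$ for the value of $\log A_1$ after $m$ applications of \cref{eq: cluster Dehn twists}, so that $\log A_0$ takes the value $x_{m-1}$ and
\[
x_m = \log\bigl(C+e^{2x_{m-1}}\bigr) - x_{m-2} = 2x_{m-1}-x_{m-2}+\log\bigl(1+Ce^{-2x_{m-1}}\bigr).
\]
Hence the increments $d_m:=x_m-x_{m-1}$ satisfy $d_m=d_{m-1}+\log(1+Ce^{-2x_{m-1}})>d_{m-1}$, so they are strictly increasing. If no $d_m$ were positive, the sequence $(x_m)$ would be non-increasing, hence bounded above by some $B$, and then each step would increase $d_m$ by at least $\log(1+Ce^{-2B})>0$, forcing $d_m\to+\infty$ and contradicting $d_m\leq 0$. Thus $d_m\geq\delta$ for some $\delta>0$ and all large $m$, so $x_m\to+\infty$ at least linearly; the correction terms $\log(1+Ce^{-2x_{m-1}})$ are then summable, and $d_m$ converges to some finite $\beta\geq\delta$. Therefore $x_m\sim\beta m$, both coordinates tend to $+\infty$, and
\[
\frac{\log A_0}{\log A_1}=\frac{x_{m-1}}{x_m}\longrightarrow 1.
\]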
The generator of $\Gamma_{L_2}$, which is cluster-pA, also satisfies the desired property. \end{proof} \begin{ex}[Dehn twists in the mapping class group] Let $F=F_g^s$ be a hyperbolic surface with $s \geq 2$. For an essential non-separating simple closed curve $C$, we denote the right-handed Dehn twist along $C$ by $t_C \in MC(F)$. Consider an annular neighborhood $\mathcal{N}(C)$ of $C$, and slide two of the punctures so that exactly one puncture lies on each boundary component of $\mathcal{N}(C)$. Let $\Delta$ be an ideal triangulation obtained by gluing the ideal triangulation of $\mathcal{N}(C)$ shown in \cref{fig; Dehn} and an arbitrary ideal triangulation of $F \setminus \mathcal{N}(C)$. Such a triangulation is given in~\cite{Kash01}. Then the Dehn twist is represented as $t_C=(0\ 1)\circ \mu_0$, hence it is a cluster Dehn twist with $l=1$. Its action on the $\A$-space is represented as \[ t_C^*(A_0, A_1, A_2, A_3)=\left(A_1,\frac{A_2A_3+A_1^2}{A_0}, A_2, A_3\right). \] \end{ex} \begin{figure} \[ \begin{tikzpicture} \draw(0,0) circle(0.5cm) node[midway,right]{2}; \draw(0,0) circle(2cm); \draw(0,0) circle(1cm)[thick]; \path (0,-1.2) node[circle]{$C$}; \draw (0.5,0) -- (2,0) node[midway,above] {0}; \draw (0.5,0) to[out=30, in=0] (0,1.5) to[out=180, in=90] node[above] {1} (-1.5,0) to[out=270, in=180] (0,-1.5) to[out=0, in=210] (2.0,0) ; \path (2,0) node[right]{3}; \end{tikzpicture} \] \caption{ideal triangulation of $\mathcal{N}(C)$} \label{fig; Dehn} \end{figure} \begin{ex}[Type $X_7$] Let us consider the seed of \emph{type $X_7$}. The mapping class $\phi_1:=(1\ 2)\circ \mu_1 \in \Gamma_{X_7}$ is a cluster Dehn twist, whose action on the $\A$-space is represented as \[ \phi_1^*(A_0, A_1, A_2)=\left(A_0, A_2,\frac{A_0+A_2^2}{A_1}\right). \] \end{ex} For a general cluster-pA class, we only know that it has at least one fixed point on the tropical boundary $P(\X(\trop)\backslash \X(\trop)_+)$ from \cref{redX}.
It would be interesting to find an analogue of the \emph{pA-pair} for a cluster-pA class which satisfies an appropriate condition, as we find in the surface theory (see \cref{classical NT}). \section{Basic examples: seeds associated with triangulated surfaces}\label{section: Teich} In this section we describe an important family of examples strongly related to the \Teich theory, following~\cite{FST08}. A geometric description of the positive real parts and the tropical spaces associated with these seeds is presented in \cref{appendix: Teich}, which is used in \cref{Teich are Teich,comparison}. In \cref{Teich are Teich} we prove that these seeds are of \Teich type. In \cref{comparison}, we compare the Nielsen-Thurston types defined in \cref{section: NT types} with the classical Nielsen-Thurston classification of mapping classes. In these cases, the characterization of periodic classes described in \cref{periodic} is complete. We show that cluster-reducible classes are reducible. \subsection{Definition of the seed}\label{def: Teich seed} A \emph{marked hyperbolic surface} is a pair $(F, M)$, where $F=F_{g,b}^{p}$ is an oriented surface of genus $g$ with $p$ punctures and $b$ boundary components, and $M \subset \partial F$ is a finite subset such that each boundary component has at least one point in $M$; we require that $p+b>0$ and $6g-6+3b+3p+D>0$, where $D:=|M|$. The punctures together with the elements of $M$ are called \emph{marked points}. We denote a marked hyperbolic surface by $F_{g, \vec{\delta}}^p$, where $\vec{\delta}=(\delta_1, \dots, \delta_b)$ and $\delta_i:= |M \cap \partial_i|$ is the number of marked points on the $i$-th boundary component $\partial_i$. A connected component of $\partial F \backslash M$ is called a \emph{boundary segment}. We denote the set of boundary segments by $B(F)$, and fix a numbering on its elements. Note that $|B(F)|=D=\sum_{i=1}^b \delta_i$.
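For concreteness, the counts attached to a marked hyperbolic surface can be tabulated mechanically from the formulas recalled in this subsection and the next ($n=6g-6+3b+3p+D$ arcs in an ideal triangulation, and $N=n+D$ variables in the associated seed). The following Python sketch is our illustration only; the function names are hypothetical.

```python
# Counts attached to a marked hyperbolic surface F_{g, delta}^p, following the
# formulas in the text: b = number of boundary components, D = sum(delta) =
# number of boundary marked points = number of boundary segments.

def is_admissible(g, p, delta):
    """The condition p + b > 0 and 6g - 6 + 3b + 3p + D > 0 from the text."""
    b, D = len(delta), sum(delta)
    return p + b > 0 and 6 * g - 6 + 3 * b + 3 * p + D > 0

def seed_ranks(g, p, delta):
    """Return (N, n): the rank and the mutable rank of the seed bi_Delta."""
    b, D = len(delta), sum(delta)
    assert is_admissible(g, p, delta)
    n = 6 * g - 6 + 3 * b + 3 * p + D  # arcs in an ideal triangulation
    return n + D, n

print(seed_ranks(1, 1, ()))      # once-punctured torus: (3, 3)
print(seed_ranks(0, 0, (1, 1)))  # annulus, one marked point per boundary: (4, 2)
```

The second surface is the annular neighborhood $\mathcal{N}(C)$ of the Dehn twist example above: two arcs and two boundary segments give four cluster variables, two of them mutable.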
\begin{dfn}[the seed associated with an ideal triangulation]\ {} \begin{enumerate} \item An ideal arc on $F$ is an isotopy class of an embedded arc connecting marked points, which is not isotopic to a puncture, to a marked point, or to an arc connecting two consecutive marked points on a common boundary component. An ideal triangulation of $F$ is a family $\Delta=\{ \alpha_{i} \}_{i=1}^{n}$ of ideal arcs, such that each connected component of $F \backslash \bigcup \alpha_{i}$ is a triangle whose vertices are marked points of $F$. One can verify that such a triangulation exists and that $n=6g-6+3b+3p+D$ by considering the Euler characteristic. \item For an ideal triangulation $\Delta$ of $F$, we define a skew-symmetric seed $\bi_\Delta=(\Delta\cup B(F), B(F), \epsilon=\epsilon_\Delta)$ as follows. For an arc $\alpha$ of $\Delta$ which is contained in a self-folded triangle in $\Delta$ as in \cref{fig:self-folded}, let $\pi_\Delta(\alpha)$ be the loop enclosing the triangle. Otherwise we set $\pi_\Delta(\alpha):=\alpha$. Then for a non-self-folded triangle $\tau$ in $\Delta$, we define \[ \epsilon_{ij}^\tau:= \begin{cases} 1, & \text{if $\tau$ contains $\pi_\Delta(\alpha_i)$ and $\pi_\Delta(\alpha_j)$ on its boundary in the clockwise order,} \\ -1, & \text{if the same holds, with the anti-clockwise order,} \\ 0, & \text{otherwise.} \end{cases} \] Finally we define $\epsilon_{ij}:= \sum_\tau \epsilon_{ij}^\tau$, where the sum runs over non-self-folded triangles in $\Delta$.
\end{enumerate} \end{dfn} \begin{figure}[h] \begin{center} \setlength{\unitlength}{1.5pt} \begin{picture}(40,27)(-20,3) \thicklines \qbezier(0,0)(-30,30)(0,30) \qbezier(0,0)(30,30)(0,30) \put(0,0){\line(0,1){20}} \put(3,13){\makebox(0,0){$\alpha$}} \put(3,35){\makebox(0,0){$\pi_\Delta(\alpha)$}} \multiput(0,0)(0,20){2}{\circle*{2}} \end{picture} \end{center} \caption{Self-folded triangle} \label{fig:self-folded} \end{figure} For an arc $\alpha$ of an ideal triangulation $\Delta$ which is a diagonal of an immersed quadrilateral in $F$ (in this case the quadrilateral is unique), we get another ideal triangulation $\Delta^{'}:= (\Delta \backslash \{ \alpha \})\cup \{ \beta \}$ by replacing $\alpha$ by the other diagonal $\beta$ of the quadrilateral. We call this operation the \emph{flip} along the arc $\alpha$. One can directly check that the flip along the arc $\alpha_k$ corresponds to the mutation of the corresponding seed directed to the vertex $k$. \begin{thm}[\cite{Harer86,Hatcher91,Penner}]\label{Whitehead} Any two ideal triangulations of $F$ are connected by a finite sequence of flips and relabellings. \end{thm} Hence the equivalence class of the seed $\bi_\Delta$ is determined by the marked hyperbolic surface $F$, independently of the choice of the ideal triangulation. We denote the resulting cluster ensemble by $p=p_{F}: \A_{F} \to \X_{F}$, the cluster modular group by $\Gamma_{F}:=\Gamma_{|\bi_\Delta|}$, \emph{etc}. The rank and the mutable rank of the seed $\bi_\Delta$ are $N=|\Delta \cup B(F)|=6g-6+3b+3p+2D$ and $n=|\Delta|=6g-6+3b+3p+D$, respectively. Though a flip induces a mutation, not every mutation is realized by a flip. Indeed, an arc contained in a self-folded triangle cannot be flipped. Therefore we generalize the concept of ideal triangulations, following~\cite{FST08}.
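When $\Delta$ contains no self-folded triangles, the matrix $\epsilon_\Delta$ and the mutation rule are easy to implement. The following Python sketch is our illustration (the clockwise labelling of the two triangles of the annulus $\mathcal{N}(C)$ from \cref{fig; Dehn} is an assumption): it builds $\epsilon_\Delta$ from triangles listed as clockwise triples of side labels, and checks that mutation, like the flip, is involutive.

```python
# epsilon_Delta for an ideal triangulation without self-folded triangles:
# a triangle with sides (a, b, c) listed in clockwise order contributes +1 to
# eps[i][j] for each clockwise-adjacent pair (i, j), and -1 to eps[j][i].

def exchange_matrix(triangles, num_labels):
    eps = [[0] * num_labels for _ in range(num_labels)]
    for tri in triangles:
        for s in range(3):
            i, j = tri[s], tri[(s + 1) % 3]
            eps[i][j] += 1
            eps[j][i] -= 1
    return eps

def mutate(eps, k):
    """Matrix mutation in the direction k (standard skew-symmetric rule)."""
    n = len(eps)
    return [[-eps[i][j] if k in (i, j)
             else eps[i][j] + (abs(eps[i][k]) * eps[k][j]
                               + eps[i][k] * abs(eps[k][j])) // 2
             for j in range(n)]
            for i in range(n)]

# Annulus N(C): arcs 0, 1 and boundary segments 2, 3, glued from two triangles.
eps = exchange_matrix([(1, 0, 2), (1, 0, 3)], 4)
assert eps[1][0] == 2 and eps[0][2] == 1 and eps[0][3] == 1
assert all(eps[i][j] == -eps[j][i] for i in range(4) for j in range(4))
assert mutate(mutate(eps, 0), 0) == eps  # flipping twice returns the seed
```

With this matrix, the exchange relation in the direction of arc $0$ reads $A_0A_0'=A_1^2+A_2A_3$, matching the formula for the Dehn twist action in \cref{sub: cluster Dehn twists}.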
\begin{dfn}[tagged triangulations]\ {} \begin{enumerate} \item A tagged arc on $F$ is an ideal arc together with a label $\{\text{plain, notched}\}$ assigned to each of its ends, satisfying the following conditions: \begin{itemize} \item the arc does not cut out a once-punctured monogon as in \cref{fig:self-folded}, \item each end which is incident to a marked point on the boundary is labeled plain, and \item both ends of a loop are labeled in the same way. \end{itemize} The labels are called \emph{tags}. \item The \emph{tagged arc complex} $\Arc$ is the clique complex for an appropriate compatibility relation on the set of tagged arcs on $F$. Namely, the vertices are tagged arcs, and a collection $\{\alpha_1, \dots, \alpha_k\}$ of tagged arcs spans a $(k-1)$-simplex if and only if its members are mutually compatible. See~\cite{FST08} for the definition of the compatibility relation. The maximal simplices are called \emph{tagged triangulations} and the codimension 1 simplices are called \emph{tagged flips}. \end{enumerate} \end{dfn} Note that if the surface $F$ has no punctures, then each tagged triangulation has only plain tags. If $F$ has at least two punctures or it has non-empty boundary, then the tagged arc complex typically contains a cycle (which we call a \emph{$\diamondsuit$-cycle}) shown in the right of \cref{fig; diamond-cycle}. Here, by convention, the plain tags are omitted in the diagram while the notched tags are represented by the $\bowtie$ symbol. Compare with the ordinary arc complex, shown in the left of \cref{fig; diamond-cycle}. The compatibility relation implies that for a compatible set of tagged arcs and each puncture $a$, one of the following holds. \begin{enumerate} \item[(a)] All tags at the puncture $a$ are plain. \item[(b)] All tags at the puncture $a$ are notched. \item[(c)] The number of arcs incident to the puncture $a$ is at most two, and their tags at the puncture $a$ are different.
\end{enumerate} \begin{figure}[h] \unitlength 0.7mm \begin{subfigure}[b]{.495\linewidth} \centering {\begin{picture}(100,100)(-20,-20) \put(10,30){\line(4, 3){20}} \put(0,30){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \coordinate (F) at (0,2.5); \draw (A) to[out=90, in=190] (F) to[out=0, in=90] (A) ; \draw (A)--(B) ; \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (0,-1) node[above]{$\Delta_1^\circ$}; \end{tikzpicture} }} \put(30,65){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \draw (A)--(B); \draw (B)--(C); \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (0,5) node[below]{$\Delta_2^\circ=\Delta_4^\circ$}; \end{tikzpicture} }} \put(50,30){\line(-4,3){20}}\put(62,30){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \coordinate (F) at (0, 1.5); \draw (B)--(C) ; \draw (C) to[out=270, in=180] (F) to[out=0, in=270] (C); \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (0,-1) node[above]{$\Delta_3^\circ$}; \end{tikzpicture} }} \end{picture}} \end{subfigure} \begin{subfigure}[b]{.495\linewidth} \centering {\begin{picture}(100,100)(-20,-20) \put(10,30){\line(4, 3){20}} \put(0,30){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C);
\coordinate (D) at (-1,2); \coordinate (E) at (1,2); \draw (A) to[out=90, in=190] (B) ; \draw (A)--(B) ; \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (B) node[below]{$\bowtie$}; \path (0,-1) node[above]{$\Delta_1$}; \end{tikzpicture} }} \put(10,30){\line(4,-3){20}} \put(26,62){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \draw (A)--(B)node[midway,right]{$\alpha$}; \draw (B)--(C)node[midway,right]{$\beta$}; \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (-1,2) node[left]{$\Delta_4$}; \end{tikzpicture} }} \put(50,30){\line(-4,3){20}}\put(62,30){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \draw (B) to[out=10, in=270] (C); \draw (B)--(C) ; \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (B) node[above]{$\bowtie$}; \path (0,-1) node[above]{$\Delta_3$}; \end{tikzpicture} }} \put(50,30){\line(-4,-3){20}}\put(26,-3){\makebox(0,0)[cc]{ \begin{tikzpicture}[scale=0.5] \fill (0,0) circle(2pt) coordinate(A); \fill (0,2) circle(2pt) coordinate(B); \fill (0,4) circle(2pt) coordinate(C); \coordinate (D) at (-1,2); \coordinate (E) at (1,2); \draw (A)--(B) --(C); \draw (A) to[out=90, in=270] (D) to[out=90, in=270] (C); \draw (A) to[out=90, in=270] (E) to[out=90, in=270] (C); \path (B) node[above]{$\bowtie$}; \path (B) node[below]{$\bowtie$}; \path (-1,2) node[left]{$\Delta_2$}; \end{tikzpicture} }} \end{picture}} \end{subfigure} \caption{ $\diamondsuit$-cycle} \label{fig; diamond-cycle} \end{figure} \begin{dfn}[the seed
associated with a tagged triangulation]\ {} For a tagged triangulation $\Delta$, let $\Delta^{\circ}$ be an ideal triangulation obtained as follows: \begin{itemize} \item replace all tags at a puncture $a$ of type (b) by plain ones, and \item for each puncture $a$ of type (c), replace the arc $\alpha$ notched at $a$ (if any) by a loop enclosing $a$ and $\alpha$. \end{itemize} A tagged triangulation $\Delta$ whose tags are all plain is naturally identified with the corresponding ideal triangulation $\Delta^\circ$. For a tagged triangulation $\Delta$ with a fixed numbering on the member arcs, we define a skew-symmetric seed by ${\bi}_\Delta =(\Delta \cup B(F),B(F), \epsilon:= \epsilon_{\Delta^{\circ}})$. \end{dfn} Then we get a complete description of the cluster complex associated with the seed $\bi_\Delta$ in terms of tagged triangulations: \begin{thm}[Fomin-Shapiro-Thurston~\cite{FST08}, Proposition 7.10 and Theorem 7.11]\label{Arc}\ {} For a marked hyperbolic surface $F=\surf$, the tagged arc complex has exactly two connected components (all plain/all notched) if $F=F_{g,0}^1$, and is connected otherwise. The cluster complex associated with the seed $\bi_\Delta$ is naturally identified with a connected component of the tagged arc complex $\Arc$ of the surface $F$. Namely, \[ \begin{cases} \C_{F} \cong \mathrm{Arc}(F) & \text{if $F=F_{g,0}^1$},\\ \C_{F} \cong \Arc & \text{otherwise}. \end{cases} \] \end{thm} The coordinate groupoid of the seed $\bi_\Delta$ is denoted by $\mathcal{M}^{\bowtie}(F)$, and is called the \emph{tagged modular groupoid}. The subgroupoid $\mathcal{M}(F)$ whose objects are ideal triangulations and whose morphisms are (ordinary) flips is called the \emph{modular groupoid}; it is described in~\cite{Penner}. Next we see that the cluster modular group associated with the seed $\bi_\Delta$ is identified with a certain extension of the mapping class group.
\begin{dfn}[the tagged mapping class group] The group $\{\pm 1\}^p$ acts on the tagged arc complex by switching the tags at each puncture. The mapping class group naturally acts on the tagged arc complex, as well as on the group $\{\pm 1\}^p$ by $(\phi_*\epsilon)(a):=\epsilon(\phi(a))$. Then the induced semidirect product $MC^{\bowtie}(F):=MC(F) \ltimes \{\pm 1\}^p$ is called the \emph{tagged mapping class group}. The tagged mapping class group naturally acts on the tagged arc complex. \end{dfn} \begin{prop}[Bridgeland-Smith~\cite{BS15}, Propositions 8.5 and 8.6]\label{MCG} The cluster modular group associated with the seed $\bi_\Delta$ is naturally identified with the subgroup of the tagged mapping class group $MC^{\bowtie}(F)$ of $F$ which consists of the elements that preserve the connected components of $\Arc$. Namely, \[ \begin{cases} \Gamma_{F} \cong MC(F) & \text{if $F=F_{g,0}^1$},\\ \Gamma_{F} \cong MC^{\bowtie}(F) & \text{otherwise}. \end{cases} \] \end{prop} We give a sketch of the construction of the isomorphism here, for later use. \begin{proof}[Sketch of the construction] Let us first consider the generic case $F\neq F_g^1$. Fixing a tagged triangulation $\Delta$, we can think of the cluster modular group as $\Gamma_{F}=\pi_1(\mathcal{M}^{\bowtie}(F), \Delta)$. For a mapping class $\psi=(\phi, \epsilon) \in MC^{\bowtie}(F)$, there exists a sequence of tagged flips $\mu_{i_1}, \cdots, \mu_{i_k}$ from $\Delta$ to $\epsilon\cdot\phi^{-1}(\Delta)$ by \cref{Arc}. Since both $\phi$ and $\epsilon$ preserve the exchange matrix of the tagged triangulation, there exists a seed isomorphism $\sigma: \epsilon\cdot \phi^{-1}(\Delta) \to \Delta$. Then $I(\psi):= \sigma \circ \mu_{i_k} \cdots \mu_{i_1}$ defines an element of the cluster modular group. Hence we get a map $I: MC^{\bowtie}(F) \to \Gamma_{F}$, which in turn gives an isomorphism. Since each element of $MC(F)$ preserves the tags, the case of $F=F_g^1$ is clear.
\end{proof} \subsection{The seed associated with an ideal triangulation is of \Teich type}\label{Teich are Teich} Let $\Delta$ be an ideal triangulation of a marked hyperbolic surface $F$ and $\bi_\Delta$ the associated seed. \begin{thm}\label{prop: Teich are Teich} The seed $\bi_\Delta$ is of \Teich type. \end{thm} \begin{proof} Condition (T1). We claim that the action of the cluster modular group on each positive space is properly discontinuous. Then the assertion follows from \cref{growth}. First consider the action on the $\X$-space. By \cref{action coincide}, the action of the subgroup $MC(F) \subset \Gamma_{F}$ on the $\X$-space $\X(\pos)$ coincides with the geometric action. Hence this action of $MC(F)$ is properly discontinuous, as is well known; see, for instance,~\cite{FM}. From the definition of the action of $\Gamma_{F}=MC^{\bowtie}(F)$ on the tagged arc complex and \cref{action}, one can verify that an element $(\phi,\epsilon) \in MC^{\bowtie}(F)$ acts on the positive $\X$-space as $(\phi,\epsilon)g=\phi(\iota(\epsilon)g)$, where $\iota(\epsilon):=\prod_{\epsilon(a)=-1} \iota_a$ is a composition of the involutions defined in \cref{def; involution}. Now suppose that there exists a compact set $K \subset \X(\pos)$ and an infinite sequence of distinct elements $\psi_m=(\phi_m,\epsilon_m) \in \Gamma_{F}$ such that $\psi_m(K) \cap K \neq \emptyset$. Since $\{\pm1\}^p$ is a finite group, the sequence $(\phi_m) \subset MC(F)$ contains infinitely many distinct elements, and there exists an element $\epsilon \in \{\pm1\}^p$ such that $\epsilon_m=\epsilon$ for infinitely many $m$. Hence we have \[ \emptyset \neq \psi_m(K) \cap K =\phi_m(\iota(\epsilon)K) \cap K \subset \phi_m(\iota(\epsilon)K \cup K) \cap (\iota(\epsilon)K \cup K) \] for infinitely many $m$, contradicting the proper discontinuity of the action of $MC(F)$. Hence the action of $\Gamma_{F}$ on the $\X$-space is properly discontinuous. The action on the $\A$-space is similarly shown to be properly discontinuous.
Here the action of $\epsilon$ is described as $\iota'(\epsilon):=\prod_{\epsilon(a)=-1} \iota'_a$, where $\iota'_a$ is the involution changing the horocycle assigned to the puncture $a$ to the conjugated one (see~\cite{FoT}). Condition (T2). Note that for a tagged triangulation $\Delta=\{\gamma_1,\dots,\gamma_N\}$ without digons, as in the left of \cref{Delta1} in \cref{appendix: Teich}, the map $\Psi_\Delta|[S_\Delta]$ is given by $\Psi([w_1,\dots,w_N])=(\bigsqcup w_j \gamma_j, \pm)$, where the sign at a puncture $p$ is defined to be $+1$ if the tags of arcs at $p$ are plain, and $-1$ if the tags are notched. Then on the image of these maps, the equivalence condition holds. Let us consider the tagged triangulations $\Delta_j$ in the $\diamondsuit$-cycle (see \cref{fig; diamond-cycle}). From the definition of the tropical $\X$-transformations, we have \begin{align*} \begin{cases} x_{\Delta_2}(\alpha)=-x_{\Delta_1}(\alpha) \\ x_{\Delta_2}(\beta)=x_{\Delta_1}(\beta) \end{cases} & \begin{cases} x_{\Delta_3}(\alpha)=-x_{\Delta_1}(\alpha) \\ x_{\Delta_3}(\beta)=-x_{\Delta_1}(\beta) \end{cases} & \begin{cases} x_{\Delta_4}(\alpha)=x_{\Delta_1}(\alpha) \\ x_{\Delta_4}(\beta)=-x_{\Delta_1}(\beta). \end{cases} \end{align*} Hence the equivalence condition on the image of the $\diamondsuit$-cycle holds. \end{proof} \subsection{Comparison with the Nielsen-Thurston classification of elements of the mapping class group}\label{comparison} Let $F$ be a hyperbolic surface of type $F_g^1$ or $F_{g,\vec{\delta}}$ throughout this subsection. Recall that in this case we have $\Gamma_{F} \cong MC(F)$ and $\C_{F} \cong \mathrm{Arc}(F)$; see \cref{MCG,Arc}. Let us recall the Nielsen-Thurston classification.
\begin{dfn}[Nielsen-Thurston classification]\label{classical NT} A mapping class $\phi \in MC(F)$ is said to be \begin{enumerate} \item \emph{reducible} if it fixes an isotopy class of a finite union of mutually disjoint simple closed curves on $F$, and \item \emph{pseudo-Anosov (pA)} if there is a pair of mutually transverse filling laminations $G_\pm \in \ML$ and a scalar factor $\lambda >0$ such that $\phi(G_\pm)=\lambda^{\pm 1}G_\pm$. The pair of projective laminations $[G_\pm]$ is called the \emph{pA-pair} of $\phi$. \end{enumerate} \end{dfn} Here a lamination $G \in \wML$ is said to be \emph{filling} if each component of $F \backslash G$ is an unpunctured or once-punctured polygon. It is known (see, for instance,~\cite{FLP}) that each mapping class is at least one of periodic, reducible, or pA, and that a pA class is neither periodic nor reducible. Furthermore, a mapping class $\phi$ is reducible if and only if it fixes a non-filling projective lamination, and is pA if and only if it satisfies $\phi(G)=\lambda G$ for some filling lamination $G\in \ML$ and a scalar $\lambda>0$ with $\lambda \neq 1$. A pA class $\phi$ has the following asymptotic behavior of orbits in $\PML$: for any projective lamination $[G] \in \PML$ with $[G] \neq [G_\mp]$ we have $\lim_{n \to\infty}\phi^{\pm n}[G]=[G_\pm]$. We shall start with periodic classes. In this case the characterization of periodic classes described in \cref{periodic} is complete: \begin{prop}\label{periodic mapping class} For a mapping class $\phi \in \Gamma_{F}$, the following conditions are equivalent. \begin{enumerate} {\renewcommand{\labelenumi}{(\roman{enumi})} \item The mapping class $\phi$ fixes a cell $C \in \C$ of finite type. \item The mapping class $\phi$ is periodic. \item The mapping class $\phi$ has fixed points in $\A_{F}(\pos)$ and $\X_{F}(\pos)$}.
\end{enumerate} \end{prop} \begin{lem}\label{lem: reduced cluster complex} The cells of finite type (see \cref{periodic}) in the cluster complex are in one-to-one correspondence with ideal cell decompositions of $F$. Here an \emph{ideal cell decomposition} is a family $\Delta=\{\alpha_i\}$ of ideal arcs such that each connected component of $F\backslash \bigcup \alpha_i$ is a polygon. \end{lem} \begin{proof} Let $C=(\alpha_1, \dots, \alpha_k)$ be a cell in the cluster complex, which is represented by a family of ideal arcs. Suppose that $\{\alpha_1, \dots, \alpha_k\}$ is an ideal cell decomposition. Then the supercells of $C$ are obtained by adding some ideal arcs on the surface $F\backslash \bigcup_{i=1}^k \alpha_i$ to $\{\alpha_1, \dots, \alpha_k\}$, and there are only finitely many of them, since each added arc must be a diagonal of a polygon. Conversely, suppose that $\{\alpha_1, \dots, \alpha_k\}$ is not an ideal cell decomposition. Then there exists a connected component $F_0$ of $F\backslash \bigcup_{i=1}^k \alpha_i$ which has a half-twist or a Dehn twist in its mapping class group. Hence $F_0$ has infinitely many ideal triangulations, and consequently $C$ has infinitely many supercells. \end{proof} \begin{proof}[Proof of \cref{periodic mapping class}] It suffices to show that the condition $(\mathrm{iii})$ implies the condition $(\mathrm{i})$. Let $\C^*$ denote the union of all cells of finite type in the cluster complex. In view of \cref{lem: reduced cluster complex}, Penner's convex hull construction (\cite{Penner}, Chapter 4) gives a mapping class group equivariant isomorphism \[ \C^* \cong \wTF \slash \mathbb{R}_{>0}, \] from which the assertion follows. \end{proof} Next we focus on cluster-reducible classes and their relation with \emph{reducible classes}. Observe that by \cref{prop: Teich are Teich} a mapping class is cluster-reducible if and only if it fixes an isotopy class of a finite union of mutually disjoint ideal arcs on $F$.
\begin{prop}\label{cluster-red mapping class} The following holds. \begin{enumerate} \item A mapping class $\phi$ is cluster-reducible if and only if it fixes an unbounded lamination with \emph{real} weights $L=(\bigsqcup w_j \gamma_j, \pm)$, where $w_j \in \mathbb{R}$. If $\phi$ is properly reducible, then it induces a mapping class on the surface obtained by cutting $F$ along the multiarc $\bigsqcup \gamma_j$. \item A cluster-reducible class is reducible. \item A filling lamination is cluster-filling. \end{enumerate} \end{prop} \begin{proof} $(1)$. The assertion follows from \cref{redX}$(2)$. Note that an element of $P\X(\trop)_+$ consists of elements of the form $L=(\bigsqcup w_j \gamma_j, \pm)$, where $w_j \in \mathbb{R}_{>0}$. $(2)$. Let $\phi \in MC(F)$ be a cluster-reducible class, $L=(\bigsqcup w_j \gamma_j, \pm)$ a fixed lamination, and $\bigsqcup \gamma_j$ the corresponding multiarc. One can pick representatives of $\phi$ and $\gamma$ so that $\phi(\gamma)=\gamma$ on $F$. Then by cutting $F$ along $\bigsqcup \gamma_j$, we obtain a surface $F'$ with boundary. Since $\phi$ fixes $\bigsqcup \gamma_j$, it induces a mapping class $\phi'$ on $F'$ which may permute the boundary components. Let $C'$ be the multicurve isotopic to the boundary of $F'$. Since $\phi'$ fixes $C'$, the preimage $C$ of $C'$ in $F$ is fixed by $\phi$. Therefore $\phi$ is reducible. $(3)$. Let $G$ be a non-cluster-filling lamination. Let $\gamma$ be an ideal arc such that $a_\gamma(G)=0$. Then $G$ has no intersection with $\gamma$. Since $G$ has compact support, there is a twice-punctured disk which surrounds $\gamma$ and is disjoint from $G$, which implies that $G$ is non-filling. 
\end{proof} \begin{ex}[a reducible class which is not cluster-reducible]\label{red and arc-irred} Let $C$ be a non-separating simple closed curve in $F=F_g^p$, and let $\phi \in MC(F)$ be a mapping class given by the Dehn twist along $C$ on a tubular neighborhood $\mathcal{N}(C)$ of $C$ and by a pA class on $F \backslash \mathcal{N}(C)$. Then $\phi$ is a reducible class which is not cluster-reducible. \end{ex} \begin{proof} The reducibility is clear from the definition. Let $F' := F \backslash \mathcal{N}(C)$. If $\phi$ fixed an ideal arc contained in $F'$, then by \cref{cluster-red mapping class} the restriction $\phi|_{F'} \in MC(F')$ would be reducible, which is a contradiction. Moreover, since $\phi$ is the Dehn twist along $C$ near the curve $C$, it cannot fix ideal arcs which traverse the curve $C$. Hence $\phi$ is cluster-irreducible. \end{proof} \begin{ex}[a cluster-filling lamination which is not filling] Let $C$ be a simple closed curve in $F=F_g^p$, and let $\{P_j\}$ be a pants decomposition of $F$ which contains $C$ as a decomposing curve: $F=\bigcup_j P_j$. For each component $P_j$ which contains a puncture, let $G_j \in \mathcal{ML}_0^+(P_j)$ be a filling lamination such that $i(G_j, C)=0$. For each component $P_j$ which does not contain any punctures, choose an arbitrary lamination $G_j \in \mathcal{ML}_0^+(P_j)$. Then $G:= \bigsqcup_j G_j \sqcup C \in \ML$ is a cluster-filling lamination which is not filling. Indeed, each ideal arc $\alpha$ is incident to a puncture. Let $P_j$ be the component which contains this puncture. Since $G_j \in \mathcal{ML}_0^+(P_j)$ is filling, it intersects the arc $\alpha$: $i(\alpha,G_j)\neq 0$. In particular $i(\alpha, G)\neq 0$. Hence $G$ is cluster-filling. However, $G$ is not filling, since the complement $F\setminus G$ contains a component $P_j$ without punctures, which is not a polygon. 
\end{ex} Just as a mapping class is reducible if and only if it fixes a non-filling projective lamination, we expect that a mapping class is cluster-reducible if and only if it fixes a non-cluster-filling projective lamination.
\section{Introduction}\label{intro} Empirical studies \cite{kunegis:2013} suggest that the distributions of in- and out-degrees of the nodes of many social networks have Pareto-like tails. The indices of these distributions control the likelihood of nodes with large degrees appearing in the data. Some social network models, such as preferential attachment, theoretically exhibit these heavy-tailed characteristics. This paper estimates heavy tail parameters using semi-parametric extreme value (EV) methods and compares such EV estimates with model-based likelihood methods. The EV estimates rely only on the upper tail of the degree distributions, so one might expect these estimates to be robust against model error or data corruption. Preferential attachment (PA) describes the growth of a network where edges and nodes are added over time based on probabilistic rules that assume existing nodes with large degrees attract more edges. This property is attractive for modeling social networks due to its intuitive appeal and its ability to produce power-law networks with degrees matched to data \cite{durrett:2010b,vanderhofstad:2017, krapivsky:2001,krapivsky:redner:2001,bollobas:borgs:chayes:riordan:2003}. Elementary descriptions of the preferential attachment model can be found in \cite{easley:kleinberg:2010}, while more mathematical treatments are available in \cite{durrett:2010b,vanderhofstad:2017,bhamidi:2007}. Also see \cite{kolaczyk:csardi:2014} for a statistical survey of methods for network data and \cite{MR3707244} for inference for an undirected model. The linear preferential attachment model has received the most attention. 
Marginal degree power laws were established in \cite{krapivsky:2001,krapivsky:redner:2001,bollobas:borgs:chayes:riordan:2003}, while joint power-law behavior, also known as joint regular variation, was proved in \cite{resnick:samorodnitsky:towsley:davis:willis:wan:2016, resnick:samorodnitsky:2015,wang:resnick:2016} for the directed linear PA model. Given observed network data, \cite{wan:wang:davis:resnick:2017} proposed parametric inference procedures for the model in two data scenarios. For the case where the history of network growth is available, the MLE estimators of model parameters were derived and shown to be strongly consistent, asymptotically normal and efficient. For the case where only a snapshot of the network is available at a single time point, estimators based on moment methods as well as an approximation to the likelihood were shown to be strongly consistent. The loss of efficiency relative to full MLE was surprisingly mild. The drawback of these two methods is that they are model-based and sensitive to model error. To overcome this lack of robustness, this paper describes an EV inference method applied to a single snapshot of a network and, where possible, compares the EV method to model-based MLE methods. The EV method is based on estimates of the in- and out-degree tail indices, $\iota_{\text{in}}$ and $\iota_{\text{out}}$, using the Hill estimator \cite{hill:1975,resnickbook:2007} coupled with a minimum distance threshold selection method \cite{clauset:shalizi:newman:2009}. We also describe estimation of model parameters using the joint tail distribution of in- and out-degrees, relying on the density of the asymptotic angular measure \cite[page 173]{resnickbook:2007} obtained after standardizing \cite[page 203]{resnickbook:2007} the data. 
If the data are generated by the linear PA model, the EV estimators can be applied to estimate the parameters of the model and compared with MLE estimates; not surprisingly, the EV estimates exhibit larger variance. However, if there is model error or data corruption, the EV estimates more than hold their own, and we illustrate the comparison in two ways: \begin{itemize} \item The data are corrupted; linear PA data have edges randomly deleted or added. The EV approach reliably recovers the original preferential attachment parameters while parametric methods degrade considerably. \item The data come from a misspecified model, namely a directed edge superstar model \cite{bhamidi:2015}, but are analyzed as if they come from the linear PA model. The EV method gives good estimates for the superstar model tail indices and outperforms MLE based on a misspecified linear PA model if the probability of attaching to the superstar is significant. \end{itemize} The rest of the paper is structured as follows. Section~\ref{sec:network:ht} formulates the power-law phenomena in network degree distributions along with joint dependency in the in- and out-degrees. We describe two network models which exhibit such heavy tail properties, the linear PA and the superstar linear PA models. The EV inference method for networks is described in Section~\ref{sec:tail}, where we discuss its use for estimating the parameters of the linear PA model. Section~\ref{sec:est} gives EV estimation results for simulated data from the linear PA model. Since the generating model is correctly specified, we use the previous parametric methods as benchmarks for comparison in Section~\ref{subsec:robust}. Section~\ref{sec:perturb} analyzes network data generated from the linear PA model but corrupted by random edge addition or deletion. 
Pretending ignorance of the perturbation, we compare the performance of the extreme value method with the MLE and snapshot methods to recover the original model. In Section~\ref{subsec:superstar}, we use our EV inference approach on data from the directed superstar model and attempt to recover the tail properties of the degree distributions. A concluding Section~\ref{sec:discussion} summarizes the discussion and gives reasons why EV methods have their place. Appendices give proofs and a fuller discussion of MLE and the snapshot method for linear PA models abstracted from \cite{wan:wang:davis:resnick:2017}. \section{Networks and Heavy-Tailed Degree Distributions} \label{sec:network:ht} \subsection{General discussion.}\label{subsec:gen} We begin with a general discussion of power laws and networks. Let $G(n)=(V(n),E(n))$ denote a directed network, where $V(n)$ is the set of nodes, $E(n)$ is the set of edges, and $n$ is the number of edges. Let $N(n)$ denote the number of nodes in $G(n)$ and $N_n(i,j)$ be the number of nodes with in-degree $i$ and out-degree $j$. The marginal counts of nodes with in-degree $i$ and out-degree $j$ are given by \begin{equation*}\label{e:count} N^{\text{in}}_{i}(n) := \sum_{j=0}^\infty N_n(i,j) \mbox{ \quad and \quad } N^{\text{out}}_{j}(n) := \sum_{i=0}^\infty N_n(i,j), \end{equation*} respectively. For many network data sets, log-log plots of the in- and out-degree distributions, i.e., plots of $\log i$ vs.~$\log N^{\text{in}}_i(n)$ and $\log j$ vs.~$\log N^{\text{out}}_j(n)$, appear to be linear, and generative models of network growth seek to reflect this. Consider models such that the empirical degree frequency converges almost surely, \begin{equation}\label{pij} N_n(i,j)/{N(n)} \to p_{ij}, \quad (n\to\infty) \end{equation} where $p_{ij}$ is a bivariate probability mass function (pmf). 
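Given a finite edge list, the joint counts $N_n(i,j)$ and the marginal counts above can be tallied in a few lines. The following is a minimal Python sketch (the function name \texttt{degree\_counts} and the toy edge list are ours, for illustration only), using \texttt{collections.Counter}:

```python
from collections import Counter

def degree_counts(edges):
    """Tally N_n(i,j), the number of nodes with in-degree i and out-degree j,
    together with the marginal counts, from a directed edge list."""
    in_deg, out_deg = Counter(), Counter()
    nodes = set()
    for v, w in edges:          # edge (v, w) points from v to w
        out_deg[v] += 1
        in_deg[w] += 1
        nodes.update((v, w))
    joint = Counter((in_deg[v], out_deg[v]) for v in nodes)
    n_in, n_out = Counter(), Counter()
    for (i, j), c in joint.items():
        n_in[i] += c            # N^in_i: sum of N_n(i, j) over j
        n_out[j] += c           # N^out_j: sum of N_n(i, j) over i
    return joint, n_in, n_out
```

Dividing `joint` by the number of nodes gives the empirical frequencies whose limit is the pmf $p_{ij}$ of \eqref{pij}.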
The network exhibits power-law behavior if \begin{align} p^{\text{in}}_i &:= \sum_{j=0}^\infty p_{ij} \sim C_{\text{in}} i^{-(1+\iota_\text{in})}\mbox{ as }i\to\infty, \label{eq:pin:pl}\\ p^{\text{out}}_j &:= \sum_{i=0}^\infty p_{ij} \sim C_{\text{out}} j^{-(1+\iota_\text{out})}\mbox{ as }j\to\infty,\label{eq:pout:pl} \end{align} for some positive constants $C_{\text{in}},C_{\text{out}}$. Let $(I,O)$ be a fictitious random vector with joint pmf $p_{ij}$; then, summing the tails of \eqref{eq:pin:pl} and \eqref{eq:pout:pl}, \begin{align*} \textbf{P}(I\ge i)&\sim C_{\text{in}}\,\iota_\text{in}^{-1} \cdot i^{-\iota_\text{in}}\mbox{ as }i\to\infty,\\ \textbf{P}(O\ge j) &\sim C_{\text{out}}\,\iota_\text{out}^{-1} \cdot j^{-\iota_\text{out}}\mbox{ as }j\to\infty. \end{align*} In the linear PA model, the joint distribution of $(I,O)$ satisfies non-standard regular variation. Let $\mathbb{M}(\mathbb{R}^2_+\setminus \{\boldsymbol 0\})$ be the set of Borel measures on $\mathbb{R}^2_+\setminus \{\boldsymbol 0\}$ that are finite on sets bounded away from the origin. 
Then $(I,O)$ being {\it non-standard regularly varying} on $\mathbb{R}^2_+\setminus \{\boldsymbol 0\}$ means that, as $t\to\infty$, \begin{equation}\label{MRV} t\textbf{P}\left[\left(\frac{I}{t^{1/\iota_{\text{in}}}},\frac{O}{t^{1/\iota_{\text{out}}}}\right)\in\cdot\right]\rightarrow \nu(\cdot),\quad \mbox{in }\mathbb{M}(\mathbb{R}^2_+\setminus \{\boldsymbol 0\}), \end{equation} where $\nu(\cdot) \in\mathbb{M}(\mathbb{R}^2_+\setminus \{\boldsymbol 0\})$ is called the limit or tail measure \cite{lindskog:resnick:roy:2014, das:mitra:resnick:2013, hult:lindskog:2006a}. Using the power transformation $I\mapsto I^a$ with $a = \iota_\text{in}/\iota_\text{out}$, the vector $(I^a,O)$ becomes standard regularly varying, i.e., \begin{equation} \label{stdzRV} t\textbf{P}\left[\left(\frac{ I^a}{t^{1/\iota_\text{out}}},\frac{O}{t^{1/\iota_\text{out}}}\right)\in\cdot\right]\rightarrow \tilde{\nu}(\cdot),\quad \mbox{in }\,\mathbb{M}(\mathbb{R}^2_+\setminus \{\boldsymbol 0\}), \end{equation} where $\tilde{\nu}=\nu\circ T^{-1}$ with $T(x,y)=(x^a, y)$. With this standardization, the transformed measure $\tilde\nu$ is directly estimable from data \citep{resnickbook:2007}. In the following we describe two classes of preferential attachment models that generate networks with power-law degree distributions. \subsection{The linear preferential attachment (linear PA) model.} The directed linear PA model \cite{bollobas:borgs:chayes:riordan:2003,krapivsky:redner:2001} constructs a growing sequence of directed random graphs $G(n)$ whose dynamics depend on five nonnegative parameters $\alpha, \beta, \gamma$, $\delta_{\text{in}}$ and $\delta_{\text{out}}$, where $\alpha+\beta+\gamma=1$ and $\delta_{\text{in}},\delta_{\text{out}} >0$. To avoid degenerate situations, assume that each of the numbers $\alpha, \beta, \gamma$ is strictly smaller than 1. We start with an arbitrary initial finite directed graph $G({n_0})$ with at least one node and $n_0$ edges. 
Given an existing graph $G(n-1)$, a new graph $G(n)$ is obtained by adding a single edge to $G(n-1)$ so that the graph $G(n)$ contains $n$ edges for all $n\ge n_0$. Let $I_n(v)$ and $O_n(v)$ denote the in- and out-degree of $v\in V(n)$ in $G(n)$, that is, the number of edges pointing into and out of $v$, respectively. We allow three scenarios of edge creation, which are activated by flipping a 3-sided coin with probabilities $\alpha,\beta$ and $\gamma$. More formally, let $\{J_n, n>n_0\}$ be an iid sequence of trinomial random variables with cells labelled $1,2,3$ and cell probabilities $\alpha,\beta,\gamma$. Then the graph $G(n)$ is obtained from $G(n-1)$ as follows. \tikzset{ >=stealth', punkt/.style={ rectangle, rounded corners, draw=black, very thick, text width=6.5em, minimum height=2em, text centered}, pil/.style={ ->, thick, shorten <=2pt, shorten >=2pt,} } \newsavebox{\mytikzpic} \begin{lrbox}{\mytikzpic} \begin{tikzpicture} \begin{scope}[xshift=0cm,yshift=1cm] \node[draw,circle,fill=white] (s1) at (2,0) {$v$}; \node[draw,circle,fill=gray!30!white] (s2) at (.5,-1.5) {$w$}; \draw[->] (s1.south west)--(s2.north east){}; \draw[dashed] (0,-2.2) circle [x radius=2cm, y radius=15mm]; \end{scope} \begin{scope}[xshift=5cm,yshift=1cm] \node[draw,circle,fill=gray!30!white] (s1) at (.5,-1.5) {$v$}; \node[draw,circle,fill=gray!30!white] (s2) at (-.5,-2.5) {$w$}; \draw[->] (s1.south west)--(s2.north east){}; \draw[dashed] (0,-2.2) circle [x radius=2cm, y radius=15mm]; \end{scope} \begin{scope}[xshift=10cm,yshift=1cm] \node[draw,circle,fill=white] (s1) at (2,0) {$v$}; \node[draw,circle,fill=gray!30!white] (s2) at (.5,-1.5) {$w$}; \draw[->] (s2.north east)--(s1.south west){}; \draw[dashed] (0,-2.2) circle [x radius=2cm, y radius=15mm]; \end{scope} \node at (0,-3.5) {$\alpha$-scheme}; \node at (5,-3.5) {$\beta$-scheme}; \node at (10,-3.5) {$\gamma$-scheme}; \end{tikzpicture} \end{lrbox} \begin{figure}[h] \centering \usebox{\mytikzpic} \end{figure} 
\begin{itemize} \item If $J_n=1$ (with probability $\alpha$), append to $G(n-1)$ a new node $v\in V(n)\setminus V(n-1)$ and an edge $(v,w)$ leading from $v$ to an existing node $w \in V(n-1)$. Choose the existing node $w\in V(n-1)$ with probability depending on its in-degree in $G(n-1)$: \begin{equation} \label{eq:probIn} \textbf{P}[\text{choose $w\in V(n-1)$}] = \frac{I_{n-1}(w)+\delta_{\text{in}}}{n-1+\delta_{\text{in}} N(n-1)} \,. \end{equation} \item If $J_n=2$ (with probability $\beta$), add a directed edge $(v,w)$ to $E({n-1})$ with $v\in V(n-1)=V(n)$ and $w\in V(n-1)=V(n)$, where the existing nodes $v,w$ are chosen independently from the nodes of $G(n-1)$ with probabilities \begin{equation} \label{eq:probInOut} \textbf{P}[\text{choose $(v,w)$}] = \Bigl(\frac{O_{n-1}(v)+\delta_{\text{out}}}{n-1+\delta_{\text{out}} N(n-1)}\Bigr)\Bigl( \frac{I_{n-1}(w)+\delta_{\text{in}}}{n-1+\delta_{\text{in}} N(n-1)}\Bigr). \end{equation} \item If $J_n=3$ (with probability $\gamma$), append to $G(n-1)$ a new node $v \in V(n)\setminus V(n-1)$ and an edge $(w,v)$ leading from the existing node $w\in V(n-1)$ to the new node $v$. Choose the existing node $w \in V(n-1)$ with probability \begin{equation} \label{eq:probOut} \textbf{P}[\text{choose }w \in V(n-1)] = \frac{ O_{n-1}(w)+\delta_{\text{out}}} {n-1+\delta_{\text{out}}N(n-1)}\,. \end{equation} \end{itemize} For convenience we call these scenarios the $\alpha$-, $\beta$- and $\gamma$-schemes. Note that this construction allows for the possibility of multiple edges between two nodes and self loops. This linear preferential attachment model can be simulated efficiently using the method described in \cite[Algorithm~1]{wan:wang:davis:resnick:2017} and linked to \url{http://www.orie.cornell.edu/orie/research/groups/multheavytail/software.cfm}. 
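The three schemes above can be sketched directly in Python. The following is a naive illustration only (each attachment step scans all nodes, so it is $O(N(n))$ per edge, unlike the efficient simulator of \cite[Algorithm~1]{wan:wang:davis:resnick:2017}); the initial graph (a single node with one self loop), the seed, and the helper names \texttt{linear\_pa} and \texttt{tail\_indices} are our own choices, with \texttt{tail\_indices} evaluating the limiting formulas \eqref{c1c2} stated below.

```python
import random

def tail_indices(alpha, beta, delta_in, delta_out):
    # Limiting marginal tail indices of the directed linear PA model, eq. (c1c2).
    gamma = 1.0 - alpha - beta
    iota_in = (1.0 + delta_in * (alpha + gamma)) / (alpha + beta)
    iota_out = (1.0 + delta_out * (alpha + gamma)) / (beta + gamma)
    return iota_in, iota_out

def linear_pa(n, alpha, beta, delta_in, delta_out, seed=0):
    """Grow a directed linear PA graph until it has n edges."""
    rng = random.Random(seed)
    in_deg, out_deg = [1], [1]          # initial graph: one node with a self loop
    edges = [(0, 0)]

    def pick(deg, delta):
        # P(choose w) proportional to deg[w] + delta, degrees taken in G(n-1)
        r = rng.uniform(0.0, sum(deg) + delta * len(deg))
        acc = 0.0
        for w, d in enumerate(deg):
            acc += d + delta
            if r <= acc:
                return w
        return len(deg) - 1

    while len(edges) < n:
        u = rng.random()
        if u < alpha:                    # alpha-scheme: new node -> existing node
            w = pick(in_deg, delta_in)
            in_deg.append(0); out_deg.append(1)
            in_deg[w] += 1
            edges.append((len(in_deg) - 1, w))
        elif u < alpha + beta:           # beta-scheme: existing -> existing
            v = pick(out_deg, delta_out)
            w = pick(in_deg, delta_in)
            out_deg[v] += 1; in_deg[w] += 1
            edges.append((v, w))
        else:                            # gamma-scheme: existing node -> new node
            v = pick(out_deg, delta_out)
            in_deg.append(1); out_deg.append(0)
            out_deg[v] += 1
            edges.append((v, len(in_deg) - 1))
    return edges, in_deg, out_deg
```

For instance, $(\alpha,\beta,\delta_{\text{in}},\delta_{\text{out}})=(0.2,0.4,1,1)$ gives $(\iota_\text{in},\iota_\text{out})=(8/3,2)$ by \eqref{c1c2}.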
It is shown in \cite{resnick:samorodnitsky:towsley:davis:willis:wan:2016,resnick:samorodnitsky:2015, wang:resnick:2016} that the empirical degree distribution satisfies $$ \frac{N_n(i,j)}{N(n)} \stackrel{\text{a.s.}}{\longrightarrow} p_{ij}, $$ and the marginals satisfy \eqref{eq:pin:pl} and \eqref{eq:pout:pl}, where the tail indices are \begin{equation}\label{c1c2} \iota_\text{in} := \frac{1+\delta_{\text{in}}(\alpha+\gamma)}{\alpha+\beta},\quad\text{and}\quad \iota_\text{out} := \frac{1+\delta_{\text{out}}(\alpha+\gamma)}{\beta+\gamma}. \end{equation} Furthermore, the joint regular variation condition \eqref{stdzRV} is satisfied by the limit degree distribution, and the limit measure \cite{resnick:samorodnitsky:towsley:davis:willis:wan:2016} or its density \cite{wang:resnick:2016} can be explicitly derived. We shall use this property for parameter estimation in Section~\ref{sec:tail}. \subsection{The superstar linear PA model.} The key feature of the superstar linear PA model that distinguishes it from the standard linear PA model is the existence of a superstar node, to which a large proportion of nodes attach. A new parameter $p$ represents the attachment probability. The $\alpha$-, $\beta$- and $\gamma$-schemes of the linear PA model are still in action. However, for the $\alpha$- and $\beta$-schemes, an outgoing edge will attach to the superstar node with probability $p$, while with probability $1-p$ it will attach to a non-superstar node according to the original linear PA rules. For simplicity, the network is initialized with two nodes $V(1)=\{0,1\}$, where node $0$ is the superstar node. We assume that at the first step there is an edge pointing from $1\to 0$, so $E_1=\{(1,0)\}$. Again each graph $G(n)$ contains $n$ edges for all $n\ge1$. 
Let \[ V^0(n) := V(n)\setminus \{0\}, \quad\text{and}\quad E^0(n) := E(n)\setminus\{(u, 0): u\in V^0(n)\}, \] so that $E^0(n)$ is the set of edges in $G(n)$ that do not point to the superstar. Let $|V^0(n)|$ and $|E^0(n)|$ denote the number of nodes and edges in the non-superstar subgraph of $G(n)$, respectively. The model is specified through the parameter set $(p,\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})$. Let $\{B_n: n\ge 1\}$ be another iid sequence of Bernoulli random variables where $$ \textbf{P}(B_n = 1) = p = 1-\textbf{P}(B_n=0). $$ The Markovian graph evolution from $G(n-1)$ to $G(n)$ is modified from the linear PA model as follows. \begin{itemize} \item If $J_n=1$ (with probability $\alpha$), append to $G(n-1)$ a new node $v\in V(n)\setminus V(n-1)$ and an edge $(v,w)$ leading from $v$ to an existing node $w$. \begin{itemize} \item If $B_n=1$ (with probability $p$), $w=0$, the superstar node; \item If $B_n=0$ (with probability $1-p$), $w\in V^0(n-1)$ is chosen according to the linear PA rule \eqref{eq:probIn} applied to $(V^0(n-1),E^0(n-1))$. \end{itemize} \item If $J_n=2$ (with probability $\beta$), add a directed edge $(v,w)$ to $E({n-1})$, where \begin{itemize} \item If $B_n=1$ (with probability $p$), $v=0$ and $w\in V^0(n-1)=V^0(n)$ is chosen with probability \eqref{eq:probIn} applied to $(V^0(n-1),E^0(n-1))$; \item If $B_n=0$ (with probability $1-p$), $v,w\in V^0(n-1)=V^0(n)$ are chosen with probability \eqref{eq:probInOut} applied to $(V^0(n-1),E^0(n-1))$. \end{itemize} \item If $J_n=3$ (with probability $\gamma$), append to $G(n-1)$ a new node $w\in V^0(n)\setminus V^0(n-1)$ and an edge $(v,w)$ leading from the existing node $v\in V^0(n-1)$ to $w$, where $v\in V^0(n-1)$ is chosen with probability \eqref{eq:probOut} applied to $(V^0(n-1),E^0(n-1))$. 
\end{itemize} If we use $N^{\text{in}}_i(n)$ and $N^{\text{out}}_j(n)$ to denote the number of {\it non\/}-superstar nodes that have in-degree $i$ and out-degree $j$, respectively, then Theorem~\ref{thm:superstar} shows that $(N^{\text{in}}_i(n)/n, N^{\text{out}}_j(n)/n) \to (q^{\text{in}}_i,q^{\text{out}}_j)$ almost surely, where the limits are deterministic constants that decay like power laws. \begin{Theorem}\label{thm:superstar} Let $(N^{\text{in}}_i(n), N^{\text{out}}_j(n))$ be the in- and out-degree counts of the non-superstar nodes of the superstar model. There exist constants $q^{\text{in}}_i$ and $q^{\text{out}}_j$ such that as $n\to\infty$, \[ \frac{N^{\text{in}}_i(n)}{n} \stackrel{\text{a.s.}}{\longrightarrow} q^{\text{in}}_i,\qquad \frac{N^{\text{out}}_j(n)}{n} \stackrel{\text{a.s.}}{\longrightarrow} q^{\text{out}}_j. \] Moreover, \begin{enumerate} \item[(i)] As $i\to\infty$, \begin{equation}\label{power-in} q^{\text{in}}_i \sim C'_\text{in}\, i^{-(1+\iota_{\text{in}})}, \end{equation} where $C'_\text{in}$ is a positive constant and \begin{equation}\label{iotain} \iota_\text{in} := \frac{1-(\alpha+\beta)p+\delta_{\text{in}}(\alpha+\gamma)}{(\alpha+\beta)(1-p)}. \end{equation} \item[(ii)] As $j\to\infty$, \begin{equation}\label{power-out} q^{\text{out}}_j \sim C'_\text{out}\, j^{-(1+\iota_{\text{out}})}, \end{equation} where $C'_\text{out}$ is a positive constant and \begin{equation}\label{iotaout} \iota_\text{out}:= \frac{1+\delta_{\text{out}}(\alpha+\gamma)}{\beta+\gamma}. \end{equation} \end{enumerate} \end{Theorem} The proof of Theorem~\ref{thm:superstar} is provided in Appendix~\ref{subsec:proof:superstar}. \section{Estimation Using Extreme Value Theory}\label{sec:tail} In this section, we consider network parameter estimation using extreme value theory. 
Given a graph $G(n)$ at a fixed timestamp, the data available for estimation are the in- and out-degrees of each node, denoted by $(I_n(v),O_n(v))$, $v=1,\ldots,N(n)$. Let $F_n(\cdot)$ be the empirical distribution of these data on $\mathbb{N}\times \mathbb{N}$. Then from \eqref{pij}, almost surely $F_n$ converges weakly to a limit distribution $F$ on $\mathbb{N}\times \mathbb{N}$, which is the measure corresponding to the mass function $\{p_{ij}\}$. Let $\epsilon_{(i,j)}(\cdot)$ be the Dirac measure concentrated at $(i,j)$; we have from \eqref{pij}, \begin{equation}\label{e:FnF} F_n(\cdot) =\frac{1}{N(n) }\sum_{v=1}^{N(n)} \epsilon_{(I_n(v),O_n(v) )}(\cdot) =\sum_{i,j} \frac{N_n(i,j)}{N(n)} \epsilon_{(i,j)}(\cdot)\,\overset{w}\to\, \sum_{i,j} p_{ij} \epsilon_{(i,j)}(\cdot)=:F(\cdot). \end{equation} \subsection{Estimating tail indices; Hill estimation.}\label{subsubsec:clauset} We review tail index estimation of $\iota_\text{in}$ (the case of $\iota_\text{out}$ is similar) using the Hill estimator \cite{hill:1975,resnickbook:2007} applied to the in-degree data $I_n(v)$, $v=1,\ldots,N(n)$. From \eqref{eq:pin:pl}, the in-degree marginal of $F$, called $F_\text{in}$, is regularly varying with index $-\iota_\text{in}$. From Karamata's theorem, $\iota_\text{in}^{-1}$ can be expressed as a function of $F_\text{in}$ \cite[page 69]{dehaan:ferreira:2006}, \begin{equation} \label{eq:ain:limit} \iota_\text{in}^{-1} = \lim_{t\to\infty} \frac{\int_t^\infty(\log(u)-\log(t))F_\text{in}(du)}{1-F_\text{in}(t)}. \end{equation} The Hill estimator of $\iota_\text{in}^{-1}$ replaces $F_\text{in}(\cdot)$ in \eqref{eq:ain:limit} with $F_{\text{in},n}$, the in-degree marginal of the empirical distribution in \eqref{e:FnF}, and $t$ with $I_{(k_n+1)}$. Let $I_{(1)} \ge \ldots \ge I_{(N(n))}$ be the decreasing order statistics of $I_n(v)$, $v=1,\ldots,N(n)$. 
The resulting estimator is \begin{eqnarray*} \hatain^{-1}(k_n) &=& \frac{\int_{I_{(k_n+1)}}^\infty (\log(u) - \log(I_{(k_n+1)})) F_{\text{in},n}(du)}{k_n/N(n)} \\ &=& \frac{1}{k_n} \sum_{j=1}^{k_n} (\log(I_{(j)}) - \log(I_{(k_n+1)})). \end{eqnarray*}\noindent With iid data, if we assume $k_n\to\infty$ and $k_n/N(n)\to0$, then the Hill estimator is consistent. Of course, our network data are not iid, but Hill estimation still works in practice. Consistency for an undirected graph is proven in \cite{wang:resnick:2017}, but for directed graphs this remains an unresolved issue. To select $k_n$ in practice, \cite{clauset:shalizi:newman:2009} proposed computing the Kolmogorov-Smirnov (KS) distance between the empirical distribution of the upper $k$ observations and the power-law distribution with index $\hatain(k)$: \[ D_{k}:=\sup_{y\ge 1} \left|\frac{1}{k} \sum_{j=1}^{k}{\bf1}_{\{I_{(j)}/I_{(k+1)}>y\}}-y^{-\hatain(k)}\right|, \quad 1\le k\le n-1. \] Then the optimal $k^*$ is the one that minimizes the KS distance, $$ k^* := \operatornamewithlimits{argmin}_{1\le k\le n-1} D_{k}, $$ and the tail index is estimated by $\hatain(k^*)$. This estimator performs well if the thresholded portion comes from a Pareto tail and also seems effective in a variety of non-iid scenarios. It is widely used by data repositories of large network datasets such as KONECT (\url{http://konect.uni-koblenz.de/}) \cite{kunegis:2013} and is realized in the R-package {\it poweRlaw\/} \cite{gillespie:2015}. We refer to the above procedure as the {\it minimum distance method} for estimating $\iota_\text{in},\iota_\text{out}$ from network data. There are two issues when applying this method. First, the data are node-based and not collected from independent repeated sampling. Secondly, degree counts are discrete and do not exactly comply with the Pareto assumption made in the minimum distance method. Our analysis shows that even if we ignore these two issues, the tail estimates are still reasonably good. 
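A minimal pure-Python sketch of the Hill estimator and the minimum distance choice of $k$ follows. The function names, the small-$k$ cutoff, and the exact-Pareto quantiles used in the test are our own illustrative choices (ties among observations are assumed absent when evaluating the KS distance):

```python
import math

def hill(data, k):
    """Hill estimate of the tail index iota from the k upper order statistics."""
    xs = sorted(data, reverse=True)
    inv = sum(math.log(xs[j] / xs[k]) for j in range(k)) / k   # xs[k] plays the role of I_(k+1)
    return 1.0 / inv

def ks_distance(xs_desc, k, iota):
    """KS distance D_k between the upper-k empirical tail and a Pareto(iota) tail."""
    ratios = sorted(xs_desc[j] / xs_desc[k] for j in range(k))  # ascending ratios >= 1
    d = 0.0
    for j, r in enumerate(ratios):
        tail = r ** (-iota)
        # the empirical survival function jumps at each ratio; check both sides
        d = max(d, abs((k - j) / k - tail), abs((k - j - 1) / k - tail))
    return d

def min_distance_k(data, k_min=10, k_max=None):
    """Minimum distance (Clauset-Shalizi-Newman style) choice of k."""
    xs = sorted(data, reverse=True)
    k_max = k_max if k_max is not None else len(xs) - 1
    return min(range(k_min, k_max), key=lambda k: ks_distance(xs, k, hill(xs, k)))
```

On exact Pareto quantiles with index $\iota=2$, `hill` recovers a value close to 2 and `min_distance_k` returns a threshold at which the fit remains close.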
\subsection{Estimating dependency between in- and out-degrees} If the limiting random vector $(I,O)\sim F$ corresponding to $p_{ij}$ in \eqref{pij} is jointly regularly varying and satisfies \eqref{stdzRV}, we may apply a polar coordinate transformation, for example, with the $L_2$-norm, $$ (I^a, O)\mapsto (\sqrt{I^{2a}+O^2},\arctan(O/I^a)) := (R,T), $$ where $a=\iota_\text{in}/\iota_\text{out}$. Then, with respect to $F$ in \eqref{e:FnF}, the conditional distribution of $T$ given $R>r$ converges weakly (see, for example, \cite[p. 173]{resnickbook:2007}), $$ F[T\in\cdot \mid R>r] \to S(\cdot),\quad r\to\infty, $$ where $S$ is the {\it angular measure} and describes the asymptotic dependence of the standardized pair $(I^a, O)$. Since for large $r$, $F[T\in\cdot \mid R>r] \approx S(\cdot)$, and for large $n$, $F_n \approx F$, it is plausible that for $r$ and $n$ large, $F_n[T \in \cdot \mid R>r] \approx S(\cdot)$. Skeptics may check \cite[p. 307]{resnickbook:2007} for a more precise argument; recall that $F_n$ is the empirical measure defined in \eqref{e:FnF}. Based on the observed degrees $\{(I_n(v),O_n(v)); v=1,\ldots,N(n)\}$, how does this work in practice? First, $a$ is replaced by $\hat{a} = \hatain/\hataout$ estimated as in Section~\ref{subsubsec:clauset}. Then the distribution $S$ is estimated via the empirical distribution of the sample angles $T_n(v):=\arctan(O_n(v)/I_n(v)^{\hat{a}})$ for which the radius $R_n(v):=\sqrt{I_n(v)^{2\hat{a}}+O_n(v)^2}$ exceeds some large threshold $r$. This is the POT (Peaks Over Threshold) methodology commonly employed in extreme value theory \cite{coles:2001}. In cases where the network model is known, $S$ may be specified in closed form. For the linear PA model, $S$ has a density that is an explicit function of the linear PA parameters \cite{resnick:samorodnitsky:towsley:davis:willis:wan:2016}. 
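The standardization and POT step just described can be sketched as follows; the helper name \texttt{angular\_sample} and the toy input in the test are illustrative, not from the text:

```python
import math

def angular_sample(pairs, a, n_tail):
    """Map degree pairs (I, O) to polar coordinates of the standardized pair,
    (R, T) = (sqrt(I**(2a) + O**2), arctan(O / I**a)), and return the angles T
    of the n_tail points with the largest radii (the POT step)."""
    polar = []
    for i, o in pairs:
        s = float(i) ** a                       # standardized in-degree I**a
        polar.append((math.hypot(s, o), math.atan2(o, s)))
    polar.sort(key=lambda rt: rt[0], reverse=True)
    return [t for _, t in polar[:n_tail]]
```

The empirical distribution of the returned angles then serves as the estimate of the angular measure $S$.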
After estimating $\iota_\text{in}$ and $\iota_\text{out}$ by the minimum distance method, the remaining parameters can be estimated by an approximate likelihood method that we now explain. \subsection{EV estimation for the linear PA model}\label{subsec:tailPA} From \eqref{c1c2}, \[ \delta_{\text{in}} = \frac{\iota_\text{in}(\alpha+\beta)-1}{\alpha+\gamma},\quad \delta_{\text{out}} = \frac{\iota_\text{out}(\beta+\gamma)-1}{\alpha+\gamma}, \] so that the linear PA model may be parameterized by $\boldsymbol{\theta}=(\alpha,\beta,\gamma, \iota_\text{in}, \iota_\text{out})$. To construct the EV estimates, begin by computing the minimum distance estimates $\hat\iota^{EV}_{\text{in}},\hat\iota^{EV}_{\text{out}}$ of the in- and out-degree indices. The parameter $\beta$, which represents the proportion of edges created between existing nodes, is estimated by $\hat{\beta}^{EV} = 1-N(n)/n$. From \eqref{stdzRV}, the conditional distribution of $\arctan(O/I^a)$ given $I^{2a}+O^2>r^2$ converges weakly as $r\to\infty$ to the distribution of a random variable $\Theta$ \cite[Section~4.1.2]{resnick:samorodnitsky:towsley:davis:willis:wan:2016}, whose pdf is given by ($0\leq x\leq \pi/2$) \begin{eqnarray} f_\Theta(x;\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}}) &\propto &\frac{\gamma}{\delta_{\text{in}}} (\cos x)^{\frac{\delta_{\text{in}}+1}{a}-1} (\sin x)^{\delta_{\text{out}}-1} \int_0^\infty t^{\iota_\text{in}+\delta_{\text{in}}+a\delta_{\text{out}}} e^{-t(\cos x)^{1/a}-t^a\sin x}\mathrm{d} t\nonumber\\ &&+\frac{\alpha}{\delta_{\text{out}}} (\cos x)^{\frac{\delta_{\text{in}}}{a}-1} (\sin x)^{\delta_{\text{out}}} \int_0^\infty t^{a-1+\iota_\text{in}+\delta_{\text{in}}+a\delta_{\text{out}}} e^{-t(\cos x)^{1/a}-t^a\sin x}\mathrm{d} t.\label{densIO} \end{eqnarray} Replacing $\beta,\iota_\text{in},\iota_\text{out}$ with their estimated values $\hat\beta^{EV}$, $\hat\iota_{\text{in}}^{EV}$, and $\hat\iota_{\text{out}}^{EV}$, and setting $\gamma = 1-\alpha-\hat\beta^{EV}$, the density \eqref{densIO} can be viewed as a profile likelihood function (based on a single observation $x$) of the unknown parameter $\alpha$, which we denote by $$l(\alpha;x)= f_\Theta(x;\alpha,\hat\beta^{EV},1-{\alpha}-\hat\beta^{EV}, \hat{\delta}_{\text{in}}^{EV},\hat{\delta}_{\text{out}}^{EV}). $$ Given the degrees $\bigl((I_n(v),O_n(v)), v\in V(n)\bigr)$, $\hat\alpha^{EV}$ is computed by maximizing the profile likelihood over the observations $(I_n(v),O_n(v))$ for which $R_n(v)> r$ for a large threshold $r$. That is, \begin{equation} \label{eq:alpha:solve} \hat\alpha^{EV} := \operatornamewithlimits{argmax}_{0\leq \alpha \leq 1} \sum_{v=1}^{N(n)} \log l\left(\alpha;\arctan\left(\frac{O_n(v)}{(I_n(v))^{\hat{a}}}\right)\right) \mathbf{1}_{\{R_n(v)> r\}}, \end{equation} where $r$ is typically chosen as the $(n_{\text{tail}}+1)$-th largest of the $R_n(v)$ for a suitable $n_{\text{tail}}$. This estimation procedure is sometimes referred to as the ``independence estimating equations'' (IEE) method \cite{chandler:bate:2007,varin:reid:firth:2011}, in which the dependence between observations is ignored. The technique is often used when the joint distribution of the data is unknown or intractable. Finally, using the constraint $\alpha+\beta+\gamma=1$, we estimate $\gamma$ by $\hat{\gamma}^{EV}=1-\hat\alpha^{EV}-\hat\beta^{EV}$. \section{Estimation results} \label{sec:est} In this section, we demonstrate estimation of the linear PA and related models by the EV method described in Section~\ref{subsec:tailPA}. In Section~\ref{subsec:robust}, data are simulated from the standard linear PA model and used to estimate the true parameters of the underlying model. Section~\ref{sec:perturb} considers data generated from the linear PA model but corrupted by random addition or deletion of edges. Our goal is to estimate the parameters of the original linear PA model. 
In Section~\ref{subsec:superstar}, we simulate data from the superstar linear PA model and attempt to use the standard linear PA estimation to recover the degree distributions. Throughout the section, the EV method is compared with two parametric estimation approaches for the linear PA model, namely the MLE and snapshot (SN) methods proposed in \cite{wan:wang:davis:resnick:2017}. For a given network, when the network history is available, that is, when each edge is marked with the timestamp of its creation, MLE estimates are directly computable. When only a snapshot of the network at a single point in time is available (i.e., the timestamps of edge creation are unavailable), we have an estimation procedure combining elements of the method of moments with an approximation to the likelihood. A brief summary of the MLE and SN methods is given in Appendix~\ref{subsec:param_est}, and desirable properties of these estimators are established in \cite{wan:wang:davis:resnick:2017}. Note that a main difference between the MLE, SN and EV methods lies in the amount of data utilized. The MLE approach requires the entire growth history of the network, while the SN method uses only a single snapshot of the network. The EV method, on the other hand, requires only a subset of a snapshot of the network: the degree counts of nodes with large in- or out-degrees. When the underlying model is true, MLE is certainly the most efficient, but it also hinges on having a complete data set. As we shall see, when the model is misspecified, the EV method provides an attractive and reliable alternative. \subsection{Estimation for the linear PA model}\label{subsec:robust} \subsubsection{Comparison of EV with MLE and SN} \begin{figure}[t] \includegraphics[scale=0.4]{1213asy_est.pdf} \caption{Boxplots of biases for estimates of $(\alpha,\iota_\text{in},\iota_\text{out})$ using the EV, MLE and SN methods.
Panels (a)--(c) correspond to the case where $\alpha = 0.1, 0.2$ and (d)--(f) are for $\alpha = 0.3, 0.4$, holding $(\beta, \delta_{\text{in}}, \delta_{\text{out}}) = (0.4, 1, 1)$ constant.}\label{fig:asy} \end{figure} Figure~\ref{fig:asy} presents biases for estimates of $(\alpha,\iota_\text{in},\iota_\text{out})$ using the EV, MLE, and SN methods on data simulated from the linear PA model. We held $(\beta, \delta_{\text{in}}, \delta_{\text{out}}) = (0.4, 1, 1)$ constant and varied $\alpha = 0.1, 0.2, 0.3, 0.4$, so that the true values of $\gamma,\iota_\text{in},\iota_\text{out}$ also varied. For each set of parameter values $(\alpha,\iota_\text{in}, \iota_\text{out})$, 200 independent replications of a linear PA network with $n=10^5$ edges were simulated and the true values of $(\iota_\text{in}, \iota_\text{out})$ were computed from \eqref{c1c2}. We estimated $(\iota_\text{in}, \iota_\text{out})$ by the minimum distance method, giving $(\hat{\iota}^{EV}_\text{in}, \hat{\iota}^{EV}_\text{out})$, and by the MLE and one-snapshot methods applied to the parametric model (cf.\ Appendix~\ref{subsec:param_est}), denoted by $(\hat{\iota}^{MLE}_\text{in},\hat{\iota}^{MLE}_\text{out})$ and $(\hat{\iota}^{SN}_\text{in},\hat{\iota}^{SN}_\text{out})$, respectively. With $(\hat{\iota}^{EV}_\text{in}, \hat{\iota}^{EV}_\text{out})$ in hand, $\hat\alpha^{EV}$ is calculated by \eqref{eq:alpha:solve} using $n_{\text{tail}}=200$. As seen here, for data simulated from a known model, MLE outperforms the other estimation procedures. The EV procedure tends to have much larger variance than both MLE and SN, with slightly more bias. This is not surprising, as the performance of the EV estimators depends on the quality of the following approximations: \begin{enumerate} \item The number of edges in the network, $n$, should be sufficiently large to ensure a close approximation of $N_n(i,j)/N(n)$ to the limit joint pmf $p_{ij}$.
\item The choice of thresholds must guarantee the quality of the EV estimates for the indices and the limiting angular distribution. The thresholding means that estimates are based on only a small fraction of the data and hence have large uncertainty. \item The parameter $a$ used to transform the in- and out-degrees to standard regular variation is estimated and thus subject to estimation error, which propagates through the remaining estimation procedures. \end{enumerate} \subsubsection{Sensitivity analysis.}\label{subsubsec:sens} We explore how sensitive the EV estimates are to the choice of $r$, the threshold for the approximation to the limiting angular density in \eqref{eq:alpha:solve}. Equivalently, we consider varying $n_\text{tail}$, the number of tail observations included in the estimation. For the sensitivity analysis, 50 linear PA networks with $10^5$ edges and parameter set $$(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1),$$ or equivalently, $$(\alpha,\beta,\gamma,\iota_\text{in},\iota_\text{out})=(0.3,0.4,0.3,2.29,2.29),$$ were generated. We use $n_\text{tail} = 50,100,200,300,500,1000, 1500$ to calculate the EV estimates for $\alpha$. The performance of $\hat{\alpha}^{EV}$ across different values of $n_\text{tail}$ is shown by the blue boxplots in Figure~\ref{fig:alpha_est}(a). \begin{figure}[t] \centering \includegraphics[scale=0.6]{fixed_selected_clauset.pdf} \caption{(a) Boxplots of biases of $\hat\alpha$ and $\hat{\alpha}^*$ for different $n_\text{tail}$ and $n_\text{tail}^*$ over 50 replications, where $(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1)$.
(b) Linearly interpolated trajectories of biases of $\hat\alpha$ and $\hat{\alpha}^*$ from 10 randomly picked realizations.}\label{fig:alpha_est} \end{figure} We see that the biases of $\hat{\alpha}$ remain small until $n_\text{tail}$ increases to 300, and for larger values of $n_\text{tail}$, $\hat{\alpha}$ considerably underestimates $\alpha$. We note that the radial components $R_n(v)$, $1\le v\le N(n)$, are also power-lawed. As an attempt to select the optimal value of $n_\text{tail}$, we apply the minimum distance method to the $R_n(v)$'s and use the selected threshold, $n^*_\text{tail}$, as the truncation threshold. The boxplot of $n^*_\text{tail}$ for the 50 simulated networks is represented by the horizontal boxplot in Figure~\ref{fig:alpha_est}(a). The EV estimator with respect to this threshold for each simulation, denoted by $\hat{\alpha}^*$, is shown by the red boxplot and plotted at $n_\text{tail}=875$, the mean of $n^*_\text{tail}$. Overall, $n^*_\text{tail}$ varies between 300 and 1500 and results in an underestimated $\hat{\alpha}^*$. In Figure~\ref{fig:alpha_est}(b), we randomly choose 10 realizations (among the 50 replications) and plot the linearly interpolated trajectories of $\hat{\alpha}$ based on different values of $n_\text{tail}$. Black points are the estimation results using fixed thresholds $n_\text{tail} = 50,100,200,300,500,1000,1500$ and red points are determined by $(\hat{\alpha}^*,n^*_\text{tail})$ using the minimum distance method. Black and red points denoted by the same symbol belong to the same realization. Comparison of the estimation results for different values of $n_\text{tail}$ reveals that choosing a fixed threshold $n_\text{tail}\le 300$ outperforms selecting $n_\text{tail}^*$ using the minimum distance method, as it produces estimates with smaller biases and variances.
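The $\alpha$-estimation step \eqref{eq:alpha:solve} can be sketched numerically as follows. This is a minimal illustration rather than our actual implementation: the inner integral of \eqref{densIO} and the normalizing constant are computed by the trapezoidal rule, the argmax is taken over a coarse grid, the tail angles are arbitrary stand-ins for the observed $\arctan(O_n(v)/I_n(v)^{\hat a})$, and $a=1$ corresponds to $\iota_\text{in}=\iota_\text{out}$.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule (simple helper, version-agnostic)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def theta_density(x, alpha, beta, iota_in, iota_out, a=1.0):
    """Unnormalized limiting angular density f_Theta of eq. (densIO),
    with delta_in, delta_out recovered via eq. (c1c2)."""
    gamma = 1.0 - alpha - beta
    delta_in = (iota_in * (alpha + beta) - 1.0) / (alpha + gamma)
    delta_out = (iota_out * (beta + gamma) - 1.0) / (alpha + gamma)
    t = np.linspace(1e-6, 60.0, 3000)            # grid for the inner t-integral
    w = np.exp(-t * np.cos(x) ** (1.0 / a) - t ** a * np.sin(x))
    i1 = trap(t ** (iota_in + delta_in + a * delta_out) * w, t)
    i2 = trap(t ** (a - 1.0 + iota_in + delta_in + a * delta_out) * w, t)
    return (gamma / delta_in * np.cos(x) ** ((delta_in + 1.0) / a - 1.0)
            * np.sin(x) ** (delta_out - 1.0) * i1
            + alpha / delta_out * np.cos(x) ** (delta_in / a - 1.0)
            * np.sin(x) ** delta_out * i2)

def profile_loglik(alpha, angles, beta_hat, iota_in_hat, iota_out_hat, a=1.0):
    """Log profile likelihood of alpha over tail angles, cf. eq. (eq:alpha:solve);
    the density is normalized numerically over (0, pi/2)."""
    xs = np.linspace(1e-3, np.pi / 2 - 1e-3, 200)
    dens = np.array([theta_density(x, alpha, beta_hat, iota_in_hat, iota_out_hat, a)
                     for x in xs])
    norm = trap(dens, xs)
    return sum(np.log(theta_density(x, alpha, beta_hat, iota_in_hat, iota_out_hat, a) / norm)
               for x in angles)

# Grid search over alpha for some illustrative tail angles.
angles = np.linspace(0.2, 1.2, 40)               # stand-in tail observations
grid = np.linspace(0.05, 0.55, 11)
alpha_hat = grid[int(np.argmax([profile_loglik(al, angles, 0.4, 2.29, 2.29)
                                for al in grid]))]
```

In practice one would replace the grid search by a one-dimensional optimizer over $[0,1-\hat\beta^{EV})$ and feed in the actual thresholded angles.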
\subsection{Data corrupted by random edge addition/deletion.}\label{sec:perturb} PA models are designed to describe human interaction in social networks, but what if data collected from a network are corrupted or usual behavior is changed? Corruption could be due to collection error, and atypical behavior could result from users hiding their network presence or trolls acting as provocateurs. In such circumstances, the task is to unmask data corruption or atypical behavior and recover the parameters associated with the original preferential attachment rules. In the following, we consider network data that are generated from the linear PA model but corrupted by random addition or deletion of edges. For such corrupted data, we attempt to recover the original model and compare the performances of the MLE, SN, and EV methods. \subsubsection{Randomly adding edges.}\label{subsec:add} We consider a network generating algorithm with linear PA rules but also a possibility of adding random edges. Let $G(n)=(V(n),E(n))$ denote the graph at time $n$. We assume that the edge set $E(n)$ can be decomposed into two disjoint subsets: $E(n) = E^{PA}(n) \cup E^{RA}(n)$, where $E^{PA}(n)$ is the set of edges resulting from PA rules, and $E^{RA}(n)$ is the set of those resulting from random attachment. This can be viewed as an interpolation between the PA network and the Erd\H{o}s--R\'enyi random graph. More specifically, consider the following network growth. Given $G(n-1)$, $G(n)$ is formed by creating a new edge as follows: \begin{enumerate} \item[(1)] With probability $p_a$, two nodes are chosen randomly (allowing repetition) from $V(n-1)$ and an edge is created connecting them. The possibility of a self loop is allowed. \item[(2)] With probability $1-p_a$, a new edge is created according to the preferential attachment scheme $(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})$ on $G^{PA}(n-1):=(V(n-1),E^{PA}(n-1))$.
\end{enumerate} The question of interest is: if we are unaware of the perturbation and pretend that data from this model come from the linear PA model, can we recover the PA parameters? To investigate, we generate networks of $n=10^5$ edges with parameter values $$(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1), \quad p_a\in\{0.025,0.05,0.075,0.1,0.125,0.15\}.$$ For each network, the original PA model is fitted using the MLE, SN and EV methods, respectively. The angular MLE in \eqref{eq:alpha:solve} in the extreme value estimation is performed based on $n_{\text{tail}}=500$ tail observations. In order to compare these estimators, we repeat the experiment 200 times for each value of $p_a$ and obtain 200 sets of estimated parameters for each method. Figure~\ref{fig:add_params} summarizes the estimated values of $(\delta_{\text{in}}, \delta_{\text{out}}, \alpha, \gamma, \iota_\text{in}, \iota_\text{out})$ for different values of $p_a$. The mean estimates are marked by crosses and the $2.5\%$ and $97.5\%$ empirical quantiles are marked by the bars. The true values of the parameters are shown as horizontal lines. While all estimates deviate from the true values as $p_a$ increases and the network becomes more ``noisy'', the EV estimates for $(\delta_{\text{in}},\delta_{\text{out}})$ exhibit smaller bias than those of the MLE and SN methods (Figure~\ref{fig:add_params} (a) and (b)). All three methods give underestimated probabilities $(\alpha, \gamma)$ (Figure~\ref{fig:add_params} (c) and (d)). This is because the perturbation step (1) creates more edges between existing nodes and consequently inflates the estimated value of $\beta$. Also note that the mean EV estimates of $(\iota_\text{in}, \iota_\text{out})$ stay close to the theoretical values for all choices of $p_a$; see Figure~\ref{fig:add_params} (e) and (f).
The MLE and SN estimates of $(\iota_\text{in},\iota_\text{out})$, which are computed from the corresponding estimates for $(\alpha,\beta,\gamma,\delta_\text{in},\delta_\text{out})$, show strong bias as $p_a$ increases. In this case, the EV method is robust for estimating the PA parameters and recovering the tail indices of the original model. \begin{figure}[t] \centering \includegraphics[scale=0.6]{1213add_param.pdf} \caption{Mean estimates and $2.5\%$ and $97.5\%$ empirical quantiles of (a) $\delta_{\text{in}}$; (b) $\delta_{\text{out}}$; (c) $\alpha$; (d) $\gamma$; (e) $\iota_\text{in}$; (f) $\iota_\text{out}$, using the MLE (black), SN (red) and EV (blue) methods over 200 replications, where $(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1)$ and $p_a= 0.025,0.05,0.075,0.1,0.125,0.15$. For the EV method, 500 tail observations were used to obtain $\hat{\alpha}^{EV}$. }\label{fig:add_params} \end{figure} \subsubsection{Randomly deleting edges.}\label{subsec:delete} We now consider the scenario where a network is generated from the linear PA model, but a random proportion $p_d$ of edges is deleted at the final time. We do this by generating $G(n)$ and then deleting $[np_d]$ edges, sampled without replacement. For the simulation, we generated networks with parameter values $$(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1), \quad p_d\in\{0.025,0.05,0.075,0.1,0.125,0.15\}.$$ Again, for each value of $p_d$, the experiment is repeated 200 times and the resulting parameter plots are shown in Figure~\ref{fig:delete_params}, using the same format as Figure~\ref{fig:add_params}. For the EV method, 100 tail observations were used to compute $\hat{\alpha}^{EV}$.
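For concreteness, the two perturbation schemes can be sketched as follows. This is a minimal illustration rather than the simulation code used in our experiments; in particular, it simplifies step (2) by computing the PA weights on the full graph instead of $G^{PA}(n-1)$, and the weighted sampling is the naive $O(|V|)$ version.

```python
import random

def grow_perturbed_pa(n, alpha, beta, delta_in, delta_out, p_a=0.0, p_d=0.0, seed=1):
    """Directed linear PA growth with probability p_a of a purely random edge
    (the addition scheme), followed by deletion of a proportion p_d of edges
    sampled without replacement (the deletion scheme).  gamma = 1-alpha-beta."""
    rng = random.Random(seed)
    in_deg, out_deg = [1], [1]                   # node 0 starts with a self loop
    edges = [(0, 0)]

    def pick_in():                               # target chosen prop. to I_v + delta_in
        return rng.choices(range(len(in_deg)), [d + delta_in for d in in_deg])[0]

    def pick_out():                              # source chosen prop. to O_v + delta_out
        return rng.choices(range(len(out_deg)), [d + delta_out for d in out_deg])[0]

    for _ in range(n):
        u = rng.random()
        if u < p_a:                              # step (1): purely random edge
            s, t = rng.randrange(len(in_deg)), rng.randrange(len(in_deg))
        elif u < p_a + (1 - p_a) * alpha:        # new node -> existing node
            t = pick_in(); in_deg.append(0); out_deg.append(0); s = len(in_deg) - 1
        elif u < p_a + (1 - p_a) * (alpha + beta):   # existing -> existing
            s, t = pick_out(), pick_in()
        else:                                    # existing node -> new node
            s = pick_out(); in_deg.append(0); out_deg.append(0); t = len(in_deg) - 1
        edges.append((s, t)); out_deg[s] += 1; in_deg[t] += 1

    # final-time deletion: keep a random subset of the edges
    edges = rng.sample(edges, len(edges) - int(len(edges) * p_d))
    return edges
```

Setting `p_a=0, p_d=0` recovers a plain linear PA growth; either perturbation can then be switched on separately, as in the two subsections above.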
Surprisingly, for all six parameters considered, the MLE estimates stay almost unchanged across the different values of $p_d$, while the SN and EV estimates underestimate $(\delta_{\text{in}},\delta_{\text{out}})$ and overestimate $(\alpha, \gamma)$, with the magnitude of the bias increasing with $p_d$. For the tail estimates, the minimum distance method still gives reasonable results (though with larger variances), whereas the SN method consistently underestimates $\iota_\text{in}$ and $\iota_\text{out}$. The performance of MLE in this case is surprisingly competitive; understanding why is the subject of ongoing work. \begin{figure}[t] \centering \includegraphics[scale=0.6]{1213delete_param.pdf} \caption{Mean estimates and $2.5\%$ and $97.5\%$ empirical quantiles of (a) $\delta_{\text{in}}$; (b) $\delta_{\text{out}}$; (c) $\alpha$; (d) $\gamma$; (e) $\iota_\text{in}$; (f) $\iota_\text{out}$, using the MLE (black), SN (red) and EV (blue) methods over 50 replications, where $(\alpha,\beta,\gamma,\delta_{\text{in}},\delta_{\text{out}})=(0.3,0.4,0.3,1,1)$ and $p_d= 0.025,0.05,0.075,0.1,0.125,0.15$. For the EV method, 100 tail observations were used to compute $\hat{\alpha}^{EV}$.}\label{fig:delete_params} \end{figure} \subsection{Superstar model.}\label{subsec:superstar} In this section, we consider network data generated from the superstar model. We compare the accuracy of tail index estimates under parametric methods applied to the linear PA model with extreme value estimates applied directly to the data.
Networks are simulated from the superstar model with the following parameter values: $$(\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}}, n)= (0.3, 0.4, 0.3, 1, 1, 10^6), \quad p \in \{0.1, 0.15, 0.2, 0.25, 0.3\}.$$ The MLE estimates of the tail indices based on \eqref{c1c2}, $(\hat{\iota}^{MLE}_\text{in},\hat{\iota}^{MLE}_\text{out})$, are compared to the EV estimates calculated directly from the node degree data, $(\hat{\iota}^{EV}_\text{in},\hat{\iota}^{EV}_\text{out})$. According to Theorem~\ref{thm:superstar}, the theoretical marginal tail indices for $I_n(v)$ and $O_n(v)$, $1\le v\le N(n)$, based on a superstar PA model are given by \eqref{iotain} and \eqref{iotaout}. This experiment is repeated 50 times and Table~\ref{robust-clauset} records the mean estimates for $(\iota_\text{in}, \iota_\text{out})$ over these 50 replications. \begin{table}[h] \centering \begin{tabular}{cccc} \hline $p$& $(\iota_\text{in}, \iota_\text{out})$ & $(\hat{\iota}^{MLE}_\text{in},\hat{\iota}^{MLE}_\text{out})$ & $(\hatain^{EV},\hataout^{EV})$ \\ \hline $0.1$ & (2.43, 2.29) & (2.11, 2.31) & (2.24, 2.25)\\ $0.15$ & (2.51, 2.29) & (2.03, 2.33) & (2.28, 2.20) \\ $0.2$ & (2.61, 2.29) & (1.97, 2.34) & (2.35, 2.18)\\ $0.25$ & (2.71, 2.29) & (1.91, 2.36) & (2.43, 2.18) \\ $0.3$ & (2.84, 2.29) & (1.86, 2.38) & (2.51, 2.15)\\ \hline \end{tabular} \caption{Mean estimates for $(\iota_\text{in}, \iota_\text{out})$ using both the MLE and minimum distance methods, with $(\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}}, n) = (0.3, 0.4, 0.3, 1, 1, 10^6)$.}\label{robust-clauset} \end{table} As $p$ increases and the influence of the superstar node becomes more profound, the MLE method does not give accurate estimates of the tail indices, while the EV method stays more robust. However, when $p$ becomes too large, the in-degrees of non-superstar nodes are greatly restricted, which increases the finite-sample bias of the EV estimates.
Note that the theoretical indices $(\iota_\text{in}, \iota_\text{out})$ in Table~\ref{robust-clauset} are for the in- and out-degrees of the non-superstar nodes. In the EV methods, the inclusion of the superstar node can severely bias the estimation of $\iota_\text{in}$. Let $k_n$ be an intermediate sequence such that $k_n\to\infty$ and $k_n/n\to 0$ as $n\to\infty$, and let $I_{(1)}\ge\ldots\ge I_{(k_n+1)}$ denote the upper $k_n+1$ order statistics of $\{I_n(v): 0\le v\le N(n)\}$. Then the corresponding Hill estimator is \begin{align} 1/\hatain^{EV} (k_n) &:= \frac{1}{k_n}\sum_{i=1}^{k_n} \log \frac{I_{(i)}}{I_{(k_n+1)}}\nonumber\\ &= \frac{1}{k_n}\log I_{(1)} - \frac{1}{k_n}\log I_{(k_n+1)} + \frac{1}{k_n}\sum_{i=2}^{k_n} \log \frac{I_{(i)}}{I_{(k_n+1)}}. \label{eq:hill_superstar} \end{align} From the construction of the superstar model, we know that the superstar node likely has the largest in-degree, which is approximately equal to $np$ for large $n$. Hence, the first term in \eqref{eq:hill_superstar} goes to 0 as long as \[ k_n/\log n\to \infty,\quad\text{as }n\to\infty, \] and the third term in \eqref{eq:hill_superstar} is the Hill estimator computed from the in-degrees of the non-superstar nodes. In \cite{wang:resnick:2017}, consistency of the Hill estimator was proved for a simple undirected linear PA model, but consistency of $\hatain^{EV}(k_n)$ has not been proven for either of the two models we consider here. Assuming consistency of $\hatain^{EV}(k_n)$, however, \eqref{eq:hill_superstar} suggests that choosing a larger $k_n$ will reduce the bias when estimating $\iota_\text{in}$ in the superstar model. To illustrate this point numerically, we choose $k_n = 200, 500, 1000, 1500, 2000$ for a superstar network with $10^6$ edges and probability of attaching to the superstar node $p = 0.1, 0.15, 0.2, 0.25, 0.3$.
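The effect of a single superstar value on the Hill estimator can be reproduced in a small sketch. This is an illustration only, with an exact Pareto sample of index $\iota=2$ and an artificial superstar observation standing in for the superstar node's in-degree; it is not the paper's simulation.

```python
import numpy as np

def hill_inv_iota(data, k):
    """Hill estimate 1/iota_hat(k): the mean of log(X_(i)/X_(k+1)) over the
    k largest order statistics, as in eq. (eq:hill_superstar)."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]
    return float(np.mean(np.log(x[:k] / x[k])))

rng = np.random.default_rng(42)
x = (1.0 - rng.random(20000)) ** (-1.0 / 2.0)    # exact Pareto, tail index iota = 2
x_star = np.append(x, 1.0e6)                     # add one huge "superstar" value
small_k = hill_inv_iota(x_star, 50)              # (1/k) log I_(1) term dominates
large_k = hill_inv_iota(x_star, 2000)            # superstar contribution diluted
```

With the superstar present, the small-$k$ estimate of $1/\iota$ is inflated by the $(1/k_n)\log I_{(1)}$ term, while a larger $k_n$ washes it out, which is the dilution effect discussed above.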
For each value of $p$, we again simulate 50 independent replications of the superstar PA model with parameters $(\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}}, n) = (0.3, 0.4, 0.3, 1, 1, 10^6)$. Then, for each replication, Hill estimates of the in- and out-degree tail indices are calculated under the different choices of $k_n$. The mean values of the 50 pairs of estimates are recorded in Table~\ref{Table:vary_kn}, where the first entry is the in-degree tail estimate and the second is for the out-degree. \begin{table}[h] \centering \begin{tabular}{l|ccccc} \hline & \multicolumn{5}{c}{Number of Upper Order Statistics $k_n$} \\ & 200 & 500 & 1000 & 1500 & 2000\\ \hline $p = 0.1$ & (2.16, 2.22) & (2.26, 2.19) & (2.27, 2.16) & (2.28, 2.14) & (2.27, 2.15) \\ $p = 0.15$ & (2.25, 2.18) & (2.32, 2.17) & (2.29, 2.14) & (2.31, 2.15) & (2.28, 2.14)\\ $p = 0.2$ & (2.32, 2.17) & (2.39, 2.16) & (2.37, 2.15) & (2.39, 2.11) & (2.33, 2.13) \\ $p = 0.25$ & (2.36, 2.18) & (2.47, 2.16) & (2.43, 2.12) & (2.49, 2.11) & (2.52, 2.12)\\ $p = 0.3$ & (2.41, 2.17) & (2.58, 2.13) & (2.56, 2.11) & (2.47, 2.11) & (2.51, 2.12)\\ \hline \end{tabular} \caption{Mean values of EV estimates of the tail indices $(\iota_\text{in}, \iota_\text{out})$ over 50 replications, with $(\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}}, n) = (0.3, 0.4, 0.3, 1, 1, 10^6)$. The true values are given in Table \ref{robust-clauset}.} \label{Table:vary_kn} \end{table} From the in-degree estimates in Table~\ref{Table:vary_kn}, we observe that for most values of $p$, increasing $k_n$ to 500 improves the estimation results, but further increases in $k_n$ have adverse effects. One reason is that a large $k_n$ means smaller in-degrees are taken into the calculation of the Hill estimator; these smaller in-degrees might not be large enough to be considered as following the power law in \eqref{power-in}.
This also explains the increasing biases of the out-degree estimates, on which the superstar node has no impact. Comparing the results in Table~\ref{Table:vary_kn} to the EV estimates in Table~\ref{robust-clauset}, we see that the minimum distance method strikes a good balance between eliminating the effect of the superstar node and choosing a reasonably large threshold. \begin{figure} \includegraphics[scale = 0.65]{1213deg_superstar.pdf} \caption{Empirical in- and out-degree distributions, with $(\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}},n, p) = (0.3,\, 0.4,\, 0.3,\, 1,\, 1,\, 10^5,\, 0.25)$.}\label{fig:deg_compare} \end{figure} The next question is how the model misspecification affects the empirical distributions of the in- and out-degrees. To evaluate this, we generated a superstar PA model with parameters $$ (\alpha, \beta, \gamma, \delta_{\text{in}}, \delta_{\text{out}},n, p) = (0.3,\, 0.4,\, 0.3,\, 1,\, 1,\, 10^5,\, 0.25). $$ We estimated parameters by both the MLE and EV methods from the simulated superstar data, pretending that the data were generated by an ordinary PA rule. For the EV approach, 200 tail observations were used when computing $\hat{\alpha}^{EV}$. Denote the MLE and EV estimates by \begin{align*} \widehat{\boldsymbol\theta}_n^{MLE} &:= (\hat{\alpha}^{MLE}, \hat{\beta}^{MLE}, \hat{\gamma}^{MLE}, \hat{\delta}^{MLE}_\text{in}, \hat{\delta}^{MLE}_\text{out}), \\ \widehat{\boldsymbol\theta}_n^{EV} &:= (\hat{\alpha}^{EV}, \hat{\beta}^{EV}, \hat{\gamma}^{EV}, \hat{\delta}^{EV}_\text{in}, \hat{\delta}^{EV}_\text{out}). \end{align*} We then simulated $20$ independent replications of a linear PA model with parameters $\widehat{\boldsymbol\theta}_n^{MLE}$ and $20$ with parameters $\widehat{\boldsymbol\theta}_n^{EV}$. For each set of replicates we computed the empirical frequency distributions. Comparisons of the degree distributions are provided in Figure~\ref{fig:deg_compare}.
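The empirical degree frequency distribution plotted in Figure~\ref{fig:deg_compare} is simply the fraction of nodes attaining each observed degree value; a minimal sketch:

```python
import numpy as np

def degree_frequencies(degrees):
    """Empirical degree frequency distribution: for each observed degree
    value, the fraction of nodes attaining it."""
    vals, counts = np.unique(np.asarray(degrees), return_counts=True)
    return vals, counts / counts.sum()
```

Plotting the resulting pairs on log-log axes gives the familiar straight-line power-law diagnostic used in the figure.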
In all four panels, the green dots represent the empirical degree frequencies for the simulated superstar data, the top row for in-degree and the bottom row for out-degree. Blue in the two left panels represents overlaid frequency distributions for the 20 data sets simulated from the linear PA model using $\widehat{\boldsymbol\theta}_n^{MLE}$; red in the two right panels does the same for the 20 replicates of the linear PA model using $\widehat{\boldsymbol\theta}_n^{EV}$. The EV method seems to give a better fit for the in-degrees. Based on the out-degrees, it is difficult to visually discern an advantage for either approach. While not obvious in the plots, we again expect the estimated degrees from the EV method to have higher variance than those from MLE, as much less data were used for the model fitting. \section{Conclusion}\label{sec:discussion} In this paper, we proposed a semi-parametric extreme value (EV) estimation method for network models. We compared the performance of this method to the two parametric approaches (the MLE and snapshot methods) given in \cite{wan:wang:davis:resnick:2017} under three scenarios: (1) data generated from a linear preferential attachment (linear PA) model; (2) data generated from a linear PA model with corruption; (3) data generated from a superstar linear PA model. To summarize our findings and experience, EV estimation methods play an important role when applied to social network data. The method provides a robust procedure for estimating parameters of the network related to the heavy-tailedness of the marginal and joint distributions of the in- and out-degrees. EV methods also play a confirmatory role for other estimation procedures that are likelihood based, such as MLE or the snapshot (SN) method, which require that the model be correctly specified.
If, for example, MLE or SN produces estimates of tail indices different from those given by the EV procedure, this might suggest a lack of fit of the underlying model. In practice, data are not as {\it clean} as those produced in simulations, and one expects deviations from a base model such as the linear PA. As seen in this paper, these deviations can lead to sharply biased MLE and SN estimates, especially when compared to EV estimates. As in classical EV estimation in the iid setting, the choice of the threshold upon which to base the estimation remains a thorny issue in the network context. The minimum distance method based on \cite{clauset:shalizi:newman:2009} for estimating marginal tail indices works well for the examples considered here, but worse for multivariate data, where it is employed to set thresholds based on radius vectors.
\section{Introduction} \label{sec:introduction} Magnetic resonance imaging (MRI) is a valuable diagnostic imaging modality that provides excellent spatial resolution with superior soft-tissue contrast. However, MRI is an inherently slow acquisition process, as the data sampling is carried out sequentially in k-space and the speed at which k-space can be traversed is limited by physiological and hardware constraints. These long acquisition times impose significant demands on patients, making the imaging modality expensive and less accessible \cite{hollingsworth}. Data acquisition can be accelerated by acquiring fewer k-space samples, which upon reconstruction provide a degraded image. Several works have been proposed to improve the reconstruction quality, including parallel imaging (PI) \cite{sense,grappa}, compressed sensing (CS) \cite{cs_mri}, and a combination of PI and CS \cite{cs_pi_1,cs_pi_2}. Recently, methods based on deep learning have shown promising results. However, the quantitative and perceptual quality of these methods could be improved by the following: (1) effectively utilizing the image and k-space domain data; (2) exploiting the additional information from other sequences; (3) optimizing the network for a metric which highly correlates with the image quality scores of radiologists; and (4) availing the multi-coil data. In our work, we propose deep networks considering the above discussed possibilities. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/methodology/RSN_Illustration.png} \caption{From left to right: Target, ZF, KI, II, and RSN. From top to bottom: Reconstructed image and its reconstruction error with respect to the target. Compared to II, KI has lower reconstruction error (higher PSNR). The bottom row shows that the residual of KI is lower than that of II. Compared to KI, the structure recovery of II is better (higher SSIM).
The structures shown in the specific ROI are blurred and merged in KI, while in II the structures are sharp and clearly separated. RSN provides both lower reconstruction error and better structure recovery (higher PSNR and SSIM).} \label{fig:rsn_illustration} \end{figure*} Firstly, the zero-filled reconstruction (ZF), which is the inverse Fourier transform (IFT) of the undersampled (US) k-space, provides an image with aliasing artifacts. Deep learning networks have been developed to restore the original image by imputing the missing k-space \cite{automap} or by de-aliasing the degraded image \cite{wang}. Interestingly, we observed that networks operating on the Fourier domain provide lower reconstruction error, while networks operating on the image domain provide better structure recovery. The stacked predictions from different models can effectively be combined through a fusion model \cite{stacked_generalization}. Hence, we propose ReconSynergyNet (RSN), a fusion (Fu) network which synchronously operates on the image domain outputs of both k-space to image (KI) and image to image (II) networks. The KI network operates on the frequency domain while the II network operates on the image domain. A deep cascade convolutional neural network (DC-CNN) \cite{dc_cnn} incorporated the practical strategies of CS in deep learning through cascades of CNNs interleaved with data fidelity (DF) units. CNN blocks are used for de-aliasing, while DF units are used to enforce data consistency in the Fourier domain. Motivated by this adaptation, we propose DC-RSN, a deep cascade of RSN blocks interleaved with DF units for single-coil reconstruction. RSN fuses the de-aliased output of the II network and the domain-transformed output of the KI network through the Fu network, unlike the blocks in DC-CNN, which only perform de-aliasing. Fig.~\ref{fig:rsn_illustration} demonstrates the effectiveness of operating in both the k-space and image domains by comparing RSN with the KI and II networks.
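As a minimal illustration of the ZF reconstruction discussed above, the following sketch undersamples the k-space of a synthetic image and applies the inverse Fourier transform; the image and the random Cartesian line mask are arbitrary stand-ins, not MRI data.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                              # synthetic "anatomy"
kspace = np.fft.fftshift(np.fft.fft2(img))           # fully sampled k-space
mask = rng.random(64) < 0.3                          # retain ~30% of phase-encode lines
mask[28:36] = True                                   # always keep the low frequencies
us_kspace = kspace * mask[np.newaxis, :]             # undersampled (US) k-space
zf = np.abs(np.fft.ifft2(np.fft.ifftshift(us_kspace)))  # aliased ZF reconstruction
```

The KI and II networks described above start from `us_kspace` and `zf`, respectively, and a fusion network can then combine their image-domain outputs.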
Secondly, in routine radiological imaging, T1 weighted imaging (T1WI) and T2 weighted imaging (T2WI) are the two basic MR sequences used for diagnosis. T2WI is usually slower than T1WI due to the use of relatively longer repetition time (TR) and echo time (TE). Moreover, these sequences are structurally correlated, which facilitates the use of fully sampled (FS) T1WI for accelerating the acquisition of T2WI. Recently, the assistance of T1WI for T2WI reconstruction has been investigated \cite{t1_assistance}. We further explore this aspect and propose a gradient-based T1 assistance to DC-RSN in two stages. In the first stage, we introduce DC-RSN-T1, a network similar to DC-RSN, except that in every cascade the Fu network of RSN takes FS T1WI as an additional channel along with the reconstructions from the KI and II networks, to provide the structures missed by the KI and II networks. We encode the reconstruction from DC-RSN-T1 as a feature representation using the gradient of log feature (GOLF) module, which provides the feature corresponding to the logarithm of the gradient of the input image. In the second stage, we fuse the intermediate feature maps of KI and II of RSN for every cascade of DC-RSN with the feature representation obtained from the first stage. We name this architecture DC-RSN-T1-GOLF, where the GOLF obtained from the reconstruction of DC-RSN-T1 is used to guide the reconstruction of DC-RSN by explicitly providing boundary information. Thirdly, the deep learning networks developed for MRI reconstruction typically use the mean square error (MSE) as an objective function. However, MSE is often associated with over-smoothed edges and overlooks the subtle image textures critical for human perception. In computer vision, the perceptual quality of an image is improved by generative adversarial networks (GAN), which use adversarial loss in addition to MSE \cite{esrgan_perceptual}.
Perceptual index (PI), a no-reference metric, is used in the vision community to validate the perceptual quality of an image \cite{perception_distortion_tradeoff}. Likewise, a recent study \cite{radiology_perception} reported that the metric visual information fidelity (VIF) \cite{vif} is highly correlated with the radiologist's perception of the quality of MRI. In our work, we propose a CNN based perceptual refinement network (PRN) to refine the reconstructions of models trained using MSE for better perceptual quality. We also show that an improvement in VIF can be obtained by training the PRN in an adversarial setup. PRN is successively connected to the pre-trained DC-RSN-T1-GOLF to form DC-RSN-T1-GOLF-PRN, an ensemble of a dual-domain cascade network with gradient-based T1 assistance and perceptual refinement for single-coil acquisition. Finally, multi-coil acquisition is the default option for many scan protocols and is supported by almost all modern clinical MRI scanners. Furthermore, for the same acceleration, reconstructions obtained from multi-coil acquisition are better and more tractable than those from single-coil acquisition because of the information redundancy across multiple channels (Knoll et al., 2019). Variational network (VN) \cite{variational} and variable splitting network (VS-Net) \cite{vs_net} were proposed to specifically work for multi-coil acquisition. The DF proposed in VS-Net for multi-coil acquisition is computationally more efficient than the DF in VN. Besides, the DF in VS-Net is the direct extension of the point-wise data consistency operation in DC-CNN, unlike the DF in VN, which is an approximate estimate obtained through gradient descent. Inspired by VS-Net, we propose VS-RSN, a deep cascade of multi-coil specific blocks, with each block containing an RSN, a DF unit, and a weighted average module.
RSN works on sensitivity-weighted multiple channels, the DF unit enforces data consistency across multiple channels, and the weighted average module combines the reconstructions from RSN and DF. To the best of our knowledge, VS-RSN is the first dual domain cascade network for multi-coil reconstruction. Similar to DC-RSN, the image quality of VS-RSN is also improved through gradient assistance and perceptual refinement. In summary, the main contributions of our work are the following: \begin{itemize} \item We propose novel dual domain cascade architectures, DC-RSN and VS-RSN, for single-coil and multi-coil acquisition, respectively. \item We propose GOLF based T1 assistance to provide more faithful reconstruction of T2WI. \item We propose PRN to refine the final reconstruction for obtaining high image quality scores from radiologists. \item We conduct extensive comparisons and show that our network DC-RSN for single-coil and VS-RSN for multi-coil are better than the respective state-of-the-art methods across acceleration factors and datasets. \item We conduct extensive experiments and demonstrate the efficacy of GOLF based T1 assistance in T2WI reconstruction. Furthermore, we extensively validate PRN with the proposed models and observe that PRN addition improves VIF. \item We validate DC-RSN and VS-RSN using the single and multi-coil knee datasets of fastMRI (Zbontar et al., 2018). We obtain a competitive SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x, respectively. \end{itemize} The paper is organized as follows: Section 2 reviews the related works while Section 3 provides the description of the datasets used in the experiments. The designs of DC-RSN, VS-RSN, GOLF based T1 assistance, and PRN are described in Section 4. Section 5 presents the results and discussions while Section 6 contains the conclusions.
\section{Related Work} \label{sec:related_work} \subsection{Single-coil MRI reconstruction} \label{subsec:related_work_single_coil} \subsubsection{k-space to Image methods} \label{subsubsec:related_work_single_coil_k-space_to_image} These are networks that learn the mapping between US k-space and FS image. AUTOMAP \cite{automap} used a fully connected (FC) network to learn the mapping between the k-space and image domains. The major drawback with AUTOMAP is that the number of parameters of the network increases quadratically with the input k-space dimension. This makes the usage of AUTOMAP difficult for k-space with higher dimensions (such as 256 x 256). To overcome this limitation, dAUTOMAP \cite{dautomap} replaced the FC layers in AUTOMAP with a fully convolutional network using the separability property of the 2D IFT. \subsubsection{Image to Image methods} \label{subsubsec:related_work_single_coil_image_to_image} These are networks that learn the mapping between US and FS images. Simple convolutional networks \cite{wang} have been used to learn the mapping, and it has been shown that learning the aliasing artifact is more efficient than learning the alias-free FS image \cite{residual_conference}. RefineGAN (ReGAN) \cite{cyclic_loss} and DAGAN \cite{dagan} used the GAN framework with a UNet \cite{unet} like network as the generator and a classic deep learning classifier as the discriminator. Both these networks used a linear combination of adversarial loss, image domain loss, and frequency domain loss as their objective function. DAGAN also tried to improve the perceptual quality with an additional VGG loss. \subsubsection{Cascade methods} \label{subsubsec:related_work_single_coil_cascade} Cascade networks help to learn the mapping between US and FS images through unrolled optimization of image to image learning and data consistency in the Fourier domain. DC-CNN \cite{dc_cnn} proposed to utilize cascades of CNNs for image reconstruction while DF layers were used for data consistency.
DC-UNet \cite{dc_unet} replaced the CNN in DC-CNN with UNet. Likewise, DC-RDN \cite{recursive_dilated} used a recursive dilated network in place of the CNN in DC-CNN. In DC-DEN \cite{dc-ensemble}, the features extracted from each CNN block were connected to the other CNN blocks through dense connections and subsequently concatenated to obtain the final reconstruction. \subsubsection{Hybrid methods} \label{subsubsec:related_work_single_coil_hybrid} These are networks that operate on both the k-space and image domains apart from k-space data consistency operations. KIKI-Net \cite{kikinet} proposed a cascade of k-space and image CNNs interleaved by DF units. The k-space CNN was used to obtain the FS k-space from the US k-space. The image CNN was used to obtain the FS image from the IFT of the predicted FS k-space. DC-Hybrid \cite{hybrid} used an architecture similar to that of KIKI-Net to operate on both the k-space and image domains. However, the DC-Hybrid architecture started with the image domain operation and followed it with the k-space domain operation. \subsection{Multi-coil MRI reconstruction} \label{subsec:related_work_multi_coil} Similar to DC-CNN, the architectures developed for multi-coil acquisition mimic classic iterative image reconstruction. VN \cite{variational} proposed to utilize cascades of image CNNs interleaved by DF through a gradient descent scheme. MoDL \cite{modl} proposed to use cascades of CNNs whose parameters are shared, thereby reducing the parameter complexity. Unlike VN, MoDL used a conjugate-gradient setup for DF. VS-Net \cite{vs_net} proposed to replace the gradient and conjugate-gradient updates of VN and MoDL for DF with a point-wise analytical solution, making VS-Net computationally efficient and numerically accurate.
\section{Dataset Description} \subsection{Single-coil dataset} \subsubsection{Real-valued MRI data} \textbf{Cardiac dataset} \cite{cardiac_dataset}: The automated cardiac diagnosis challenge (ACDC) consists of 150 and 50 patient records for training and validation, respectively. We extract the 2D slices and crop them to 150 x 150, which amounted to 1841 and 1076 slices for training and validation, respectively. \textbf{Kirby dataset} \cite{kirby_dataset}: This human brain dataset consists of 42 T1 MPRAGE volumes with dimensions 256 x 256. We consider the center 90 slices from each volume, which gave 29 volumes with 2610 slices for training and 13 volumes with 1170 slices for validation. \subsubsection{Complex-valued MRI data} \textbf{Calgary dataset} \cite{calgary_dataset}: This human brain dataset has 45 T1 volumes with dimensions 256 x 256. The data was acquired with a 12-channel receiver coil and combined to simulate a single-coil acquisition. We consider the center 110 slices from each volume, which provided 25 volumes with 2750 slices and 10 volumes with 1100 slices for training and validation, respectively. \subsection{T1-T2 paired dataset} The \textbf{MRBrainS} \cite{mrbrains_dataset} dataset contains paired T1 and T2-FLAIR volumes of 7 subjects. The dimensions of the volumes are 240 x 240. We use the T1 volumes to assist T2-FLAIR reconstruction. We utilize the data from 5 subjects with 240 slices for training and 2 subjects with 96 slices for validation. \subsection{Multi-coil dataset} \textbf{Knee dataset} \cite{variational}: The dataset has five image acquisition protocols: coronal proton-density (PD), coronal fat-saturated PD, axial fat-saturated T2, sagittal fat-saturated T2, and sagittal PD. The data was acquired using a 15-channel receiver coil for 20 patients. Each patient record has 40 slices with 15 channels, including their respective sensitivity maps. We consider the center 20 slices for the experiments.
We split the patient data into 10 patients with 200 slices for training and the remaining 10 patients with 200 slices for validation. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/methodology/RSN.png} \caption{Outline of k-space to image (KI), image to image (II) and fusion (Fu) networks in ReconSynergyNet (RSN)} \label{fig:rsn} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/methodology/DC-RSN-T1-GOLF-PRN.png} \caption{Outline of Deep Cascade RSN (DC-RSN) with Gradient of Log Feature (GOLF), T1 assistance and Perceptual Refinement Network (PRN).} \label{fig:dc_rsn} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/methodology/VS-RSN-GOLF-PRN.png} \caption{Outline of Variable splitting ReconSynergyNet (VS-RSN) with Gradient of Log Feature (GOLF) and Perceptual Refinement Network (PRN).} \label{fig:vs_rsn} \end{figure*} \section{Methodology} \label{sec:methodology} \subsection{Problem formulation} \label{subsec:methodology_problem_formulation} Let $m \in C^{N}$ be a column-stacked vector of the complex-valued MR image and $y_{i} \in C^{M}$ ($M \ll N$) be the undersampled k-space data with respect to the $i^{th}$ MR receiver coil. Recovery of $m$ from $y_{i}$ is an ill-posed inverse problem. According to compressed sensing (CS) theory, $m$ can be obtained by solving the following optimization problem: \begin{equation} \label{eq:problem_formulation} \footnotesize \underset{m}{min}\quad\frac{\lambda}{2}\sum_{i=1}^{n_{c}}||DFS_{i}m - y_{i}||_{2}^{2} + R(m) \end{equation} where ${R(m)}$ is the sparse regularization term, $\lambda$ is the weight balancing the two terms, $D \in R^{M \times N}$ is the undersampling matrix, $F \in C^{N \times N}$ is the Fourier transform matrix, $n_{c}$ denotes the number of receiver coils, and $S_{i} \in C^{N \times N}$ is the sensitivity map of the $i^{th}$ coil.
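To make the roles of $D$, $F$ and $S_{i}$ concrete, the following numpy sketch builds a toy 1D multi-coil forward model and checks that the data-fidelity term of Eq. \ref{eq:problem_formulation} vanishes at the ground truth. The image size, coil count and random sensitivity maps are illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_c = 16, 4                 # toy image size and coil count (illustrative)

m = rng.standard_normal(N) + 1j * rng.standard_normal(N)                # ground-truth image m
S = rng.standard_normal((n_c, N)) + 1j * rng.standard_normal((n_c, N))  # toy sensitivity maps S_i

keep = np.sort(rng.choice(N, size=N // 2, replace=False))  # rows of F retained by D

def forward(img, S_i):
    """y_i = D F S_i m: sensitivity-weight, Fourier-transform, then undersample."""
    return np.fft.fft(S_i * img, norm="ortho")[keep]

y = np.stack([forward(m, S[i]) for i in range(n_c)])       # acquired k-space per coil

# The data-fidelity term of Eq. (1), evaluated at the true image, is zero
data_term = sum(np.linalg.norm(forward(m, S[i]) - y[i]) ** 2 for i in range(n_c))
```

Since $M = N/2 < N$ the system is underdetermined, which is exactly why the regularizer $R(m)$ is needed to select a plausible image among the infinitely many that fit the measurements.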
\subsection{Fundamental CNN block - RSN} \label{subsec:methodology_fundamental_cnn_block} We propose RSN, a basic block for MRI reconstruction which consists of the following components (Fig. \ref{fig:rsn}): \textbf{KI network:} We consider dAUTOMAP as the KI network instead of the common k-space CNN followed by IFT, as it offers performance equivalent to the state-of-the-art AUTOMAP with linear complexity. Let $p$ be the vector form of image $P$ and $q$ be the vector form of k-space $Q$. Then, by the theory of the Fourier transform: \begin{equation} \label{eq:dautomap_formulation} \footnotesize p = F^{*}q = (F_{1}^{*} \otimes F_{2}^{*})q = vec(F_{1}^{*}QF_{2}^{*}) = vec((F_{2}^{*}(F_{1}^{*}Q)^T)^T) \end{equation} where $F^{*}$ is the conjugate of the 2D Discrete Fourier transform (DFT) $F$ and $F_{1}^{*}, F_{2}^{*}$ are conjugates of 1D DFT matrices. Note that $F_{1}^{*}Q$ can be realized by a 1D convolution; this is termed the decomposed transform (DT) layer. Hence, the 2D transform can be obtained by applying the DT layer and a transpose operation twice. The predicted image is further refined by 2D convolutions. \textbf{II network:} The II network is U-Net \cite{unet}, a popular multiscale network for structure recovery. U-Net is an encoder-decoder architecture which uses convolutions (for extracting features), ReLU activations (to add non-linearity), max-pooling (downsampling) layers, up-convolution (upsampling) layers and skip connections (to transfer features). \textbf{Fu network:} An efficient fusion of the reconstructions of KI and II can provide an improved reconstruction. Wolpert \cite{stacked_generalization} showed that a stacked version of the predictions from different models can be effectively combined through a fusion model. Interestingly, several works have used CNNs for fusion, as it is hard for a non-learning-based algorithm to combine the benefits of different models \cite{fusion_gland}\cite{fusion_hyperdensenets}.
Similarly, SRCNN \cite{srcnn} showed that by stacking channels with high cross-correlation at the input, the convolution layers can leverage the natural correspondences between the channels for better reconstruction. In our case, the correlated channels are the reconstructions from the KI and II networks. The Fu network consists of five 3x3 convolution layers with ReLU activations. We also considered stacking the input image (the II network's input) along with the outputs of the KI and II networks to provide an idea of unsmoothed structures to the Fu network. \subsection{DC-RSN: Single-coil MRI reconstruction} \label{subsec:methodology_single_coil_mri} For single-coil MRI reconstruction, Eq. \ref{eq:problem_formulation} is converted to the following: \begin{equation} \label{eq:problem_formulation_dc_rsn_1} \footnotesize \underset{m,\theta}{min}\quad\frac{\lambda}{2}||DFm - y||_{2}^{2} + || m - f_{cnn}(m_{u}\lvert\theta) ||_{2}^2 \end{equation} where $f_{cnn}$ is the deep network parameterised by $\theta$, which learns the mapping between $m_{u}$ (the undersampled column-stacked complex-valued MR image) and $m$. To provide consistency with the acquired k-space data, the following data fidelity procedure is followed: \begin{equation} \label{eq:problem_formulation_dc_rsn_2} \footnotesize \hat{m}_{rec}= \begin{cases} \hat{m}_{cnn}(k) & \ k\notin\Omega \\ \frac{\hat{m}_{cnn}(k) + \lambda \hat{m}_{u}(k)}{1+\lambda} & k\in\Omega \\ \end{cases} \end{equation} where $\hat{m}_{cnn} = Fm_{cnn}, m_{cnn}=f_{cnn}(m_{u}\lvert\theta)$, $\hat{m}_{u} = Fm_{u}$, $m_{rec}= F^{H}\hat{m}_{rec}$, and $\Omega$ is an index set indicating which k-space measurements have been sampled. We propose DC-RSN (Fig. \ref{fig:dc_rsn}) for single-coil MRI reconstruction. DC-RSN consists of $n_{b}$ cascades of RSN ($f_{cnn}$) blocks and DF layers. RSN takes in the US k-space and image to provide the predicted FS image, while DF takes the predicted FS image and returns data (image, k-space) consistent in the Fourier domain.
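The DF layer of Eq. \ref{eq:problem_formulation_dc_rsn_2} can be sketched in a few lines of numpy; the array size, $\lambda$ value and sampling mask below are illustrative assumptions chosen only to exercise the two cases of the equation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam = 32, 10.0

k_cnn = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # \hat{m}_cnn: k-space of CNN output
k_u   = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # \hat{m}_u: measured k-space
omega = np.zeros(N, dtype=bool)
omega[::4] = True                                             # sampled index set \Omega

def data_fidelity(k_cnn, k_u, omega, lam):
    """Eq. (4): keep the CNN prediction off the mask, blend with measurements on it."""
    out = k_cnn.copy()
    out[omega] = (k_cnn[omega] + lam * k_u[omega]) / (1 + lam)
    return out

k_rec = data_fidelity(k_cnn, k_u, omega, lam)
m_rec = np.fft.ifft(k_rec, norm="ortho")   # m_rec = F^H \hat{m}_rec
```

In the noiseless limit $\lambda \to \infty$ the blend reduces to hard replacement of the sampled entries by the measurements, which is the common special case of this layer.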
\subsection{VS-RSN: Multi-coil MRI reconstruction} \label{subsec:methodology_multi_coil_mri} In order to optimize Eq. \ref{eq:problem_formulation} efficiently, VS-Net \cite{vs_net} developed a variable splitting method by introducing auxiliary splitting variables $u \in C^{N}$ and $\{x_{i} \in C^{N}\}^{n_{c}}_{i=1}$ and derived the final solution as given below: \begin{equation} \label{eq:problem_formulation_vs_rsn} \footnotesize \begin{split} u^{k+1} &= denoiser(m^{k}) \\ x_{i}^{k+1} &= F^{-1}((\lambda D^{T}D + \alpha I)^{-1}(\alpha FS_{i}m^{k} + \lambda D^{T}y_{i})) \\ m^{k+1} &= (\beta I + \alpha \sum_{i=1}^{n_{c}}S_{i}^{H}S_{i})^{-1}(\beta u^{k+1} + \alpha \sum_{i=1}^{n_{c}}S_{i}^{H}x_{i}^{k+1}) \end{split} \end{equation} From the above equations, the following can be inferred: 1) Top equation: the original problem (Eq. \ref{eq:problem_formulation}) is converted to a denoising problem. 2) Middle equation: provides data consistency in k-space for each coil. 3) Bottom equation: computes a weighted average of the results obtained from the first two equations. We propose VS-RSN (Variable Splitting - RSN) (Fig. \ref{fig:vs_rsn}) for multi-coil MRI reconstruction, which accommodates the iterative setup formulated in Eq. \ref{eq:problem_formulation_vs_rsn}. VS-RSN consists of $n_{b}$ cascades of three blocks: RSN as the denoiser block, a data fidelity block (DFB) and a weighted average block (WAB). RSN takes in the sensitivity-weighted US image ($m^{0} = \sum_{i}^{n_{c}}S_{i}^{H}F^{-1}D^{T}y_{i}$) and its respective k-space ($Fm^{0}$) as input. DFB uses pre-computed coil sensitivity maps ($\{S_{i}\}_{i=1}^{n_{c}}$), the binary sampling mask ($D^{T}D$) and the undersampled k-space data ($\{D^{T}y_{i}\}^{n_{c}}_{i=1}$) to provide data consistency in k-space for every coil. WAB uses the coil sensitivities to weight the output of DFB and combine it with the output of RSN.
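The middle and bottom updates of Eq. \ref{eq:problem_formulation_vs_rsn} are both elementwise, since $\lambda D^{T}D + \alpha I$ and $\beta I + \alpha \sum_i S_{i}^{H}S_{i}$ are diagonal. The numpy sketch below (toy 1D sizes, weights and random sensitivity maps are illustrative assumptions) implements the DFB and WAB steps and checks a simple consistency property: when the measurements agree with $m^{k}$ and the denoiser is the identity, $m^{k}$ is a fixed point of the update:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_c = 16, 3
alpha, beta, lam = 1.0, 1.0, 5.0           # illustrative weights

S = rng.standard_normal((n_c, N)) + 1j * rng.standard_normal((n_c, N))  # toy sensitivity maps
mask = np.zeros(N)
mask[::2] = 1.0                            # diagonal of D^T D (binary sampling mask)

m_k = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = [mask * np.fft.fft(S[i] * m_k, norm="ortho") for i in range(n_c)]   # zero-filled D^T y_i
u = m_k                                    # stand-in for the denoiser output u^{k+1}

# Middle equation (DFB): per-coil data consistency; the inverse of
# lam*D^T D + alpha*I is elementwise because the matrix is diagonal.
x = [np.fft.ifft((alpha * np.fft.fft(S[i] * m_k, norm="ortho") + lam * y[i])
                 / (lam * mask + alpha), norm="ortho") for i in range(n_c)]

# Bottom equation (WAB): weighted average of denoiser and data-consistency outputs
num = beta * u + alpha * sum(np.conj(S[i]) * x[i] for i in range(n_c))
den = beta + alpha * sum(np.abs(S[i]) ** 2 for i in range(n_c))
m_next = num / den
```

Because the toy measurements are generated from $m^{k}$ itself, each $x_{i}$ collapses to $S_{i}m^{k}$ and the weighted average returns $m^{k}$ unchanged, which is the expected behaviour of a data-consistency step at a consistent iterate.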
Instead of pre-computing, the sensitivity maps could also be jointly learned with the reconstruction using the Auto-Calibration Signal of k-space, as done in \cite{fb_varnet}. \subsection{Assistance to MRI reconstruction} \label{subsec:methodology_assistance_to_mri} \subsubsection{Gradient assistance} \label{subsubsec:methodology_gradient_assistance_to_mri} Image gradients in the log-transformed domain can be used to guide image restoration tasks through their explicit boundary information \cite{riemannian_loss}. We call this type of gradient GOL (Gradient of Logarithm). In our work, we propose to provide assistance to DC-RSN and VS-RSN through the GOL of the image. We build a GOLF (GOL feature) module with a UNet and a single convolution layer. The UNet is trained on the FS image and its GOL, while features are extracted from multiple levels of the UNet, resized and concatenated to form a deep feature map, which is then given to the single convolution layer with ReLU activation to provide a depth-reduced effective feature map. These features are provided to RSN in each cascade of DC-RSN and VS-RSN. In each RSN, the features are concatenated with the feature maps of the 2D convolution and decoder layers of KI and II, respectively. This design choice is made empirically. Feature concatenation explicitly provides the structural information required for reconstruction. We call DC-RSN and VS-RSN with GOLF assistance DC-RSN-GOLF and VS-RSN-GOLF, respectively. During test time, the output of the pretrained DC-RSN or VS-RSN is provided as input to the GOLF module for the required features. The schematics of DC-RSN-GOLF and VS-RSN-GOLF can be found in Fig. \ref{fig:dc_rsn} and \ref{fig:vs_rsn}, respectively. \subsubsection{T1 assistance} \label{subsubsec:methodology_t1_assistance_to_mri} The structural information in T1WI is highly correlated with that in T2WI. Hence, FS T1WI can be used to compensate for missing structures in US T2WI.
In our work, we propose DC-RSN-T1, in which FS T1WI is provided as assistance to RSN at each cascade in DC-RSN. Specifically, the input to the Fu network is the channel-stacked FS T1WI and the KI and II network outputs. This design enables the Fu network to effectively fuse FS T1WI with the reconstructions obtained from the KI and II networks. The notion behind this design is that FS T1WI could provide the structures which both the KI and II networks would have failed to reconstruct because of missing frequencies and structures in the k-space and image domains, respectively. Fig. \ref{fig:dc_rsn} provides the outline. \subsubsection{Combined assistance - Gradient and T1} \label{subsubsec:methodology_gradient_t1_assistance_to_mri} We also propose DC-RSN-T1-GOLF, an optimal combination of GOLF and T1 assistance that maximizes the benefits of both. During test time of DC-RSN-T1-GOLF, instead of providing the output of DC-RSN to the GOLF module, we provide the output of DC-RSN-T1. The GOLF obtained using DC-RSN-T1 will have a feature representation closer to the FS image compared to DC-RSN, as DC-RSN-T1 would have reconstructed structures missed by DC-RSN. The enhanced GOLF is concatenated with the intermediate feature maps of KI and II of every cascade, which provides KI and II with explicit, improved structural information. Fig. \ref{fig:dc_rsn} provides the illustration of DC-RSN-T1-GOLF. \subsection{PRN for MRI reconstruction} The predictions of pre-trained reconstruction networks are refined for better perceptual quality using PRN. PRN is a five-layer CNN (Re network) followed by DF and is adversarially trained using WGAN \cite{wgan}. We use a basic CNN as the refinement block to show its ability to improve the perceptual quality of the reconstructed image. DF is a necessary component for MRI reconstruction as it provides the required consistency in k-space.
We choose WGAN for adversarial training as it provides more stability, better convergence, and an accurate estimate of the divergence between the generator and data distributions. We use both adversarial and distortion losses for WGAN training. We assign a higher weight to the adversarial loss compared to the distortion loss, as the adversarial component helps in providing perceptually better images \cite{perception_distortion_tradeoff}. We use PRN to refine the outputs of DC-RSN, VS-RSN and DC-RSN-T1-GOLF for better image perception. The overview can be found in Fig. \ref{fig:dc_rsn} and \ref{fig:vs_rsn}. \section{Results and discussions} \label{sec:experiments_and_results} \subsection{Implementation details} US data is retrospectively obtained using fixed cartesian undersampling masks for 4x and 5x acceleration. The source code of DC-CNN is used for its implementation. In the case of DC-UNet, the CNN in DC-CNN is replaced with the UNet from the fastMRI repository. Dense connections are added to the CNN in DC-CNN to replicate the design of DC-DEN. Likewise, dilated CNNs with recursive connections are used in place of the CNN in DC-CNN for DC-RDN. Alternating CNNs in DC-CNN are used to operate on the image and k-space domains through their respective intermediate Fourier operations, as demonstrated in the codebase of DC-Hybrid. The UNet from the fastMRI repository is used as the generator for DAGAN and ReGAN, while the designs of the discriminators and loss functions were adapted from their respective repositories. The implementation of DenseUNet-T1 is taken from the publicly available code on semantic segmentation. VS-Net is implemented using its original repository. In the case of VN, the point-based DF in VS-Net is replaced with the gradient descent based DF. From the literature \cite{vs_net}, it is known that the higher the number of cascades, the better the reconstruction quality. Experiments demonstrating the same can be found in Figure A1 and Figure A2 of the supplementary material.
In this work, due to resource constraints, the number of cascades is set to five for the cardiac dataset and three for the remaining datasets. Models are trained using the MSE loss with the Adam optimizer (Kingma and Ba, 2014). Adversarial models are trained using the combination of MSE and Wasserstein distance with the Adam and SGD optimizers. DC-RSN, VS-RSN, and DC-RSN-T1 involve single-stage training. Models with GOLF assistance (DC-RSN-GOLF, VS-RSN-GOLF, and DC-RSN-T1-GOLF) require two-stage training. In the first stage, the base model (DC-RSN, VS-RSN, or DC-RSN-T1) is trained, while in the second stage, training is done for the base model whose intermediate features are concatenated with the GOLF of the first-stage reconstruction. PRN is adversarially trained with the inputs being the reconstructions of pre-trained networks (DC-RSN, VS-RSN and DC-RSN-T1-GOLF). The PSNR and SSIM metrics are used to evaluate the reconstruction quality. VIF is used to validate the reconstruction against the radiologist's opinion on image quality.
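For reference, PSNR, one of the evaluation metrics above, can be computed as below. Taking the peak value from the maximum of the reference image is an assumption here, as the peak convention varies between toolkits:

```python
import numpy as np

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB, with the peak taken from the reference image."""
    mse = np.mean(np.abs(ref - rec) ** 2)
    return 10 * np.log10(np.abs(ref).max() ** 2 / mse)

ref = np.ones((4, 4))
rec = ref + 0.1          # constant error of 0.1 -> MSE = 0.01
print(round(psnr(ref, rec), 2))  # -> 20.0
```

Working with `np.abs` keeps the same function usable for the complex-valued reconstructions considered in this work.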
\subsection{Single-coil MRI reconstruction} \subsubsection{Real-valued MRI data} \begin{table*} \scriptsize \centering \caption{Quantitative comparison of single-coil real valued MRI reconstruction architectures} \label{tab:cardiac_kirby} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{Cardiac} & \multicolumn{4}{c|}{Kirby} \\ \cline{2-9} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{2-9} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline US & 24.27 $\pm$ 3.10 & 0.6996 $\pm$ 0.08 & 23.82 $\pm$ 3.11 & 0.6742 $\pm$ 0.08 & 25.7 $\pm$ 1.46 & 0.5965 $\pm$ 0.08 & 25.36 $\pm$ 1.47 & 0.5794 $\pm$ 0.08 \\ \hline DAGAN\cite{dagan} & 28.52 $\pm$ 2.71 & 0.841 $\pm$ 0.04 & 28.02 $\pm$ 2.80 & 0.8248 $\pm$ 0.05 & 31.58 $\pm$ 1.30 & 0.8845 $\pm$ 0.01 & 30.93 $\pm$ 1.29 & 0.8719 $\pm$ 0.02 \\ \hline DC-CNN\cite{dc_cnn} & 32.75 $\pm$ 3.28 & 0.9195 $\pm$ 0.04 & 31.75 $\pm$ 3.40 & 0.9054 $\pm$ 0.04 & 34.67 $\pm$ 1.78 & 0.9522 $\pm$ 0.01 & 33.31 $\pm$ 1.69 & 0.9415 $\pm$ 0.01 \\ \hline DC-DEN\cite{dc-ensemble} & 33.22 $\pm$ 3.46 & 0.9249 $\pm$ 0.04 & 32.3 $\pm$ 3.57 & 0.9126 $\pm$ 0.04 & 35.27 $\pm$ 1.83 & 0.955 $\pm$ 0.01 & 33.73 $\pm$ 1.70 & 0.9425 $\pm$ 0.01 \\ \hline DC-RDN\cite{recursive_dilated} & 32.95 $\pm$ 3.40 & 0.9233 $\pm$ 0.04 & 32.09 $\pm$ 3.57 & 0.9115 $\pm$ 0.04 & 35.61 $\pm$ 1.84 & 0.9629 $\pm$ 0.01 & 33.95 $\pm$ 1.70 & 0.95 $\pm$ 0.01 \\ \hline DC-UNet\cite{dc_unet} & 33.17 $\pm$ 3.60 & 0.9276 $\pm$ 0.04 & 32.55 $\pm$ 3.71 & 0.9189 $\pm$ 0.04 & 36.4 $\pm$ 1.80 & 0.9697 $\pm$ 0.01 & 34.76 $\pm$ 1.67 & 0.9586 $\pm$ 0.01 \\ \hline DC-RSN(Ours)& \textbf{33.61 $\pm$ 3.57} & \textbf{0.9322 $\pm$ 0.04} & \textbf{32.65 $\pm$ 3.67} & \textbf{0.92 $\pm$ 0.04} & \textbf{36.83 $\pm$ 2.0} & 
\textbf{0.9707 $\pm$ 0.01} & \textbf{35.22 $\pm$ 1.90} & \textbf{0.9609 $\pm$ 0.01} \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/cardiac-kirby-figure-large.png} \caption{Qualitative comparison of single-coil real-valued MRI reconstruction architectures. From left to right (Target, ZF, DAGAN, DC-CNN, DC-DEN, DC-RDN, DC-UNet, and DC-RSN). Top row (Cardiac dataset): DAGAN has recovered the structure (pink dotted circle), but the reconstruction suffers from severe artifacts; DC-CNN, DC-DEN, and DC-RDN have completely missed the structure; DC-UNet and DC-RSN have properly recovered the structure, but DC-UNet has not removed the aliasing artifacts (green arrow). Bottom row (Kirby dataset): DAGAN has partially recovered the couple of structures in the region of interest (pink dotted circle); DC-CNN, DC-DEN, and DC-RDN have difficulty in delineating the structures; DC-UNet and DC-RSN have sharply recovered the structures, but DC-UNet has failed to recover the complete structure (green arrow).} \label{fig:cardiac_kirby} \end{figure*} In this experiment, the cardiac and Kirby datasets were used to compare our architecture DC-RSN with the architectures proposed for real-valued single-coil MRI reconstruction. The quantitative comparison of the architectures is presented in Table \ref{tab:cardiac_kirby}. It is clear from the table that DC-RSN fares significantly better than the other architectures in terms of PSNR and SSIM across different datasets and acceleration factors. This can be attributed to the RSN block, which uses the Fu network to effectively combine the benefits of simultaneously operating on both the k-space and image domains. The qualitative comparison of the architectures for the datasets is depicted in Fig. \ref{fig:cardiac_kirby}, which shows that DC-RSN reconstructs the most intricate structures with reduced artifacts.
\subsubsection{Complex-valued MRI data} In this experiment, the Calgary dataset was used to compare DC-RSN with the architectures proposed for complex-valued single-coil MRI reconstruction. The comparison of DC-RSN with other architectures is presented in Table \ref{tab:calgary}. From the table, it is seen that deep cascade architectures are better than ReGAN. It can also be noticed that DC-Hybrid is better than DC-CNN. This is due to alternate CNNs operating on the k-space and image domains, unlike DC-CNN, which operates only on the image domain. However, DC-RSN is significantly better than DC-Hybrid, showing that RSN has effectively utilized k-space and image information. The comparison of architectures with an example image is depicted in Fig. \ref{fig:calgary}. The better recovery of complex structures by DC-RSN compared to other architectures can be noticed in the figure. \begin{table}[] \scriptsize \centering \caption{Quantitative comparison of single-coil complex valued MRI reconstruction architectures} \label{tab:calgary} \begin{tabular}{|l|l|l|l|l|} \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{Calgary} \\ \cline{2-5} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{2-5} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline US & 26.91 $\pm$ 0.9 & 0.740 $\pm$ 0.0 & 26.49 $\pm$ 0.9 & 0.727 $\pm$ 0.0 \\ \hline ReGAN\cite{cyclic_loss} & 33.71 $\pm$ 0.9 & 0.921 $\pm$ 0.0 & 33.1 $\pm$ 0.94 & 0.91 $\pm$ 0.0 \\ \hline DC-CNN\cite{dc_cnn}& 36.66 $\pm$ 0.9 & 0.952 $\pm$ 0.0 & 35.22 $\pm$ 0.9 & 0.937 $\pm$ 0.0 \\ \hline DC-Hybrid\cite{hybrid} & 36.85 $\pm$ 0.9 & 0.954 $\pm$ 0.0 & 35.52 $\pm$ 0.9 & 0.941 $\pm$ 0.0 \\ \hline DC-RSN(Ours) & \textbf{37.85 $\pm$ 1.0} & \textbf{0.962 $\pm$ 0.0} & \textbf{36.04 $\pm$ 1.0} & \textbf{0.948 $\pm$ 0.0} \\ \hline \end{tabular} \end{table} \begin{figure*} \centering
\includegraphics[width=0.9\linewidth]{images/results/main_paper/calgary-figure-large.png} \caption{Qualitative comparison of single-coil complex-valued MRI reconstruction architectures. From left to right (Target, ZF, ReGAN, DC-CNN, DC-Hybrid, and DC-RSN). The structure denoted by the green arrow in the target is recovered only by DC-RSN, while the other architectures have failed to recover it. The structures in the region given by the pink circle in the target have been sharply recovered by DC-RSN; DC-Hybrid and DC-CNN have partially recovered the structures, while ReGAN provided a smooth region without the structures.} \label{fig:calgary} \end{figure*} \begin{table*}[] \scriptsize \centering \caption{Quantitative comparison of multi-coil MRI reconstruction architectures} \label{tab:multi_coil_knee} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{Coronal PD} & \multicolumn{4}{c|}{Coronal fat-saturated PD} \\ \cline{2-9} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{2-9} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline US & 31.42 $\pm$ 3.92 & 0.884 $\pm$ 0.06 & 29.26 $\pm$ 3.91 & 0.8465 $\pm$ 0.07 & 33.44 $\pm$ 2.95 & 0.8535 $\pm$ 0.07 & 31.67 $\pm$ 2.87 & 0.8109 $\pm$ 0.09 \\ \hline VN \cite{variational} & 39.8 $\pm$ 3.77 & 0.9595 $\pm$ 0.02 & 34.15 $\pm$ 3.35 & 0.9122 $\pm$ 0.04 & 37.26 $\pm$ 3.61 & 0.8875 $\pm$ 0.07 & 34.45 $\pm$ 3.01 & 0.8385 $\pm$ 0.09 \\ \hline VS-Net \cite{vs_net} & 39.87 $\pm$ 3.78 & 0.9604 $\pm$ 0.02 & 33.95 $\pm$ 3.34 & 0.9114 $\pm$ 0.04 & 37.32 $\pm$ 3.63 & 0.8883 $\pm$ 0.07 & 34.34 $\pm$ 2.93 & 0.8389 $\pm$ 0.08 \\ \hline VS-RSN (Ours) & \textbf{40.45 $\pm$ 3.94} & \textbf{0.9636 $\pm$ 0.02} & \textbf{35.67 $\pm$ 3.51} & \textbf{0.9293 $\pm$ 0.03} & \textbf{37.43 $\pm$
3.68} & \textbf{0.8914 $\pm$ 0.07} & \textbf{34.79 $\pm$ 3.26} & \textbf{0.8453 $\pm$ 0.09} \\ \hline \multirow{3}{*}{} & \multicolumn{4}{c|}{Sagittal fat-saturated T2} & \multicolumn{4}{c|}{Sagittal PD} \\ \cline{2-9} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{2-9} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline US & 38.08 $\pm$ 3.81 & 0.936 $\pm$ 0.04 & 37.25 $\pm$ 3.82 & 0.9271 $\pm$ 0.05 & 39.27 $\pm$ 2.98 & 0.9643 $\pm$ 0.01 & 38.8 $\pm$ 2.99 & 0.9606 $\pm$ 0.01 \\ \hline VN \cite{variational} & 41.82 $\pm$ 4.09 & 0.9498 $\pm$ 0.04 & 40.54 $\pm$ 4.06 & 0.9399 $\pm$ 0.05 & 43.86 $\pm$ 2.63 & 0.9788 $\pm$ 0.00 & 41.89 $\pm$ 2.80 & 0.9724 $\pm$ 0.01 \\ \hline VS-Net \cite{vs_net} & 41.84 $\pm$ 4.1 & 0.9491 $\pm$ 0.04 & 40.63 $\pm$ 4.05 & 0.94 $\pm$ 0.05 & 44.25 $\pm$ 2.58 & 0.9793 $\pm$ 0.00 & 42.2 $\pm$ 2.73 & 0.9731 $\pm$ 0.01 \\ \hline VS-RSN & \textbf{41.98 $\pm$ 4.18} & \textbf{0.951 $\pm$ 0.04} & \textbf{40.88 $\pm$ 4.12} & \textbf{0.9419 $\pm$ 0.05} & \textbf{44.3 $\pm$ 2.60} & \textbf{0.9805 $\pm$ 0.00} & \textbf{42.5 $\pm$ 2.72} & \textbf{0.975 $\pm$ 0.00} \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/knee-mc-coronal-figures-large.png} \caption{(a) Qualitative comparison of multi-coil MRI reconstruction architectures for coronal PD. From left to right (Target, ZF, VN, VS-Net, and VS-RSN). The structures denoted by green arrows in target have been faithfully recovered by VS-RSN compared to VN and VS-Net. VS-RSN has thoroughly delineated the marked structures from its background, while VN and VS-Net provided a blurry output. 
(b) Qualitative comparison of multi-coil MRI reconstruction architectures for coronal fat-saturated PD. From left to right (Target, ZF, VN, VS-Net, and VS-RSN). The structures denoted by green arrows in the target have been sharply recovered by VS-RSN. VN and VS-Net have missed both the structures, resulting in a lack of boundary.} \label{fig:multi_coil_knee} \end{figure*} \subsection{Multi-coil MRI reconstruction} In this experiment, VS-RSN is compared with the state-of-the-art architectures in multi-coil acquisition for different protocols in the knee dataset. The PSNR and SSIM metric comparison is presented in Table \ref{tab:multi_coil_knee}. It is observed that VN and VS-Net show similar performance, while VS-RSN fares better than both VN and VS-Net for different acceleration factors and protocols. This shows the successful incorporation of the sophisticated RSN block as a denoiser in VS-Net, thereby translating RSN to a multi-coil setting. The qualitative comparison of the reconstruction methods for the coronal PD and coronal fat-saturated PD acquisition protocols is presented in Fig. \ref{fig:multi_coil_knee}, which shows that VS-RSN is able to delineate complex structures in comparison to VS-Net and VN, where the structures look fuzzy. The quantitative comparison for the axial protocol can be found in Table A1 of the supplementary material. \subsection{Assistance to MRI reconstruction} \subsubsection{Gradient assistance} In this experiment, the validation of the effect of GOLF assistance to DC-RSN and VS-RSN was carried out. The quantitative comparison of the architectures with and without GOLF assistance is presented in Table \ref{tab:gradient_assistance}. It is clearly seen that GOLF assistance provides a substantial improvement in evaluation metrics across acceleration factors and datasets. The respective qualitative comparison is shown with an example in Fig. \ref{fig:gradient_assistance}.
It is noticed that the architecture with GOLF enhances subtle structures and appreciably recovers grainy regions compared to the one without GOLF. \begin{table}[] \scriptsize \centering \caption{Quantitative comparison of DC-RSN, VS-RSN with and without GOLF assistance} \label{tab:gradient_assistance} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{3-6} & & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline \multirow{2}{*}{Cardiac} & DC-RSN & 33.61 & 0.9322 & 32.65 & 0.92 \\ \cline{2-6} & DC-RSN-GOLF & \textbf{33.68} & \textbf{0.933} & \textbf{32.75} & \textbf{0.9214} \\ \hline \multirow{2}{*}{Kirby} & DC-RSN & 36.83 & 0.9707 & 35.22 & 0.9609 \\ \cline{2-6} & DC-RSN-GOLF & \textbf{36.92} & \textbf{0.9723} & \textbf{35.27} & \textbf{0.9618} \\ \hline \multirow{2}{*}{Coronal-pd} & VS-RSN & 40.45 & 0.9636 & 35.67 & 0.9293 \\ \cline{2-6} & VS-RSN-GOLF & \textbf{40.52} & \textbf{0.9645} & \textbf{35.83} & \textbf{0.9316} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/golfassistance-figure-trial2-large.png} \caption{Qualitative comparison of DC-RSN and DC-RSN-GOLF. From left to right (Target, ZF, DC-RSN, and DC-RSN-GOLF). The green arrows in the target indicate challenging structures. These structures have been recovered by DC-RSN-GOLF, while missed by DC-RSN. The yellow arrow shows a region where DC-RSN-GOLF restores the degradation caused by undersampling, which is not handled by DC-RSN.} \label{fig:gradient_assistance} \end{figure} \subsubsection{T1 assistance} This experiment was designed to understand the contribution of T1 assistance. The quantitative comparison of DenseUNet-T1 \cite{t1_assistance}, DC-RSN, and DC-RSN-T1 is presented in Table \ref{tab:t1assistance}.
From the table, it is observed that DC-RSN-T1 is better than both DC-RSN and DenseUNet-T1 in terms of PSNR and SSIM for different acceleration factors. Further, on closer inspection of Fig. \ref{fig:t1assistance}, it is noticed that DC-RSN-T1 and DenseUNet-T1 have reconstructed a structure which is completely missed by DC-RSN. This is due to the FS T1 assistance in DenseUNet-T1 and DC-RSN-T1. \begin{table}[] \scriptsize \centering \caption{Quantitative comparison of different combinations of T1 and GOLF with DC-RSN} \label{tab:t1assistance} \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} \\ \cline{2-5} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline US & 28.4 & 0.6422 & 26.99 & 0.6095 \\ \hline DenseUNet-T1 \cite{t1_assistance} & 34.2 & 0.9428 & 32.88 & 0.9315 \\ \hline DC-RSN & 37.76 & 0.9731 & 37.05 & 0.9643 \\ \hline DC-RSN-GOLF & 38 & 0.9742 & 37.44 & 0.9675 \\ \hline DC-RSN-T1 & 38.34 & 0.976 & 37.66 & 0.9705 \\ \hline DC-RSN-T1-GOLF & \textbf{38.6} & \textbf{0.9774} & \textbf{38.08} & \textbf{0.9722} \\ \hline \end{tabular} \end{table} \begin{figure*}[] \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/t1assistance-figure-without-dcnn-large.png} \caption{ Qualitative comparison of DenseUNet-T1, DC-RSN, and DC-RSN-T1. From left to right (T1 FS, T2 FS, T2 ZF, DenseUNet-T1, DC-RSN, and DC-RSN-T1). The structures denoted by yellow arrows in T2 FS have not been recovered by DC-RSN, while DenseUNet-T1 and DC-RSN-T1 have recovered those structures through FS T1 assistance.
Green arrows indicate regions in DC-RSN-T1 and DC-RSN which are closer to FS T2 than DenseUNet-T1.} \label{fig:t1assistance} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/rsn-t1-golf-figure-latest-edge-large.png} \caption{Qualitative comparison of different combinations of T1 and GOLF with DC-RSN. From left to right (Target, ZF, DC-RSN-GOLF, DC-RSN-T1, and DC-RSN-T1-GOLF). The structure indicated by the pink circle in the region denoted by orange in the target has been recovered by DC-RSN-T1 and DC-RSN-T1-GOLF, while DC-RSN-GOLF has failed to recover the entire structure. The edge map of the structure indicated by the pink circle in the region denoted by purple in the target image is closer to DC-RSN-GOLF and DC-RSN-T1-GOLF compared to DC-RSN-T1. The green arrow in the edge map indicates the structure of interest. } \label{fig:gradient_t1_assistance} \end{figure*} \subsubsection{Gradient based T1 assistance} In this experiment, the performance of DC-RSN-GOLF, DC-RSN-T1 and DC-RSN-T1-GOLF was compared. The quantitative and qualitative comparisons are provided in Table \ref{tab:t1assistance} and in Fig. \ref{fig:gradient_t1_assistance}, respectively. From the table the following are observed: 1) DC-RSN-T1 is better than DC-RSN-GOLF, as T1 assistance has provided missing structures while GOLF assistance has only enhanced the existing structures; 2) DC-RSN-T1-GOLF is better than DC-RSN-GOLF, as the GOLF obtained using DC-RSN-T1 is better than the one obtained using DC-RSN; and 3) DC-RSN-T1-GOLF is better than DC-RSN-T1. This is because GOLF containing the structural information of DC-RSN-T1 is explicitly provided to DC-RSN for the final prediction. From the figure it is seen that DC-RSN-T1-GOLF and DC-RSN-T1 have reconstructed structures missed by DC-RSN-GOLF. In addition, DC-RSN-T1-GOLF and DC-RSN-GOLF have recovered subtle structures compared to DC-RSN-T1.
This structure improvement can be better appreciated through the edge maps of the region of interest; moreover, restoring the image gradient is the primary motivation behind GOLF assistance. \subsection{PRN for MRI reconstruction} In this experiment, PRN is validated by using it to refine the reconstructions of the proposed networks DC-RSN, VS-RSN, and DC-RSN-T1-GOLF. The comparison of the networks with and without the PRN block is provided in Table \ref{tab:perceptual_refinement}. From the table, it is observed that the addition of PRN improves VIF across acceleration factors, datasets, and networks. The improvement in VIF reduces PSNR and SSIM to some extent, which is expected, as PRN is trained with a higher weight on the adversarial term (Blau and Michaeli, 2018). Sample reconstructions of the networks using PRN can be found in Figs. A3, A4, and A5 of the supplementary material. The quantitative metrics of models developed for better perceptual quality, including DAGAN, ReGAN, and VN, are also added in Table \ref{tab:perceptual_refinement}. It is observed that the VIF for these models is significantly lower than that of our proposed models.
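The trade-off observed here follows directly from the training objective: PRN weights the adversarial term more heavily than the content term. A toy sketch of such a weighting (the function and the weights below are illustrative assumptions, not PRN's actual training configuration):

```python
def prn_loss(content_loss, adversarial_loss, w_content=0.1, w_adv=1.0):
    """Toy composite refinement loss with a larger weight on the
    adversarial term; the weights are illustrative, not the values
    used to train PRN."""
    return w_content * content_loss + w_adv * adversarial_loss

# Candidate A: low distortion (content) loss, poor perceptual score.
# Candidate B: slightly higher distortion, much better perceptual score.
loss_a = prn_loss(content_loss=0.10, adversarial_loss=0.80)
loss_b = prn_loss(content_loss=0.15, adversarial_loss=0.40)
assert loss_b < loss_a  # the perceptually better candidate is preferred
```

With the adversarial term dominating, training prefers outputs with better perceptual statistics even at a small distortion cost, which is consistent with the higher VIF and slightly lower PSNR/SSIM reported above.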
\begin{table*}[] \scriptsize \centering \caption{Quantitative comparison of DC-RSN, VS-RSN, DC-RSN-T1-GOLF with and without PRN.} \label{tab:perceptual_refinement} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{4x} & \multicolumn{3}{c|}{5x} \\ \cline{3-8} & & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{VIF} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{VIF} \\ \hline \multirow{2}{*}{Cardiac} & DC-RSN & 33.36 $\pm$ 3.55 & 0.9279 $\pm$ 0.04 & 0.928 $\pm$ 0.05 & 32.56 $\pm$ 3.68 & 0.9188 $\pm$ 0.04 & 0.92 $\pm$ 0.05 \\ \cline{2-8} & DC-RSN-PRN & 33.39 $\pm$ 3.55 & 0.9295 $\pm$ 0.03 & 0.959 $\pm$ 0.04 & 32.36 $\pm$ 3.59 & 0.9167 $\pm$ 0.04 & 0.951 $\pm$ 0.05 \\ \hline \multirow{2}{*}{Calgary} & DC-RSN & 37.85 $\pm$ 1.08 & 0.9621 $\pm$ 0.00 & 0.946 $\pm$ 0.01 & 36.04 $\pm$ 1.00 & 0.9483 $\pm$ 0.01 & 0.939 $\pm$ 0.01 \\ \cline{2-8} & DC-RSN-PRN & 37.75 $\pm$ 1.05 & 0.9612 $\pm$ 0.00 & 0.982 $\pm$ 0.01 & 35.84 $\pm$ 0.96 & 0.9466 $\pm$ 0.01 & 0.992 $\pm$ 0.02 \\ \hline \multirow{2}{*}{Coronal PD} & VS-RSN & 40.45 $\pm$ 3.94 & 0.9636 $\pm$ 0.02 & 0.951 $\pm$ 0.04 & 35.67 $\pm$ 3.51 & 0.9293 $\pm$ 0.03 & 0.851 $\pm$ 0.07 \\ \cline{2-8} & VS-RSN-PRN & 40.26 $\pm$ 3.62 & 0.9647 $\pm$ 0.02 & 0.978 $\pm$ 0.04 & 34.42 $\pm$ 3.33 & 0.9166 $\pm$ 0.04 & 0.879 $\pm$ 0.11 \\ \hline \multirow{2}{*}{MRBrains} & DC-RSN-T1-GOLF & 38.6 $\pm$ 0.27 & 0.9774 $\pm$ 0.00 & 0.962 $\pm$ 0.04 & 38.08 $\pm$ 0.49 & 0.9722 $\pm$ 0.00 & 0.979 $\pm$ 0.05 \\ \cline{2-8} & DC-RSN-T1-GOLF-PRN & 38.2 $\pm$ 0.27 & 0.9755 $\pm$ 0.00 & 0.978 $\pm$ 0.03 & 38.16 $\pm$ 0.51 & 0.969 $\pm$ 0.00 & 0.979 $\pm$ 0.04 \\ \hline \end{tabular} \end{table*} \subsection{Ablative study} \begin{table*}[] \scriptsize \centering \caption{Quantitative comparison of ablative study on RSN} \label{tab:ablative-study} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{} & 
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{2x} & \multicolumn{2}{c|}{2.5x} & \multicolumn{2}{c|}{3.3x} \\ \cline{3-8} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \hline \multirow{10}{*}{\begin{tabular}[c]{@{}l@{}}Lower \\ acceleration \\ factors\end{tabular}} & US & 29.63 $\pm$ 3.17 & 0.8435 $\pm$ 0.05 & 26.71 $\pm$ 3.14 & 0.7582 $\pm$ 0.07 & 26.95 $\pm$ 3.12 & 0.7906 $\pm$ 0.06 \\ \cline{2-8} & KK $\rightarrow$ IFT & 32.55 $\pm$ 2.90 & 0.8919 $\pm$ 0.03 & 28.75 $\pm$ 2.89 & 0.8133 $\pm$ 0.05 & 29.77 $\pm$ 3.03 & 0.8755 $\pm$ 0.04 \\ \cline{2-8} & II-CNN & 32.73 $\pm$ 2.94 & 0.9146 $\pm$ 0.03 & 29.55 $\pm$ 2.85 & 0.8559 $\pm$ 0.04 & 29.51 $\pm$ 2.88 & 0.8563 $\pm$ 0.05 \\ \cline{2-8} & KI & 33.72 $\pm$ 3.09 & 0.9278 $\pm$ 0.03 & 29.98 $\pm$ 2.89 & 0.8638 $\pm$ 0.04 & 31.59 $\pm$ 3.15 & 0.908 $\pm$ 0.04 \\ \cline{2-8} & II & 31.31 $\pm$ 3.34 & 0.928 $\pm$ 0.03 & 28.87 $\pm$ 3.15 & 0.8853 $\pm$ 0.04 & 28.49 $\pm$ 3.36 & 0.8825 $\pm$ 0.05 \\ \cline{2-8} & II $\rightarrow$ FT $ \rightarrow$ KI & 31.2 $\pm$ 3.503 & 0.9228 $\pm$ 0.04 & 28.76 $\pm$ 3.25 & 0.8756 $\pm$ 0.05 & 28.4 $\pm$ 3.39 & 0.8786 $\pm$ 0.05 \\ \cline{2-8} & KI $\rightarrow$ II & 31.73 $\pm$ 3.15 & 0.9265 $\pm$ 0.03 & 28.72 $\pm$ 2.87 & 0.8647 $\pm$ 0.04 & 29.56 $\pm$ 3.393 & 0.9053 $\pm$ 0.04 \\ \cline{2-8} & Mean ( KI $\vert$ II ) & 33.61 $\pm$ 2.92 & 0.938 $\pm$ 0.02 & 30.53 $\pm$ 2.77 & 0.8905 $\pm$ 0.04 & 30.95 $\pm$ 2.93 & 0.9088 $\pm$ 0.04 \\ \cline{2-8} & Fu ( KI $\vert$ II ) & 34.39 $\pm$ 2.84 & 0.9417 $\pm$ 0.02 & 31.13 $\pm$ 2.76 & 0.8958 $\pm$ 0.03 & 32.61 $\pm$ 3.07 & 0.9235 $\pm$ 0.04 \\ \cline{2-8} & Fu ( KI $\vert$ II $\vert$ US) & 34.72 $\pm$ 2.89 & 0.9427 $\pm$ 0.02 & 31.29 $\pm$ 2.81 & 0.8981 $\pm$ 0.03 & 32.72 $\pm$ 3.0 & 0.9249 $\pm$ 0.03 \\ \hline \multirow{12}{*}{\begin{tabular}[c]{@{}l@{}}Higher\\ 
acceleration \\ factors\end{tabular}} & \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{2}{c|}{4x} & \multicolumn{2}{c|}{5x} & \multicolumn{2}{c|}{8x} \\ \cline{3-8} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{SSIM} \\ \cline{2-8} & US & 24.27 $\pm$ 3.10 & 0.6996 $\pm$ 0.08 & 23.82 $\pm$ 3.11 & 0.6742 $\pm$ 0.08 & 22.83 $\pm$ 3.11 & 0.6344 $\pm$ 0.09 \\ \cline{2-8} & KK $\rightarrow$ IFT & 26.76 $\pm$ 2.89 & 0.7622 $\pm$ 0.06 & 25.88 $\pm$ 2.86 & 0.73 $\pm$ 0.06 & 24.53 $\pm$ 2.83 & 0.6772 $\pm$ 0.07 \\ \cline{2-8} & II-CNN & 26.86 $\pm$ 2.87 & 0.784 $\pm$ 0.06 & 26.4 $\pm$ 2.90 & 0.7642 $\pm$ 0.06 & 25.19 $\pm$ 2.97 & 0.724 $\pm$ 0.07 \\ \cline{2-8} & KI & 27.85 $\pm$ 2.87 & 0.8092 $\pm$ 0.06 & 26.91 $\pm$ 2.95 & 0.7829 $\pm$ 0.06 & 25.51 $\pm$ 2.96 & 0.7299 $\pm$ 0.08 \\ \cline{2-8} & II & 26.98 $\pm$ 3.14 & 0.8409 $\pm$ 0.06 & 26.89 $\pm$ 3.04 & 0.8327 $\pm$ 0.06 & 25.3 $\pm$ 2.93 & 0.7796 $\pm$ 0.07 \\ \cline{2-8} & II $\rightarrow$ FT $\rightarrow$ KI & 26.99 $\pm$ 3.15 & 0.8334 $\pm$ 0.06 & 26.78 $\pm$ 3.07 & 0.8245 $\pm$ 0.06 & 25.17 $\pm$ 2.92 & 0.7697 $\pm$ 0.08 \\ \cline{2-8} & KI $\rightarrow$ II & 26.64 $\pm$ 2.83 & 0.8108 $\pm$ 0.06 & 25.98 $\pm$ 2.81 & 0.7851 $\pm$ 0.06 & 24.67 $\pm$ 2.70 & 0.7365 $\pm$ 0.08 \\ \cline{2-8} & Mean ( KI $\vert$ II ) & 28.56 $\pm$ 2.72 & 0.8459 $\pm$ 0.05 & 27.94 $\pm$ 2.81 & 0.8288 $\pm$ 0.06 & 26.28 $\pm$ 2.82 & 0.7764 $\pm$ 0.07 \\ \cline{2-8} & Fu ( KI $\vert$ II ) & 28.68 $\pm$ 2.63 & 0.8525 $\pm$ 0.05 & 27.8 $\pm$ 2.82 & 0.8376 $\pm$ 0.06 & 26.2 $\pm$ 2.75 & 0.7861 $\pm$ 0.07 \\ \cline{2-8} & Fu ( KI $\vert$ II $\vert$ US) & 28.78 $\pm$ 2.62 & 0.855 $\pm$ 0.05 & 27.8 $\pm$ 2.73 & 0.8367 $\pm$ 0.05 & 26.19 $\pm$ 2.73 & 0.7856 $\pm$ 0.07 \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering 
\includegraphics[width=0.9\linewidth]{images/results/main_paper/ablative-study-figure-1.png} \caption{Qualitative results of the ablative study. From left to right (FS, US, KK $\rightarrow$ IFT, II-CNN, KI, II, II $\rightarrow$ FT $\rightarrow$ KI, KI $\rightarrow$ II, Mean ( KI $\vert$ II ), Fu ( KI $\vert$ II ), Fu ( KI $\vert$ II $\vert$ US)). } \label{fig:ablative_study} \end{figure*} An ablative study was conducted to understand the role of the individual networks (KI, II, Fu) in RSN. The quantitative comparison of the combinations of different networks for various acceleration factors is presented in Table \ref{tab:ablative-study}. KI (dAUTOMAP) is better than KK $\rightarrow$ IFT (k-space CNN followed by IFT) in both metrics for all acceleration factors. This shows that KI is a better choice to move from k-space to image due to its specifically designed convolution layers (domain transform layers) instead of the normal 2D convolutions in KK $\rightarrow$ IFT. II (image space UNet) is significantly better than II-CNN (image space CNN) in SSIM for every acceleration factor and competitively close in PSNR. The superior SSIM can be attributed to the encoder-decoder multi-scale network design of UNet, making it an ideal choice to operate on the image domain for structure recovery. Between KI and II, it is observed that KI has higher PSNR (lower reconstruction error) while II has higher SSIM (better structure recovery). This shows that an effective combination of KI and II can produce both better PSNR and SSIM. The proposed Fu (KI | II) network provided significantly better PSNR and SSIM compared to Mean (KI | II) for lower acceleration factors, while for higher acceleration factors, Fu (KI | II) provided better SSIM and competitively close PSNR. Similarly, the network Fu (KI | II | US) provided better metrics than Fu (KI | II) for lower acceleration factors, but for higher acceleration factors both showed similar results.
These observations show that the CNN acts as an effective fusion network. The disparity in the network's performance between lower and higher acceleration factors is due to the varying sparsity of frequency components in k-space, which impacts the performance of KI for higher acceleration factors. The qualitative comparison of the different combinations of networks mentioned in this section is provided in Fig. \ref{fig:ablative_study}. The following are observed in the figure: 1) the networks starting with the k-space domain ( KI, KI $\rightarrow$ II) provided lower residue; 2) the networks starting with the image domain ( II, II $\rightarrow$ FT $\rightarrow$ KI ) provided better structure recovery; 3) the networks simultaneously operating on both domains ( Mean ( KI $\vert$ II ), Fu ( KI $\vert$ II ), Fu ( KI $\vert$ II $\vert$ US)) provided lower residue and better structure recovery; and 4) RSN ( Fu (KI | II), Fu (KI | II | US) ) provided enhanced structures compared to Mean (KI | II). \subsection{RSN and different US masks} Evaluation of RSN for other standard US masks (Gaussian, Radial, and Spiral) was carried out by comparing DC-CNN and DC-RSN in a single cascade mode for the MRBrains T1 dataset. The US data was prepared using the 5x US masks of ReGAN (Quan et al., 2018). The quantitative comparison of DC-CNN and DC-RSN for different US masks is depicted in Fig. \ref{fig:box_plot}. It is observed that DC-RSN outperforms DC-CNN for every US mask, demonstrating that RSN can be used across standard masks. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/box_plot_psnr_ssim.png} \caption{Quantitative comparison of DC-CNN and DC-RSN for different US masks.} \label{fig:box_plot} \end{figure*} \subsection{Misregistration environment} In our experiments, it has been assumed that the sequences T1WI and T2WI have undergone registration, i.e., are perfectly aligned. However, in real MRI scenarios, such an accurate registration is not always possible.
Hence, in this experiment, the performance of DC-RSN-T1 for a fixed T2WI with a T1WI randomly shifted by a maximum of 2 pixels is investigated, following DISN (Sun et al., 2019b). It is observed that the SSIM of DC-RSN-T1 consistently dropped for the different possible shifts, and it was also noticed that T1 assistance did not aid in the recovery of missing structures. To make DC-RSN-T1 robust to these random shifts, the model is trained with a fixed T2WI and a T1WI randomly shifted by up to 2 pixels in both the x and y directions. It is noted that the DC-RSN-T1 trained with these random shifts is robust to them, and the training also helped in recovering structures that were degraded in the US T2WI. The respective quantitative and qualitative comparisons are presented in Fig. \ref{fig:misregistration}. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/results/main_paper/misregistration.png} \caption{ Quantitative (a) and qualitative (b) comparison of the model trained with perfectly registered pairs (DC-RSN-T1) and the model trained with randomly shifted pairs (DC-RSN-T1 (TS)). The baseline is given by DC-RSN. The SSIM for DC-RSN-T1 (TS) is almost constant for the various shifts, while the SSIM for DC-RSN-T1 drops significantly with increasing shift in either direction. The structure recovered by DC-RSN-T1 (TS) for different shifts looks similar to the one without any shift, while the structure recovered by DC-RSN-T1 is severely affected by the different shifts. } \label{fig:misregistration} \end{figure*} \subsection{Comparison with fastMRI} DC-RSN and VS-RSN were evaluated with the single- and multi-coil knee data of fastMRI, respectively. Results are reported on the public leaderboard, with team name HTIC and model name Cascade Hybrid. In the single-coil track, DC-RSN with five cascades provided a PSNR of 33.81 dB and an SSIM of 0.768. In the multi-coil track, VS-RSN with five cascades provided a PSNR of 38.59 dB and an SSIM of 0.923 for 4x acceleration, and a PSNR of 35.32 dB and an SSIM of 0.878 for 8x acceleration.
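For reference, the PSNR figures quoted throughout can be reproduced from a reconstruction and its fully sampled reference as below; taking the data range to be the dynamic range of the reference image is an assumption, since conventions differ between datasets and leaderboards:

```python
import numpy as np

def psnr(reference, reconstruction, data_range=None):
    """Peak signal-to-noise ratio (dB) between a fully sampled
    reference and a reconstruction. data_range defaults to the
    dynamic range of the reference (a convention assumption)."""
    reference = np.asarray(reference, dtype=np.float64)
    reconstruction = np.asarray(reconstruction, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# A constant error of 0.1 on a unit-range image gives exactly 20 dB:
ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)
rec = ref + 0.1
print(round(psnr(ref, rec), 6))  # 20.0
```

SSIM and VIF are computed analogously from local image statistics rather than from the pixelwise error alone, which is why they can move in the opposite direction to PSNR after perceptual refinement.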
\section{Conclusion} \label{sec:conclusion} In this work, we introduced RSN, a base network specifically designed to handle both the k-space and the image input for MRI reconstruction. Using RSN, we proposed DC-RSN and VS-RSN for high quality image reconstruction from the US k-space of single- and multi-coil acquisitions. We enhanced the structure recovery of DC-RSN for T2WI reconstruction through GOLF based T1 assistance. We also presented PRN to improve the perceptual quality of the reconstructions with respect to the radiologist's opinion. We conducted an extensive study across datasets and acceleration factors and found the following: 1) DC-RSN and VS-RSN are better than the respective state-of-the-art methods; 2) GOLF based T1 assistance provides more faithful reconstruction; and 3) the addition of PRN increases VIF, a metric highly correlated with the radiologist's opinion on image quality. The reconstructions in our work are evaluated using the VIF metric. It will be interesting to conduct a study evaluating the reconstructions with radiologists to better understand the correlation between VIF and scores from radiologists. Furthermore, the methods need to be evaluated for faithful reconstruction in pathology cases. In the case of network design, the feature fusion operations can be improved through attention mechanisms. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Wormholes (WHs) are tunnels in the spacetime topology connecting different regions of spacetime. The Einstein-Rosen bridge proposed by Einstein and Rosen \cite{ER1935} was the first scientific work on a WH solution in the framework of General Relativity. The Einstein-Rosen bridge was, however, not traversable and, as a consequence, was not taken seriously by the scientific community. The term wormhole was coined by Misner and Wheeler \cite{MW1957}, and Morris and Thorne \cite{MT1988} confirmed the physical acceptability of traversable WHs and renewed interest in the field in 1988. They proposed traversable WHs satisfying the following: \begin{itemize} \item the spacetime geometry must have no event horizon; \item the matter sustaining the WH must violate the classical null energy condition (NEC). \end{itemize} For a review on wormhole geometries one may refer to \cite{visser1,lobo1}. The construction of WHs minimizing the support of exotic matter has been an active field of research for a long time. This end was achieved by several authors using one or another modified theory of gravity. Investigations based on Brans-Dicke theory \cite{Agnese95,Eiroa08,Anchordoqui97,Nandi98,He02}, Kaluza-Klein theory \cite{Shen91}, f($R$) gravity theory \cite{Lobo09,Garcia10,Garcia11,Harko13,Taser16}, f($T$) theory \cite{Bohmer12}, f($G$) gravity \cite{Sharif15}, and f($T, T_{G}$) gravity \cite{Sharif18} were reported. Rosa et al. \cite{Rosa18} explored WH solutions under a generalized hybrid metric-Palatini theory. They showed that the matter field satisfies the null energy condition from the throat region up to infinity, requiring no exotic matter. Studying WHs in Lovelock gravity theories is also an active field of research \cite{shang,bhawal,tanwi,dehghani,mehdizadeh2012,mehdizadeh2015,zangeneh,mehdizadeh2016}. For some values of parameters, Richarte et al.
\cite{Richarte07} found that WH solutions supported by normal matter may exist under Einstein-Gauss-Bonnet gravity theory. Bandyopadhyay and Chakraborty~\cite{tanwi} reported that, for suitable choices of parameter values, thin shell WHs can be constructed of ordinary matter under Einstein-Yang-Mills-Gauss-Bonnet gravity. Lorentzian WH geometries supported by normal matter were explored by Dehghani and Dayyani~\cite{dehghani} for different choices of the shape function and the Lovelock coefficients. They reported the radius of the shape function to be larger in third order Lovelock gravity compared to the Gauss-Bonnet WHs. In a study of WH solutions~\cite{mehdizadeh2012} under higher dimensional Lovelock gravity theory, it was found that extra dimensions could support WHs. Thin shell WHs were constructed under third order Lovelock gravity by Mehdizadeh, Zangeneh and Lobo~\cite{mehdizadeh2015}. For some values of the second and third order Lovelock coefficients, they found WHs supported by normal matter. Conditions for traversable WH solutions sustained by normal matter were explored in~\cite{zangeneh} under third order Lovelock gravity with a cosmological constant term. Interestingly, the authors pointed out that a WH solution supported by normal matter corresponds to a negative cosmological constant. Recently, Mehdizadeh and Lobo~\cite{mehdizadeh2016} also obtained exact WH solutions under third order Lovelock theory satisfying the energy conditions. In a different context, it is worth noting that Rahaman et al.~\cite{rahaman2014a}, considering the Navarro-Frenk-White (NFW)~\cite{NFW1996,NFW1997} density profile, studied the possible existence of WH spacetime in the outer regions of the galactic halo under GTR. Kuhfittig~\cite{kuhfittig} also reported a similar observation. In another work, Rahaman et al.~\cite{rahaman2014b} employed the Universal Rotation Curve (URC)~\cite{URC2012} based dark matter model in the central part of the galactic halo and obtained analogous results.
They also generalized the result to predict the possible existence of wormholes in most spiral galaxies. On the other hand, Rahaman et al.~\cite{rahaman2016a,rahaman2016b}, exploiting the NFW density profile as well as the URC, calculated the tangential velocities $v^{\phi}$ of test particles in the galactic halo under a wormhole-like line element. The agreement between the theoretical and observational plots was satisfactory within the range $9~kpc \leq r \leq 100~kpc$. So, there is a plethora of research articles reporting the possibility of WH solutions supported by normal matter. In the present paper we investigate the possibility of the existence of WH geometry supported by quark matter in third order Lovelock gravity. Strange quark matter, made up of up, down and strange quarks, was shown to be the energetically most favorable state of matter \cite{Witten84}. The core of a massive compact star is largely believed to be constituted of such matter. Researchers also predicted that massive compact stars might be wholly constituted of strange quark matter. The present study may shed light on the possibility of the existence of WH geometry inside massive compact stars like hybrid stars, quark stars, strange stars, etc. In Section 2 we present a brief outline of the Lovelock gravity theories. Section 3 presents the basic equations. The results and plots are discussed in Section 4. We conclude in Section 5. \section{Brief outline of Lovelock gravity theory}\label{SecII} In the framework of third-order Lovelock gravity, the action is \begin{equation} I=\int d^{n+1}x\sqrt{-g}\left( L_{1}+\alpha _{2}^{\prime }L_{2}+\alpha _{3}^{\prime }L_{3}\right), \label{love-act} \end{equation} assuming $8 \pi G_{n} =1$, where $G_{n}$ is the $n$-dimensional gravitational constant.
$\alpha_{2}^{\prime}$ and $\alpha_{3}^{\prime}$ are the second (Gauss-Bonnet) and third order Lovelock coefficients, $g$ is the determinant of the metric, $L_{1}=R$ is the Einstein-Hilbert Lagrangian, the term $L_{2}$ is the Gauss-Bonnet Lagrangian given by \begin{equation} L_{2}=R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^{2}, \end{equation} and the third order Lovelock Lagrangian $L_{3}$ is defined as \begin{eqnarray} L_{3} =&&2R^{abcd}R_{cdmn}R_{\phantom{mn}{ab}}^{mn}+8R_{\phantom{ab}{cm}}^{ab}R_{\phantom{cd}{bn}}^{cd}R_{\phantom{mn}{ad}}^{mn} \nonumber \\&& +24R^{abcd}R_{cdbm}R_{\phantom{m}{a}}^{m}+3RR^{abcd}R_{cdab} \notag \\ &&+24R^{abcd}R_{ca}R_{db}+16R^{ab}R_{bc}R_{\phantom{c}{a}}^{c} \nonumber \\&& -12RR^{ab}R_{ab}+R^{3}. \end{eqnarray} In Lovelock theory, for an $n$-dimensional space, only terms with order less than $\left[\frac{(n+1)}{2} \right]$ contribute to the field equations. Here, $\left[\frac{n}{2} \right]$ denotes the greatest integer less than or equal to $\frac{n}{2}$. Since we are considering third order Lovelock gravity, its effects will be apparent for $n \geq 7$.
Thus, varying the action (\ref{love-act}) with respect to the metric we get the field equations up to third order as follows: \begin{equation} G^{E}_{ab} + \alpha _{2}^{\prime } G^{(2)}_{ab} + \alpha _{3}^{\prime } G^{(3)}_{ab} = T_{ab}, \end{equation} where $T_{ab}$ is the energy-momentum tensor, $G_{ab}^{E}$ is the Einstein tensor, whereas $G_{ab}^{(2)}$ and $G_{ab}^{(3)}$ are given by \begin{eqnarray*} G_{ab}^{(2)} =&& -2R_{acdn}R_{\phantom{cnd}{b}}^{dnc}-4R_{ambc}R^{mc} \nonumber\\&& -4R_{ac}R_{\phantom{c}b }^{c} +2RR_{ab}-\frac{1}{2}L_{2}g_{ab} \,, \\ G_{ab}^{(3)} =&& -3(4R^{nmcd}R_{cdpm}R_{\phantom{p }{b n a}}^{p}-8R_{\phantom{nm}{pc}}^{nm}R_{\phantom{cd}{na}}^{cn}R_{\phantom{p}{bmd}}^{p } \nonumber\\ &&+2R_{b}^{\phantom{b}{ncd}}R_{cdpm}R_{\phantom{pm}{na}}^{pm}-R^{nmcd}R_{cdnm}R_{ba} \nonumber\\ &&+8R_{\phantom{n}{jkm}}^{n}R_{\phantom{kl}{ni}}^{kl }R_{\phantom{m}l }^{m }+8R_{\phantom{k}{jnl}}^{k }R_{\phantom {nm}{ki}}^{nm }R_{\phantom{l}{m}}^{l } \nonumber\\ &&+4R_{b }^{\phantom{b}{n c d}}R_{c da m }R_{\phantom{m}{n}}^{m }-4R_{b}^{\phantom{b}{n c d}}R_{c d n m}R_{\phantom{m}{a}}^{m } \notag\\&& +4R^{n m c d}R_{c d n a }R_{b m} \nonumber\\&& +2RR_{b }^{\phantom{b}{d n m}}R_{n m d a } +8R_{\phantom{n}{b a m }}^{n }R_{\phantom{m}{c}}^{m }R_{\phantom{c}{n}}^{c} \nonumber\\ &&-8R_{\phantom{c}{b nm }}^{c }R_{\phantom{n}{c}}^{n }R_{a }^{m }-8R_{\phantom{n }{c a}}^{n m }R_{\phantom{c}{n}}^{c}R_{b m } \nonumber\\ &&-4RR_{\phantom{n}{b a m }}^{n }R_{\phantom{m}n }^{m }+4R^{n m }R_{m n }R_{ba}-8R_{\phantom{n}{b}}^{n}R_{n m }R_{\phantom{m}{a}}^{m } \nonumber\\ &&+4RR_{b m }R_{\phantom{m}{a }}^{m }-R^{2}R_{b a})-\frac{1}{2}L_{3}g_{ab}. \end{eqnarray*} \section{Basic Equations}\label{SecIII} The $n$-dimensional traversable wormhole metric is given by \begin{equation}\label{E:line1} ds^2=-e^{2 \Phi(r)}dt^2+\frac{dr^2}{\left(1-\frac{b(r)}{r}\right)}+r^2 d\Omega_{n-2}^2, \end{equation} using units in which $c = G = 1$.
Here $\Phi(r)$ is the redshift function, which must be everywhere finite to prevent an event horizon, $b(r)$ is the shape function and $d\Omega_{n-2}^2$ is the metric on the surface of an $(n-2)$-sphere. The shape function of the wormhole satisfies the condition $b(r_0) = r_0$ at the throat $r = r_0$ of the wormhole. In addition, the flare-out condition must hold: $b'(r_0) < 1$ at the throat, while $b(r) < r$ near the throat. In the present paper, for the sake of convenience, we shall take the function $\Phi$ to be constant. The energy-momentum tensor is given by \begin{equation} T^{\mu}_{\nu} = diag[ - \rho(r), p_{\|}(r), p_{\bot}(r), p_{\bot}(r), \dots], \end{equation} where $\rho(r)$ is the energy density, $p_{\|}(r)$ is the radial pressure and $p_{\bot}(r)$ is the transverse pressure. With the above assumption for the redshift function, $\Phi(r) = constant$, the Einstein equations for the above mentioned metric are as follows \begin{equation} \label{rho} \rho(r) = -\frac{(n-2)}{2r^2}\left(1+\frac{2\alpha_2 b}{r^3}+\frac{3\alpha_3 b^2}{r^6}\right)\frac{(b-rb^{\prime})}{r} +\frac{(n-2)b}{2r^3}\left[(n-3)+(n-5)\frac{\alpha_2 b}{r^3}+(n-7)\frac{\alpha_3 b^2}{r^6}\right], \end{equation} \begin{equation} \label{pr} p_{\|}(r) = -\frac{(n-2)(n-3)b}{2r^3} -(n-5)\frac{\alpha_2 (n-2)b^2}{2r^6}-(n-7)\frac{\alpha_3(n-2) b^3}{2r^9}, \end{equation} \begin{equation} \label{pt} p_{\bot}(r) = \Xi(r) \left[(n-3) + (n-5)\frac{2 \alpha_2 b}{r^3} + (n-7)\frac{3 \alpha_3 b^2}{r^6} \right] - \frac{b}{2 r^3}(n-3)(n-4) - (n-5)(n-6)\frac{\alpha_2 b^2}{2r^6} - (n-7)(n-8)\frac{\alpha_3 b^3}{2r^9}, \end{equation} where $\Xi = \left(1 - \frac{b}{r} \right) \left(\frac{(b - rb')}{2r^2(r -b)} \right)$, $\alpha_2 = (n-3)(n-4) \alpha_{2}'$ and $\alpha_3 = (n-3)(n-4)(n-5)(n-6) \alpha_{3}'$. In the above equations the prime denotes the derivative with respect to $r$.
Here, we obtain a system of three independent nonlinear equations, Eqs. \ref{rho}-\ref{pt}. We have to solve for four unknown functions, $\rho(r)$, $p_{\|}(r)$, $p_{\bot}(r)$ and $b(r)$. The redshift function $\Phi(r)$ is already assumed to be constant, implying zero tidal force. In order to close the system we assume a specific equation of state (EOS); we adopt the MIT bag model EOS, given by \begin{equation}\label{bagEOS} 4B_g = \rho - 3p_{\|}. \end{equation} \section{Results and Discussion} \subsection{Shape function} In simplified form, Eq. \ref{rho} can be written as \begin{equation} \rho(r) = \frac{(n-2)}{2}\left[r \xi^{\prime}+(n-1)\xi \right],\label{rhosimp} \end{equation} where $\xi=\frac{b}{r^3}\left[1+\alpha_2 \frac{b}{r^3}+\alpha_3 \frac{b^2}{r^6}\right]$. With the same simplification, Eq. \ref{pr} reduces to \begin{equation} p_{\|} (r) = -\frac{(n-2)}{2} \left[(n-7) \xi + 2 \frac{b}{r^3} \left(2 + \alpha_2 \frac{b}{r^3}\right) \right].\label{pr2} \end{equation} With the EOS given in Eq. \ref{bagEOS}, Eqs. \ref{rhosimp} and \ref{pr2} reduce to the form \begin{equation} r \xi^{\prime} + 2(2n-11) \xi + 6 \frac{b}{r^3} \left(2 + \alpha_2 \frac{b}{r^3}\right) = \frac{8B_g}{n-2}. \label{final-eq} \end{equation} Recall that the modified second and third order coupling constants are related to the original coefficients of the Lovelock action by $\alpha_2 = (n-3)(n-4) \alpha_{2}'$ and $\alpha_3 = (n-3)(n-4)(n-5)(n-6) \alpha_{3}'$. As it is hard to find an exact analytical solution of the nonlinear Eq. \ref{final-eq}, for mathematical simplicity let us first consider \begin{equation} \alpha_3 = \left(\frac{\alpha_2}{2}\right)^2, \label{cond1} \end{equation} then \begin{equation}\label{xi1} \xi =\frac{b}{r^3}\left(1+\frac{\alpha_2}{2}\frac{b}{r^3}\right)^2.
\end{equation} We now consider the second order Lovelock coupling coefficient to be sufficiently small such that $| \alpha_2 | b \ll r^3$ for $| \alpha_2 | < 1$. With this approximation, the above expression for $\xi$ in Eq. \ref{xi1} reduces to the form \begin{equation} \xi =\frac{b}{r^3}\left(1+\alpha_2 \frac{b}{r^3}\right), \end{equation} i.e., \begin{equation} \frac{b}{r^3} = \frac{1}{2\alpha_2}\left(-1+\sqrt{1+4\alpha_2 \xi}\right). \end{equation} For weak coupling, under similar arguments as before, Eq. \ref{final-eq} reduces to the form \begin{equation} r \xi^{\prime} + l \xi = C, \end{equation} where $C = \frac{8B_g}{n-2}$ and $l = 2(2n-5)$. The solution of this equation is \begin{equation} \xi = \frac{C}{l}+ C_1 r^{-l}, \label{xi2} \end{equation} where $C_1$ is a constant of integration. Hence the shape function is given by \begin{equation}\label{b1} b(r) = \frac{r^3}{2\alpha_2}\left[-1+\sqrt{1+4\alpha_2 \left(\frac{C}{l}+ C_1 r^{-l}\right)}\right]. \end{equation} Note that in the limit $\alpha_2 \rightarrow 0$ (which, by Eq. \ref{cond1}, also implies $\alpha_3 \rightarrow 0$) we recover the result of four dimensional general relativity. Taking this limit in Eq. \ref{b1} with $n=4$, we get \begin{equation} b(r)|_{\alpha_2 \rightarrow 0}= r^3 \left[\frac{2B_g}{3}+\left(1-\frac{2B_g}{3}r_0^2\right)r^4_0 r^{-6}\right], \end{equation} so that $b(r)/r^3$ is a decreasing function of $r$. Taking the throat of the WH at $r = r_0$, the condition $b(r_0) = r_0$ gives \begin{equation} C_1=\frac{l ({r_0}^2+ \alpha_2)-C {r_0}^4}{l{r_0}^{4-l}}. \end{equation} Now we can check the acceptability of the approximation $| \alpha_2 | b \ll r^3$ for $| \alpha_2 | < 1$, used in the calculation of $b(r)$.
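Before that, Eq. \ref{b1} admits a quick numerical sanity check. The sketch below uses an illustrative parameter set ($n=7$, $r_0=0.9$, $\alpha_2=-0.1$; a hypothetical choice, not one of the data sets used in the figures) and verifies the throat condition $b(r_0)=r_0$ together with $b(r)<r$ outside the throat:

```python
import math

# Illustrative (hypothetical) parameters, chosen so that the principal
# square-root branch of Eq. (b1) is valid: n = 7, r0 = 0.9, alpha2 = -0.1.
n, r0, alpha2 = 7, 0.9, -0.1
Bg = 0.0001052631579          # bag constant in km^-2, the value used in the paper

C = 8 * Bg / (n - 2)          # C = 8 B_g / (n - 2)
l = 2 * (2 * n - 5)           # l = 2(2n - 5)
C1 = (l * (r0**2 + alpha2) - C * r0**4) / (l * r0**(4 - l))

def xi(r):
    """xi(r) = C/l + C1 r^{-l}, the solution of r xi' + l xi = C (Eq. xi2)."""
    return C / l + C1 * r**(-l)

def b(r):
    """Shape function b(r) from Eq. (b1)."""
    return r**3 / (2 * alpha2) * (-1.0 + math.sqrt(1.0 + 4.0 * alpha2 * xi(r)))

print(b(r0))                                 # ~0.9, i.e. b(r0) = r0
print(all(b(r) < r for r in (0.95, 1.0, 1.1)))
```

With these values the throat condition is reproduced to machine precision, $b(r) < r$ just outside the throat, and a finite-difference estimate of $b'(r_0)$ stays below 1, in line with the flare-out requirement.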
\begin{figure} [thbp] \includegraphics[width=6.0cm]{condn.eps} \caption { Plot showing the values of $\frac{| \alpha_2 | b}{r^3}$ for $n = 7, 8, 9, 10, 11$, with $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$.}\label{condn} \end{figure} As can be noted from Fig. \ref{condn}, the values of $\frac{| \alpha_2 | b}{r^3}$ are acceptably small everywhere near the throat of the wormhole. To test the accuracy of the solution for the shape function, we define the residual of the shape function \cite{Finlayson66} as \[ Res[b(r)] = r \xi^{\prime} + 2(2n-11) \xi + \] \begin{equation} 6 \frac{b}{r^3} \left(2 + \alpha_2 \frac{b}{r^3}\right) - \frac{8B_g}{n-2}, \end{equation} which would vanish identically for the exact solution. From Fig. \ref{resb} note that Res[b(r)] is very small compared to the value of the shape function, which indicates that the analytic solution under the weak coupling approximation is highly accurate. \begin{center} \begin{figure*}[thbp] \includegraphics[width=6.0cm]{Resb.eps} \caption{\small{Plot to study the residual for the shape function taking $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$ for different values of $n$.} }\label{resb} \end{figure*} \end{center} \begin{figure*}[thbp] \begin{tabular}{lr} \includegraphics[width=6.0cm]{b.eps}& \includegraphics[width=6.0cm]{b-r.eps} \end{tabular} \caption{\small{Plot to study $b(r)$ (left) and $b(r) - r$ (right) taking $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$ for different values of $n$.} }\label{b(r)} \end{figure*} The left plot of Fig. \ref{b(r)} shows that the shape function is positive and increasing in the entire region for $n = 7, 8, 9, 10, 11$. From the right plot of Fig. \ref{b(r)}, it can be noted that $b(r)<r$ for $r>r_0$, which means the flare-out condition is satisfied for $n = 7, 8, 9, 10, 11$. Yet another relation, $rb^{\prime}(r) < b(r)$, is satisfied as shown in Fig. \ref{fc1}.
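The residual defined above is straightforward to evaluate numerically. The sketch below (again with the illustrative parameter set $n=7$, $r_0=0.9$, $\alpha_2=-0.1$, a hypothetical choice rather than the data set of the figures) builds the full $\xi$ from the approximate $b(r)$, differentiates it by central finite differences, and confirms that the residual is small compared with the $O(1)$ individual terms of Eq. \ref{final-eq}:

```python
import math

n, r0, alpha2 = 7, 0.9, -0.1   # illustrative (hypothetical) parameter set
alpha3 = (alpha2 / 2)**2       # Eq. (cond1)
Bg = 0.0001052631579
C = 8 * Bg / (n - 2)
l = 2 * (2 * n - 5)
C1 = (l * (r0**2 + alpha2) - C * r0**4) / (l * r0**(4 - l))

def b(r):
    """Approximate shape function, Eq. (b1)."""
    xi_approx = C / l + C1 * r**(-l)
    return r**3 / (2 * alpha2) * (-1.0 + math.sqrt(1.0 + 4.0 * alpha2 * xi_approx))

def xi_full(r):
    """Exact xi built from b(r), keeping the alpha3 term of Eq. (xi1)."""
    u = b(r) / r**3
    return u * (1 + alpha2 * u + alpha3 * u**2)

def residual(r, h=1e-6):
    """Res[b(r)]: how well the approximate b(r) solves Eq. (final-eq)."""
    dxi = (xi_full(r + h) - xi_full(r - h)) / (2 * h)
    u = b(r) / r**3
    return r * dxi + 2 * (2 * n - 11) * xi_full(r) + 6 * u * (2 + alpha2 * u) - C

print(residual(1.0))           # small (~1e-2), versus O(1) individual terms
```

The same computation, repeated for the paper's parameter sets, is what Fig. \ref{resb} plots.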
The condition $rb^{\prime}-b=0$ gives the maximum physical radial distance ($r_{max}$) of the WH. \begin{figure}[thbp] \includegraphics[width=6.0cm]{fc1.eps} \caption {Flare-out condition of the wormhole taking $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$ for different values of $n$.} \label{fc1} \end{figure} \subsection{Density and the coupling coefficients} Substituting Eq. \ref{xi2} in Eq. \ref{rhosimp} we get \[ \rho(r) = \] \begin{equation} \frac{3(n-2)(n-3)(Cr_0^4-lr_0^2-l\alpha_2)}{4 (2n-5) r_0^{4-l}}r^{-l} +\frac{2B_g (n-1)}{(2n-5)}. \end{equation} For positive energy density we need $n > 2$ and $(Cr_0^4-lr_0^2-l\alpha_2)>0$, which gives an upper limit for $\alpha_2$: \begin{equation} (\alpha_{2})_{max} = r_0^2 \left(\frac{4B_g }{(n-2)(2n-5)}r_0^2 -1 \right), \label{alphamax} \end{equation} which is positive for $B_g=0.0001052631579$ km$^{-2}$ only when $r_0$ is of the order of $10^2$, resulting in large values of $(\alpha_{2})_{max}$. However, the present model is valid only for $\mid \alpha_2 \mid \ll 1$, which, as indicated by Eq. \ref{alphamax}, holds for $r_0<1$ and negative $\alpha_2$. Thus $\alpha_2$ must be negative for the energy density to be positive and the present model to remain valid. Note that the present wormhole model is physically valid in the parameter range $0<r_0<1$ and $-1<\alpha_2<0$. We have taken different data sets of ($r_0$, $\alpha_2$) for the analysis of the model. The graphical nature of the physical functions and conditions is shown for the data set ($r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$) for different values of $n$.
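The bound in Eq. \ref{alphamax} is easy to evaluate. For example, for $n=7$, $r_0=0.9$ and the value of $B_g$ used here, $(\alpha_2)_{max}$ comes out just above $-0.81$, so the data set ($r_0=0.9$, $\alpha_2=-0.81$) sits at the edge of the physically acceptable region (a sketch, not part of the paper's computations):

```python
def alpha2_max(n, r0, Bg):
    """Upper bound on alpha_2 from Eq. (alphamax)."""
    return r0**2 * (4 * Bg * r0**2 / ((n - 2) * (2 * n - 5)) - 1)

Bg = 0.0001052631579   # km^-2
a_max = alpha2_max(n=7, r0=0.9, Bg=Bg)
print(a_max)           # ~ -0.80999: negative, and just above alpha_2 = -0.81
print(-0.81 < a_max < 0)
```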
\begin{table}[thbp] \caption{Study of the effect of parameter values on the physical acceptance of the model} \label{tbl-1} \begin{tabular}{|c|c|c|c|c|c|} \hline $r_0$ & $\alpha_2$ & $b(r)$ & $b(r) - r$ & $r_{max}$ & $rb^{\prime}-b(r)$ \\ \hline 0.9 & -0.81 & +ve \& increasing & -ve for $r>r_0$ & $ 1.489 r_0$ & satisfied \\ \hline 0.8 & -0.64 & +ve \& increasing & -ve for $r>r_0$ & $ 1.529 r_0$ & satisfied \\ \hline 0.7 & -0.49 & +ve \& increasing & -ve for $r>r_0$ & $ 1.575 r_0$ & satisfied \\ \hline 0.6 & -0.36 & +ve \& increasing & -ve for $r>r_0$ & $1.629 r_0$ & satisfied \\ \hline 0.5 & -0.25 & +ve \& increasing & -ve for $r>r_0$ & $1.697 r_0$ & satisfied \\ \hline 0.4 & -0.16 & +ve \& increasing & -ve for $r>r_0$ & $1.784 r_0$ & satisfied \\ \hline 0.3 & -0.09 & +ve \& increasing & -ve for $r>r_0$ & $ 1.901 r_0$ & satisfied \\ \hline 0.2 & -0.04 & +ve \& increasing & -ve for $r>r_0$ & $ 2.080 r_0$ & satisfied \\ \hline 0.1 & -0.01 & +ve \& increasing & -ve for $r>r_0$ & $ 2.427 r_0$ & satisfied \\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Study of the effect of parameter values on the satisfaction of energy conditions for the WH} \label{tbl-2} \begin{tabular}{|c|c|c|c|} \hline $r_0$ & $\alpha_2$ & $\rho$ \& $\rho+p_{\|}$ \& $\rho+p_{\bot}$ & $\rho+p_{\|}+ (n-2)p_{\bot}$ \\ \hline 0.9 & -0.81 & +ve \& decreasing & -ve \& increasing \\ \hline 0.8 & -0.64 & +ve \& decreasing & -ve \& increasing \\ \hline 0.7 & -0.49 & +ve \& decreasing & -ve \& increasing \\ \hline 0.6 & -0.36 & +ve \& decreasing & -ve \& increasing \\ \hline 0.5 & -0.25 & +ve \& decreasing & -ve \& increasing \\ \hline 0.4 & -0.16 & +ve \& decreasing & -ve \& increasing \\ \hline 0.3 & -0.09 & +ve \& decreasing & -ve \& increasing \\ \hline 0.2 & -0.04 & +ve \& decreasing & -ve \& increasing \\ \hline 0.1 & -0.01 & +ve \& decreasing & -ve \& increasing \\ \hline \end{tabular} \end{table} \subsection{Energy conditions} The standard point-wise energy conditions of
classical general relativity are helpful for extracting significant information about the matter distribution without assuming a particular equation of state. We consider three energy conditions, namely the Null Energy Condition (NEC), the Weak Energy Condition (WEC) and the Strong Energy Condition (SEC). In terms of the principal pressures the NEC is given by the following inequalities: \begin{eqnarray} \rho + p_{\|} \geq 0, \rho + p_{\bot} \geq 0, \label{NEC} \end{eqnarray} whereas the WEC is given by the following inequalities: \begin{eqnarray} \rho \geq 0,\label{WEC1} \rho + p_{\|} \geq 0, \rho + p_{\bot} \geq 0.\label{WEC} \end{eqnarray} The SEC can be expressed as: \begin{eqnarray} \rho + \Sigma_{i} p_{i} \geq 0, \rho + p_{i} \geq 0. \label{SEC} \end{eqnarray} With the weak coupling arguments mentioned earlier, the radial pressure given in Eq. \ref{pr2} reduces to \begin{equation}\label{prsimp} p_{\|} (r) = -\frac{(n-2)(n-3)}{2} \xi. \end{equation} For our present model of the WH, to verify the NEC, WEC and SEC, we consider the following combinations, respectively: \begin{equation} \rho + p_{\|} = \frac{(n-2)}{2}\left(r \xi^{\prime}+2\xi \right), \end{equation} \begin{equation} \rho + p_{\bot} = r \xi^{\prime} + (n-2) \xi + 2 \alpha_2 r \xi \xi^{\prime}, \end{equation} \[ \rho+p_{\|}+ (n- 2) p_{\bot}= \] \begin{equation} -\frac{(n-2)}{2} \left[ (n-4) \{ r \xi^{\prime} + (n - 1) \xi \} - 4 \alpha_2 r \xi \xi^{\prime}\right]. \end{equation} It may be noted from Fig. \ref{EC1} and the left plot of Fig. \ref{EC2} that the NEC and WEC are satisfied by the constituent matter of the WH. From Table \ref{tbl-2}, it is noted that the NEC and WEC are satisfied for $0<r_0<1$ and $-1<\alpha_2<0$. These two constraints are physically acceptable.
\begin{figure*}[thbp] \begin{tabular}{lr} \includegraphics[width=6.0cm]{E1.eps}& \includegraphics[width=6.0cm]{E2.eps} \end{tabular} \caption{\small{Plot to study $\rho$ and $\rho + p_{r}$ taking $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$ for different values of $n$.} }\label{EC1} \end{figure*} \begin{figure*}[thbp] \begin{tabular}{lr} \includegraphics[width=6.0cm]{E3.eps}& \includegraphics[width=6.0cm]{E4.eps} \end{tabular} \caption{\small{Plot to study $\rho + p_{t}$ and $\rho + p_{r} +(n-2)p_{t}$ taking $r_0=0.9$, $\alpha_2=-0.81$, $B_g=0.0001052631579$ km$^{-2}$ for different values of $n$.} }\label{EC2} \end{figure*} \section{Concluding Remarks} Several studies \cite{dehghani,mehdizadeh2012} reported the existence of WH geometries that do not need the support of exotic matter under Lovelock gravity. Motivated by those results, we have explored a WH constituted of quark matter obeying the MIT bag equation of state under third order Lovelock gravity. We take a constant redshift function, which is compatible with the necessary conditions for the existence of a WH. A specific relation between the second and third order Lovelock coefficients ($\alpha_2$ and $\alpha_3$ respectively) is considered, i.e., $\alpha_3 = \left( \frac{\alpha_2}{2} \right)^2$. As can be noted from this relation, only positive values of $\alpha_3$ are possible. More specifically, we get a WH solution supported by quark matter for $0 < r_0 < 1$ and $-1 < \alpha_2 < 0$, as the constituent matter satisfies the NEC as well as the WEC in this range of parameter values. In this connection we would like to mention that the above result implies that for any observer the energy density is non-negative \cite{Hawking} in this range of parameter values, in conformity with the physical nature of quark matter. However, the matter violates the SEC everywhere near the throat of the WH. It is important to note that violation of the SEC is quite common, even in classical physics \cite{Hochberg99,Molina99}.
Based on observational data, Visser \cite{Visser97} confirmed the violation of the SEC during the evolution of the universe at some time since the epoch of galaxy formation. It was pointed out by Hawking and Ellis \cite{Hawking} that violation of the SEC may take place due to a large negative pressure. A massive scalar field may possibly violate the SEC \cite{Tipler78}. However, a prescription has been provided by Biswas et al. \cite{Biswas2020}, who point out that this type of violation of the SEC, especially for a scalar field with a positive potential and any cosmological inflationary process \cite{Hawking}, can be overcome under an alternative theory of gravity, since a violation of the SEC would signal a breakdown of the classical regime of GTR. They \cite{Biswas2020} have studied anisotropic spherically symmetric strange stars under the background of $f(R,T)$ gravity and have shown, for the strange star candidate PSR J1614$-$2230, that there is no violation of the SEC. Therefore, it seems that Lovelock gravity serves this purpose adequately. In the present investigation, to make the Einstein field equations simpler, we have made the choice $\Phi = constant$. However, this immediately raises the following issue: to what extent is this choice physically acceptable? One can therefore carry out the present analysis with $\Phi \neq constant$ and explore the possibility of solving the Einstein equations, either analytically or numerically, to obtain physically viable solutions. \section*{Acknowledgments} KC, FR and SR acknowledge the support from the authority of Inter-University Centre for Astronomy and Astrophysics, Pune, India by providing them Visiting Associateship under which a part of this work was carried out. AA also thanks the authority of IUCAA, Pune, India.
\section{Introduction} In recent years, abstractive summarization \cite{Abstractive} has made impressive progress with the development of the sequence-to-sequence (seq2seq) framework \cite{seq2seq1,seq2seq2}. This framework is composed of an encoder and a decoder. The encoder processes the source text and extracts the necessary information for the decoder, which then predicts each word of the summary. Thanks to their generative nature, abstractive summaries can include novel expressions never seen in the source text; at the same time, abstractive summaries are more difficult to produce than extractive summaries \cite{extractive1,extractive2}, which are formed by directly selecting a subset of the source text. It has also been found that seq2seq-based abstractive methods usually struggle to generate out-of-vocabulary (OOV) words or rare words, even if those words can be found in the source text. The copy mechanism \cite{pointer0} can alleviate this problem while maintaining the expressive power of the seq2seq framework. The idea is to allow the decoder not only to generate a summary from scratch but also to copy words from the source text. Though effective in English text summarization, the copy mechanism remains relatively undeveloped in the summarization of some East Asian languages, e.g., Chinese. Generally speaking, abstractive methods for Chinese text summarization come in two varieties, word-based and character-based. Since there is no explicit delimiter in Chinese sentences to indicate word boundaries, the first step of word-based methods \cite{LCSTS} is to perform word segmentation \cite{seg1,seg2}. In order to avoid segmentation errors and to reduce the vocabulary size, most of the existing methods are character-based \cite{global,vae1,contrastive}.
When trying to combine character-based methods in Chinese with the copy mechanism, the original ``word copy'' degrades to ``character copy'', which does not guarantee that a multi-character word is copied verbatim from the source text \cite{pointer1}. Unfortunately, copying multi-character words is quite common in Chinese summarization tasks. Take the Large Scale Chinese Social Media Text Summarization Dataset (LCSTS) \cite{LCSTS} as an example: according to Table~\ref{weibo}, about 37\% of the words in the summaries are copied from the source texts and consist of multiple characters. \begin{table} \caption{Percentage of different types of words occurring in the summaries of the Chinese text summarization training data. } \begin{center} \resizebox{0.7\linewidth}{!}{ \begin{tabular}{ccc} \hline \textbf{Word Len.}&\textbf{Copied}&\textbf{Generated}\\ \hline 1&21.6\%&12.3\%\\ 2&28.9\%&21.8\%\\ $\geqslant$3&7.6\%&7.7\%\\\hline \end{tabular} } \end{center} \label{weibo} \end{table} Selective read \cite{pointer1} was proposed to handle this problem. It calculates the weighted sum of the encoder states corresponding to the last generated character and adds this result to the input of the next decoding step. Selective read can provide location information of the source text for the decoder and help it to perform consecutive copies. A disadvantage of this approach, however, is that it increases the reliance of the present computation on partial results from previous steps, which makes the model more vulnerable to error accumulation and leads to exposure bias \cite{exposure} during inference. Another way to make the copied content consecutive is to directly copy text spans. Zhou et al. \cite{seqcopynet} implement the span copy operation by equipping the decoder with a module that predicts the start and end positions of the span.
Because a longer span can be decomposed into shorter ones, there are actually many different paths that generate the same summary during inference, but their model is optimized with only the longest common span at each time step during training, which exacerbates the discrepancy between the two phases. In this work, we propose a novel lexicon-constrained copying network (LCN). The decoder of the LCN can copy either a single character or a text span at a time, and we constrain the text span to match a potential multi-character word. Specifically, given a text and several off-the-shelf word segmenters, if a text span is included in any segmentation result of the text, we consider it a potential word. By doing so, the number of available spans is significantly reduced, making it viable to marginalize over all possible paths during training. Furthermore, during inference, we aggregate on the fly all partial paths that produce the same output using a word-enhanced beam search algorithm, which encourages the model to copy multi-character words and facilitates parallel computation. To be in line with the aforementioned decoder, the encoder should be revised to learn representations of not only characters but also multi-character words. In the context of neural machine translation, Su et al. \cite{lattice1} first organized characters and multi-character words in a directed graph named a word-lattice. Following Xiao et al. \cite{lattice2}, we adopt an encoder based on the Transformer \cite{transformer} that takes the word-lattice as input and allows each character and word to have its own hidden representation. By taking relative positional information into account when calculating self-attention, our encoder can capture both global and local dependencies among tokens, providing an informative representation of the source text for the decoder to make copy decisions.
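The notion of a potential word can be sketched in a few lines. In the sketch below the two segmentation lists stand in for the outputs of two off-the-shelf segmenters on the same text (the segmentations are illustrative, not real toolkit outputs):

```python
def potential_words(segmentations):
    """Collect every multi-character token that occurs in at least one
    segmentation result of the same source text."""
    words = set()
    for seg in segmentations:
        words.update(tok for tok in seg if len(tok) > 1)
    return words

# Two hypothetical segmentations of the same source text.
seg_a = ["南京", "市", "长江", "大桥"]
seg_b = ["南京市", "长江大桥"]
print(sorted(potential_words([seg_a, seg_b])))
```

Only spans that some segmenter treats as a word become copyable units, which is what keeps the number of available spans small enough to marginalize over.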
Although our model is character-based (because only characters are included in the input vocabulary), it can directly utilize word-level prior knowledge, such as keywords. In our setting, keywords refer to words in the source text that have a high probability of inclusion in the summary. Inspired by Gehrmann et al. \cite{bottom}, we adopt a separate word selector based on a large pre-trained language model, e.g., BERT \cite{Bert}, to extract keywords. When the decoder intends to copy words from the source text, those selected keywords are treated as candidates, and other words are masked out. Experimental results show that our model achieves better performance when incorporating the word selector. \section{Related Work} Most existing neural methods for abstractive summarization fall into the sequence-to-sequence framework. Among them, models based on recurrent neural networks (RNNs) \cite{RNN1,RNN2,RNN3} are more common than those built on convolutional neural networks (CNNs) \cite{Abstractive,cnn2}, because the former can more effectively handle long sequences. Attention \cite{atten} is easily integrated with RNNs and CNNs, as it allows the model to focus more on salient parts of the source text \cite{atten2,atten3}. Also, it can serve as a pointer that selects words in the source text for copying \cite{pointer0,pointer1}. In particular, architectures that are constructed entirely of attention, e.g., the Transformer \cite{transformer}, can be adopted to capture global dependencies between the source text and the summary \cite{bottom}. Prior knowledge has proven helpful for generating informative and readable summaries. Templates retrieved from the training data can guide the summarization process at the sentence level when encoded in conjunction with the source text \cite{template1,template2}. Song et al. \cite{structure} show that syntactic structure can help to locate the content that is worth keeping in the summary, such as the main verb.
Keywords are commonly used in Chinese text summarization. When the decoder queries the source representation, Wang and Ren \cite{keyword1} use keywords extracted by an unsupervised method to exclude noisy and redundant information. Deng et al. \cite{keyword2} propose a word-based model that not only utilizes keywords in the decoding process, but also adds keywords produced by a generative method to the vocabulary in the hope of alleviating the OOV problem. Our model is drastically different from the above two models in the way keywords are extracted and encoded. The most closely related works are in the field of neural machine translation, where many researchers resort to the assistance of multi-granularity information. On the source side, Su et al. \cite{lattice1} use an RNN-based network to encode the word-lattice, an input graph that contains both words and characters. Xiao et al. \cite{lattice2} apply the lattice-structured input to the Transformer \cite{transformer} and generalize the lattice construction to the subword level. To take full advantage of multi-head attention in the Transformer, Nguyen et al. \cite{granularity1} first partition the input sequence into phrase fragments based on n-gram type and then allow each head to attend to either one certain n-gram type or all different n-gram types at the same time. In addition to n-gram phrases, the multi-granularity self-attention proposed by Hao et al. \cite{granularity2} also attends to syntactic phrases obtained from syntactic trees to enhance structure modeling. On the target side, when the decoder produces an UNK symbol, which denotes a rare or unknown word, Luong et al. restore it to a natural word using a character-level component. Srinivasan et al.
\cite{subword2} adopt multiple decoders that map the same input into translations at different subword levels, and combine all the translations into the final result, trying to improve the flexibility of the model without losing semantic information. While our model and the above models all utilize multi-granularity information, our model differs in that we impose a lexical constraint on both encoding and decoding. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{fig/fig1.pdf} \caption{Structure overview of the proposed model.} \label{fig:first} \end{figure*} \section{Model} This section describes our proposed model in detail. \subsection{Notation} Let the character sequence $x_{1:I}=\{x_1,...,x_I\}$ be a source text. We call a text span $x_{i:j}$, which starts with $x_i$ and ends with $x_j$, a potential word if it is contained in any word segmentation result of $x_{1:I}$. Because both characters and words can be regarded as tokens, we include all characters and potential words of the source text in a token sequence $o_{1:M}=\{o_1,...,o_M\}$. \subsection{Input Representation } \label{sec:input} Given a token $o_m=\{x^1,...,x^l\}$, where $l$ is the token length ($l=1$ when $o_m$ is a character), we first convert it into a sequence of vectors, using the character embedding $\textbf{E}^c$. Then a bi-directional Long Short-term Memory network (bi-LSTM) is applied to model the token composition: \begin{equation} \textbf{g}_m=[\overleftarrow{LSTM}(\textbf{E}^c(x^{1})),\overrightarrow{LSTM}(\textbf{E}^c(x^{l}))] \end{equation} where $\textbf{g}_m\in\mathbb{R}^{d}$ denotes the input token representation, which is formed by concatenating the backward state of the beginning character and the forward state of the ending character. Since the Transformer has no sequential structure, Vaswani et al. \cite{transformer} proposed positional encoding to explicitly model the order of the sequence.
In this work, we assign each token an absolute position which depends on the first character of the token. For example, the absolute position of the word ``留学 (studying abroad)'' in Fig.~\ref{fig:first} is the same as that of the character ``留 (stay)''. By adding the encoding of the absolute position to the token representation, we can get the final input representation $\textbf{G}=\{\textbf{g}_1,...,\textbf{g}_M\}$. \subsection{Encoder} \begin{table} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \scriptsize \caption{Position of $x_{c:d}$ relative to $x_{a:b}$ under different conditions. Constant $r$ is used to limit the relative position from $0$ to $2r+3$, where $2r+1$ to $2r+3$ each represent special cases: $x_{c:d}$ includes $x_{a:b}$, $x_{c:d}$ is included in $x_{a:b}$, and $x_{c:d}$ intersects with $x_{a:b}$. } \begin{center} \resizebox{0.7\linewidth}{!}{ \begin{tabular}{|c|c |} \hline \textbf{Conditions}&\textbf{Relative Position}\\ \hline $d< a$ &$max(0,r-a+d)$\\ \hline $a=c$ and $b=d$ & $r$\\ \hline $b< c$ &$min(2r,r+c-b)$\\ \hline \tabincell{c}{$c \leq\ a \leq b<d$ or \\ $c<a \leq b \leq d$} &$2r+1$\\ \hline \tabincell{c}{$a \leq c \leq d<b$ or \\ $a<c \leq d \leq b$} &$2r+2$\\ \hline otherwise &$2r+3$\\ \hline \end{tabular} } \end{center} \label{relations} \end{table} However, absolute position alone cannot precisely reflect the relationship among tokens. Consider again the example in Fig.~\ref{fig:first}, the distance between the word ``留学 (studying abroad)'' and the character ``生 (life)'' is 2 according to their absolute positions, but they are actually neighboring tokens in a certain segmentation. To alleviate this problem, Xiao et al. \cite{lattice2} extend the Transformer \cite{transformer} by taking into account relation types when calculating self-attention. In this work, we adopt relative position as an alternative to relation type. 
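The case analysis of Table \ref{relations} translates directly into code. The sketch below is our reading of the table, with tokens written as inclusive character index pairs; the clipping constant $r$ and the test values are illustrative:

```python
def relative_position(a, b, c, d, r):
    """Position of token x_{c:d} relative to token x_{a:b} (Table II).
    Indices are inclusive; r clips plain left/right offsets to [0, 2r]."""
    if d < a:                      # x_{c:d} lies entirely before x_{a:b}
        return max(0, r - a + d)
    if a == c and b == d:          # identical spans
        return r
    if b < c:                      # x_{c:d} lies entirely after x_{a:b}
        return min(2 * r, r + c - b)
    if (c <= a <= b < d) or (c < a <= b <= d):
        return 2 * r + 1           # x_{c:d} includes x_{a:b}
    if (a <= c <= d < b) or (a < c <= d <= b):
        return 2 * r + 2           # x_{c:d} is included in x_{a:b}
    return 2 * r + 3               # the two spans intersect

r = 4
print(relative_position(3, 5, 3, 5, r))  # 4  (identical spans map to r)
print(relative_position(3, 5, 0, 1, r))  # 2  (x_{0:1} lies 2 steps to the left)
print(relative_position(3, 5, 4, 7, r))  # 11 (overlapping spans, 2r+3)
```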
The main idea is that relative position is complementary to absolute position and can guide each token to interact with other tokens in a coherent manner. Given two tokens $x_{a:b}$ and $x_{c:d}$ that correspond to a span of the source text each, the position of $x_{c:d}$ relative to $x_{a:b}$ is determined by both their beginning and ending characters as shown in Table \ref{relations}. Following Xiao et al. \cite{lattice2}, we revise self-attention to integrate relative positional information. Concretely, a self-attention layer consists of $h$ heads, which operate in parallel on a sequence $\textbf{H}=\{\textbf{h}_1,...,\textbf{h}_M\}$ of context vectors with dimension $d$. After modification, the resulting output \textbf{attn} for each attention head is defined as: \begin{equation} e_{ij}=\frac{(\textbf{W}^q\textbf{h}_i ) (\textbf{W}^k \textbf{h}_j + \textbf{p}_{ij}^k)^T}{\sqrt{d_z}} \end{equation} \begin{equation} a_{ij}=\frac{exp(e_{ij})}{\sum_{k=1}^{M}exp(e_{ik})} \end{equation} \begin{equation} \textbf{attn}_{i}=\sum_{j=1}^{M}a_{ij}(\textbf{W}^v \textbf{h}_j+ \textbf{p}_{ij}^v) \end{equation} where $\textbf{W}^q$, $\textbf{W}^k$, $\textbf{W}^v \in\mathbb{R}^{d_z \times d}$ are all model parameters, $d_z=d/h$ is the hidden dimension for each head, $\textbf{p}_{ij}^k$ and $\textbf{p}_{ij}^v$ are learned embeddings that encode the position of token $t_j$ relative to token $t_i$. We concatenate the outputs of all heads to restore their dimension to $d$, and then apply other sub-layers (such as feed-forward layer) used in the original Transformer \cite{transformer} to get the final output of the layer. Several identical self-attention layers are stacked to build our encoder. For the first layer, $\textbf{H}$ is input representation $\textbf{G}$. For the subsequent layers, $\textbf{H}$ is the output of the previous layer. \subsection{Decoder} The encoder proposed by Xiao et al. 
\cite{lattice2} takes both characters and words as input and thus has the ability to learn multi-granularity representations. However, as their decoder is character-based, consuming and producing only characters, the word representations induced by the encoder cannot receive a supervision signal directly from the decoder and remain a subsidiary part of the input memory. To alleviate this problem, we extend the standard Transformer decoder with a lexicon-constrained copying module, which not only allows the decoder to perform multi-character word copy but also provides auxiliary supervision on the word representations. Specifically, at each time step $t$ we leverage a single-head attention over the input memory $\textbf{H}=\{\textbf{h}_1,...,\textbf{h}_M\}$ and the decoder hidden state $\textbf{s}_t$ to produce the copy distribution $\textbf{a}_t$ and the context vector $\textbf{c}_t$: \begin{equation} e_{tj}=\frac{(\textbf{W}_{copy}^q \textbf{s}_t ) (\textbf{W}_{copy}^k \textbf{h}_j )^T}{\sqrt{d}} \end{equation} \begin{equation} \label{mask} a_{tj}=\frac{exp(e_{tj})}{\sum_{k=1}^{M}exp(e_{tk})} \end{equation} \begin{equation} \textbf{c}_{t}=\sum_{j=1}^{M}a_{tj}(\textbf{W}_{copy}^v \textbf{h}_j) \end{equation} In addition to the predefined character vocabulary $\mathcal V$ and a special token UNK denoting any out-of-vocabulary token, the lexicon-constrained copying module expands the output space with two sets $\mathcal C$ and $\mathcal W$, consisting of the characters and multi-character words that appear in the source text, respectively, so that the probability of emitting any token $o$ at time step $t$ is: \begin{equation} P(o|\cdot)=\left\{ \begin{array}{lc} p_{gen}gen(o) + p_{copy}\sum_{i:o_i=o}a_{ti} & o \in \mathcal V \cup \mathcal C \\ p_{copy}\sum_{i:o_i=o}a_{ti} & o \in \mathcal W\\ p_{gen}gen(\text{UNK} ) & otherwise \\ \end{array} \right.
\end{equation} where $p_{gen}$, $p_{copy}\in [ 0,1 ]$ control the decoder switching between generation mode and copy mode, and $gen(\cdot)$ provides a probability distribution over the character vocabulary $\mathcal V$: \begin{equation} p_{gen}=\sigma(\textbf{V}_g[\textbf{s}_t,\textbf{c}_t]+\textbf{b}_p) \end{equation} \begin{equation} p_{copy}=1-p_{gen} \end{equation} \begin{equation} gen(\cdot)=\text{softmax}(\hat{\textbf{V}_g} (\textbf{V}_g[\textbf{s}_t,\textbf{c}_t]+\textbf{b})+\hat{\textbf{b}_g}) \end{equation} where $\sigma(\cdot)$ is the sigmoid function. With the introduction of the lexicon-constrained copying module, our decoder can predict tokens of variable lengths at each time step, and thereby can generate any segmentation of a sentence. Naturally, we expect to evaluate the probability of a summary by marginalizing over all its segmentations. For example, the probability of a summary consisting of only the word ``北京 (BeiJing)'' can be factorized as: \begin{equation} \begin{aligned} &P(\text{北,京})=P(\text{北}|\epsilon)P(\text{京}|\text{北})P(\epsilon|\text{北,京})\\ &\qquad+P(\text{北京}|\epsilon) P(\epsilon|\text{北京}) \nonumber \end{aligned} \end{equation} where each term corresponds to a segmentation and is the product of conditional probabilities; we use $\epsilon$ to denote either the beginning or the end of a sentence. Note that the conditional probability here depends on the current segmentation, which means the decoder directly takes the tokens of a segmentation as input. However, if we feed the decoder with character-level input and reformulate the conditional probability accordingly, the above probability can be rewritten as: \begin{equation} \!P(\text{北,京})=P(\epsilon|\text{北,京})\Big(P(\text{北}|\epsilon) P(\text{京}|\text{北})\! +P(\text{北京}|\epsilon)\Big)\\ \nonumber \end{equation} where we factor out $P(\epsilon|\text{北,京})$, because it is shared by the two different segmentations.
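This marginalization over segmentations amounts to a simple forward dynamic program over character prefixes. Below is a minimal sketch on the two-character example, with hand-picked toy probabilities standing in for the model's distribution:

```python
def summary_probability(y, prob, tokens):
    """Forward DP: alpha[j] = P(y[:j]); tokens may span multiple characters.
    prob maps (prefix, token) -> conditional probability P(token | prefix)."""
    J = len(y)
    alpha = [0.0] * (J + 1)
    alpha[0] = 1.0
    for j in range(1, J + 1):
        for tok in tokens:
            L = len(tok)
            if L <= j and y[j - L:j] == tok:
                alpha[j] += prob.get((y[:j - L], tok), 0.0) * alpha[j - L]
    return prob.get((y, "<eos>"), 0.0) * alpha[J]

# Toy probabilities (illustrative numbers only).
prob = {
    ("", "北"): 0.5, ("北", "京"): 0.6,   # character-by-character path
    ("", "北京"): 0.3,                    # word-copy path
    ("北京", "<eos>"): 0.8,
}
p = summary_probability("北京", prob, ["北", "京", "北京"])
print(round(p, 6))   # 0.48, i.e. 0.8 * (0.5 * 0.6 + 0.3)
```

The shared end-of-sentence factor is applied once, exactly as in the factored expression above.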
As can be seen from the above example, the assumption that the conditional probability of a token depends only on its preceding character sequence facilitates the reuse of computation and thus makes it feasible to apply dynamic programming. Formally, let the character sequence $y_{1:J}=\{y_1,...,y_J\}$ be a summary; its probability can be represented as a recursion: \begin{equation} \label{recur} P(y_{1:J})=\quad\sum_{\mathclap{\tiny\begin{array}{c} o\in \mathcal V \cup \mathcal C \cup \mathcal W \\ \!o=y_{J-\ell+1:J}\! \end{array}}}\quad P(o|y_{1:J-\ell})P(y_{1:J-\ell}) \end{equation} where $\ell$ denotes the character length of token $o$. Note that all the above $P(\cdot)$ are inevitably conditioned on the source text, which we omit for simplicity. We train the model by maximizing $P(y_{1:J})$ for all training pairs in the dataset. During inference, since there is no access to the ground truth, we need a search algorithm that can guide the generation of the summary in a left-to-right fashion. Beam search \cite{beam} is the most common search algorithm in seq2seq frameworks, but it cannot be used directly in our scenario. To illustrate this, we first define a hypothesis as a partial output that consists of tokens. Hypotheses can be further divided into character hypotheses and word hypotheses based on whether their last token is a character or a multi-character word. For the hypotheses within a beam, the standard beam search algorithm updates their states by feeding their last tokens to the decoder and then generates new hypotheses by suffixing them with a token sampled from the model's distribution. Because our decoder is designed to take only characters as input, multiple decoder steps are required to update the state of a word hypothesis. As a result, it is difficult to conduct a batched update for a beam containing both word hypotheses and character hypotheses. To this end, we propose a novel word-enhanced beam search algorithm, where the beam is split into two parts: the character beam and the word beam.
The word beam is used to update the states of word hypotheses. When their states are fully updated, word hypotheses are placed into the character beam (see lines 5-8 of Algorithm~\ref{alg:beam}). Note that we do not perform a generation step for word hypotheses in the word beam; that is to say, at the same length, the more multi-character words a hypothesis includes, the more generation steps it can skip, which may give it a higher probability. \begin{algorithm}[htp] \caption{Pseudo-code for word-enhanced beam search} \label{alg:beam} \begin{algorithmic}[1] \Require $model$, $source$, maximum summary length $L$, beam size $k$ \State $startHyp$$\leftarrow$getStartHyp($\epsilon$) \State $B_c$=\{$startHyp$\},$B_w$=\{\} and $B_f$=\{\} \Statex $//$ Respectively the character beam, word beam, and the set of finished hypotheses \For{$t=0;\ t<L;\ t{+}{+}$} \State $n,m \leftarrow \{\}$ \State $B_c,B_w$$\leftarrow$$model$.batchedUpdate($B_c,B_w,source$) \Statex \quad \,\,// Batched update for hypos in both $B_c$ and $B_w$ \For{$hyp \in B_w$} \If{$hyp$.isUpdated} \State move $hyp$ into $B_c$ \EndIf \EndFor \State Merge hypos of the same character sequence in $B_c$ \State $n \leftarrow$ $model$.generate($B_c$) \Statex \quad \,\,$//$ Generating new hypotheses and their respective \Statex \quad \, log probabilities from $B_c$ \For{$hyp \in n$} \If{$hyp$ ends with a multi-character word} \State Move $hyp$ from $n$ into $m$ \EndIf \EndFor \State $B_w \leftarrow \mathop{\text{k-argmax}}\limits_{hyp\in m}$ $hyp$.avgLogProb \State $B_c \leftarrow \mathop{\text{k-argmax}}\limits_{hyp\in n}$ $hyp$.avgLogProb \For{$hyp \in B_c$} \If{$hyp$ ends with $\epsilon$ {\textbf{or}} $hyp$.len=$L$} \State Move $hyp$ from $B_c$ into $B_f$ \EndIf \EndFor \EndFor \State $finalHyp \leftarrow \mathop{\text{argmax}}\limits_{hyp\in B_f}$ $hyp$.avgLogProb \State \textbf{return} $finalHyp$ \end{algorithmic} \end{algorithm} \subsection{Word Selector} We treat keyword selection as a binary classification task on each
potential word. To obtain word representations, we first leverage BERT \cite{Bert}, a pre-trained language model, to produce context-aware representations $\textbf{x}^c$ for all characters in the source text, and then feed them to a bi-LSTM network. Different from Section~\ref{sec:input}, where the bi-LSTM is applied to the character sequence of each word, here the bi-LSTM takes the whole sequence of source character representations as input, in an attempt to build word representations that reflect how much contextual information each word carries. Given a potential word $x_{a:b}$, where $a$ and $b$ are indexes of characters in the source text, we can calculate its final representation $\textbf{t}$ as follows: \begin{equation} \begin{array}{c} \overrightarrow{\textbf{d}_a}=\overrightarrow{LSTM}(\textbf{x}^c_a),\overrightarrow{\textbf{d}_b}=\overrightarrow{LSTM}(\textbf{x}^c_b) \\ \overleftarrow{\textbf{d}_a}=\overleftarrow{LSTM}(\textbf{x}^c_a),\overleftarrow{\textbf{d}_b}=\overleftarrow{LSTM}(\textbf{x}^c_b) \\ \textbf{t}=[\overrightarrow{\textbf{d}_a},\overrightarrow{\textbf{d}_b},\overleftarrow{\textbf{d}_a},\overleftarrow{\textbf{d}_b},\overrightarrow{\textbf{d}_b}-\overrightarrow{\textbf{d}_a},\overleftarrow{\textbf{d}_a}-\overleftarrow{\textbf{d}_b}] \end{array} \end{equation} Then a linear transformation layer and a sigmoid function are applied sequentially to this final representation to compute the probability of $x_{a:b}$ being selected. During training, words that appear in both the summary and the source text are considered positives, and the rest negatives. To make sure that the decoder can access the entire source character sequence at inference time, in addition to the multi-character words with the top-$n$ probabilities, we treat all characters in the source text as keywords. Inspired by \cite{bottom}, we utilize keyword information by masking out other words when calculating the copy distribution.
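As an illustration, the span representation above can be assembled from the endpoint hidden states of the two LSTM directions. The sketch below uses plain Python with random placeholder hidden states; the helper names and the toy weights are ours, not part of the model:

```python
import math
import random

def span_representation(fwd, bwd, a, b):
    """Word representation t for the span x_{a:b}.

    fwd[i] / bwd[i] are the forward / backward LSTM hidden states at
    character i (lists of floats).  Mirrors
    t = [d_a->, d_b->, d_a<-, d_b<-, d_b-> - d_a->, d_a<- - d_b<-].
    """
    diff = lambda u, v: [ui - vi for ui, vi in zip(u, v)]
    return (fwd[a] + fwd[b] + bwd[a] + bwd[b]
            + diff(fwd[b], fwd[a]) + diff(bwd[a], bwd[b]))

def select_prob(t, weights, bias):
    """Linear layer followed by a sigmoid: probability of being a keyword."""
    z = sum(w * x for w, x in zip(weights, t)) + bias
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
hidden, seq_len = 4, 6
fwd = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(seq_len)]
bwd = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(seq_len)]
t = span_representation(fwd, bwd, a=1, b=3)  # span covering characters 1..3
p = select_prob(t, weights=[0.1] * (6 * hidden), bias=0.0)
```

Note that the representation concatenates four endpoint vectors and two difference vectors, so its dimension is six times the hidden size.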
In particular, we leave $e_{tj}$ in Eq.~\ref{mask} unchanged for keywords and set $e_{tj}$ to zero for the rest of the words. \section{Experiments} \subsection{Datasets and Evaluation Metric} We conduct experiments on the Large Scale Chinese Social Media Text Summarization Dataset (LCSTS)\footnote{http://icrc.hitsz.edu.cn/Article/show/139.html} \cite{LCSTS}, which consists of source texts with no more than 140 characters, along with human-generated summaries. The dataset is divided into three parts; the (source, summary) pairs in PART II and PART III are scored manually from 1 to 5, with higher scores indicating more relevance between the source text and its summary. Following Hu et al. \cite{LCSTS}, after removing pairs with scores less than 3, PART I, PART II and PART III are used as the training set, validation set and test set respectively, with 2.4M pairs in PART I, 8K pairs in PART II and 0.7K pairs in PART III. We choose the ROUGE score \cite{rouge} as our evaluation metric, which is widely used for evaluating automatically produced summaries. The metric measures the similarity between a generated summary and its reference based on their co-occurrence statistics. In particular, ROUGE-1 and ROUGE-2 depend on unigram and bigram overlap respectively, while ROUGE-L relies on the longest common subsequence. \subsection{Experimental Setup} The character vocabulary is formed by the 4000 most frequent characters in the training set. To get all potential words, we use PKUSEG \cite{pkuseg}, a toolkit for multi-domain Chinese word segmentation. Specifically, there are separate segmenters for four domains: web, news, medicine, and tourism. We apply each of these segmenters to the source text, and if a text span is included in any of the word segmentation results, we regard it as a potential word. For the lexicon-constrained copying network, we employ six attention layers of 8 heads for both the encoder and the decoder. The constant $r$ in TABLE~\ref{relations} is set to 8.
We make the character embeddings and all hidden vectors the same dimension of 512 and set the filter size of the feed-forward layers to 1024. For the word selector, we use a single-layer bi-LSTM with a hidden size of 512. During training, we update the parameters of the lexicon-constrained copying network (LCCN) and the word selector with the Adam optimizer, using $\beta_1=0.9$, $\beta_2=0.98$, $\varepsilon=10^{-9}$. The same learning rate schedule as Vaswani et al. \cite{transformer} is used for the LCCN, while a fixed learning rate of 0.0003 is set for the word selector. The BERT we use in the word selector is pre-trained on a Chinese corpus by Wolf et al. \cite{hugging} and we freeze its parameters throughout training. During testing, we use a beam size of 10 and take the first 10 multi-character words predicted by the word selector, together with all characters in the source text, as keywords. \subsection{Baselines} \begin{itemize} \item \textbf{RNN} and \textbf{RNN-Context} are seq2seq baselines provided along with the LCSTS dataset by Hu et al. \cite{LCSTS}. Both of them have a GRU encoder and a GRU decoder, while RNN-Context has an additional attention mechanism. \item \textbf{COPYNET} integrates the copying mechanism into the seq2seq framework, trying to improve both content-based addressing and location-based addressing. \item \textbf{Supervision with Autoencoder (superAE)} uses an autoencoder trained on the summaries to provide auxiliary supervision for the internal representation of the seq2seq model. Moreover, adversarial learning is adopted to enhance this supervision. \item \textbf{Global Encoding} refines the source representation with consideration of the global context by using a convolutional gated unit. \item \textbf{Keyword and Generated Word Attention (KGWA)} exploits relevant keywords and previously generated words to learn accurate source representations and to alleviate the information loss problem.
\item \textbf{Keyword Extraction and Summary Generation (KESG)} first uses a separate seq2seq model to extract keywords, and then utilizes the keyword information to improve the quality of the summarization. \item \textbf{Transformer} and \textbf{CopyTransformer} are our implementations of the Transformer framework for the task of summarization. The copy mechanism is incorporated into \textbf{CopyTransformer}. \end{itemize} \subsection{Results} \begin{table} \scriptsize \caption{Results of different models. We use $\dagger$ to indicate that the model utilizes keyword information} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|c|c} \hline \textbf{Models}&\textbf{ROUGE-1}&\textbf{ROUGE-2}&\textbf{ROUGE-L}\\ \hline RNN \cite{LCSTS}&21.5&8.9&18.6\\ RNN-Context \cite{LCSTS}&29.9&17.4&27.2\\ COPYNET \cite{pointer1}&34.4&21.6&31.3\\ superAE \cite{vae1}&39.2&26.0&36.2\\ Global Encoding \cite{global}&39.4&26.9&36.5\\ $\text{KESG}^{\dagger}$ \cite{keyword2} &39.4&28.4&35.3\\ $\text{KGWA}^{\dagger}$ \cite{keyword1} &40.9&28.3&38.2\\ \hline Transformer&38.9&27.4&35.5\\ CopyTransformer&39.7&28.0&35.8\\ LCCN&41.7&29.5&38.0\\ w/o word-enhanced beam search&40.0&28.5&37.1\\ $\text{LCCN+word selector}^{\dagger}$&\textbf{42.3}&\textbf{29.8}&\textbf{38.4}\\ \hline \end{tabular} } \end{center} \label{results} \end{table} TABLE~\ref{results} records the results of our LCCN model and other seq2seq models on the LCSTS dataset. To begin with, we first compare the two Transformer baselines. We can see that CopyTransformer outperforms the vanilla Transformer by 0.8 ROUGE-1, 0.6 ROUGE-2, and 0.3 ROUGE-L, showing the importance of the copy mechanism. The gap between our LCCN and the vanilla Transformer is further widened to 2.8 ROUGE-1, 2.1 ROUGE-2, and 2.5 ROUGE-L, which asserts the superiority of lexicon-constrained copying over character-based copying.
Compared to other recent models, our LCCN achieves state-of-the-art performance in terms of ROUGE-1 and ROUGE-2, and is second only to KGWA in terms of ROUGE-L. When also using keyword information, as KGWA does, LCCN+word selector further improves the performance and overtakes KGWA by 0.2 ROUGE-L. We also conduct an ablation study by removing the word-enhanced beam search in LCCN, denoted by w/o word-enhanced beam search in TABLE~\ref{results}. It shows that word-enhanced beam search boosts the performance by 1.7 ROUGE-1, 1.0 ROUGE-2, and 0.9 ROUGE-L. \subsection{Discussion} \begin{table} \scriptsize \caption{Results of different approaches to extract keywords.} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|c|c} \hline \textbf{Models}&\textbf{ROUGE-1}&\textbf{ROUGE-2}&\textbf{ROUGE-L}\\ \hline TFIDF &28.6&13.1&20.7\\ encoder of LCCN&42.0&25.6&33.7\\ word selector&46.0&28.2&36.1\\ \hline \end{tabular} } \end{center} \label{selector} \end{table} \begin{table} \scriptsize \caption{Summarization examples. Summaries in the last two blocks are separated by spaces to show the output of LCCN and LCCN+word selector per step. We use keywords to denote the multi-character words chosen by the word selector.} \begin{center} \begin{tabular}{p{8cm}} \hline \textbf{Source}: 9月5日,陕西咸阳双泉村垃圾站发现一名死亡女婴,脖子上缠有绳子。近日案件告破,令人意外的是,婴儿的父母是17岁左右的在校高中生,涉嫌勒死婴儿的,是孩子父亲。“当时看见孩子生下来心就慌了,害怕孩子哭,便用绳子勒死了”。\\ On September 5, a dead baby girl with a rope around her neck was found at the Shuangquan Village Garbage Station in Xianyang, Shaanxi Province. Recently, the case was solved. Surprisingly, the parents of the baby were high school students around 17 years old. The father of the baby was suspected of strangulating his baby. "When I saw the baby born, I was in a panic.
I was afraid of the baby crying, so I strangled her with a rope."\\ \hline \textbf{Reference}: 咸阳两高中生同居生下女婴因害怕孩子哭将其勒死。\\ Two high school students in Xianyang lived together and gave birth to a baby girl, then strangled her for fear of the baby crying.\\ \hline \textbf{Transformer}: 17岁高中生当街勒死亲生父母。 \\ A 17-year-old high school student strangled his parents in the street.\\\hline \textbf{CopyTransformer}: 陕西17岁女婴儿缠绳子勒死。 \\ A 17-year-old female infant in Shaanxi was strangled with a rope.\\ \hline \textbf{LCCN}: 17岁\quad 高中生\quad 勒\quad 死\quad 婴儿。\\ A 17-year-old high school student strangled an infant.\\ \hline \textbf{LCCN+word selector}: 高中生\quad 因\quad 害怕 \quad 勒\quad 死\quad 婴儿。\\ A high school student strangled an infant out of fear.\\ \textbf{Keywords}: 女婴 (baby girl), 婴儿 (infant), 孩子 (baby), 勒死 (strangle), 害怕 (fear), 高中 (high school), 陕西 (Shaanxi), 绳子 (rope), 高中生 (high school students), 父亲 (father)\\\hline \end{tabular} \end{center} \label{example} \end{table} Similar to extractive summarization, we can use the top $n$ extracted keywords to form a summary, which can then be used to evaluate the quality of the keywords. The first entry of TABLE~\ref{selector} shows the performance when the keywords are extracted by TF-IDF \cite{tf-idf}, a statistical method that relies on word frequency. The second entry shows the performance when we determine keywords based on the source representation learned by the encoder of LCCN. As can be seen from the last entry, the word selector outperforms the two methods mentioned above by a large margin, indicating the importance of the external knowledge brought by BERT. Given a source text that describes a criminal case, we show the summaries generated by different models in TABLE~\ref{example}. It is clear that the suspect of this case is a high school student and the victim is his baby daughter.
However, the summary generated by the Transformer mistakes the high school student's parents for the victims and claims that the crime took place in the street, which is not mentioned in the source text. The summary of the CopyTransformer also makes a fundamental mistake, resulting in the mismatch between the modifier ``17岁 (17-year-old)'' and the noun ``女婴儿 (female infant)''. Compared with them, the summary of our LCCN is more faithful to the source text and contains the correct suspect and victim, i.e., ``高中生 (high school student)'' and ``婴儿 (infant)'', which are copied from the source text through only two decoder steps. With the help of the word selector, our summary can further include the keyword ``害怕 (fear)'' to indicate the criminal motive. Compared with character-based models, our LCCN uses fewer steps to output a summary, so it should be able to reduce the possibility of repetition. To verify this, we record the percentage of n-gram duplicates for summaries generated by different models in Fig.~\ref{ngram}. The results show that our model can indeed alleviate the repetition problem; we also notice that the repetition rate of LCCN+word selector is slightly higher than that of LCCN, which may be due to the smaller output space after adding the word selector. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{fig/fig2.pdf} \caption{Percentage of duplicate n-grams in generated summaries} \label{ngram} \end{figure} \section{Conclusion} In this paper, we propose a novel lexicon-constrained copying network for Chinese summarization. Querying the multi-granularity representation learned by our encoder, our decoder can copy either a character or a multi-character word at each time step. Experiments on the LCSTS dataset show that our model is superior to the Transformer baselines and quite competitive with the latest models. With the help of the keyword information provided by the word selector, it can even achieve state-of-the-art performance.
In the future, we plan to apply our model to other tasks, such as comment generation, and to other languages, such as English. \section*{Acknowledgments} This work is supported by the National Key Research and Development Program of China (2018YFB0203804, 2017YFB0202201, 2018YFB1701401), the National Natural Science Foundation of China (Grant Nos. 61873090, L1824034, L1924056), the Ministry of Education-China Mobile Research Fund Project (MCM20170506), and the China Knowledge Centre for Engineering Sciences and Technology Project (CKCEST-2018-1-13, CKCEST-2019-2-13). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Electronic-structure simulations have a profound impact on many scientific fields, from condensed-matter physics to chemistry, materials science, and engineering~\cite{marzari_electronic-structure_2021}. One of the main reasons for this is the accuracy and efficiency of Kohn-Sham density-functional theory~\cite{hohenberg_inhomogeneous_1964, kohn_self-consistent_1965} (KS-DFT), together with the availability of robust computational tools that implement and make available these fundamental theoretical developments. Nevertheless, exact KS-DFT can only describe (if the exact functional were known) the total energy of a system, including its static derivatives (or the expectation value of any local single-particle operator), precluding any access to spectroscopic information, except for the position of the highest occupied orbital~\cite{levy_exact_1984,almbladh_exact_1985,perdew_comment_1997} (HOMO) (see also Ref.~\citenum{chong_interpretation_2002} and Ref.~\citenum{pederson_densityfunctional_1985} and references therein for an in-depth discussion about the connection between KS eigenvalues and vertical ionization potentials). While the access to charge-neutral excitations can be achieved by extending the formalism to the time domain~\cite{runge_density-functional_1984}, charged excitations --- revealed in direct and inverse photoemission experiments --- are outside the realm of the theory. Accurate first-principles predictions of band gaps, photoemission spectra, and band structures require more advanced approaches, most often based on Green’s function theory~\cite{onida_electronic_2002}. For example, in solids, the so-called GW approximation~\cite{hedin_new_1965} is considered a good compromise between accuracy and computational cost. Nevertheless, these high-level methods are often limited in system size and complexity, due to their computational cost and numerical complexity. 
Despite many efforts dedicated to improving the efficiency of Green's function methods~\cite{umari_optimal_2009, umari_gw_2010, giustino_gw_2010, neuhauser_breaking_2014, govoni_large_2015, wilhelm_toward_2018, vlcek_swift_2018, umari_fully_2022}, simpler methods based on Kohn-Sham density-functional theory (KS-DFT), possibly including some fraction of non-local exchange~\cite{becke_new_1993}, are still frequently employed to approximately evaluate the spectral properties of nanostructures, interfaces, and solids. In this respect, Koopmans-compliant (KC) functionals~\cite{dabo_non-koopmans_2009, dabo_koopmans_2010, dabo_piecewise_2014, borghi_koopmans-compliant_2014, borghi_variational_2015, nguyen_koopmans-compliant_2018} have been introduced to bridge the gap between KS-DFT and Green's function theory~\cite{ferretti_bridging_2014, colonna_koopmans-compliant_2019}. KC functionals retain the advantages of a functional formulation by enforcing physically motivated conditions on approximate density functionals. In particular, the exact condition of the piecewise linearity (PWL) of the total energy as a function of the total number of electrons~\cite{perdew_density-functional_1982}, or equivalently of the occupation of the HOMO, is extended to the entire manifold, leading to a \textit{generalized PWL} of the energy as a function of each orbital occupation~\cite{dabo_koopmans_2010, borghi_koopmans-compliant_2014}.
In KS-DFT the deviation from PWL has been suggested~\cite{cococcioni_linear_2005,kulik_density_2006,mori-sanchez_many-electron_2006, cohen_insights_2008,mori-sanchez_localization_2008} as a definition of electronic self-interaction errors (SIEs)~\cite{perdew_self-interaction_1981}, and in recently developed functionals, such as DFT-corrected~\cite{zheng_improving_2011,zheng_nonempirical_2013, kraisler_piecewise_2013,kraisler_fundamental_2014, li_local_2015,gorling_exchange-correlation_2015,li_localized_2018, mei_exact_2021}, range-separated~\cite{stein_fundamental_2010,kronik_excitation_2012,refaely-abramson_quasiparticle_2012} or dielectric-dependent hybrid functionals~\cite{shimazaki_band_2008, marques_density-based_2011,skone_self-consistent_2014,brawand_generalization_2016}, it has been recognized as a critical feature to address. The generalized PWL of KC functionals leads to beyond-DFT orbital-density dependent functionals, with enough flexibility to correctly describe both ground states and charged excitations~\cite{dabo_donor_2012, nguyen_first-principles_2015, nguyen_first-principles_2016, elliott_koopmans_2019, colonna_koopmans-compliant_2019, de_almeida_electronic_2021}. In fact, while ground-state energies are typically very close or exactly identical to those of the ``base'' functional~\cite{borghi_koopmans-compliant_2014}, some of us argued that, for spectral properties, the orbital-dependent KC potentials act as a quasiparticle approximation to the spectral potential~\cite{ferretti_bridging_2014} (that is, the local and frequency-dependent potential sufficient to correctly describe the local spectral density $\rho({\bf r}, \omega)$~\cite{gatti_transforming_2007, vanzini_dynamical_2017, vanzini_spectroscopy_2018}). 
Besides the core concept of generalized PWL, KC functionals are characterized by two other features: i) the correct description of the screening and relaxation effects that take place when an electron is added/removed from the system~\cite{zhang_orbital_2015, colonna_screening_2018}, and ii) the localization of the ``variational'' orbitals, i.e. those that minimize the KC energy. This last feature is key to obtaining meaningful and accurate results in extended or periodic systems~\cite{nguyen_koopmans-compliant_2018,de_almeida_electronic_2021}, but at the same time poses some challenges since the localized nature of the variational orbitals apparently breaks the translational symmetry. Nevertheless, thanks to the Wannier-like character of the variational orbitals, the Bloch symmetry is still preserved and it is possible to describe the electronic energies with a band structure picture~\cite{de_gennaro_blochs_2021}. While a general method to unfold and interpolate the electronic bands from $\Gamma$-point-only calculations can be employed~\cite{de_gennaro_blochs_2021}, in this work we describe how to exploit the Wannier-like character of the variational orbitals to recast the Koopmans corrections as integrals over the Brillouin zone of the primitive cell. This leads to a formalism suitable for a periodic-boundary implementation and to the natural and straightforward recovery of band structures. Moreover, we show how the evaluation of the screened KC corrections can be recast into a linear-response problem suitable for an efficient implementation based on density-functional perturbation theory. The advantage with respect to a $\Gamma$-point calculation with unfolding is a much reduced computational cost and complexity. In the rest of the paper we describe the details of such a formalism, which leads to a transparent and efficient implementation of Koopmans functionals for periodic systems.
\section{Koopmans spectral functionals} \label{sec:KC_general} We review in this section the theory of KC functionals. In Sec.~\ref{sec:basic_features} we describe the basic features of the KC functionals; in Sec.~\ref{sec:theory_technical} we detail, for the interested readers, more practical and technical aspects of the method. Finally, in Sec.~\ref{sec:minimization} we describe the general strategy to minimize the KC functionals and the assumptions made in this work to simplify the formalism. For a complete and exhaustive description of the theory we also refer the reader to previous publications~\cite{borghi_koopmans-compliant_2014, nguyen_koopmans-compliant_2018}. \subsection{Core concepts of the theory} \label{sec:basic_features} \noindent {\bf Linearization:} The basic idea underlying KC functionals is to enforce a generalized PWL condition on the total energy as a function of the fractional occupation of any orbital in the system: \begin{equation} \frac{dE^{\rm KC}}{d f_i} = \langle \phi_i | \hat{h}^{\rm KC} | \phi_i \rangle = \mathrm{constant}, \label{eq:kc-cond} \end{equation} where $f_i$ is the occupation number of the $i$-th orbital $\phi_i$ and $\hat{h}^{\rm KC}$ the Koopmans-compliant Hamiltonian. Under this condition the total energy varies linearly when an electron is, e.g., extracted ($f_i$ goes from $1$ to $0$) from the system, in analogy to what happens in a photoemission experiment. The generalized PWL condition in Eq.~\ref{eq:kc-cond} can be achieved by simply augmenting any approximate density functional $E^{\rm DFT}$ with a set of orbital-density-dependent corrections $\{\Pi_{i}\}$ (one for each orbital $\phi_i$): \begin{equation}\label{eq:KC_lin} E^{\rm KC}[\rho, \{\rho_i\}] = E^{\rm DFT}[\rho] + \sum_i {\Pi_i}[\rho, \rho_i] \end{equation} where $\rho({\bf r}) = \sum_i \rho_i({\bf r})$ and $ \rho_i({\bf r}) = f_i n_i({\bf r}) = f_i |\phi_i({\bf r})|^2$ are the total and orbital densities, respectively.
The corrective term removes from the approximate DFT energy the contribution that is non-linear in the orbital occupation $f_i$ and adds in its place a linear Koopmans’s term in such a way to satisfy the KC condition in Eq.~\ref{eq:kc-cond}. Depending on the slope of this linear term, different KC flavors can be defined~\cite{borghi_koopmans-compliant_2014}; in this work we focus on the Koopmans-Integral (KI) approach, where the linear term is given by the integral between occupations 0 and 1 of the expectation value of the DFT Hamiltonian on the orbital at hand: \begin{align}\label{eq:KI_integral} {\Pi_i^{\rm KI}}[\rho, \rho_i] = - \int_0^{f_i} ds \langle \phi_i | H^{\rm DFT}(s) | \phi_i \rangle \nonumber \\ + f_i \int_0^{1} ds \langle \phi_i | H^{\rm DFT}(s) | \phi_i \rangle. \end{align} In the expression above, the first line removes the non-linear behaviour of the underlying DFT energy functional and the second one replaces it with a linear term, i.e. proportional to $f_i$. Neglecting orbital relaxations, i.e. the dependence of the orbital $\phi_i$ on the occupation numbers, and recalling that $\langle \phi_i | H^{\rm DFT}(s) | \phi_i \rangle = dE^{\rm DFT}/df_i$, the explicit expression for the ``bare'' or ``unrelaxed'' KI correction becomes: \begin{align}\label{eq:KI} \Pi^{\rm KI}_i &= E^{\rm DFT}_{\rm Hxc} [\rho-\rho_i] -E^{\rm DFT}_{\rm Hxc}[\rho] \nn \\ & +f_i \Big[ E^{\rm DFT}_{\rm Hxc}[\rho-\rho_i+n_i] -E^{\rm DFT}_{\rm Hxc}[\rho-\rho_i] \Big], \end{align} where $E^{\rm DFT}_{\rm Hxc}$ is the Hartree, exchange and correlation energy at the ``base'' DFT level. Interestingly, the KI functional is identical to the underlying DFT functional at integer occupation numbers ($f_i =0$ or $f_i =1$) and thus it preserves exactly the potential energy surface of the base functional. 
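This can be checked directly from Eq.~\ref{eq:KI}: at $f_i=1$ one has $\rho_i = n_i$, so the term in square brackets exactly cancels the first two,

```latex
\begin{align}
\Pi^{\rm KI}_i\big|_{f_i=1}
&= E^{\rm DFT}_{\rm Hxc}[\rho-n_i] - E^{\rm DFT}_{\rm Hxc}[\rho] \nonumber \\
&\quad + \Big[ E^{\rm DFT}_{\rm Hxc}[\rho] - E^{\rm DFT}_{\rm Hxc}[\rho-n_i] \Big] = 0,
\end{align}
```

while at $f_i=0$ the orbital density $\rho_i$ vanishes, so the first two terms coincide and the bracket is multiplied by $f_i=0$.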
However, its value at fractional occupations differs from that of the base functional, and thus so do the derivatives everywhere, including at integer occupations; consequently, the spectral properties will be different. \noindent {\bf Screening:} By construction, the ``unrelaxed'' KI functional is linear as a function of the occupation number $f_i$, when orbital relaxations are neglected. This is analogous to Koopmans' theorem in HF theory, but it is not enough to guarantee the linearity of the functional in the general case, where each orbital will relax in response to a change in the occupation of all the others. To enforce the generalized Koopmans' theorem --- that is, to achieve the desired linearity in the presence of orbital relaxation --- a set of scalar, orbital-dependent screening coefficients is introduced that transforms the ``unrelaxed'' KI correction into a fully relaxed one: \begin{equation}\label{eq:rKI} E^{\rm KI}[\rho, \{\rho_i\}] = E^{\rm DFT}[\rho] + \sum_i \alpha_i {\Pi^{\rm KI}_i}[\rho, \rho_i]. \end{equation} The scalar coefficients $\alpha_i$ act as a compact measure of electronic screening in an orbital basis and they are given by a well-defined average of the microscopic dielectric function~\cite{dabo_piecewise_2014, colonna_screening_2018}: \begin{equation}\label{eq:alpha_lr} \alpha_i = \frac{d^2E^{\rm DFT}/df_i^2}{\partial ^2E^{\rm DFT}/\partial f_i^2} = \frac{ \langle n_{i} | \left[ \epsilon^{-1}f_{\rm Hxc} \right] | n_{i} \rangle}{\langle n_{i} | \left[ f_{\rm Hxc} \right] | n_{i} \rangle} \end{equation} where $f_{\rm Hxc}({\bf r}, {\bf r}')$ is the Hartree and exchange-correlation kernel, i.e. the second derivative of the underlying DFT energy functional with respect to the total density, and $\epsilon^{-1} ({\bf r}, {\bf r}')$ is the static microscopic dielectric function. The different notation in the derivatives in the numerator and denominator indicates whether orbital relaxation is accounted for ($df_i$) or not ($\partial f_i$).
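As an illustration of Eq.~\ref{eq:alpha_lr}, the screening coefficient can equivalently be estimated from finite differences of total energies computed at fractional occupations, with and without orbital relaxation. The sketch below uses hypothetical quadratic energy curves (the curvatures $0.25$ and $0.10$ are made-up numbers, chosen only to mimic how relaxation screens the frozen-orbital curvature):

```python
def second_derivative(E, f, df=1e-3):
    """Central finite difference d^2E/df^2 at occupation f."""
    return (E(f + df) - 2.0 * E(f) + E(f - df)) / df**2

# Hypothetical model curves E(f) = a*f + b*f^2: the quadratic term b is
# the deviation from piecewise linearity; orbital relaxation screens it.
E_frozen = lambda f: -0.30 * f + 0.25 * f**2   # orbitals kept fixed
E_relaxed = lambda f: -0.30 * f + 0.10 * f**2  # orbitals allowed to relax

# alpha_i = (relaxed curvature) / (frozen-orbital curvature)
alpha = second_derivative(E_relaxed, 0.5) / second_derivative(E_frozen, 0.5)
# alpha = 0.10 / 0.25 = 0.4 for this model
```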
\noindent {\bf Localization:} Similarly to other orbital-density-dependent functionals, KC functionals can break the invariance of the total energy with respect to unitary rotations of the occupied manifold. This implies that the energy functional is minimized by a unique set of ``variational'' orbitals; contrast this with density functional theory, or any unitary-invariant theory, where any unitary transformation of the occupied manifold would leave the energy unchanged. These variational orbitals are typically very localized in space and closely resemble Foster-Boys orbitals~\cite{boys_construction_1960, foster_canonical_1960} in the case of atoms and molecules or, equivalently, maximally localized Wannier functions~\cite{marzari_maximally_1997, marzari_maximally_2012} (MLWF) in the case of periodic systems. This localization is driven by a Perdew-Zunger (PZ) self-interaction-correction (SIC) term appearing in any KC functional (see Sec.~\ref{sec:ki_kipz} for the specific case of KI), and in particular by the self-Hartree contribution to it. This also explains the similarity between variational orbitals and MLWFs, as the maximization of the self-Hartree energy and the maximal localization produce very similar localized representations of the electronic manifold~\cite{marzari_maximally_2012}. We recently showed that the localization of the variational orbitals is a key feature to get meaningful KC corrections in the thermodynamic limit of extended systems (see Fig. 1 in Ref.~[\citenum{nguyen_koopmans-compliant_2018}] and the related discussion).
We note in passing that several recent strategies to address the DFT band-gap underestimation in periodic crystals have relied on the use of some kinds of localized orbitals, ranging from defect states~\cite{miceli_nonempirical_2018, bischoff_adjustable_2019, bischoff_nonempirical_2019} to different types of Wannier functions~\cite{heaton_self-interaction_1983, anisimov_transition_2005, ma_using_2016, weng_wannier_2018, li_localized_2018, weng_wannierkoopmans_2020, wing_band_2021, shinde_improved_2020}. It is a strength of KC theory that a set of localized orbitals arises naturally from the energy functional minimization. Indeed, this provides a more rigorous justification of the aforementioned approaches where the use of Wannier functions is a mere (albeit reasonable) ansatz. \subsection{Technical aspects of the theory} \label{sec:theory_technical} Having described the three main pillars that KC functionals stand on, in this subsection we detail several technical aspects of the theory. While these details are certainly important, this subsection can be skipped without compromising the article's message. \subsubsection{Canonical and variational orbitals} As we mentioned above, KC functionals depend on the orbital densities (rather than on the total density as in DFT or on the density matrix as in Hartree-Fock). This can break the invariance of the total energy with respect to unitary rotations of the occupied manifold. A striking consequence of this feature is the fact that the variational orbitals $\{\phi_i\}$, i.e. those that minimize the energy functional, are different from the eigenstates or ``canonical'' orbitals, i.e. those that diagonalize the orbital-density-dependent Hamiltonian. 
This duality between canonical and variational orbitals is not a unique feature of KC functionals but also arises in other orbital-density-dependent functional theories, such as the well-known self-interaction-correction (SIC) scheme by Perdew and Zunger (PZ), and has been extensively discussed in the literature~\cite{heaton_self-interaction_1983, pederson_localdensity_1984, pederson_densityfunctional_1985,lehtola_variational_2014,vydrov_tests_2007,stengel_self-interaction_2008,hofmann_using_2012}. At the minimum of the energy functional, the KC Hamiltonian is defined in the basis of the variational orbitals as \begin{equation} \langle \phi_j | \hat{h}_i^{\rm KC} | \phi_i \rangle = \langle \phi_j | \hat{h}^{\rm DFT}[\rho] + \hat{\mathcal{V}}_i^{\rm KC}[\rho, \rho_i] | \phi_i \rangle \label{eq:KC_ham} \end{equation} where $\mathcal{V}_i^{\rm KC}[\rho, \rho_i]({\bf r}) = \delta \big( \sum_j \alpha_j \Pi^{\rm KC}_j[\rho,\rho_j] \big)/\delta \rho_i({\bf r})$ is the orbital-density-dependent KC potential. This Hamiltonian is then diagonalized to obtain the KC eigenvalues and canonical orbitals. These orbitals are the analogue of KS-DFT or Hartree-Fock eigenstates: they usually display the symmetry of the Hamiltonian and, in analogy to exact DFT~\cite{levy_exact_1984,almbladh_exact_1985,perdew_comment_1997}, the energy of the highest occupied canonical orbital has been numerically proven to be related to the asymptotic decay of the ground-state charge density~\cite{stengel_self-interaction_2008}. For these reasons, canonical orbitals and energies are usually interpreted as Dyson orbitals and quasiparticle energies~\cite{pederson_densityfunctional_1985, nguyen_first-principles_2015, ferretti_bridging_2014} accessible, for example, via photoemission experiments.
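In matrix form, if $H_{ji} = \langle \phi_j | \hat{h}_i^{\rm KC} | \phi_i \rangle$ denotes the KC Hamiltonian of Eq.~\ref{eq:KC_ham} in the basis of the variational orbitals (a Hermitian matrix at the energy minimum), the canonical orbitals $\{\psi_n\}$ and KC eigenvalues $\{\varepsilon^{\rm KC}_n\}$ follow from the unitary rotation that diagonalizes it:

```latex
\begin{equation}
\sum_{i} H_{ji}\, U_{in} = \varepsilon^{\rm KC}_n\, U_{jn},
\qquad
|\psi_n\rangle = \sum_{i} |\phi_i\rangle\, U_{in}.
\end{equation}
```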
\subsubsection{Resolving the unitary invariance of the KI functional} \label{sec:ki_kipz} While in the most general situation KC functionals break the invariance of the total energy under unitary rotations of the occupied manifold, in the particular case of KI at integer occupation numbers the functional is invariant under such transformations. (Integer occupations are typical of an insulating system, where the valence manifold is separated from the conduction manifold by a finite energy gap.) Indeed, it is easy to verify that for $f_i=0$ or $f_i=1$ the KI energy correction in Eq.~\ref{eq:KI_integral} vanishes and the KI functional coincides with the underlying density functional approximation, which is invariant under such transformations. Nevertheless, the spectral properties will depend on the orbital representation the Koopmans Hamiltonians operate on, and it is thus important to remove this ambiguity; this is achieved by defining the KI functional as the limit of the KIPZ functional at zero PZ correction~\cite{borghi_koopmans-compliant_2014}. Formally, this amounts to adding an extra term $-\gamma f_i E^{\rm DFT}_{\rm Hxc}[n_i]$ to the KI correction defined in Eq.~\ref{eq:KI}. In the limit $\gamma \rightarrow 0$ this extra term drives the variational orbitals toward a localized representation without modifying the energetics. In practice, this infinitesimal term is not included in the functional; instead, the KI variational orbitals are generated by minimizing the PZ energy of the system with respect to unitary rotations of the canonical orbitals of the base DFT functional. The final result is entirely equivalent to taking the $\gamma \rightarrow 0$ limit. \subsubsection{Restriction to insulating systems} \label{sec:limitation_metals} The KC linearization procedure can be imposed upon both the valence and conduction states.
Currently, the only requirement is that the system under consideration needs to have a finite band gap, so that the occupation matrix can always be chosen to be block-diagonal and equal to the identity for the occupied manifold and zero for the empty manifold~\cite{dabo_piecewise_2014}. This limitation follows from the fact that, currently, KC corrections are well-defined only for changes in the diagonal elements of the occupation matrix. In the most general case, where a clear distinction between occupied and empty manifolds is not possible, e.g. for metallic systems, the occupation matrix will necessarily be non-diagonal in the localized-orbital representation. This would in turn call for possibly more general KC corrections able to deal with such off-diagonal terms in the occupation matrix. While this would certainly be a desirable improvement of the theory, as things currently stand the theory remains powerful: it provides a simple yet effective method for correcting insulating and semiconducting systems, where DFT exhibits one of its most striking limitations, namely its inability to accurately predict the band gap. \subsubsection{Empty state localization} \label{sec:empty_sate_localization} While the energy functional minimization typically leads to a set of very well localized occupied orbitals, this is not the case for the empty states, which, even at the KC level, turn out to be delocalized\footnote{The empty states resulting from the KI functionals are delocalized due to (i) the entanglement of the high-lying nearly free electron bands (which are very delocalized) and low-lying conduction bands, and (ii) the residual Hartree contribution to empty states' potentials (see the detailed description of the KC potentials in Ref.~\citenum{borghi_koopmans-compliant_2014}).}.
Applying the KC corrections to delocalized empty states would lead to corrective terms that vanish in the limit of infinite systems~\cite{nguyen_koopmans-compliant_2018}, thus leaving the unoccupied band structure totally uncorrected and identical to that of the underlying density functional approximation. Using a localized set of orbitals is indeed a key requirement for dealing with extended systems, and for obtaining KC corrections to the band structure that remain finite (rather than tend to zero) and converge rapidly to their thermodynamic limit~\cite{nguyen_koopmans-compliant_2018}. For this reason we typically compute a non-self-consistent Koopmans correction using maximally localized Wannier functions as the localized representation for the lower part of the empty manifold~\cite{de_gennaro_blochs_2021, nguyen_koopmans-compliant_2018}. This heuristic choice provides a practical and effective scheme, as clearly supported by the results of previous works~\cite{nguyen_koopmans-compliant_2018, de_gennaro_blochs_2021} and confirmed here. Moreover, it does not affect the occupied manifold and therefore does not change the potential energy surface of the functional. \subsection{Energy functional minimization} \label{sec:minimization} The algorithm used to minimize any KC functional consists of two nested steps~\cite{borghi_variational_2015}, inspired by the ensemble-DFT approach~\cite{marzari_ensemble_1997}: first, (i) a minimization is performed with respect to all unitary transformations of the orbitals at fixed manifold, i.e. leaving unchanged the Hilbert subspace spanned by these orbitals (the so-called ``inner-loop''). This minimizes the orbital-density-dependent contribution to the KC functional. Then (ii) an optimization of the orbitals in the direction orthogonal to this subspace is performed via a standard conjugate-gradient algorithm (the so-called ``outer-loop'').
These two steps are iterated, imposing throughout the orthonormality of the orbitals, until the minimum is reached. To speed up the convergence, the minimization is typically started from a reasonable guess for the variational orbitals. As discussed above, for extended systems a very good choice for this guess is the set of MLWFs calculated from the ground state of the base functional. For these orbitals the screening coefficients are calculated and kept fixed during the minimization. Ideally, these can be recalculated at the end of the minimization if the variational orbitals have changed significantly, thus implementing a fully self-consistent cycle for the energy minimization. While this is the most rigorous way to perform a KC calculation, in the next section we will resort to two well-controlled approximations to simplify the formalism and make an efficient primitive-cell implementation possible: i) we use a second-order Taylor expansion of Eq.~\ref{eq:KI} and ii) we assume that the variational orbitals coincide with MLWFs from the underlying density functional. The first assumption allows us to replace expensive $\Delta$SCF calculations in a supercell with cheaper primitive-cell ones using DFPT, while the second allows us to skip altogether the minimization of the functional, while still providing a very good approximation for the variational orbitals~\cite{borghi_variational_2015, nguyen_koopmans-compliant_2018, de_gennaro_blochs_2021}. A formal justification of the second-order Taylor expansion is discussed in Sec.~\ref{sec:KmeetW} and its overall effect on the final results is discussed in Sec.~\ref{sec:validation} and in the Supporting Information.
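The two nested loops can be sketched on a toy orbital-density-dependent functional; the Hamiltonian, the quartic localization term, the step size and the rotation grid below are all illustrative assumptions, not the production algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, eta = 8, 1.0, 0.05
# Toy one-particle Hamiltonian (1D tight-binding) for two occupied orbitals
H = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def energy(P):
    # ODD toy functional: band energy plus a PZ-like localization term
    return np.trace(P.T @ H @ P) - lam*(P**4).sum()

def inner_loop(P):
    # (i) minimize over unitary rotations at fixed subspace (here 2x2 rotations)
    thetas = np.linspace(0.0, np.pi/2, 90, endpoint=False)
    rots = [np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]]) for t in thetas]
    return min((P @ R for R in rots), key=energy)

P = np.linalg.qr(rng.standard_normal((n, 2)))[0]   # random orthonormal start
E0 = energy(P)
for _ in range(100):
    P = inner_loop(P)
    # (ii) outer step: gradient w.r.t. the orbitals, then re-orthonormalization;
    # a crude accept-if-better stand-in for the conjugate-gradient outer loop
    trial = np.linalg.qr(P - eta*(2*H @ P - 4*lam*P**3))[0]
    if energy(trial) < energy(P):
        P = trial
E1 = energy(P)
assert E1 < E0 and np.allclose(P.T @ P, np.eye(2))
```

The alternation mirrors the structure described above: the inner loop acts only on the unitary freedom within the occupied subspace, the outer loop changes the subspace itself while keeping the orbitals orthonormal.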
\section{A simplified KI implementation: Koopmans meets Wannier} \label{sec:KmeetW} In previous work on the application of KC functionals to periodic crystals~\cite{nguyen_koopmans-compliant_2018} the calculation of the screening coefficients and the minimization of the KC functional were performed using a supercell approach. While this is a very general strategy (and the only possible one for non-periodic systems), for periodic solids it is desirable to work with a primitive cell, exploiting translational symmetry and thus reducing the computational cost. The obstacle to this (and the reason for the previous supercell approach) is the localized nature of the variational orbitals and the orbital-density-dependence of the KI Hamiltonian, which apparently break the translational symmetry of the crystal. Nevertheless, one can argue that the Bloch symmetry is still preserved~\cite{heaton_self-interaction_1983, almbladh_exact_1985}, which allows the variational orbitals to be expressed as Wannier functions~\cite{wannier_structure_1937} (WFs). The translational properties of the WFs can then be exploited to recast the supercell problem into a primitive-cell one plus a sampling of the Brillouin zone. In the present implementation, we use a Taylor expansion of Eq.~\ref{eq:KI}, retaining only the terms up to second order in $f_i$~\cite{colonna_screening_2018, salzner_koopmans_2009,stein_curvature_2012,zhang_orbital_2015, mei_exact_2021}. While this approximation is not strictly necessary, it simplifies the expressions for the KI corrections and potentials, and at the same time it does not affect the dominant Hartree contribution in Eq.~\ref{eq:KI}, which is exactly quadratic in the occupations. The residual difference in the xc contribution has a minor effect on the final results (see section~\ref{sec:validation}).
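Why a second-order truncation suffices for the Hartree part can be checked in a few lines: for a model energy exactly quadratic in the occupation $f$ (as the Hartree term is), the correction $\frac{1}{2}f(1-f)\,\partial^2 E/\partial f^2$ restores piecewise linearity exactly. The coefficients $a$ and $b$ below are arbitrary illustrative values.

```python
import numpy as np

# Hypothetical quadratic model for the energy as a function of the occupation f
a, b = -3.0, 1.7
f = np.linspace(0.0, 1.0, 11)
E_dft = a*f + 0.5*b*f**2
# Unrelaxed second-order correction: Pi = (1/2) f (1-f) * curvature, with the
# curvature <n|f_Hxc|n> played here by b
E_ki = E_dft + 0.5*f*(1 - f)*b

# The corrected energy is exactly linear in f (vanishing second differences) ...
assert np.allclose(np.diff(E_ki, 2), 0.0)
# ... while the energies at the integer occupations f=0 and f=1 are untouched
assert np.isclose(E_ki[0], E_dft[0]) and np.isclose(E_ki[-1], E_dft[-1])
```

For the xc contribution, which is not exactly quadratic, the truncation introduces the small residual discussed above.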
The unrelaxed KI energy corrections and potentials become~\cite{colonna_screening_2018, colonna_koopmans-compliant_2019} \begin{align} \Pi^{\rm KI(2)}_i & = \frac{1}{2}f_i(1-f_i) \langle n_i | f_{\rm Hxc} | n_i \rangle \label{eq:ene_ki2_u} \\ \mathcal{V}_i^{\rm KI(2)}({\bf r}) & = \frac{\delta \Pi^{\rm KI(2)}_i}{\delta \rho_i({\bf r})} = -\frac{1}{2}\langle n_i | f_{\rm Hxc} | n_i \rangle \nn \\ & + (1-f_i) \int d{\bf r} ' f_{\rm Hxc}({\bf r}, {\bf r}') n_i({\bf r}') \label{eq:pot_ki2_u} \end{align} where the superscript ``$^{\rm (2)}$'' underscores the fact that this is a second-order expansion of the full KI energy and potential. We note that the DFT kernel $f_{\rm Hxc}$ depends only on the total charge density and therefore has the periodicity of the primitive cell, while the variational orbitals are periodic on the supercell. Based on the translational symmetry of perfectly periodic systems, one can assume that the variational orbitals can be expressed as WFs~\cite{de_gennaro_blochs_2021}. By definition, the WFs $\omega_{{\bf R} n}({\bf r})$ are labeled according to the lattice vector ${\bf R}$ of the home cell inside the supercell; have the periodicity of the supercell, i.e. $\omega_{{\bf R} n}({\bf r})=\omega_{{\bf R} n}({\bf r}+{\bf T})$ with ${\bf T}$ any lattice vector of the supercell; and are such that $\omega_{{\bf R} n}({\bf r}) = \omega_{{\mathbf{0}} n}({\bf r} -{\bf R})$.
The WFs provide an alternative but completely equivalent description of the electronic structure of a crystal, via a unitary matrix transformation of the delocalized Bloch states $\psi_{\mathbf{k}n}$: \begin{align} w_{\mathbf{R}n}(\mathbf{r}) & = \frac{1}{N_\mathbf{k}} \sum_\mathbf{k} e^{-i\mathbf{k}\cdot\mathbf{R}} \psi_{\mathbf{k}n}(\mathbf{r}) \nn \\ & = \frac{1}{N_\mathbf{k}} \sum_\mathbf{k} e^{-i\mathbf{k}\cdot\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{r}} w_{\mathbf{k}n}(\mathbf{r}) \nn \\ w_{\mathbf{k}n}(\mathbf{r}) &= \sum_m U^{({\bf k})}_{nm} u_{\mathbf{k}m}(\mathbf{r}). \label{eq:MLWF_def} \end{align} In this expression $w_{{\bf k} n}({\bf r})= \sum_{m} U^{({\bf k})}_{nm} u_{{\bf k} m}({\bf r})$ is a very general ``gauge transformation'' of the periodic parts of the canonical Bloch states $u_{{\bf k} m}({\bf r})$, $N_{{\bf k}}$ is the number of ${\bf k}$ points and ${\bf R}$ runs over the Bravais lattice vectors of the primitive cell. The expression above highlights the duality between variational orbitals (Wannier functions) and canonical orbitals (Bloch states), and the simple connection between the two. In periodic systems the transformation relating these two sets of orbitals can be decomposed into a phase factor $e^{- i {\bf k} \cdot {\bf R}}$ and a ${\bf k}$-dependent unitary rotation mixing only Bloch states at the same ${\bf k}$. This unitary matrix is defined in principle by the minimization of the orbital-density-dependent correction to the energy functional (see Sec.~\ref{sec:minimization}). However, this minimization greatly increases the computational cost of these calculations relative to functionals of the electronic density alone. As discussed in section~\ref{sec:basic_features}, the minimization of the KI functional (in the limit of an infinitesimally small PZ-SIC term) leads to localized orbitals that closely resemble MLWFs~\cite{borghi_variational_2015}.
For this reason we make the further assumption that the unitary matrix defining the variational orbitals can be obtained via a standard Wannierization procedure, i.e. by minimizing the sum of the quadratic spreads of the Wannier functions~\cite{marzari_maximally_1997, marzari_maximally_2012}, thus allowing us to bypass the computationally intensive energy minimization. Under this assumption the KI functional closely resembles related approaches like the Wannier-Koopmans~\cite{ma_using_2016} and the Wannier-transition-state~\cite{anisimov_transition_2005} methods. In both these schemes the linearity of the energy is enforced when adding/removing an electron from a set of Wannier functions, resulting in accurate predictions of the band gaps and band structures of a variety of systems~\cite{anisimov_transition_2005, ma_using_2016, weng_wannier_2017, weng_wannier_2018, li_wannier-koopmans_2018, weng_wannierkoopmans_2020}. At variance with these two methods, the present approach is based on a variational expression of the total energy (Eq.~(\ref{eq:KC_lin})) as a function of the orbital densities, which automatically leads to a set of Wannier-like variational orbitals. Moreover, from a practical point of view, within the present implementation the energy and potential corrections can be efficiently evaluated using density functional perturbation theory, as detailed in the next section, thus avoiding the expensive supercell calculations typically needed for both the Wannier-Koopmans~\cite{ma_using_2016} and the Wannier-transition-state~\cite{anisimov_transition_2005} methods. Overall, in this simplified framework, all the ingredients are provided by a standard DFT calculation followed by a Wannierization of the canonical KS-DFT eigenstates.
The KI calculation then reduces to a one-shot procedure where the screening coefficients in Eq.~\ref{eq:alpha_lr} and the KI Hamiltonian specified by Eq.~\ref{eq:KC_ham} and Eq.~\ref{eq:pot_ki2_u} need to be evaluated on the localized representation provided by the MLWFs. This can be done straightforwardly by working in a supercell to accommodate the real-space representation of the Wannier orbital densities, or, as pursued here, by working in reciprocal space and exclusively within the primitive cell, thus avoiding expensive supercell calculations. This latter strategy leverages the translational properties of the Wannier functions. After expressing the Wannier orbital densities as Bloch sums in the primitive cell as described in Sec.~\ref{sec:rho_wann}, we must then i) recast the equation for the screening coefficients (cf. Eq.~\ref{eq:alpha_lr}) into a linear-response problem suitable for an efficient implementation using the reciprocal-space formulation of density-functional perturbation theory, as detailed in Sec.~\ref{sec:screen_coeff}, and ii) devise, compute and diagonalize the KI Hamiltonian at each ${\bf k}$-point in the BZ of the primitive cell, as illustrated in Sec.~\ref{sec:ki_ham}. \subsection{From Wannier orbitals in the supercell to Bloch sums in the primitive cell} \label{sec:rho_wann} \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\textwidth]{Figs/SC_PC.pdf} \caption{Schematic representation of the supercell --- primitive cell mapping in 2D. A $2 \times 2$ supercell problem with a $\Gamma$-only sampling can be recast into a primitive-cell problem with a $2 \times 2$ sampling of the Brillouin zone.} \label{fig:sc-pc} \end{center} \end{figure*} The first step in this reformulation is to rewrite the Wannier orbital densities as Bloch sums in the primitive cell. A schematic view of the supercell-primitive cell mapping is shown in Fig.~\ref{fig:sc-pc}.
Using the definition of the MLWFs, the Wannier orbital densities can be written as \begin{align} \rho_{{\bf R} n}({\bf r}) & = |\omega_{{\bf R} n}({\bf r})|^2 = \left| \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{-i {\bf k} \cdot {\bf R}} \tilde{\psi}_{{\bf k} n}({\bf r})\right|^2 \nonumber \\ & = \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} e^{i {\bf q} \cdot {\bf r}} \left\{ e^{-i {\bf q} \cdot {\bf R}} \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} w^*_{{\bf k}, n}({\bf r})w_{{\bf k}+{\bf q}, n}({\bf r}) \right\} \nonumber \\ & = \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} e^{i {\bf q} \cdot {\bf r}} \rho_{{\bf q}}^{{\bf R} n}({\bf r}). \label{eq:wann_orb_dens} \end{align} Since the $w_{{\bf k} n}({\bf r})$ are periodic on the primitive cell, so are the $\rho^{{\bf R} n}_{{\bf q}}({\bf r})$, and consequently the Wannier orbital density is given as a sum over the Brillouin zone (BZ) of primitive cell-periodic functions, each modulated by a phase factor $e^{i {\bf q} \cdot {\bf r}}$. The periodic densities $\rho^{{\bf R} n}_{{\bf q}}({\bf r}) = \rho^{{\bf R} n}_{{\bf q}}({\bf r}+{\bf R})$ are the basic ingredients needed to recast the integrals over the supercell, appearing in the definitions of the screening coefficients and of the KI corrections and potentials, as integrals over the primitive cell.
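This decomposition, together with the translational property of the WFs, can be verified numerically on a one-dimensional toy crystal (random periodic $u_{\bf k}$, illustrative grid sizes; since $\psi_{\bf k} = e^{ikr}w_{\bf k}$, the product $e^{-iqr}\psi^*_{\bf k}\psi_{{\bf k}+{\bf q}}$ below equals $w^*_{\bf k} w_{{\bf k}+{\bf q}}$):

```python
import numpy as np

rng = np.random.default_rng(42)
Nk, a = 4, 8                  # k-points (= cells in the supercell), grid points/cell
L = Nk*a                      # supercell grid; lattice constant = a grid points
r = np.arange(L)
k = 2*np.pi*np.arange(Nk)/L   # uniform Gamma-centered k-grid

# Random Bloch states psi_k(r) = e^{ikr} u_k(r), with u_k periodic on the cell
u = rng.standard_normal((Nk, a)) + 1j*rng.standard_normal((Nk, a))
psi = np.exp(1j*np.outer(k, r))*np.tile(u, (1, Nk))

# Wannier functions w_R(r) = (1/Nk) sum_k e^{-ikR} psi_k(r), with R = n*a
R = a*np.arange(Nk)
w = np.exp(-1j*np.outer(R, k)) @ psi / Nk
for n in range(Nk):           # translational property w_R(r) = w_0(r - R)
    assert np.allclose(w[n], np.roll(w[0], n*a))

# rho_q(r) = e^{-iqr} (1/Nk) sum_k psi_k*(r) psi_{k+q}(r); k+q is folded back
# onto the grid (periodic gauge)
rho_q = [np.exp(-1j*q*r)*(psi.conj()*np.roll(psi, -j, axis=0)).mean(axis=0)
         for j, q in enumerate(k)]
for j in range(Nk):           # each rho_q has the primitive-cell periodicity
    assert np.allclose(rho_q[j], np.roll(rho_q[j], a))
for n in range(Nk):           # the Bloch sum reproduces |w_R(r)|^2 exactly
    rec = sum(np.exp(1j*q*(r - R[n]))*rho_q[j] for j, q in enumerate(k))/Nk
    assert np.allclose(rec, np.abs(w[n])**2)
```

The same bookkeeping (one primitive-cell-periodic density per ${\bf q}$, modulated by $e^{i{\bf q}\cdot{\bf r}}$) is what allows all supercell integrals below to be computed cell by cell.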
\subsubsection{Screening coefficients} \label{sec:screen_coeff} The expression for the screening coefficients given in Eq.~(\ref{eq:alpha_lr}) can be recast into a linear-response problem~\cite{colonna_screening_2018} suitable for an efficient implementation based on DFPT~\cite{baroni_phonons_2001}: \begin{align}\label{eq:alpha} \alpha_{{\mathbf{0}} n} & = \frac{ \langle \rho_{{\mathbf{0}} n} | \left[ \epsilon^{-1}f_{\rm Hxc} \right] | \rho_{{\mathbf{0}} n} \rangle}{\langle \rho_{{\mathbf{0}} n} | \left[ f_{\rm Hxc} \right] | \rho_{{\mathbf{0}} n} \rangle} = 1 + \frac{\langle V^{{\mathbf{0}} n}_{\rm pert} | \Delta^{{\mathbf{0}} n} \rho \rangle}{\langle \rho_{{\mathbf{0}} n} | V^{{\mathbf{0}} n}_{\rm pert} \rangle}. \end{align} In the expression above we made use of the definition of the dielectric matrix $\epsilon^{-1} = 1 + f_{\rm Hxc}\chi$, with $\chi$ being the density-density response function of the system at the underlying DFT level; $\Delta^{{\mathbf{0}} n} \rho({\bf r}) = \int d{\bf r}' \chi({\bf r}, {\bf r}') V_{\rm pert}^{{\mathbf{0}} n}({\bf r}')$ is by definition the density response induced in the system by the ``perturbing potential'' $V^{{\mathbf{0}} n}_{\rm pert}({\bf r}) = \int d{\bf r}' f_{\rm Hxc} ({\bf r}, {\bf r}') \rho_n({\bf r}')$. This perturbation represents the Hartree-exchange-correlation potential generated when an infinitesimal fraction of an electron is added to/removed from a MLWF. The perturbing potential has the same periodic structure as the Wannier density in Eq.~(\ref{eq:wann_orb_dens}) and can be decomposed into a sum of monochromatic perturbations in the primitive cell, $V_{\rm pert}^{{\mathbf{0}} n}({\bf r}) = \sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}} V^{{\mathbf{0}} n}_{{\rm pert},{\bf q}}({\bf r})$ with \begin{equation} V^{{\mathbf{0}} n}_{{\rm pert}, {\bf q}}({\bf r}) = \int d{\bf r}' f_{\rm Hxc}({\bf r}, {\bf r}') \rho^{{\mathbf{0}} n}_{{\bf q}}({\bf r}') .
\label{eq:Vpert_KC} \end{equation} The total density variation $\Delta^{{\mathbf{0}} n} \rho({\bf r}')$ induced by the bare perturbation $V_{\rm pert}^{{\mathbf{0}} n}({\bf r})$ reads \begin{align} \Delta^{{\mathbf{0}} n} \rho({\bf r}) & = \int d{\bf r}' \chi({\bf r}, {\bf r}') V_{\rm pert}^{{\mathbf{0}} n}({\bf r}') \nn \\ & = \int d{\bf r}' \chi({\bf r}, {\bf r}') \sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}'} V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}') \nn \\ & = \sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}} \Delta_{{\bf q}}^{{\mathbf{0}} n}\rho({\bf r}) \label{eq:dens_var} \end{align} where we used the fact that for a periodic system $\chi$ can be decomposed in a sum of primitive cell-periodic functions $\chi({\bf r} ,{\bf r}') = \sum_{{\bf q}} e^{i{\bf q} \cdot ({\bf r} - {\bf r}')}\chi_{{\bf q}}({\bf r} ,{\bf r}')$ ~\footnote{ More in detail: \begin{align*} &\int d{\bf r}' \chi({\bf r}, {\bf r}') \sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}'} V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}') = \\ = &\sum_{{\bf q}} \int d{\bf r}' e^{i{\bf q} \cdot {\bf r}} e^{-i{\bf q} \cdot {\bf r}}\chi({\bf r}, {\bf r}') e^{i{\bf q} \cdot {\bf r}'} V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}') = \\ = &\sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}} \int d{\bf r}' \chi_{{\bf q}}({\bf r}, {\bf r}') V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}') = \\ = & \sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}} \Delta_{{\bf q}}^{{\mathbf{0}} n}\rho({\bf r}) \end{align*} }. The primitive cell-periodic density variation is given by: \begin{align} \Delta_{{\bf q}}^{{\mathbf{0}} n} \rho({\bf r}) & = \int d{\bf r}' \chi_{{\bf q}}({\bf r},{\bf r}') V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}') \nn \\ &= \sum_{{\bf k} v} \psi_{{\bf k},v}^*({\bf r}) \Delta \psi_{{\bf k}+{\bf q}, v}({\bf r}) + c.c. 
\label{eq:dens_var_per} \end{align} where $\Delta\psi_{{\bf k}+{\bf q}}({\bf r})$ is the first-order variation of the KS orbitals due to the perturbation (the bare one plus the SCF response of the Hxc potential). Only the projection of the variation of the KS wave functions on the conduction manifold contributes to the density response in Eq.~(\ref{eq:dens_var_per}); $\Delta\psi_{{\bf k}+{\bf q}}({\bf r})$ can therefore be taken to lie entirely within this manifold, and it is given by the solution of the following linear problem~\cite{baroni_phonons_2001}: \begin{align} & \left( H +\gamma P_v^{{\bf k}+{\bf q}} -\varepsilon_{{\bf k},v} \right) \Delta \psi_{{\bf k}+{\bf q}, v}({\bf r}) \nn \\ = & -P_c^{{\bf k}+{\bf q}} \left[V_{{\rm pert},{\bf q}}^{{\mathbf{0}} n}({\bf r}) + \Delta_{{\bf q}} V_{\rm Hxc}({\bf r})\right]\psi_{{\bf k},v}({\bf r}) \label{eq:lin_eq} \end{align} where $H$ is the ground-state KS Hamiltonian, $P_v^{{\bf k}}=\sum_v |u_{{\bf k}, v}\rangle \langle u_{{\bf k}, v}|$ and $P_c^{{\bf k}} = 1-P_v^{{\bf k}}$ are the projectors onto the occupied and empty manifolds, respectively, $\gamma$ is a constant chosen in such a way that the $\gamma P_v^{{\bf k}+{\bf q}}$ operator makes the linear system non-singular~\cite{baroni_phonons_2001}, and \begin{equation} \Delta_{{\bf q}} V_{\rm Hxc}({\bf r}) = \int d {\bf r} ' f_{\rm Hxc}({\bf r}, {\bf r}') \Delta^{{\mathbf{0}} n}_{{\bf q}}\rho ({\bf r}') \label{eq:Delta_Vscf} \end{equation} is the self-consistent variation of the Hxc potential due to the charge-density variation $\Delta_{{\bf q}}^{{\mathbf{0}} n}\rho$.
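A single, non-self-consistent solve of this projected linear problem can be sketched in a finite-dimensional toy model (random Hamiltonian, model kernel; the self-consistent update of $\Delta_{{\bf q}} V_{\rm Hxc}$ is deliberately omitted here, so the response computed is the independent-particle one):

```python
import numpy as np

rng = np.random.default_rng(1)
N, Nv = 8, 3                                  # basis size, occupied states
A = rng.standard_normal((N, N))
H = 0.5*(A + A.T)                             # toy ground-state Hamiltonian
eps, psi = np.linalg.eigh(H)
Pv = psi[:, :Nv] @ psi[:, :Nv].T              # projector on the occupied manifold
Pc = np.eye(N) - Pv
gamma = (eps[-1] - eps[0]) + 1.0              # keeps the occupied block non-singular

# Toy positive-definite "Hxc kernel" and orbital density -> bare perturbation
fHxc = np.exp(-np.abs(np.subtract.outer(np.arange(N), np.arange(N))))
rho_n = psi[:, 0]**2                          # density of one occupied orbital
vpert = fHxc @ rho_n
V = np.diag(vpert)                            # local perturbing potential

drho = np.zeros(N)
for v in range(Nv):
    dpsi = np.linalg.solve(H + gamma*Pv - eps[v]*np.eye(N), -Pc @ V @ psi[:, v])
    assert np.allclose(Pc @ dpsi, dpsi)       # dpsi lies in the empty manifold
    # identical to the sum-over-empty-states result of perturbation theory
    sos = sum(psi[:, c]*(psi[:, c] @ V @ psi[:, v])/(eps[v] - eps[c])
              for c in range(Nv, N))
    assert np.allclose(dpsi, sos)
    drho += 2*psi[:, v]*dpsi                  # density response (real orbitals)

# Screening-coefficient ratio built as in the linear-response expression above;
# with the bare response the numerator is negative, so alpha < 1 (screening
# reduces the correction)
alpha = 1 + (vpert @ drho)/(rho_n @ vpert)
assert alpha < 1
```

In the actual scheme the potential on the right-hand side is updated with the induced Hxc response and the solve is repeated to self-consistency.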
Iterating Eqs.~(\ref{eq:dens_var_per})-(\ref{eq:Delta_Vscf}) to self-consistency leads to the final result for $\Delta_{{\bf q}}^{{\mathbf{0}} n}\rho({\bf r})$, and the screening coefficient is finally obtained by summing over all the ${\bf q}$ contributions: \begin{align} \alpha_{{\mathbf{0}} n} = 1 + \frac{\sum_{{\bf q}} \langle V^{{\mathbf{0}} n}_{{\rm pert},{\bf q}} | \Delta^{{\mathbf{0}} n}_{{\bf q}}\rho \rangle} {\sum_{{\bf q}} \langle {\rho^{{\mathbf{0}} n}_{{\bf q}}} | V^{{\mathbf{0}} n}_{{\rm pert}, {\bf q}} \rangle} . \label{eq:alpha_gspace} \end{align} Equations~(\ref{eq:Vpert_KC})-(\ref{eq:Delta_Vscf}) show how to recast the calculation of the screening coefficients into a linear-response problem in the primitive cell that can be efficiently solved using the machinery of DFPT, and are key to the present work. Linear-response equations for different ${\bf q}$ are decoupled, and the original problem is thus decomposed into a set of independent problems that can be solved on separate computational resources, allowing for straightforward parallelization. More importantly, the computational cost is also greatly reduced: in a standard supercell implementation the screening coefficients are computed with a finite-difference approach by performing additional total-energy calculations where the occupation of a Wannier function is constrained~\cite{nguyen_koopmans-compliant_2018}. This requires, for each MLWF, multiple SCF calculations with a computational time $T^{\rm SC}$ that scales roughly as ${N_{\rm el}^{\rm SC}}^3$, where $N_{\rm el}^{\rm SC}$ is the number of electrons in the supercell. The primitive-cell DFPT approach described above scales instead as $T^{\rm PC} \propto N_{{\bf q}} N_{{\bf k}} {N_{\rm el}^{\rm PC}}^3$; this is the typical computational time for the SCF cycle ($N_{{\bf k}} {N_{\rm el}^{\rm PC}}^3$), times the number of independent monochromatic perturbations ($N_{{\bf q}}$).
Using the relation $N_{\rm el}^{\rm SC}=N_{{\bf k}}N_{\rm el}^{\rm PC}$, and the fact that $N_{{\bf q}}=N_{{\bf k}}$, the ratio between the supercell and primitive-cell computational times is $T^{\rm SC}/T^{\rm PC} \propto N_{{\bf q}}$. Therefore, as the supercell size (and, equivalently, the number of ${\bf q}$-points in the primitive cell) increases, the primitive-cell DFPT approach becomes increasingly convenient. We finally point out that a similar strategy was recently implemented in the context of the linear-response approach to the calculation of the Hubbard parameters in DFT+U~\cite{cococcioni_linear_2005} in order to avoid the use of a supercell~\cite{timrov_hubbard_2018, timrov_self-consistent_2021}. \subsubsection{KI Hamiltonian} \label{sec:ki_ham} As is typical for orbital-density-dependent functionals, the canonical eigenvalues and eigenvectors are given by the diagonalization of the matrix of Lagrange multipliers $H_{mn}({\bf R})=\me{{\bf R} m}{\hat{h}_{{\mathbf{0}} n}}{{\mathbf{0}} n}$ with $\hat{h}_{{\mathbf{0}} n}|{\mathbf{0}} n \rangle = \delta E^{\rm KI(2)}/\delta \langle {\mathbf{0}} n|$. In the case of insulating systems, the matrix elements between occupied and empty states vanish~\cite{dabo_piecewise_2014} and we can treat the two manifolds separately.
For occupied states, the KI potential is simply a scalar correction (the second term in Eq.~(\ref{eq:pot_ki2_u}) is identically zero if $f_i=1$), and thus the KI contribution to the Hamiltonian is diagonal and ${\bf R}$-independent: \begin{align} \Delta H^{\rm KI(2)}_{vv'}({\bf R}) & = \me{{\bf R} v}{\mathcal{V}^{\rm KI(2)}_{{\mathbf{0}} v'}}{{\mathbf{0}} v'} \nn \\ & =-\frac{1}{2N_{{\bf q}}} \sum_{{\bf q}} \langle {\rho^{{\mathbf{0}} v}_{{\bf q}}} | V^{{\mathbf{0}} v}_{{\rm pert}, {\bf q}} \rangle \delta_{{\bf R},{\mathbf{0}}} \delta_{vv'} \nn \\ & = -\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} v} \delta_{{\bf R},{\mathbf{0}}} \delta_{vv'} \end{align} Using the definition of $| {\bf R} v \rangle$ or equivalently the identity $\delta_{{\bf R}, {\mathbf{0}}} = 1/N_{{\bf k}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}}$ in the equation above, the KI contribution to the Hamiltonian at a given ${\bf k}$ can be identified as: \begin{align} \Delta H^{\rm KI(2)}_{vv'}({\bf k}) & = -\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} v} \delta_{vv'} \label{eq:ki-ham-K-occ} \end{align} which is ${\bf k}$-independent, as expected. In the case of empty states, in addition to the scalar term in the equation above, there is also a non-scalar contribution~\cite{borghi_koopmans-compliant_2014} that needs a more careful analysis. This term is given by the matrix element of the non-scalar contribution to the KI potential, i.e. 
$\mathcal{V}^{\rm KI(2),r}_{{\mathbf{0}} c}({\bf r}) = \int d{\bf r}' f_{\rm Hxc}({\bf r}, {\bf r}') \rho_{{\mathbf{0}} c}({\bf r}')$, and reads: \begin{align} \Delta H^{\rm KI(2),r}_{cc'}({\bf R}) & = \me{{\bf R} c}{\mathcal{V}^{\rm KI(2), r}_{{\mathbf{0}} c'}}{{\mathbf{0}} c'} \nn \\ & = \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \left[ \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} \langle V^{{\mathbf{0}} c'}_{{\rm pert}, {\bf q}}| \rho^{cc'}_{{\bf k}, {\bf k}+{\bf q}}\rangle \right] \label{eq:ki-ham-R-emp-real} \end{align} where $\rho^{cc'}_{{\bf k}, {\bf k}+{\bf q}}({\bf r}) = w^*_{{\bf k},c}({\bf r})w_{{\bf k}+{\bf q},c'}({\bf r})$ (see Supporting Information for a detailed derivation of the KI matrix elements). Since the dependence on the ${\bf R}$-vector only appears in the complex exponential, the matrix elements of the KI Hamiltonian in ${\bf k}$-space can be easily identified as the term inside the square brackets in Eq.\eqref{eq:ki-ham-R-emp-real}. Including the scalar contribution leads to the ${\bf k}$-space Hamiltonian for the empty manifold: \begin{equation} \Delta H^{\rm KI(2)}_{cc'}({\bf k}) =-\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} c} \delta_{cc'} + \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} \langle V^{{\mathbf{0}} c'}_{{\rm pert}, {\bf q}}| \rho^{cc'}_{{\bf k}, {\bf k}+{\bf q}}\rangle \label{eq:ki-ham-K-emp} \end{equation} Eqs.~(\ref{eq:ki-ham-K-occ}) and~(\ref{eq:ki-ham-K-emp}) define the KI contribution to the Hamiltonian at a given ${\bf k}$ point on the regular mesh used for the sampling of the Brillouin zone. 
This contribution needs to be scaled by the screening coefficient $\alpha_{{\mathbf{0}} n}$ and added to the DFT Hamiltonian to define the full KI Hamiltonian at ${\bf k}$: \begin{equation} H^{\rm KI(2)}_{mn}({\bf k}) = H^{\rm DFT}_{mn}({\bf k}) + \alpha_{{\mathbf{0}} n} \Delta H^{\rm KI(2)}_{mn}({\bf k}) \label{eq:ki-ham-2nd} \end{equation} where $H^{\rm DFT}_{mn}({\bf k}) = \me{w_{{\bf k} m}}{\hat{H}^{\rm DFT}}{w_{{\bf k} n}}$ is the KS-DFT Hamiltonian evaluated on the periodic parts of the Bloch states in the Wannier gauge (see Eq.~(\ref{eq:MLWF_def})). The diagonalization of $H^{\rm KI(2)}_{mn}({\bf k})$ defines the canonical KI eigenstates $\{\psi_{{\bf k} i}^{\rm KI(2)}; \varepsilon_{{\bf k} i}^{\rm KI(2)}\}$. Finally, given the localized nature of the MLWFs, it is also possible to interpolate the Hamiltonian with standard techniques~\cite{slater_simplified_1954,souza_maximally_2001,marzari_maximally_2012} to obtain the KI eigenvalues at any arbitrary ${\bf k}$ point in the Brillouin zone. \subsection{Technical aspects of the implementation} The calculation of the screening parameters and KI potentials involves the evaluation of bare and screened Hxc integrals of the form $\me{\rho_{{\mathbf{0}} n}}{f_{\rm Hxc}}{\rho_{{\mathbf{0}} n}}$ and $\me{\rho_{{\mathbf{0}} n}}{\epsilon^{-1}f_{\rm Hxc}}{\rho_{{\mathbf{0}} n}}$. Because of the long-range nature of the Hartree kernel, these integrals are divergent under periodic boundary conditions (PBC) and therefore require particular caution. The divergence can be avoided by adding a neutralizing background (in practice, this means that the ${\bf q}+{\bf G} ={\mathbf{0}}$ component of the Hartree kernel is always set to zero).
The integrals are then finite and, more importantly, converge to the correct electrostatic energy of isolated Wannier functions~\cite{makov_periodic_1995}. However, the convergence is extremely slow ($1/N_{{\bf k}}^{1/3}$ to leading order) because of the $1/|{\bf q}+{\bf G}|^2$ divergence of the Coulomb kernel. This is a well-known problem and many solutions have been proposed to overcome it; e.g. Makov and Payne~\cite{makov_periodic_1995} (MP) suggested removing from the energy the electrostatic interaction of a periodically repeated point charge. Other approaches consist of truncating the Coulomb interaction~\cite{ismail-beigi_truncation_2006, rozzi_exact_2006}, using the scheme proposed by Martyna and Tuckerman~\cite{martyna_reciprocal_1999}, or using the charge or density corrections of Ref.~\citenum{dabo_electrostatics_2008}. Here we adopt the approach devised by Gygi and Baldereschi (GB)~\cite{gygi_quasiparticle_1989}, which consists of adding to and subtracting from the integrand a function with the same divergence whose integral can be computed analytically. The result is a smooth function suitable for numerical integration, plus an analytical contribution. From a computational point of view this amounts to defining a modified Hartree kernel \begin{equation} f_{\rm H}({\bf q}+{\bf G}) = \begin{cases} D, & \text{if } {\bf q}+{\bf G} = {\mathbf{0}} \\ 4\pi/|{\bf q}+{\bf G}|^2, & \text{if } {\bf q}+{\bf G} \neq {\mathbf{0}} \end{cases} \end{equation} where $D$ is the reciprocal-space part of the Ewald sum for a point charge, repeated according to the super-periodicity defined by the grid of ${\bf q}$-points~\cite{nguyen_efficient_2009}.
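In reciprocal space the modified kernel is trivial to assemble; a minimal sketch follows, where the value of $D$ is an arbitrary placeholder (in practice it comes from the Ewald sum mentioned above):

```python
import numpy as np

def hartree_kernel(qG2, D):
    """Modified Hartree kernel f_H(q+G) on an array of |q+G|^2 values:
    4*pi/|q+G|^2 everywhere except the divergent q+G=0 element, which is
    replaced by the finite Ewald-derived constant D (taken as given here)."""
    qG2 = np.asarray(qG2, dtype=float)
    safe = np.where(qG2 > 1e-12, qG2, 1.0)        # dummy value; masked out below
    return np.where(qG2 > 1e-12, 4*np.pi/safe, D)

# Example: a small 1D reciprocal-space grid; only the head of the kernel changes
G = 2*np.pi*np.arange(-2, 3)
fH = hartree_kernel(G**2, D=2.5)
assert fH[2] == 2.5                               # q+G = 0 component -> D
assert np.allclose(np.delete(fH, 2), 4*np.pi/np.delete(G, 2)**2)
```

All other elements of the kernel are untouched, so the GB treatment only modifies the single divergent head of the Coulomb interaction.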
For the screened Hartree integral the ${\bf q}+{\bf G}={\mathbf{0}}$ component needs to be further scaled by the macroscopic dielectric function $\epsilon_{\infty}$ of the system~\footnote{This is only strictly valid for cubic systems, where the dielectric tensor is diagonal with $\epsilon^{(11)}_{\infty} = \epsilon^{(22)}_{\infty} = \epsilon^{(33)}_{\infty}$. In the most general case a generalization of the Ewald technique must be used~\cite{fischerauer_comments_1997, rurali_theory_2009, murphy_anisotropic_2013} } because in this case we are dealing with the screened Coulomb integral $\epsilon^{-1}f_H$. In this work we compute $\epsilon_{\infty}$ from first-principles~\cite{baroni_phonons_2001} using the PHONON code of \textsc{Quantum ESPRESSO}\,{}. Another important point is that the periodic parts of the Wannier functions at ${\bf k}$ and ${\bf k} + {\bf q}$ must come from the same Wannierization procedure, otherwise the localization property of the Wannier orbital density [Eq.~(\ref{eq:wann_orb_dens})] will be lost because of unphysical phase factors, possibly introduced by the diagonalization routine or other computational steps. This requirement is enforced using a uniform grid centered at $\Gamma$ such that ${\bf k}+{\bf q} = {\bf p} + {\bf G}$ with ${\bf p}$ still belonging to the original grid and ${\bf G}$ a reciprocal lattice vector. In this way $w_{{\bf k}+{\bf q}}({\bf r})$ can be obtained from $w_{{\bf p}}({\bf r})$ simply by multiplying it by a phase factor $e^{-i{\bf G} \cdot {\bf r}}$. As a direct consequence of this choice, the mesh of ${\bf q}$ points for the LR calculation has to be the same as the one used for the Wannierization. Finally, in order to be compliant with the current limitation of working with block-diagonal occupation matrices (see
Sec.~\ref{sec:limitation_metals}), the Wannierization procedure needs to be prevented from mixing the occupied and empty manifolds, so that the occupation matrix retains its block-diagonal form in the localized-orbital representation. In practice this is done by performing two separate Wannierizations, one for the occupied and one for the empty manifold. To obtain the maximally localized Wannier orbitals for the low-lying empty states, we employed the disentanglement technique for maximally localized Wannier functions proposed in Ref.~\citenum{souza_maximally_2001}. \begin{table}[hb] \centering \begin{tabularx}{\linewidth}{*{6}{>{\centering\arraybackslash}X}} \hline \hline Sys. & wann & method & $\frac{\partial^2 E}{\partial f^2}[eV]$ & $\frac{d^2 E}{df^2}[eV]$ & $\alpha$ \\ \hline Si & $sp^3$ (V) & FD & 3.906 & 0.888 & 0.227 \\ & & DFPT & 3.886 & 0.887 & 0.228 \vspace{0.1cm} \\ & $sp^3$ (C) & FD & 1.351 & 0.215 & 0.160 \\ & & DFPT & 1.351 & 0.218 & 0.162 \vspace{0.1cm} \\ \hline GaAs & $d$ (V) & FD & 10.159 & 3.530 & 0.347 \\ & & DFPT & 10.217 & 3.550 & 0.347 \vspace{0.1cm} \\ & $sp^3$ (V) & FD & 3.899 & 0.976 & 0.250 \\ & & DFPT & 3.896 & 0.936 & 0.240 \vspace{0.1cm} \\ & $sp^3$ (C) & FD & 1.418 & 0.233 & 0.164 \\ & & DFPT & 1.372 & 0.243 & 0.177 \\ \hline \end{tabularx} \caption{Comparison between analytical (computed with DFPT) and numerical [computed with finite differences (FD)] second derivatives of the energy with respect to the occupation of different kinds of Wannier functions for silicon (Si) and gallium arsenide (GaAs). V / C indicates whether the MLWF is from the valence or conduction manifold.
Partial $\partial/\partial f_i$ and full $d/df_i$ derivatives refer to unrelaxed and relaxed quantities, respectively.} \label{tab:alpha} \end{table} \section{Results and discussion} In this section we first validate the present implementation against a standard KI one as described in Ref.~\citenum{nguyen_koopmans-compliant_2018}, and then discuss the application to a few paradigmatic test cases, highlighting the advantages and limitations of the approach. In particular we analyze the band structure of gallium arsenide (GaAs), hexagonal wurtzite zinc oxide (ZnO) and face-centered-cubic (FCC) lithium fluoride (LiF) at three levels of theory: i) the local density approximation (LDA), ii) the hybrid-functional scheme by Heyd, Scuseria, and Ernzerhof (HSE)~\cite{heyd_hybrid_2003, heyd_erratum:_2006}, and iii) the KI functional within the implementation described in this work. All calculations are performed using the plane-wave (PW) and pseudopotential (PP) method as implemented in the \textsc{Quantum ESPRESSO}\,{} package~\cite{giannozzi_quantum_2009, giannozzi_advanced_2017}. The LDA functional is used as the underlying density-functional approximation for all the KI calculations. LDA scalar relativistic Optimized Norm-conserving Vanderbilt PPs~\cite{hamann_optimized_2013,hamann_erratum_2017} from the DOJO library~\cite{van_setten_pseudodojo_2018} are used to model the interaction between the valence electrons and the nucleus plus the core electrons~\footnote{The LDA pseudopotentials are available at \href{http://www.pseudo-dojo.org/}{www.pseudo-dojo.org}. Version 0.4.1., standard accuracy}. Maximally localized Wannier functions are computed using the Wannier90 code~\cite{pizzi_wannier90_2020}.
For all the systems we used the experimental crystal structures taken from the Inorganic Crystal Structure Database~\footnote{ICSD website, \href{http://www.fiz-karlsruhe.com/icsd.html}{http://www.fiz-karlsruhe.com/icsd.html}} (ICSD); GaAs, ZnO and LiF correspond to ICSD numbers 107946, 162843 and 62361, respectively. For the LDA calculations we used a $10 \times 10 \times 10$ ${\bf k}$-point mesh for GaAs, a $12 \times 12 \times 7$ ${\bf k}$-point mesh for ZnO and a $14 \times 14 \times 14$ ${\bf k}$-point mesh for LiF. The kinetic energy cutoff for the PW expansion of the wave-functions is set to $E_{\rm cut} = 80$ Ry (320 Ry for the density and potentials expansion) in all cases. For the HSE calculations we verified that a reduced cutoff $E^{\rm Fock}_{\rm cut}= 120$ Ry and a ${\bf k}$-point grid typically twice as coarse as the LDA one are sufficient for the convergence of the exchange energy and potential. For the screening parameters and KI Hamiltonian calculations we used the same energy cutoff and a ${\bf q}$-mesh of $6 \times 6 \times 6$ for GaAs and LiF, and a $6 \times 6 \times 4$ mesh for ZnO.
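The commensurability condition between the ${\bf q}$-mesh and the Wannierization ${\bf k}$-mesh discussed in the previous section (${\bf k}+{\bf q} = {\bf p} + {\bf G}$) can be illustrated with a minimal sketch (schematic Python, with points in crystal coordinates; a toy fragment, not the production code):

```python
import numpy as np

def fold_to_grid(k, q, mesh):
    """Fold k+q back onto the Gamma-centered uniform grid: k+q = p + G.

    k, q : (3,) points in crystal (fractional) coordinates, entries in [0, 1).
    mesh : (3,) grid dimensions, e.g. (6, 6, 6).
    Returns p (a point of the original grid) and the integer reciprocal
    lattice vector G such that k + q = p + G; the Wannier-gauge function
    then satisfies w_{k+q}(r) = exp(-i G.r) w_p(r).
    """
    kpq = np.asarray(k, dtype=float) + np.asarray(q, dtype=float)
    G = np.floor(kpq + 1e-8).astype(int)   # integer part: the folding vector
    p = kpq - G                            # folded point, back in [0, 1)
    # commensurability check: p must coincide with a point of the mesh
    idx = p * np.asarray(mesh)
    if not np.allclose(idx, np.round(idx), atol=1e-6):
        raise ValueError("q-mesh not commensurate with the k-mesh")
    return p, G
```

For example, on a $6\times6\times6$ grid, $k=(5/6,0,0)$ and $q=(2/6,0,0)$ fold to $p=(1/6,0,0)$ with $G=(1,0,0)$.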
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Figs/dos_si-gaas.png} \caption{KI density of states of Si (upper panel) and GaAs (lower panel) computed with a reference 4$\times$4$\times$4 supercell (red line) and with the present approach working in the primitive cell with a commensurate 4$\times$4$\times$4 ${\bf k}$-points mesh.} \label{fig:dos} \end{figure} \begin{figure*}[t] \textbf{GaAs band structure}\par\medskip \begin{subfigure}{} \includegraphics[width=0.3\textwidth]{Figs/GaAs_lda.png} \includegraphics[width=0.3\textwidth]{Figs/GaAs_hse.png} \includegraphics[width=0.3\textwidth]{Figs/GaAs_ki.png} \end{subfigure} \begin{subfigure}{} \renewcommand\tabularxcolumn[1]{m{#1}} \renewcommand\arraystretch{1.3} \setlength\tabcolsep{2pt} \begin{tabularx}{\linewidth}{*{7}{>{\centering\arraybackslash}X}} \hline \hline & LDA & HSE & GW$_0$ & scG$\tilde{\rm W}$ & KI & Exp. \\ \hline E$_{\rm gap}$(eV) & 0.19 & 1.28 & 1.55 & 1.62 & 1.57 & 1.52 \\ $\langle \varepsilon_d \rangle$(eV) & -14.9 & -15.6 & -17.3 & -17.6 & -17.7 & -18.9 \\ $W$(eV) & 12.8 & 13.9 & -- & -- & 12.8 & 13.1 \\ \hline \end{tabularx} \end{subfigure} \caption{Band structure of GaAs calculated at different levels of theory: LDA (left panel), HSE (middle panel) and KI (right panel). Shaded areas highlight valence (light blue) and conduction (light red) manifolds. The experimental values for the band gap, valence band width, and energy position of Ga $d$-states are represented by the dashed green, blue and red lines, respectively. Table: Band gap, position of Ga $d$ states with respect to the top of the valence band, and valence band width ($W$) at different levels of theory compared to experimental~\cite{shevchik_densities_1974, madelung_semiconductors_2004} and GW results from Ref.~\citenum{shishkin_accurate_2007}.
Theoretical values of the band gap are corrected for spin-orbit coupling (0.10 eV).} \label{fig:ki_gaas_bands} \end{figure*} \subsection{Validation} \label{sec:validation} In order to validate the implementation of the analytical formula for the derivatives based on the DFPT [Eq.~(\ref{eq:alpha})], we compare the result with a finite difference calculation where we add/remove a tiny fraction of an electron from a given Wannier function. This is done using a 4$\times$4$\times$4 supercell, consistent with the ${\bf k}$/${\bf q}$ mesh in the primitive cell calculation. In Table~(\ref{tab:alpha}) we present the results for two semiconductors, silicon (Si) and gallium arsenide (GaAs). For Si the Wannierization produces four identical bonding $sp^3$-like MLWFs spanning the occupied manifold and four anti-bonding $sp^3$-like MLWFs spanning a four-dimensional manifold disentangled from the lowest part of the conduction bands. In the case of GaAs we obtained 5 $d$-like and 4 $sp^3$-like MLWFs representing the occupied manifold and 4 anti-bonding $sp^3$-like MLWFs from the lowest part of the empty manifold. The numerical and analytical values for the derivatives agree within a few hundredths of an eV. The residual discrepancy is possibly due to tiny differences in the Wannierization procedure (for the supercell a specific algorithm for a $\Gamma$-only calculation was used), and to the difficulty of converging the constrained calculations in the supercell to arbitrary accuracy. In order to quantify the error introduced by the second-order approximation adopted here, we compare in Fig.~(\ref{fig:dos}) the KI density of states for Si and GaAs computed using a 4$\times$4$\times$4 supercell within the original implementation~\cite{nguyen_koopmans-compliant_2018, de_gennaro_blochs_2021}, and the present approach working in the primitive cell with a consistent 4$\times$4$\times$4 ${\bf k}/{\bf q}$-points mesh.
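Schematically, the finite-difference side of this validation amounts to numerically differentiating the total energy with respect to an occupation. A toy sketch, using quadratic model energies whose curvatures are the Si $sp^3$ (V) values of Table~\ref{tab:alpha} (the model $E(f)$ is a placeholder, not a real total-energy surface):

```python
def second_derivative(E, f0=1.0, df=1e-3):
    """Central finite-difference second derivative of E(f) at f = f0."""
    return (E(f0 + df) - 2.0 * E(f0) + E(f0 - df)) / df**2

# Toy model energies: quadratics with the bare (frozen-orbital) and
# relaxed (self-consistently screened) curvatures of the Si sp^3 (V)
# row of Table tab:alpha.
E_bare = lambda f: 0.5 * 3.906 * (f - 1.0) ** 2     # d2E/df2 = 3.906 "eV"
E_relaxed = lambda f: 0.5 * 0.888 * (f - 1.0) ** 2  # d2E/df2 = 0.888 "eV"

# screening coefficient: ratio of relaxed to bare curvature
alpha = second_derivative(E_relaxed) / second_derivative(E_bare)  # ~0.227
```

In the actual validation, of course, $E(f)$ is the constrained supercell total energy and the relaxed curvature comes from a self-consistent calculation at each displaced occupation.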
For this figure the single-particle eigenvalues were convolved with a Gaussian function with a broadening of 0.2 eV. The zero of the energy is set to the LDA valence band maximum (VBM), and the shaded red area represents the LDA band gap. The thick black ticks on the energy axes mark the position of the KI VBM and conduction band minimum (CBM). The KI VBM and CBM are shifted downwards and upwards with respect to the corresponding LDA quantities, leading to an opening of the fundamental band gap, which goes from 0.51 eV to 1.41 eV and from 0.20 eV to 1.57 eV for Si and GaAs, respectively. We stress here that these results are not fully converged with respect to the BZ sampling (or supercell size), and serve just as a validation test. The two DOS are in very good agreement, but small differences between the reference supercell and the primitive cell calculations are present. In particular there is a small downward shift of the order of 0.05 eV in the valence part of the DOS, and also tiny differences in the conduction one, especially evident for GaAs. These discrepancies are due to the second order approximation used for the calculation of the screening parameters and KI Hamiltonian in the primitive cell implementation (additional details are provided in Supporting Information). Nevertheless, all the main features of the DOS are correctly reproduced, thus validating the present implementation. \subsection{Application to selected systems} \noindent \textit{\bf Gallium arsenide:} GaAs is a III-V direct band gap semiconductor with a zincblende crystal structure. The band structure around the band gap is dominated by $s$ and $p$ orbitals from Ga and As forming $sp^3$ hybrid orbitals, while the flat bands around 18.9 eV below the VBM are from the $d$ states of Ga.
All these features are correctly reproduced by the LDA band structure [see Fig.~(\ref{fig:ki_gaas_bands})] but the band gap $E^{\rm LDA}_{\rm gap} =0.28$ eV is too small, the average $d$ states position $\langle \varepsilon^{\rm LDA}_d \rangle= -14.9$ eV is too high, and the valence band width $W^{\rm LDA}=12.8$ eV is slightly underestimated. The HSE functional corrects these errors to some extent, opening the band gap up to $E^{\rm HSE}_{\rm gap} =1.38$ eV, and shifting downwards the Ga $d$ states, $\langle \varepsilon^{\rm HSE}_d \rangle=-15.6$ eV, but it also over-stretches the valence band thus overestimating the valence band width ($W^{\rm HSE} =13.9$ eV). These discrepancies with respect to experiment are possibly due to the fact that the fraction of Fock exchange and the range-separation parameter defining any hybrid scheme might have to be in principle system- and possibly state-dependent~\cite{skone_self-consistent_2014,brawand_generalization_2016,brawand_performance_2017}. On the contrary the parameters of the HSE functional (and also of the vast majority of hybrid schemes) are system-independent and this is probably not sufficient for an accurate description of all the spectral features mentioned above. The KI functional with its orbital-dependent corrections produces an upward shift of the conduction manifold and a state-dependent downward shift of the valence manifold (with respect to the LDA VBM), leading to a better agreement with experimental data for $E_{\rm gap}$ and $\langle \varepsilon_d \rangle$. The $sp^3$ band width is instead identical to that of the underlying density functional approximation (LDA), which is already in good agreement with the experimental value. This is because of the scalar nature of the KI corrections for the occupied manifold combined with the fact that the Wannierization produces four identical but differently-oriented $sp^3$ MLWFs spanning the four uppermost valence bands.
Then from Eq.~(\ref{eq:ki-ham-K-occ}) it follows that the KI contribution to the Hamiltonian is not only ${\bf k}$-independent but also the same for all the $sp^3$ bands. The KIPZ functional~\cite{borghi_koopmans-compliant_2014}, another flavor of KC functionals, might correct this because of its non-scalar contribution to the effective potentials. This will introduce an off-diagonal coupling between different bands and will thus modify the band width~\cite{de_gennaro_blochs_2021}. Overall the KI results are in extremely good agreement with experimental data. This performance is even more remarkable if compared to GW results~\cite{shishkin_accurate_2007} reported in the Table under Fig.~\ref{fig:ki_gaas_bands}, with KI showing the same accuracy as self-consistent GW plus vertex correction in the screened Coulomb interaction. \begin{figure*}[t] \textbf{LiF band structure}\par\medskip \begin{subfigure}{} \includegraphics[width=0.3\textwidth]{Figs/LiF_lda.png} \includegraphics[width=0.3\textwidth]{Figs/LiF_hse.png} \includegraphics[width=0.3\textwidth]{Figs/LiF_ki.png} \end{subfigure} \begin{subfigure}{} \renewcommand\tabularxcolumn[1]{m{#1}} \renewcommand\arraystretch{1.3} \setlength\tabcolsep{2pt} \begin{tabularx}{\linewidth}{*{7}{>{\centering\arraybackslash}X}} \hline \hline & LDA & HSE & GW$_0$ & scG$\tilde{\rm W}$ & KI & (Exp.) \\ \hline E$_{\rm gap}$(eV) & 8.87 & 11.61 & 13.96 & 14.5 & 15.28 & 15.35$^{(*)}$\\ $\langle \varepsilon \rangle_{\rm{F}_{2s}}$(eV) & -19.06 & -20.7 & -24.8$^{(\dagger)}$ & -- & -19.5 & -23.9 \\ $\langle \varepsilon \rangle_{\rm{Li}_{1s}}$(eV) & -39.6 & -42.5 & -47.2$^{(\dagger)}$ & -- & -46.6 & -49.8 \\ \hline \end{tabularx} \end{subfigure} \caption{Band structure of LiF calculated at different levels of theory: LDA (left panel), HSE (middle panel) and KI (right panel). Shaded areas highlight valence (light blue) and conduction (light red) manifolds. 
The experimental values for the band gap, the F $2s$ band, and the Li $1s$ band are represented by the dashed green, red, and blue lines, respectively. Table: Band gap and low lying energy levels at different levels of theory compared to GW results~\cite{shishkin_accurate_2007, shishkin_self-consistent_2007, wang_quasiparticle_2003} and to experiments~\cite{piacentini_thermoreflectance_1976, johansson_core_1976}. $^{(*)}$ The zero-point-renormalization~\cite{nery_quasiparticles_2018} (-1.15 eV) has been subtracted from the experimental gap~\cite{piacentini_thermoreflectance_1976} (14.2 eV) to have a meaningful comparison with the calculations. $^{(\dagger)}$ Values obtained at the G$_0$W$_0$ level.~\cite{wang_quasiparticle_2003}} \label{fig:ki_LiF_bands} \end{figure*} \noindent\textit{\bf Lithium fluoride:} LiF crystallizes in a rock-salt structure and provides a prototypical example of wide gap insulators with a marked ionic character. Its band structure at all the different levels of theory analyzed in this work is presented in Fig.~(\ref{fig:ki_LiF_bands}). LiF is a direct band gap material with the topmost valence bands exclusively contributed by F $2p$ orbitals, and the lower part of the conduction manifold mainly from Li $2s$ orbitals with a small contribution from F $2p$ orbitals. The low lying energy levels at about -24 eV and -50 eV with respect to the top of the valence bands can be unambiguously classified as F $2s$ and Li $1s$ bands, respectively. Also in this case we observe the limitations of the LDA already discussed for GaAs, and the same trend going from local to hybrid to orbital-density-dependent KI functionals. In particular, the KI band gap is in very good agreement with the experimental band gap~\cite{piacentini_thermoreflectance_1976} once the experimental value is corrected for the zero-point renormalization~\cite{nery_quasiparticles_2018}, so as to have a fair comparison between calculations (no electron-phonon effects are accounted for) and experiments.
Thanks to the state-dependent potentials the Li $1s$ band is pushed downwards in energy more than the valence bands are, and its relative position with respect to the VBM is in much better agreement with experimental values. On the other hand the F $2s$ band is shifted downwards in energy by roughly the same amount as the three valence bands (originating from the F $2p$ states) are. This leaves almost unchanged its distance from the VBM ($-19.5$ eV at the KI level compared to $-19.06$ eV at the LDA level). This is at odds with GW results~\cite{wang_quasiparticle_2003} which show an increase in the relative distance of roughly 5 eV and place the F $2s$ band at $-24.8$ eV with respect to the VBM, in better agreement with experimental results~\cite{johansson_core_1976} ($-23.9$ eV). Full KI and KIPZ band structures show the same underestimation, although less severe (see Supporting Information), especially when using a better underlying density functional (PBE vs LDA). This suggests that the underestimation is a common feature of the KC functionals (rather than an effect solely due to the second-order approximation adopted here), and it deserves further investigation. Nevertheless the improved description of the band structure close to the Fermi level, and in particular of the band gap, is remarkable also in this case, and a comparison with available GW calculations~\cite{shishkin_accurate_2007, shishkin_self-consistent_2007} reveals the effectiveness of the KI functional approach presented here as a valid alternative to Green's function based methods. \noindent \textit{\bf Zinc oxide:} ZnO is a transition metal oxide which at ambient conditions crystallizes in a hexagonal wurtzite structure. It is a well studied insulator with potential applications in, e.g., transparent electrodes, light-emitting diodes, and solar cells~\cite{look_p-type_2004, look_future_2004, ozgur_comprehensive_2005,look_progress_2006}.
It is also known to be a challenging system for Green's function theory~\cite{shih_quasiparticle_2010,samsonidze_insights_2014} and thus represents a more stringent test case for the KC functionals. In Fig.~\ref{fig:ki_zno_bands} the ZnO band structure calculated at different levels of theory is shown together with experimental data. The bands around the gap are dominated by oxygen $2p$ states in the valence and Zn $4s$ states in the conduction with some contribution from O $2p$ and $2s$. At the LDA level the band gap is dramatically underestimated (see the table in Fig.~\ref{fig:ki_zno_bands}) compared to the experimental value. This underestimation is even more severe than in semiconductors with similar electronic structure and band gap, such as GaN, and has been related to the O $p$ --- Zn $d$ repulsion and hybridization~\cite{wei_role_1988, lim_angle-resolved_2012}. In fact, at the LDA level the bands coming from Zn $d$ states lie below the O $2p$ valence bands, but are too high in energy, resulting in an upward repulsion of the valence band maximum states, and in an exaggerated reduction of the band gap~\cite{lim_angle-resolved_2012}. The HSE functional pushes the $d$ states lower in energy and opens up the band gap (as already seen for GaAs), achieving a better agreement with experimental values. The KI functional moves in the same direction and further reduces the discrepancies with experiments, providing an overall satisfactory description of the electronic structure. The KI performance, with an error as small as 0.02 eV on the band gap [after the zero-point renormalization energy~\cite{cardona_isotope_2005} (-0.16 eV) has been subtracted from the experimental band gap~\cite{shishkin_accurate_2007} (3.44 eV)] and smaller than 1 eV on the $d$ band position, is in line with that of self-consistent GW plus vertex correction in the screened Coulomb interaction.
\begin{figure*}[t] \textbf{ZnO band structure}\par\medskip \begin{subfigure}{} \includegraphics[width=0.3\textwidth]{Figs/ZnO_lda.png} \includegraphics[width=0.3\textwidth]{Figs/ZnO_hse.png} \includegraphics[width=0.3\textwidth]{Figs/ZnO_ki.png} \end{subfigure} \begin{subfigure}{} \renewcommand\tabularxcolumn[1]{m{#1}} \renewcommand\arraystretch{1.3} \setlength\tabcolsep{2pt} \begin{tabularx}{\linewidth}{*{7}{>{\centering\arraybackslash}X}} \hline \hline & LDA & HSE & GW$_0$ & scG$\tilde{\rm W}$ & KI & Exp. \\ \hline E$_{\rm gap}$(eV) & 0.79 & 2.79 & 3.0 & 3.2 & 3.62 & 3.60$^{(*)}$ \\ $\langle \varepsilon_d \rangle$(eV) & -5.1 & -6.1 & -6.4 & -6.7 & -6.9 & -7.5/-8.0 \\ \hline \end{tabularx} \end{subfigure} \caption{Band structure of ZnO calculated at different levels of theory: LDA (left panel), HSE (middle panel) and KI (right panel). Shaded areas highlight valence (light blue) and conduction (light red) manifolds. The experimental values for the band gap and for the energy position of Zn $d$-states are represented by the dashed green line and by the dashed red line, respectively. Table: Band gap and position of Zn $d$ states with respect to the top of the valence band at different levels of theory compared to experimental and GW results from Ref.~\citenum{shishkin_accurate_2007}. $^{(*)}$ The zero-point-renormalization~\cite{cardona_isotope_2005} (-0.16 eV) has been subtracted from the experimental gap~\cite{shishkin_accurate_2007} (3.44 eV) to have a meaningful comparison with the calculations.} \label{fig:ki_zno_bands} \end{figure*} It is worth mentioning here that for the KI calculation of ZnO we used projected Wannier functions as approximate variational orbitals. At variance with MLWFs, no minimization of the quadratic spread is performed in this case. The Wannier functions are uniquely defined by the projection onto atomic-like orbitals and, for the empty manifold, by the disentanglement procedure.
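For reference, a projection-only gauge of this kind can be obtained by L\"owdin-orthonormalizing the overlaps $A_{mn}({\bf k}) = \langle \psi_{{\bf k} m} | g_n \rangle$ between the Bloch states and trial atomic-like orbitals $g_n$; a minimal sketch of this step (generic linear algebra, not the Wannier90 internals):

```python
import numpy as np

def lowdin_projection(A):
    """Lowdin-orthonormalized projection U = A (A^dagger A)^(-1/2).

    A : (n_bands, n_wann) matrix of overlaps <psi_mk | g_n> between the
        Bloch states at one k point and the trial atomic-like orbitals g_n.
    Returns the gauge matrix with orthonormal columns that defines the
    projected Wannier functions (no spread minimization involved).
    """
    S = A.conj().T @ A                        # (n_wann, n_wann) overlap matrix
    evals, evecs = np.linalg.eigh(S)          # S is Hermitian positive definite
    S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.conj().T
    return A @ S_inv_sqrt                     # columns are orthonormal
```

Because no spread functional is minimized afterwards, the resulting orbitals retain the atomic character of the chosen projectors, which is precisely what prevents the unwanted mixing discussed below.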
We found that the minimization of the quadratic spread leads to a mixing of O $2p$ and Zn $3d$ states with deeper ones, and this deteriorates the quality of the results (the band gap turns out to be overestimated and the $d$ states are pushed too low in energy). While the KS Hamiltonian depends only on the charge density and is therefore not affected by this unitary mixing, the orbital-density-dependent part of the KI Hamiltonian is instead sensitive to the particular choice of the localized manifold; the unconstrained mixing of valence Bloch states with very different energies is detrimental. The important question of which set of localized orbitals is the most suitable for the correction of the DFT Hamiltonian is not restricted to the KC functionals but is relevant, and has indeed been discussed, also in the context of the Perdew-Zunger self-interaction-correction scheme~\cite{stengel_self-interaction_2008}, the generalized transition state method~\cite{anisimov_transition_2005, ma_using_2016}, the localized orbital scaling correction to approximate DFT~\cite{li_localized_2018} and the DFT+U method for predicting band gaps~\cite{kirchner-hall_extensive_2021}. In principle the variational property of the KC functionals can be used to verify which set of Wannier functions --- projected or maximally localized --- is the most energetically favorable. This important point will be addressed in future work; here, we just highlight the evidence that in complex systems, where an undesired mixing between states with very different energies might be driven by the quadratic spread minimization, projected Wannier functions seem to provide a better choice for the localized manifold.
\section{Summary and conclusions} We have developed, tested, and described a novel and efficient implementation of the KC functionals for periodic systems (but also readily applicable to finite ones) using Wannier functions as approximate variational orbitals and a linear response approach for the efficient evaluation of the screening parameters based on DFPT. Using the translational property of the Wannier functions, we have shown how to recast a problem whose natural dimension is that of a supercell into a primitive cell problem plus a sampling of the primitive-cell Brillouin zone. All this leads to the decomposition of the problem into smaller and independent ones, and thus to a substantial reduction of the computational cost and complexity. We have showcased its use to compute the band structure of a few prototypical systems ranging from small gap semiconductors to a wide-gap insulator, and demonstrated that the present implementation provides the same result as a straightforward supercell implementation, but at a greatly reduced computational cost, thus making the KC calculation more robust and user-friendly. The main results of Secs.~(\ref{sec:screen_coeff}) and (\ref{sec:ki_ham}) have been implemented as a post-processing of the PWSCF package of the \textsc{Quantum ESPRESSO}\,{} distribution~\cite{giannozzi_advanced_2017, giannozzi_quantum_2009}, and of the Wannier90 code~\cite{pizzi_wannier90_2020}, two open-source and widely used electronic structure tools. The KI implementation presented here is part of the official \textsc{Quantum ESPRESSO}\,{} distribution. It is hosted at the \textsc{Quantum ESPRESSO}\,{} \href{https://gitlab.com/QEF/q-e}{gitlab} repository under the name ``KCW'' and has been distributed with the official release $7.1$ of \textsc{Quantum ESPRESSO}\,{}.
The data used to produce the results of this work are available at the Materials Cloud Archive.~\cite{MC} \section{Acknowledgments} This work was supported by the Swiss National Science Foundation (SNSF) through its National Centre of Competence in Research (NCCR) MARVEL and the grant No. 200021-179138. \section{KI matrix elements} For clarity and completeness, we provide here additional details on the KI matrix elements and on the derivation of Eqs.~(20) and (22) in the main text. Let us start with the definition of the KI matrix element on two Wannier functions $| {\bf R} i \rangle$ and $|{\mathbf{0}} j \rangle$: \begin{align} \Delta H^{\rm KI(2)}_{ij}({\bf R}) & = \me{{\bf R} i}{\hat{\mathcal{V}}^{\rm KI(2)}_{{\mathbf{0}} j}}{{\mathbf{0}} j} \nn \\ &= \int d{\bf r} \omega^*_{{\bf R} i}({\bf r}) \mathcal{V}^{\rm KI(2)}_{{\mathbf{0}} j}({\bf r}) \omega_{{\mathbf{0}} j}({\bf r}) \end{align} From a computational point of view it is convenient to first evaluate the product of the two Wannier functions in real space and then perform the integral with the KI potential in reciprocal space.
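This mixed-space evaluation rests on the discrete Parseval identity and can be sketched as follows (schematic Python on a uniform grid with placeholder arrays; the sketch assumes a real potential, whereas the actual code works with the monochromatic components derived below):

```python
import numpy as np

def ki_matrix_element(w_i, w_j, V, dv):
    """<w_i | V | w_j>: product in real space, contraction in reciprocal space.

    w_i, w_j : (possibly complex) orbitals on a uniform real-space grid.
    V        : real potential on the same grid.
    dv       : volume element of one grid point.
    """
    rho_ij = np.conj(w_i) * w_j                # orbital "transition density" in r
    rho_g = np.fft.fftn(rho_ij) / rho_ij.size  # its Fourier coefficients rho(G)
    V_g = np.fft.fftn(V) / V.size              # potential coefficients V(G)
    omega = dv * rho_ij.size                   # cell volume
    # discrete Parseval identity (V real):
    #   int dr V(r) rho(r) = Omega * sum_G V*(G) rho(G)
    return omega * np.sum(np.conj(V_g) * rho_g)
```

The result is identical to the direct real-space quadrature; working in reciprocal space is convenient because the KI potential is naturally available there as a sum of monochromatic terms.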
Using the definition of Wannier functions in Eq.~(8) of the main text, the product of Wannier functions is given by \begin{align} \omega^*_{{\bf R} i}({\bf r}) \omega_{{\mathbf{0}} j}({\bf r}) = & \frac{1}{N_{{\bf k}}N_{{\bf k}'}}\sum_{{\bf k} {\bf k}'} e^{i{\bf k} \cdot {\bf R}} \tilde{\psi}^*_{{\bf k} i}({\bf r}) \tilde{\psi}_{{\bf k}' j}({\bf r}) \nn \\ & = \frac{1}{N_{{\bf k}}N_{{\bf q}}}\sum_{{\bf k} {\bf q}} e^{i{\bf k} \cdot {\bf R}} e^{i{\bf q} \cdot {\bf r}} \tilde{u}^*_{{\bf k} i}({\bf r}) \tilde{u}_{{\bf k}+{\bf q} j}({\bf r}) \nn \\ & = \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} e^{i{\bf q} \cdot {\bf r}} \rho_{{\bf k}, {\bf k}+{\bf q}}^{ij}({\bf r}), \label{eq_app:HR} \end{align} while the KI potential is given by a purely scalar term $-\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} j}$ plus an ${\bf r}$-dependent contribution (only for empty states)~\cite{borghi_koopmans-compliant_2014} that can be decomposed into a sum of monochromatic terms: \begin{align} \mathcal{V}^{\rm KI(2)}_{{\mathbf{0}} j}({\bf r}) = -\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} j} + (1-f_j) \sum_{{\bf q}} e^{i {\bf q} \cdot {\bf r}} V_{{\rm pert}, {\bf q}}^{{\mathbf{0}} j}({\bf r}) \label{eq_app:Vki} \end{align} Combining Eq.~\ref{eq_app:HR} and~\ref{eq_app:Vki} and using the orthogonality property of the Wannier functions $\langle {\bf R} i | {\mathbf{0}} j \rangle = \delta_{{\bf R} {\mathbf{0}}} \delta_{ij}$, we get \begin{align} \Delta H^{\rm KI(2)}_{ij}({\bf R}) & = -\frac{1}{2}\Delta^{\rm KI(2)}_{{\mathbf{0}} j} \delta_{{\bf R} {\mathbf{0}}} \delta_{ij} + \Delta H^{\rm KI(2)}_{ij, {\rm {\bf r}}}({\bf R}) \end{align} where $\Delta^{\rm KI(2)}_{{\mathbf{0}} j}$ is the scalar term and it is simply given by the self Hartree-exchange-correlation of the Wannier orbital density \begin{align} \Delta^{\rm KI(2)}_{{\mathbf{0}} j} & = \langle \rho_{{\mathbf{0}} j} | \left[ f_{\rm Hxc} \right] | \rho_{{\mathbf{0}} j} \rangle = 
\langle \rho_{{\mathbf{0}} j} | V^{{\mathbf{0}} j}_{\rm pert} \rangle \nn \\ & = \frac{1}{N_{{\bf q}}}\sum_{{\bf q}} \langle {\rho^{{\mathbf{0}} j}_{{\bf q}}} | V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}} \rangle. \end{align} and $\Delta H^{\rm KI(2)}_{ij, {\rm {\bf r}}}({\bf R})$ is the contribution from the ${\bf r}$-dependent part of the potential: \begin{align} \Delta H^{\rm KI(2)}_{ij,{\rm {\bf r}}}({\bf R}) & = (1-f_j) \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \frac{1}{N^2_{{\bf q}}}\sum_{{\bf q} {\bf q}'} \int d{\bf r} e^{i({\bf q} +{\bf q}') \cdot {\bf r}} V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}}({\bf r}) \rho_{{\bf k}, {\bf k}+{\bf q}'}^{ij}({\bf r}) \nn \\ & = (1-f_j) \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \frac{1}{N^2_{{\bf q}}}\sum_{{\bf q} {\bf q}'}\sum_{{\bf g} {\bf g}'} \int d{\bf r} e^{i ({\bf q} + {\bf q}') \cdot {\bf r}} e^{i ({\bf g} + {\bf g}') \cdot {\bf r}} V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}}({\bf g}) \rho_{{\bf k}, {\bf k}+{\bf q}'}^{ij}({\bf g}') \nn \\ & = (1-f_j) \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \left\{ \frac{1}{N_{{\bf q}}}\sum_{{\bf q} {\bf g}} V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}}({\bf g}) \rho^{ij}_{{\bf k}, {\bf k}-{\bf q}}(-{\bf g}) \right\} \quad \quad [{\bf q} \rightarrow -{\bf q}, {\bf g} \rightarrow -{\bf g}] \nn \\ & = (1-f_j) \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \left\{ \frac{1}{N_{{\bf q}}} \sum_{{\bf q} {\bf g}} V^{{\mathbf{0}} j}_{{\rm pert}, -{\bf q}}(-{\bf g}) \rho^{ij}_{{\bf k}, {\bf k}+{\bf q}}({\bf g}) \right\} \nn \\ & = \frac{1}{N_{{\bf k}}}\sum_{{\bf k}} e^{i{\bf k} \cdot {\bf R}} \left\{ (1-f_j) \frac{1}{N_{{\bf q}}}\sum_{{\bf q} {\bf g}} [V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}}({\bf g})]^* \rho^{ij}_{{\bf k}, {\bf k}+{\bf q}}({\bf g}) \right\}.
\end{align} The term inside the braces can be readily identified with the KI Hamiltonian at the given ${\bf k}$-point and is the final result for the KI matrix elements reported in Sec.~3.1.2. From a computational point of view, the product of the (periodic part of the) Wannier functions $\rho^{ij}_{{\bf k}, {\bf k}+{\bf q}}$ is performed in real space and then Fourier-transformed to reciprocal space, where the scalar product with the KI potential $V^{{\mathbf{0}} j}_{{\rm pert}, {\bf q}}$ is performed. Each independent ${\bf q}$-contribution is then summed up to give the desired matrix element. \begin{figure*}[t] \begin{subfigure}{} \includegraphics[width=0.45\textwidth]{Figs/LiF_full_vs_2nd.png} \includegraphics[width=0.45\textwidth]{Figs/GaAs_full_vs_2nd.png} \end{subfigure} \caption{KI band structure of LiF (left panel) and GaAs (right panel). Solid red lines are the results from a full KI calculation as described in Ref.~\citenum{de_gennaro_blochs_2021}, black crosses are the results of an equivalent calculation within the present implementation and approximation.} \label{fig:PBE} \end{figure*} \section{Full versus 2$^{\rm nd}$ order approximation} In this section we provide additional tests for the second order approximation adopted in this work for the KI energy and potential corrections. As a reference we take a full KI calculation performed as detailed in Ref.~\citenum{nguyen_koopmans-compliant_2018} and Ref.~\citenum{de_gennaro_blochs_2021} where i) the screening coefficients are computed with a finite difference approach removing/adding one electron to the system (and not just a tiny fraction as in the 2$^{\rm nd}$ order approach presented here), and ii) the KI energy and potential corrections are not approximated to 2$^{\rm nd}$ order. We do, however, retain the approximation of using MLWFs as variational orbitals also for the supercell (SC) reference calculations presented here.
In Fig.~\ref{fig:PBE} we show a comparison between full KI band structure calculations performed with a super-cell plus unfolding implementation~\cite{de_gennaro_blochs_2021} and the present approach for LiF and GaAs. We used the same computational set-up as in Ref.~\citenum{de_gennaro_blochs_2021}, i.e. a 4$\times$4$\times$4 mesh for the sampling of the Brillouin zone to match the dimension of the supercell used in Ref.~\citenum{de_gennaro_blochs_2021}, and the same lattice constants, base functional (PBE) and kinetic-energy cutoff for the expansion of the wave functions. For GaAs the smooth interpolation method presented in Ref.~\citenum{de_gennaro_blochs_2021} is also used to improve the quality of the band interpolation and to allow a fair comparison with the reference results from Ref.~\citenum{de_gennaro_blochs_2021}. For LiF we find overall very good agreement between the two calculations; the valence and conduction bands are in perfect agreement with the reference full KI calculation, both in terms of energy position and shape. For the F $2s$ and Li $1s$ bands, lying at about -20 eV and -47 eV with respect to the top of the valence band, we observe differences of 0.3 eV (1.5 \%) and 1.3 eV (2.8 \%), respectively. For GaAs the differences are more marked, with a 0.3 eV difference in the band gap and 0.5 eV in the position of the Ga $d$ bands. With respect to experiment, this leads to a slightly worse agreement for the band gap and a slightly better one for the Ga $d$ energy. \begin{figure*}[t] \begin{subfigure}{} \includegraphics[width=0.45\textwidth]{Figs/LiF_full_vs_2nd_LDA.png} \includegraphics[width=0.45\textwidth]{Figs/GaAs_full_vs_2nd_LDA_SI.png} \end{subfigure} \caption{As in Fig.~\ref{fig:PBE} but using LDA as the base functional.} \label{fig:LDA} \end{figure*} As an additional check we performed the same comparison as in Fig.~\ref{fig:PBE}, but now using LDA as the base functional (the same used in the main text).
The comparison is shown in Fig.~\ref{fig:LDA}. For GaAs the agreement between the full and the 2$^{\rm nd}$ order approximation is almost perfect, as was already observed for the density of states in Fig. 2 of the main text. At the same time we observe a slightly worse agreement for LiF; in particular, the band gap is underestimated by 0.8 eV (5\%) compared to the full KI result. A summary of the energy positions of selected bands computed at full and 2$^{\rm nd}$ order KI for both LiF and GaAs is given in Tab.~\ref{tab:comparison}. The minor differences between the KI@LDA values reported here and those in the main text (compare Tab.~\ref{tab:comparison} with the tables under Fig.~3 and Fig.~4 in the main text) are due to the different $\mathbf{k}$-mesh (4$\times$4$\times$4 here vs 6$\times$6$\times$6 in the main text) and, for the GaAs band gap, to the spin-orbit coupling correction (-0.1 eV) applied to the results in the main text (not applied here). This analysis points to a non-trivial dependence of the quality of the 2$^{\rm nd}$ order approximation on the system and on the base functional, which requires further investigation to be fully understood and possibly corrected toward a better agreement with the full KI method. Work in this direction is under way.
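As a quick cross-check of the deviations discussed above, the differences between the full and 2$^{\rm nd}$ order band gaps follow directly from the values collected in Tab.~\ref{tab:comparison}; a minimal arithmetic sketch (values in eV, taken from the table):

```python
# Cross-check of band-gap deviations between full and 2nd-order KI,
# using the gap values (in eV) from the comparison table.

def deviation(full, second):
    """Absolute and relative deviation of the 2nd-order gap from the full one."""
    diff = second - full
    return diff, abs(diff) / full

# LiF, KI@PBE: full 15.58 eV vs 2nd order 15.55 eV (near-perfect agreement)
lif_pbe = deviation(15.58, 15.55)
# GaAs, KI@PBE: full 1.68 eV vs 2nd order 2.02 eV (the ~0.3 eV difference)
gaas_pbe = deviation(1.68, 2.02)
# GaAs, KI@LDA: full 1.74 eV vs 2nd order 1.75 eV (almost perfect)
gaas_lda = deviation(1.74, 1.75)

print(f"LiF  KI@PBE : {lif_pbe[0]:+.2f} eV ({lif_pbe[1]:.1%})")   # -0.03 eV (0.2%)
print(f"GaAs KI@PBE : {gaas_pbe[0]:+.2f} eV ({gaas_pbe[1]:.1%})") # +0.34 eV (20.2%)
print(f"GaAs KI@LDA : {gaas_lda[0]:+.2f} eV ({gaas_lda[1]:.1%})") # +0.01 eV (0.6%)
```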
\begin{table}[t] \begin{tabular}{cccccccc} \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{3}{c}{LDA} & \multicolumn{3}{c}{PBE} \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & $E_g$ & $\langle\varepsilon_{\rm{F}_{2s}}\rangle$ & $\langle \varepsilon_{\rm{Li}_{1s}} \rangle $ & $E_g$ & $\langle \varepsilon_{\rm{F}_{2s}} \rangle $ & $\langle \varepsilon_{\rm{Li}_{1s}} \rangle$ \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{LiF}} & \multicolumn{1}{c|}{KI Full} & \multicolumn{1}{c|}{16.16} & \multicolumn{1}{c|}{-19.6} & \multicolumn{1}{c||}{-45.2} & \multicolumn{1}{c|}{15.58} & \multicolumn{1}{c|}{-20.2} & \multicolumn{1}{c|}{-46.2} \\ \cline{2-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{KI 2nd} & \multicolumn{1}{c|}{15.25} & \multicolumn{1}{c|}{-19.5} & \multicolumn{1}{c||}{-46.6} & \multicolumn{1}{c|}{15.55} & \multicolumn{1}{c|}{-19.8} & \multicolumn{1}{c|}{-47.5} \\ \cline{2-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{KIPZ$^\dagger$} & \multicolumn{1}{c|}{ -- } & \multicolumn{1}{c|}{ -- } & \multicolumn{1}{c||}{ -- } & \multicolumn{1}{c|}{ 15.36 } & \multicolumn{1}{c|}{ -21.0 } & \multicolumn{1}{c|}{ -47.1 } \\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & $E_g$ & $W$ & $\langle \varepsilon_d \rangle $ & $E_g$ & $W$ & $\langle \varepsilon_d \rangle$ \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{GaAs}} & \multicolumn{1}{c|}{KI Full} & \multicolumn{1}{c|}{1.74} & \multicolumn{1}{c|}{12.7} & \multicolumn{1}{c||}{-17.8} & \multicolumn{1}{c|}{1.68} & \multicolumn{1}{c|}{12.7} & \multicolumn{1}{c|}{-16.9} \\ \cline{2-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{KI 2nd} & \multicolumn{1}{c|}{1.75} & \multicolumn{1}{c|}{12.7} & \multicolumn{1}{c||}{-17.8} & \multicolumn{1}{c|}{2.02} & \multicolumn{1}{c|}{12.7} & \multicolumn{1}{c|}{-17.4} \\ \hline \end{tabular} \caption{Relevant energy values for LiF and GaAs from full and 2$^{\rm nd}$ order KI calculations. The zero of the energy is set to the VBM.
KIPZ values for LiF~\cite{de_gennaro_blochs_2021} are also reported for comparison.} \label{tab:comparison} \end{table}
1,116,691,499,483
arxiv
\section{Introduction} \IEEEPARstart{N}{etworked} control of systems has gained a lot of attention in recent years. By eliminating unnecessary wiring, the cost and complexity of a control system are reduced, and nodes can more easily be added to or removed from the system. More importantly, there are applications in which the system is required to be controlled over a distance, such as telerobotics, space exploration, and working in hazardous environments~\cite{gupta:10}. Networked control of systems is challenging due to network communication problems, among which delays have the highest impact~\cite{Heemels:10}. In this regard, many works have appeared in the literature investigating the effects of communication delays on the performance of a control system with time-based dynamics~\cite{Antsaklis:07,gupta:10,Heemels:10}. However, there is less work considering networked control of discrete-event systems (DESs). A DES consists of a set of discrete states where state transitions depend only on the occurrence of instantaneous events. DESs are used for modeling many types of systems, e.g., manufacturing processes, traffic, and queuing systems~\cite{Cassandras:99}. In DESs, time is typically neglected, meaning that events can occur independently of time. However, there are control applications in which time is an important factor to be considered, such as minimizing the production-cycle time in a manufacturing process~\cite{Wonham:19}. To consider time in the control of a DES, the concept of a timed discrete-event system (TDES) has been introduced, in which the passage of a unit of time is indicated by an event called \ensuremath{\mathit{tick}}\xspace~\cite{Wonham:94}. Supervisory control theory is the main control approach developed for DESs~\cite{Ramadge:87}. To achieve desired (safe) behavior, a supervisor observes events executed in the plant and determines which of the next possible events must be disabled.
Supervisory control theory synthesizes nonblocking supervisors that ensure safety, controllability, and nonblockingness for the plant and do not unnecessarily restrict the behavior of the plant (maximal permissiveness)~\cite{Cassandras:99}. In conventional supervisory control theory~\cite{Ramadge:84,Cassandras:99}, the plant generates all events, while the supervisor can disable some of the events and synchronously observes the execution of events in the plant.
Based on this synchronous interaction, a model of the controlled plant behavior can be obtained by synchronous composition of the respective models of the plant and the supervisor. However, the synchronous interaction assumption fails in a networked supervisory control setting, due to the presence of delays in the communication channels between the plant and supervisor. There are several works in the literature investigating supervisory control of DES under communication delays. These works may focus on three important properties: 1) Nonblockingness. For many applications, it is important to guarantee that the supervised plant does not block (as an additional control requirement)~\cite{Lin:17NB,Cassandras:99,yin2015synthesis}. 2) Maximal permissiveness. A supervisor must not restrict the plant behavior more than necessary, so that the maximal admissible behavior of the plant is preserved~\cite{Cassandras:99,Wonham:19}. 3) Timed delays (delays modeled based on time). In most of the existing approaches, such as~\cite{Balemi:92,Park:06,Park:12,Lin:14,Lin:17Det,Lin:17NB,Liu:19,Rashidinejad:19}, communication delays are measured in terms of a number of consecutive event occurrences. As stated in~\cite{Lin:17Det,Rashidinejad18,Zhao:17}, it is not appropriate to measure time delay only by the number of event occurrences, since events may have different execution times. Here, as in TDES~\cite{Wonham:19}, the event \ensuremath{\mathit{tick}}\xspace is used to represent the passage of a unit of time, which is the temporal resolution for modeling purposes. Supervisory control synthesis under communication delays was first investigated by Balemi~\cite{Balemi:92}. To solve the problem, Balemi defines a condition called a \emph{delay insensitive language}. A plant has a delay insensitive language whenever any control command, enabled at a state of the plant, is not invalidated by an uncontrollable event.
Under this condition, supervisory control under communication delays can be reduced to conventional supervisory control synthesis~\cite{Balemi:92}. In other words, if a given plant has a delay insensitive language, then the conventional supervisor is robust to the effects of delays. The benefit of this method is that nonblockingness and maximal permissiveness are already guaranteed by the supervisor if it exists (as they are guaranteed in conventional supervisory control theory). However, the imposed condition restricts the applications for which such a supervisor exists. In~\cite{Park:06,Park:12}, a condition called \emph{delay-observability} is defined for the control requirement such that the existence of a networked supervisor depends on it. The delay-observability condition is similar to the delay insensitivity condition, generalized to a sequence of uncontrollable events so that a control command is not invalidated by a sequence of consecutive uncontrollable events. In~\cite{Park:06,Park:12}, nonblockingness is guaranteed. However, maximal permissiveness is not guaranteed. Also, no method is proposed to obtain the supremal controllable and delay-observable sublanguage of a given control requirement~\cite{Park:12}. In a more recent study, Lin introduced new observability and controllability conditions under the effects of communication delays, called \emph{network controllability} and \emph{network observability}~\cite{Lin:14}. The approach presented by Lin has been further modified in~\cite{Lin:14,Shu:15,Zhao:17,Alves:17,Lin:17Pre,Lin:17NB,Lin:17Det}. In all these works, the problem of supervisory control synthesis under communication delays is defined under certain conditions (network controllability and network observability, or modified versions of them). When the conditions are not met (by the control requirement), the synthesis does not result in a (networked) supervisor~\cite{Lin:14,Shu:15,Alves:17,Zhao:17,Lin:17NB}.
As discussed in~\cite{Lin:17Det}, delayed observations and delayed control commands make it (more) challenging to ensure nonblockingness of the supervised plant (compared to the conventional non-networked setting, where there is no delay). To guarantee nonblockingness, additional conditions are imposed on the control requirement in~\cite{Lin:17NB}, but maximal permissiveness is not investigated. In~\cite{Liu:19}, an online predictive supervisory control synthesis method is presented to deal with control delays. The supervisor is claimed to be maximally permissive; however, this is not formally proved. The same holds for~\cite{Shu:15}, where maximal permissiveness is not formally proved although the steps to achieve it are established. In~\cite{Lin:17Pre}, a predictive synthesis approach is proposed to achieve a networked supervisor which is guaranteed to be maximally permissive in case it satisfies the conditions. Nonblockingness is not investigated in~\cite{Lin:17Pre}. None of the works following Lin's method considers nonblockingness and maximal permissiveness simultaneously. Moreover, as discussed in a recent study by Lin, in case the conditions are not met by the control requirement, there is so far no method to compute the supremal sublanguage satisfying the conditions~\cite{Lin20}. In~\cite{Rashidinejad18,Rashidinejad:19}, a new synthesis algorithm is proposed in which the effects of communication delays are taken into account in the synthesis procedure instead of in extra conditions to be satisfied by the plant/control requirement. \cite{Rashidinejad:19} investigates supervisory control of DES in an asynchronous setting. The asynchronous setting does not take time into account, but it is guaranteed that (if the algorithm terminates) the synthesized (asynchronous) supervisor satisfies nonblockingness. Maximal permissiveness is still an open issue in~\cite{Rashidinejad:19}.
\cite{Rashidinejad18} focuses on timed delays, but it does not formally prove nonblockingness or maximal permissiveness. In the more recent study~\cite{Zhu19}, first, the control and observation channels are modeled. Then, both the plant and the control requirements are transformed into a networked setting. Using these transformations, the problem of networked supervisory control synthesis is reduced to conventional supervisory control synthesis. Using conventional supervisory control synthesis, the resulting supervisor is controllable and nonblocking for the transformed plant and the transformed control requirements. However, it is not discussed whether the supervisor satisfies these conditions for the (original) plant. Furthermore, although it is important to consider time in the presence of delays, only a few papers investigate networked supervisory control of TDES~\cite{ParkTime:08,Zhao:17,Alves:17,Rashidinejad18,Miao:19} (where communication delays are modeled based on a consistent unit of time), as it introduces new complexities and challenges. Table \ref{tab:review} gives an overview of the existing works. To the best of our knowledge, none of these works studies supervisory control synthesis of discrete-event systems under communication delays such that delays are modeled based on time and the delivered supervisor guarantees both nonblockingness and maximal permissiveness, as is done in this paper.
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline Citation & Timed & Nonblocking & Permissive\\ \hline \cite{Balemi:92,Park:12} & \ding{55} & \ding{51} & \ding{51}\\ \hline \cite{Lin:14,Zhu19,Liu:19,Shu:15}& \ding{55} & \ding{55} & \ding{55}\\ \hline \cite{Park:06,Lin:17NB,Rashidinejad:19} & \ding{55} & \ding{51} & \ding{55} \\ \hline \cite{Lin:17Pre} & \ding{55} & \ding{55} & \ding{51} \\ \hline \cite{ParkTime:08,Rashidinejad18,Alves:17,Zhao:17,Miao:19} & \ding{51} & \ding{55} & \ding{55}\\ \hline\hline This Paper & \ding{51} & \ding{51} & \ding{51} \\ \hline \end{tabular} \caption{Overview of existing works.} \label{tab:review} \end{table} Our work is close to~\cite{Rashidinejad18} in terms of the networked supervisory control setting and to~\cite{Rashidinejad:19} in terms of the synthesis technique. Similar to~\cite{Rashidinejad18} and~\cite{Rashidinejad:19}, the following practical conditions are taken into account: 1) A controllable event can be executed in the plant only if it is commanded (enabled) by the supervisor. 2) An uncontrollable event is not commanded (enabled) by the supervisor; it occurs spontaneously in the plant. 3) Any event, controllable or uncontrollable, executed in the plant is observable to the supervisor. 4) A control command sent by the supervisor reaches the plant after a constant amount of time delay. The command may not necessarily be accepted by the plant, in which case it will be removed from the control channel when the next \ensuremath{\mathit{tick}}\xspace occurs. Also, the observation of a plant event, controllable or uncontrollable, occurs after a constant amount of time delay. 5) The control channel is assumed to be FIFO, so control commands sent by the supervisor will reach the plant in the same order as they have been sent.
However, the observation channel is non-FIFO, so consecutive events that occur in the plant may be observed by the supervisor in any possible order. For instance, if the events $a$ and $b$ occur in that order between two $\ensuremath{\mathit{tick}}\xspace$s in the plant, they may be observed in the other order. Here, we investigate the situation where only the observation channel is non-FIFO. See Section \ref{remark:nonfifocontrolchannel} for a discussion on how the proposed solution is adapted for a non-FIFO control channel. This paper improves on~\cite{Rashidinejad18,Rashidinejad:19} in the following aspects: 1) Modeling purposes. In~\cite{Wonham:19}, a TDES is generally derived from a DES by restricting the execution of each event within a lower and an upper time bound specified for the event. Also, a TDES should satisfy the ``activity-loop free'' (ALF) assumption to guarantee that the clock never stops~\cite{Wonham:19}. Fixing time bounds for events and imposing the ALF condition restrict the applications that can be modeled as TDESs. In this paper, the plant is already given as a TDES. Namely, the plant behavior is represented by an automaton, including the event \ensuremath{\mathit{tick}}\xspace with no specific relationship between the occurrences of \ensuremath{\mathit{tick}}\xspace and other events. To relax the ALF condition, the concept of time-lock freeness is introduced as a property expressing the time progress of the system. Time-lock freeness, similar to nonblockingness, is guaranteed by the networked supervisor. 2) Synthesis technique. Inspired by the idea introduced in~\cite{Rashidinejad:19} to synthesize an asynchronous supervisor for DES, the synthesis method proposed in~\cite{Rashidinejad18} for networked supervisory control of TDES is improved.
For this purpose, first, the networked supervisory control (NSC) framework is modeled. Then, a networked plant automaton is proposed, modeling the behavior of the plant in the NSC framework. Based on the networked plant, a networked supervisor is synthesized. It is guaranteed that the networked supervisor provides nonblockingness, time-lock freeness, and maximal permissiveness. 3) Control requirement. The control requirement in~\cite{Rashidinejad18,Rashidinejad:19} is limited to the avoidance of illegal states. Here, the networked supervisory control synthesis is generalized to control requirements modeled as automata. In the following, the NSC framework is introduced in Section~\ref{section:basic NSP}. For the NSC framework, an operator is proposed to give the networked supervised plant. Moreover, the conventional controllability and maximal permissiveness conditions are modified to timed networked controllability and timed networked maximal permissiveness conditions suitable for the NSC framework. Then, the basic networked supervisory control synthesis problem is formulated which aims to find a timed networked controllable and timed networked maximally permissive networked supervisor guaranteeing nonblockingness and time-lock freeness of the networked supervised plant. In Section~\ref{section:synthesis}, first, the networked plant is defined as an automaton representing the behavior of the plant under communication delays and disordered observations. Furthermore, a technique is presented to synthesize a networked supervisor that is a solution to the basic networked supervisory control problem. In Section~\ref{section:requirements}, the basic networked supervisory control synthesis problem is generalized to satisfy a given set of control requirements. Relevant examples are provided in each section. Finally, Section~\ref{section:conclusions} concludes the paper. To enhance readability, all technical lemmas and proofs are given in the appendices. 
\section{Basic NSC Problem} \label{section:basic NSP} \subsection{Conventional Supervisory Control Synthesis of TDES} A TDES $G$ is formally represented as a quintuple \begin{equation*} G=(A, \Sigma, \delta, a_{0}, A_{m}), \end{equation*} where $A, \Sigma$, $\delta: A \times \Sigma \rightarrow A$, $a_{0}\in A$, and $A_{m}\subseteq A$ stand for the set of states, the set of events, the (partial) transition function, the initial state, and the set of marked states, respectively. The set of events of any TDES is assumed to contain the event $\ensuremath{\mathit{tick}}\xspace \in \Sigma$. The set $\Sigma_a = \Sigma \setminus \{ \ensuremath{\mathit{tick}}\xspace \}$ is called the set of active events. The notation $\delta(a,\sigma)!$ denotes that $\delta$ is defined for state $a$ and event $\sigma$, i.e., there is a transition from state $a$ with label $\sigma$ to some state. The transition function is generalized to words in the usual way: $\delta(a,w)=a'$ means that there is a sequence of subsequent transitions from state $a$ to the state $a'$ that together make up the word $w\in\Sigma^*$. Starting from the initial state, the set of all possible words that may occur in $G$ is called the language of $G$ and is indicated by $L(G)$; $L(G):=\{w\in\Sigma^*\mid\delta(a_0,w)!\}$. Furthermore, for any state $a\in A$, the function $\textit{Reach}(a)$ gives the set of states reachable from the state $a$; $\textit{Reach}(a) :=\{a'\in A\mid \exists w\in\Sigma^*, \delta(a,w)=a'\}$. States from which it is possible to reach a marked state are called nonblocking. An automaton is nonblocking when each state reachable from the initial state is nonblocking; for each $a\in \textit{Reach}(a_0)$, $\textit{Reach}(a) \cap A_m \neq \varnothing$. $L_m(G)$ denotes the marked language of $G$; $L_m(G):=\{w\in L(G)\mid \delta(a_0,w)\in A_m\}$. States from which time can progress are called \emph{time-lock free} (TLF). 
An automaton is TLF when each state reachable from the initial state is TLF; for each $a\in \textit{Reach}(a_0)$, there exists a $w\in \Sigma^*$ such that $\delta(a, w\,\ensuremath{\mathit{tick}}\xspace)!$. \begin{defn}[Natural Projection~\cite{Cassandras:99}] \label{dfn:proj} For sets of events $\Sigma$ and $\Sigma'\subseteq\Sigma$, $P_{\Sigma'}: \Sigma^* \rightarrow \Sigma'^*$ is defined as follows: for $e \in \Sigma$ and $w \in \Sigma^*$, \begin{align*} P_{\Sigma'}(\epsilon)&:=\epsilon,\\ P_{\Sigma'}(we)&:=\begin{cases} P_{\Sigma'}(w) e &\text{if $e\in\Sigma'$,}\\ P_{\Sigma'}(w) &\text{if $e\in\Sigma\setminus\Sigma'$.} \end{cases} \end{align*} The definition of natural projection is extended to a language $L\subseteq\Sigma^*$; $P_{\Sigma'}(L):=\{w'\in\Sigma'^*\mid \exists w\in L, P_{\Sigma'}(w)=w'\}$ \cite{Cassandras:99}. \hfill $\blacksquare$ \end{defn} Natural projection is an operation that is generally defined for languages. However, it is also possible to apply it to automata~\cite{Ware:08}. For an automaton with event set $\Sigma$, $P_{\Sigma'}$ first replaces all events not from $\Sigma'$ by the silent event $\tau$. Then, using a determinization algorithm (such as the one introduced in~\cite{Hopcroft:01}), the resulting automaton is made deterministic. A state of a projected automaton is then marked if it contains at least one marked state from the original automaton (see~\cite{Hopcroft:01} for more details).
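As a concrete illustration of Definition~\ref{dfn:proj}, the following minimal Python sketch (illustrative only, with events written as plain strings and \ensuremath{\mathit{tick}}\xspace as \texttt{'tick'}) applies the natural projection to words and to a finite language:

```python
# Sketch of the natural projection P_{Sigma'}: events outside the
# target sub-alphabet Sigma' are simply erased from the word.

def project(word, sigma_p):
    """P_{Sigma'}(w): keep only the events of `word` that belong to sigma_p."""
    return tuple(e for e in word if e in sigma_p)

def project_language(lang, sigma_p):
    """Extension of the projection to a (finite) language."""
    return {project(w, sigma_p) for w in lang}

# Example: project onto the sub-alphabet {a, tick}, erasing the event u.
w = ('a', 'u', 'tick', 'u')
print(project(w, {'a', 'tick'}))                     # ('a', 'tick')
print(project_language({w, ('u',)}, {'a', 'tick'}))
```

Note that distinct words can project to the same word, which is why the automaton construction above requires a determinization step.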
Using the notation $\delta_P$ for the transition function of the projected automaton, we state the following properties of this construction: (1) for any $w \in \Sigma^*$, if $\delta(a_0,w)=a_r$, then $\delta_P(A_0,P_{\Sigma'}(w)) = A_r$, where $A_0$ is the initial state of the projected automaton and $A_r \subseteq A$ is a set with $a_r \in A_r$; (2) for any $w \in \Sigma^*$, if $\delta(a_0,w) \in A_m$, then $\delta_P(A_0,P_{\Sigma'}(w))$ is a marked state in the projected automaton. In the rest of the paper, the plant is given as the TDES $G$ represented by the automaton $(A,\Sigma_G,\delta_G,a_0,A_m)$ with $\Sigma_G=\Sigma_a\cup\{\ensuremath{\mathit{tick}}\xspace\}$ and $\Sigma_a\cap\{\ensuremath{\mathit{tick}}\xspace\}=\varnothing$. Also, as holds for many applications, $G$ is a finite automaton \cite{Wonham:19}. A finite automaton has a finite set of states and a finite set of events \cite{Carrol:89}. Here, it is assumed that all events in $G$ are observable. A subset of the active events $\ensuremath{\Sigma_{\mathit{uc}}}\xspace\subseteq \Sigma_a$ is uncontrollable. $\Sigma_c=\Sigma_a\setminus\ensuremath{\Sigma_{\mathit{uc}}}\xspace$ gives the set of controllable active events. The event \ensuremath{\mathit{tick}}\xspace is uncontrollable by nature. However, as in~\cite{Wonham:94}, it is assumed that \ensuremath{\mathit{tick}}\xspace can be preempted by a set of forcible events $\ensuremath{\Sigma_{\mathit{for}}}\xspace\subseteq \Sigma_a$. Note that forcible events can be either controllable or uncontrollable. For instance, closing a valve to prevent overflow of a tank, and the landing of a plane, are controllable and uncontrollable forcible events, respectively~\cite{Wonham:19}. Note that for synthesis, the status of the event \ensuremath{\mathit{tick}}\xspace lies between controllable and uncontrollable, depending on the presence of enabled forcible events. To clarify, when the event \ensuremath{\mathit{tick}}\xspace is enabled at some state $a$ and there also exists a forcible event $\sigma\in\ensuremath{\Sigma_{\mathit{for}}}\xspace$ such that $\delta_G(a,\sigma)!$, then \ensuremath{\mathit{tick}}\xspace is considered a controllable event since it can be preempted. Otherwise, \ensuremath{\mathit{tick}}\xspace is an uncontrollable event. In the figures, forcible events are underlined.
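To make the reachability, nonblockingness, and time-lock freeness definitions above concrete, the following minimal Python sketch (illustrative only, not the synthesis machinery of this paper) encodes a TDES transition function as a nested dictionary and checks the three properties on a small hypothetical plant:

```python
# A TDES transition function is encoded as: delta[state][event] = next_state.

def reach(delta, start):
    """Reach(start): all states reachable from `start` via delta."""
    seen, stack = {start}, [start]
    while stack:
        a = stack.pop()
        for nxt in delta.get(a, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def nonblocking(delta, a0, marked):
    """Every reachable state can reach a marked state."""
    return all(reach(delta, a) & marked for a in reach(delta, a0))

def time_lock_free(delta, a0, tick='tick'):
    """Every reachable state can reach a state where tick is enabled."""
    can_tick = lambda a: any(tick in delta.get(b, {}) for b in reach(delta, a))
    return all(can_tick(a) for a in reach(delta, a0))

# Hypothetical four-state plant: a3 is an unmarked state with no escape.
delta = {'a0': {'a': 'a1', 'u': 'a2'},
         'a1': {'tick': 'a1'},
         'a2': {'a': 'a3', 'tick': 'a2'},
         'a3': {'tick': 'a3'}}
print(nonblocking(delta, 'a0', {'a0', 'a1', 'a2'}))  # False: a3 blocks
print(time_lock_free(delta, 'a0'))                   # True: tick self-loops
```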
The transitions labelled by controllable (active or \ensuremath{\mathit{tick}}\xspace) events are indicated by solid lines, and the transitions labelled by uncontrollable (active or \ensuremath{\mathit{tick}}\xspace) events are indicated by dashed lines. If the plant $G$ is blocking, then a supervisor $S$ needs to be synthesized to ensure nonblockingness of the supervised plant. $S$ is also a TDES with the same event set as $G$. Since the plant and supervisor are supposed to work synchronously in a conventional non-networked setting, the automaton representing the supervised plant behavior is obtained by applying the \emph{synchronous product} indicated by $S||G$ \cite{Cassandras:99}. Generally, in the synchronous product of two automata, a shared event can be executed only when it is enabled in both automata, and a non-shared event can be executed if it is enabled in the corresponding automaton. Since the conventional supervisor $S$ has the same event set as $G$, each event will be executed in $S||G$ only if the supervisor enables (allows) it. $S$ is controllable if it allows all uncontrollable events that may occur in the plant. This is captured in \emph{conventional controllability for TDES}. \begin{defn}[Conventional Controllability for TDES (reformulated from~\cite{Wonham:19})] \label{dfn:cont.TDES} Given a plant $G$ with uncontrollable events $\ensuremath{\Sigma_{\mathit{uc}}}\xspace$ and forcible events $\ensuremath{\Sigma_{\mathit{for}}}\xspace$, a TDES $S$ is controllable w.r.t.\ $G$ if for all $w\in L(S||G)$ and $\sigma\in \ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{\ensuremath{\mathit{tick}}\xspace\}$, if $w\sigma\in L(G)$, \begin{enumerate} \item \label{defCtrlStandard}$w\sigma\in L(S||G)$, or \item \label{defCtrlForcible}$\sigma=\ensuremath{\mathit{tick}}\xspace$ and $w\sigma_f\in L(S||G)$ for some $\sigma_f\in \ensuremath{\Sigma_{\mathit{for}}}\xspace$.\hfill $\blacksquare$ \end{enumerate} \end{defn} Property~\eqref{defCtrlStandard} in the above definition is the standard controllability property (when there is no forcible event to preempt \ensuremath{\mathit{tick}}\xspace); $S$ cannot disable uncontrollable events that $G$ may generate. However, if a forcible event is enabled, it may preempt the time event, which is captured by Property~\eqref{defCtrlForcible}. A supervisor $S$ is called \emph{proper} for a plant $G$ whenever $S$ is controllable w.r.t.\ $G$ and the supervised plant $S||G$ is nonblocking. \begin{defn}[Conventional Maximal Permissiveness] \label{dfn:MaxPer} A proper supervisor $S$ is \emph{maximally permissive} for a plant $G$ whenever $S$ preserves the largest behavior of $G$ compared to any other proper supervisor $S'$; i.e., for any proper $S'$: $L(S'||G)\subseteq L(S||G)$. \hfill$\blacksquare$ \end{defn} For a TDES, a proper and maximally permissive supervisor can be synthesized by applying the synthesis algorithm proposed in~\cite{Wonham:19}.
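The check in Definition \ref{dfn:cont.TDES} can be made concrete over explicit finite automata. The following Python sketch is illustrative only (the nested-dictionary encoding and all names are ours, and it is not the synthesis algorithm of~\cite{Wonham:19}): it explores the reachable states of $S||G$ and verifies that every uncontrollable event enabled by $G$ is also enabled in $S||G$, treating \ensuremath{\mathit{tick}}\xspace as uncontrollable unless an enabled forcible event can preempt it.

```python
from collections import deque

TICK = 'tick'

def is_controllable(dG, dS, a0, s0, Sigma_uc, Sigma_for):
    """Sketch of conventional TDES controllability of S w.r.t. G.

    dG, dS: transition functions as nested dicts, state -> {event: next_state}.
    Since S and G share one event set, S||G is explored on the fly as pairs
    (a, s); an event occurs in S||G iff it is enabled in both components.
    """
    seen, queue = {(a0, s0)}, deque([(a0, s0)])
    while queue:
        a, s = queue.popleft()
        enabled = set(dG.get(a, {})) & set(dS.get(s, {}))  # events of S||G here
        for sigma in set(dG.get(a, {})) & (Sigma_uc | {TICK}):
            if sigma in enabled:
                continue                      # Property 1: event not disabled
            if sigma == TICK and enabled & Sigma_for:
                continue                      # Property 2: tick is preempted
            return False                      # an uncontrollable event is disabled
        for sigma in enabled:                 # explore successors of S||G
            z = (dG[a][sigma], dS[s][sigma])
            if z not in seen:
                seen.add(z)
                queue.append(z)
    return True
```

For instance, for the plant of Figure \ref{fig:ME1P} with $\ensuremath{\Sigma_{\mathit{uc}}}\xspace=\{u\}$, a supervisor that disables $a$ only after $u$ has occurred passes this check, whereas one that never enables $u$ fails it.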
\subsection{Motivating Examples} This section discusses the situations where a proper and maximally permissive conventional supervisor $S$ fails in the presence of observation delay (Example~\ref{exp:ME1}), non-FIFO observation (Example \ref{exp:ME2}), or control delay (Example \ref{exp:ME3}). \begin{example}[Observation Delay] \label{exp:ME1} Consider the plant depicted in Figure \ref{fig:ME1P}. To be maximally permissive, $S$ must not disable $a$ at $a_0$, and to be nonblocking, $S$ must disable $a$ at $a_2$. Now, assume that the observations of the events executed in $G$ are not immediately received by $S$ due to observation delay. Starting from $a_0$, imagine that $u$ occurs, and $G$ goes to $a_2$. Since $S$ does not observe $u$ immediately, it supposes that $G$ is still at $a_0$, where it enables $a$. Then, $a$ will be applied at the real state where $G$ is, i.e., $a_2$, and so $G$ goes to $a_3$, which is blocking. \begin{figure}[htb] \centering \begin{tikzpicture}[>=stealth',shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state,accepting] (A) {$a_0$}; \node[state,accepting] (B) [right of=A] {$a_1$}; \node[state,accepting] (C) [below of=A] {$a_2$}; \node[state] (D) [right of=C] {$a_3$}; \path[->] (A) edge [above] node [align=center] {$a$} (B) (A) edge [right,dashed] node [align=center] {$u$} (C) (C) edge [above] node [align=center] {$a$} (D) (B) edge [loop right, dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (D) edge [loop right, dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D) (C) edge [loop left, dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C); \end{tikzpicture} \caption{Plant for Example \ref{exp:ME1}.}\label{fig:ME1P} \end{figure} \end{example} \begin{example}[Non-FIFO Observation] \label{exp:ME2} Consider the plant $G$ depicted in Figure \ref{fig:ME2P}.
To be nonblocking, $S$ must disable $a$ at $a_3$, and to be maximally permissive, $S$ must not disable $a$ at $a_6$. Now, assume that the observation channel is non-FIFO, i.e., events may be observed in a different order than they occurred in $G$. Starting from $a_0$, imagine that $G$ executes $\ensuremath{\mathit{tick}}\xspace\,a\,b$ and goes to $a_3$. Since the observation channel is non-FIFO, $S$ may receive the observation of $\ensuremath{\mathit{tick}}\xspace\,a\,b$ as $\ensuremath{\mathit{tick}}\xspace\,b\,a$, after which it does not disable $a$. However, $G$ is actually at $a_3$ and by executing $a$, it goes to $a_4$, which is blocking. \begin{figure}[htb] \centering \begin{tikzpicture}[>=stealth',shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state,accepting] (A) {$a_0$}; \node[state,accepting] (B) [right of=A] {$a_1$}; \node[state,accepting] (C) [right of=B] {$a_2$}; \node[state,accepting] (D) [right of=C] {$a_3$}; \node[state] (E) [right of=D] {$a_4$}; \node[state,accepting] (F) [below of=B] {$a_5$}; \node[state,accepting] (G) [right of=F] {$a_6$}; \node[state,accepting] (H) [right of=G] {$a_7$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$a$} (C) (C) edge [above] node [align=center] {$b$} (D) (D) edge [above] node [align=center] {$a$} (E) (E) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (E) (B) edge [right] node [align=center] {$b$} (F) (F) edge [above] node [align=center] {$a$} (G) (G) edge [above] node [align=center] {$a$} (H) (H) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (H); \end{tikzpicture} \caption{Plant for Example \ref{exp:ME2}.}\label{fig:ME2P} \end{figure} \end{example} \begin{example}[Control Delay] \label{exp:ME3} Consider the plant depicted in Figure \ref{fig:ME3P}.
To be maximally permissive, $S$ must not disable $a$ at $a_1$, and to be nonblocking, $S$ must disable $a$ at $a_3$. Now, assume that control commands are received by $G$ after one \ensuremath{\mathit{tick}}\xspace. Starting from $a_0$, $S$ does not disable $a$ after one \ensuremath{\mathit{tick}}\xspace (when $G$ is at $a_1$). However, the command is received by $G$ after the passage of one \ensuremath{\mathit{tick}}\xspace (due to the control delay), when $G$ is at $a_3$. So, by executing $a$ at $a_3$, $G$ goes to $a_4$, which is blocking. \begin{figure}[htb] \centering \begin{tikzpicture}[>=stealth',shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state,accepting] (A) {$a_0$}; \node[state,accepting] (B) [right of=A] {$a_1$}; \node[state,accepting] (C) [right of=B] {$a_2$}; \node[state,accepting] (D) [below of=B] {$a_3$}; \node[state] (E) [right of=D] {$a_4$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$a$} (C) (B) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D) (D) edge [above] node [align=center] {$a$} (E) (D) edge [loop left,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D) (E) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (E) (C) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C); \end{tikzpicture} \caption{Plant for Example \ref{exp:ME3}.}\label{fig:ME3P} \end{figure} \end{example} \begin{remark} Conventional supervisory control synthesis of a TDES guarantees nonblockingness~\cite{Wonham:19}. However, as can be seen in Example~\ref{exp:ME2}, it cannot guarantee time-lock freeness; $a_3$ is not TLF, and it is not removed by $S$. This is not an issue in~\cite{Wonham:19} since a TDES is assumed to satisfy the ALF condition.
Here, to guarantee time progress, the TLF property must be considered in synthesis. \end{remark} As is clear from the examples, a supervisor is required that can deal with the problems caused by communication delays and disordered observations. To achieve such a supervisor, first, the networked supervisory control framework is established. \subsection{NSC Framework} In the presence of delays in the control and observation channels, enabling, executing and observing events do not happen at the same time. Figure \ref{fig:NSCFW} depicts the networked supervisory control (NSC) framework that is introduced in this paper. To recognize the differences between the enablement and observation of events and their execution in the plant, as in \cite{Rashidinejad18,Rashidinejad:19}, a set of \emph{enabling events} $\Sigma_e$ and a set of \emph{observed events} $\Sigma_o$ are introduced. \begin{defn}[Enabling and Observed Events]\label{dfn:Sigmao&Sigmae} Given a plant $G$, to each controllable active event $\sigma\in\Sigma_c$ an enabling event $\sigma_{e}\in\Sigma_{e}$, and to each active event $\sigma\in\Sigma_a$ an observed event $\sigma_o\in\Sigma_o$ are associated such that $\Sigma_e\cap\Sigma_a=\varnothing$ and $\Sigma_o\cap\Sigma_a=\varnothing$ (clearly $\Sigma_e\cap\Sigma_o=\varnothing$). \hfill $\blacksquare$ \end{defn} Note that all events executed in the plant are supposed to be observable, so an observed event $\sigma_o$ is associated with every $\sigma\in \Sigma_a$. However, not all the events are supposed to be controllable. Uncontrollable events such as disturbances or faults occur in the plant spontaneously. In this regard, enabling events $\sigma_e$ are associated only with events from $\Sigma_c$.
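As a small illustration of Definition \ref{dfn:Sigmao&Sigmae}, the enabling and observed alphabets can be generated from $\Sigma_a$ and $\Sigma_c$ by tagging event names. This is only a sketch (the suffix convention and the function name are ours); any naming that keeps the three alphabets pairwise disjoint works equally well.

```python
def make_network_alphabets(Sigma_a, Sigma_c):
    """Derive the enabling and observed alphabets of the NSC framework.

    Every active event sigma gets an observed copy sigma_o, while only the
    controllable active events get an enabling copy sigma_e; tagging the
    names keeps Sigma_e, Sigma_o, and Sigma_a pairwise disjoint.
    """
    assert Sigma_c <= Sigma_a, "controllable events are active events"
    Sigma_e = {s + '_e' for s in Sigma_c}
    Sigma_o = {s + '_o' for s in Sigma_a}
    return Sigma_e, Sigma_o
```

For $\Sigma_a=\{j,p\}$ with $\Sigma_c=\{j\}$, this yields $\Sigma_e=\{j_e\}$ and $\Sigma_o=\{j_o,p_o\}$: the uncontrollable event $p$ can be observed but not enabled.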
\begin{figure}[htbp] \centering \includegraphics[width=55mm]{NSCFWclock.pdf} \caption{NSC framework~\cite{Rashidinejad18}.} \label{fig:NSCFW} \end{figure} Considering Figure \ref{fig:NSCFW}, a networked supervisor for $G$ that fits in the proposed framework is a TDES given as: \begin{equation*} \label{eq:NS} \ensuremath{\mathit{NS}}\xspace = (Y, \ensuremath{\Sigma_{\mathit{NS}}}\xspace, \ensuremath{\delta_{\mathit{NS}}}\xspace, y_{0}, Y_{m}), \end{equation*} for which the event set $\ensuremath{\Sigma_{\mathit{NS}}}\xspace=\Sigma_e\cup\Sigma_o\cup\{\ensuremath{\mathit{tick}}\xspace\}$, and the event \ensuremath{\mathit{tick}}\xspace is produced by the global clock in the system so that $\ensuremath{\Sigma_{\mathit{NS}}}\xspace\cap\Sigma_G=\{\ensuremath{\mathit{tick}}\xspace\}$. For the proposed NSC framework, the behavior of the plant under the control of a networked supervisor is achieved through \emph{asynchronous composition}. To define asynchronous composition, we first need to consider the effects of delays on events sent through the control and observation channels. In this paper, it is assumed that the control (observation) channel has a finite capacity denoted by $\ensuremath{L_{\mathit{max}}}\xspace$ ($\ensuremath{M_{\mathit{max}}}\xspace$) and introduces a constant delay represented by a natural number $N_c$ ($N_o$). Since the control channel is supposed to be FIFO, a list or sequence is used to track the journey of events through the control channel. As given in Definition \ref{dfn:list} below, $l\in (\Sigma_c \times [0,N_c])^*$ provides us with the current situation of the control channel. The interpretation of $l[i]=(\sigma,n)$ is that the $i^{th}$ enabling event present in the control channel is $\sigma_e$, which still requires $n$ \ensuremath{\mathit{ticks}}\xspace before being received by the plant.
\begin{defn}[Control Channel Representation] \label{dfn:list} The control channel is represented by the set $L=(\Sigma_c \times [0,N_c])^*$. Moreover, we define the following operations for all $\sigma \in \Sigma_c$, the time counter $n \in [0,N_c]$ and $l \in L$: \begin{itemize} \item $\varepsilon$ denotes the empty sequence. \item $\mathit{app}(l,(\sigma,n))$ adds the element $(\sigma,n)$ to the end of $l$ if $|l|<\ensuremath{L_{\mathit{max}}}\xspace$ (the channel is not full); otherwise, $l$ stays the same. \item $\ensuremath{\mathit{head}}\xspace(l)$ gives the first element of $l$ (for nonempty lists). Formally, $\ensuremath{\mathit{head}}\xspace((\sigma,n)~l) = (\sigma,n)$ and $\ensuremath{\mathit{head}}\xspace(\varepsilon)$ is undefined. \item $\mathit{tail}(l)$ denotes the list after removal of its leftmost element. Formally, $\mathit{tail}((\sigma,n)~l) = l$ and $\mathit{tail}(\varepsilon)$ is undefined. \item $l-1$ decreases the natural number component of every element in $l$ by one (if possible). It is defined inductively as follows: $\varepsilon -1 = \varepsilon$, $((\sigma,0) ~l) -1 = l-1$, and $((\sigma,n+1)~l) -1 = (\sigma,n)~(l-1)$. \hfill $\blacksquare$ \end{itemize} \end{defn} Due to the assumption that the observation channel is non-FIFO, we use a multiset to track the journey of each event through the observation channel. As given in Definition \ref{dfn:medium} below, the multiset $m:\Sigma_a \times [0,N_o]\rightarrow \mathbb{N}$ provides us with the current situation of the observation channel. The interpretation of $m(\sigma,n) = k$ is that currently there are $k$ events $\sigma$ in the observation channel that still require $n$ \ensuremath{\mathit{ticks}}\xspace before reaching the (networked) supervisor. \begin{defn}[Observation Channel Representation] \label{dfn:medium} The observation channel is represented by the set $M=\{m\, |\, m:\Sigma_a \times [0,N_o]\rightarrow \mathbb{N}\}$.
Moreover, we define the following operations for all $m\in M$, $\sigma,\sigma' \in \Sigma_a$ and the time counters $n,n' \in [0,N_o]$: \begin{itemize} \item $[]$ denotes the empty multiset, i.e., the function $m$ with $m(\sigma,n)=0$ for all $\sigma\in\Sigma_a$ and $n\in[0,N_o]$. \item $|m|=\sum_{(\sigma,n)\in\Sigma_a\times[0,N_o]}m(\sigma,n)$ denotes the number of events in the observation channel represented by $m$. \item $m\uplus [(\sigma,n)]$ inserts $(\sigma,n)$ into $m$ if $|m|<\ensuremath{M_{\mathit{max}}}\xspace$ (the observation channel is not full). Formally, it denotes the function $m'$ for which $m'(\sigma,n)=m(\sigma,n)+1$ and $m'(\sigma',n') = m(\sigma',n')$ otherwise. If $|m|=\ensuremath{M_{\mathit{max}}}\xspace$ (the observation channel is full), then the channel stays the same, i.e., $m'=m$. \item $m\setminus [(\sigma,n)]$ removes $(\sigma,n)$ from $m$ once. Formally, it denotes the function $m'$ for which $m'(\sigma,n)=\max(m(\sigma,n)-1,0)$ and $m'(\sigma',n') = m(\sigma',n')$ otherwise. \item $m-1$ decreases the natural number component of every element by one (as long as it is positive). Formally, it denotes the function $m'$ for which $m'(\sigma,n) = m(\sigma,n+1)$ for all $n < N_o$ and $m'(\sigma,N_o) = 0$. \item $(\sigma,n) \in m$ denotes that the pair $(\sigma,n)$ is present in $m$; it holds if $m(\sigma,n) > 0$. \hfill $\blacksquare$ \end{itemize} \end{defn} In the rest of the paper, a networked supervisor for the plant $G$ is given as the TDES $\ensuremath{\mathit{NS}}\xspace$ represented by the automaton $(Y,\ensuremath{\Sigma_{\mathit{NS}}}\xspace,\ensuremath{\delta_{\mathit{NS}}}\xspace,y_0,Y_m)$. Considering the representation of control and observation channels, an asynchronous composition operator is defined to obtain the networked supervised plant.
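Both channel representations can be sketched as small Python classes. The sketch below is illustrative only (class and method names are ours): the FIFO control channel mirrors $\mathit{app}$, $\ensuremath{\mathit{head}}\xspace$, $\mathit{tail}$ and $l-1$, and the non-FIFO observation channel mirrors $m\uplus[(\sigma,n)]$, $m\setminus[(\sigma,n)]$ and $m-1$, including the capacity guards $\ensuremath{L_{\mathit{max}}}\xspace$ and $\ensuremath{M_{\mathit{max}}}\xspace$.

```python
from collections import Counter

class ControlChannel:
    """FIFO control channel: a bounded list of (event, remaining ticks) pairs."""
    def __init__(self, N_c, L_max):
        self.N_c, self.L_max, self.l = N_c, L_max, []

    def app(self, sigma):
        # app(l, (sigma, N_c)): append only if the channel is not full
        if len(self.l) < self.L_max:
            self.l.append((sigma, self.N_c))

    def head(self):
        return self.l[0]          # undefined (IndexError) for the empty list

    def tail(self):
        self.l = self.l[1:]       # drop the leftmost element

    def tick(self):
        # l - 1: decrement every counter; elements already at 0 are dropped
        self.l = [(s, n - 1) for (s, n) in self.l if n > 0]

class ObsChannel:
    """Non-FIFO observation channel: a bounded multiset of (event, ticks) pairs."""
    def __init__(self, N_o, M_max):
        self.N_o, self.M_max, self.m = N_o, M_max, Counter()

    def insert(self, sigma):
        # m ⊎ [(sigma, N_o)]: insert only if the channel is not full
        if sum(self.m.values()) < self.M_max:
            self.m[(sigma, self.N_o)] += 1

    def remove(self, sigma, n=0):
        # m \ [(sigma, n)]: remove one occurrence, never going below zero
        if self.m[(sigma, n)] > 0:
            self.m[(sigma, n)] -= 1

    def tick(self):
        # m - 1: shift every counter down by one; nothing remains at N_o
        self.m = Counter({(s, n - 1): k
                          for (s, n), k in self.m.items() if n > 0 and k > 0})

    def ready(self):
        # events with counter 0 are ready to be observed
        return [s for (s, n), k in self.m.items() if n == 0 and k > 0]
```

The \texttt{ready} query lists the events whose counter reached $0$; in the composition defined next, such events must be observed before \ensuremath{\mathit{tick}}\xspace may occur.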
\begin{defn}[Timed Asynchronous Composition Operator] \label{dfn:operator} Given a plant $G$ and a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ (for $G$), the asynchronous product of $G$ and $\ensuremath{\mathit{NS}}\xspace$, denoted by $\ensuremath{\mathit{NS}}\xspace_{N_c}\|_{N_o}\,G$, is given by the automaton \begin{equation*} \ensuremath{\mathit{NS}}\xspace_{N_c}\|_{N_o}\,G = (Z, \ensuremath{\Sigma_{\mathit{NSP}}}\xspace, \ensuremath{\delta_{\mathit{NSP}}}\xspace, z_{0}, Z_{m}), \end{equation*} where \begin{equation*} \begin{aligned} &Z= A\times Y\times M\times L,\qquad \ensuremath{\Sigma_{\mathit{NSP}}}\xspace= \ensuremath{\Sigma_{\mathit{NS}}}\xspace\cup\Sigma,\\ &z_{0}=(a_{0},y_0,[],\varepsilon),\hspace{1.30cm} Z_{m}=A_{m}\times Y_{m}\times M\times L. \end{aligned} \end{equation*} Moreover, for $a\in A$, $y\in Y$, $m\in M$, and $l\in L$, $\ensuremath{\delta_{\mathit{NSP}}}\xspace:Z\times\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\rightarrow Z$ is defined as follows: \begin{enumerate} \item When an event $\sigma_e \in \Sigma_e$ occurs in $\ensuremath{\mathit{NS}}\xspace$, it is sent through the control channel. This is represented by adding $(\sigma,N_c)$ to $l$ where $N_c$ is the remaining time for $\sigma_e$ until being received by $G$. If $\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma_e)!$: \begin{equation*} \ensuremath{\delta_{\mathit{NSP}}}\xspace((a,y,m,l),\sigma_{e})=(a,\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma_e),m,\mathit{app}(l,(\sigma,N_c))). \end{equation*} \item An active controllable event $\sigma\in \Sigma_c$ can occur if the plant enables it, and the corresponding control command (enabling event) is received by the plant as $(\sigma,0)$ (as the enabling event finished its journey through the control channel). When $\sigma$ occurs, it will be stored in $m$ with the remaining time $N_o$ until being observed by $\ensuremath{\mathit{NS}}\xspace$. 
If $\delta_G(a,\sigma)!$ and $\ensuremath{\mathit{head}}\xspace(l)=(\sigma,0)$: \begin{equation*} \ensuremath{\delta_{\mathit{NSP}}}\xspace((a,y,m,l),\sigma)=(\delta_G(a,\sigma),y,m\uplus [(\sigma,N_{o})],\mathit{tail}(l)). \end{equation*} \item An uncontrollable event $\sigma \in \ensuremath{\Sigma_{\mathit{uc}}}\xspace$ can occur if it is enabled in $G$. When $\sigma$ occurs, it will be stored in $m$ with the remaining time $N_o$ until being observed by $\ensuremath{\mathit{NS}}\xspace$. If $\delta_G(a,\sigma)!$: \begin{equation*} \ensuremath{\delta_{\mathit{NSP}}}\xspace((a,y,m,l),\sigma)=(\delta_G(a,\sigma),y,m\uplus [(\sigma,N_{o})],l). \end{equation*} \item Event \ensuremath{\mathit{tick}}\xspace can occur if both $\ensuremath{\mathit{NS}}\xspace$ and $G$ enable it, and there is no event ready to be observed by $\ensuremath{\mathit{NS}}\xspace$. Upon the execution of \ensuremath{\mathit{tick}}\xspace, all the time counters in $m$ and $l$ are decreased by one. If $\delta_G(a,\ensuremath{\mathit{tick}}\xspace)!$, $\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\ensuremath{\mathit{tick}}\xspace)!$, $(\sigma,0)\notin m$ for all $\sigma\in \Sigma_a$ \begin{multline*} \ensuremath{\delta_{\mathit{NSP}}}\xspace((a,y,m,l),\ensuremath{\mathit{tick}}\xspace)=\\ (\delta_G(a,\ensuremath{\mathit{tick}}\xspace),\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\ensuremath{\mathit{tick}}\xspace),m-1,l-1). \end{multline*} \item The observation of an active event $\sigma \in \Sigma_a$ can occur when it finishes its journey through the observation channel (and so it is received by $\ensuremath{\mathit{NS}}\xspace$), and $\sigma_o$ is enabled by $\ensuremath{\mathit{NS}}\xspace$. When $\sigma_o$ occurs, $(\sigma,0)$ is removed from $m$. If $\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma_o)!$ and $(\sigma,0)\in m$: \begin{equation*} \ensuremath{\delta_{\mathit{NSP}}}\xspace((a,y,m,l),\sigma_{o})=(a,\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma_o),m\setminus[(\sigma,0)],l). 
\hspace*{0.55cm} \blacksquare \end{equation*} \end{enumerate} \end{defn} In the rest of the paper, the asynchronous composition $\ensuremath{\mathit{NS}}\xspace_{N_c}\|_{N_o}\,G$ of the plant $G$ and the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ (for that plant) is assumed to be the TDES $\ensuremath{\mathit{NSP}}\xspace$ represented by the automaton $(Z, \ensuremath{\Sigma_{\mathit{NSP}}}\xspace, \ensuremath{\delta_{\mathit{NSP}}}\xspace, z_{0}, Z_{m})$. Note that the networked supervised plant models the behavior of a plant controlled by a networked supervisor, and so for the proposed operator, we need to prove that the result does not enlarge the behavior of the plant. \begin{property}[\ensuremath{\mathit{NSP}}\xspace and Plant] \label{property:NSP&P} Given a plant $G$ and networked supervisor $\ensuremath{\mathit{NS}}\xspace$ (for that plant): $P_{\Sigma_G}(L(\mathit{\ensuremath{\mathit{NSP}}\xspace})) \subseteq L(G)$. \end{property} \begin{proof} See Appendix \ref{proof:NSP&P}. \hfill $\blacksquare$ \end{proof} A networked supervisor is controllable with respect to a plant if it never disables any uncontrollable event that can be executed by the plant. To have a formal representation of controllability in the NSC framework, Definition \ref{dfn:cont.TDES} is adapted to \emph{timed networked controllability}. \begin{defn}[Timed Networked Controllability] \label{dfn:NScont} Given a plant $G$ with uncontrollable events $\ensuremath{\Sigma_{\mathit{uc}}}\xspace$ and forcible events $\ensuremath{\Sigma_{\mathit{for}}}\xspace$, a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ is controllable w.r.t.\ $G$ if for all $w\in L(\ensuremath{\mathit{NSP}}\xspace)$ and $\sigma\in \ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{\ensuremath{\mathit{tick}}\xspace\}$, whenever $P_{\Sigma_G}(w)\sigma\in L(G)$: \begin{enumerate} \item \label{defNetCtrlStandard}$w\sigma\in L(\ensuremath{\mathit{NSP}}\xspace)$, or \item \label{defNetCtrlForcible}$\sigma=\ensuremath{\mathit{tick}}\xspace$ and $w\sigma_f\in L(\ensuremath{\mathit{NSP}}\xspace)$ for some $\sigma_f\in \ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace\cup\Sigma_o$, where $\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace=\ensuremath{\Sigma_{\mathit{for}}}\xspace\cup\Sigma_e$.\hfill $\blacksquare$ \end{enumerate} \end{defn} When there is no network, i.e., $\ensuremath{\Sigma_{\mathit{NS}}}\xspace=\Sigma_G$, timed networked controllability coincides with conventional controllability for TDES (Definition \ref{dfn:cont.TDES}). \begin{remark} Considering Definition \ref{dfn:operator}, \ensuremath{\mathit{tick}}\xspace does not occur if there is an event ready to be observed ($(\sigma,0)\in m$). In other words, observed events always preempt \ensuremath{\mathit{tick}}\xspace since they occur once they finish their journey in the observation channel. The enabling events are assumed to be forcible as well. This gives the networked supervisor the opportunity to preempt \ensuremath{\mathit{tick}}\xspace by enabling an event whenever it is necessary. In Section \ref{section:PV}, we discuss other possible cases. \end{remark} A networked supervisor $\ensuremath{\mathit{NS}}\xspace$ is called proper in the NSC framework if it is timed networked controllable, nonblocking, and TLF. Similar to controllability, the definition of maximal permissiveness (in the conventional setting) is adapted to \emph{timed networked maximal permissiveness} (for the NSC framework).
\begin{defn}[Timed Networked Maximal Permissiveness] A proper networked supervisor $\ensuremath{\mathit{NS}}\xspace$ is timed networked maximally permissive for a plant $G$ if, for any other proper networked supervisor $\ensuremath{\mathit{NS}}\xspace'$ in the same NSC framework (with event set $\ensuremath{\Sigma_{\mathit{NS}}}\xspace$): $P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))\subseteq P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. In other words, $\ensuremath{\mathit{NS}}\xspace$ preserves the largest admissible behavior of $G$. \hfill$\blacksquare$ \end{defn} Again, when there is no network, this notion coincides with conventional maximal permissiveness (Definition \ref{dfn:MaxPer}). \subsection{Problem Formulation} The \emph{Basic NSC Problem} is defined as follows. Given a plant model $G$ as a TDES, and an observation (control) channel with delay $N_o$ ($N_c$) and maximum capacity $\ensuremath{M_{\mathit{max}}}\xspace$ ($\ensuremath{L_{\mathit{max}}}\xspace$), provide a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ such that \begin{itemize} \item \ensuremath{\mathit{NSP}}\xspace is nonblocking, \item \ensuremath{\mathit{NSP}}\xspace is time-lock free, \item \ensuremath{\mathit{NS}}\xspace is timed networked controllable for $G$, and \item \ensuremath{\mathit{NS}}\xspace is timed networked maximally permissive. \end{itemize} \section{Networked Supervisory Control Synthesis} \label{section:synthesis} To achieve a proper and maximally permissive networked supervisor (in the NSC framework), the synthesis is applied on the ``networked plant'', as indicated in Figure \ref{fig:solution1}. The networked plant is a model for how events are executed in the plant according to the enabling events, and how the observations of the executed events may occur in a networked supervisory control setting. Based on the networked plant, a synthesis algorithm is proposed to obtain a networked supervisor, which is a solution to the basic NSC problem.
Example \ref{exp:BusPed} is used to illustrate each step of the approach. \begin{figure}[h] \centering \includegraphics[width=60mm]{Solution1.pdf} \caption{Networked plant.} \label{fig:solution1} \end{figure} \begin{example}[Endangered Pedestrian] \label{exp:BusPed} Let us consider the endangered pedestrian example from~\cite{Wonham:19}. The plant $G$ is depicted in Figure \ref{fig:BusPed}. The bus and the pedestrian each perform a single transition, indicated by $p$ for passing and $j$ for jumping. The requirement considered in~\cite{Wonham:19} is that the pedestrian should jump before the bus passes. However, since we do not consider requirements here (yet), we adapt the plant from~\cite{Wonham:19} such that if the bus passes before the pedestrian jumps, then $G$ goes to a blocking state. The control channel is FIFO, the observation channel is non-FIFO, $N_c=N_o=1$, $\ensuremath{L_{\mathit{max}}}\xspace=1$, and $\ensuremath{M_{\mathit{max}}}\xspace=2$. We aim to synthesize a proper and maximally permissive networked supervisor for $G$.
\begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (D) [right of=C] {$a_3$}; \node[state] (E) [right of=D] {$a_4$}; \node[state] (F) [below of=B] {$a_5$}; \node[state] (G) [right of=F] {$a_6$}; \node[state,accepting] (H) [right of=G] {$a_7$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (C) edge [dashed,above] node [align=center] {$ p $} (D) (D) edge [above] node [align=center] {$ j $} (E) (B) edge [right] node [align=center] {$j$} (F) (C) edge [right] node [align=center] {$j$} (G) (F) edge [above,dashed] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (G) (G) edge [dashed,above] node [align=center] {$ p $} (H) (E) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (E) (H) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (H) (D) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D); \end{tikzpicture} \caption{Endangered pedestrian from Example \ref{exp:BusPed}.} \label{fig:BusPed} \end{figure} \end{example} \subsection{Networked Plant} The behavior of the plant communicating through the control and observation channels is captured by the \emph{networked plant}. As is clear from Figure~\ref{fig:solution1}, if we do not consider enabling and observation of events, what is executed in the networked plant is always a part of the plant behavior. Let us denote by $\ensuremath{\mathit{NP}}\xspace$ the networked plant automaton; then $P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))\subseteq L(G)$. Moreover, note that a networked supervisor is synthesized for a plant on the basis of the networked plant.
The networked plant should represent all the possible behavior of the plant in the networked supervisory control setting, and it is only the networked supervisor that may prevent the occurrence of some plant events by disabling the relevant enabling event. This means that $\ensuremath{\mathit{NP}}\xspace$ should be such that $L(G)\subseteq P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. The latter property relies on the following assumptions. \textbf{Assumption 1:} The plant enables enough \ensuremath{\mathit{ticks}}\xspace in the beginning; there are at least $N_c$ \ensuremath{\mathit{ticks}}\xspace (there can be uncontrollable events occurring between \ensuremath{\mathit{ticks}}\xspace) enabled before the first controllable event. \textbf{Assumption 2:} The control channel provides enough capacity for all enabling commands being sent to the plant. Imagine that $\ensuremath{\mathit{tick}}\xspace\,\sigma\,\ensuremath{\mathit{tick}}\xspace^*\in L(G)$, and $\ensuremath{L_{\mathit{max}}}\xspace=0$. Then, $\sigma_e$ may occur in $\ensuremath{\mathit{NP}}\xspace$, but the plant will never execute $\sigma$ as it does not receive the relevant enabling command. To avoid this situation, the size of the control channel should be such that it always has the capacity for all enabling events. An enabling event will be removed from the control channel after $N_c$ \ensuremath{\mathit{ticks}}\xspace. So, considering all substrings $w$ that can appear in the plant (after an initial part $w_0$) and that are no longer (in the time sense) than $N_c$ \ensuremath{\mathit{ticks}}\xspace, the control channel capacity should be at least equal to the number of controllable events occurring in any such $w$; $\ensuremath{L_{\mathit{max}}}\xspace\geq \max_{w\in W}|P_{\Sigma_c}(w)|$ where $W=\{w\in\Sigma^*_G\mid \exists w_0\in\Sigma^*_G,\ w_0w\in L(G), |P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w)| \leq N_c-1\}$. To obtain the networked plant, we present the function $\Pi$ in Definition \ref{dfn:NP}.
In order to determine enabling commands, we look $N_c$ \ensuremath{\mathit{ticks}}\xspace ahead considering only the controllable active events enabled in $G'=P_{\Sigma_G\setminus\ensuremath{\Sigma_{\mathit{uc}}}\xspace}(G)$. We use a list $l\in L$ to store the controllable events that have been commanded and a multiset $m\in M$ to store the events that were executed. \begin{defn}[Networked Plant Operator] \label{dfn:NP} For a given plant $G$, $\Pi$ gives the networked plant as: \begin{equation*} \Pi(G,N_c,N_o,\ensuremath{L_{\mathit{max}}}\xspace,\ensuremath{M_{\mathit{max}}}\xspace)=(X,\ensuremath{\Sigma_{\mathit{NSP}}}\xspace,\ensuremath{\delta_{\mathit{NP}}}\xspace,x_{0},X_{m}). \end{equation*} Let $G'=P_{\Sigma_G\setminus\ensuremath{\Sigma_{\mathit{uc}}}\xspace}(G)=(A', \Sigma_G, \delta'_G, a'_{0}, A'_{m})$, and \begin{equation*} \begin{aligned} X &= A\times A'\times M \times L,\qquad x_{0} =(a_{0},\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c}),[],\varepsilon),\\ X_{m}&=A_{m}\times A'\times M \times L. \end{aligned} \end{equation*} For $a \in A$, $a'\in A'$, $m\in M$ and $l\in L$, the transition function $\ensuremath{\delta_{\mathit{NP}}}\xspace:X\times\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\rightarrow X$ is defined as follows: \begin{enumerate} \item If $\delta'_G(a',\sigma)!$, $\sigma\in\Sigma_c$: \begin{equation*} \ensuremath{\delta_{\mathit{NP}}}\xspace((a,a',m,l),\sigma_{e})=(a,\delta'_G(a',\sigma),m,\mathit{app}(l,(\sigma,N_c))). \end{equation*} \item If $\delta_G(a,\sigma)!$, $\ensuremath{\mathit{head}}\xspace(l)=(\sigma,0),\sigma \in \Sigma_c$: \begin{equation*} \ensuremath{\delta_{\mathit{NP}}}\xspace((a,a',m,l),\sigma)=(\delta_G(a,\sigma),a',m\uplus[(\sigma,N_o)],\mathit{tail}(l)). \end{equation*} \item If $\delta_G(a,\sigma)!,\sigma\in\ensuremath{\Sigma_{\mathit{uc}}}\xspace$: \begin{equation*} \ensuremath{\delta_{\mathit{NP}}}\xspace((a,a',m,l),\sigma)=(\delta_G(a,\sigma),a',m\uplus[(\sigma,N_o)],l).
\end{equation*} \item If $\delta_G(a,\ensuremath{\mathit{tick}}\xspace)!$, $\neg\delta'_G(a',\sigma)!$ for all $\sigma\in \Sigma_c$, and $(\sigma',0)\notin m$ for all $\sigma'\in \Sigma_a$, then \begin{multline*} \ensuremath{\delta_{\mathit{NP}}}\xspace((a,a',m,l),\ensuremath{\mathit{tick}}\xspace)=\\ \begin{cases} (\delta_G(a,\ensuremath{\mathit{tick}}\xspace),\delta'_G(a',\ensuremath{\mathit{tick}}\xspace),m-1,l-1) &\text{if $\delta'_G(a',\ensuremath{\mathit{tick}}\xspace)!$,}\\ (\delta_G(a,\ensuremath{\mathit{tick}}\xspace),a',m-1,l-1) &\text{otherwise.} \end{cases} \end{multline*} \item If $(\sigma,0)\in m$, then \begin{equation*} \ensuremath{\delta_{\mathit{NP}}}\xspace((a,a',m,l),\sigma_{o})=(a,a',m\setminus[(\sigma,0)],l). \hspace*{1.65cm} \blacksquare \end{equation*} \end{enumerate} \end{defn} Note that due to Assumption 1, $\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c})$ is always defined. In the rest of the paper, the networked plant of the plant $G$ is assumed to be the TDES $\ensuremath{\mathit{NP}}\xspace$ represented by the automaton $(X,\ensuremath{\Sigma_{\mathit{NSP}}}\xspace,\ensuremath{\delta_{\mathit{NP}}}\xspace,x_{0},X_{m})$. \begin{property}[\ensuremath{\mathit{NP}}\xspace and Plant] \label{property:NPLE} For any plant $G$: \begin{enumerate} \item\label{NP&P} $P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))\subseteq L(G)$, and \item\label{P&NP} $L(G)\subseteq P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$ whenever Assumptions 1 and 2 hold. \end{enumerate} \end{property} \begin{proof} See Appendix \ref{proof:NPLE}.
\hfill $\blacksquare$ \end{proof} \begin{example} \label{exp:BusPedOP} For the endangered pedestrian from Example \ref{exp:BusPed}, $G'$ and $\ensuremath{\mathit{NP}}\xspace$ are given in Figure \ref{fig:P'4BusPed} and Figure \ref{fig:NP4BusPed}, respectively.
\begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (A) {$a'_0$}; \node[state] (B) [right of=A] {$a'_1$}; \node[state] (C) [right of=B] {$a'_2$}; \node[state] (F) [below of=B] {$a'_3$}; \node[state,accepting] (G) [right of=F] {$a'_4$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (B) edge [right] node [align=center] {$j$} (F) (C) edge [right] node [align=center] {$j$} (G) (F) edge [above,dashed] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (G) (G) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) (C) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C); \end{tikzpicture} \caption{$G'$ for the endangered pedestrian from Example \ref{exp:BusPed}.} \label{fig:P'4BusPed} \end{figure} \begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2.0cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (x0) {$x_0$}; \node[state] (x1) [right of=x0] {$x_1$}; \node[state] (x2) [right of=x1] {$x_2$}; \node[state] (x3) [right of=x2] {$x_3$}; \node[state] (x4) [right of=x3] {$x_4$}; \node[state] (x5) [right of=x4] {$x_5$}; \node[state] (x6) [right of=x5] {$x_6$}; \node[state] (x7) [right of=x6] {$x_7$}; \node[state] (x8) [below of=x0] {$x_8$}; \node[state] (x9) [right of=x8] {$x_9$}; \node[state] (x10) [right of=x9] {$x_{10}$}; \node[state] (x11) [right of=x10] {$x_{11}$}; \node[state] (x12) [right of=x11] {$x_{12}$}; \node[state] (x13) [right of=x12] {$x_{13}$}; \node[state] (x14) [right of=x13] {$x_{14}$}; \node[state] (x15) [below of=x8] {$x_{15}$}; \node[state] (x16) [right of=x15] {$x_{16}$}; \node[state] (x17) [right of=x16] {$x_{17}$}; \node[state] (x18) [right of=x17] {$x_{18}$}; 
\node[state] (x19) [right of=x18] {$x_{19}$}; \node[state] (x20) [right of=x19] {$x_{20}$}; \node[state] (x21) [right of=x20] {$x_{21}$}; \node[state] (x22) [right of=x21] {$x_{22}$}; \node[state] (x23) [below of=x15] {$x_{23}$}; \node[state] (x24) [right of=x23] {$x_{24}$}; \node[state] (x25) [right of=x24] {$x_{25}$}; \node[state] (x26) [right of=x25] {$x_{26}$}; \node[state] (x27) [right of=x26] {$x_{27}$}; \node[state] (x28) [right of=x27] {$x_{28}$}; \node[state] (x29) [right of=x28] {$x_{29}$}; \node[state] (x30) [right of=x29] {$x_{30}$}; \node[state] (x31) [below of=x23] {$x_{31}$}; \node[state,accepting] (x32) [right of=x31] {$x_{32}$}; \node[state,accepting] (x33) [right of=x32] {$x_{33}$}; \node[state,accepting] (x34) [right of=x33] {$x_{34}$}; \node[state,accepting] (x35) [right of=x34] {$x_{35}$}; \node[state] (x36) [below of=x30] {$x_{36}$}; \node[state] (x37) [below of=x32] {$x_{37}$}; \node[state,accepting] (x38) [right of=x37] {$x_{38}$}; \node[state,accepting] (x39) [right of=x38] {$x_{39}$}; \node[state,accepting] (x40) [right of=x39] {$x_{40}$}; \path[->] (x0) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x1) (x0) edge [right] node [align=center] {$j_e$} (x8) (x1) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x2) (x1) edge [right] node [align=center] {$j_e $} (x9) (x2) edge [above,dashed] node [align=center] {$ p $} (x3) (x2) edge [right] node [align=center] {$j_e $} (x10) (x3) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x4) (x3) edge [right] node [align=center] {$j_e $} (x11) (x4) edge [above,dashed] node [align=center] {$p_o$} (x5) (x4) edge [right] node [align=center] {$j_e $} (x12) (x5) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x6) (x5) edge [right] node [align=center] {$j_e $} (x13) (x6) edge [above] node [align=center] {$j_e$} (x7) (x7) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x14) (x8) 
edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x15) (x9) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (x17) (x10) edge [above,dashed] node [align=center] {$p$} (x11) (x11) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x20) (x12) edge [above] node [align=center] {$p_o$} (x13) (x13) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x14) (x14) edge [right,dashed] node [align=center] {$j$} (x21) (x15) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x16) (x15) edge [right,dashed] node [align=center] {$j$} (x23) (x16) edge [right,dashed] node [align=center] {$p$} (x24) (x17) edge [above,dashed] node [align=center] {$p$} (x18) (x17) edge [right,dashed] node [align=center] {$j$} (x25) (x18) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x19) (x18) edge [right,dashed] node [align=center] {$j$} (x26) (x19) edge [right,dashed] node [align=center] {$p_o$} (x27) (x20) edge [above,dashed] node [align=center] {$p_o$} (x21) (x20) edge [right,dashed] node [align=center] {$j$} (x28) (x21) edge [right] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x29) (x21) edge [above,dashed] node [align=center] {$j$} (x22) (x22) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x30) (x23) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x31) (x24) edge [bend left=2,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x19) (x25) edge [right,dashed] node [align=center] {$p$} (x33) (x26) edge [bend left=5,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x20) (x27) edge [bend right=45,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x29) (x28) edge [bend left=5,dashed] node [align=center] {$p_o$} (x22) (x29) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x29) (x30) edge 
[right,dashed] node [align=center] {$j_o$} (x36) (x31) edge [above,dashed] node [align=center] {$p$} (x32) (x31) edge [right,dashed] node [align=center] {$j_o$} (x37) (x32) edge [right,dashed] node [align=center] {$j_o$} (x38) (x33) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x34) (x34) edge [above,dashed] node [align=center] {$p_o$} (x35) (x34) edge [right,dashed] node [align=center] {$j_o$} (x39) (x35) edge [right,dashed] node [align=center] {$j_o$} (x40) (x36) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x36) (x37) edge [above,dashed] node [align=center] {$p$} (x38) (x38) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x39) (x39) edge [above,dashed] node [align=center] {$p_o$} (x40) (x40) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x40); \end{tikzpicture} \caption{Networked plant for the endangered pedestrian from Example \ref{exp:BusPed} ($N_c=1,N_o=1$).} \label{fig:NP4BusPed} \end{figure} \end{example} \subsection{Synthesis} \label{subsection:S} As is clear from Figure~\ref{fig:solution1}, enabling events are the only controllable events that can be disabled by the networked supervisor. All other events in the networked plant (active events and observed events) are uncontrollable. Moreover, controllability of \ensuremath{\mathit{tick}}\xspace depends on the forcible events of the plant as well as the enabling events (which we assume to be forcible). To clarify, uncontrollable events are indicated by dashed lines in Figure \ref{fig:NP4BusPed}. Note also that the observed events are observable to the networked supervisor. Also, events from $\Sigma_e$ are observable, as the networked supervisor knows about the commands that it sends to the plant. However, the events from $\Sigma_a$ are now unobservable to the networked supervisor.
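The construction of Definition \ref{dfn:NP} is mechanical, which also makes the event classes just discussed concrete. Below is a minimal executable Python sketch (our own illustration, not part of the paper) of the five transition rules; the state tuple, the dictionary encoding of $\delta_G$ and $\delta'_G$, and the pair encodings \texttt{('e', s)} for $\sigma_e$ and \texttt{('o', s)} for $\sigma_o$ are illustrative assumptions.

```python
# Sketch (our own, not from the paper) of the five transition rules of
# the Networked Plant Operator.  A state is (a, a', m, l), with m and l
# tuples of (event, countdown) pairs; dG and dGp map (state, event) to
# a successor state for G and G', respectively.
def np_steps(state, dG, dGp, sig_c, sig_uc, n_c, n_o, tick="tick"):
    a, ap, m, l = state
    succ = {}
    # 1. send an enabling command for a controllable event enabled in G'
    for s in sig_c:
        if (ap, s) in dGp:
            succ[("e", s)] = (a, dGp[(ap, s)], m, l + ((s, n_c),))
    # 2. execute a controllable event whose command arrived (head of l)
    if l and l[0][1] == 0 and l[0][0] in sig_c and (a, l[0][0]) in dG:
        s = l[0][0]
        succ[s] = (dG[(a, s)], ap, m + ((s, n_o),), l[1:])
    # 3. execute any enabled uncontrollable event
    for s in sig_uc:
        if (a, s) in dG:
            succ[s] = (dG[(a, s)], ap, m + ((s, n_o),), l)
    # 4. tick: the plant admits it, no controllable event is enabled in
    #    G', and no observation is due (no zero countdown in m); the
    #    definition decrements all countdowns in m and l by one
    if ((a, tick) in dG and not any((ap, s) in dGp for s in sig_c)
            and all(c != 0 for _, c in m)):
        ap2 = dGp.get((ap, tick), ap)
        dec = lambda xs: tuple((s, c - 1) for s, c in xs)
        succ[tick] = (dG[(a, tick)], ap2, dec(m), dec(l))
    # 5. deliver an observation whose delay elapsed
    for s, c in m:
        if c == 0:
            rest = list(m)
            rest.remove((s, 0))
            succ[("o", s)] = (a, ap, tuple(rest), l)
    return succ
```

Starting from $x_0=(a_{0},\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c}),[],\varepsilon)$ and applying \texttt{np\_steps} repeatedly unfolds the reachable part of \ensuremath{\mathit{NP}}\xspace, with rule 4 blocked exactly while an enabling command or an observation is pending.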
To account for these issues in this step of the approach, the sets of unobservable events $\ensuremath{\hat{\Sigma}_{\mathit{uo}}}\xspace$, observable events $\hat{\Sigma}_o$, uncontrollable active events $\ensuremath{\hat{\Sigma}_{\mathit{uc}}}\xspace$, and controllable active events $\hat{\Sigma}_c$ of the networked plant are given by $\ensuremath{\hat{\Sigma}_{\mathit{uo}}}\xspace=\Sigma_a$, $\hat{\Sigma}_o=\Sigma_e\cup\Sigma_o\cup\{\ensuremath{\mathit{tick}}\xspace\}$, $\ensuremath{\hat{\Sigma}_{\mathit{uc}}}\xspace=\Sigma_a\cup \Sigma_o$, $\hat{\Sigma}_c=\Sigma_e$. Also, as mentioned before, $\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace=\ensuremath{\Sigma_{\mathit{for}}}\xspace\cup\Sigma_e$. The event \ensuremath{\mathit{tick}}\xspace is always observable to the networked supervisor. Moreover, it is uncontrollable unless there exists an event from $\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace$ enabled in parallel with \ensuremath{\mathit{tick}}\xspace. With respect to these new event sets, the synthesis algorithm takes into account conventional TDES controllability (in Definition \ref{dfn:cont.TDES}) and is inspired by the weak observability condition introduced in~\cite{Takai:06,Cai:16}. Algorithm \ref{algo} presents the synthesis procedure, in which we use the following additional concepts and abbreviations: \begin{itemize} \item $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace)=\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)\cup\ensuremath{\mathit{TLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ where $\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ gives the set of blocking states of $\ensuremath{\mathit{NS}}\xspace$, and $\ensuremath{\mathit{TLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ gives the set of time-lock states of $\ensuremath{\mathit{NS}}\xspace$.
\item Since events from $\Sigma_a$ are unobservable in the networked plant, care must be taken that the same control command is applied at all states reachable through the same observations. To take this issue into account, the following function is used in the synthesis algorithm; $\ensuremath{\mathit{OBS}}\xspace(x)=\{ x'\in X \mid \exists w,w'\in \Sigma^*_{NSP}, \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w) = x \land \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w') = x' \land P_{\hat{\Sigma}_o}(w)=P_{\hat{\Sigma}_o}(w') \}$ gives the set of states reachable by strings observationally equivalent to some string reaching $x$. The function $\ensuremath{\mathit{OBS}}\xspace$ can be applied to a set of states $X'\subseteq X$ as well, such that $\ensuremath{\mathit{OBS}}\xspace(X')=\bigcup_{x\in X'} \ensuremath{\mathit{OBS}}\xspace(x)$. \item $F(y)=\{\sigma\in\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace \,|\, \ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma)!\}$ is the set of forcible events enabled at state $y$. \item Besides blocking and time-lock states, we should also take care of states from which a state from $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ can be reached in an uncontrollable way, taking preemption of \ensuremath{\mathit{tick}}\xspace events into account.
$\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$ gives a set of states, called \emph{bad states}, such that \begin{enumerate} \item $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace) \subseteq \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$; \item if $\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma) \in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$ for some $y \in Y$ and $\sigma \in \ensuremath{\hat{\Sigma}_{\mathit{uc}}}\xspace$, then $y \in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$; \item if $\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\ensuremath{\mathit{tick}}\xspace) \in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$ for some $y \in Y$ such that for all $y'\in \ensuremath{\mathit{OBS}}\xspace(y)$, $F(y)\cap F(y')=\varnothing$, then $y \in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace))$. This ensures that the supervisor behaves the same towards all observationally equivalent transitions. \end{enumerate} \item $\mathit{\ensuremath{\mathit{BPre}}\xspace}(\ensuremath{\mathit{NS}}\xspace) = \{ y \in Y \mid F(y)=\varnothing$ $\land$ $\neg\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\ensuremath{\mathit{tick}}\xspace)!$ $\land$ $\ensuremath{\delta_{\mathit{NP}}}\xspace(y,\ensuremath{\mathit{tick}}\xspace)! \}$ contains those states (still in \ensuremath{\mathit{NS}}\xspace) at which no forcible event and no $\ensuremath{\mathit{tick}}\xspace$ is enabled although a $\ensuremath{\mathit{tick}}\xspace$ was enabled in the networked plant. \item $\mathit{Reach}(\ensuremath{\mathit{NS}}\xspace)$ restricts an automaton to those states that are reachable from the initial state.
\end{itemize} \renewcommand{\algorithmicrequire}{\textbf{Input: }} \renewcommand{\algorithmicensure}{\textbf{Output: }} \begin{algorithm} \caption{Networked supervisory control synthesis\\ \algorithmicrequire $\ensuremath{\mathit{NP}}\xspace=(X, \ensuremath{\Sigma_{\mathit{NSP}}}\xspace, \ensuremath{\delta_{\mathit{NP}}}\xspace, x_{0}, X_{m})$, $\hat{\Sigma}_{uo}$, $\hat{\Sigma}_{uc}$, $\hat{\Sigma}_c$, $\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace$\\ \algorithmicensure{$\ensuremath{\mathit{NS}}\xspace=(Y, \ensuremath{\Sigma_{\mathit{NS}}}\xspace, \ensuremath{\delta_{\mathit{NS}}}\xspace, y_{0}, Y_{m})$}}\label{algo} \begin{algorithmic}[1] \State $i\gets 0$ \State $\ensuremath{\mathit{ns}}\xspace(0)\gets \ensuremath{\mathit{NP}}\xspace$ \State $\ensuremath{\mathit{bs}}\xspace(0)\gets \ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))$ \While{$y_0\notin \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i)) \wedge \ensuremath{\mathit{bs}}\xspace(i) \neq \varnothing$}\label{line:while} \For{$y \in Y\setminus\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$ and $\sigma \in \hat{\Sigma}_{c}\cup\{\ensuremath{\mathit{tick}}\xspace\}$} \label{line:ctrl} \If{$\ensuremath{\delta_{\mathit{NS}}}\xspace(y,\sigma) \in \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i)))$} \label{line:obs.eq} \For{$y'\in \ensuremath{\mathit{OBS}}\xspace(y)$} \State $\ensuremath{\delta_{\mathit{NS}}}\xspace(y',\sigma) \gets \textbf{undefined}$ \label{line:disabling1} \EndFor \EndIf \EndFor \State $Y \gets
Y\setminus\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$ \label{line:removeUncon} \State $i\gets i+1$ \State $\ensuremath{\mathit{ns}}\xspace(i) \gets \mathit{Reach}(\ensuremath{\mathit{ns}}\xspace(i-1))$ \label{line:reachable} \State $\ensuremath{\mathit{bs}}\xspace(i)\gets \ensuremath{\mathit{BPre}}\xspace(\ensuremath{\mathit{ns}}\xspace(i)) \cup \ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))$\label{line:BPre} \EndWhile \If {$y_0\in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$} \State{no result} \EndIf \State $\ensuremath{\mathit{NS}}\xspace \gets P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\setminus\Sigma}(\ensuremath{\mathit{ns}}\xspace(i))$\label{line:project} \end{algorithmic} \end{algorithm} Starting from $\ensuremath{\mathit{ns}}\xspace(0)=\ensuremath{\mathit{NP}}\xspace$, Algorithm \ref{algo} repeatedly disables transitions at line \ref{line:disabling1} and keeps the reachable part at line \ref{line:reachable}. For the proposed algorithm, the following property and theorems hold. \begin{property}[Algorithm Termination] \label{property:termination} The synthesis algorithm presented in Algorithm \ref{algo} terminates. \end{property} \begin{proof} See Appendix \ref{proof:termination}. \hfill $\blacksquare$ \end{proof} \begin{theorem}[Nonblocking \ensuremath{\mathit{NSP}}\xspace] \label{theorem:nonblockingness} Given a plant $G$ and the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ computed by Algorithm \ref{algo}: \ensuremath{\mathit{NSP}}\xspace is nonblocking. \end{theorem} \begin{proof} See Appendix \ref{proof:NBness}. \hfill $\blacksquare$ \end{proof} \begin{theorem}[TLF \ensuremath{\mathit{NSP}}\xspace] \label{theorem:TLF} Given a plant $G$ and the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ computed by Algorithm \ref{algo}: \ensuremath{\mathit{NSP}}\xspace is TLF. \end{theorem} \begin{proof} See Appendix \ref{proof:TLF}. \hfill $\blacksquare$ \end{proof} \begin{theorem}[Controllable \ensuremath{\mathit{NS}}\xspace] \label{theorem:controllability} Given a plant $G$ and the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ computed by Algorithm \ref{algo}: $\ensuremath{\mathit{NS}}\xspace$ is timed networked controllable w.r.t.\ $G$. \end{theorem} \begin{proof} See Appendix \ref{proof:controllability}.
\hfill $\blacksquare$ \end{proof} \begin{theorem}[Timed Networked Maximally Permissive \ensuremath{\mathit{NS}}\xspace] \label{theorem:MPness} For a plant $G$, the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ computed by Algorithm \ref{algo} is timed networked maximally permissive. \end{theorem} \begin{proof} See Appendix \ref{proof:MPness}. \hfill $\blacksquare$ \end{proof} \subsection{Possible Variants} \label{section:PV} The proposed synthesis approach can be adjusted for the following situations.
\subsubsection{Nonblockingness or time-lock freeness} Algorithm \ref{algo} can easily be adapted to provide only nonblockingness or only time-lock freeness by removing $\ensuremath{\mathit{TLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ or $\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ from $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace)$, respectively. \subsubsection{Unobservable enabling events} We could have assumed that some events from $\Sigma_e$ are unobservable. In this case, $\Sigma_a\subseteq\hat{\Sigma}_{uo}\subseteq\Sigma_a\cup\Sigma_e$, and so more states become observationally equivalent. Hence, the resulting supervisor could be more restrictive, since a control command that needs to be disabled at one state must be disabled at all observationally equivalent states. Similarly, if the observation channel does not provide enough capacity, more states become observationally equivalent, resulting in a more conservative solution. To avoid observation losses, the observation channel needs sufficient capacity for all observations of events executed in the plant: $\ensuremath{M_{\mathit{max}}}\xspace\geq \max_{w\in W}\{|P_{\Sigma_a}(w)|\}$, where $W=\{w\in\Sigma^*_G \mid \exists w_0w\in L(G), |P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w)|\leq N_o\}$, as all events are observed after $N_o$ \ensuremath{\mathit{ticks}}\xspace. \subsubsection{Non-forcible enabling events} We could have assumed that some events from $\Sigma_e$ are not forcible. In this case, $\ensuremath{\Sigma_{\mathit{for}}}\xspace\subseteq\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace\subseteq\ensuremath{\Sigma_{\mathit{for}}}\xspace\cup\Sigma_e$.
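Returning briefly to the capacity condition of the unobservable-events variant above: it can be checked directly on an enumerated set of plant words. A minimal sketch, assuming words are represented as tuples of event labels with the \ensuremath{\mathit{tick}}\xspace event written as \texttt{"tick"}; the function name and representation are hypothetical.

```python
# Sketch of the observation-channel capacity condition: M_max must be
# at least the largest number of observable events in any plant word
# spanning at most N_o ticks. Words are tuples of event labels.

def required_capacity(words, sigma_a, n_o, tick="tick"):
    """max |P_{Sigma_a}(w)| over words w with at most n_o ticks."""
    eligible = (w for w in words if w.count(tick) <= n_o)
    return max((sum(1 for e in w if e in sigma_a) for w in eligible),
               default=0)

words = [("a", "tick", "b"), ("a", "b", "tick", "c", "tick")]
print(required_capacity(words, {"a", "b", "c"}, n_o=1))  # -> 2
```

Here only the first word qualifies (one tick), and it contains two observable events, so a channel capacity of at least 2 is needed.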
Providing fewer forcible events makes the synthesis result more conservative: if the non-preemptable \ensuremath{\mathit{tick}}\xspace leads to a bad state, the current state where \ensuremath{\mathit{tick}}\xspace is enabled must be avoided as well (illustrated by Example \ref{exp:NS4BusPed}). \begin{example} \label{exp:NS4BusPed} Consider the endangered pedestrian from Example \ref{exp:BusPedOP}. Under the assumption that events from $\Sigma_e$ are forcible, the networked supervisor is given in Figure \ref{fig:NS4BusPed}. Without this assumption, there exists no networked supervisor. \begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (y0) {$y_0$}; \node[state] (y1) [right of=y0] {$y_1$}; \node[state] (y2) [below of=y0] {$y_2$}; \node[state] (y3) [right of=y2] {$y_3$}; \node[state] (y4) [right of=y3] {$y_4$}; \node[state,accepting] (y5) [right of=y4] {$y_5$}; \node[state,accepting] (y6) [right of=y5] {$y_6$}; \node[state] (y7) [below of=y4] {$y_7$}; \node[state,accepting] (y8) [right of=y7] {$y_8$}; \node[state,accepting] (y9) [right of=y8] {$y_9$}; \path[->] (y0) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y1) (y0) edge [right] node [align=center] {$j_e$} (y2) (y1) edge [right] node [align=center] {$j_e $} (y3) (y2) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (y3) (y3) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (y4) (y4) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace $} (y5) (y5) edge [above] node [align=center] {$p_o$} (y6) (y4) edge [right] node [align=center] {$j_o$} (y7) (y5) edge [right] node [align=center] {$j_o$} (y8) (y7) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y8) (y8) edge [above] node [align=center] {$p_o$} (y9) (y6) edge [right] node [align=center] {$j_o$} (y9)
(y9) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y9) ; \end{tikzpicture} \caption{Networked supervisor for the endangered pedestrian from Example \ref{exp:BusPed} ($N_c=1,N_o=1$).} \label{fig:NS4BusPed} \end{figure} \end{example} \subsubsection{Non-FIFO control channel} \label{remark:nonfifocontrolchannel} Our proposed framework can easily be extended to the case where the control channel is non-FIFO by applying the following changes. Similar to the observation channel, the control channel is represented by $L=\{l\mid l:\Sigma\times[0,N_c]\rightarrow\mathbb{N}\}$, where $l$ is a multiset. So, for each $l\in L$ and time counter $n$, we define the operators $l\uplus [(\sigma,n)]$ and $l\setminus[(\sigma,0)]$ instead of $\mathit{app}(l,(\sigma,n))$ and $\mathit{tail}(l)$, respectively. This affects item 1) of both Definition \ref{dfn:operator} and Definition \ref{dfn:NP} such that $(\sigma,N_c)$ is simply added to $l$ without taking into account the order of elements. Also, in item 2) of both definitions, $\mathit{head}(l)$ is replaced by $\exists (\sigma,0)\in l$. This may change the result considerably, since the enabling events can now be received by $G$ in any order. As Example \ref{ex:non-FIFO-CCH} illustrates, this may increase the chance of reaching blocking or time-lock states and result in very conservative solutions for many applications. \begin{example} \label{ex:non-FIFO-CCH} Given a plant $G$ as depicted in Figure \ref{fig:non-FIFO-CCH-G}, $N_c=N_o=1$, and $\ensuremath{L_{\mathit{max}}}\xspace=\ensuremath{M_{\mathit{max}}}\xspace=1$, $\ensuremath{\mathit{NP}}\xspace$ is obtained as in Figure \ref{fig:non-FIFO-CCH-NP}. The networked supervisor computed by Algorithm \ref{algo} only disables the event $b_e$ at $x_0$. Now, assume that the control channel is non-FIFO as well.
Then, at $x_3=(\delta_G(a_0,\ensuremath{\mathit{tick}}\xspace),\delta'(a'_0,\ensuremath{\mathit{tick}}\xspace\,a\,b\,\ensuremath{\mathit{tick}}\xspace),[],(a,0)(b,0))$, $b$ can be executed as well as $a$. By executing $b$ at $x_3$, $\ensuremath{\mathit{NP}}\xspace$ goes to a blocking state. In this case, Algorithm \ref{algo} returns no result since $x_0$ becomes a blocking state and needs to be removed. \begin{figure}[htb] \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state,accepting] (D) [right of=C] {$a_3$}; \node[state] (F) [below of=B] {$a_4$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$a$} (C) (C) edge [above] node [align=center] {$b$} (D) (D) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D) (B) edge [right] node [align=center] {$b$} (F) (F) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F); \end{tikzpicture} \caption{Plant from Example \ref{ex:non-FIFO-CCH}.} \label{fig:non-FIFO-CCH-G} \end{figure} \begin{figure}[htb] \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (x0) {$x_0$}; \node[state] (x1) [right of=x0] {$x_1$}; \node[state] (x2) [right of=x1] {$x_2$}; \node[state] (x3) [right of=x2] {$x_3$}; \node[state] (x4) [right of=x3] {$x_4$}; \node[state,accepting] (x5) [right of=x4] {$x_5$}; \node[state,accepting] (x6) [right of=x5] {$x_6$}; \node[state,accepting] (x7) [below of=x6] {$x_7$}; \node[state,accepting] (x8) [below of=x7] {$x_8$}; \node[state,accepting] (x9) [below of=x5] {$x_9$}; \node[state] (x10) [below of=A] {$x_{10}$}; \node[state] (x11) [right of=x10] {$x_{11}$}; \node[state] (x12) 
[right of=x11] {$x_{12}$}; \node[state] (x13) [right of=x12] {$x_{13}$}; \node[state] (x14) [right of=x13] {$x_{14}$}; \path[->] (x0) edge [above] node [align=center] {$a_e$} (x1) (x1) edge [above] node [align=center] {$b_e$} (x2) (x2) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x3) (x3) edge [above] node [align=center] {$a$} (x4) (x4) edge [above] node [align=center] {$b$} (x5) (x5) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x6) (x6) edge [left] node [align=center] {$b_o$} (x7) (x6) edge [above] node [align=center] {$a_o$} (x9) (x7) edge [left] node [align=center] {$a_o$} (x8) (x8) edge [below] node [align=center] {$b_o$} (x9) (x9) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x9) (x0) edge [right] node [align=center] {$b_e$} (x10) (x10) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x11) (x11) edge [above] node [align=center] {$b$} (x12) (x12) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x13) (x13) edge [above,dashed] node [align=center] {$b_o$} (x14) (x14) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x14); \end{tikzpicture} \caption{Networked plant from Example \ref{ex:non-FIFO-CCH}.} \label{fig:non-FIFO-CCH-NP} \end{figure} \end{example} \section{Requirement Automata} \label{section:requirements} To generalize the method to a wider group of applications, we solve the basic NSC problem for a given set of control requirements. It is assumed that the desired behavior of $G$, denoted by the TDES $R$, is represented by the automaton $(Q,\Sigma_R,\delta_R,q_0,Q_M)$ where $\Sigma_R\subseteq \Sigma_G$. Since most control requirements are defined to provide safety of a plant, we call a supervised plant \emph{safe} if it satisfies the control requirements. 
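Concretely, safety (Definition \ref{dfn:safety} below) compares natural projections of the involved languages onto the shared alphabet. On finite sets of words, both the projection and the inclusion test can be sketched as follows; this is a simplification ($L(\ensuremath{\mathit{NSP}}\xspace)$ and $L(R)$ are in general prefix-closed languages), and all names are hypothetical.

```python
# Sketch of the safety check: compare the natural projections of
# L(NSP) and L(R) onto the shared alphabet. Languages are simplified
# to finite sets of event tuples.

def project(word, sigma):
    """Natural projection P_sigma: erase events outside sigma."""
    return tuple(e for e in word if e in sigma)

def is_safe(lang_nsp, sigma_nsp, lang_r, sigma_r):
    """Check P_{sigma_nsp & sigma_r}(lang_nsp) is a subset of the
    same projection of lang_r."""
    shared = sigma_nsp & sigma_r
    allowed = {project(w, shared) for w in lang_r}
    return all(project(w, shared) in allowed for w in lang_nsp)

# A small requirement where u must occur before a; tick falls outside
# the shared alphabet and is erased by the projection.
lang_r = {(), ("u",), ("u", "a")}
lang_nsp = {("u", "tick", "a")}
print(is_safe(lang_nsp, {"u", "a", "tick"}, lang_r, {"u", "a"}))   # -> True
print(is_safe({("a", "u")}, {"u", "a", "tick"}, lang_r, {"u", "a"}))  # -> False
```

The second call is unsafe because $a$ occurs before $u$, which the requirement language does not allow.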
\begin{defn}[Safety] \label{dfn:safety} Given a plant $G$ and requirement $R$, a TDES $\mathit{NSP}$ with event set $\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$ is safe w.r.t.\ $G$ and $R$ if its behavior stays within the legal/safe behavior as specified by $R$; $P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L({\mathit{NSP}}))\subseteq P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L(R))$. \hfill $\blacksquare$ \end{defn} \textbf{Problem Statement:} Given a plant model $G$ as a TDES, control requirement $R$ for $G$ (also a TDES), observation (control) channel with delay $N_o$ ($N_c$) and maximum capacity $\ensuremath{M_{\mathit{max}}}\xspace$ ($\ensuremath{L_{\mathit{max}}}\xspace$), provide a networked supervisor \ensuremath{\mathit{NS}}\xspace such that \begin{itemize} \item \ensuremath{\mathit{NSP}}\xspace is nonblocking, \item \ensuremath{\mathit{NSP}}\xspace is time-lock free, \item \ensuremath{\mathit{NS}}\xspace is timed networked controllable w.r.t.\ $G$, \item \ensuremath{\mathit{NS}}\xspace is timed networked maximally permissive, and \item \ensuremath{\mathit{NSP}}\xspace is safe for $G$ w.r.t.\ $R$. \end{itemize} \begin{comment} \begin{example} \label{example:ISS} Consider $G$ depicted in Figure \ref{fig:ISSP} for which $\ensuremath{\Sigma_{\mathit{uc}}}\xspace=\{u\}$ and $\ensuremath{\Sigma_{\mathit{for}}}\xspace=\varnothing$. The control objective is to design a supervisor satisfying the requirement that the event $u$ must precede the event $a$, and not vice versa. This requirement can easily be modeled by the automaton shown in Figure \ref{fig:ISSR}. 
\begin{figure}[htb] \centering \begin{subfigure}[b]{0.40\linewidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (D) [below of=C] {$a_3$}; \node[state] (E) [right of=C] {$a_4$}; \node[state,accepting] (F) [right of=D] {$a_5$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$a$} (C) (B) edge [below,dashed] node [align=center] {$u$} (D) (C) edge [above,dashed] node [align=center] {$u$} (E) (D) edge [above] node [align=center] {$a$} (F) (E) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F); \end{tikzpicture} \caption{Plant}\label{fig:ISSP} \end{subfigure} \hfill \begin{subfigure}[b]{0.40\linewidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (A) {$q_0$}; \node[state] (B) [right of=A] {$q_1$}; \node[state,accepting] (C) [right of=B] {$q_2$}; \path[->] (A) edge [above,dashed] node [align=center] {$u$} (B) (A) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (A) (B) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (C) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C) (B) edge [above] node [align=center] {$a$} (C); \end{tikzpicture} \caption{Requirement}\label{fig:ISSR} \end{subfigure} \caption{Plant and requirement for Example \ref{example:ISS}.}\label{fig:ISS} \end{figure} \end{example} \end{comment} In the conventional non-networked supervisory control setting, if $R$ is controllable w.r.t.\ $G$ (as defined in Definition \ref{dfn:cont.TDES}), 
then an optimal nonblocking supervisor can be synthesized for $G$ satisfying $R$~\cite{Wonham:19}. If $R$ is not controllable w.r.t.\ $G$, then the supremal controllable sublanguage of $G||R$, indicated by $sup \mathcal{C}(G||R)$, should be calculated. Then, the synthesis is applied on $sup \mathcal{C}(G||R)$~\cite{Cassandras:99,Wonham:19}. \begin{comment} then there are two approaches to achieve a proper and maximally permissive supervisor: \MF{This is a bit tricky... I do not really agree that these are two different approaches. The "plantify" approach is just ordinary synthesis, but we can forget about what is plant and what is spec.\AR{I think I need help to take care of this comment/shorten this section}} \begin{enumerate} \item pre-synthesis: find the supremal controllable sublanguage of $P||R$ as $Sup_c(P||R)$, and apply the synthesis on $Sup_c(P||R)$~\cite{Cassandras:99,Wonham:19}. \MF{Why is this called ``pre-synthesis''? We are talking about the conventional setting here, right? So this is just calculating $\sup CNB$.} \item completion: make the requirement automaton $R$ complete $R^\bot$ as described in~\cite{Flordal:07} by adding a new \MFedit{}{blocking} state called $s_d$\MFedit{(which is blocking)}{} to the set of states of $R$. Then, whenever $R$ disables an uncontrollable event\MFedit{, the uncontrollable transition is considered to lead to $s_d$ in $R^\bot$}{ $\sigma_u$ in a state $q$, a transition $\langle q, \sigma_u, s_d\rangle$ is added in $R^\bot$}. By applying the synthesis on $P||R^\bot$, all original controllability problems in $P||R$ are translated to blocking issues. Note that this translation is necessary to let the supervisor know about the uncontrollable events that are disabled by a given requirement. To solve the blocking issues, synthesis still takes the controllability definition into account. 
\end{enumerate} \end{comment} \begin{comment} \begin{defn}[Automata Completion~\cite{Flordal:07}] \label{dfn:Rcomp} Let $R=(S,\Sigma_R,\delta_R,s_0,S_M)$ be the control requirement. The complete automaton for $R$ is $R^\bot=(S\cup s_d,\Sigma_R,\delta_R^\bot,s_0,S_M)$. \end{defn} \end{comment} \begin{comment} Example \ref{exp:BusPed} is provided to clarify these approaches. In a conventional supervisory control setting, both the methods give the same result. \begin{example} \label{exp:BusPed} Let us consider the endangered pedestrian example from~\cite{Wonham:19}. The plant $G$ is depicted in Figure \ref{fig:P} as a TDES where the events $G$ and $j$ stand for pass (for the bus) and jump (for the pedestrian), respectively. We have $\Sigma'_c=\ensuremath{\Sigma_{\mathit{for}}}\xspace=\{j\}$. The safety requirement is to force the pedestrian jump before the bus passes. This is modeled as a TDES $R$ given in Figure \ref{fig:R}. To synthesize a controllable and nonblocking supervisor for $G$ satisfying $R$, we first compute $P||R$. As shown in Figure \ref{fig:P||R}, $P||R$ is uncontrollable w.r.t.\ $G$ since it cancels the event $G$ after the word $tt$ (we use $t$ instead of \ensuremath{\mathit{tick}}\xspace when we want to refer to it in a word). Supervisory control synthesis of $Sup_c(P||R)$ and $P||R^\bot$ achieve the same result $S$ where we have $S=Sup_c(P||R)$ depicted in Figure \ref{fig:R_c}. 
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.46\columnwidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2cm,scale = 0.5, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (D) [right of=C] {$a_3$}; \node[state] (E) [below of=B] {$a_4$}; \node[state] (F) [right of=E] {$a_5$}; \node[state,accepting] (G) [right of=F] {$a_6$}; \path[->] (A) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (C) edge [above] node [align=center] {$ p $} (D) (B) edge [right] node [align=center] {$j$} (E) (C) edge [right] node [align=center] {$j$} (F) (D) edge [right] node [align=center] {$j$} (G) (E) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [above] node [align=center] {$ p $} (G) (G) edge [loop right] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) (D) edge [loop right] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (D); \end{tikzpicture} \caption{Plant $G$.} \label{fig:P} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\columnwidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2cm,scale = 0.5, transform shape] \node[initial,initial text={},state] (A) {$s_0$}; \node[state] (B) [right of=A] {$s_1$}; \node[state,accepting] (C) [right of=B] {$s_2$}; \path[->] (A) edge [loop above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [loop above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (B) (C) edge [loop above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (A) edge [above] node [align=center] {$ j $} (B) (B) edge [above] node [align=center] {$G$} (C) \end{tikzpicture} \caption{Requirement $R$.} \label{fig:R} \end{subfigure} \caption{Plant and safety requirement for the endangered pedestrian 
from Example \ref{exp:BusPed}.} \label{fig:BusPed} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{1\columnwidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2cm,scale = 0.5, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (E) [below of=B] {$a_4$}; \node[state] (F) [right of=E] {$a_5$}; \node[state,accepting] (G) [right of=F] {$a_6$}; \path[->] (A) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (B) edge [right] node [align=center] {$j$} (E) (C) edge [right] node [align=center] {$j$} (F) (E) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [above] node [align=center] {$ p $} (G) (G) edge [loop right] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) ; \end{tikzpicture} \caption{Synchronous product of the plant and requirement $P||R$.} \label{fig:P||R} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\columnwidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2cm,scale = 0.5, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (E) [below of=B] {$a_4$}; \node[state] (F) [right of=E] {$a_5$}; \node[state,accepting] (G) [right of=F] {$a_6$}; \path[->] (A) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [right] node [align=center] {$j$} (E) (E) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [above] node [align=center] {$ p $} (G) (G) edge [loop above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) ; \end{tikzpicture} \caption{Supremal controllable language $Sup_c(P||R)$ of $P||R$} \label{fig:R_c} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\columnwidth} \centering 
\begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2cm,scale = 0.5, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (D) [right of=C] {$s_d$}; \node[state] (E) [below of=B] {$a_4$}; \node[state] (F) [right of=E] {$a_5$}; \node[state,accepting] (G) [right of=F] {$a_6$}; \path[->] (A) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$ \ensuremath{\mathit{tick}}\xspace $} (C) (C) edge [above] node [align=center] {$ p $} (D) (B) edge [right] node [align=center] {$j$} (E) (C) edge [right] node [align=center] {$j$} (F) (E) edge [above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [above] node [align=center] {$ p $} (G) (G) edge [loop above] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) (D) edge [loop above,] node [align=center] {$\ensuremath{\mathit{tick}}\xspace,j$} (D); \end{tikzpicture} \caption{$P||R^{\bot}$} \label{fig:R^{tot}} \end{subfigure} \caption{Synchronous product $P||R$ of $G$ and $R$, supremal controllable language $Sup_c(P||R)$ of $P||R$, and the total requirement $R^\bot$ from Example \ref{exp:BusPed}.} \label{fig:BusPed} \end{figure} \end{example} \end{comment} \begin{comment} \subsection{Problem Formulation} Since most control requirements are defined to provide safety of a plant, we call a supervised plant \emph{safe} if it satisfies the control requirements. \begin{defn}[Safety] \label{dfn:safety} A TDES $\mathit{NSP}$ with event set $\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$ is safe for a requirement automaton $R$ with event set $\Sigma_R$ if its behavior stays within the legal/safe behavior as specified by $R$; $P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L({\mathit{NSP}}))\subseteq P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L(R))$. 
\hfill $\blacksquare$ \end{defn} \textbf{Problem Statement:} Given a plant $G$, control requirement $R$, observation(control) channel with delay $N_o$($N_c$) and maximum capacity $\ensuremath{M_{\mathit{max}}}\xspace$($\ensuremath{L_{\mathit{max}}}\xspace$), provide a networked supervisor \ensuremath{\mathit{NS}}\xspace such that: \begin{itemize} \item \ensuremath{\mathit{NS}}\xspace is (timed networked) controllable w.r.t.\ $G$, \item \ensuremath{\mathit{NS}}\xspace is (timed networked) maximally permissive, \item \ensuremath{\mathit{NSP}}\xspace is nonblocking, \item \ensuremath{\mathit{NSP}}\xspace is time-lock free, and \item \ensuremath{\mathit{NSP}}\xspace is safe for $G$ w.r.t.\ $R$. \end{itemize} \end{comment} In a networked supervisory control setting, synthesizing a networked supervisor for $sup \mathcal{C}(G||R)$ does not always result in a safe networked supervised plant. This is because, in $sup \mathcal{C}(G||R)$, some events are already disabled to resolve controllability problems introduced by the requirement $R$. In a conventional non-networked setting, this causes no problem because events are observed immediately when executed. However, when observations are delayed, several states may be reached under the same observation. Hence, if an event is disabled at a state, it should be disabled at all observationally equivalent ones. Even for a controllable requirement, any disablement of events should be considered at all observationally equivalent states. To address this issue, any requirement automaton $R$ (whether controllable or uncontrollable) is made complete as $R^{\bot}$ in terms of both uncontrollable and controllable events. \emph{Completion} was first introduced in~\cite{Flordal:07}, where the requirement automaton $R$ is made complete in terms of only uncontrollable events.
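As a sketch of this completion step: every event of $\Sigma_R$ that is undefined in a state of $R$ is redirected to a fresh non-marked dump state $q_d$. The dictionary-based representation of the transition function and all names below are hypothetical.

```python
# Completion R -> R^bot as described above: undefined (state, event)
# pairs are redirected to a fresh dump state q_d. The transition
# function is modelled as a dict keyed by (state, event).

def complete(states, sigma_r, delta, dump="q_d"):
    """Return the total transition function delta_bot of R^bot."""
    delta_bot = dict(delta)
    for q in states:
        for ev in sigma_r:
            # Keep existing transitions; send undefined ones to q_d.
            delta_bot.setdefault((q, ev), dump)
    return delta_bot

# A small (hypothetical) requirement where u must occur before a.
delta = {("q0", "u"): "q1", ("q1", "a"): "q2"}
delta_bot = complete(["q0", "q1", "q2"], ["u", "a"], delta)
print(delta_bot[("q0", "a")])  # -> q_d (a was disabled in q0)
print(delta_bot[("q1", "a")])  # -> q2 (unchanged)
```

This makes every disablement by $R$ explicit as a transition into $q_d$, so that synthesis on $G||R^{\bot}$ sees it as a blocking issue rather than silently losing it.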
By applying the synthesis on $G||R^\bot$, all original controllability problems in $G||R$ are translated to blocking issues. Note that this translation is necessary to let the supervisor know about the uncontrollable events that are disabled by a given requirement. To solve the blocking issues, synthesis still takes the controllability definition into account. \begin{comment} This issue is clarified in the following example. \begin{example} Let us consider Example \ref{example:ISS} once more in a networked supervisory control setting where $N_c=N_o=1$. The objective is to synthesize a networked supervisor to satisfy the requirement. By applying Algorithm \ref{algo} on $\Pi(G||R,N_c,N_o,\ensuremath{L_{\mathit{max}}}\xspace,\ensuremath{M_{\mathit{max}}}\xspace)$ we obtain the networked supervisor depicted in Figure \ref{fig:NS4ISS}. If we calculate the networked supervised plant using Definition \ref{dfn:operator}, we get the automaton depicted in Figure \ref{fig:NSP4ISS}. Clearly, the requirement is not met as $a$ may occur before $u$. 
\hfill $\blacksquare$ \begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (y0) {$y_0$}; \node[state] (y1) [right of=y0] {$y_1$}; \node[state] (y2) [right of=y1] {$y_2$}; \node[state,accepting] (y3) [right of=y2] {$y_3$}; \node[state,accepting] (y4) [right of=y3] {$y_4$}; \node[state,accepting] (y5) [right of=y4] {$y_5$}; \node[state,accepting] (y6) [below of=y4] {$y_6$}; \path[->] (y0) edge [above] node [align=center] {$a_e$} (y1) (y1) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y2) (y2) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y3) (y3) edge [above] node [align=center] {$u_o$} (y4) (y4) edge [above] node [align=center] {$a_o$} (y5) (y3) edge [above] node [align=center] {$a_o$} (y6) (y6) edge [above] node [align=center] {$u_o$} (y5) (y5) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (y5) ; \end{tikzpicture} \caption{Networked supervisor for Example \ref{example:ISS} without totalization ($N_c=1,N_o=1$).} \label{fig:NS4ISS} \end{figure} \begin{figure}[htbp] \centering \begin{tikzpicture}[>=stealth',,shorten >=1pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state] (x0) {$x_0$}; \node[state] (x1) [right of=x0] {$x_1$}; \node[state] (x2) [right of=x1] {$x_2$}; \node[state] (x3) [right of=x2] {$x_3$}; \node[state] (x4) [right of=x3] {$x_4$}; \node[state] (x5) [below of=x2] {$x_5$}; \node[state,accepting] (x6) [right of=x5] {$x_6$}; \node[state,accepting] (x7) [right of=x6] {$x_7$}; \node[state,accepting] (x8) [right of=x7] {$x_8$}; \node[state,accepting] (x9) [below of=x7] {$x_9$}; \node[state,accepting] (x10) [right of=x9] {$x_{10}$}; \path[->] (x0) edge [above] node [align=center] {$a_e$} (x1) (x1) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x2) (x2) edge [above] node 
[align=center] {$a$} (x3) (x3) edge [above,dashed] node [align=center] {$u$} (x4) (x2) edge [right,dashed] node [align=center] {$u$} (x5) (x5) edge [above] node [align=center] {$a$} (x6) (x6) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x7) (x4) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x7) (x7) edge [above,dashed] node [align=center] {$u_o$} (x8) (x7) edge [right,dashed] node [align=center] {$a_o$} (x9) (x8) edge [right,dashed] node [align=center] {$a_o$} (x10) (x9) edge [above,dashed] node [align=center] {$u_o$} (x10) (x10) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (x10) ; \end{tikzpicture} \caption{Networked supervised plant for Example \ref{example:ISS} without totalization ($N_c=1,N_o=1$).} \label{fig:NSP4ISS} \end{figure} \end{example} In conclusion, in a networked supervisory control setting, we cannot deal with requirements using the available methods. \end{comment} \begin{defn}[Automata Completion] \label{dfn:Rtot} For a TDES $R=(Q,\Sigma_R,\delta_R,q_0,Q_M)$, the complete automaton $R^{\bot}$ is defined as $R^{\bot}=(Q\cup\{q_d\},\Sigma_R,\delta^{\bot}_R,q_0,Q_M)$ with $q_d\notin Q$, where for every $q\in Q$ and $\sigma\in\Sigma_R$, \begin{equation*} \delta^{\bot}_R(q,\sigma)=\begin{cases}\text{$\delta_R(q,\sigma)$} &\quad\text{if $\delta_R(q,\sigma)!$}\\ \text{$q_d$} &\quad\text{otherwise}. \end{cases} \end{equation*} \hfill $\blacksquare$ \end{defn} \begin{comment} \AR{The following example was chosen before to show why we need to complete $R$ in terms of controllable events as well ... Shall we keep this one? provide a more positive (with result) example? or no example here to save space?} \begin{example} \label{example:extensions} Consider $G$ depicted in Figure \ref{fig:ISSP} for which $\ensuremath{\Sigma_{\mathit{uc}}}\xspace=\{u\}$ and $\ensuremath{\Sigma_{\mathit{for}}}\xspace=\varnothing$. 
The control objective is to design a supervisor satisfying the requirement that the event $u$ must precede the event $a$, and not vice versa. This requirement can easily be modeled by the automaton shown in Figure \ref{fig:ISSR}. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.40\linewidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (A) {$a_0$}; \node[state] (B) [right of=A] {$a_1$}; \node[state] (C) [right of=B] {$a_2$}; \node[state] (D) [below of=C] {$a_3$}; \node[state] (E) [right of=C] {$a_4$}; \node[state,accepting] (F) [right of=D] {$a_5$}; \path[->] (A) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (B) edge [above] node [align=center] {$a$} (C) (B) edge [below,dashed] node [align=center] {$u$} (D) (C) edge [above,dashed] node [align=center] {$u$} (E) (D) edge [above] node [align=center] {$a$} (F) (E) edge [right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F) (F) edge [loop below,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (F); \end{tikzpicture} \caption{Plant}\label{fig:ISSP} \end{subfigure} \hfill \begin{subfigure}[b]{0.40\linewidth} \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (A) {$q_0$}; \node[state] (B) [right of=A] {$q_1$}; \node[state,accepting] (C) [right of=B] {$q_2$}; \path[->] (A) edge [above,dashed] node [align=center] {$u$} (B) (A) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (A) (B) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (C) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C) (B) edge [above] node [align=center] {$a$} (C); \end{tikzpicture} \caption{Requirement}\label{fig:ISSR} \end{subfigure} \caption{Plant and requirement for 
Example \ref{example:extensions}}\label{fig:ISS} \end{figure} Let us select $N_o=N_c=\ensuremath{L_{\mathit{max}}}\xspace=\ensuremath{M_{\mathit{max}}}\xspace=1$ and find a networked supervisor satisfying all the properties mentioned in the problem statement. For this purpose, $R^{\bot}$ is depicted in Figure \ref{fig:ISSRtot}. For $NP^t$ depicted in Figure \ref{fig:ISSNPtot}, Algorithm \ref{algo} does not result in a networked supervisor (the safe behavior of the plant cannot be preserved by any networked supervisor in the presence of delays). \begin{figure} \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.6, transform shape] \node[initial,initial text={},state,accepting] (A) {$q_0$}; \node[state] (B) [right of=A] {$q_1$}; \node[state,accepting] (C) [right of=B] {$q_2$}; \node[blue,state] (D) [below of=B] {$q_d$}; \path[->] (A) edge [above,dashed] node [align=center] {$u$} (B) (A) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (A) (B) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (B) (C) edge [loop above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C) (B) edge [above] node [align=center] {$a$} (C) (A) edge [left,blue] node [align=center] {$a$} (D) (B) edge [right,blue,dashed] node [align=center] {$u$} (D) (C) edge [above,blue] node [align=center] {$a$} (D) (C) edge [bend left,blue,dashed] node [align=center] {$u$} (D) ; \end{tikzpicture} \caption{Total requirement for the plant stated in Example \ref{example:extensions}}\label{fig:ISSRtot} \end{figure} \begin{figure}[h] \centering \begin{tikzpicture}[>=stealth',,shorten >=0.8pt,auto,node distance=2.2cm,scale = 0.55, transform shape] \node[initial,initial text={},state] (A) {$x_0$}; \node[state] (B) [right of=A] {$x_1$}; \node[state] (C) [right of=B] {$x_2$}; \node[state] (D) [right of=C] {$x_3$}; \node[state] (E) [below of=C] {$x_4$}; \node[state] (F) [right of=E] {$x_5$}; 
\node[state,accepting] (G) [right of=F] {$x_6$}; \node[state,accepting] (H) [right of=G] {$x_7$}; \node[state,accepting] (I) [below of=H] {$x_8$}; \node[state,accepting] (J) [right of=H] {$x_9$}; \path[->] (A) edge [above] node [align=center] {$a_e$} (B) (B) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (C) (C) edge [above,dashed] node [align=center] {$a$} (D) (C) edge [right,dashed] node [align=center] {$u$} (E) (E) edge [above,dashed] node [align=center] {$a$} (F) (F) edge [above,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (G) (G) edge [above,dashed] node [align=center] {$a_o$} (H) (G) edge [above,dashed] node [align=center] {$u_o$} (I) (H) edge [above,dashed] node [align=center] {$u_o$} (J) (I) edge [right,dashed] node [align=center] {$a_o$} (J) (J) edge [loop right,dashed] node [align=center] {$\ensuremath{\mathit{tick}}\xspace$} (J); \end{tikzpicture} \caption{Networked Plant from Example \ref{example:extensions}}\label{fig:ISSP} \end{figure} \end{example} \end{comment} To find a networked supervisor, Algorithm~\ref{algo} is applied on $\Pi(G||R^{\bot},N_c,N_o,\ensuremath{L_{\mathit{max}}}\xspace,\ensuremath{M_{\mathit{max}}}\xspace)$. The obtained networked supervisor is already guaranteed to be timed networked controllable, timed networked maximally permissive, and it results in a nonblocking and time-lock free networked supervised plant. Theorem \ref{theorem:safety} shows that the networked supervised plant is safe as well. \begin{theorem}[Safe NSP] \label{theorem:safety} Given a plant $G$, requirement $R$, and the networked supervisor $\ensuremath{\mathit{NS}}\xspace$ computed by Algorithm~\ref{algo} for $\Pi(G||R^{\bot},N_c,N_o,\ensuremath{L_{\mathit{max}}}\xspace,\ensuremath{M_{\mathit{max}}}\xspace)$: $\ensuremath{\mathit{NS}}\xspace_{N_c}\|_{N_o}\,(G||R^{\bot})$ is safe for $G$ w.r.t.\ $R$. \end{theorem} \begin{proof} See Appendix \ref{proof:safety}. 
\hfill $\blacksquare$ \end{proof} \begin{comment} \section{Case Study/examples from DSD} \subsection{Implementation in CIF3 Toolset} \AR{from DSD} In this section we briefly discuss the implementation of the networked supervisory control synthesis procedure which is made in the context of the tool set CIF 3~\cite{CIF:14}. CIF 3 has been developed for supporting model-based engineering of supervisory controllers. Among others, it allows the specification of networks of (extended) finite automata for modelling plants and requirements \AR{synch product vs composition?}. It provides tool support for simulation and animating such models and also implementations of different synthesis algorithms. As such it is a good basis for protyping the proposed algorithms. \begin{figure}[h] \includegraphics[width=85mm]{imp+forcibleE.jpg} \caption{Structure of the implementation of networked supervisory control synthesis procedure in the CIF 3 tool set.} \label{fig:imp} \end{figure} Figure \ref{fig:imp} depicts how the presented approach is implemented in the CIF 3 tool set. $G$, $R$, \ensuremath{\Sigma_{\mathit{for}}}\xspace, $N_o$ and $N_c$ are the inputs standing for the plant, requirement automaton, subset of forcible events of $G$, control and observation delays, respectively; the output $\ensuremath{\mathit{NS}}\xspace$ indicates the networked supervisor. As a first step, the requirement automaton $R$, which is not necessarily controllable, is completed using the method introduced in Definition \ref{dfn:Rtot}. The resulting requirement automaton is denoted $R^{tot}$ in the figure. Presently, this step is carried out manually, but it can be automated obviously. \AR{Can we provide the codes for it?}\AR{Also can we change the codes regarding forcible events} Then, the plant $G$ and the completed requirement $R$ are composed using the synchronous product. Implementation of this functionality is already available in CIF 3. 
Note that both the plant and the requirement may be specified in terms of a number of automata. In both cases the implementation discussed above requires that these are combined into a single plant and requirement automaton by means of synchronous product, respectively. For requirement automata it does not matter if this is done before or after completing them. \AR{The point is that in Wonham composition is used, we need to provide reasoning/assumptions on $G$ or $R$ for not using it} \subsection{Examples} \AR{I can go for examples from DSD following by a theorem or a case study or both :), an example can be also added w.r.to forcible events} \AR{we can also have the theorem resulted from examples+proof} \end{comment} \section{Conclusions and Future Work} \label{section:conclusions} In this paper, we study the networked supervisory control synthesis problem. We first introduce a networked supervisory control framework in which both control and observation channels introduce delays, the control channel is FIFO, and the observation channel is non-FIFO. Moreover, we assume that a global clock exists in the system such that the passage of a unit of time is considered as an event \emph{tick} in the plant model. Also, communication delays are measured as a number of occurrences of the \emph{tick} event. In our framework, uncontrollable events occur in the plant spontaneously. However, controllable events can be executed only if they have been enabled by the networked supervisor. On the other hand, the plant can either accept a control command (enabled by the networked supervisor) and execute it or ignore the control command and execute some other event. For the proposed framework, we also provide an asynchronous composition operator to obtain the networked supervised plant. Furthermore, we adapt the definition of conventional controllability for our framework and introduce timed networked controllability. 
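As an informal illustration of the communication model summarized above (a FIFO control channel, a non-FIFO observation channel, and delays measured in occurrences of \emph{tick}), the following minimal Python sketch models the two channels. It is only an illustration of the intended semantics; all names, and the representation of delays as per-message countdowns, are our own assumptions rather than part of the formal framework.

```python
from collections import deque


class ControlChannel:
    """FIFO control channel: a command sent by the supervisor is
    deliverable to the plant only after N_c ticks have passed, and
    only when it is at the head of the queue (illustrative sketch)."""

    def __init__(self, n_c):
        self.n_c = n_c
        self.queue = deque()          # entries [event, remaining_ticks]

    def send(self, event):
        self.queue.append([event, self.n_c])

    def tick(self):
        # Each tick ages every pending command by one.
        for entry in self.queue:
            if entry[1] > 0:
                entry[1] -= 1

    def deliverable(self):
        # Mirrors the head(l) = (sigma, 0) condition: only the FIFO
        # head can be delivered, and only once its delay has elapsed.
        if self.queue and self.queue[0][1] == 0:
            return self.queue[0][0]
        return None

    def deliver(self):
        return self.queue.popleft()[0]


class ObservationChannel:
    """Non-FIFO observation channel: any observation whose delay has
    elapsed may be delivered, regardless of sending order."""

    def __init__(self, n_o):
        self.n_o = n_o
        self.bag = []                 # entries [event, remaining_ticks]

    def send(self, event):
        self.bag.append([event, self.n_o])

    def tick(self):
        for entry in self.bag:
            if entry[1] > 0:
                entry[1] -= 1

    def deliverable(self):
        return [e for e, t in self.bag if t == 0]

    def deliver(self, event):
        self.bag.remove([event, 0])
        return event
```

In this sketch, a control command becomes deliverable only when it sits at the head of the FIFO queue with its countdown at zero, whereas any aged-out observation may be taken from the observation channel in any order.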
Then, we present a method for obtaining the networked plant automaton, which represents the behavior of the plant in the networked supervisory control framework. For the networked plant, we provide an algorithm synthesizing a networked supervisor which is timed networked controllable, nonblocking, time-lock free, and maximally permissive. Finally, to generalize, we solve the problem for a given set of control (safety) requirements modeled as automata. We guarantee that the proposed technique achieves a networked supervisor that is timed networked controllable, nonblocking, time-lock free, maximally permissive, and safe. Our proposed approach can be adjusted to a setting with observation and control delays specified for each event, a setting with bounded control and observation delays, or a setting with lossy communication channels. In each case, only the timed asynchronous composition and networked plant operators need to be updated; the synthesis algorithm stays the same. For cases with large state spaces, we must deal with the scalability problem of the networked plant. For such cases, it is suggested to switch to timed automata. A supervisory control synthesis method for timed automata has recently been proposed by the authors~\cite{Rashidinejad:20}. Networked supervisory control of timed automata will be investigated in future research. \appendices \section{Technical Lemmas} Here, the notation $.$ is used to refer to an element of a tuple. For instance, $z.a$ refers to the (first) element $a$ of $z=(a,y,m,l)$. \begin{lemma}[Nonblockingness over Projection \cite{Rashidinejad18}] \label{lemma:projectionNB} For any TDES $G$ with event set $\Sigma$ and any event set $\Sigma'\subseteq \Sigma$: if $G$ is nonblocking, then $P_{\Sigma'}(G)$ is nonblocking. \end{lemma} \begin{proof} Consider an arbitrary TDES $G = (A,\Sigma,\delta,a_0,A_m)$ and an arbitrary $\Sigma'\subseteq \Sigma$. Suppose that $G$ is nonblocking.
Consider an arbitrary reachable state $A_r \subseteq A$ in $P_{\Sigma'}(G)$. By construction, $A_r$ is nonempty. Assume that this state is reached through the word $w \in \Sigma'^*$. Then, for each state $a \in A_r$, again by construction, $\delta(a_0,w') = a$ for some $w'\in \Sigma^*$ with $P_{\Sigma'}(w')=w$. Because $G$ is nonblocking, there exists a $v'\in \Sigma^*$ such that $\delta(a,v')=a_m$ for some $a_m \in A_m$. Consequently, from state $A_r$, it is possible to have a transition labelled with $P_{\Sigma'}(v')$ to a state $A'_r$ containing $a_m$. By construction, this state $A'_r$ is a marked state in $P_{\Sigma'}(G)$. Hence, the projection automaton is nonblocking as well. \hfill $\blacksquare$ \end{proof} \begin{lemma}[Time-lock Freeness over Projection \cite{Rashidinejad18}] \label{lemma:projectionTLF} For any TDES $G$ with event set $\Sigma$ and any event set $\Sigma'\subseteq \Sigma$ with $\ensuremath{\mathit{tick}}\xspace\in \Sigma'$: if $G$ is TLF, then $P_{\Sigma'}(G)$ is TLF. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lemma:projectionNB}. \hfill$\blacksquare$ \end{proof} \begin{lemma}[\ensuremath{\mathit{NSP}}\xspace Transitions] \label{lemma:NSP} Given a plant $G$, a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ (for that plant), and a networked supervised plant $\ensuremath{\mathit{NSP}}\xspace$ (for those): $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a=\delta_G(a_0,P_{\Sigma_G}(w))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w))$, for any $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. \end{lemma} \begin{proof} Take $w\in L(\ensuremath{\mathit{NSP}}\xspace)$; we show that $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a=\delta_G(a_0,P_{\Sigma_G}(w))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w))$.
This is proved by induction on the structure of $w$. \textbf{Base case:} Assume $w=\epsilon$. Then, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a=a_0=\delta_G(a_0,P_{\Sigma_G}(\epsilon))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).y=y_0=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(\epsilon))$. \textbf{Induction step:} Assume that $w=v\sigma$ where the statement holds for $v$, i.e., $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\delta_G(a_0,P_{\Sigma_G}(v))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v))$. It suffices to prove that the statement holds for $v\sigma$, i.e., $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).a=\delta_G(a_0,P_{\Sigma_G}(v\sigma))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma))$. Considering Definition \ref{dfn:operator}, for $\sigma$ enabled at $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v)$ the following cases may occur: $\sigma\in\ensuremath{\Sigma_{\mathit{NS}}}\xspace\setminus\{\ensuremath{\mathit{tick}}\xspace\}$, which refers to item 1) and item 5). 
Then, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a$ remains unchanged; $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).a=$ $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=$ $\delta_G(a_0,P_{\Sigma_G}(v))=$ $\delta_G(a_0,P_{\Sigma_G}(v)\epsilon)=$ $\delta_G(a_0,P_{\Sigma_G}(v\sigma))$, and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)),\sigma)=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)\sigma)=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma))$. $\sigma\in \Sigma_G\setminus\{\ensuremath{\mathit{tick}}\xspace\}$, which refers to item 2) and item 3). Then, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).a=\delta_G(\delta_G(a_0,P_{\Sigma_G}(v)),\sigma)=\delta_G(a_0,P_{\Sigma_G}(v)\sigma)=\delta_G(a_0,P_{\Sigma_G}(v\sigma))$, and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y$ remains unchanged; $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)\epsilon)=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma))$. $\sigma=\ensuremath{\mathit{tick}}\xspace$, which refers to item 4).
Then, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).a=\delta_G(\delta_G(a_0,P_{\Sigma_G}(v)),\sigma)=\delta_G(a_0,P_{\Sigma_G}(v)\sigma)=\delta_G(a_0,P_{\Sigma_G}(v\sigma))$, and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)),\sigma)=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)\sigma)=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma))$. \textbf{Conclusion:} By the principle of induction, the statement ($\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a=\delta_G(a_0,P_{\Sigma_G}(w))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w))$) holds for all $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. \hfill$\blacksquare$ \end{proof} \begin{lemma}[NP Transitions] \label{lemma:NP} Given a plant $G$ with $x_0.a=a_0$, for any $w\in L(\ensuremath{\mathit{NP}}\xspace)$: $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).a=\delta_G(a_0,P_{\Sigma_G}(w))$. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lemma:NSP}. \hfill $\blacksquare$ \end{proof} \begin{lemma}[NP Enabling Commands] \label{lemma:ontime} Given a plant $G$, events from $\Sigma_e$ are enabled on time in the networked plant $\ensuremath{\mathit{NP}}\xspace$ (for that plant); for any $w\sigma\in L(G)$, $\sigma\in \Sigma_c$: there exists $w_0\sigma_e w_1 \sigma\in L(\ensuremath{\mathit{NP}}\xspace)$ where $\sigma_e\in\Sigma_e$ is the enabling event of $\sigma$, $P_{\Sigma_G}(w_0w_1)=w$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_1)|=N_c$.
\end{lemma} \begin{proof} Assume that $G'=P_{\Sigma_G\setminus\ensuremath{\Sigma_{\mathit{uc}}}\xspace}(G)$ is represented by $(A', \Sigma_G, \delta'_G, a'_{0}, A'_{m})$, and let us do the proof by induction on the number of controllable events in $w\in L(G)$. \textbf{Base case:} Assume that $\sigma$ is the first controllable event enabled in $G$. Then, $w\in (\ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{\ensuremath{\mathit{tick}}\xspace\})^*$. According to Assumption 1, $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w)|\geq N_c$. Let us say $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w)|=N_c+i$ for some $i\in\mathbb{N}_0$. Then, $\ensuremath{\mathit{tick}}\xspace^{N_c+i}\sigma\in L(G')$. Also, assume that $w=w_iw_{N_c}$ for some $w_i,w_{N_c}$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_i)|=i$, and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_{N_c})|=N_c$. Considering Definition \ref{dfn:NP}, $\ensuremath{\mathit{NP}}\xspace$ starts from $x_0=(a_0,\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c}),[],\epsilon)$. Then, based on item 4), \ensuremath{\mathit{tick}}\xspace occurs in $\ensuremath{\mathit{NP}}\xspace$ when it is enabled in both $G$ and $G'$. For the first $i$~\ensuremath{\mathit{ticks}}\xspace, whenever \ensuremath{\mathit{tick}}\xspace is enabled in $G$, it is also enabled in $G'$ (there are $i$ \ensuremath{\mathit{ticks}}\xspace enabled in $G'$ before $\sigma$ occurs). Meanwhile, if there is an event ready to be observed, then based on item 5), the corresponding observed event occurs in $\ensuremath{\mathit{NP}}\xspace$, which does not change the current state of $G$ and $G'$. Also, based on item 3), if an uncontrollable event is enabled in $G$, it occurs in $\ensuremath{\mathit{NP}}\xspace$ without changing the state of $G'$. Otherwise, \ensuremath{\mathit{tick}}\xspace occurs in $\ensuremath{\mathit{NP}}\xspace$ by being executed in both $G$ and $G'$.
In this situation, we say that $G$ and $G'$ are synchronized on \ensuremath{\mathit{tick}}\xspace. Therefore, it is feasible that some $w_0$ is executed in $\ensuremath{\mathit{NP}}\xspace$ based on the execution of $\ensuremath{\mathit{tick}}\xspace^{i}$ in $G'$ and $w_i$ in $G$. Then, $\delta_{NP}(x_0,w_0).a=\delta_G(a_0,w_i)$ and $\delta_{NP}(x_0,w_0).a'=\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c+i})$, and so $P_{\Sigma_G}(w_0)=w_i$. After that, since $\delta'_G(\delta_{NP}(x_0,w_0).a',\sigma)!$, based on item 1), $\sigma_e$ occurs in $\ensuremath{\mathit{NP}}\xspace$, and $(\sigma,N_c)$ is added to $\delta_{NP}(x_0,w_0).l$. Note that based on item 3) (item 5)), uncontrollable events enabled in $G$ (events ready to be observed) can occur in between, but without loss of generality, let us assume that $\sigma_e$ is enabled first, and then uncontrollable (observed) events are executed. So, $w_1$ will be executed in $\ensuremath{\mathit{NP}}\xspace$ based on the execution of $w_{N_c}$ in $G$. Therefore, $P_{\Sigma_G}(w_1)=w_{N_c}$, and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_{1})|=|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_{N_c})|=N_c$. Based on item 4), by the execution of each \ensuremath{\mathit{tick}}\xspace, $\delta_{NP}(x_0,w_0\sigma_e).l$ is decreased by one. Also, $\sigma$ is the only controllable event enabled in $G$ so that $head(\delta_{NP}(x_0,w_0\sigma_ew_1).l)=(\sigma,0)$. Then, based on item 2), $\sigma$ will be executed in $\ensuremath{\mathit{NP}}\xspace$. \textbf{Induction step:} Assume that $\sigma$ is the $n^{th}$ controllable event enabled in $G$ where the statement holds for all previous controllable events. Let us indicate the $(n-1)^{th}$ controllable event by $\sigma^{n-1}$ such that $w_{n-1}\,\sigma^{n-1}\in L(G)$. As the statement holds for $\sigma^{n-1}$, there exists some $w^{n-1}_0\,\sigma^{n-1}_e\,w^{n-1}_1\,\sigma^{n-1}\in L(\ensuremath{\mathit{NP}}\xspace)$ such that $P_{\Sigma_G}(w^{n-1}_0\,w^{n-1}_1)=w_{n-1}$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^{n-1}_1)|=N_c$.
It suffices to prove that for the next controllable event $\sigma^n$, $w_n\,\sigma^n\in L(G)$, there exists some $w^n_0\,\sigma^n_e\,w^n_1\,\sigma^n\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\Sigma_G}(w^n_0\,w^n_1)=w_n$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^n_1)|=N_c$. Let us say $w_n=w_{n-1}\,\sigma^{n-1}w$ where $w\in (\ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{\ensuremath{\mathit{tick}}\xspace\})^*$. Assume that $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w)|=j$, and $w_{n-1}=w^{n-1}_i\,w^{n-1}_{N_c}$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^{n-1}_i)|=i$, and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^{n-1}_{N_c})|=N_c$. Moreover, let us say for $w_{n-1}\,\sigma^{n-1}\,w\,\sigma^n\in L(G)$, there exists $\ensuremath{\mathit{tick}}\xspace^{N_c}\,w'_{n-1}\,\sigma^{n-1}\,\ensuremath{\mathit{tick}}\xspace^j\sigma^n\in L(G')$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w'_{n-1})|=i$. Considering Definition \ref{dfn:NP}, $G'$ synchronizes with $G$ on executing \ensuremath{\mathit{tick}}\xspace since whenever \ensuremath{\mathit{tick}}\xspace is enabled in $G'$, it occurs in $\ensuremath{\mathit{NP}}\xspace$ only if $G$ enables it as well. Also, uncontrollable events (observed events) occur as they are enabled in $G$ (as the corresponding event is ready to be observed), and due to the induction assumption, all controllable events occurring in $G$ before $\sigma^n$ are enabled on time, and so they will be executed in $\ensuremath{\mathit{NP}}\xspace$. By the execution of $w^{n-1}_0\,\sigma^{n-1}_e$ in $\ensuremath{\mathit{NP}}\xspace$, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w^{n-1}_0\,\sigma^{n-1}_e).a'=\delta'_G(a'_0,\ensuremath{\mathit{tick}}\xspace^{N_c}\,w'_{n-1}\,\sigma^{n-1})$, and so $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w^{n-1}_0\,\sigma^{n-1}_e).a=\delta_G(a_0,w^{n-1}_i)$ ($G$ and $G'$ synchronize on \ensuremath{\mathit{tick}}\xspace).
At this point, $\ensuremath{\mathit{tick}}\xspace^j$ is enabled in $G'$ (before reaching $\sigma^n$), and $w^{n-1}_{N_c}$ is enabled in $G$ (before reaching $\sigma^{n-1}$). Then, one of the following cases may occur: $j< N_c$. Then, assume $w^{n-1}_{N_c}=w^{n-1}_jw^{n-1}_{N_c-j}$ for some $w^{n-1}_j,w^{n-1}_{N_c-j}$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^{n-1}_j)|=j$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^{n-1}_{N_c-j})|=N_c-j$. Then, the execution of $\ensuremath{\mathit{tick}}\xspace^j$ in $G'$ is synchronized with the execution of $w^{n-1}_j$ in $G$, resulting in $w^{n-1}_0\sigma^{n-1}_ev_1\in L(\ensuremath{\mathit{NP}}\xspace)$ with $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(v_1)|=j$ and $P_{\Sigma_G}(v_1)=w^{n-1}_{j}$. After that, $\sigma^n_e$ occurs in $\ensuremath{\mathit{NP}}\xspace$ (as it is enabled in $G'$), adding $(\sigma^n,N_c)$ to $l$. This is followed by the execution of $w^{n-1}_{N_c-j}$ in $G$ and results in $w^{n-1}_0\sigma^{n-1}_ev_1\sigma^n_ev_2\in L(\ensuremath{\mathit{NP}}\xspace)$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(v_2)|=N_c-j$ and $P_{\Sigma_G}(v_2)=w^{n-1}_{N_c-j}$. At this point, $\sigma^{n-1}$ is executed in $\ensuremath{\mathit{NP}}\xspace$, followed by the execution of $v_3$ where $P_{\Sigma_G}(v_3)=w$, and so $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(v_3)|=j$. This results in $w^{n-1}_0\,\sigma^{n-1}_e\,v_1\,\sigma^n_e\,v_2\,\sigma^{n-1}\,v_3\in L(\ensuremath{\mathit{NP}}\xspace)$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(v_2\sigma^{n-1}v_3)|=N_c-j+j=N_c$, and $head(l)=(\sigma^n,0)$ (after the execution of $\sigma^{n-1}$, there is only $(\sigma^n,N_c)$ in $l$, and $l$ is decreased by one by the execution of each \ensuremath{\mathit{tick}}\xspace), and so $\sigma^n$ occurs in $\ensuremath{\mathit{NP}}\xspace$.
Hence, $w^n_0\,\sigma^n_e\,w^n_1\,\sigma^n\in L(\ensuremath{\mathit{NP}}\xspace)$ for $w^n_0=w^{n-1}_0\,\sigma^{n-1}_e\,v_1$ and $w^n_1=v_2\,\sigma^{n-1}\,v_3$ where $P_{\Sigma_G}(w^n_0\,w^n_1)=P_{\Sigma_G}(w^{n-1}_0\,\sigma^{n-1}_e\,v_1\,v_2\,\sigma^{n-1}\,v_3)=w_n$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^n_1)|=N_c-j+j=N_c$. $j> N_c$. Then, assume $w=w_{j-N_c}w_{N_c}$ for some $w_{j-N_c},w_{N_c}$ where $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_{j-N_c})|=j-N_c$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w_{N_c})|=N_c$. The execution of $\ensuremath{\mathit{tick}}\xspace^{N_c}$ in $G'$ is synchronized with the execution of $w^{n-1}_{N_c}$ in $G$, resulting in $w^{n-1}_0\,\sigma^{n-1}_e\,w^{n-1}_1\,\sigma^{n-1}\in L(\ensuremath{\mathit{NP}}\xspace)$. After that, the execution of the remaining $\ensuremath{\mathit{tick}}\xspace^{j-N_c}$ in $G'$ will be synchronized with the execution of $w_{j-N_c}$ in $G$, resulting in $w^{n-1}_0\,\sigma^{n-1}_e\,w^{n-1}_1\,\sigma^{n-1}\,v_1\in L(\ensuremath{\mathit{NP}}\xspace)$ where $P_{\Sigma_G}(v_1)=w_{j-N_c}$. Then, $\sigma^n_e$ occurs in $\ensuremath{\mathit{NP}}\xspace$ as it is enabled in $G'$, adding $(\sigma^n,N_c)$ to $l$. Finally, the execution of $w_{N_c}$ in $G$ results in $w^{n-1}_0\sigma^{n-1}_ew^{n-1}_1\sigma^{n-1}v_1\sigma^n_ev_2\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\Sigma_G}(v_2)=w_{N_c}$. As $N_c$ \ensuremath{\mathit{ticks}}\xspace have passed, $head(l)=(\sigma^n,0)$, and $\sigma^n$ occurs in $\ensuremath{\mathit{NP}}\xspace$. Hence, $w^n_0\,\sigma^n_e\,w^n_1\,\sigma^n\in L(\ensuremath{\mathit{NP}}\xspace)$ for $w^n_0=w^{n-1}_0\,\sigma^{n-1}_e\,w^{n-1}_1\,\sigma^{n-1}\,v_1$ and $w^n_1=v_2$ where $P_{\Sigma_G}(w^n_0\,w^n_1)=w_n$ and $|P_{\{\ensuremath{\mathit{tick}}\xspace\}}(w^n_1)|=N_c$. $j=N_c$. Then, after the execution of $w^{n-1}_0\sigma^{n-1}_e$ in $\ensuremath{\mathit{NP}}\xspace$, $v_1$ occurs in $\ensuremath{\mathit{NP}}\xspace$ corresponding to the execution of $w^{n-1}_{N_c}$ in $G$ and $\ensuremath{\mathit{tick}}\xspace^{N_c}$ in $G'$.
At this point, $\sigma^{n-1}$ is enabled in $G$ and $head(l)=(\sigma^{n-1},0)$. Also, $\sigma^n$ is enabled in $G'$. Therefore, either $\sigma^n_e\sigma^{n-1}$ or $\sigma^{n-1}\sigma^n_e$ occurs in $\ensuremath{\mathit{NP}}\xspace$, in both cases followed by the execution of some $v_2$ in $\ensuremath{\mathit{NP}}\xspace$ such that $P_{\Sigma_G}(v_2)=w$. As $N_c$ \ensuremath{\mathit{ticks}}\xspace have passed ($head(l)=(\sigma^n,0)$), and $\sigma^n$ is enabled in $G$, $\sigma^n$ occurs in $\ensuremath{\mathit{NP}}\xspace$. This results in one of the following words: $w^{n-1}_0\sigma^{n-1}_ev_1\sigma^{n-1}\sigma^n_ev_2\sigma^n\in L(\ensuremath{\mathit{NP}}\xspace)$ or $w^{n-1}_0\sigma^{n-1}_ev_1\sigma^n_e\sigma^{n-1}v_2\sigma^n\in L(\ensuremath{\mathit{NP}}\xspace)$, where the statement holds in both cases as already discussed in the previous items. \textbf{Conclusion:} By the principle of induction, the statement holds for all $\sigma\in\Sigma_c$ and $w\in\Sigma^*_G$ with $w\sigma\in L(G)$. \hfill $\blacksquare$ \end{proof} \begin{lemma}[NSP and NP] \label{lemma:m,l} Consider a plant $G$, a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ (for that plant), the observation channel $M$, and the control channel $L$. The networked plant $\ensuremath{\mathit{NP}}\xspace$ has the set of states $X$ and the networked supervised plant $\ensuremath{\mathit{NSP}}\xspace$ has the set of states $Z$. Then, for any pair of $x\in X$ and $z\in Z$ reachable through the same $w\in \Sigma^*_{\mathit{NSP}}$: $x.m=z.m$ and $x.l=z.l$. \end{lemma} \begin{proof} Take $x\in X$, $z\in Z$, and $w\in \Sigma^*_{\mathit{NSP}}$ such that $x=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w)$ and $z=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w)$. By induction on the structure of $w$, it is proved that $x.m=z.m$ and $x.l=z.l$. \textbf{Base case:} Assume $w=\epsilon$. Then, $x.m=x_0.m=[]$ ($x.l=x_0.l=\epsilon$), and $z.m=z_0.m=[]$ ($z.l=z_0.l=\epsilon$). Therefore, $x.m=z.m$ and $x.l=z.l$.
\textbf{Induction step:} Assume $w=v\sigma$ where the statement holds for $v\in \Sigma^*_{\mathit{NSP}}$ and the intermediate states reached by $v$, so that $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l$. It suffices to prove that the statement holds for $v\sigma$, i.e., $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v\sigma).m=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).m$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v\sigma).l=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).l$. Considering Definition \ref{dfn:NP} and Definition \ref{dfn:operator}, in both operators, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ ($\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$) changes by the execution of $\sigma\in\Sigma_c\cup\ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{\ensuremath{\mathit{tick}}\xspace\}\cup\Sigma_o$ (item 2), item 3), item 4), and item 5)), and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l$ ($\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l$) changes by the execution of $\sigma\in\Sigma_e\cup \Sigma_c\cup\{\ensuremath{\mathit{tick}}\xspace\}$ (item 1), item 2), and item 4)) in a similar way. Therefore, starting from $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v)$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v)$ with $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ ($\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l$), the execution of the same event $\sigma$ results in $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v\sigma).m=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).m$ ($\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v\sigma).l=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).l$).
\textbf{Conclusion:} By the principle of induction, the statement ($x.m=z.m$ and $x.l=z.l$) holds for all $w\in \Sigma^*_{\mathit{NSP}}$, $x=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w)$ and $z=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w)$. \hfill$\blacksquare$ \end{proof} \begin{lemma}[\ensuremath{\mathit{NSP}}\xspace and Product] \label{lemma:product} Given a plant $G$ and a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ with event set $\ensuremath{\Sigma_{\mathit{NS}}}\xspace$: if $L(\ensuremath{\mathit{NS}}\xspace)\subseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$, then $L(\ensuremath{\mathit{NSP}}\xspace)=L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. \end{lemma} \begin{proof} This is proved in two steps: 1) for any $w\in L(\ensuremath{\mathit{NSP}}\xspace)$, $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$; and 2) for any $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$, $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. 1) Take $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. By induction on the structure of $w$, it is proved that $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. \textbf{Base case:} Assume that $w=\epsilon$. Then, $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$ by definition. \textbf{Induction step:} Let $w=v\sigma$ for some $v\in \Sigma^*_{\mathit{NSP}}$ and $\sigma\in \ensuremath{\Sigma_{\mathit{NSP}}}\xspace$ where the statement holds for $v$, i.e., $v\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. It suffices to prove that the statement holds for $v\sigma$, i.e., $v\sigma\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$.
Due to Lemma \ref{lemma:NSP}, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v))$, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).y,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(\sigma))$, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\delta_G(a_0,P_{\Sigma_G}(v))$, and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v\sigma).a=\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,P_{\Sigma_G}(\sigma))$. Due to the definition of synchronous product (in \cite{Cassandras:99}), since $\ensuremath{\Sigma_{\mathit{NS}}}\xspace\subseteq\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$, one can say that $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$ if and only if $w\in L(\ensuremath{\mathit{NP}}\xspace)$ and $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\in L(\ensuremath{\mathit{NS}}\xspace)$. For $v\sigma\in L(\ensuremath{\mathit{NSP}}\xspace)$, it has already been shown that $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma)\in L(\ensuremath{\mathit{NS}}\xspace)$, and so it suffices to prove $v\sigma\in L(\ensuremath{\mathit{NP}}\xspace)$. For $v\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$: $v\in L(\ensuremath{\mathit{NP}}\xspace)$ (since $\ensuremath{\Sigma_{\mathit{NS}}}\xspace\subseteq\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$). Then, due to Lemma \ref{lemma:NP}, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\delta_G(a_0,P_{\Sigma_G}(v))=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a$.
Moreover, both $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v)$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v)$ are reachable through $v$, and so due to Lemma \ref{lemma:m,l}, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l$. Since $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma)\in L(\ensuremath{\mathit{NS}}\xspace)$ (for $v\sigma\in L(\ensuremath{\mathit{NSP}}\xspace)$, $\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma))!$ due to Lemma \ref{lemma:NSP}), and $L(\ensuremath{\mathit{NS}}\xspace)\subseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$, one can say that there exists $w'\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w')=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v\sigma)$. Without loss of generality, assume $w'=v'P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(\sigma)$ where $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)$. Let us complete the proof for different cases of $\sigma\in\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$. $\sigma\in\Sigma_e$. Then, $\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a',\sigma)!$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a'=$ $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a'$ and $\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a',\sigma)!$ ($P_{\Sigma_e\cup\{\ensuremath{\mathit{tick}}\xspace\}}(v)=P_{\Sigma_e\cup\{\ensuremath{\mathit{tick}}\xspace\}}(v')$, and due to Definition \ref{dfn:NP}, $x.a'$ changes by $w\in (\Sigma_e\cup\{\ensuremath{\mathit{tick}}\xspace\})^*$). So, due to item 1), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v),\sigma)!$. $\sigma\in\Sigma_{c}$.
Then, $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$. Also, the condition $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l)=(\sigma,0)$ is satisfied since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l$ and $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l)=(\sigma,0)$ (considering Definition \ref{dfn:operator}-item 2), $\sigma$ can occur only if $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l)=(\sigma,0)$). So, due to Definition \ref{dfn:NP}-item 2), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v),\sigma)!$. \textbf{Case $\sigma\in\ensuremath{\Sigma_{\mathit{uc}}}\xspace$:} $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$. So, based on Definition \ref{dfn:NP}-item 3), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v),\sigma)!$. \textbf{Case $\sigma=\ensuremath{\mathit{tick}}\xspace$:} $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$. In addition, $\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a',\sigma)!$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a'=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a'$ and $\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a',\sigma)!$.
Also, $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ for all $\sigma\in \Sigma_a$ since $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ and $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ (considering Definition \ref{dfn:operator}-item 4), \ensuremath{\mathit{tick}}\xspace can occur only if $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$). Therefore, based on Definition \ref{dfn:NP}-item 4), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v),\sigma)!$. \textbf{Case $\sigma\in \Sigma_o$:} $(\sigma,0)\in \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ because $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m=\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ and $(\sigma,0)\in \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ (due to Definition \ref{dfn:operator}-item 5)). So, due to Definition \ref{dfn:NP}-item 5), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v),\sigma)!$. \textbf{Conclusion:} By the principle of induction, $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$ holds for any $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. 2) Take $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$; we prove by induction on the structure of $w$ that $w\in L(\ensuremath{\mathit{NSP}}\xspace)$. \textbf{Base case:} Assume that $w=\epsilon \in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. Then, $w\in L(\ensuremath{\mathit{NSP}}\xspace)$ by definition. \textbf{Induction step:} Let $w=v\sigma\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$ where the statement holds for $v$, i.e., $v\in L(\ensuremath{\mathit{NSP}}\xspace)$. It suffices to prove that the statement holds for $v\sigma$, i.e., $v\sigma\in L(\ensuremath{\mathit{NSP}}\xspace)$.
Due to the definition of the synchronous product (in \cite{Cassandras:99}) and since $\ensuremath{\Sigma_{\mathit{NS}}}\xspace\subseteq\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$, a word $w$ satisfies $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$ if and only if $w\in L(\ensuremath{\mathit{NP}}\xspace)$ and $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\in L(\ensuremath{\mathit{NS}}\xspace)$. Due to Lemma \ref{lemma:NP}, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a=\delta_G(a_0,P_{\Sigma_G}(v))$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v\sigma).a=\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)$. Also, due to Lemma \ref{lemma:NSP}, for $v\in L(\ensuremath{\mathit{NSP}}\xspace)$, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v))$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\delta_G(a_0,P_{\Sigma_G}(v))$. Moreover, since both $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v)$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v)$ are reachable through $v$, Lemma \ref{lemma:m,l} gives $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l$. Now, for the different cases of $\sigma\in\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$, we prove that $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Case $\sigma\in\Sigma_e$:} Based on the assumption, $\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)\sigma)!$, and so considering Definition \ref{dfn:operator}-item 1), $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Case $\sigma\in\Sigma_{c}$:}
Then, $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$. Also, the condition $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l)=(\sigma,0)$ is satisfied since $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).l=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l$ and $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).l)=(\sigma,0)$ (based on Definition \ref{dfn:NP}-item 2)). Hence, considering Definition \ref{dfn:operator}-item 2), $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Case $\sigma\in\ensuremath{\Sigma_{\mathit{uc}}}\xspace$:} $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$, and so considering Definition \ref{dfn:operator}-item 3), $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Case $\sigma=\ensuremath{\mathit{tick}}\xspace$:} $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a,\sigma)!$ since $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).a=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a$ and $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).a,\sigma)!$. Also, $\ensuremath{\delta_{\mathit{NS}}}\xspace(\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)),\sigma)!$ due to the assumption.
Moreover, $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ for all $\sigma\in \Sigma_a$ since $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ (based on Definition \ref{dfn:NP}-item 4)) and $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$. Therefore, based on Definition \ref{dfn:operator}-item 4), $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Case $\sigma\in \Sigma_o$:} $(\sigma,0)\in \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m$ because $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ and $(\sigma,0)\in \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v).m$ (due to Definition \ref{dfn:NP}-item 5)). Moreover, $\ensuremath{\delta_{\mathit{NS}}}\xspace(\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)),\sigma)!$ based on the assumption. So, due to Definition \ref{dfn:operator}-item 5), $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,v),\sigma)!$. \textbf{Conclusion:} By the principle of induction, $w\in L(\ensuremath{\mathit{NSP}}\xspace)$ holds for any $w\in L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. \hfill$\blacksquare$ \end{proof} \begin{corollary}[Lemma \ref{lemma:product}] \label{corollary:product} Given a plant $G$ and a networked supervisor $\ensuremath{\mathit{NS}}\xspace$ with event set $\ensuremath{\Sigma_{\mathit{NS}}}\xspace$ such that $L(\ensuremath{\mathit{NS}}\xspace)\subseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$: \begin{enumerate} \item $L(\ensuremath{\mathit{NSP}}\xspace)\subseteq L(\ensuremath{\mathit{NP}}\xspace)$, and \item $L_m(\ensuremath{\mathit{NSP}}\xspace)\subseteq L_m(\ensuremath{\mathit{NP}}\xspace)$.
\end{enumerate} \end{corollary} \begin{proof} This clearly holds since, due to Lemma \ref{lemma:product}, $\ensuremath{\mathit{NSP}}\xspace=\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace$ and $\ensuremath{\Sigma_{\mathit{NS}}}\xspace\subseteq\ensuremath{\Sigma_{\mathit{NSP}}}\xspace$. \hfill$\blacksquare$ \end{proof} \begin{lemma}[Finite NP] \label{lemma:finiteNP} Given a plant $G$ with a set of states $A$ and a set of events $\Sigma_G$: $\ensuremath{\mathit{NP}}\xspace$ is a finite automaton. \end{lemma} \begin{proof} We need to prove that $\ensuremath{\mathit{NP}}\xspace$ has a finite set of states and a finite set of events. Considering Definition \ref{dfn:NP}, $\ensuremath{\mathit{NP}}\xspace$ has the set of states $X=A\times Q'\times A'\times M\times L$. To prove that $X$ is finite, it suffices to guarantee that $A, Q', A', M$ and $L$ are finite sets because, as proved in~\cite{SetTheory}, the Cartesian product of finite sets is finite. $A$ is finite as the plant is assumed to be given as a finite automaton. $A'$ is finite since each $a'\in A'$ satisfies $a'\subseteq A$, and $A$ is finite. $M$ and $L$ are finite as the size of every element of $M$ (respectively $L$) is bounded by the finite number $\ensuremath{M_{\mathit{max}}}\xspace$ (respectively $\ensuremath{L_{\mathit{max}}}\xspace$). Moreover, $\ensuremath{\Sigma_{\mathit{NSP}}}\xspace=\Sigma_e\cup\Sigma_o\cup\Sigma_G$ is finite: $\Sigma_G$ is finite since $G$ is a finite automaton, and $\Sigma_e$ and $\Sigma_o$ are finite since, due to Definition \ref{dfn:Sigmao&Sigmae}, the size of $\Sigma_e$ equals the size of $\Sigma_c$, and the size of $\Sigma_o$ equals the size of $\Sigma_a$. \hfill $\blacksquare$ \end{proof} \begin{lemma}[Nonblocking NS] \label{lemma:NBNS} The networked supervisor $\ensuremath{\mathit{NS}}\xspace$ synthesized by Algorithm \ref{algo} is nonblocking. \end{lemma} \begin{proof} Based on Property \ref{property:termination}, Algorithm \ref{algo} terminates, say after $n$ iterations.
Then, either $x_0\in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(n))$ or $\ensuremath{\mathit{BS}}\xspace(n)=\varnothing$, where $\ensuremath{\mathit{BS}}\xspace(n)=\ensuremath{\mathit{BPre}}\xspace(\ensuremath{\mathit{NS}}\xspace(n))\cup\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{NS}}\xspace(n))\cup \ensuremath{\mathit{TLock}}\xspace(\ensuremath{\mathit{NS}}\xspace(n))$. In case $x_0\in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(n))$, the algorithm gives no result. Otherwise, the algorithm gives $\ensuremath{\mathit{NS}}\xspace=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(\ensuremath{\mathit{NS}}\xspace(n))$, where $\ensuremath{\mathit{NS}}\xspace(n)$ is nonblocking since $\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{NS}}\xspace(n))=\varnothing$. Moreover, due to Lemma \ref{lemma:projectionNB}, the projection preserves nonblockingness, and so $\ensuremath{\mathit{NS}}\xspace$ is nonblocking. \hfill$\blacksquare$ \end{proof} \begin{lemma}[TLF NS] \label{lemma:TLFNS} The networked supervisor $\ensuremath{\mathit{NS}}\xspace$ synthesized for a plant $G$ using Algorithm \ref{algo} is TLF. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lemma:NBNS}. \hfill$\blacksquare$ \end{proof} \section{Proofs of Properties and Theorems} \subsection{Proof of Property \ref{property:NSP&P}}\label{proof:NSP&P} It suffices to prove that $w\in L(G)$ for any $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. Take an arbitrary $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. Then, according to Definition \ref{dfn:proj}, $P_{\Sigma_G}(w')=w$ for some $w'\in L(\ensuremath{\mathit{NSP}}\xspace)$. Then, due to Lemma \ref{lemma:NSP}, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w').a=\delta_G(a_0,P_{\Sigma_G}(w'))$, meaning that $w \in L(G)$.
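The projection-based membership test for the synchronous product, used repeatedly in the proofs above, can be illustrated with a minimal sketch. The alphabets and languages below are toy assumptions for illustration only, not the actual $\ensuremath{\mathit{NS}}\xspace$ and $\ensuremath{\mathit{NP}}\xspace$ of this paper.

```python
# Characterization used in the proofs: for event sets Sigma_NS subset of
# Sigma_NSP, a word w is in L(NS || NP) iff w is in L(NP) and its natural
# projection P_{Sigma_NS}(w) is in L(NS).

def project(word, sigma):
    """Natural projection: erase all events not in sigma."""
    return tuple(e for e in word if e in sigma)

# Toy (hypothetical) alphabets and finite languages, given as explicit sets.
SIGMA_NS = {"a", "tick"}                               # supervisor alphabet
L_NP = {(), ("a",), ("a", "b"), ("a", "b", "tick")}    # plant-side language
L_NS = {(), ("a",), ("a", "tick")}                     # supervisor language

def in_product(word):
    """Membership in L(NS || NP) via the projection characterization."""
    return word in L_NP and project(word, SIGMA_NS) in L_NS

print(in_product(("a", "b")))          # projects to ("a",), which is in L_NS
print(in_product(("b",)))              # not even in L_NP
```

Note how an event outside $\ensuremath{\Sigma_{\mathit{NS}}}\xspace$ (here `"b"`) is invisible to the supervisor after projection, which is exactly why the proofs only need to track $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v)$ on the supervisor side.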
\subsection{Proof of Property \ref{property:NPLE}}\label{proof:NPLE} The proof consists of two parts: 1) for any $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$: $w\in L(G)$. This is proved by induction on the structure of $w$. \textbf{Base case:} Assume $w=\epsilon$. Then, $w\in L(G)$ by definition. \textbf{Induction step:} Assume that $w=v\sigma$ for some $v\in \Sigma^*_G$ and $\sigma\in\Sigma_G$ where the statement holds for $v$, i.e., $v\in L(G)$. It suffices to prove that the statement holds for $v\sigma$, i.e., $v\sigma \in L(G)$. Due to the projection properties, for $v\sigma\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$, there exists $v'\in L(\ensuremath{\mathit{NP}}\xspace)$ such that $P_{\Sigma_G}(v')=v\sigma$. Without loss of generality, say $v'=v''\sigma$ where $P_{\Sigma_G}(v'')=v$. Then, due to Lemma \ref{lemma:NP}, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'').a=\delta_G(a_0,P_{\Sigma_G}(v''))=\delta_G(a_0,v)$, and $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v''),\sigma).a=\delta_G(a_0,P_{\Sigma_G}(v''\sigma))=\delta_G(\delta_G(a_0,v),\sigma)$. So, $\delta_G(\delta_G(a_0,v),\sigma)!$ and the statement holds for $v\sigma$. \textbf{Conclusion:} By the principle of induction, the statement $w\in L(G)$ holds for all $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. 2) If $max_c\leq \ensuremath{L_{\mathit{max}}}\xspace$, then for any $w\in L(G)$: $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. This is proved by induction on the structure of $w$. \textbf{Base case:} Assume $w=\epsilon$. Then, $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$ by definition. \textbf{Induction step:} Assume that $w=v\sigma$ for some $v\in L(G)$ and $\sigma\in\Sigma_G$ where the statement holds for $v$, i.e., $v\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. It suffices to prove that $v\sigma \in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$.
For $v\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$, due to the projection properties, there exists $v'\in L(\ensuremath{\mathit{NP}}\xspace)$ such that $P_{\Sigma_G}(v')=v$. Considering Definition \ref{dfn:NP}, one of the following cases may occur at $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v')$. \textbf{Case $\sigma\in \ensuremath{\Sigma_{\mathit{uc}}}\xspace$:} Due to item 3), $\ensuremath{\delta_{\mathit{NP}}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'),\sigma)!$ because $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a,\sigma)!$. Applying the projection to $v'\sigma\in L(\ensuremath{\mathit{NP}}\xspace)$ results in $v\sigma\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. \textbf{Case $\sigma\in\Sigma_c$:} Then $(\sigma,0)\in \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').l$ since, due to Lemma \ref{lemma:ontime}, $\sigma_e$ was enabled in $\ensuremath{\mathit{NP}}\xspace$ $N_c$ \ensuremath{\mathit{ticks}}\xspace earlier. When $\sigma_e$ occurred, based on item 1), $(\sigma,N_c)$ was certainly put in $l$ as Assumption 2 holds. Each of these $N_c$ \ensuremath{\mathit{ticks}}\xspace decrements the counters in $l$ by one, as item 4) states. Also, the control channel is FIFO ($l$ is a list), so even if a sequence of events has been enabled simultaneously, the ordering is preserved in $l$. Consequently, $\ensuremath{\mathit{head}}\xspace(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').l)=(\sigma,0)$ and $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a,\sigma)!$ as assumed. So, due to item 2), $v'\sigma\in L(\ensuremath{\mathit{NP}}\xspace)$, and $v\sigma\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. \textbf{Case $\sigma=\ensuremath{\mathit{tick}}\xspace$:} Let us first empty $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').m$ of any $(\sigma',0)$ by executing $v_o\in \Sigma^*_o$. Then, $(\sigma',0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o).m$.
Also, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o).a=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a$ since the execution of observed events only changes $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').m$. $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v').a,\ensuremath{\mathit{tick}}\xspace)!$ due to the assumption, and so $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o).a,\ensuremath{\mathit{tick}}\xspace)!$. Now, as the worst case, assume that at $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o).a'$ only a sequence $v_c\in \Sigma^{*}_c$ is enabled, after which either \ensuremath{\mathit{tick}}\xspace occurs or nothing is enabled. Based on item 4) and item 1), only the corresponding sequence of enabling events $v_{c_e}$ is then executed at $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o)$. We obtain $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o\,v_{c_e}).a,\ensuremath{\mathit{tick}}\xspace)!$, $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o\,v_{c_e}).m$, and $\neg\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v'\,v_o\,v_{c_e}).a',\sigma')!$ for all $\sigma,\sigma'\in\Sigma_a$. So, based on item 4), $v'\,v_o\,v_{c_e}\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NP}}\xspace)$, and so $v\,\ensuremath{\mathit{tick}}\xspace\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$. \textbf{Conclusion:} By the principle of induction, the statement $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))$ holds for all $w\in L(G)$. \subsection{Proof of Property \ref{property:termination}}\label{proof:termination} Algorithm \ref{algo} terminates if at some iteration $i$, $y_0\in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$ or $\ensuremath{\mathit{bs}}\xspace(i)=\varnothing$.
At each iteration $i$, $\ensuremath{\mathit{bs}}\xspace(i)\subseteq Y$: initially, $\ensuremath{\mathit{bs}}\xspace(0)=\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))$ where $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))=\ensuremath{\mathit{BLock}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))\cup\ensuremath{\mathit{TLock}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))$, and so $\ensuremath{\mathit{bs}}\xspace(0)\subseteq Y$ by definition. Also, $\ensuremath{\mathit{bs}}\xspace(i)$ is updated at line \ref{line:BPre} to $\ensuremath{\mathit{BPre}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))\cup\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))$, where $\ensuremath{\mathit{BPre}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))\subseteq Y$ and $\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))\subseteq Y$ by definition, and so $\ensuremath{\mathit{bs}}\xspace(i)\subseteq Y$. Since $Y$ is a finite set, it suffices to prove that at each iteration at least one state is removed from $Y$; then the algorithm is guaranteed to loop only finitely often. So, suppose $y_0\notin \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$ and $\ensuremath{\mathit{bs}}\xspace(i)\neq\varnothing$ (because otherwise the algorithm terminates immediately). Then, there exists some state $y'\in \ensuremath{\mathit{bs}}\xspace(i)$, which by definition gives $y'\in \ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{bs}}\xspace(i))$. Also, since at the end of each iteration the automaton is made reachable (line \ref{line:reachable}), $y'$ is reachable from $y_0$ (possibly through some intermediate states). According to line \ref{line:removeUncon}, at least $y'$ is removed from $Y$, and so the algorithm terminates.
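The termination argument above can be sketched with a minimal fixpoint loop. The transition structure, the bad-state set, and the uncontrollable-predecessor test below are toy assumptions standing in for $\ensuremath{\mathit{BS}}\xspace$, $\ensuremath{\mathit{Uncon}}\xspace$, and $\ensuremath{\mathit{BPre}}\xspace$; the point is only that each pass strictly shrinks a finite state set, so the loop runs at most $|Y|$ times.

```python
def synthesize(states, initial, bad_states, u_edges):
    """Iteratively remove bad states and states reaching them uncontrollably.

    u_edges[s] is the set of states reachable from s by an uncontrollable
    event.  Returns the surviving state set, or None when the initial state
    itself becomes bad (no proper supervisor exists).
    """
    alive = set(states)
    bs = set(bad_states) & alive
    while bs:                          # terminates: each pass shrinks `alive`
        if initial in bs:
            return None
        alive -= bs                    # at least one state leaves `alive`
        # States with an uncontrollable edge into a removed state become bad.
        bs = {s for s in alive if u_edges.get(s, set()) - alive}
    return alive

# Toy plant: state 4 is blocking, and 3 reaches it via an uncontrollable edge.
print(sorted(synthesize({0, 1, 2, 3, 4}, 0, {4}, {3: {4}})))
```

Here states 4 and then 3 are pruned in two passes, leaving `{0, 1, 2}`; with the bad set containing the initial state, the function returns `None`, mirroring the "algorithm gives no result" branch.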
\subsection{Proof of Theorem \ref{theorem:nonblockingness}}\label{proof:NBness} We need to prove that for all $z\in Reach(z_0)$, there exists a $w\in \Sigma^*_{NSP}$ such that $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z,w)\in Z_m$. Take $z\in Reach(z_0)$, and assume that $z$ is reachable from $z_0$ via $w_0\in \Sigma^*_{NSP}$, i.e., $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w_0)=z$. Then, due to Lemma \ref{lemma:NSP}, $z.y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0))$. Due to Lemma \ref{lemma:NBNS}, for $z.y\in Reach(y_0)$, there exists some $v\in\Sigma^*_{NS}$ such that $\ensuremath{\delta_{\mathit{NS}}}\xspace(z.y,v)\in Y_m$. Moreover, due to line \ref{line:reachable} of Algorithm \ref{algo}, $L(\ensuremath{\mathit{ns}}\xspace(i))\subseteq L(\ensuremath{\mathit{ns}}\xspace(i-1))$, and $\ensuremath{\mathit{ns}}\xspace(0)=NP$. Hence, $L(\ensuremath{\mathit{NS}}\xspace)\subseteq L(P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(NP))$ (line \ref{line:project} of Algorithm \ref{algo}). Then, due to the projection properties, for $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0)v\in L(\ensuremath{\mathit{NS}}\xspace)$, there exists some $w'\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w')=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0)v$ such that $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w')\in X_m$ (due to the projection properties, any state $y$ is marked only if $y \cap X_m \neq \varnothing$). Without loss of generality, assume that $w'=w'_0w'_1$ for some $w'_0, w'_1 \in \Sigma^*_{\mathit{NSP}}$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w'_0)=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0)$ and $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w'_1)=v$.
Let $x'_1\in X$ be such that $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w'_0)=x'_1$; then $\ensuremath{\delta_{\mathit{NP}}}\xspace(x'_1,w'_1)\in X_m$. Moreover, due to Corollary \ref{corollary:product}, $w_0\in L(\ensuremath{\mathit{NP}}\xspace)$, and so $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w_0)=x_1$ for some $x_1\in X$. So far, we have that $x_1$ and $x'_1$ are reachable from $x_0$ via $w_0$ and $w'_0$, respectively, where $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0)=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w'_0)$. Therefore, $x_1$ is observationally equivalent to $x'_1$. Then, $x_1\notin\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i)))$ at any iteration $i$, because otherwise $x'_1\in \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i))))$ and $w'_0$ would become undefined ($y_0\in Y\setminus\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i)))$, and so there exists at least one controllable event leading $x_0$ to $x'_1$ that becomes undefined). The same holds for all other states observationally equivalent to $x_1$ (because otherwise $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0)\notin L(\ensuremath{\mathit{NS}}\xspace)$, which contradicts the assumption). Therefore, $x_1\notin\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(i)))$ at any iteration $i$ of the algorithm. So, at each iteration $i$, there exists a $w\in\Sigma^*_{\ensuremath{\mathit{NSP}}\xspace}$ leading $x_1$ to a marked state which does not become undefined; if it did, then $x_1\in\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NS}}\xspace(i+1)))$, which is a contradiction.
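The control-channel bookkeeping from Definition \ref{dfn:NP}, on which the arguments about the FIFO list $l$ in these proofs rely, can be sketched as follows. The delay value $N_c=2$ and the event names are illustrative assumptions; the sketch only mirrors the mechanism: an enabling event appends $(\sigma,N_c)$ to $l$, every \ensuremath{\mathit{tick}}\xspace decrements all counters, and $\sigma$ may execute only when $\ensuremath{\mathit{head}}\xspace(l)=(\sigma,0)$.

```python
from collections import deque

N_C = 2  # assumed communication delay, in ticks

class ControlChannel:
    """FIFO list l of (event, counter) pairs, as in Definition dfn:NP."""

    def __init__(self):
        self.l = deque()

    def enable(self, sigma):
        """Enabling event sigma_e occurred: append (sigma, N_c) to l."""
        self.l.append([sigma, N_C])

    def tick(self):
        """One tick: decrement every counter in l."""
        for entry in self.l:
            entry[1] -= 1

    def can_execute(self, sigma):
        """sigma may occur only when head(l) == (sigma, 0)."""
        return bool(self.l) and self.l[0] == [sigma, 0]

    def execute(self, sigma):
        assert self.can_execute(sigma)
        self.l.popleft()

ch = ControlChannel()
ch.enable("a"); ch.enable("b")   # simultaneous enabling: order preserved in l
ch.tick(); ch.tick()             # after N_c ticks both counters reach 0 ...
print(ch.can_execute("b"))       # ... but "b" must wait: "a" is the head
print(ch.can_execute("a"))
```

The FIFO discipline is what licenses the claim in the proofs that, even when several events are enabled simultaneously, their ordering is preserved in $l$ and the head test selects the right one.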
\subsection{Proof of Theorem \ref{theorem:TLF}}\label{proof:TLF} We need to prove that for all $z\in Reach(z_0)$, there exists a $w\in \Sigma^*_{\ensuremath{\mathit{NSP}}\xspace}$ such that $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z,w\,\ensuremath{\mathit{tick}}\xspace)!$. Take $z\in Reach(z_0)$, and assume $z$ is reachable from $z_0$ via $w_0\in \Sigma^*_{NSP}$, i.e., $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w_0)=z$. Then, due to Lemma \ref{lemma:NSP}, $z.a=\delta_G(a_0,P_{\Sigma_G}(w_0))$ and $z.y=\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w_0))$. Based on Definition \ref{dfn:operator}, we need to find $w\in \Sigma^*_{NSP}$ such that $\delta_G(z.a,P_{\Sigma_G}(w)\,\ensuremath{\mathit{tick}}\xspace)!$, $\ensuremath{\delta_{\mathit{NS}}}\xspace(z.y,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\,\ensuremath{\mathit{tick}}\xspace)!$, and $(\sigma,0)\notin m$ for all $\sigma\in \Sigma_a$. As guaranteed by Lemma \ref{lemma:TLFNS}, $\ensuremath{\mathit{NS}}\xspace$ is TLF, and so for $z.y\in Reach (y_0)$, there exists $v\in\Sigma^*_{NS}$ such that $\ensuremath{\delta_{\mathit{NS}}}\xspace(z.y,v\,\ensuremath{\mathit{tick}}\xspace)!$. Also, $L(\ensuremath{\mathit{NS}}\xspace)\subseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$ (as stated before), and so from the projection properties, there exists $v'\in \Sigma^*_{NSP}$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')=v$ such that $\ensuremath{\delta_{\mathit{NP}}}\xspace(x,v'\,\ensuremath{\mathit{tick}}\xspace)!$. Let us take $w=v'$, for which we already know $\ensuremath{\delta_{\mathit{NS}}}\xspace(z.y,P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\,\ensuremath{\mathit{tick}}\xspace)!$. Also, $(\sigma,0)\notin m$ for all $\sigma\in \Sigma_a$, because otherwise Definition \ref{dfn:NP}-item 4) could not be satisfied. It now suffices to prove $\delta_G(z.a,P_{\Sigma_G}(w)\,\ensuremath{\mathit{tick}}\xspace)!$.
As Property \ref{property:NPLE} says, $P_{\Sigma_G}(L(\ensuremath{\mathit{NP}}\xspace))\subseteq L(G)$, and so $P_{\Sigma_G}(w)\,\ensuremath{\mathit{tick}}\xspace\in L(G)$ for $w\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NP}}\xspace)$. \subsection{Proof of Theorem \ref{theorem:controllability}}\label{proof:controllability} We need to prove the following: take any $w\in L(\ensuremath{\mathit{NSP}}\xspace)$ and $u\in \ensuremath{\Sigma_{\mathit{uc}}}\xspace\cup\{tick\}$ such that $P_{\Sigma_G}(w)u\in L(G)$; then $wu\in L(\ensuremath{\mathit{NSP}}\xspace)$ for $u\in\ensuremath{\Sigma_{\mathit{uc}}}\xspace$, and also for $u=\ensuremath{\mathit{tick}}\xspace$ when there does not exist any $\sigma_f\in \ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace\cup\Sigma_o$ such that $w\sigma_f\in L(\ensuremath{\mathit{NSP}}\xspace)$. Take $w\in L(\ensuremath{\mathit{NSP}}\xspace)$ and $u\in\ensuremath{\Sigma_{\mathit{uc}}}\xspace$. From Lemma \ref{lemma:NSP}, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a=\delta_G(a_0,P_{\Sigma_G}(w))$. Based on Definition \ref{dfn:operator}-item 3), $u$ occurs only if it is enabled by $G$. So, $\ensuremath{\delta_{\mathit{NSP}}}\xspace(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w),u)!$ since $\delta_G(\ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w).a,u)!$ due to the assumption. Now take $u=\ensuremath{\mathit{tick}}\xspace$ where $\nexists_{\sigma\in\ensuremath{\hat{\Sigma}_{\mathit{for}}}\xspace\cup\Sigma_o}~w\sigma\in L(\ensuremath{\mathit{NSP}}\xspace)$. Considering Definition \ref{dfn:operator}-item 4), $\ensuremath{\mathit{tick}}\xspace$ occurs in $\ensuremath{\mathit{NSP}}\xspace$ after $w$ if the following conditions hold: 1. $P_{\Sigma_G}(w)\,\ensuremath{\mathit{tick}}\xspace \in L(G)$, 2. $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NS}}\xspace)$, and 3. $\nexists \sigma\in\Sigma_o, \ensuremath{\delta_{\mathit{NSP}}}\xspace(z_0,w\sigma)!$.
The first and the last conditions hold based on the assumption. So, we only need to prove $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NS}}\xspace)$. Due to Corollary \ref{corollary:product}, for $w\in L(\ensuremath{\mathit{NSP}}\xspace)$: $w\in L(\ensuremath{\mathit{NP}}\xspace)$. Due to Property \ref{property:NPLE}, for $P_{\Sigma_G}(w)\,\ensuremath{\mathit{tick}}\xspace \in L(G)$, there exists $w'\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\Sigma_G}(w')=P_{\Sigma_G}(w)$ such that $w'\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NP}}\xspace)$. Considering Definition \ref{dfn:NP}, $w\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NP}}\xspace)$ for the following reasons: 1. $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).a=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w').a$ and $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).a'=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w').a'$ since $P_{\Sigma_G}(w')=P_{\Sigma_G}(w)$. Hence, $\delta_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).a,\ensuremath{\mathit{tick}}\xspace)!$ and $\delta'_G(\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).a',\ensuremath{\mathit{tick}}\xspace)!$ (since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w'\,\ensuremath{\mathit{tick}}\xspace)!$). 2. $m\in M$ changes only by the execution of $\sigma\in\Sigma_G$. So, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).m=\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w').m$ since $P_{\Sigma_G}(w')=P_{\Sigma_G}(w)$. Also, $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w').m$ for any $\sigma\in\Sigma_a$ since $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w'\,\ensuremath{\mathit{tick}}\xspace)!$, and so $(\sigma,0)\notin \ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w).m$ for any $\sigma\in\Sigma_a$.
Due to the assumption, $w\sigma\notin L(\ensuremath{\mathit{NSP}}\xspace)$ for $\sigma\in\ensuremath{\Sigma_{\mathit{for}}}\xspace\cup\Sigma_e\cup\Sigma_o$. Also, due to Lemma \ref{lemma:product}, $\ensuremath{\mathit{NSP}}\xspace=\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace$. In case $w\sigma\in L(\ensuremath{\mathit{NP}}\xspace)$ for some $\sigma\in\ensuremath{\Sigma_{\mathit{for}}}\xspace\cup\Sigma_o$, then, due to line \ref{line:ctrl} of Algorithm \ref{algo}, it cannot be disabled by $\ensuremath{\mathit{NS}}\xspace$. Also, if $w\sigma\in L(\ensuremath{\mathit{NP}}\xspace)$ for some $\sigma\in\Sigma_e$ where both $\ensuremath{\mathit{tick}}\xspace$ and $\sigma$ become disabled by $\ensuremath{\mathit{NS}}\xspace$, then by definition, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,w)\in \ensuremath{\mathit{BPre}}\xspace(\ensuremath{\mathit{NS}}\xspace)$ and will be removed, which violates the assumption ($w\in L(\ensuremath{\mathit{NSP}}\xspace)$). Hence, $w\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NP}}\xspace)$ and \ensuremath{\mathit{tick}}\xspace does not become disabled by Algorithm \ref{algo}, and so $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)\,\ensuremath{\mathit{tick}}\xspace\in L(\ensuremath{\mathit{NS}}\xspace)$. \subsection{Proof of Theorem \ref{theorem:MPness}}\label{proof:MPness} To prove that $\ensuremath{\mathit{NS}}\xspace$ is (timed networked) maximally permissive for $G$, we need to ensure that for any other proper networked supervisor (say $\ensuremath{\mathit{NS}}\xspace'$) in the same NSC framework (with event set $\ensuremath{\Sigma_{\mathit{NS}}}\xspace$): $P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))\subseteq P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. First, assume that $L(\ensuremath{\mathit{NS}}\xspace')\nsubseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$.
Then any extra transition of $\ensuremath{\mathit{NS}}\xspace'$ that is not included in $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$ does not add any new transition to $P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))$. Say $v\sigma\in L(\ensuremath{\mathit{NS}}\xspace')$ and $v\in P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$, but $v\sigma\notin P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$ for $\sigma\in\ensuremath{\Sigma_{\mathit{NS}}}\xspace$. Also, there exists $w\in L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G)$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w)=v$. If $\sigma=\ensuremath{\mathit{tick}}\xspace$, then $\sigma$ cannot be executed in $\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G$ because, based on Definition \ref{dfn:operator}-item 4), $\ensuremath{\mathit{tick}}\xspace$ should be enabled by $G$, which is not the case: \ensuremath{\mathit{tick}}\xspace is not enabled in $\ensuremath{\mathit{NP}}\xspace$, and so, due to Property \ref{property:NPLE}, it is not enabled in $G$. If $\sigma\in\Sigma_o$, then it does not matter if $\sigma$ occurs in $\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G$ because it does not change $P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))$. If $\sigma\in\Sigma_e$, then, by Lemma \ref{lemma:ontime}, $\ensuremath{\mathit{NP}}\xspace$ enables all enabling events of $\Sigma_c$ that are executed in the plant on time ($N_c$ \ensuremath{\mathit{ticks}}\xspace ahead). So any extra enabling event issued by $\ensuremath{\mathit{NS}}\xspace'$ will not be executed by the plant, and so it does not enlarge $P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))$.
Therefore, we continue the proof for the case that $L(\ensuremath{\mathit{NS}}\xspace')\subseteq P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(L(\ensuremath{\mathit{NP}}\xspace))$ (where Lemma \ref{lemma:product} and Corollary \ref{corollary:product} hold for $NS'$). Take an arbitrary $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))$; it suffices to prove that $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. Write $\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G=(z'_0,\ensuremath{\Sigma_{\mathit{NSP}}}\xspace,\delta_{NS'P},Z'_m)$. For $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G))$, due to the projection properties, there exists $v'\in L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G)$ such that $P_{\Sigma_G}(v')=w$, where $\delta_{NS'P}(z'_0,v')$ is a TLF and non-blocking state ($NS'$ is proper by assumption). Also, any uncontrollable active event/non-preemptable \ensuremath{\mathit{tick}}\xspace enabled at $\delta_G(a_0,w)$ is enabled at $\delta_{NS'P}(z'_0,v')$, and it leads to a nonblocking and TLF state. Based on Lemma \ref{lemma:NSP}, $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')\in L(\ensuremath{\mathit{NS}}\xspace')$ for $v'\in L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G)$, and due to Corollary \ref{corollary:product}, $v'\in L(\ensuremath{\mathit{NP}}\xspace)$. Moreover, due to Lemma \ref{lemma:product}, $L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G)=L(\ensuremath{\mathit{NS}}\xspace'||\ensuremath{\mathit{NP}}\xspace)$, so, regarding the definition of the synchronous product, for any $w'\in L(\ensuremath{\mathit{NP}}\xspace)$ with $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(w')=P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')$: $w'\in L(\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G)$. $\delta_{NS'P}(z'_0,w')$ is a TLF and non-blocking state because $\ensuremath{\mathit{NS}}\xspace'_{N_c}\|_{N_o}\,G$ is nonblocking and TLF by assumption.
Also, any uncontrollable active event or non-preemptable \ensuremath{\mathit{tick}}\xspace enabled at $w'$ leads to a nonblocking and TLF state since $\ensuremath{\mathit{NS}}\xspace'$ is controllable for $G$ by assumption. Therefore, $\ensuremath{\delta_{\mathit{NP}}}\xspace(x_0,v')\notin \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{NP}}\xspace)))$. Considering Algorithm \ref{algo}, initially, $\ensuremath{\mathit{ns}}\xspace(0)=\ensuremath{\mathit{NP}}\xspace$, where $v'\in L(\ensuremath{\mathit{NP}}\xspace)$ and $\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,v')\notin \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(0))))$. The last statement holds for any iteration of the algorithm up to the last one (say $n$), so that $\ensuremath{\delta_{\mathit{NS}}}\xspace(y_0,v')\notin \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(n))))$, because otherwise all $y\in \ensuremath{\mathit{OBS}}\xspace(\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(n))))$ are removed (based on line \ref{line:obs.eq}), and so $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')\notin L(\ensuremath{\mathit{NS}}\xspace)$ because it leads $\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace$ ($\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace=\ensuremath{\mathit{ns}}\xspace(n)$) to a state in $\ensuremath{\mathit{Uncon}}\xspace(\ensuremath{\mathit{BS}}\xspace(\ensuremath{\mathit{ns}}\xspace(n)))$. Then, based on Lemma \ref{lemma:product}, $\ensuremath{\mathit{NSP}}\xspace$ becomes blocking, time-locked, or uncontrollable, which violates the assumption. Hence (considering line \ref{line:obs.eq}), $v'$ is not removed by Algorithm \ref{algo}, and so $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')\in L(\ensuremath{\mathit{NS}}\xspace)$.
Based on Lemma \ref{lemma:product}, $L(\ensuremath{\mathit{NSP}}\xspace)=L(\ensuremath{\mathit{NS}}\xspace||\ensuremath{\mathit{NP}}\xspace)$. Since $P_{\ensuremath{\Sigma_{\mathit{NS}}}\xspace}(v')\in L(\ensuremath{\mathit{NS}}\xspace)$ and $v'\in L(\ensuremath{\mathit{NP}}\xspace)$, we obtain $v'\in L(\ensuremath{\mathit{NSP}}\xspace)$; applying the projection on $\Sigma_G$ gives $w\in P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace))$. \subsection{Proof of Theorem \ref{theorem:safety}}\label{proof:safety} To simplify, let us denote $G||R^{\bot}$ by $G^{t}$, the networked plant $\Pi(G^{t},N_c,N_o,\ensuremath{L_{\mathit{max}}}\xspace,\ensuremath{M_{\mathit{max}}}\xspace)$ by $NP^{t}$, and the networked supervised plant $\ensuremath{\mathit{NS}}\xspace_{N_c}\|_{N_o}\,G^{t}$ by $\ensuremath{\mathit{NSP}}\xspace^{t}$. We need to prove that if we take any $w\in P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L(\ensuremath{\mathit{NSP}}\xspace^{t}))$, then $w\in P_{\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R}(L(R))$. In our setting, $\ensuremath{\Sigma_{\mathit{NSP}}}\xspace\cap\Sigma_R=\Sigma_R$ since $\ensuremath{\Sigma_{\mathit{NSP}}}\xspace=\Sigma_e\cup\Sigma_G\cup\Sigma_o$ and $\Sigma_R\subseteq\Sigma_G$. Hence, it suffices to prove that for any $w\in P_{\Sigma_R}(L(\ensuremath{\mathit{NSP}}\xspace^{t}))$: $w\in L(R)$. Take $w\in P_{\Sigma_R}(L(\ensuremath{\mathit{NSP}}\xspace^{t}))$; then, due to Definition \ref{dfn:proj}, there exists $w'\in L(\ensuremath{\mathit{NSP}}\xspace^{t})$ such that $P_{\Sigma_R}(w')=w$. Also, based on Property \ref{property:NSP&P}, $P_{\Sigma_G}(L(\ensuremath{\mathit{NSP}}\xspace^t))\subseteq L(G^t)$, and so $P_{\Sigma_G}(w')\in L(G||R^{\bot})$. Applying the projection on $\Sigma_R$ gives $P_{\Sigma_R}(w')\in L(R^{\bot})$.
For $w\in P_{\Sigma_R}(L(\ensuremath{\mathit{NSP}}\xspace^{t}))\cap L(R^{\bot})$, we have $w\in L(R)$, since the blocking state $q_d$ added to $G||R$ to obtain $G||R^{\bot}$ is removed by $\ensuremath{\mathit{NS}}\xspace$, as guaranteed by Theorem \ref{theorem:nonblockingness}. \bibliographystyle{IEEEtran}
\section{Model} \begin{itemize} \item The ultrathin film is composed of $N$ infinite monolayers; \item the interaction fields $h$ between the spin magnetic moments of the atoms are distributed randomly, and the interaction is realized only between nearest neighbors; \item the spin magnetic moments are oriented along an axis $Oz$ (Ising-model approximation) and are equal to $m_0$ in magnitude. \end{itemize} According to~\cite{Belokon2001}, the distribution function of the random interaction fields $h$ acting on a particle located at the origin (in the $n$-th monolayer) is defined as: \begin{equation} \label{GrindEQ__1a_} W_n\left(h\right) = \int{\delta \left(h-\sum_j{h_{nj}\left({\mathbf r}_j,{\mathbf m}_j\right)}\right) \prod_j{F_n\left({\mathbf m}_j\right)\delta \left({\mathbf r}_j-{\mathbf r}_{j0}\right)}} d{\mathbf r}_jd{\mathbf m}_j, \end{equation} where $\delta (x-x_0)$ is the Dirac delta function and $h_{nj}=h_{nj}\left({\mathbf m}_j,{\mathbf r}_j\right)$ is the field produced by atoms with magnetic moments ${\mathbf m}_j$, located at the nodes with coordinates ${\mathbf r}_j$ in the $n$-th monolayer. $F_n\left({\mathbf m}_j\right)$ is the distribution function of the magnetic moments, which in the Ising-model approximation for a ferromagnet can be represented as follows: \begin{equation} \label{GrindEQ__2_} F_n\left({\mathbf m}_j\right)= \left({\alpha}_{nj}\delta \left({\theta}_j\right)+{\beta}_{nj} \delta \left({\theta }_j-\pi \right)\right)\delta \left(m_{0j}\right). \end{equation} Here ${\theta}_j$ is the angle between ${\mathbf m}_j$ and the $Oz$-axis, ${\alpha}_{nj}$ and ${\beta}_{nj}$ are the relative probabilities of the spin orientation along (${\theta}_j=0$) and against (${\theta}_j=\pi$) the $Oz$-axis, respectively, and $m_{0j}$ is the magnetic moment of the $j$-th magnetic atom. Now we introduce the symmetric notations ${\alpha}_{+n}\stackrel{def}{=}\alpha_n$ and ${\alpha}_{-n}\stackrel{def}{=} 1-{\alpha}_n$ and consider the set $k_j$ of nearest neighbors of an arbitrary atom numbered $j$.
Let $z=\dim k_j$ denote its coordination number. Then let $\Omega_{k_j}$ be the set of all possible products of $\alpha_{\pm n}$ (where $n \in k_j$) that contain $z$ factors with different values of $n$ (therefore $\dim \Omega_{k_j} = 2^z$). The elements of $\Omega_{k_j}$ are denoted ${\omega_{\ell}}$: ${\omega_{\ell}} \stackrel{def}{=} \prod_{n \in k_j}\alpha_{\pm n} =\prod^{z}_{\nu =1;\, l_{\nu }\in k_j}{\alpha_{\pm l_{\nu}}}$. In fact, $\ell$ is the number of the binomial permutation of $\alpha_{\pm n}$ that forms $\Omega$. If ${\EuScript L}$ is the set of binomial permutations on which $\Omega$ was built, then we can define a similar set ${\EuScript M}$ of elements $M_{\ell}$: $M_{\ell} \stackrel{def}{=} \sum_{n\in k_j}{\pm m_n}= m_0\sum_{n\in k_j}{\pm |{2\alpha }_n-1|}$. In the approximation of nearest neighbors and direct exchange interaction between magnetic atoms, equation~\eqref{GrindEQ__1a_} can be represented as: \begin{equation} \label{GrindEQ__3_} W_n\left(h\right)= \sum^{C^{z-n}_n}_{\nu =1}{\sum^{2^n}_{l_{\nu }\in L\left(C^n_z\left(k_j\right)\right)} {\omega_{l_{\nu}}\delta \left(h-M_{l_{\nu}}J_{l_{\nu}}\right)}}, \end{equation} where $C^n_z\left(k_j\right)$ is a sample of $n$ atoms out of the total number of $z$ nearest neighbors of the $j$-th atom, and $J_{l_{\nu }}$ are the exchange interaction constants (which may differ between monolayers or even inside the same monolayer between different sorts of atoms). The general equation that determines the average relative magnetic moment ${\mu}_n$ in the $n$-th monolayer is \begin{equation} \label{eq5} \mu_n = \int\tanh \left( \frac{m_nH}{k_BT}\right)W_n(H)dH.
\end{equation} Replacing in the expressions for ${\omega_{\ell}}$ and $M_{\ell}$ all ${\alpha}_{\pm n}$ by their average values $\left\langle {\alpha}_{\pm n}\right\rangle ={(1\pm {\mu}_n)}/{2}$ and substituting \eqref{GrindEQ__3_} into \eqref{eq5}, one can obtain the equations that determine ${\mu}_n$ in each monolayer: \begin{equation} \label{GrindEQ__4_} \left\{ \begin{array}{rl} \mu_1 &= \sum\limits^{z_{1,1}}_{l=0}{C^l_{z_{1,1}}}\frac{{\left(1{+}\mu_1\right)}^l{\left(1{-}\mu_1\right)}^{z_{1,1}-l}}{2^{z_{1,1}}}\times\\ &\qquad \sum^{z_{1,2}}_{k=0}{C^k_{z_{1,2}}}\frac{{\left(1+{\mu }_2\right)}^k{\left(1-{\mu }_2\right)}^{z_{1,2}-k}}{2^{z_{1,2}}} {\tanh \left(\frac{2\left(l+k\right)-(z_{1,1}+z_{1,2})}{t}\right)\ },\\ \mu_n &= \sum^{z_{n,n}}_{l=0}{C^l_{z_{n,n}}}\frac{{\left(1{+}\mu_n\right)}^l{\left(1{-}\mu_n\right)}^{z_{n,n}-l}}{2^{z_{n,n}}} \sum^{z_{n-1,n}}_{k=0}{C^k_{z_{n-1,n}}}\frac{{\left(1{+}\mu_{n-1}\right)}^k{\left(1{-}\mu_{n-1}\right)}^{z_{n-1,n}-k}}{2^{z_{n-1,n}}}\times\\ &\qquad \sum^{z_{n,n+1}}_{r=0}{C^r_{z_{n,n+1}}}\frac{{\left(1{+}\mu_{n+1}\right)}^r{\left(1{-}\mu_{n+1}\right)}^{z_{n,n+1}-r}}{2^{z_{n,n+1}}} \tanh \left(\frac{2\left(l+k+r\right)-(z_{n-1,n}+z_{n,n}+z_{n,n+1})}{t}\right)\\ \mu_N &= \sum^{z_{N,N}}_{l=0}{C^l_{z_{N,N}}}\frac{{\left(1{+}\mu_N\right)}^l{\left(1{-}\mu_N\right)}^{z_{N,N}-l}}{2^{z_{N,N}}}\times\\ &\qquad \sum^{z_{N-1,N}}_{k=0}{C^k_{z_{N-1,N}}}\frac{{\left(1{+}\mu_{N-1}\right)}^k{\left(1{-}\mu_{N-1}\right)}^{z_{N-1,N}-k}}{2^{z_{N-1,N}}} \tanh \left(\frac{2\left(l+k\right)-(z_{N-1,N}+z_{N,N})}{t}\right), \end{array} \right. \end{equation} where $z_{n,n}$ is the number of nearest neighbors within the $n$-th layer and $z_{n-1,n}$ is the number of nearest neighbors, located in the $n$-th layer, of an atom in the $(n-1)$-th layer; $i_{nn}={J_{nn}m_n}/{(J_{11}m_1)}$, $i_{n-1,n}={J_{n-1,n}m_{n-1}}/{(J_{11}m_1)}$, $i_{n,n+1}={J_{n,n+1}m_{n+1}}/{(J_{11}m_1)}$, and $t={k_B T}/{(J_{11}m_1)}$.
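For a single monolayer with a uniform coordination number $z$, the system \eqref{GrindEQ__4_} collapses to the one-dimensional fixed-point problem $\mu=\sum_l C^l_z\,2^{-z}(1+\mu)^l(1-\mu)^{z-l}\tanh\left((2l-z)/t\right)$, which can be solved by direct iteration. A minimal sketch (Python; the values of $z$ and of the reduced temperature $t$ are illustrative, not taken from the paper):

```python
from math import comb, tanh

def mean_moment(t, z=6, mu0=0.9, tol=1e-12, max_iter=2000):
    """Solve mu = sum_l C(z,l) ((1+mu)/2)^l ((1-mu)/2)^(z-l) tanh((2l-z)/t)
    by direct fixed-point iteration (single layer, uniform coordination z)."""
    mu = mu0
    for _ in range(max_iter):
        p = (1.0 + mu) / 2.0          # probability that a neighbour spin is "up"
        new = sum(comb(z, l) * p**l * (1.0 - p)**(z - l)
                  * tanh((2 * l - z) / t) for l in range(z + 1))
        converged = abs(new - mu) < tol
        mu = new
        if converged:
            break
    return mu
```

Below the transition (small $t$) the iteration settles on a nonzero $\mu$; above it, $\mu$ decays to zero, reproducing the qualitative shape of the $\left\langle m(T)\right\rangle$ curves.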
Using \eqref{GrindEQ__4_}, one can study the dependence of the average magnetic moment of the film on its temperature and thickness. \comment{ \subsection{Solving the equations?} To construct the equations that determine the phase transition temperature, we use the following considerations. On approaching the Curie point $(\mu_1>0,\,\dots,\mu_n>0,\,\dots,\,\mu_N>0)$, equations~\eqref{GrindEQ__4_}, which can be written in the form $\mu_n=f_n(\mu_1,\,\dots,\mu_n,\,\dots,\,\mu_N)$, have nonzero solutions if $$ {\left.\frac{\partial \mu_n}{\partial \mu_k}\right|}_{(\mu_1>0,\,\dots,\, \mu_n>0,\,\dots,\,\mu_N>0)} \le {\left.\frac{df_n}{d\mu_k}\right|}_{(\mu_1>0,\,\dots,\,\mu_n>0,\,\dots,\,\mu_N>0)}. $$ Choosing $\mu_k=\mu_1$ and differentiating \eqref{GrindEQ__4_} with respect to $\mu_1$, one can obtain a system of $N$ equations for the $N$ variables $\left\{t_c,x_2,x_3,\dots,x_N\right\}$, where $x_n={\partial\mu_n}/{\partial \mu _1}$: \begin{equation} \label{GrindEQ__5_} \begin{array}{rl} 1=& \sum^{z_{1,1}}_{l=0}{\sum^{z_{1,2}}_{k=0}{{C^l_{z_{1,1}}C}^k_{z_{1,2}}}} \frac{2\left(l+k\right)-(z_{1,1}+z_{1,2})}{2^{z_{1,1}+z_{1,2}}} \tanh \left(\frac{2\left(l+k\right)-(z_{1,1}+z_{1,2})}{t_c}\right),\\ x_n=& \sum^{z_{n,n}}_{l=0}{\sum^{z_{n-1,n}}_{k=0}{\sum^{z_{n,n+1}}_{r=0}{C^l_{z_{n,n}}}}}C^k_{z_{n-1,n}}C^r_{z_{n,n+1}} \frac{2\left(l+k+r\right)-(z_{n-1,n}+z_{n,n}+z_{n,n+1})}{2^{z_{n-1,n}+z_{n,n}+z_{n,n+1}}}\times\\ &\tanh \left(\frac{2\left(l+k+r\right)-(z_{n-1,n}+z_{n,n}+z_{n,n+1})}{t_c}\right)\\ x_N=& \sum^{z_{N,N}}_{l=0}{\sum^{z_{N-1,N}}_{k=0}{{C^l_{z_{N,N}}C}^k_{z_{N-1,N}}}} \frac{2\left(l+k\right)-(z_{N-1,N}+z_{N,N})}{2^{z_{N-1,N}+z_{N,N}}} \tanh \left(\frac{2\left(l+k\right)-(z_{N-1,N}+z_{N,N})}{t_c}\right) \end{array} \end{equation} The obtained systems of equations \eqref{GrindEQ__4_}, \eqref{GrindEQ__5_} make it possible to study the dependence of the average magnetic moment and of the magnetic phase transition temperature on the thickness of films with different crystal structures.
} \section{The Curie temperature of ultrathin films} Fig.~\ref{fig1} shows the temperature dependence of the average relative magnetic moment $\left\langle m\right\rangle =\sum^N_j{\left\langle \mu_j\right\rangle}/N$ for films of different thickness. From the graphs it follows that a decrease in the number of monolayers leads to a reduction in the average number of nearest neighbors and, consequently, to a lowering of the transition temperature~$T_c$. Moreover, the temperature dependence $\left\langle m\right\rangle =\left\langle m(T)\right\rangle$, as well as the position of the Curie point, is determined not only by the type of crystal lattice, but also by the crystallographic orientation of the growth plane of the film. This feature is related to the difference between the number of nearest neighbors of an atom within the same monolayer and in the adjacent monolayers. For example, in a material with an FCC lattice grown on the $(100)$-plane, each atom has~4 neighbors in its own monolayer and~4 more neighbors in the adjacent monolayer. When a film of the same material grows on the $(111)$-plane, the numbers of nearest neighbors change to~6 and~3, respectively. \begin{figure} \begin{center} \begin{tabular}{rl} \includegraphics[width=9cm]{fig_1.pdf} \includegraphics[width=8.5cm]{fig_1a.pdf} \end{tabular} \caption{The dependence of the average magnetic moment $\left\langle m\right\rangle$ on the temperature for different crystallographic planes of the FCC lattice in films of different thickness. The inset on the left shows the increase of the average number of nearest neighbors $\left\langle z\right\rangle$ with the thickness $N$ for films of different crystalline structures.} \label{fig1} \end{center} \end{figure} The dependence of the Curie temperature $T_c$ on the thickness of films with different crystal structures, calculated by solving \eqref{GrindEQ__4_}, is shown in Fig.~\ref{fig2}.
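The Curie point itself follows from the linearized stability of the self-consistency map \eqref{GrindEQ__4_}: for a single layer with uniform coordination number $z$, nonzero solutions appear when $1=\sum_{l} C^l_z\,2^{-z}(2l-z)\tanh\left((2l-z)/t_c\right)$, which can be solved for $t_c$ by bisection. A hedged sketch (Python; $z$ is illustrative only):

```python
from math import comb, tanh

def curie_criterion(t, z):
    """Slope at mu = 0 of the single-layer self-consistency map."""
    return sum(comb(z, l) * (2 * l - z) / 2 ** z * tanh((2 * l - z) / t)
               for l in range(z + 1))

def curie_temperature(z, t_lo=0.5, t_hi=20.0, tol=1e-10):
    """Bisection on curie_criterion(t, z) = 1; the criterion decreases with t,
    so nonzero magnetization exists only below the returned temperature."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if curie_criterion(mid, z) > 1.0:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)
```

For $z=6$ this gives $t_c\approx 5.1$ in the reduced units $t=k_BT/(J_{11}m_1)$, and lowering $z$ lowers $t_c$, which is the mechanism behind the thickness dependence shown in Fig.~\ref{fig2}.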
From Fig.~\ref{fig2} it is clear that $T_c$ depends essentially on the type of crystal lattice and is almost independent of the crystallographic orientation of the film surface, except in the region of small thickness (2--6 monolayers). This effect can be explained by the fact that for small numbers of monolayers the mean number of neighbors differs notably. \begin{figure} \begin{center} \includegraphics[width=9cm]{T_c_N_.pdf} \caption{The dependence of the reduced Curie temperature $T_C$ on the thickness of an ultrathin film ($N$ is the number of monolayers) for different crystal lattices (FCC and BCC) and different crystallographic orientations of the surface. Compare with the inset in Fig.~\ref{fig1}.} \label{fig2} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=8.0cm]{lambda_fcc.pdf}& \includegraphics[width=8.0cm]{lambda_bcc.pdf} \end{tabular} \caption{The logarithmic dependence of the relative change of the Curie temperature $\varepsilon (N)$ on the film thickness $N$ (in monolayers) for the FCC (on the left) and BCC lattices.} \label{fig3} \end{center} \end{figure} The relative change of the phase transition temperature with growing film thickness is shown (on a logarithmic scale) in Fig.~\ref{fig3}. The dependence $\varepsilon =\varepsilon (N)$ can be approximated by expression~\eqref{GrindEQ__1_}. The results can be compared with the experimental data (see the table). Within the measurement errors and the accuracy of the approximation, the calculated values of $\lambda$ and of the critical exponent of the spin-spin correlation $\nu$ are close to the experimental ones obtained on $Ni/Cu(111)$ and $Ni/Cu(100)$ films. We note that $\lambda$ depends neither on the crystallographic orientation of the surface ($\lambda_{110}\approx \lambda_{111}$) nor on the type of lattice ($\lambda$ for the FCC and BCC lattices differs by $0.9\,\%$).
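The exponent $\lambda$ reported in the table is the (negative) slope of $\log\varepsilon$ versus $\log N$. A minimal sketch of the fit (Python; the synthetic $\varepsilon(N)$ below merely stands in for the computed values and is not data from the paper):

```python
import numpy as np

def fit_lambda(N, eps):
    """Least-squares slope of log(eps) vs log(N); for eps ~ A * N**(-lam)
    the fitted slope equals -lam."""
    slope, _ = np.polyfit(np.log(N), np.log(eps), 1)
    return -slope

# Synthetic check: eps(N) = 2.0 * N**(-1.43) should recover lambda = 1.43.
N = np.arange(2, 27)
eps = 2.0 * N ** -1.43
```

The same one-line fit applied to the computed $\varepsilon(N)$ yields the $\lambda$ values compared with experiment in the table.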
In addition, the critical exponent of the spin-spin correlation $\left\langle \nu \right\rangle =\frac{1}{\lambda}=0.68$, calculated in this framework, is close to the value $\nu = 0.63$ obtained by renormalization-group calculations in the three-dimensional Ising model~\cite{Guillou1980,Guillou1977}. \begin{table} \begin{center} {\textbf{Table} of theoretical and experimental values of the argument $\lambda$ and the critical exponent $\nu$:\\ \vspace{5pt}} \begin{tabular}{|p{1.7in}|p{1.1in}|p{1.0in}|p{1.0in}|p{0.7in}|} \hline Film & $l$ (monolayers) & $\lambda $ & $\nu $ & Ref. \\ \hline Ni/Cu(111) & 1 -- 8 & $1.48\pm0.20$ & 0.68$\pm$0.09 & \cite{Ballentine1989} \\ \hline Ni/Cu(111) & 1 -- 10 & $1.44\pm0.20$ & 0.70$\pm$0.10 & \cite{Ballentine1990} \\ \hline Calculation: FCC (111) & 2 -- 10 & 1.43 & 0.70 & \\ \hline Ni/Cu(100) & 4 -- 26 & $1.42$ & 0.70 & \cite{Schulz1994} \\ \hline Calculation: FCC (100) & 2 -- 26 & 1.48 & 0.68 & \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} The model of randomly interacting atomic magnetic moments, for all its relative simplicity, provides two valuable results. First, it allows one to assess the influence of the thickness $N$ of an ultrathin film on the temperature of the magnetic phase transition, and, second, to establish a power-law dependence of the relative change of the Curie temperature on $N$: $\varepsilon (N)\sim N^{-{1}/{\nu}}$, where $\nu$ is the critical exponent of the spin-spin correlation. The obtained regularities and numerical values of the critical exponents agree well with experimental data~\cite{Huang1993,Huang1994,Ballentine1989,Ballentine1990,Schulz1994} as well as with RG calculations in the three-dimensional Ising model~\cite{Guillou1980,Guillou1977}. The independence of $\lambda$ from the crystallographic orientation of the surface and from the type of lattice is consistent with the general concepts of critical scaling. \small \bibliographystyle{ieeetr}
\section{THEORETICAL APPROACH} The fiber used in our experiment is a step-index non-zero dispersion-shifted CLF, which is widely used in communications. The fiber exhibits an effective mode area of 72 $\mu m^{2}$ and a numerical aperture (NA) of 0.14 at 1550 nm. Pumping below the cut-off wavelength of the fiber turns it into a few-mode fiber. At the 1064 nm pump wavelength, the spatial modes supported by the fiber are simulated using the commercial COMSOL software based on the full-vectorial finite-element method (FEM) and shown in Fig. \ref{modal_profile}. Simulated results show that the fiber supports the $LP_{01}$, $LP_{02}$ and $LP_{11}$ spatial modes at the pump wavelength, which are confirmed experimentally and will be discussed later. To demonstrate IM-MI, we excite the pair of circularly symmetric spatial modes $LP_{01}$ and $LP_{02}$ with equal peak power. The dispersion profile of the two modes is shown in Fig. \ref{dispersion_plot}, and it is observed that the pump at 1064 nm falls in the normal dispersion region for both modes. The step-index refractive index profile provides distinct group velocities for the different modes, as shown in Fig. \ref{group_velocity}. It is observed that the GVM between the two modes increases with increasing wavelength.
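The group-velocity curves of Fig.~\ref{group_velocity} follow from the first derivative of the propagation constant, $\beta_1=\partial\beta/\partial\omega$, with the GVM given by the difference of the inverse group velocities of the two modes. A hedged numerical sketch (Python; the polynomial $\beta(\omega)$ below is a synthetic stand-in for the FEM output, with curvatures chosen only to match the order of magnitude of the $\beta_2$ values quoted later, not fitted to the actual CLF):

```python
import numpy as np

# Synthetic propagation constants beta(omega) for the two modes; b2 values
# are near the beta_2 magnitudes used in the text (~0.016 and ~0.012 ps^2/m).
b1p, b2p = 4.90e-9, 1.618e-26     # s/m, s^2/m  (LP01-like)
b1q, b2q = 4.90e-9, 1.183e-26     # s/m, s^2/m  (LP02-like)

omega0 = 1.771e15                               # rad/s, pump near 1064 nm
omega = np.linspace(0.95, 1.05, 2001) * omega0
beta_p = b1p * omega + 0.5 * b2p * (omega - omega0) ** 2
beta_q = b1q * omega + 0.5 * b2q * (omega - omega0) ** 2

beta1_p = np.gradient(beta_p, omega)            # inverse group velocity, s/m
beta1_q = np.gradient(beta_q, omega)
gvm_ps_per_m = (beta1_q - beta1_p) * 1e12       # delta_pq in ps/m
beta2_p = np.gradient(beta1_p, omega)           # GVD of mode p, s^2/m
```

With the FEM-computed $\beta(\omega)$ samples in place of the synthetic polynomial, the same two `np.gradient` calls yield the dispersion and group-velocity curves.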
\begin{figure*}[t] \centering \includegraphics[width=12.5 cm,height=14.5 cm]{comined_2d_3d_image_f.eps} \caption{{\small 3D/2D schematic view of the modal profile distribution of (a) $LP_{01}$ (b) $LP_{02}$ (c) $LP_{11x}$ and (d) $LP_{11y}$ at 1064 nm inside the CLF.}} \label{modal_profile} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=8 cm,height=4.5 cm]{dispersion_plot.eps} \caption{{\small Dispersion characteristics of the $LP_{01}$ and $LP_{02}$ modes of the CLF.}} \label{dispersion_plot} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8.5 cm,height=4.5 cm]{group_velocity_plot_ff.eps} \caption{{\small Simulated group velocity for the $LP_{01}$ and $LP_{02}$ spatial modes.}} \label{group_velocity} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8.5 cm,height=4.5 cm]{gain_plot_modified.eps} \caption{{\small Theoretical gain spectra as a function of frequency shift ($\Omega/2\pi$) for the mode group combination $LP_{01}$ and $LP_{02}$ with identical power in each mode (P=Q).}} \label{gain_plot} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8.5 cm,height=4.5 cm]{GVM_gain_plot.eps} \caption{{\small Gain spectra as a function of frequency shift ($\Omega/2\pi$) for different GVM values while the peak power is fixed for both modes (P = Q = 3.75 kW).}} \label{GVM_variation} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8.5 cm,height=4.5 cm]{gain_OMF_shift_plot_combined.eps} \caption{{\small Variation of the OMF and peak gain of IM-MI as a function of GVM for fixed pump power (P = Q = 3.75 kW).}} \label{OMF_GAIN_shift} \end{figure} The nonlinear propagation of the interacting modes with identical carrier frequency $\omega$ satisfies the following set of coupled nonlinear Schr\"odinger equations (NLSEs): \begin{equation} \begin{aligned} \frac{\partial u_{p}}{\partial z}-\frac{\delta_{pq}}{2}\frac{\partial u_{p}}{\partial t}+\frac{i}{2}\beta_{2p}\frac{\partial^{2} u_{p}}{\partial
t^{2}}=i\gamma(\textit{f}_{pp}|u_{p}|^{2}\\+2\textit{f}_{pq}|u_{q}|^{2})u_{p}+i\gamma \textit{f}_{pq}u_{q}^{2}u_{p}^{*}exp(2i\Delta\beta z) \label{up_equation} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \frac{\partial u_{q}}{\partial z}+\frac{\delta_{pq}}{2}\frac{\partial u_{q}}{\partial t}+\frac{i}{2}\beta_{2q}\frac{\partial^{2} u_{q}}{\partial t^{2}}=i\gamma(\textit{f}_{qq}|u_{q}|^{2}\\+2\textit{f}_{pq}|u_{p}|^{2})u_{q}+i\gamma \textit{f}_{pq}u_{p}^{2}u_{q}^{*}exp(-2i\Delta\beta z) \label{uq_equation} \end{aligned} \end{equation} where $u_{j}\,(j=p,q)$ are the field envelopes of the interacting modes, \textit{z} is the propagation distance, and \textit{t} is time. The nonlinear Kerr coefficient is $\gamma$=$\frac{n_{2}\omega}{c}$, where c is the velocity of light in vacuum. $\Delta\beta=\beta_{q}-\beta_{p}$, where $\beta_{j}$ is the propagation constant of the corresponding mode; $\beta_{nj}=\frac{\partial^{n}\beta_{j}}{\partial\omega^{n}}$ stands for the $n$th derivative of the propagation constant, $\beta_{1j}$ is the inverse group velocity, and $\beta_{2j}$ is the second-order dispersion coefficient. $\delta_{pq}$ denotes the GVM of the participating modes and is defined as $\delta_{pq}=\beta_{1q}-\beta_{1p}$. The overlap function $\textit{f}_{pq}$ is defined as \begin{equation} \begin{aligned} f_{pq}=\dfrac{\int\int |F_{p}(\omega)|^{2}|F_{q}(\omega)|^{2}\,dx\,dy}{[\int\int |F_{p}(\omega)||F_{q}(\omega)|\,dx\,dy]^{2}} \label{fpq_equation} \end{aligned} \end{equation} where $F_{p}$ and $F_{q}$ are the transverse field distributions of the participating modes. The effective mode areas of the two modes can be expressed as $1/f_{pp}$ and $1/f_{qq}$, respectively, whereas $1/f_{pq}$ represents the overlap between the two interacting modes. We would like to point out that the coherent coupling terms $i\gamma \textit{f}_{pq}u_{q}^{2}u_{p}^{*}exp(2i\Delta\beta z)$ and $i\gamma \textit{f}_{pq}u_{p}^{2}u_{q}^{*}exp(-2i\Delta\beta z)$, on the right-hand side of Eqs.
(\ref{up_equation}) and (\ref{uq_equation}), depend essentially on the GVM between the spatial modes. For large GVM, these terms are negligible. Since, in our case, we use a step-index fiber, which leads to a large GVM between the interacting modes, the coherent coupling terms are neglected in the following calculations. Considering only the incoherent coupling terms, Eqs. (\ref{up_equation}) and (\ref{uq_equation}) can be rewritten as \begin{equation} \begin{aligned} \frac{\partial u_{p}}{\partial z}-\frac{\delta_{pq}}{2}\frac{\partial u_{p}}{\partial t}+\frac{i}{2}\beta_{2p}\frac{\partial^{2} u_{p}}{\partial t^{2}}=i\gamma(\textit{f}_{pp}|u_{p}|^{2}\\+2\textit{f}_{pq}|u_{q}|^{2})u_{p} \label{up_equation_modified} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \frac{\partial u_{q}}{\partial z}+\frac{\delta_{pq}}{2}\frac{\partial u_{q}}{\partial t}+\frac{i}{2}\beta_{2q}\frac{\partial^{2} u_{q}}{\partial t^{2}}=i\gamma(\textit{f}_{qq}|u_{q}|^{2}\\+2\textit{f}_{pq}|u_{p}|^{2})u_{q} \label{uq_equation_modified} \end{aligned} \end{equation} To analyze the stability of the steady-state solution of Eqs.
(\ref{up_equation_modified}) and (\ref{uq_equation_modified}), we introduce small first-order amplitude and phase perturbations $u$ and $v$, where \begin{equation} \begin{aligned} u_{p}=(\sqrt{P}+u)exp[i\gamma(\textit{f}_{pp}P+2\textit{f}_{pq}Q)z] \label{up_perturbation} \end{aligned} \end{equation} \begin{equation} \begin{aligned} u_{q}=(\sqrt{Q}+v)exp[i\gamma(\textit{f}_{qq}Q+2\textit{f}_{pq}P)z] \label{uq_perturbation} \end{aligned} \end{equation} Now we consider a perturbation in the form of a modulational ansatz with wavenumber $K$ and frequency $\Omega$: \begin{equation} \begin{aligned} u(z,t)=u_{s}(z)exp[i(\Omega t-Kz)]+u_{a}(z)exp[i(-\Omega t+Kz)] \label{up_ansaz} \end{aligned} \end{equation} \begin{equation} \begin{aligned} v(z,t)=v_{s}(z)exp[i(\Omega t-Kz)]+v_{a}(z)exp[i(-\Omega t+Kz)] \label{uq_ansaz} \end{aligned} \end{equation} where $u_{s}$ and $u_{a}$ represent the amplitudes of the Stokes and anti-Stokes sidebands of the spatial mode p, respectively, whereas $v_{s}$ and $v_{a}$ correspond to the spatial mode q. $\Omega$ is the angular offset frequency relative to the pump, $\Omega = \omega - \omega_{p}$, where $\omega_{p}$ is the angular frequency at the pump wavelength. After linearizing Eqs. (\ref{up_equation_modified}) and (\ref{uq_equation_modified}) in $u$ and $v$ and then substituting Eqs.
(\ref{up_ansaz}) and (\ref{uq_ansaz}) into them, we arrive at the following eigenvalue equation: \begin{equation} \begin{aligned} [M][Y]=K[Y] \label{eigenvalue_eq} \end{aligned} \end{equation} where the eigenvector is defined as \begin{equation} \begin{aligned} [Y]^{T}=[u_{a},u_{s}^{*},v_{a},v_{s}^{*}] \label{eigenvector_eq} \end{aligned} \end{equation} $[M]$ is the stability matrix of the system, defined as\par \vspace{-.5ex} \noindent\resizebox{\linewidth}{!}{$\displaystyle M = \begin{bmatrix} -\dfrac{\Omega\delta_{pq}}{2}+\beta_{2p}\dfrac{\Omega^{2}}{2}+\gamma f_{pp}P & \gamma f_{pp}P & 2\gamma f_{pq}\sqrt{PQ} & 2\gamma f_{pq}\sqrt{PQ} \\ -\gamma f_{pp}P & -\dfrac{\Omega\delta_{pq}}{2}-\beta_{2p}\dfrac{\Omega^{2}}{2}-\gamma f_{pp}P & -2\gamma f_{pq}\sqrt{PQ} & -2\gamma f_{pq}\sqrt{PQ} \\ 2\gamma f_{pq}\sqrt{PQ} & 2\gamma f_{pq}\sqrt{PQ} & \dfrac{\Omega\delta_{pq}}{2}+\beta_{2q}\dfrac{\Omega^{2}}{2}+\gamma f_{qq}Q & \gamma f_{qq}Q \\ -2\gamma f_{pq}\sqrt{PQ} & -2\gamma f_{pq}\sqrt{PQ} & -\gamma f_{qq}Q & \dfrac{\Omega\delta_{pq}}{2}-\beta_{2q}\dfrac{\Omega^{2}}{2}-\gamma f_{qq}Q \\ \end{bmatrix}$} from which we obtain the following dispersion relation: \begin{equation} \begin{aligned} \det([M]-K[I])=0 \label{dispersion_relation} \end{aligned} \end{equation} \begin{table} \centering \caption{ Calculated IM-MI parameters for different mode combinations at 1064 nm.} \begin{tabular}{ccccccc} \hline $p$ & $LP_{01}$ & $LP_{01}$ & $LP_{02}$ \\ $q$ & $LP_{11}$ & $LP_{02}$ & $LP_{11}$ \\ \hline $ f_{pp}$ $ (1/\mu m^{2}) $ & $0.0257$ & $0.0257$ & 0.015 \\ $ f_{qq}$ $ (1/\mu m^{2}) $ & $0.015$ & 0.0085 & 0.0085 \\ $ f_{pq}$ $ (1/\mu m^{2}) $ & $0.008$ & $0.017$ & 0.0045 \\ $ \sqrt{f_{pp}f_{qq}}/2f_{pq}$ & $1.22$ & $0.435$ & 1.25 \\ \hline \end{tabular} \label{table_IMMI_parameter} \end{table} The equation implies that, for the MI process to
occur, the wavenumber $K$ of the perturbation must possess a non-zero imaginary part, which manifests itself as an exponential growth of the amplitude of the perturbation. The power gain G, which is a measure of the efficiency of the MI process, is defined as $G(\Omega)=2|Im(K)|$, where $K$ is the eigenvalue of the matrix [M] with the highest imaginary part. Detailed analysis of Eq. (\ref{eigenvector_eq}) gives the necessary condition for the MI phenomenon as $\sqrt{\textit{f}_{pp}\textit{f}_{qq}}/2\textit{f}_{pq} <1$ \cite{IM_MI_greadedindex_fiber}, i.e., the cross-phase modulation (XPM) term must be greater than the self-phase modulation (SPM) term. Table \ref{table_IMMI_parameter} shows the calculated values of the IM-MI parameters for different mode combinations and indicates that, among the three mode combinations, the condition to achieve the IM-MI process is satisfied only for the mode combination $LP_{01}$ and $LP_{02}$. For the theoretical calculation of the MI gain of our experimental fiber, we have used the following parameters, calculated for the mode combination $LP_{01}$ and $LP_{02}$ of the CLF at the pump wavelength of 1064 nm: $\beta_{2p} = 0.01618$ $ps^{2}$/m, $\beta_{2q} = 0.01183$ $ps^{2}$/m, $f_{pp} = 0.0257$ $(1/\mu m^{2})$, $f_{qq} = 0.0085$ $(1/\mu m^{2})$, $f_{pq} = 0.017$ $(1/\mu m^{2})$ and $\delta_{pq} = 0.75$ ps/m. The manifestation of the IM-MI phenomenon with varying peak pump power (P=Q) for the mode combination $LP_{01}$ and $LP_{02}$ is shown in Fig. \ref{gain_plot}, where the gain spectra have been plotted as a function of frequency detuning $\Omega/2\pi$ for different peak power levels. It is observed that the IM-MI region broadens with increasing power. The optimum modulation frequency (OMF), defined as the frequency at which the IM-MI gain attains its maximum value, also shifts towards higher values with increasing pump power, which supports our experimental findings. Finally, the gain plot for varying GVM is shown in Fig.
\ref{GVM_variation}, where the power is kept fixed for both modes at 3.75 kW. It is observed that a large GVM leads to a higher gain and, simultaneously, the optimum modulation frequency moves towards a larger frequency shift. Fig. \ref{OMF_GAIN_shift} shows the variation of the OMF and the peak gain as a function of the GVM of the participating modes. Simulated results reveal that the peak gain increases gradually up to a GVM of 1.5 ps/m and then tends to stabilize as the GVM increases further. \section{Experimental setup and results} The schematic of the experimental setup is shown in Fig. \ref{exp_setup}. The pump is the output of a Q-switched microchip Nd:YAG laser with a central wavelength of 1064 nm and a pulse duration of 0.77 ns. The output power of the pulses is controlled by the combination of a HWP and a PBS. The pump is coupled into the CL optical fiber using a microscope objective (NA=0.4, 20X). \begin{figure*}[hbt!] \centering \includegraphics[width=14 cm,height=5.5 cm]{exp_setup.eps} \caption{{\small Schematic of the experimental set-up. $M_{1},M_{2}$: silvered mirror, HWP: half wave plate, PBS: polarization beam splitter, $MO_{1},MO_{2},MO_{3}$: microscope objective, BS: plate beam splitter, $F_{1}$: laser line filter, $L_{1}$: convex lens, CCD: charged coupled device, OSA: optical spectrum analyzer.}} \label{exp_setup} \end{figure*} \begin{figure}[hbt!] \centering \includegraphics[width=8 cm,height=7 cm]{detected_mode_profile.eps} \caption{{\small Group of spatial mode profiles experimentally identified at 1064 nm.}} \label{spacial_modes} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=8.5 cm,height=5 cm]{power_variation_6m_CLF_ff.eps} \caption{{\small Output spectrum for different pump powers for a 6 m long CLF.}} \label{power_var_6m} \end{figure} \begin{figure}[hbt!]
\centering \includegraphics[width=8 cm,height=5 cm]{2_71kW_pump_power_gain_f.eps} \caption{{\small Theoretical plot of the gain spectrum as a function of frequency shift ($\Omega/2\pi$) considering the parameters of the CLF for a peak pump power of 2.71 kW (P = Q = 2.71 kW).}} \label{gain_2_71kw} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=8.5 cm,height=4.5 cm]{detuned_frequency_plot_lebelled_f.eps} \caption{{\small Solid lines show the frequency shift ($\Omega/2\pi$) of the experimental spectral peaks at the output of a 6 m long CLF for a peak pump power of 2.70 kW, whereas the dashed lines show the calculated theoretical positions of the peak gain of IM-MI for the same experimental conditions.}} \label{frequency_shifting} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=17.5 cm,height=9.8 cm]{MI_length_variation.eps} \caption{{\small Output spectrum for different fiber lengths (a) 4 m, (b) 5 m, (c) 7 m, and (d) 8 m of CLF.}} \label{MI_length_variation} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=8 cm,height=4.5 cm]{MI_threshold_plot.eps} \caption{{\small Threshold power for IM-MI with fiber length.}} \label{threshold_power} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=8 cm,height=8 cm]{wavelength_shift_combined_f.eps} \caption{{\small Output spectrum for (a) various pump powers and (b) different fiber lengths. For (a) the fiber length is fixed at 6 m, and for (b) the peak power is fixed at 2.71 kW.
MI peaks shift in case (a), whereas no shift in wavelength is observed when the fiber length is varied.}} \label{wavelength_shift} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8 cm,height=4.5 cm]{frequency_shift_experimental_theoretical.eps} \caption{{\small Experimental (solid line) and theoretical (dashed line) comparison of the frequency shift ($\Omega/2\pi$) of the IM-MI peaks as a function of peak pump power.}} \label{frequency_shift_exp_theoretical} \end{figure} A three-axis translation stage is used to selectively excite the desired modes, and the output mode profile is detected with a CCD as shown in the figure. The output spectrum is recorded by an optical spectrum analyzer (OSA). All the supported spatial modes experimentally identified at 1064 nm are shown in Fig. \ref{spacial_modes} and agree well with the simulated modal profiles shown in Fig. \ref{modal_profile}. To study IM-MI, we have taken a 6 m long CLF, and the pump is launched to excite the pair of modes ($LP_{01}$ and $LP_{02}$) with equal power in each mode. By finely adjusting the launching conditions of the pump pulses at the fiber input, we were able to excite the circularly symmetric modes equally. The recorded output spectrum with varying total input peak power is shown in Fig. \ref{power_var_6m}. It is observed that, with the increase in power, multiple MI sidebands are generated together with the Raman Stokes and anti-Stokes peaks. For 2.71 kW of peak power, IM-MI peaks are generated at the wavelengths 1073 nm ($M_{1R}$) and 1056 nm ($M_{1L}$). Harmonics are generated on the red side of the pump at nearly 1081 nm ($M_{2R}$) and 1090 nm ($M_{3R}$), and on the blue side of the pump at 1048.5 nm ($M_{2L}$) and 1040 nm ($M_{3L}$). Raman Stokes and anti-Stokes peaks are generated at 1118 nm ($R_{1}$) and 1017.5 nm ($R_{2}$), respectively. The IM-MI gain for the pump power of 2.71 kW in each mode is shown in Fig.
\ref{gain_2_71kw}, which shows that the maximum gain occurs at a frequency shift of $\pm \Delta$, where $\Delta = 3.45$ THz. The positions of the spectral peaks generated through a 6 m fiber length with 2.71 kW pump power are shown in Fig. \ref{frequency_shifting} as a function of the frequency shift from the pump wavelength. The IM-MI peaks are generated at $\Delta_{1} = -2.36$ THz and $\Delta_{2} = 2.14$ THz. The asymmetry in the frequency shift occurs due to the effect of higher-order dispersion coefficients. Considering dispersion up to second order, the positions of the peaks of the theoretical IM-MI gain are indicated by the dashed lines. The slight difference between the experimental and theoretical observations arises because the refractive index profile used in COMSOL for calculating the dispersion parameters was fitted and extrapolated to the pump wavelength (1064 nm) from the original profile. To investigate the influence of fiber length, the evolution of the IM-MI spectra for different fiber lengths with varying peak pump powers is measured and shown in Fig. \ref{MI_length_variation}. It is observed that the threshold power required to build up spectral IM-MI peaks from noise gradually decreases with increasing fiber length, as shown in Fig. \ref{threshold_power}. Also, the Raman threshold power decreases with increasing fiber length. A longer fiber provides an effective platform to break up the input pulses into multiple peaks through IM-MI even with a sufficiently low input pump power. Efficient IM-MI peaks with strong Stokes and anti-Stokes waves are generated with a very low input pump power (2.43 kW in each mode) using an 8 m long CLF, as shown in Fig. \ref{MI_length_variation}(d). The shift of the spectral peaks with varying pump powers and fiber lengths is shown in Fig. \ref{wavelength_shift}(a) and \ref{wavelength_shift}(b), respectively. The fiber length has been fixed at 6 m.
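The theoretical gain curves discussed above follow from diagonalizing the $4\times4$ stability matrix $[M]$ on a grid of detunings and taking $G(\Omega)=2|Im(K)|$. The following is a minimal NumPy sketch of that calculation; the value of the nonlinear coefficient $\gamma$ is our own rough estimate from the $n_2$ of silica (it is not quoted in the text), so absolute gain values are indicative only, and all function names are ours.

```python
import numpy as np

# Parameters for the LP01/LP02 pair of the CLF at 1064 nm (from the text).
beta2p, beta2q = 0.01618e-24, 0.01183e-24       # GVD [s^2/m]
fpp, fqq, fpq = 0.0257e12, 0.0085e12, 0.017e12  # overlap integrals [1/m^2]
delta = 0.75e-12                                # GVM delta_pq [s/m]
P = Q = 2.71e3                                  # peak power per mode [W]
gamma = 1.6e-13                                 # n2*omega/c [m/W]: assumed value

def stability_matrix(Om):
    """4x4 IM-MI stability matrix [M] at angular detuning Om [rad/s]."""
    Dp, Dq, s = 0.5 * beta2p * Om**2, 0.5 * beta2q * Om**2, 0.5 * Om * delta
    a, c, b = gamma * fpp * P, gamma * fqq * Q, 2 * gamma * fpq * np.sqrt(P * Q)
    return np.array([[-s + Dp + a,  a,           b,           b],
                     [-a,          -s - Dp - a, -b,          -b],
                     [ b,           b,           s + Dq + c,  c],
                     [-b,          -b,          -c,           s - Dq - c]])

def gain(Om):
    """Power gain G = 2|Im K| of the most unstable eigenvalue [1/m]."""
    K = np.linalg.eigvals(stability_matrix(Om))
    return 2 * np.abs(K.imag).max()

# Necessary condition for IM-MI: sqrt(fpp*fqq)/(2*fpq) < 1 (XPM beats SPM).
ratio = np.sqrt(fpp * fqq) / (2 * fpq)
print(f"sqrt(fpp*fqq)/2fpq = {ratio:.3f}")  # ~0.435, cf. Table 1

# Scan the detuning Omega/2pi and locate the optimum modulation frequency.
freqs = np.linspace(0.1e12, 8e12, 400)      # [Hz]
gains = np.array([gain(2 * np.pi * f) for f in freqs])
print(f"peak gain {gains.max():.2f} 1/m at {freqs[gains.argmax()] / 1e12:.2f} THz")
```

With the experimentally quoted $\gamma$ in place of the assumed value, the scan can be compared directly against the gain curves and the OMF shifts reported above.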
It is clearly evident that the IM-MI peaks and the cascaded harmonics exhibit a shift in wavelength, as depicted in Fig. \ref{wavelength_shift}(a). The experimentally observed OMF shift of the IM-MI peaks as a function of peak pump power is shown by the solid line in Fig. \ref{frequency_shift_exp_theoretical}, whereas the dashed line shows the theoretical prediction. The theoretical and experimental lines show almost the same slope. It is important to note that the MI peaks do not shift with the variation in fiber length, as shown in Fig. \ref{wavelength_shift}(b), where the input peak pump power is fixed at 2.71 kW. \section{Conclusion} To conclude, we have reported a detailed theoretical and experimental study of IM-MI in a step-index scenario, which shows that step-index multimode fibers can provide an excellent platform for the realization of IM-MI. The modal GVM plays an important role in the gain spectrum, and a large GVM provides strong IM-MI peaks. Furthermore, a detailed investigation with varying pump power and fiber length is presented in a unified manner. We also demonstrate the shift of the optimum modulation frequency with pump power. The study can be extended by employing a large-core step-index MMF supporting many higher-order modes and by launching the pump at a different wavelength into non-identical spatial modes providing a large GVM. The GVM can also be tailored by choosing the participating modes. Our observations may pave the way for a multitude of applications, such as coherent wideband frequency generation and the formation of high-repetition-rate vector soliton trains.
\section{ Main result } We work in the smooth category. An {\it $m$-component 2-(dimensional) link} is a closed oriented 2-submanifold $L=(K_1,...,K_m)$ $\subset S^4$ such that $K_i$ is diffeomorphic to $S^2$. If $m=1$, $L$ is called a {\it 2-knot}. We say that 2-links $L_1$ and $L_2$ are {\it equivalent} if there exists an orientation preserving diffeomorphism $f:$ $S^4$ $\rightarrow$ $S^4$ such that $f(L_1)$=$L_2$ and that $f | _{L_1}:$ $L_1$ $\rightarrow$ $L_2$ is an order and orientation preserving diffeomorphism. Take a 3-ball $B^3$ in $S^4$. Then $\partial B^3$ is a 2-knot. We say that a 2-knot $K$ is a {\it trivial} knot if $K$ is equivalent to the 2-knot $\partial B^3$. A 2-link $L= ( K_1,...,K_m ) $ is called a {\it ribbon} 2-link if $L$ satisfies the following properties. (See e.g. \cite{C}.) \f (1) There is a self-transverse immersion $f:D^3_1\amalg...\amalg D^3_m$ $\rightarrow S^4$ such that $f(\partial D^3_i)=K_i$. \f (2) The singular point set $C$ $(\subset S^4)$ of $f$ consists of double points. $C$ is a disjoint union of 2-discs $D^2_i (i=1,...,k)$. \f (3) Put $f^{-1}(D^2_j)=D^2_{jB}\amalg D^2_{jS}$. The 2-disc $D^2_{jS}$ is trivially embedded in the interior Int $D^3_\alpha$ of a 3-disc component $D^3_\alpha$. The circle $\partial D^2_{jB}$ is trivially embedded in the boundary $\partial D^3_\beta$ of a 3-disc component $D^3_\beta$. The 2-disc $D^2_{jB}$ is trivially embedded in the 3-disc component $D^3_\beta$. (Note that there are two cases, $\alpha=\beta$ and $\alpha\neq\beta$.) There are nonribbon 2-knots. (See e.g. \cite{C}.) It is trivial that, if a component of a 2-link is a nonribbon 2-knot, then the 2-link is a nonribbon 2-link. It is natural to ask: \vskip3mm \noindent {\bf Question} Is there a nonribbon 2-link all of whose components are ribbon knots? In particular, is there a nonribbon 2-link all of whose components are trivial knots? \vskip3mm We give an affirmative answer to this question.
\vskip3mm \noindent {\bf Theorem 1.1 } {\it There is a nonribbon 2-link $L=(K_1, K_2)$ such that $K_i$ is a trivial 2-knot ($i=1,2$). } \vskip3mm \noindent {\bf Note.} The announcement of Theorem 1.1 is in \cite{O1}. \section{ Band-sums} Let $L=(K_1, K_2)$ be a 2-link. A 2-knot $K_0$ is called a {\it band-sum } of the components $K_1$ and $K_2$ of the 2-link $L$ along a {\it band} $h$ if we have: \f (1) There is a 3-dimensional 1-handle $h$, which is attached to $L$, embedded in $S^4$. \f (2) There are a point $p_1\in K_1$ and a point $p_2\in K_2$. We attach $h$ to $K_1\amalg K_2$ along $p_1\amalg p_2$. $h\cap (K_1\cup K_2)$ is the attach part of $h$. Then we obtain a 2-knot from $K_1$ and $K_2$ by this surgery. The 2-knot is $K_0$. \section{ A sufficient condition of Theorem 1.1 } In \S4 and \S5 we prove: \vskip3mm \noindent {\bf Proposition 3.1 }{\it There is a 2-link $L=(K_1,K_2)$ such that \f (1) $K_i$ is a trivial 2-knot ($i=1,2$), and \f (2) a band-sum $K_3$ of the components $K_1,K_2$ of the 2-link $L$ is a nonribbon 2-knot. } \vskip3mm \noindent {\bf Claim 3.2 }{\it Proposition 3.1 implies Theorem 1.1. } \vskip3mm \noindent {\bf Proof of Claim 3.2. } By the definition of ribbon links, we have the following fact: If $L=(K_1, K_2)$ is a ribbon 2-link, then any band sum of $L=(K_1,K_2)$ is a ribbon 2-knot. The contrapositive proposition of this fact implies Claim 3.2. \section{ $Q(K)$ } Let $K$ be a 2-knot $\subset S^4$. We define a 2-knot $Q(K)$ for $K$. The 2-knot $Q(K)$ plays an important role in our proof as we state in the last paragraph of this section. Let $K\x D^2$ be a tubular neighborhood of $K$ in $S^4$, where $D^2$ is a disc. In Int $D^2$, take a compact oriented 1-dimensional submanifold $[-1,1]$. Take $K\x[-1,1]$ $\subset K\x$ Int$D^2$. We give an orientation to $K\x[-1,1]$. Let $D(K)$ be the 2-component 2-link $(K\x\{-1\}, K\x\{1\})$. 
Then $K\x[-1,1]$ is a Seifert hypersurface of the 2-link $D(K)$, where we give an orientation to $D(K)$ so that the orientation of $D(K)$ is compatible with that of $K\x[-1,1]$. In order to prove our main theorem, we construct some 2-knots, 2-links, and some subsets in $S^4$ from $D(K)$. For this purpose, we prepare the following $B^4$ and $F_\theta$. \f Let $B^4$ be a 4-ball $\subset S^4$. Put $B^4=$ $\{( x,y,z,w )\vert$ $0\leq x \leq1, 0\leq y \leq1, $ $z=r\cdot \cos\theta, w=r\cdot \sin\theta, $ $0\leq r \leq1, 0\leq\theta<2\pi \}$. \f Let $F _0=$ $\{( x,y,z,0 )\vert$ $0\leq x \leq1, 0\leq y \leq1, 0\leq z \leq1 \}\subset B^4$. \f Let $A=\{( x,y,0,0 )\vert$ $0\leq x \leq1, 0\leq y \leq1 \}\subset F_0\subset B^4$. \f We regard $B^4$ as the result of rotating $F_0$ around the axis $A$. For each $\theta$, we put $F_\theta=$ $\{(x,y,r\cdot \cos\theta, r\cdot \sin\theta)\vert$ $0\leq x \leq1, 0\leq y \leq1, 0\leq r \leq1,\theta$: fixed$\}$. We suppose that $B^4\cap D(K)$ satisfies the condition that, for each $\theta$, $F_\theta \cap D(K)$ is drawn as in Figure 4.1. \newpage \input 4.1.tex \vskip10mm \noindent {\bf Note.} In Figure 4.1, we suppose the following hold: The intersection $B^4\cap D(K)$ is a disjoint union of two 2-discs. Call them $D^2_1$ and $D^2_2$. The intersection $F_\theta \cap D(K)$ is two arcs. Call them $E_1$ and $E_2$. The boundary $\partial E_i$ is a set of two points $a_i\amalg b_i$, where $a_i$ is in $A$ and $b_i$ is in $F_0-A$. The 2-disc $D^2_i$ is the result of rotating $E_i$ around the axis $A$. The result of rotating $b_i$ is $\partial D^2_i$. Since $a_i$ is in the axis $A$, the result of rotating $a_i$ is the point $a_i$ itself. The point $b_i$ is in the boundary of $D^2_i$. The point $a_i$ is in the interior of $D^2_i$. \vskip3mm Let $Q(K)$ be a band-sum of the components $K\x\{-1\}$ and $K\x\{1\}$ of the 2-link $D(K)$ with the following properties. \f (1) The band $h$ is in $K\x$ Int$D^2$.
\f (2) $\{h-$(the attach part of $h$)$\}$$\cap (K\x[-1,1])=\phi$. \f (3) $\overline{Q(K)-B^4}=$ $\overline{D(K)-B^4}$. \f (4) $B^4\cap$ $(D(K)\cup h)$ ( =$B^4\cap$ $(K\x\{-1\}\cup h\cup K\x\{1\})$ ) satisfies the following conditions. (We summarize the conditions in Table 1.) \f For $\pi\leq\theta<2\pi$ and $\theta=0$, $F_\theta\cap( D(K)\cup h)$ is drawn as in Figure 4.2. \f For $0<\theta<\pi$, $F_\theta\cap(D(K)\cup h)$ is drawn as in Figure 4.1. \f (5) $B^4\cap h$ satisfies the following conditions. \f For $\pi\leq\theta<2\pi$ and $\theta=0$, $F_\theta\cap h$ is drawn as in Figure 4.3. \f For $0<\theta<\pi$, $F_\theta\cap h$ is empty. \f $B^4\cap$ (the attach part of $h$) is as follows. \f For $\pi\leq\theta<2\pi$ and $\theta=0$, $F_\theta\cap$ (the attach part of $h$) is drawn as in Figure 4.4. \f For $0<\theta<\pi$, $F_\theta\cap$ (the attach part of $h$) is empty. \f (6) $B^4\cap Q(K)$ satisfies the following conditions. (We summarize the conditions in Table 1.) \f For $\pi<\theta<2\pi$, $F_\theta\cap Q(K)$ is drawn as in Figure 4.5. \f For $\theta=0,\pi$, $F_\theta\cap Q(K)$ is drawn as in Figure 4.2. \f For $0<\theta<\pi$, $F_\theta\cap Q(K)$ is drawn as in Figure 4.1. \newpage \input 4.2.tex \newpage \input 4.3.tex \newpage \input 4.4.tex \newpage \input 4.5.tex \newpage \begin{tabular}{|l||l|l|l|l|} & {$\theta=0$} & {$0<\theta<\pi$} & {$\theta=\pi$} & {$\pi<\theta<2\pi$}\\ $B^4\cap$$D(K)$ &Figure 4.1 & Figure 4.1 & Figure 4.1&Figure 4.1 \\ $B^4\cap (D(K)\cup h)$ &Figure 4.2 & Figure 4.1 & Figure 4.2 &Figure 4.2 \\ $B^4\cap Q(K)$ &Figure 4.2 & Figure 4.1 & Figure 4.2 &Figure 4.5 \\ $B^4\cap L$ &Figure 4.5 & Figure 4.5 & Figure 4.5 &Figure 4.5 \\ $B^4\cap$ $(L\cup h')$ &Figure 4.2 & Figure 4.2 & Figure 4.2 &Figure 4.5 \\ $B^4\cap K_3$ &Figure 4.2 & Figure 4.1 & Figure 4.2 &Figure 4.5 \\ \end{tabular} \vskip1cm \hskip5cm Table 1 \noindent {\bf Note.} The following hold: \f (I) $h\cup (K\x[-1,1])$ is a Seifert hypersurface of the 2-knot $Q(K)$. 
\f (II) Let $A$ be a Seifert matrix of $Q(K)$ associated with the Seifert hypersurface $h\cup (K\x[-1,1])$. Since $h\cup (K\x[-1,1])$ is diffeomorphic to $\overline{(S^1\x S^2)-B^3}$, $A$ is a $(1\x1)$-matrix. By the construction of $Q(K)$, $A=(2)$ or $A=(-1)$ holds. Recall that whether $A$ is the $1\times1$-matrix $(2)$ or $(-1)$ depends on which orientation we give $Q(K)$. Recall that the orientation of $Q(K)$ is determined by that of $D(K)$. \f (III) $2t-1$ or $2-t$ represents the Alexander polynomial of the 2-knot $Q(K)$. (See \S F,G,H of \S 7 of \cite{Ro} for Seifert matrices of 2-knots and the Alexander polynomial of 2-knots.) Hence $Q(K)$ is a nontrivial 2-knot. \vskip3mm In \S5 we prove: \vskip3mm \noindent {\bf Lemma 4.1 }{\it Let $K$ be a 2-knot. There is a 2-link $L=(K_1, K_2)$ such that \f (1) $K_i$ is a trivial 2-knot ($i=1,2$), and \f (2) $Q(K)$ is a band-sum $K_3$ of the components $K_1$, $K_2$ of the 2-link $L$.} \vskip3mm The above $Q(K)$ is `a 2-knot $D(J,\gamma)$ whose $\gamma$ is sufficiently complicated' in \S4 of \cite{C}. Corollary 4.3 in \S4 of \cite{C} or the Example after Corollary 4.3 in \S4 of \cite{C} says that, for a 2-knot $K$, the above $Q(K)$ is a nonribbon 2-knot. Hence Lemma 4.1 implies Proposition 3.1. \section{ Proof of Lemma 4.1 } Let $L=(K_1, K_2)$ be a 2-link with the following conditions. \f (1) $(S^4-B^4)\cap L$=$(S^4-B^4)\cap D(K)$. \f (2) $B^4\cap L$ satisfies the condition that, for each $\theta$, $F_\theta\cap L$ is drawn as in Figure 4.5. (We summarize the conditions in Table 1.) \vskip3mm \noindent {\bf Note.} In Figure 4.5, the following hold: The two arcs are called $l_1$ and $l_2$. $l_i$ is a trivial arc. $l_1\cap A\neq\phi$. $l_2\cap A=\phi$. $K_i$ is made from $l_i$ by the rotation. \vskip3mm By the construction of $L=(K_1, K_2)$, $K_1$ satisfies the conditions: \f (1) $K_1\subset B^4$. \f (2) For each $\theta$, $F_\theta\cap K_1$ is drawn as in Figure 5.1. We prove $K_1$ is a trivial knot.
Because: Since $l_1$ is a trivial arc, $K_1$ is a spun knot of a trivial 1-knot. See \cite{Z} for spun knots. By the construction of $L=(K_1, K_2)$, $K_2$ satisfies the following conditions. \f (1) $(S^4-B^4)\cap K_2$=$(S^4-B^4)\cap D(K)$. \f (2) For each $\theta$, $F_\theta\cap K_2$ is drawn as in Figure 5.2. We prove $K_2$ is a trivial knot. Because: Let $P$ be the subset $(K\x[-1,1])-B^4$. Then $P$ is diffeomorphic to a 3-ball. Hence $\partial P$ is a trivial 2-knot. Since $l_2$ is a trivial arc, $K_2$ is equivalent to $\partial P$. Hence $K_2$ is a trivial 2-knot. Let $K_3$ be a band-sum of the components $K_1$ and $K_2$ of the 2-link $L$ with the following conditions. \f (1) The band $h'$ is in $B^4$. \f (2) $B^4\cap$$(L\cup h')$=$B^4\cap(K_1\cup h' \cup K_2)$ satisfies the following conditions. (We summarize the conditions in Table 1.) \f For $\pi<\theta<2\pi$, $F_\theta\cap(L\cup h')$ is drawn as in Figure 4.5. \f For $0\leq\theta\leq\pi$, $F_\theta\cap(L\cup h')$ is drawn as in Figure 4.2. \f (3) $B^4\cap h'$ satisfies the following conditions. \f For $\pi<\theta<2\pi$, $F_\theta\cap h'$ is empty. \f For $0\leq\theta\leq\pi$, $F_\theta\cap h'$ is drawn as in Figure 4.3. \f (4) Note that $h$ and $h'$ are dual handles of each other. \f (5) $B^4\cap K_3$ satisfies the following conditions. (We summarize the conditions in Table 1.) \f For $\pi<\theta<2\pi$, $F_\theta\cap K_3$ is drawn as in Figure 4.5. \f For $\theta=0,\pi$, $F_\theta\cap K_3$ is drawn as in Figure 4.2. \f For $0<\theta<\pi$, $F_\theta\cap K_3$ is drawn as in Figure 4.1. By the construction of this knot $K_3$ and the construction of $Q(K)$ in \S4, $K_3$ is identical to $Q(K)$. This completes the proof of Lemma 4.1, Proposition 3.1, and Theorem 1.1. \newpage \input 5.1.tex \newpage \input 5.2.tex \newpage \section{ Related problems } \noindent {\bf Problem 6.1.} Let $L=(K_1, K_2)$ be a 2-link. Then do we have $\mu(L)=\mu(K_1)+\mu(K_2)$?
\vskip3mm See \cite{R} \cite{O3} \cite{O4} for the $\mu$ invariant of 2-links and related topics. \vskip3mm \noindent {\bf Problem 6.2.} Is there a 2-link which is not an SHB link? \vskip3mm See \cite{CO} \cite{LO} for SHB links. The author proved in \cite{O3}: if $L=(K_1, K_2)$ is an SHB link, then the answer to Problem 6.1 is affirmative. \vskip3mm \noindent {\bf Problem 6.3.} Let $K_1, K_2, K_3$ be arbitrary 2-knots. Is there a 2-component 2-link $L=(L_1,L_2)$ such that $L_1$ (resp. $L_2$) is equivalent to $K_1$ (resp. $K_2$) and that a band-sum of $L$ is $K_3$? \vskip3mm If the answer to Problem 6.1 is affirmative, then the answer to Problem 6.3 is negative. In \cite{O2} the author gives a negative answer to the $n$-dimensional knot version of Problem 6.3 and proves the $n$-dimensional version of Theorem 1.1. The announcement of them is in \cite{O1}.
\section{{Introduction}} Typical neural network training starts with random initialization and is trained until reaching convergence in some local optimum. The final result is quite sensitive to the starting random seed, as reported in \cite{picard2021torch,wightman2021resnet}, who observed a 0.5\% difference in accuracy between the worst and the best seed on the Imagenet dataset and a 1.8\% difference on the CIFAR-10 dataset. Thus, one might need to run an experiment several times to avoid hitting an unlucky seed. The final selected network is just the one with the best validation accuracy. We believe that the discrepancy in performance between starting seeds can be explained by each initialization selecting slightly different features in the hidden layers. One might ask: can we somehow select better features for network training? One approach is to train a bigger network and then select the most important channels via channel pruning \cite{molchanov2016pruning,molchanov2019importance,luo2017thinet,yu2018nisp}. Training a big network, which is subsequently pruned, might in many cases be prohibitive, since increasing the network width by a factor of two results in a four times increase in FLOPs and also might require a change in some hyperparameters (e.g. regularization, learning rate). Here, we propose an alternative approach, demonstrated in Fig. \ref{fig:prune_vs_merge}. Instead of training a bigger network and pruning it, we will train two same-sized networks and merge them together into one. The idea is that each training run would fall into a different local optimum and thus have different sets of filters in each layer, as shown in Fig.~\ref{fig:filters}. We can then select a better set of filters than in the original networks and achieve better accuracy. \begin{figure}[t!]
\centering \includegraphics[width=0.7\textwidth,clip,trim=0 2cm 0 0]{figures/intro_merge.pdf} \caption{ \label{fig:prune_vs_merge} Comparison between a) training a bigger network and then pruning and b) training two separate networks and then merging them together. The width of a rectangle denotes the number of channels in the layer.} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth,clip,trim=3cm 0 3cm 0]{figures/filters.pdf} \caption{ \label{fig:filters} Sets of filters in the first layer of two ResNet20 networks trained on the CIFAR-100 dataset with different starting seeds. Each row shows filters from one network. Filters selected for the merged network are marked with a red outline. } \end{figure} In summary, in this paper: \begin{itemize} \item We propose a procedure for merging two networks with the same architecture into one. \item We experimentally validate the procedure. We demonstrate that it produces a network with better performance than the best of the original ones. On top of that, we also show that the resulting network is better than the same network trained for an extended number of epochs (matching the whole training budget for the merged network). \end{itemize} \subsection{\textbf{Related work}} There are multiple approaches that try to improve the accuracy/size tradeoff for neural networks without the need for specialized sparse computation (such as in the case of weight pruning). The most notable one is \textbf{channel pruning} \cite{molchanov2016pruning,molchanov2019importance,luo2017thinet,yu2018nisp}. Here we first train a bigger network and then select the most important channels in each layer. The selection process usually involves assigning a score to each channel and then removing the channels with the lowest scores. Another approach is \textbf{knowledge distillation} \cite{hinton2015distilling}. This involves first training a bigger network (teacher) and then using its outputs as targets for a smaller network (student).
It is hypothesized that by using the larger network's outputs, the smaller network can also learn hidden aspects of the data, which are not visible in the dataset labels. However, it was shown that successful knowledge distillation requires training for a huge number of epochs (e.g. 1200) \cite{beyer2021knowledge}. A slight twist to distillation was applied in \cite{nath2020better}, where bigger and smaller networks were co-trained together. One can also use auxiliary losses to reduce redundancy and increase diversity at various places in the neural network \cite{chen2022the}. \section{{Methods}} Here, we describe our training and merging procedure. We will denote the two networks to be merged as \textbf{teachers} and the resulting network as a \textbf{student}. Our training strategy is composed of three stages: \begin{enumerate} \item Training of two teachers \item \textbf{Merging procedure}, i.e. creating a student, which consists of the following substeps: \begin{enumerate} \item Layerwise concatenation of teachers into a big student \item Learning importance of big student neurons \item Compression of big student \end{enumerate} \item Fine-tuning of the student \end{enumerate} Training of the teachers and fine-tuning of the student are just standard training of a neural network by backpropagation. Below, we describe how we derive a student from two teachers. \subsection{\textbf{Layerwise concatenation of teachers into a big student}} First, we create a ``big'' student by layerwise concatenation of the teachers. The big student simulates the two teachers and averages their predictions in the final layer. This phase is just a network transformation without any training, see Fig.~\ref{fig:concatenation}. Concatenation of a convolutional layer is done in the channel dimension, see Fig.~\ref{fig:concatenation_conv}. Concatenation of a linear layer is done analogously in the feature dimension. We call the model ``big student'' because it has doubled width. \begin{figure}[t!]
\centering \includegraphics[width=0.7\textwidth,clip,trim=0 5cm 5cm 0]{figures/big_student.pdf} \caption{ \label{fig:concatenation} Concatenation of a linear layer. Orange and green weights are copies of the teachers' weights. Gray weights are initialized to zero. In the beginning, the big student simulates two separate computational flows. But during the training, they can be interconnected. } \end{figure} \begin{figure} \begin{lstlisting}
import torch.nn as nn

def merge_conv(conv1: nn.Conv2d, conv2: nn.Conv2d) -> nn.Conv2d:
    # Both teachers are assumed to share the same layer shapes.
    in_channels = conv1.in_channels
    out_channels = conv1.out_channels
    conv = nn.Conv2d(in_channels * 2, out_channels * 2,
                     kernel_size=conv1.kernel_size,
                     stride=conv1.stride,
                     padding=conv1.padding,
                     bias=False)
    # Zero init: the two computational flows start out independent.
    conv.weight.data *= 0
    conv.weight.data[:out_channels, :in_channels] = \
        conv1.weight.data.detach().clone()
    conv.weight.data[out_channels:, in_channels:] = \
        conv2.weight.data.detach().clone()
    return conv
\end{lstlisting} \caption{\label{fig:concatenation_conv} Pytorch code for concatenation of a convolutional layer in ResNet. Since convolutions are followed by Batch normalization, they do not use biases. } \end{figure} \subsection{\textbf{Learning importance of big student neurons}} We want the big student to learn to use only half of the neurons in every layer. So after the removal of the unimportant neurons, we will end up with the original architecture. Besides learning the relevance of neurons, we also want the two computational flows to interconnect. There are multiple ways to find the most relevant channels. One can assign scores to individual channels \cite{molchanov2016pruning,molchanov2019importance}, or one can use an auxiliary loss to guide the network to select the most relevant channels. We have chosen the latter approach, inspired by \cite{voita2019analyzing}. It leverages the L0 loss presented in \cite{louizos2017learning}. Let $\ell$ be a linear layer with $k$ input features. Let $g_i$ be the gate assigned to feature $f_i$.
A gate can be either open, $g_i = 1$ (the student is using the feature), or closed, $g_i = 0$ (the student is not using the feature). Before computing the outputs of the layer, we first multiply the inputs by the gates, i.e. instead of computing $Wf + b$, we compute $W(f\cdot g) + b$. To make our model use only half of the features, we want $\frac{1}{k}\sum_{i=1}^k g_i = \frac{1}{2}$. The problem with this approach is that $g_i$ is discrete and is not trainable by gradient descent. To overcome this issue, we used the stochastic gates and the continuous relaxation of the $L_0$ norm presented in \cite{louizos2017learning}. A stochastic gate contains a random variable that has a nonzero probability of being 0, $P[g_i=0]>0$, a nonzero probability of being 1, $P[g_i=1]>0$, and is continuous on the interval $(0,1)$. The reparameterization trick makes the distribution of the gates trainable by gradient descent. To force the big student to use only half of the features of the layer, we use an auxiliary loss: $$L_{half}^\ell = \left(\frac{1}{2} - \frac{1}{k}\sum_{i=1}^k P[g_i>0]\right)^2$$ Note that our loss is different from the loss used in \cite{louizos2017learning}. Whereas our loss forces the model to have exactly half of the gates open, their loss pushes the model to use as few gates as possible. Thus we are optimizing $L = L_{E} + \lambda \sum_\ell L_{half}^\ell$, where $L_{E}$ is the error loss measuring the fit on the dataset and the new hyperparameter $\lambda$ sets the relative importance of the error loss and the auxiliary loss. The hyperparameter $\lambda$ is sensitive and needs proper tuning. At the beginning of the training, it cannot be too big, or the student will set every gate to be closed with a probability of 0.5. At the end of the training, it cannot be too small, or the student will ignore the auxiliary loss in favor of the error loss; it will then use more than half of the neurons of the layer and performance will drop significantly after the compression.
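The gate mechanism and the auxiliary loss can be sketched as follows. This is a minimal NumPy sketch of the hard-concrete gates of \cite{louizos2017learning} together with the $L_{half}^\ell$ penalty above; the constants $\beta$, $\gamma$, $\zeta$ are the defaults suggested in that paper, and all function names are ours.

```python
import numpy as np

# Hard-concrete distribution constants (defaults from Louizos et al.).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def sample_gates(log_alpha, rng):
    """Sample stochastic gates g in [0, 1]; 0 and 1 both have nonzero mass."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)  # stretch and clip

def prob_open(log_alpha):
    """P[g_i > 0] in closed form (differentiable in log_alpha)."""
    return 1 / (1 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

def l_half(log_alpha):
    """Auxiliary loss pushing the layer to keep exactly half of its gates open."""
    return (0.5 - prob_open(log_alpha).mean()) ** 2

rng = np.random.default_rng(0)
log_alpha = np.zeros(8)            # one trainable parameter per input feature
g = sample_gates(log_alpha, rng)   # multiply layer inputs by g: W @ (f * g) + b
print(l_half(log_alpha))

# When half of the gates are confidently open and half confidently closed,
# the penalty is ~0:
confident = np.array([20.0] * 4 + [-20.0] * 4)
print(l_half(confident))
```

In training, `l_half` would be summed over the gate layers and added to the error loss with the weight $\lambda$; after training, the channels with the largest $P[g_i>0]$ are the ones kept at compression time.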
We found that a quadratic increase of $\lambda$ during the big student's training works sufficiently well, see Fig. \ref{fig:evolution_of_lambda}.
\begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{figures/evolition_of_lambda.pdf}
\caption{ \label{fig:evolution_of_lambda} Evolution of $\lambda$ during training. For the sine problem we have used ${\lambda_{t+1} = \lambda_t + 0.05 \sqrt{\lambda_t}}$ (where $t$ is the epoch number). }
\end{figure}
We have implemented the gates in a separate layer. We have used two designs of gate layers, one for 2D channels and one for 1D data. The position of the gate layers is critical. For example, if a gate layer is positioned right before the batch norm, its effect (e.g., multiplying a channel by 0.1) would be countered by the batch norm, see Fig.~\ref{fig:network_with_gates}.
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/gate_pos.pdf}
\caption{ \label{fig:network_with_gates} Positions of gate layers: a) sine problem, b) LeNet, c) two consecutive blocks in ResNet. Two of the ResNet gate layers have to be identical. If the layers were not linked and, for some channel $i$, the first gate were closed while the second gate were open, the result of the second block for that channel would be $0 + f(x)_i$ instead of $x_i + f(x)_i$, which would defeat the whole purpose of ResNet and skip connections.}
\end{figure}
\subsection{\textbf{Compression of big student}}
After the learning of importance is finished, we select the most important half of the neurons for every layer. Then, we compress each layer by keeping only the selected neurons, as visualized in Fig.~\ref{fig:compression}.
\begin{figure} \centering \includegraphics[width=0.7\textwidth,clip,trim=2cm 4.8cm 0 0]{figures/compress.pdf}
\caption{ \label{fig:compression} Compression of the big student. On the left side is a linear layer of a big student. The preceding and following gate layers decide which neurons are important.
On the right side is a compressed layer, consisting of only the important neurons. }
\end{figure}
\section{Experimental results}
First, we have tested our training strategy on a simple artificial regression problem to demonstrate that it can learn better features than standard training. Then we have tested it on the Imagewoof (Imagenet-1k restricted to 10 classes) dataset \cite{imagewoof,shleifer2019using} with the LeNet \cite{lecun1989handwritten} and ResNet18 \cite{he2016deep} architectures. We also test our approach on the CIFAR-100 dataset \cite{krizhevsky2009learning} using ResNet20 \cite{he2016deep}. Finally, we evaluate our approach on the Imagenet-1k dataset \cite{deng2009large}. We compare our training strategy with the strategies \textit{bo3 model} and \textit{one model}. We use the same number of epochs in all strategies. In our strategy, \textit{student}, we use two-thirds of the epochs to train the teachers and one-third to train the student (one-sixth to find important neurons and one-sixth to fine-tune). In the \textit{bo3 model} strategy we train three models, each for one-third of the epochs, and then choose the best. In the final strategy, \textit{one model}, we use all epochs to train one model.
\begin{table*}[t!] \centering
\caption{ \label{fig:statistics_of_all_experiments} Summary of experimental results on all tasks and architectures. For each task, all strategies used the same number of epochs. The strategy \textit{student} (our strategy) uses 2/3 of the epochs to train two teachers and 1/3 to train the student (1/6 finding important features, 1/6 fine-tuning). The strategy \textit{bo3 model} trains three models and picks the best. The strategy \textit{one model} uses all epochs to train one model. The strategies \textit{bo3 big teacher} and \textit{long big teacher} train models with doubled width.
}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline Task & Strategy & Min & Max & Median & Mean & Std \\ \hline\hline
\multirow{3}{4cm}{\centering Sine problem\\ fully~connected~network\\ test loss} & student & \textbf{0.049} & \textbf{0.116} & \textbf{0.078} & \textbf{0.077} & \textbf{0.015} \\ \cline{2-7}
& bo3 model & 0.105 & 0.373 & 0.253 & 0.240 & 0.076 \\ \cline{2-7}
& one model & 0.053 & 0.363 & 0.281 & 0.249 & 0.097 \\ \hline\hline
\multirow{3}{4cm}{\centering Imagewoof \\ LeNet \\ test acc} & student & \textbf{0.380} & \textbf{0.411} & \textbf{0.399} & \textbf{0.399} & 0.008 \\ \cline{2-7}
& bo3 model & 0.369 & 0.394 & 0.387 & 0.385 & \textbf{0.007} \\ \cline{2-7}
& one model & 0.365 & 0.397 & 0.377 & 0.378 & 0.010 \\ \hline\hline
\multirow{3}{4cm}{\centering Imagewoof \\ ResNet18 \\ test acc} & student & \textbf{0.824} & \textbf{0.828} & \textbf{0.825} & \textbf{0.826} & \textbf{0.002} \\ \cline{2-7}
& bo3 model & 0.801 & 0.809 & 0.807 & 0.806 & 0.003 \\ \cline{2-7}
& one model & 0.810 & 0.818 & 0.813 & 0.813 & 0.003 \\ \hline\hline
\multirow{3}{4cm}{\centering CIFAR-100 \\ ResNet20 \\ test acc} & student & \textbf{0.688} & \textbf{0.691} & \textbf{0.688} & \textbf{0.689} & 0.002 \\ \cline{2-7}
& bo3 model & 0.670 & 0.672 & 0.670 & 0.670 & \textbf{0.001} \\ \cline{2-7}
& one model & 0.670 & 0.679 & 0.675 & 0.675 & 0.004 \\ \hline
\end{tabular}
\end{table*}
\subsection{\textbf{Sine problem}}
We have created an artificial dataset: a noisy sine wave with five peaks. The input is a scalar $x$ and the target is $y = \sin(10\pi x)+z$, where $z \sim \mathcal{N}(0,\,0.2)$; see Fig. \ref{fig:sine}. Our architecture is composed of two linear layers (i.e., one hidden layer) with 100 hidden neurons, see Fig. \ref{fig:sine}. In this experiment, we want to confirm the idea that a network trained from a random initialization might end up in a suboptimal local optimum, while our merging procedure finds higher-quality local optima. For the placement of the gates in the big student, see Fig. \ref{fig:network_with_gates}.
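The dataset described above can be generated with a short sketch; the seed and reading $\mathcal{N}(0,\,0.2)$ as a standard deviation of 0.2 are our assumptions:

```python
import math
import random

def make_sine_dataset(n=10000, noise_std=0.2, seed=0):
    # x ~ U(0, 1), y = sin(10*pi*x) + z with z ~ N(0, noise_std);
    # the seed and the noise parameterization are illustrative assumptions.
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    ys = [math.sin(10 * math.pi * x) + rng.gauss(0.0, noise_std) for x in xs]
    return xs, ys
```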
In every strategy, we have used 900 epochs and SGD with a starting learning rate of 0.01 and momentum 0.9. We then decreased the learning rate to 0.001 after the 100th epoch for student fine-tuning, after the 250th epoch for the teachers and bo3 models, and after the 800th epoch for the model in \textit{one model}.
\begin{figure} \centering \includegraphics[width=0.7\textwidth,clip,trim=0 9cm 7cm 0]{figures/sine_ds.pdf}
\caption{ \label{fig:sine} a) Training dataset for the sine problem, consisting of 10000 samples where $x \sim \mathcal{U}(0,1)$ and $y = \sin(10\pi x) + z \, ; \: z \sim \mathcal{N}(0,\,0.2)$. b) Architecture of the model for the sine problem. }
\end{figure}
We have trained 50 models with every strategy. Our strategy has a significantly smaller error than the other strategies. For a more comprehensive comparison see Table~\ref{fig:statistics_of_all_experiments}; for a visualization of the test losses see Fig.~\ref{fig:sine_box_plot}. We also visualize the learned functions of all models in Fig.~\ref{fig:learned_curve}. It shows that our merged model avoids local collapses and always fits all of the sine peaks. Models trained by the other strategies miss some sine peaks.
\begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{figures/box_plot_sine.pdf}
\caption{ \label{fig:sine_box_plot} Box plot of the testing losses of 50 experiments on the sine problem. The vertical line inside the box represents the median, and the cross represents the mean. }
\end{figure}
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/learned_curves.pdf}
\caption{ \label{fig:learned_curve} Plots of the sine curves learned by models trained with different strategies. We plot every training result as one line and overlay them on top of each other. As we can see, all of the students resulting from merging capture all of the peaks, whereas models without merging often miss some peaks.}
\end{figure}
\subsection{\textbf{Imagewoof}}
Imagewoof \cite{imagewoof} is a subset of 10 classes from Imagenet (10 dog breeds).
On this recognition problem we have tested two architectures: LeNet and ResNet18.
\subsubsection{\textbf{LeNet}}
LeNet is composed of two convolutional layers followed by three linear layers. The shape of an input image is (28, 28, 3). The convolutional layers have 6 and 16 output channels, respectively. The linear layers have 400, 120, and 80 input features, respectively. For the architecture of the big student see Fig.~\ref{fig:network_with_gates}. Every strategy has used 6000 epochs cumulatively and SGD with a starting learning rate of 0.01 and momentum 0.9. Every training except finding important neurons (teachers, student fine-tuning, bo3 models, and one model) decreased the learning rate to 0.001 in the third quarter and to 0.0001 in the last quarter of the training. We have conducted 10 experiments, see Fig.~\ref{fig:imagewoof_lenet} for a visualisation and Table \ref{fig:statistics_of_all_experiments} for detailed statistics. Our strategy has better min, max, median, and mean testing accuracy. It has a greater sample variance than \textit{one model} as a consequence of an outlier, see Fig.~\ref{fig:imagewoof_lenet}.
\begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{figures/box_plot_imagewoof_lenet.pdf}
\caption{ \label{fig:imagewoof_lenet} Box plot of the testing accuracies of 10 experiments on Imagewoof with LeNet. }
\end{figure}
\subsubsection{\textbf{ResNet18}}
ResNet has two information flows (one through the blocks, one through the skip connections). Throughout the computation, its update is $x = f(x) + x$, instead of the original $x = f(x)$. To preserve this property, some gate layers have to be synchronized, i.e., they share weights and realizations of the random variables, see Fig.~\ref{fig:network_with_gates}. Every strategy has used 600 epochs cumulatively. The optimizer and the learning rate scheduler are analogous to the LeNet experiment.
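The piecewise-constant schedule used in these trainings (base rate for the first half, divided by 10 in the third quarter and by 100 in the last quarter) can be sketched with a small helper; the function name is ours:

```python
def lr_at(epoch: int, total_epochs: int, base_lr: float = 0.01) -> float:
    # Piecewise-constant schedule: base_lr for the first half of training,
    # base_lr/10 in the third quarter, base_lr/100 in the last quarter.
    if epoch < total_epochs // 2:
        return base_lr
    if epoch < 3 * total_epochs // 4:
        return base_lr / 10.0
    return base_lr / 100.0
```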
We have conducted 5 experiments, see Fig.~\ref{fig:imagewoof_resnet} for a visualisation and Table \ref{fig:statistics_of_all_experiments} for detailed statistics. Our strategy has better min, max, median, and mean testing accuracy, as well as a smaller sample variance.
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/box_plot_imagewoof_resnet.pdf}
\caption{ \label{fig:imagewoof_resnet} Results of 5 experiments on Imagewoof with ResNet18. The worst student (0.819) had a slightly better accuracy than the best long teacher (0.818).}
\end{figure}
\subsection{\textbf{CIFAR-100}}
We also tested our approach on the CIFAR-100 dataset using ResNet20. Our total training budget is 900 epochs. We optimize the models using SGD with a starting learning rate of 0.1, which we divide by 10 at the halfway and three-quarter points of the training of one network. We run all strategies 5 times and report the results in Table \ref{fig:statistics_of_all_experiments}. We can see that our strategy is more than $1\%$ better than training one model for an extended period of time.
\subsection{Imagenet}
\begin{table*} \centering
\caption{\label{fig:imagenet} Results on ResNet-18 on the Imagenet benchmark.}
\begin{tabular}{|c|c|c|c|c|}
\hline Teacher epochs & Big student epochs & Finetuning epochs & Total epochs & Validation accuracy \\\hline
- & - & 90 & 90 & 0.6976 \\\hline
- & - & 150 & 150 & 0.7028 \\\hline
20 & 20 & 90 & 150 & {\bf 0.7047} \\\hline
\end{tabular}
\end{table*}
We also tested our merging approach on the Imagenet-1k dataset \cite{deng2009large}. However, as seen in \cite{wightman2021resnet}, high-quality training requires 300 to 600 epochs, which is quite prohibitive. We opted for the approach from Torchvision \cite{Torchvision}, which achieves decent results in 90 epochs. We train the networks using SGD with a starting learning rate of 0.1, which decreases by a factor of 10 at one third and two thirds of the training. For the final fine-tuning of the student we used a slightly smaller starting learning rate of $0.07$.
For merging, we used a slightly different approach than in the previous experiments. We trained the teachers for only 20 epochs, which gives a teacher accuracy of around $65\%$. Then we spend 20 epochs tuning the big student and finding important neurons, and finally fine-tune for 90 epochs. With a total of 150 epochs, we get better results than ordinary training for 90 epochs, and also a better result than training for an equivalent 150 epochs. The results are summarized in Tab. \ref{fig:imagenet}.
\section{{Conclusions and future work}}
We propose a simple scheme for merging two neural networks trained from different initializations into one. Our scheme can be used as a finalization step after one trains multiple copies of one network with varying starting seeds. Alternatively, we can use our scheme to obtain higher-quality networks under a similar training budget, as we have demonstrated experimentally. One of the downsides of our scheme is that during the selection of important neurons we need to instantiate a rather big neural network. In the future, we would like to optimize this step to be more resource efficient. One option is to select important neurons in a layerwise fashion. Other possible options for future research include merging more than two networks and also merging networks pretrained on different datasets.
\section{{Acknowledgements}}
This research was supported by grant 1/0538/22 from the Slovak research grant agency VEGA.
\section{Introduction} Coronal mass ejections (CMEs) are large clouds of plasma and magnetic flux expelled from the Sun into the heliosphere. If directed towards Earth, they can cause significant space weather effects upon impact with the near-Earth environment. CMEs are believed to be ejected from the solar atmosphere as helical magnetic field structures known as flux ropes \citep[\textit{e.g.},][]{antiochos1999,moore2001,kliem2006,liu2008,vourlidas2014}. This flux rope structure is, however, not always observed in interplanetary space \citep[\textit{e.g.},][]{gosling1990,richardson2004b,huttunen2005}, purportedly because (1) CMEs often deform due to interactions with the ambient solar wind \citep[\textit{e.g.},][]{odstrcil1999,savani2010,manchester2017}, or with other CMEs \citep[\textit{e.g.},][]{burlaga2002,manchester2017}, (2) CMEs undergo magnetic flux erosion \citep{dasso2007,ruffenach2012}, or (3) due to the spacecraft crossing the flux rope far from its centre \citep[\textit{e.g.},][]{cane1997,jian2006,kilpua2011}. Interplanetary CMEs \citep[or ICMEs, \textit{e.g.},][]{kilpua2017b} that present, among other properties, enhanced magnetic fields, a monotonic rotation of the magnetic field direction through a large angle, small magnetic field fluctuations, and a low plasma temperature and plasma $\beta$ are often described and analysed using flux rope structures \citep[\textit{e.g.},][]{burlaga1981,rodriguez2016}. The geoeffectivity of an ICME depends significantly on its magnetic structure, and in particular on the North--South magnetic field component (\textit{i.e.}, $B_{Z}$). A southward $B_{Z}$ will cause reconnection at the dayside magnetopause, allowing the efficient transport of solar wind energy and plasma into the magnetosphere \citep[\textit{e.g.},][]{dungey1961,gonzalez1994,pulkkinen2007}. 
Strong geomagnetic storms occur when the interplanetary magnetic field points strongly southward (\textit{i.e.}, $B_{Z} < -10$ nT) for more than a few hours \citep[\textit{e.g.},][]{gonzalez1987}. Due to their coherent field rotation and their tendency for enhanced magnetic fields, flux ropes are one of the key interplanetary structures that create such conditions \citep[\textit{e.g.},][]{gosling1991,huttunen2005, richardson2012,kilpua2017a}. A major goal of space weather forecasting is to be able to predict the magnitude and direction of the southward $B_{Z}$ component before the ICME arrives at Earth. The first step in achieving this aim is to understand how the magnetic field of a flux rope is organised. The magnetic field of a flux rope can be described by two components: the helical field component, that wraps around the flux tube, and the axial field component, which runs parallel to the central axis. In addition, flux ropes can have either a left-handed or right-handed twist (chirality). Having knowledge of the flux rope chirality along with its orientation in space allows a flux rope to be classified as one of eight different ``types'', as described by \citet{bothmer1998} and \citet{mulligan1998}. Flux ropes that have their central axis more or less parallel to the ecliptic plane are called low-inclination flux ropes (in this case, the $B_{Z}$ component represents the helical field and thus its sign changes as the flux rope is crossed), while flux ropes that have their central axis more or less perpendicular to the ecliptic plane are called high-inclination flux ropes (in this case, the $B_{Z}$ component represents the axial field and thus its sign does not change). Figure \ref{fig:fr_types} shows the different flux rope types based on their chirality and orientation. There is a tendency for erupting CMEs to have negative (positive) helicity sign in the northern (southern) hemisphere. 
This pattern is known as the ``hemispheric helicity rule'' \citep{pevtsov2003}, but it holds only for about 60-75\% of cases \citep{pevtsov2014}. \begin{figure}[ht] \includegraphics[width=.99\linewidth]{figure1.pdf} \caption{Sketch representing the eight main flux rope types and how the helical (in red) and axial (in black) magnetic fields are related to each other for each type. Each letter describing a type represents one of the four directions (North, West, South, and East), while RH indicates right-handed and LH indicates left-handed helicity. This classification follows \citet{bothmer1998} and \citet{mulligan1998}.} \label{fig:fr_types} \end{figure} At present, it is not possible to determine the magnetic structure of erupting flux ropes in the corona from direct observations of the magnetic field. However, several indirect proxies based on EUV, X-ray, and photospheric magnetograms have been used to estimate the ``intrinsic'' flux rope type at the time of eruption. In several studies, such proxies have been used to estimate the magnetic structure of erupting CMEs, which have been compared to \textit{in situ} observations \citep[\textit{e.g.},][]{mcallister2001,yurchyshyn2001,mostl2008,palmerio2017}. These studies have been based either on observations alone or on observations combined with theoretical and/or empirical models. In order to reconstruct the intrinsic flux rope type, the chirality sign, the axis tilt (\textit{i.e.}, its inclination to the ecliptic), and the axial direction of the magnetic field have to be known. In a force-free magnetic field configuration like a flux rope, the total magnetic helicity is conserved \citep{woltjer1958}. Previous studies have suggested that the helicity sign, the total helicity, and the total magnetic flux of an ICME flux rope are related to those of its corresponding source region \citep[\textit{e.g.},][]{leamon2004,qiu2007,mostl2009,cho2013,hu2014,pal2017}. 
Hence, the property of magnetic helicity conservation can be used to assume that once the flux rope type at the Sun is determined, its chirality is maintained as the CME propagates from the Sun to Earth. \citet{palmerio2017} determined the magnetic structure of two CMEs both at the Sun and \textit{in situ}. The scheme presented in their work is based on the combination of multiwavelength remote-sensing observations in order to determine the chirality of the erupting flux rope and the inclination and direction of its axial field, thus reconstructing the intrinsic flux rope type. While, for the two eruptions under study, the flux rope type was the same when determined at the Sun as when measured \textit{in situ} at the Lagrange L1 point, this is not universally the case. CMEs can change their orientation due to deflections \citep[\textit{e.g.},][]{kay2013,wang2014}, rotations \citep[\textit{e.g.},][]{mostl2008,vourlidas2013,isavnin2014}, and deformations \citep[\textit{e.g.},][]{savani2010} in the corona and in interplanetary space, and this can alter the classification of the flux rope. CMEs can also change their direction, orientation, and shape due to interaction with other CMEs or corotating interaction regions \citep[CIRs,][]{lugaz2012,shen2012}. In addition, it is often difficult to predict how close a flux rope will cross Earth with respect to its nose and its central axis, and in some cases even whether a CME will encounter Earth at all \citep[\textit{e.g.},][]{mostl2014,mays2015,kay2017a}. In this work, we extend the study of \citet{palmerio2017}. In particular, we quantify the success of predicting flux rope types when neglecting CME evolution through a statistical analysis. The methods described by \citet{palmerio2017} provide a relatively quick and straightforward estimate of the flux rope type for space weather forecasting purposes. 
However, due to the potentially significant evolution of flux ropes in the corona and heliosphere through the previously described processes, the applicability of the approach has to be statistically evaluated. This is the key motivation for this study. We point out that irrespective of any direct correspondence that is found between intrinsic and \textit{in situ} flux rope types, the \citet{palmerio2017} scheme can provide a crucial input to semi-empirical CME models \citep[\textit{e.g.},][]{savani2015,savani2017,kay2016,kay2017a} or flux rope models used in numerical simulations \citep[\textit{e.g.},][]{shiota2016} that can capture the evolution. Apart from the CME evolution in the corona, changes in the axis orientation may be related to either global rotations of the whole CME body and/or to local deformations of the flux rope during its travel in the interplanetary medium, and/or to limitations of the methods used to determine the CME orientation both at the Sun and \textit{in situ}. This paper is organised as follows. In Section 2, we describe the spacecraft and ground-based data that we use, and also introduce the catalogue of events that we consider for this study. Then, we discuss in more detail the different methods that we have applied to determine the intrinsic flux rope type at the point of the eruption, from solar observations, and the \textit{in situ} analysis we performed. In Section 3, we apply our methods to 20 Earth-directed CMEs, by estimating the intrinsic flux rope type and comparing it to the magnetic structure measured near Earth. Finally, in Section 4, we discuss and summarize our results. \section{Data and Methods} \subsection{Spacecraft and Ground-based Data} We combine various remote-sensing observations to estimate the intrinsic flux rope type of the CMEs under study and to link the interplanetary structures to their solar origins. 
We use coronagraph images taken with the \textit{Large Angle Spectroscopic Coronagraph} \citep[LASCO:][]{brueckner1995} onboard the \textit{Solar and Heliospheric Observatory} \citep[SOHO:][]{domingo1995} and with the COR1 and COR2 coronagraphs that form part of the \textit{Sun Earth Connection Coronal and Heliospheric Investigation} \citep[SECCHI:][]{howard2008} instrument package onboard the \textit{Solar Terrestrial Relations Observatory} \citep[STEREO:][]{kaiser2008}. The Heliospheric Imagers \citep[HI:][]{eyles2009} onboard STEREO are also used, primarily to connect the CMEs with their corresponding ICMEs. We also use EUV/UV images and line-of-sight magnetograms taken with the \textit{Atmospheric Imaging Assembly} \citep[AIA:][]{lemen2012} and the \textit{Helioseismic and Magnetic Imager} \citep[HMI:][]{scherrer2012} instruments onboard the \textit{Solar Dynamics Observatory} \citep[SDO:][]{pesnell2012}. AIA takes images with a pixel size of 0.6 arcsec and a cadence of 12 seconds. HMI creates full-disc magnetograms using the 6173 \AA \, spectral line with a pixel size of 0.5 arcsec and a cadence of 45 seconds. During gaps in the AIA dataset, we use observations from the \textit{Sun-Watcher with Active Pixel System and Image Processing} \citep[SWAP:][]{berghmans2006} instrument onboard the \textit{Project for On Board Autonomy 2} (PROBA2) that images the Sun at 174 \AA \, with a cadence of one minute. Soft X-ray data are supplied by the \textit{X-Ray Telescope} \citep[XRT:][]{golub2007} onboard \textit{Hinode} \citep[Solar-B:][]{kosugi2007}. XRT has various focal plane analysis filters, detecting X-ray emission over a wide temperature range (from 1 to 10 MK). It provides images with a pixel size of two arcseconds. We use H$\alpha$ (6563 \AA) observations from the \textit{Global Oscillations Network Group} (GONG) and the \textit{Global High Resolution H$\alpha$ Network} (HANET).
GONG is a six-station network and HANET is a seven-station network of ground-based observatories located around the Earth to provide near-continuous observations of the Sun. \textit{In situ} measurements are taken from the \textit{Wind} satellite. In particular, we use the data from the \textit{Wind Magnetic Fields Investigation} \citep[MFI:][]{lepping1995} and the \textit{Wind Solar Wind Experiment} \citep[SWE:][]{ogilvie1995}, which provide 60-second and 90-second resolution data, respectively. Hourly disturbance storm time (\textit{Dst}) values are taken from the WDC for Geomagnetism, Kyoto, webpage (\url{http://wdc.kugi.kyoto-u.ac.jp/wdc/Sec3.html}). The events until 2013 are based on the final \textit{Dst} index, while those from 2014 and 2015 are based on the provisional \textit{Dst} index.
\subsection{Event Selection}
We searched the LINKed CATalogue (LINKCAT) for suitable events. LINKCAT is an output of the HELiospheric Cataloguing, Analysis and Techniques Service (HELCATS, \url{https://www.helcats-fp7.eu}) project and contains events in the time range May 2007 to December 2013. LINKCAT connects CMEs from their solar source to their \textit{in situ} counterparts using a geometrical fitting technique based on single-spacecraft data from the STEREO/HI instruments. CME tracks in HI time-elongation maps (so-called J-maps) are fitted using the Self-Similar Expansion Fitting (SSEF) method \citep{davies2012}, assuming a fixed angular half-width of $30^{\circ}$ for each CME. This yields estimates of a CME's propagation direction and radial speed. The LINKCAT catalogue consists of events where CMEs observed in HI imagery could be uniquely linked to CMEs observed in coronagraph and solar disc data and ICMEs detected \textit{in situ}. This was done by ensuring that the predicted impact of the CME based on SSEF is within $\pm 24$ hrs of the \textit{in situ} arrival time (often this is the shock arrival time).
Cases where two CMEs are predicted to arrive within this window, or two ICMEs are detected within the window, are excluded, eliminating potential CME--CME interaction events. More details can be found in the online information pertaining to the catalogue (see Sources of Data and Supplementary Material). When thinking about real-time prediction, it must be kept in mind that our study thus involves a down-selection to cases of a particular nature, and is based on science data. One of the ICME catalogues used to compile LINKCAT, in particular for CMEs detected towards Earth, is the Wind ICME catalogue (\url{https://wind.nasa.gov/ICMEindex.php}). For a validation of the use of the aforementioned HI-based SSEF technique to predict CME arrivals, see \citet{mostl2017}. Since SDO is our primary spacecraft for solar observations to study the CME source region, only the LINKCAT events that arrived at Earth after May 2010 are considered. During this period, LINKCAT contains 47 Earth-impacting events. We further consider only events that present a clear flux rope \textit{in situ}, \textit{i.e.} from which we are able to estimate the flux rope type by visual inspection. We are left with 12 CME--ICME pairs. Since LINKCAT is compiled in a semi-automated way, we also performed our own survey of on-disc CME signatures in SDO images for the events in the LINKCAT catalogue. Due to some restrictive assumptions (\textit{e.g.} the $30^{\circ}$ fixed angular half-width), LINKCAT does not include all possible CME--ICME pairs. Therefore, to find additional events for analysis we also searched other ICME catalogues, identifying ICMEs for which we could find the corresponding solar source over the period corresponding to SDO observations. In particular, we searched for additional \textit{in situ} flux ropes from the Wind ICME list and from the Near-Earth Interplanetary Coronal Mass Ejections list (\url{http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm}).
We scanned backwards from the time at which events were observed by the HI imagers, identifying corresponding signatures in images from the COR2 and COR1 coronagraphs, and finally searched for the source on the solar disc. For those events that were not in LINKCAT, we tracked the ICME backwards in time to the Sun assuming constant speed and radial propagation, and used HI imagery to follow the CME in the heliosphere. At this stage, we utilised the HELCATS ARRival CATalogue \citep[ARRCAT,][]{mostl2017}, that lists predicted arrivals of CMEs at various spacecraft and planets using the previously described STEREO/HI SSEF fitting technique. In the search for additional events, we also extended the time range of the data under consideration to December 2015. We identify eight additional events in this way (two due to the extension of the time range), bringing the total number of events in the study up to 20. We number the events (1--20) in chronological order of their launch times; the additional events correspond to those numbered 2, 3, 9, 10, 11, 15, 19, and 20. Event number 10 is a CME--CME interaction event in June 2012 for which the CME--ICME relation has been clarified in several previous studies \citep[\textit{e.g.},][]{kubicka2016,palmerio2017,james2017,srivastava2018}. Event number 18 is a lineup event which was also partly observed by MESSENGER, situated only a few degrees away from the Sun--Earth line \citep{moestl2018}. \subsection{Intrinsic Flux Rope Type Determination} \label{subsec:frtype} As mentioned in the Introduction, in order to determine the magnetic flux rope type of an erupting CME, three parameters are needed: the chirality, the axis orientation, and the axial field direction. 
The chirality can be inferred from several multi-wavelength proxies: magnetic tongues \citep{lopezfuentes2000,luoni2011}, X-ray and/or EUV sigmoids and/or sheared arcades \citep[\textit{e.g.},][]{rust1996,canfield1999,green2007}, the skew of coronal arcades \citep{mcallister1998,martin2012}, flare ribbons \citep{demoulin1996}, and filament details \citep{martin1994,martin1996,chae2000}. For a detailed description of these helicity proxies, see \citet{palmerio2017}. The inclination of the flux rope axis with respect to the ecliptic, $\tau$, is taken to be the average of the orientation of the polarity inversion line \citep[PIL,][]{marubashi2015} and the orientation of the post-eruption arcades \citep[PEAs,][]{yurchyshyn2008}, in the range $[-90^{\circ},90^{\circ}]$. The tilt angle $\tau$ is measured from the solar East, and assumes a positive (negative) value if the acute angle to the ecliptic is to the North (South). For source regions where the PIL can easily be approximated as a straight line (\textit{e.g.}, quiet Sun and magnetically simple active regions), we determine the PIL orientation by eye, \textit{i.e.} we determine the location where the polarity of the magnetic field reverses, and approximate it as a straight line. When the PIL is more curved and/or complex, we smooth the data over square bins containing variable numbers of pixels, overplot the locations where $B_{r} = 0$, and then estimate the orientation of the resulting PIL. For source regions located between $\pm 30^{\circ}$ in longitude on the solar disc, we use HMI line-of-sight data. For source regions located closer to the limb, in order to reduce the projection effects, we use \textit{Space-weather HMI Active Region Patch} \citep[SHARP:][]{bobra2014} data, derived with the series \textit{hmi.sharp\_cea\_720s} where the vector \textbf{B} has been remapped onto a Lambert Cylindrical Equal-Area (CEA) projection. 
Similarly, the orientation of the PEAs is determined by eye for source regions located between $\pm 30^{\circ}$ in longitude on the solar disc, while for regions located nearer the limb, we correct the projection effects by first converting two points on the arcade axis from Helioprojective-Cartesian to Heliographic coordinates. Then, we apply to the axis the vector rotation operator ``rotate'', defined as \begin{equation} \text{rotate}(\mathbf{\hat{v}},\mathbf{\hat{a}},\gamma)=\mathbf{\hat{v}}\cos{\gamma}+(\mathbf{\hat{v}}\cdot\mathbf{\hat{a}})(1-\cos{\gamma})\mathbf{\hat{a}}+[\mathbf{\hat{a}}\times\mathbf{\hat{v}}]\sin{\gamma} \, , \end{equation} which rotates the arcade axis, $\mathbf{\hat{v}}$, counterclockwise around its median, $\mathbf{\hat{a}}$, by a tilt angle, $\gamma$ \citep{isavnin2013}. We rotate the axis until it becomes parallel to the ecliptic. The total rotation corresponds to the unprojected tilt of the arcade's axis. For some events, we could only estimate the orientation of the axis from the PIL direction, because PEAs were either too short or not visible. When we have obtained the average orientation between PIL and PEAs, we assume: \begin{enumerate} \item $0^{\circ}\leq |\tau| <35^{\circ} \Rightarrow$ low-inclination flux rope \item $35^{\circ}\leq |\tau |\leq 55^{\circ} \Rightarrow$ intermediate flux rope \item $55^{\circ}< |\tau| \leq 90^{\circ} \Rightarrow$ high-inclination flux rope \end{enumerate} Finally, we check the direction of the axial field by looking at coronal dimmings in EUV difference images and identifying in which magnetic polarities they are rooted. Then, the magnetic field direction is defined from the positive polarity to the negative one. When the three parameters are known, we can reconstruct the flux rope type at the point of the eruption. 
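As an illustration, the rotation operator of Equation (1) and the inclination classification above can be written out numerically; this is a sketch of ours, not the authors' implementation:

```python
import math

def rotate(v, a, gamma):
    """Rotate unit vector v counterclockwise around unit axis a by angle gamma:
    v*cos(g) + (v.a)*(1 - cos(g))*a + (a x v)*sin(g)  (Equation 1)."""
    c, s = math.cos(gamma), math.sin(gamma)
    dot = sum(vi * ai for vi, ai in zip(v, a))
    cross = (a[1] * v[2] - a[2] * v[1],
             a[2] * v[0] - a[0] * v[2],
             a[0] * v[1] - a[1] * v[0])
    return tuple(vi * c + dot * (1 - c) * ai + xi * s
                 for vi, ai, xi in zip(v, a, cross))

def classify_inclination(tau_deg):
    """Classify a flux rope by the tilt |tau| of its axis to the ecliptic."""
    t = abs(tau_deg)
    if t < 35.0:
        return "low"
    if t <= 55.0:
        return "intermediate"
    return "high"
```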
\subsection{\textit{In situ} Flux Rope Type Identification} \label{subsec:insitu} The CME flux rope type at the time of the eruption is compared to the magnetic configuration of the corresponding ICME. First, we analyse, by eye, the magnetic field components of the ICME observed \textit{in situ} in both Cartesian ($B_{x}$, $B_{y}$, $B_{z}$) and angular ($B_{\theta}$, $B_{\phi}$) geocentric solar ecliptic (GSE) coordinates, and make a first estimate of the type of the \textit{in situ} flux rope. We then apply minimum variance analysis \citep[MVA,][]{sonnerup1967} to the \textit{in situ} measurements during the flux rope interval, to estimate the orientation of the flux rope axis (latitude, $\theta_{\text{MVA}}$, and longitude, $\phi_{\text{MVA}}$) and obtain its helicity sign. The latter is done by inspection of the direction of the magnetic field rotation in the intermediate-to-maximum plane. The flux rope axis corresponds to the MVA intermediate variance direction, where $\theta_{\text{MVA}} = 90^{\circ}$ is defined as being northward and $\phi_{\text{MVA}} = 90^{\circ}$ is defined as being eastward. We apply the MVA to 20-minute averaged magnetic field data. We also consider the intermediate-to-minimum eigenvalue ratio ($\lambda_{2}/\lambda_{3}$) resulting from MVA. MVA can be considered most reliable when $\lambda_{2}/\lambda_{3} \geq 2$ \citep[\textit{e.g.},][]{lepping1980,bothmer1998,huttunen2005}. As a proxy for the spacecraft crossing distance from the flux rope central axis (or impact parameter), we calculate the ratio of the average magnetic field component along the minimum variance direction to the average total magnetic field in the MVA frame \citep{gulisano2007,demoulin2009}, $\langle|B_{\min}|\rangle/\langle B\rangle$. We average the quantities along the whole flux rope interval. A higher ratio indicates that the flux rope has been crossed progressively farther from its central axis, and implies a larger bias in the estimated flux rope orientation. 
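As a rough sketch of the MVA procedure just described (our own simplified illustration, not the exact pipeline applied to the \textit{Wind} data): eigen-decompose the magnetic variance matrix, take the intermediate-variance eigenvector as the flux rope axis, and form the $\lambda_{2}/\lambda_{3}$ and $\langle|B_{\min}|\rangle/\langle B\rangle$ diagnostics.

```python
import numpy as np

def mva(B):
    """Minimum variance analysis of an (N, 3) magnetic field time series.
    Returns eigenvalues in descending order and eigenvectors as columns."""
    M = np.cov(B, rowvar=False, bias=True)   # magnetic variance matrix
    w, V = np.linalg.eigh(M)                 # eigh returns ascending order
    order = np.argsort(w)[::-1]
    return w[order], V[:, order]

def mva_diagnostics(B):
    """Flux rope axis (intermediate-variance direction), reliability ratio
    lambda2/lambda3, and the impact parameter proxy <|B_min|>/<B>."""
    w, V = mva(B)
    axis = V[:, 1]                           # intermediate eigenvector = axis
    ratio = w[1] / w[2]
    impact = np.mean(np.abs(B @ V[:, 2])) / np.mean(np.linalg.norm(B, axis=1))
    return axis, ratio, impact
```

For a synthetic series with variances ordered $x > y > z$, the recovered axis is close to $\pm\hat{y}$; the eigenvector sign is ambiguous, so in practice the axial field direction is fixed by inspecting the field rotation, as described above.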
As a proxy for the spacecraft crossing distance from the nose of the flux rope, we calculate the location angle, L, defined by \citet{janvier2013} as \begin{equation} \sin{\text{L}} = \cos{\theta_{\text{MVA}}}\,\cos{\phi_{\text{MVA}}} \, . \end{equation} The location angle ranges from $\text{L} \approx -90^{\circ}$ in one leg, through $\text{L} \approx 0^{\circ}$ at the nose, to $\text{L}\approx 90^{\circ}$ in the other leg. Finally, we check the minimum value of the disturbance storm time (\textit{Dst}) index related to each event. We only quote \textit{Dst}$_{\text{min}}$ for events with \textit{Dst}$_{\text{min}} < -50$ nT. We consider those events with $-50$ nT > \textit{Dst}$_{\text{min}}$ > $-100$ nT as moderate storms, and those for which \textit{Dst}$_{\text{min}} \leq -100$ nT as intense storms. \subsection{Orientation Angles} The next step is to compare the orientations of the CME axis at the Sun and \textit{in situ}. Regarding the former, we convert the tilt angle, $\tau$, into the orientation angle, $\alpha_{\text{SUN}}$, which lies within the range $[-180^{\circ},180^{\circ}]$. $\alpha_{\text{SUN}}$ is derived from $\tau$ by taking into account in which direction the flux rope axial field is pointing, which was previously estimated from coronal dimmings (see Section \ref{subsec:frtype}). The orientation angle is calculated from the positive East direction, clockwise for positive values and counterclockwise for negative values. \citet{yurchyshyn2008} determined the flux rope orientation of 25 CME events at the Sun from PEAs only, and estimated that the PEAs angles were measured with an accuracy of $\pm 10^{\circ}$ for 19 events, and $\pm 90^{\circ}$ for the remaining six. 
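The location angle defined above reduces to a one-line function (the function name is ours). As a consistency check, the Event 10 axis reported later in Table \ref{tab:ICME_types}, $(\theta_{\text{MVA}}, \phi_{\text{MVA}}) = (-28^{\circ}, 99^{\circ})$, yields $\text{L} \approx -8^{\circ}$, \textit{i.e.} a near-nose crossing:

```python
import math

def location_angle(theta_mva_deg, phi_mva_deg):
    """Location angle L in degrees: ~0 at the flux rope nose, ~+/-90 in the
    legs, defined through sin L = cos(theta_MVA) * cos(phi_MVA)."""
    s = (math.cos(math.radians(theta_mva_deg))
         * math.cos(math.radians(phi_mva_deg)))
    return math.degrees(math.asin(s))
```

Similarly, the Event 1 axis $(-59^{\circ}, 234^{\circ})$ gives $\text{L} \approx -18^{\circ}$, matching the tabulated value.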
Since our flux rope orientations at the Sun are determined by a combination of PIL and PEAs, we estimate that the tilt angles were measured with an accuracy between $\pm 5^{\circ}$ (for the cases where PIL and PEAs had an almost identical orientation) and $\pm 15^{\circ}$--$20^{\circ}$ (for the cases when we could only use the PIL direction, or the PIL and PEAs directions had a larger angular separation). Regarding the orientation of the \textit{in situ} flux rope at the Lagrange L1 point, we project the axis resulting from the MVA analysis onto a 2D plane that corresponds to the YZ-plane in GSE coordinates. We then measure the \textit{in situ} clock angle orientation, $\alpha_{\text{L1}}$, within the range $[-180^{\circ},180^{\circ}]$ as for $\alpha_{\text{SUN}}$. The MVA fittings introduce an error of $\pm 5^{\circ}$--$10^{\circ}$ when the spacecraft crosses the flux rope axis approximately perpendicularly. However, for crossings that are progressively farther from the central axis, the error on the estimated flux rope axis orientation can be up to $\pm 90^{\circ}$ \citep{owens2012}. In particular, \citet{gulisano2007} studied in detail the bias introduced in MVA fittings for flux ropes. They found that $\theta_{\text{MVA}}$ is best determined for flux ropes that have their axis close to the ecliptic plane and nearly perpendicular to the Sun-Earth line. Moreover, the angle $\eta$ between the true flux rope orientation and the MVA-generated one is $\eta\approx 3^{\circ}$ for a spacecraft crossing a cloud within 30\% of its radius, and $\eta \lesssim 20^{\circ}$ for an impact parameter as high as 90\% of the flux rope radius. One of the main issues in flux rope fittings with MVA is, therefore, the fact that the impact parameter is unknown. \section{Results} \label{sec:results} The source regions of the 20 analysed CMEs have the following properties: \begin{itemize} \item 10 (50\%) CMEs erupted from the Northern hemisphere and 10 (50\%) from the Southern hemisphere. 
\item 14 (70\%) CMEs erupted from an active region, two (10\%) from between two active regions, and four (20\%) from a quiet Sun filament. \item 18 (90\%) of source regions followed the hemispheric helicity rule, while two (10\%) did not. \end{itemize} Table \ref{tab:helicity_proxies} shows which helicity sign proxies were used for each event. The proxy that we could use the most (applicable to 18 events or 90\%) is the skew of the coronal arcades. This is not surprising, considering that most CMEs are associated with arcades before and/or after an eruption. These arcades can either be the coronal loops that overlie the eruptive structure or arcades that form under the CME due to magnetic reconnection after it is ejected. In a few cases, however, the arcade skew was not clear enough to be used as a helicity proxy. Clear S-shaped features were found for 14 (70\%) events. We consider here both sheared arcades and sigmoids, which are structures that can be seen in X-ray and sometimes also in EUV. Sheared arcades are multi-loop systems, while sigmoids are single-loop S-shaped structures \citep[\textit{e.g.},][]{green2007}. Sigmoids and arcades that have forward (reverse) S-shape indicate positive (negative) helicity. Another popular chirality proxy is the use of flare ribbons. We were able to use this proxy for 11 (55\%) events. It is worth remarking that flare ribbons can be used to estimate the helicity sign of a CME and its source region if they form clear J-shapes, where a forward (reverse) J indicates positive (negative) helicity, or if they are significantly shifted along the PIL. A filament association was found for 12 (60\%) CMEs, and for all of these we were able to use filament characteristics to estimate the chirality. We analysed both H$\alpha$ details, \textit{i.e.} filament spine shape and barbs, and EUV details, \textit{i.e.} the crossings of dark and bright threads. 
H$\alpha$ characteristics are mostly visible in quiet Sun filaments, while absorption and emission threads are mostly visible in active region filaments. Only for one event (Event 9) were we able to analyse the filament successfully both in H$\alpha$ and EUV. The least applicable proxy involves the use of magnetic tongues. We were only able to apply this technique to three (15\%) events. This is expected, as magnetic tongues are only visible in emerging active regions. Finally, we emphasize that, for each analysed event, all helicity sign proxies agree with one another. \begin{table} \caption{A summary of the chirality determinations used for each of the CMEs studied. The table shows, from left to right: event number, Solar Object Locator (SOL) and eruption time rounded to the nearest hour, followed by the chirality determined from each proxy: magnetic tongues, filament details visible in H$\alpha$, absorption and emission filament threads visible in EUV, S-shaped structures (sheared arcades or sigmoids) in EUV or X-rays, the skew of coronal loops, and J-shaped flare ribbons.} \label{tab:helicity_proxies} \begin{tabular}{l @{\hskip 0.1in}l c c c c c c} \toprule \# & Eruption & Tongues & H$\alpha$-fil & EUV-fil & S-shape & Skew & Ribbons\\ \midrule 1 & SOL2010-05-23, 17 UT & - & LH & - & - & LH & - \\ 2 & SOL2011-03-25, 06 UT & - & - & - & - & RH & - \\ 3 & SOL2011-06-02, 07 UT & - & - & RH & RH & RH & - \\ 4 & SOL2011-09-13, 22 UT & - & - & - & LH & LH & - \\ 5 & SOL2011-10-22, 01 UT & - & LH & - & - & LH & LH \\ 6 & SOL2012-01-19, 14 UT & - & - & - & - & LH & LH \\ 7 & SOL2012-03-10, 17 UT & LH & - & LH & LH & LH & LH \\ 8 & SOL2012-03-13, 17 UT & LH & - & LH & LH & LH & LH \\ 9 & SOL2012-05-11, 23 UT & - & RH & RH & RH & RH & RH \\ 10 & SOL2012-06-14, 13 UT & RH & - & - & RH & RH & - \\ 11 & SOL2012-07-04, 17 UT & - & - & LH & LH & LH & LH \\ 12 & SOL2012-07-12, 16 UT & - & - & RH & RH & RH & RH \\ 13 & SOL2012-10-05, 00 UT & - & 
- & - & RH & - & - \\ 14 & SOL2012-10-08, 21 UT & - & - & - & LH & LH & - \\ 15 & SOL2012-10-27, 12 UT & - & - & - & - & RH & - \\ 16 & SOL2013-01-13, 00 UT & - & - & - & RH & RH & - \\ 17 & SOL2013-04-11, 07 UT & - & - & LH & LH & - & LH \\ 18 & SOL2013-07-09, 14 UT & - & LH & - & LH & LH & LH \\ 19 & SOL2014-08-15, 16 UT & - & RH & - & - & RH & RH \\ 20 & SOL2015-12-16, 08 UT & - & - & RH & RH & RH & RH \\ \bottomrule \end{tabular} \end{table} Table \ref{tab:CME_types} lists the estimated flux rope types at the Sun and Table \ref{tab:ICME_types} the local flux rope types observed \textit{in situ}. We note that the chirality of the intrinsic flux rope and \textit{in situ} flux rope matched for all 20 events, including the two events that did not follow the hemispheric helicity rule. This result is expected, as the helicity sign should be preserved during interplanetary propagation, and it also gives further confirmation that our indirect helicity proxies derived from solar observations are correct. For two events (numbers 6 and 16), the MVA intermediate-to-minimum eigenvalue ratio was $\lambda_{2}/\lambda_{3} < 2$, but the flux rope orientation resulting from MVA agreed with the flux rope type obtained from visual inspection. \begin{sidewaystable} \caption{The results of the analysis of the magnetic structure of the flux rope on the Sun. 
The table shows, from left to right: event number, Solar Object Locator (SOL), eruption time rounded to the nearest hour, CME source (QS: Quiet Sun, NH: Northern Hemisphere, SH: Southern Hemisphere, AR: Active Region), chirality of the erupting flux rope, whether the chirality follows the hemispheric helicity rule (HHR), inclination of the PIL, inclination of the PEAs, average tilt of the axis with respect to the ecliptic plane, direction of the axial field, and erupting flux rope type.} \label{tab:CME_types} \begin{tabular}{l @{\hskip 0.2in}l c c c c c c c c c} \toprule \# & CME & & & & & & & & & \\ \cmidrule(r){2-11} & SOL & Eruption time & Source & Chirality & HHR & PIL & PEAs & Tilt & Axial field & FR type \\ \midrule 1 & SOL2010-05-23 & 17 UT & QS, NH & LH & Yes & $38^{\circ}$ & $50^{\circ}$ & $44^{\circ}$& Southwest & WSE/NWS \\ 2 & SOL2011-03-25 & 06 UT & AR 11176 & RH & Yes & $-86^{\circ}$ & -- & $-86^{\circ}$& South & ESW \\ 3 & SOL2011-06-02 & 07 UT & AR 11226/11227 & RH & Yes & $-45^{\circ}$ & -- & $-45^{\circ}$& Northwest & WNE/SWN \\ 4 & SOL2011-09-13 & 22 UT & AR 11289 & LH & Yes & $40^{\circ}$ & $40^{\circ}$ & $40^{\circ}$& Southwest & WSE/NWS \\ 5 & SOL2011-10-22 & 01 UT & QS, NH & LH & Yes & $32^{\circ}$ & $34^{\circ}$ & $33^{\circ}$& East & SEN \\ 6 & SOL2012-01-19 & 14 UT & AR 11402 & LH & Yes & $-80^{\circ}$ & $-88^{\circ}$ & $-84^{\circ}$& South & WSE \\ 7 & SOL2012-03-10 & 17 UT & AR 11429 & LH & Yes & $26^{\circ}$ & $38^{\circ}$ & $32^{\circ}$ & East & SEN \\ 8 & SOL2012-03-13 & 17 UT & AR 11429 & LH & Yes & $40^{\circ}$ & $46^{\circ}$ & $43^{\circ}$& Northeast & ENW/SEN \\ 9 & SOL2012-05-11 & 23 UT & small AR, SH & RH & Yes & $-65^{\circ}$ & $-65^{\circ}$ & $-65^{\circ}$ & South & ESW \\ 10 & SOL2012-06-14 & 13 UT & AR 11504 & RH & Yes & $-30^{\circ}$ & -- & $-30^{\circ}$& East & NES \\ 11 & SOL2012-07-04 & 17 UT & AR 11513 & LH & Yes & $46^{\circ}$ & $36^{\circ}$ & $41^{\circ}$& Southwest & WSE/NWS \\ 12 & SOL2012-07-12 & 16 UT & AR 11520 & 
RH & Yes & $-30^{\circ}$ & $-14^{\circ}$ & $-22^{\circ}$& East & NES \\ 13 & SOL2012-10-05 & 00 UT & AR 11582/11584 & RH & Yes & $-73^{\circ}$ & -- & $-73^{\circ}$& South & ESW \\ 14 & SOL2012-10-08 & 21 UT & AR 11585 & LH & No & $47^{\circ}$ & -- & $47^{\circ}$& Northeast & ENW/SEN \\ 15 & SOL2012-10-27 & 12 UT & AR 11598 & RH & Yes & $-50^{\circ}$ & -- & $-50^{\circ}$& Southeast & ESW/NES \\ 16 & SOL2013-01-13 & 00 UT & AR 11654 & RH & No & $-88^{\circ}$ & -- & $-88^{\circ}$& North & WNE \\ 17 & SOL2013-04-11 & 07 UT & AR 11719 & LH & Yes & $60^{\circ}$ & $50^{\circ}$ & $55^{\circ}$& Southwest & WSE/NWS \\ 18 & SOL2013-07-09 & 14 UT & QS, NH & LH & Yes & $47^{\circ}$ & $53^{\circ}$ & $50^{\circ}$& Southwest & WSE/NWS \\ 19 & SOL2014-08-15 & 16 UT & QS, SH & RH & Yes & $82^{\circ}$ & $70^{\circ}$ & $76^{\circ}$& North & WNE \\ 20 & SOL2015-12-16 & 08 UT & AR 12468 & RH & Yes & $-32^{\circ}$ & $-24^{\circ}$ & $-28^{\circ}$ & East & NES \\ \bottomrule \end{tabular} \end{sidewaystable} \begin{sidewaystable} \caption{The results of the analysis of the magnetic structure of the flux rope \textit{in situ}. 
The table shows, from left to right: arrival time of the ICME flux rope leading edge, time of the ICME flux rope trailing edge, chirality of the \textit{in situ} flux rope, flux rope axis from MVA in the form (latitude, longitude), MVA intermediate-to-minimum eigenvalue ratio, ratio of the MVA minimum variance component to the total magnetic field (proxy for the impact parameter or crossing distance from the ICME axis), location angle (proxy for the crossing distance from the ICME nose), minimum \textit{Dst} index value (only for events \textit{Dst} < -50), and \textit{in situ} flux rope type from visual inspection.} \label{tab:ICME_types} \begin{tabular}{l @{\hskip 0.2in}l c c c c c c c c} \toprule \# & ICME & & & & & &\\ \cmidrule(r){2-10} & Leading Edge & Trailing Edge & Chirality & MVA Axis & $\lambda_{2}/\lambda_{3} $ & $\langle|B_{\min}|\rangle/\langle B\rangle$ & L-angle &\textit{Dst}$_{\text{min}}$ & FR type\\ \midrule 1 & 2010-05-28, 19:10 & 2010-05-29, 16:50 & LH & ($-59^{\circ}$, $234^{\circ}$) & 17.9 & 0.08 & $-18^{\circ}$ & $-80$ & WSE\\ 2 & 2011-03-30, 00:25 & 2011-04-01, 15:05 & RH & ($17^{\circ}$, $119^{\circ}$) & 2.9 & 0.13 & $-28^{\circ}$ & -- & NES\\ 3 & 2011-06-05, 01:58 & 2011-06-05, 08:55 & RH & ($68^{\circ}$, $135^{\circ}$) & 3.9 & 0.10 & $-15^{\circ}$ & -- & WNE\\ 4 & 2011-09-17, 15:38 & 2011-09-18, 08:46 & LH & ($46^{\circ}$, $70^{\circ}$) & 4.5 & 0.19 & $14^{\circ}$ & $-72$ & ENW/SEN\\ 5 & 2011-10-25, 00:30 & 2011-10-25, 17:09 & LH & ($74^{\circ}$, $56^{\circ}$) & 2.7 & 0.22 & $9^{\circ}$ & $-147$ & ENW\\ 6 & 2012-01-22, 11:40 & 2012-01-23, 07:55 & LH & ($-49^{\circ}$, $263^{\circ}$) & 1.9 & 0.48 & $-5^{\circ}$ & $-71$ & NWS/WSE\\ 7 & 2012-03-12, 10:05 & 2012-03-12, 14:55 & LH & ($-16^{\circ}$, $35^{\circ}$) & 2.6 & 0.45 & $52^{\circ}$ & $-64$ & SEN\\ 8 & 2012-03-15, 15:52 & 2012-03-16, 14:06 & LH & ($65^{\circ}$, $105^{\circ}$) & 2.2 & 0.39 & $-6^{\circ}$ & $-88$ & ENW\\ 9 & 2012-05-16, 16:00 & 2012-05-17, 22:20 & RH & ($46^{\circ}$, 
$271^{\circ}$) & 27.9 & 0.17 & $1^{\circ}$ & -- & SWN/WNE\\ 10 & 2012-06-16, 22:10 & 2012-06-17, 12:30 & RH & ($-28^{\circ}$, $99^{\circ}$) & 19.3 & 0.10 & $-8^{\circ}$ & $-86$ & NES\\ 11 & 2012-07-08, 23:48 & 2012-07-09, 20:56 & LH & ($-50^{\circ}$, $340^{\circ}$) & 5.2 & 0.38 & $37^{\circ}$ & $-78$ & WSE\\ 12 & 2012-07-15, 06:16 & 2012-07-16, 14:33 & RH & ($-4^{\circ}$, $305^{\circ}$) & 5.8 & 0.57 & $35^{\circ}$ & $-139$ & ESW\\ 13 & 2012-10-08, 17:15 & 2012-10-09, 13:34 & RH & ($-66^{\circ}$, $258^{\circ}$) & 8.9 & 0.30 & $-5^{\circ}$ & $-109$ & ESW\\ 14 & 2012-10-12, 15:50 & 2012-10-13, 09:42 & LH & ($-60^{\circ}$, $247^{\circ}$) & 10.6 & 0.38 & $-11^{\circ}$ & $-90$ & WSE\\ 15 & 2012-10-31, 23:32 & 2012-11-02, 02:30 & RH & ($-68^{\circ}$, $49^{\circ}$) & 51.2 & 0.12 & $14^{\circ}$ & $-65$ & ESW\\ 16 & 2013-01-17, 16:13 & 2013-01-18, 11:48 & RH & ($18^{\circ}$, $250^{\circ}$) & 1.4 & 0.16 & $-19^{\circ}$ & $-52$ & SWN\\ 17 & 2013-04-14, 16:10 & 2013-04-15, 20:42 & LH & ($62^{\circ}$, $337^{\circ}$) & 6.4 & 0.17 & $26^{\circ}$ & -- & ENW\\ 18 & 2013-07-13, 04:55 & 2013-07-14, 23:30 & LH & ($-10^{\circ}$, $286^{\circ}$) & 13.5 & 0.08 & $16^{\circ}$ & $-81$ & NWS\\ 19 & 2014-08-19, 17:25 & 2014-08-21, 00:07 & RH & ($65^{\circ}$, $314^{\circ}$) & 48.5 & 0.07 & $17^{\circ}$ & -- & WNE\\ 20 & 2015-12-20, 02:55 & 2015-12-21, 20:25 & RH & ($-30^{\circ}$, $221^{\circ}$) & 3.8 & 0.43 & $-41^{\circ}$ & $-155$ & ESW\\ \bottomrule \end{tabular} \end{sidewaystable} The flux rope types (Figure \ref{fig:fr_types}) at the Sun and \textit{in situ} match strictly for only four (20\%) of the 20 events (Events 7, 10, 13, and 19). Figure \ref{fig:event10} gives an example of such an event (Event 10). Figure \ref{fig:event10}a shows an SDO/HMI line-of-sight (LOS) magnetogram approximately two days before the eruption, when the active region was emerging, revealing the presence of right-handed magnetic tongues. 
Figure \ref{fig:event10}b shows a sigmoid seen in EUV that also suggests positive helicity. Another helicity proxy that we used for this event is the skew of arcade loops (not shown). The orientation of the neutral line is shown in panel \ref{fig:event10}c and has a tilt $\tau = -30^{\circ}$. The axial field points to the East. As explained in Section \ref{subsec:frtype}, this can be deduced from the locations of the EUV dimmings associated with the flux rope footpoints that are overlaid with SDO/HMI magnetogram data (Figure \ref{fig:event10}d). The previously described solar observations yield a NES-type flux rope. \textit{In situ} observations are shown on the right-hand side of Figure \ref{fig:event10}. The ICME was preceded by a shock (red line), and the flux rope (bounded between the pair of blue lines) is clearly identified from the enhanced magnetic field and smooth rotation of the field direction. MVA yields an axis tilt of $-28^{\circ}$, an axial field pointing to the East, and right-handed chirality. Hence, the flux rope type \textit{in situ} is also NES, and the axis tilts at the Sun and \textit{in situ} are almost identical. \begin{figure}[ht] \includegraphics[width=.99\textwidth]{figure2.pdf} \caption{Event 10, which is found to be a NES-type both at the Sun and \textit{in situ}. (a) Magnetic tongues as seen in an SDO/HMI magnetogram (saturated at $\pm 200$ G) that show positive chirality. (b) Forward-S sigmoid as seen by SDO/AIA 131 \AA \, that indicates a right-handed flux rope. (c) HMI magnetogram (saturated at $\pm 200$ G) showing the PIL approximated as a straight line (in red). (d) Base-difference AIA image in 131 \AA \, saturated at $\pm 70$ DN s$^{-1}$ pixel$^{-1}$ and overlaid with HMI magnetogram contours saturated at $\pm 200$ G (blue = negative polarity, red = positive polarity). The dimming regions (signatures of the flux rope footpoints) have been circled in green. 
(e) The ICME as observed in situ by \textit{Wind}. The red line indicates the arrival of the IP shock, while the blue lines indicate the leading and trailing edges of the flux rope. The parameters shown are, from top to bottom: magnetic field magnitude, magnetic field components in GSE Cartesian coordinates, $\theta$ and $\phi$ components of the magnetic field in GSE angular coordinates, solar wind speed, proton density, proton temperature, and plasma $\beta$.} \label{fig:event10} \end{figure} We emphasize that, for a significant fraction of events (nine or 45\%), the tilt angle at the Sun and/or the latitude of the \textit{in situ} flux rope axis was close to $45^{\circ}$. For such cases, considering the possible errors, one cannot distinguish between low- and high-inclination flux rope types. We categorise these cases as intermediate-inclination events (see Section \ref{subsec:frtype}). An example of such an event is Event 18 (Figure \ref{fig:event18}). The left-handed chirality of this event could be determined at the Sun from H$\alpha$ filament details, arcade skew, flare ribbons, and the S-shape of the filament seen in EUV. The average between the PIL tilt (Figure \ref{fig:event18}c) and the PEAs' tilt (not shown) gives a tilt angle at the Sun of $50^{\circ}$. The axial field points to the Southwest, \textit{i.e.} the possible intrinsic flux rope types are either a high-inclination WSE flux rope or a low-inclination NWS flux rope. The \textit{in situ} data, again, show a clear flux rope identified from enhanced magnetic field magnitude and smooth field rotation. The MVA yields an axis tilt of $10^{\circ}$ and left-handed chirality. Hence, the \textit{in situ} flux rope clearly has a low inclination and is of type NWS. 
If we also consider as a match cases where the flux rope is of intermediate type (\textit{i.e.} close to $45^{\circ}$ inclination at the Sun and/or \textit{in situ}), then the flux rope types agree between the Sun and \textit{in situ} for 11 (55\%) analysed events. \begin{figure}[ht] \includegraphics[width=.99\textwidth]{figure3.pdf} \caption{Event 18, which is intermediate between a WSE- and a NWS-type at the Sun and is a NWS-type \textit{in situ}. (a) The reverse-S filament shape seen by SDO/AIA 171 \AA \, that indicates left-handed chirality. (b) Reverse-J shaped flare ribbons as seen in 304 \AA, a sign of a left-handed flux rope. (c) SDO/HMI magnetogram (saturated at $\pm 200$ G) showing the PIL approximated as a straight line (in red). (d) Base-difference AIA image in 211 \AA \, saturated at $\pm 200$ DN s$^{-1}$ pixel$^{-1}$ and overlaid with HMI magnetogram contours saturated at $\pm 200$ G (blue = negative polarity, red = positive polarity). The dimming regions (signatures of the flux rope footpoints) have been circled in green. (e) The ICME as observed in situ by \textit{Wind} (see Figure \ref{fig:event10} for details).} \label{fig:event18} \end{figure} A clear example of a case where the flux rope types at the Sun and \textit{in situ} do not match is Event 17 (Figure \ref{fig:event17}). According to our analysis of the near-Sun observations, the intrinsic flux rope type is in the intermediate state between a high-inclination WSE-type and a low-inclination NWS-type. The helicity proxies that we used for this event were a clear reverse-S sigmoid (Figure \ref{fig:event17}a), a left-handed crossing of filament threads (Figure \ref{fig:event17}b), and reverse-J flare ribbons (visible in Figure \ref{fig:event17}d). The tilt angle at the Sun was estimated to be $55^{\circ}$. In this case, the tilt angle was deduced both from the PEAs seen in EUV (Figure \ref{fig:event17}c) and the orientation of the PIL (not shown). 
Visual inspection of the \textit{in situ} measurements, however, shows a strongly northward field during the passage of the entire ICME, and suggests that the flux rope type is ENW. MVA yields a high-inclination flux rope with a tilt of $-62^{\circ}$, in agreement with the visual analysis. This means that the axis orientation changed by $\sim 180^{\circ}$ from the Sun to L1. \begin{figure}[ht] \includegraphics[width=.99\textwidth]{figure4.pdf} \caption{Event 17, which is intermediate between a WSE- and a NWS-type at the Sun and is an ENW-type \textit{in situ}. (a) Reverse-colour soft X-ray images taken with \textit{Hinode}/XRT, showing an erupting reverse-S sigmoid, indicative of a left-handed flux rope. Filter wheel 1 is in the ``Beryllium thin'' (Be thin) position, while filter wheel 2 is Open. (b) Left-handed crossings of filament threads (indicated by the white arrows) as seen by SDO/AIA in 171 \AA . The direction of the magnetic field along the filament is also shown (in red). (c) 171 \AA \, observations showing the PEAs approximated as a straight line (in red). (d) Base-difference AIA image in 211 \AA \, saturated at $\pm 400$ DN s$^{-1}$ pixel$^{-1}$ and overlaid with HMI magnetogram contours saturated at $\pm 200$ G (blue = negative polarity, red = positive polarity). The dimming regions inside the reverse-J shapes of the flare ribbons (signatures of the flux rope footpoints) have been circled in green. (e) The ICME as observed in situ by \textit{Wind} (see Figure \ref{fig:event10} for details).} \label{fig:event17} \end{figure} We also note that for two events (Events 12 and 20) the axis orientation resulting from MVA did not agree with our visual determination. Event 12 is clearly a case where the flux rope crosses \textit{Wind} far from its centre; MVA does not perform well for such events. 
However, for Event 20, it is not obvious why MVA yields a low-inclination flux rope ($\theta_{\text{MVA}}=-30^{\circ}$), while observations suggest an intermediate event. In any case, the flux rope types would not match between the Sun and L1, as the possible flux rope types \textit{in situ} would be SWN and ESW. The minimum \textit{Dst} value for each analysed CME is reported in Table \ref{tab:ICME_types}. We note that five (25\%) CMEs caused minor or no storm (\textit{i.e.}, \textit{Dst}$_{\text{min}}$ > -50 nT), 11 (55\%) caused a moderate storm (-50 nT > \textit{Dst}$_{\text{min}}$ > -100 nT), and four (20\%) caused an intense storm (\textit{Dst}$_{\text{min}}$ < -100 nT). The six high-inclination flux ropes detected \textit{in situ} with a southward axial field (\textit{i.e.}, of types ESW and WSE) all produced at least a moderate storm, and three of them produced intense storms. This is expected, since the primary requirement for a geomagnetic storm is that the interplanetary magnetic field is southward for a sufficiently long period of time. In total, our data set included five high-inclination and two ``intermediate'' ICMEs with northward axial fields. Four of these corresponded to minor or no storm (\textit{i.e.}, \textit{Dst}$_{\text{min}}$ > -50 nT), but two (Events 4 and 8) caused moderate storms and one (Event 5), an intense storm. In these three events, \textit{Dst}$_{\text{min}}$ was reached either before or shortly after (within four hours of) the passage of the ICME leading edge over L1. This suggests that these storms were driven by the sheath ahead of the ICME. A significant fraction of magnetic storms are, in fact, purely sheath-driven \citep{tsurutani1988,huttunen2002,huttunen2004, siscoe2007, kilpua2017a}. The sheaths of these three events, indeed, featured periods of strong southward fields (\textit{i.e.}, $B_{Z} \leq -10$ nT). 
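The storm categories used in this section follow simple \textit{Dst}$_{\text{min}}$ thresholds; as a sketch (the function name is ours):

```python
def storm_class(dst_min_nt):
    """Classify geomagnetic storm strength from the minimum Dst index (nT):
    > -50 nT minor/no storm, -50 to -100 nT moderate, <= -100 nT intense."""
    if dst_min_nt > -50:
        return "minor or no storm"
    if dst_min_nt > -100:
        return "moderate"
    return "intense"
```

For example, Event 1 (\textit{Dst}$_{\text{min}} = -80$ nT) is classed as moderate and Event 5 ($-147$ nT) as intense, consistent with Table \ref{tab:ICME_types}.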
Figure \ref{fig:angles} provides a visual representation of the results reported in Tables \ref{tab:CME_types} and \ref{tab:ICME_types}, by comparing the flux rope clock angles at the Sun to those at L1. The figure highlights how the expected flux rope type at Earth can change due to rotation of the flux rope axis in the corona or in interplanetary space. The events are grouped according to their chirality, in order to look for possible patterns that might be related to the sign of the helicity \citep[\textit{i.e.}, clockwise rotation is expected for right-handed chirality and counterclockwise rotation for left-handed chirality,][]{fan2003,green2007,lynch2009}. We note from Figure \ref{fig:angles} an obvious pattern: the axis clock angles at the Sun are clustered in the vicinity of the dashed lines both for left- and right-handed flux ropes (\textit{i.e.}, they lie along the Northwest--Southeast diagonal for right-handed events and the Northeast--Southwest diagonal for left-handed events). A similar pattern was found by \citet{marubashi2015}. The clock angle change from the Sun to Earth is < $90^{\circ}$ for 13 (65\%) events. The remaining seven (35\%) events experienced > $90^{\circ}$ rotation of their central axis. Of these, one event (Event 2) experienced an apparent rotation of its axis by $\sim 100^{\circ}$, while the other six (30\%) seemed to rotate by $\gtrsim 120^{\circ}$. Of these latter six cases, three events are right-handed and three events are left-handed. All of them were formed in active regions. Such large rotations have been reported previously in the literature \citep[e.g.,][]{harra2007,kilpua2009}. 
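The clock-angle changes quoted above are the smallest angular separations between the two orientations, irrespective of rotation sense; a minimal sketch (our own helper name):

```python
def clock_angle_change(alpha_sun_deg, alpha_l1_deg):
    """Smallest angular separation between two clock angles, in [0, 180] deg."""
    d = abs(alpha_sun_deg - alpha_l1_deg) % 360.0
    return min(d, 360.0 - d)
```

For instance, two orientations of $170^{\circ}$ and $-170^{\circ}$ differ by only $20^{\circ}$, not $340^{\circ}$, which is the convention used when counting events with rotations above or below $90^{\circ}$.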
We have not considered here how the flux rope chirality affects the sense of rotation of the clock angle, because, in some cases, the MVA can have large errors related to the \textit{in situ} clock angle (up to about $\pm 90^{\circ}$ when the flux rope is crossed very far from its central axis) and because, from a forecasting perspective, it is more useful to consider the smallest rotation angle between the two orientations (\textit{i.e.}, $\leq 180^{\circ}$). We remark that a large fraction of events had their solar tilt angle close to $45^{\circ}$. In this regard, we point out that, when the flux rope axis orientation determined from solar observations is close to the intermediate one, the expected flux rope type at Earth can change even due to a relatively small amount of rotation ($\sim 20^{\circ}$). \begin{figure}[ht] \centering \includegraphics[width=.99\linewidth]{figure5.pdf} \caption{Change in the flux rope clock angle from the Sun to L1, split into right- and left-handed events. The yellow dots represent the flux rope axis orientation at the Sun (the average between the orientations of the PIL and the PEAs), while the black dots indicate the orientation at L1 (taken from the axis orientation resulting from the MVA). Rotations are assumed to be < $180^{\circ}$, \textit{i.e.} clockwise and counterclockwise rotations depending on chirality are not considered. Error bars are not included in the plot, but we assume that the error for the solar orientations can be up to $\pm 20^{\circ}$ and for the \textit{in situ} one can be up to $\pm 45^{\circ}$.} \label{fig:angles} \end{figure} It is also interesting to investigate whether the CME source region location or the crossing distance of the spacecraft along and across the ICME affect whether the intrinsic and \textit{in situ} flux rope types match. Figure \ref{fig:sources} shows the source coordinates of the CMEs, measured as the midpoint between the flux rope footpoints. 
The colours show whether the intrinsic and \textit{in situ} flux ropes matched or not and the symbols give an estimate of the crossing distance from the ICME axis (Figure \ref{fig:sources}a) and the ICME nose (Figure \ref{fig:sources}b). We recall that the crossing distance across the flux rope was estimated through the ratio $\langle|B_{\min}|\rangle/\langle B\rangle$ in the MVA reference system, while the crossing distance along the flux rope was estimated through the location angle (see Section \ref{subsec:insitu}). It is clear that there is no obvious pattern, regarding either the source location or the crossing distance from the axis and nose of the ICMEs. Nearly all source regions are clustered relatively close to the solar disc centre, within $\pm 30^{\circ}$ both in latitude and longitude. The events with the largest distances from the disc centre are, however, identified as mismatches or intermediate cases. \begin{figure}[ht] \includegraphics[width=.99\textwidth]{figure6.pdf} \caption{Location of the source regions of the 20 CMEs under analysis. The different colours refer to how the flux rope types match between the Sun and L1: exact match (green), intermediate match (blue), and no match (red). The different symbols refer to the spacecraft crossing distance along and across the ICME. For panel (a), crossing closer to the axis (circles, $\langle|B_{\min}|\rangle/\langle B\rangle$ < 0.2), intermediate crossing (squares, 0.2 < $\langle|B_{\min}|\rangle/\langle B\rangle$ < 0.4), and crossing farther from the axis (triangles, $\langle|B_{\min}|\rangle/\langle B\rangle$ > 0.4). 
For panel (b), nose crossing (circles, $|\text{L}|$ < $15^{\circ}$), intermediate crossing (squares, $15^{\circ}$ < $|\text{L}|$ < $30^{\circ}$), and crossing closer to the flank (triangles, $|\text{L}|$ > $30^{\circ}$).} \label{fig:sources} \end{figure} \section{Discussion and Conclusions} In this work, we have analysed 20 CME events that had a clear and unique connection from the Sun to Earth as determined by heliospheric imaging. We have analysed their magnetic structure (specifically the flux rope type) both at the Sun and \textit{in situ} at the Lagrange L1 point. The analysis of the solar sources was performed following the scheme presented in \citet{palmerio2017}. In particular, several multiwavelength indirect proxies were used to obtain the flux rope helicity sign (chirality), the axis tilt, and the direction of the magnetic field at the central axis, in order to determine the flux rope type of the erupting CME. The \textit{in situ} flux rope type was determined by visual inspection of magnetic field data and by applying the MVA technique. One important work towards understanding the magnetic structure of ICMEs with a flux rope structure and their solar counterparts was performed by \citet{bothmer1998}. The authors estimated the flux rope type of 46 ICMEs and found a unique association for nine ICMEs with quiet-Sun filament eruptions. In eight of the nine cases, they found agreement between the solar and \textit{in situ} flux rope types, where the intrinsic flux rope configuration was inferred from the orientation of the filament axis and its magnetic polarity, and the heliospheric helicity rule. A more recent study by \citet{savani2015} followed eight CME events from the Sun to Earth, using the \citet{bothmer1998} scheme to estimate the intrinsic flux rope configuration, and showed that the initial flux rope structure must be adjusted for cases originating from between two active regions.
Indeed, our present study shows that the \citet{palmerio2017} scheme to determine the intrinsic flux rope type is applicable to several different types of CME eruptions. Our analysis included CMEs originating from a single active region, from pairs of nearby active regions, and from filaments located on the quiet Sun. The scheme succeeded in estimating the intrinsic flux rope type also for CME source regions that did not follow the hemispheric helicity rule. We remark that the chirality has been determined from observations rather than from applying the statistical helicity rule. The proxies that we used and their success rate (\textit{i.e.}, the percentage of the events to which we could apply them) are: arcade skew (90\%), S-shaped features (70\%), filament characteristics (60\%), flare ribbons (55\%), and magnetic tongues (15\%). We point out that, for the quiet-Sun filaments, we were typically able to study filament characteristics only using H$\alpha$, while for active region filaments, we typically used EUV observations. The flux rope axis orientation at the Sun could be determined from both the PIL and the PEAs in 65\% of the cases, and from the PIL only in the rest of the cases. We found that the flux rope types at the Sun (\textit{i.e.}, the intrinsic flux rope type) and \textit{in situ} matched only for four (20\%) events but, if intermediate cases are considered as a match, then the rate is considerably higher, 11 events (55\%). The tendency of the tilt of the flux rope axis at the Sun to be close to $45^{\circ}$ is hence problematic for discriminating between the eight traditional flux rope categories. As mentioned in Section \ref{sec:results}, this trend was noted by \citet{marubashi2015}. There is a tendency for bipolar active regions to emerge with a systematic deviation from the East--West direction, with the leading sunspot being closer to the solar equator. This pattern is known as Joy's law \citep{hale1919}.
The tilt angle of bipolar sunspot groups (\textit{i.e.}, the line that connects two sunspots), however, tends to have an inclination of $1^{\circ}$--$10^{\circ}$ only due to Joy's law \citep[\textit{e.g.},][]{vandriel2015}. This means that the angle of the corresponding PIL tends to be $89^{\circ}$--$80^{\circ}$ tilted to the ecliptic upon emergence. Most of the PILs under analysis were clustered around a $45^{\circ}$ tilt, which means that Joy's law cannot explain this tendency. Since magnetic tongues could be used as a helicity proxy for three events only (out of 14 CMEs originating from a single active region), it follows that most of the studied active regions were in their decay phase. A possible cause for the PILs to increasingly change their alignment from North--South to Northwest--Southeast (Northeast--Southwest) for right-handed (left-handed) active regions is the Sun's differential rotation, which progressively acts on the PILs' tilt angle. This would also hold for active regions that are at the final phase of their decay, which are usually source regions for quiet-Sun filament eruptions. The frequent mismatch in flux rope type between the Sun and Earth suggests significant evolution after the eruption, particularly in terms of flux rope rotation. The comparison of the flux rope axis direction at the Sun and the Earth showed that for 35\% of the events that we studied (7 events) the difference between the axis directions at the Sun and \textit{in situ} was > $90^{\circ}$, with 20\% (\textit{i.e.}, 4 events) undergoing over $150^{\circ}$ rotation of their axis. All of the events that experienced a very large difference in the flux rope axis orientation originated from an active region. For the rest of the events (65\%; 13 events) the rotation was < $90^{\circ}$, and for 25\% of the events (\textit{i.e.}, 5 events) the difference was < $30^{\circ}$. Moreover, the four events that originated from a quiet-Sun filament seemed to rotate < $45^{\circ}$.
This is in agreement with \citet{bothmer1998}, who found consistency in the flux rope configuration of erupting quiet-Sun filaments with their \textit{in situ} counterparts for eight out of nine cases. We therefore suggest that our lower percentage of matches between solar and \textit{in situ} flux rope types derives from the fact that we considered mostly active region CMEs in our dataset. We also showed that, at least for our relatively small data set, the difference between the axis orientations at the Sun and L1 did not seem to be obviously affected by the CME source location or by the crossing distance along and across the flux rope loop (Figure \ref{fig:sources}). We remind the reader that, in this analysis, we did not consider the expected sense of rotation dictated by the flux rope chirality, \textit{i.e.}, clockwise (anticlockwise) for right- (left-) handed events. In fact, if we consider the smallest angle between the solar and \textit{in situ} flux rope orientations, then only ten events (50\%) seem to follow the sense of rotation expected from their chirality. This may either be because the remaining ten CMEs actually rotated in the opposite sense or because there was an external factor that counteracted the expected sense of rotation. However, it is important to remark that the resulting flux rope orientation \textit{in situ} may depend on the fitting technique. \citet{alhaddad2013} analysed 59 ICMEs using four different reconstruction or fitting methods, and found that only for a single event did all four methods recover an ICME axis orientation within $\pm 45^{\circ}$ of one another. Reconstructions done with different techniques usually disagree, and this has to be taken into account when comparing solar and \textit{in situ} orientations, especially when considering the sense of rotation of the axis for the low rotation cases.
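The smallest-rotation-angle convention and the chirality-expected sense of rotation discussed above can be sketched in a few lines of code. The function names and the sign convention (clockwise rotations negative) are ours, for illustration only:

```python
def smallest_rotation(angle_sun, angle_l1):
    """Signed rotation (degrees) from the solar to the in situ clock
    angle, wrapped to the interval (-180, 180]."""
    d = (angle_l1 - angle_sun) % 360.0
    return d - 360.0 if d > 180.0 else d

def follows_expected_sense(rotation, chirality):
    """Check a rotation against the sense expected from the handedness,
    assuming right-handed (+1) ropes rotate clockwise (negative sign)
    and left-handed (-1) ropes anticlockwise (positive sign)."""
    return rotation * chirality < 0
```

With this wrapping, a solar clock angle of $350^{\circ}$ and an \textit{in situ} angle of $10^{\circ}$ count as a $20^{\circ}$ rotation rather than a $340^{\circ}$ one, matching the "< $\pm 180^{\circ}$" convention used in the comparison above.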
If we consider, \textit{e.g.}, only the cases that present a > $45^{\circ}$ angular difference (\textit{i.e.}, 11 events in total), then four (five) right-handed (left-handed) flux ropes seemed to rotate anticlockwise and two (zero) clockwise. The left-handed events, hence, all seem to follow the expected sense of rotation if the analysis is restricted to the large rotation cases. It is noteworthy that the direct comparison between intrinsic and \textit{in situ} flux rope types can be performed only for a fraction of all CME--ICME pairs. As discussed in Section 2, we considered 47 candidates from the LINKCAT catalogue and ended up with only 12 events. The problems are related to (1) correctly connecting the CME--ICME pair, (2) excluding interacting events, and (3) the requirement for the relevant observations to be sufficiently clear both at the Sun and \textit{in situ} in order to estimate the flux rope type. In particular, many ICMEs do not show a clear enough rotation of the field to determine the flux rope type. At the Sun, some CMEs may be so-called stealth CMEs \citep[\textit{e.g.},][]{robbrecht2009,kilpua2014,nitta2017}, \textit{i.e.}, they lack obvious disk signatures, or may have curved PEAs and/or PILs, so that reliable determination of the axis orientation is not possible. However, the cases for which determination of the intrinsic and \textit{in situ} flux rope types is possible are often geoeffective, as they show clear magnetic field enhancements and organized rotation of the magnetic field. In addition, as remarked in the Introduction, one important point to keep in mind for real-time space weather forecasts is that it is often difficult to predict whether an erupting CME will impact Earth at all. Hence, a further investigation to study the applicability of the methods described in this article for forecasting would require starting at the Sun, without first identifying CME--ICME pairs.
As already mentioned in the Introduction, determination of the intrinsic flux rope type is a crucial step in space weather forecasting (as the input to different models), and as shown in this paper, in a fraction of cases it gives a good estimate of the flux rope magnetic structure at L1. Our results, however, strongly highlight the importance of capturing the amount of rotation and/or distortion that the flux rope experiences in the corona and in interplanetary space. This was already stressed by \citet{savani2015}, who emphasized the importance of including evolutionary estimates of CMEs from remote sensing in space weather forecasts. The flux rope axis direction \textit{in situ} can be estimated, \textit{e.g.}, by considering coronagraph data in addition to solar disc observations \citep{savani2015}. Concerning flux rope rotations, several studies suggest that the most dramatic rotation occurs during the first few solar radii of a CME's propagation \citep[\textit{e.g.},][]{vourlidas2011,isavnin2014,kay2016}. Rotation can also occur during the eruption itself \citep[\textit{e.g.},][]{green2007,lynch2009,bemporad2011,thompson2012}. Finally, we remark that \textit{in situ} data are one-dimensional and that a single spacecraft's trajectory through a CME may not reflect the global shape and orientation of the flux rope. The flux rope type that is seen at Earth may depend on where the spacecraft crosses the ICME (\textit{i.e.}, the crossing distance from the ICME axis, named the impact parameter, and/or from the ICME nose) and on local distortions that might be present within an ICME. In terms of the latter, \citet{bothmer2017} recently demonstrated that kinks present in the CME source region seem to be reflected in the erupting flux rope during its expansion and propagation.
\citet{owens2017} also showed that CMEs cease to be coherent magnetohydrodynamic structures within 0.3 AU of the Sun, and that their appearance beyond this distance is that of a dust cloud. This means that local deformations that may arise during the CME propagation do not propagate throughout the whole CME body. Nevertheless, the space weather effects at Earth depend strongly on the magnetic structure that is measured at L1, meaning that a significant step towards the improvement of current space weather forecasting capabilities is the prediction of the flux rope axis rotation (whether proper or apparent) during propagation. Other important factors to take into account for future space weather predictions are the crossing location, both along and across the flux rope, and possible local distortions of the CME body. \acknowledgments EP acknowledges the Doctoral Programme in Particle Physics and Universe Sciences (PAPU) at the University of Helsinki. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$^{\circ}$ 4100103). EK also acknowledges UH Three Year Grant project 75283109. CM's work was supported by the Austrian Science Fund (FWF): [P26174-N27]. AJ and LG acknowledge the support of the Leverhulme Trust Research Project Grant 2014-051. LG also acknowledges support through a Royal Society University Research Fellowship. We also thank the two anonymous reviewers, whose suggestions have significantly improved this article. We thank the HELCATS project, which has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 606692. This research has made use of SunPy, an open-source and free community-developed solar data analysis package written in Python \citep{sunpy2015}, and the ESA JHelioviewer software.
We thank the geomagnetic observatories (Kakioka [JMA], Honolulu and San Juan [USGS], Hermanus [RSA], INTERMAGNET, and many others) for their cooperation to make the provisional and the final \textit{Dst} indices available. \section*{Sources of Data and Supplementary Material} \label{sec:data} \noindent Catalogues:\\~\\ LINKCAT, doi:10.6084/m9.figshare.4588330.v2,\\ \url{https://doi.org/10.6084/m9.figshare.4588330.v2}\\ ARRCAT, doi:10.6084/m9.figshare.4588324.v1,\\ \url{https://doi.org/10.6084/m9.figshare.4588324.v1}\\~\\ \noindent ICME Lists:\\~\\ Near-Earth Interplanetary Coronal Mass Ejections List, Richardson, I., and Cane, H.,\\ \url{http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm}\\ Wind ICME List, Nieves-Chinchilla, T., \textit{et al.},\\ \url{https://wind.nasa.gov/ICMEindex.php}
\section{Introduction} Gravitational waves have been detected from coalescing black hole binaries \citep{DiscoveryPaper,LIGO-O1-BBH}. Over the next few years, ground-based gravitational wave detectors like LIGO and Virgo should detect the gravitational wave signal from many more similar merging compact binaries \citep{LIGO-O1-BBH,RatesPaper,gwastro-EventPopsynPaper-2016}, as well as binary neutron stars and black hole-neutron star binaries \citep{LIGO-Inspiral-Rates,popsyn-LowMetallicityImpact2c-StarTrackRevised-2014}. The host galaxies of gravitational wave sources will be identified, either directly or statistically. If the merger involves at least one neutron star, it is expected to be accompanied by detectable electromagnetic radiation via a number of mechanisms \citep[see][and references therein]{Metzger16} in addition to the strong gravitational wave signal. A multimessenger detection will pin down the sky position and therefore approximate birthplace of each merging binary \citep{Nissanke13}. Even in the absence of well-identified electromagnetic counterparts, host galaxy information is still available from gravitational wave localization alone \citep{2016LRR....19....1A,2016arXiv160307333S}. As GW detector networks increase in number and sensitivity, these localizations will allow statistical and, eventually, unique identification of host galaxies directly, even without associated electromagnetic emission. As with supernovae and GRBs, these host galaxy associations are expected to tightly constrain models for compact binary formation; see, for comparison, \citet{2011MNRAS.412.1508M}, \citet{long-grb-GuettaPiran2007}, \citet{2014ARAA..52...43B}, and references therein. The host galaxies of distant short GRBs have already been extensively investigated, with the associations being used to draw preliminary conclusions about their progenitors \citep{2014ARAA..52...43B}.
Unlike GRBs, detected gravitational wave sources will be limited by the range of LIGO to the local universe; for example, binary neutron star sources should be closer than $400\unit{Mpc}$. Due to their proximity, each host galaxy can be explored at great depth and detail via position-resolved spectroscopy, enabling detailed position-resolved star-formation and chemical evolution histories \citep[see, e.g.,][]{2009MNRAS.396..462K,2014MNRAS.444..336C, CALIFA,CALIFA2}. However, unlike short GRBs and supernovae (SN), present-day compact binary populations can depend sensitively on rare low metallicity star formation. In this work, we assess by concrete example the extent to which detailed analysis of individual galaxies' assembly histories will be essential in investigating key physical questions about the origin of compact binary mergers. This paper is organized as follows. In \S~\ref{sec:sims}, we describe detailed hydrodynamical simulations of several galaxies, including four of Milky Way mass and two dwarfs. Though the four Milky Way-like galaxies are morphologically similar at $z = 0$, their star formation and chemical evolution histories have subtle differences due to their distinctive merger histories. To demonstrate the practical impact of these differences as well as that of halo mass, in \S~\ref{sec:model} we introduce a simple, metallicity-dependent phenomenological model to calculate the present-day rate and mass distribution of compact binary mergers from a galaxy's known history. In \S~\ref{sec:results:BBH}, we use this model to investigate the dependence of the compact binary coalescence rate on each galaxy's assembly history and mass. In \S~\ref{sec:Discussion} we discuss the implications of our study for the interpretation of host galaxy associations identified via transient multimessenger astronomy, in the short and long term. We summarize our results in \S~\ref{sec:conclude}.
\section{Cosmological simulations} \label{sec:sims} \subsection{Simulating galaxy evolution} To thoroughly study the significance of low metallicity star formation, we examine cosmological smoothed particle hydrodynamics (SPH) $N$-body simulations of Milky Way-like galaxies run with GASOLINE \citep{Stadel01,Wadsley04}. These simulations allow us to analyze both spatially and temporally resolved star formation, and determine the metallicity history of compact object progenitors. We selected our simulated regions of interest from a volume of uniform resolution, and resampled the region at very high resolution using the volume renormalization technique \citep{Katz93}. This technique allows us to follow the detailed physical processes involved in galaxy evolution in our selected region while still including large-scale torques from cosmic structure. Our cosmological parameters are $\Omega_0 = 0.24$, $\Omega_{\rm baryon} = 0.04$, $\Lambda = 0.76$, $h = 0.73$, $\sigma_8 = 0.77$ \citep{WMAP3}.\footnote{Since we are simulating individual galaxy environments rather than large populations of halos, the selection of cosmological parameters provides a negligible contribution to the variance in the overall evolutionary history of galaxies in our simulations.} We model the ionizing UV background with the prescription from \citet{Haardt96}. Our interstellar medium (ISM) model includes the non-equilibrium formation and destruction of H$_2$, which is incorporated in the cooling model together with metal-line cooling and the shielding of HI and H$_2$ \citep{Christensen12}. Stars form probabilistically from gas particles which meet density ($n_{min} = 0.1$ amu cm$^{-3}$) and temperature ($T_{max} = 10^3$ K) thresholds, though since star formation also depends on the H$_{\rm 2}$ content of a particle (see below) the densities are nearly always much higher than this threshold.
If a gas particle meets these criteria, it has a probability of forming a star particle (representing a simple stellar population with a Kroupa IMF \citep{Kroupa93}) which is given by \begin{equation} p = \frac{m_{\rm gas}}{m_{\rm star}} (1 - e^{-c^*\rm X_{\rm H_2} \Delta t/t_{\rm form}}) \end{equation} \noindent where the star formation efficiency parameter $c^*$ is set to 0.1 such that our galaxies match the observed Kennicutt-Schmidt law \citep{Kennicutt89}; X$_{\rm H_2}$ is the molecular hydrogen fraction of the gas particle; $m_{star}$ and $m_{gas}$ are the star and gas particle masses;\footnote{Gas particles start with a set mass and may gain mass from feedback and lose it to star formation. Each star particle, when formed, has $1/3$ of the progenitor mass of the forming gas particle. See \cite{Christensen10} for a discussion of resolution issues in SPH simulations.} $t_{form}$ is the dynamical time for the gas particle; and $\Delta t$ is the time between star formation episodes, which we set to 1 Myr. A detailed study of different ISM models and the resulting star formation properties by \citet{Christensen14a} demonstrates that this model allows star formation to occur in clumps of dense gas, comparable to giant molecular clouds. We model supernova feedback using the blastwave formalism described in \citet{McKee77} and implemented in our simulations as in \citet{Stinson06}. Each supernova releases $E_{SN} = 10^{51}$ erg into the surrounding gas within a radius determined by the blastwave equations. These particles are not allowed to cool for the duration of the blastwave, mimicking the adiabatic expansion phase of a supernova explosion.
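A minimal numerical sketch of this stochastic star formation recipe follows. The function and argument names are ours; the exponent is taken negative so that the probability stays bounded between zero and $m_{\rm gas}/m_{\rm star}$:

```python
import math

# Star formation efficiency from the text, chosen so that the simulated
# galaxies match the observed Kennicutt-Schmidt law.
C_STAR = 0.1

def star_formation_probability(m_gas, m_star, x_h2, dt, t_form, c_star=C_STAR):
    """Probability that a gas particle spawns a star particle during a
    timestep dt (same time units as t_form), given its molecular hydrogen
    fraction x_h2 and dynamical time t_form."""
    return (m_gas / m_star) * (1.0 - math.exp(-c_star * x_h2 * dt / t_form))
```

Note that a particle with no molecular hydrogen ($X_{\rm H_2}=0$) has zero probability of forming stars, which is why star formation in practice occurs well above the nominal density threshold.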
Previous works have found that this set of parameters results in realistic galaxies which obey a number of observed relations such as the mass-metallicity relation \citep{Brooks07,Christensen16}, the Tully-Fisher relation \citep{Christensen16}, and the size-luminosity relation \citep{Brooks11}, as well as reproduce the detailed characteristics of bulgeless dwarf galaxies \citep{Governato10,Governato12}, low-mass disk galaxies with bulges \citep{Christensen14b}, and the Milky Way \citep{Guedes11}. Metals are created in supernova explosions and deposited directly to the gas within the blast radius. Stellar masses are converted to ages as described by \citet{Raiteri96}, and stars more massive than 8 M$_\odot$ are able to undergo a Type II supernova. For Type II supernovae, iron and oxygen are produced according to the analytic fits used in \citet{Raiteri96} using the yields from \citet{Woosley95}: \begin{equation} M_{\rm Fe} = 2.802 \times 10^{-4} M_*^{1.864} \end{equation} \begin{equation} M_{\rm O} = 4.586 \times 10^{-4} M_*^{2.721}. \end{equation} Feedback from Type Ia supernovae also follows \citet{Raiteri96}. Each supernova produces $0.63\msun$ of iron and $0.13\msun$ of oxygen \citep{Thielemann86}. Metal production from stellar winds is also included; we implement stellar wind feedback based on \citet{Kennicutt94}, and the returned mass fraction is derived using the fit of \citet{Weidemann87}. The returned gas has the same metallicity as the star particle. Also included in our simulations is a scheme for turbulent metal diffusion \citep{Shen10}. Once created, metals diffuse through the surrounding gas according to \begin{equation} \frac{dZ}{dt}\Big|_{diff} = \nabla (D \nabla {Z}) \end{equation} where the diffusion coefficient $D$ is given by \begin{equation} D = C_{diff} |S_{ij}| h^2 \end{equation} and $h$ is the SPH smoothing length, $S_{ij}$ is the trace-free shear tensor, and $C_{diff}$ is a dimensionless constant which we set to a conservative value of 0.03.
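The Type II supernova yield fits quoted above translate directly into code; a short sketch (masses in solar units, function name ours):

```python
def type_ii_yields(m_progenitor):
    """Iron and oxygen masses (Msun) ejected by a Type II supernova with
    progenitor mass m_progenitor (Msun), following the Raiteri et al.
    fits to the Woosley & Weaver yields given in the text."""
    m_fe = 2.802e-4 * m_progenitor ** 1.864
    m_o = 4.586e-4 * m_progenitor ** 2.721
    return m_fe, m_o
```

Because the oxygen exponent is steeper, the O/Fe ratio of the ejecta grows with progenitor mass, so the most massive stars dominate the oxygen enrichment.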
Combined with infall, this procedure produces a range of metallicities within each galaxy. We do not include specific prescriptions for metal distribution based on initial metallicity (such as for Population III stars) or variations in the IMF. We thus cannot address Population III contributions to the gravitational wave background; however, studies by \citet{Hartwig16} and \citet{Dvorkin16} suggest that this contribution is fairly negligible. We have shown that our scheme for metal production and distribution produces galaxies which match the mass-metallicity relation at $z \sim 3$ \citep{Brooks07} as well as in the local universe \citep{Christensen16}. At higher redshifts, there is a very large spread in the metal distributions of damped Lyman alpha systems \citep{Dvorkin15}; manifestly, other features in a galaxy's evolutionary history play a role in its present-day metallicity. Recently, \citet{Hunt16a,Hunt16b} demonstrated a relationship between the mass, metallicity, and star formation rate, which suggests an evolutionary link between these quantities. As we will discuss in future work, we have verified that our evolutionary prescriptions are also qualitatively consistent with this relation. We identify individual galaxies using the halo finder \textsc{AHF} \citep{Gill04,Knollmann09}, which identifies haloes based on an overdensity criterion for a flat universe \citep{Gross97}. In each simulation, we focus on the stars which make up the primary (i.e. most massive) galaxy within the zoomed-in high resolution region at $z = 0$. \subsection{Milky-Way-like galaxies with distinct histories} The evolution of a galaxy, in terms of its stellar mass and metallicity evolution, depends strongly on its interaction history. Galaxies which appear similar at the present day may have had drastically different histories, which may result in differences in compact object merger rates.
To investigate whether galaxy history affects the compact binary event rate, we have chosen four simulations which are morphologically similar at $z = 0$ (see Figure \ref{fig:images}) but differ strongly in their merger histories. The simulation h277 is a Milky Way analog with a quiescent merger history. It experiences its last major merger at $z = 3$, after which it undergoes only a small number of minor interactions. This simulation has been shown to emulate several Milky Way properties, including stellar dynamics \citep{Loebman12,Loebman14,Kassin14}, baryon fraction \citep{Munshi13}, and satellite properties \citep{Zolotov12,Brooks14}. It has a virial mass of $M_{vir} = 6.79 \times 10^{11} \msun$, stellar mass $M_* = 4.24 \times 10^{10} \msun$, and maximum circular velocity $v_{\rm circ} = 235$ km s$^{-1}$. The simulation h258, on the other hand, has a much more active merger history. At $z = 1$ there is a 1:1 merger event; a gas disc rapidly reforms following the collision \citep[see][]{Governato09}, resulting in a massive galaxy at $z = 0$ which looks remarkably similar to the Milky Way and to the other simulation, h277. Prior to the $z = 1$ merger, each of the four progenitor galaxies actually experiences its own additional major merger events around $z = 3$. The combination of the series of major mergers, plus a number of minor interactions and flybys, gives a stark contrast to the relatively quiescent history of h277. At $z = 0$, h258 has a virial mass of $M_{vir} = 7.74 \times 10^{11} \msun$, stellar mass $M_* = 4.46 \times 10^{10} \msun$, and maximum circular velocity $v_{\rm circ} = 242$ km s$^{-1}$. We include two additional Milky Way simulations with similar $z = 0$ properties, which have evolutionary histories that fall in between the extremes of the two described above.
The galaxy h239 has a total virial mass of $M_{vir} = 9.3 \times 10^{11}\msun$, stellar mass $M_* = 4.50 \times 10^{10} \msun$, and maximum circular velocity $v_{\rm circ} = 250$ km s$^{-1}$. The galaxy h285 has a total virial mass of $M_{vir} = 8.82 \times 10^{11}\msun$, stellar mass $M_* = 4.56 \times 10^{10} \msun$, and maximum circular velocity $v_{\rm circ} = 248$ km s$^{-1}$. These galaxies are also described in \citet{Bellovary14,Sloane16}. Due to the differences in merger histories, the star formation histories and metallicity evolution of h258 and h277 also differ at early times. Figure \ref{fig:TwoGalaxies} shows the star formation history (left panel) and metallicity evolution (right panel) of h277 (black) and h258 (red).\footnote{We choose not to show galaxies h239 and h285 in our figures, as they fall in between the values bracketed by h258 and h277 and add confusion to the plots.} The star formation histories are quite different at early times: h277 has larger bursts of star formation between 2 and 4 Gyr, while h258 has a large burst at $\sim 6$ Gyr during a major merger. The right panel shows the mean metallicity of recently formed stars versus time (thick solid lines), where we define recent as within the past 50 Myr. The shaded regions correspond to one standard deviation of the mean, while the thick dashed lines represent the top and bottom 90\%. We see that the early evolution of the metal properties of these galaxies does differ: between 0.5 and 6 Gyr, h277 hosts a \emph{modestly} more metal-rich population than h258. For most astrophysical processes, the small metallicity difference illustrated here will have no impact on present-day observables. However, the population of binary black holes depends sensitively on all low metallicity star formation over cosmic time. We wish to point out that these simulations do not include the effects of supermassive black holes (SMBHs).
These galaxies are at the mass where feedback from SMBH accretion is thought to affect star formation, and including these effects may introduce additional scatter into the stellar and metal evolution of each galaxy, which could alter the binary black hole merger history as well. \begin{figure*} \includegraphics[width=\columnwidth]{boring} \includegraphics[width=\columnwidth]{exciting} \caption{\label{fig:GalaxyImages}Synthetic SDSS $gri$ images of two of our Milky Way-type galaxies, h277 (left) and h258 (right), created with \textsc{SUNRISE} \citep{Jonsson06}. \label{fig:images} } \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{sfh} \includegraphics[width=\columnwidth]{zvst} \caption{\label{fig:TwoGalaxies}\textbf{Star formation and metallicity versus time}: \emph{Left panel}: Star formation history $\dot{M}_{*}$ versus time. Black corresponds to \BoringGalaxy{} and red to \ExcitingGalaxy. \emph{Right panel}: A plot of the metallicity $Z$ of recently-formed stars versus time. Solid red and black lines show the mean metallicity; dotted lines correspond to 90\% of newly-born stars with lower metallicity, or 10\% of newly-born stars with greater metallicity. The shaded region represents one standard deviation. When these two galaxies are $2-3\unit{Gyr}$ old, prior to the first major merger, newly-born stars are created with significantly different metallicity. Additionally, prior to the second major merger of h258 at $\simeq 6\unit{Gyr}$, stars form in the \ExcitingGalaxy{} galaxy at a systematically lower metallicity than in the \BoringGalaxy{} counterpart. Even from 8-13 Gyr, h258 has a slightly lower metallicity than h277. } \end{figure*} \subsection{Dwarf galaxies} We have also employed the results of detailed simulations for two dwarf galaxies: \DwarfOne{} and \DwarfTwo{}. The simulation of h603 consists of a low-mass disc galaxy (qualitatively similar to M33).
It has a virial mass of $3.4 \times 10^{11} \msun$, stellar mass of $7.8 \times 10^9 \msun$, and maximum circular velocity of 111 km s$^{-1}$. The structure and star formation of this galaxy has been extensively studied by \citet{Christensen14b}. We also include a bulgeless dwarf galaxy, h516, which has a disc with irregularly distributed star formation, with a virial mass of $3.8 \times 10^{10}\msun$, stellar mass $2.5 \times 10^8 \msun$, and maximum circular velocity of 65 km s$^{-1}$. Images of these galaxies are shown in Figure \ref{fig:dwarfimages}, and we show their star formation history and metallicity evolution in Figure \ref{fig:TwoDwarfGalaxies}. Note that these galaxies have quite different masses, and are not meant to be directly comparable. The more massive h603 has a much more active star formation history and an overall increasing metallicity with time, whereas the less massive h516 is characterized by small bursts of star formation and a fairly flat metallicity evolution, perhaps due to its substantial outflows. \begin{figure*} \includegraphics[width=\columnwidth]{dwarf1-eps-converted-to} \includegraphics[width=\columnwidth]{h516} \caption{\label{fig:dwarfimages}Synthetic SDSS $gri$ images of our two lower-mass galaxies, h603 (left) and h516 (right), created with \textsc{SUNRISE} \citep{Jonsson06}.} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{sfh_dwarf} \includegraphics[width=\columnwidth]{dwarf_zvst} \caption{\label{fig:TwoDwarfGalaxies}\textbf{Star formation and metallicity versus time}: \emph{Left panel}: Star formation history $\dot{M}_{*}$ versus time. Red corresponds to h603 and black to h516. \emph{Right panel}: A plot of the metallicity $Z$ of recently-formed stars versus time. Solid red and black lines show the mean metallicity; dotted lines correspond to 90\% of newly-born stars with lower metallicity, or 10\% of newly-born stars with greater metallicity.
The shaded region represents one standard deviation. } \end{figure*} \section{Detection-weighted compact binary formation} \label{sec:model} Our goal in this work is to estimate the \emph{ratio} of compact binaries that should be merging, at present, in the four simulated Milky-Way analog galaxies described above as well as the dwarfs. To explore plausible binary detection scenarios, we adopt a parametrized formalism for binary evolution and event detection in an individual galaxy, motivated by the detailed studies of \cite[][hereafter \abbrvPSgrb{}]{PSgrbs-popsyn} and \citet[][hereafter \abbrvPSellipticals{}]{PSellipticals}; similar approaches have been used by \cite{2016arXiv160508783L} and others. Since prior population synthesis investigations extend relatively smoothly to very low metallicity, we do not introduce a distinct, special group of low-metallicity Population III stars. Binary evolution calculations suggest the binary compact object formation rate depends sensitively on the assumed metallicity, in conjunction with other parameters \citep[see,\,e.g.][and references therein]{popsyn-LowMetallicityImpact-Chris2008,popsyn-LIGO-SFR-2008,gwastro-EventPopsynPaper-2016}. Gravitational wave detectors are also far more sensitive to massive compact binaries, which are preferentially formed in low metallicity environments \citep{PSellipticals,popsyn-LowMetallicityImpact2c-StarTrackRevised-2014}. As a result, low metallicity environments can be overwhelmingly efficient factories for detectable black hole binaries \citep{popsyn-LowMetallicityImpact2c-StarTrackRevised-2014,gwastro-EventPopsynPaper-2016}. For this reason, our estimates for compact binary coalescence rates must account for how often different star-forming conditions occur; how often the compact binaries that coalesce now arise from each environment; and how often LIGO will detect compact binaries with different masses, all other things being equal.
To characterize how much more likely LIGO will detect coalescing binaries with different masses, we use a common and naive estimate for the volume to which advanced LIGO is sensitive \citep[see, e.g.,][]{PSellipticals}:\footnote{For simplicity, in this calculation we neglect the effects of cosmology; strong field coalescence; and black hole spin; see \cite{popsyn-LowMetallicityImpact2c-StarTrackRevised-2014}, \cite{AstroPaper}, or \cite{RatesPaper} for more details.} \begin{eqnarray} V = \frac{4\pi}{3} [445 \unit{Mpc}]^3 \int p(m_1,m_2|Z) dm_1 dm_2 [ (\mc/1.2 M_\odot)^{5/6}]^3 \end{eqnarray} where $\mc\equiv (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$. This expression depends explicitly on an assumed and metallicity-dependent mass distribution for compact object mergers, through the characteristic chirp mass $\mc_{*}(Z)\equiv [\int p(m_1,m_2|Z) dm_1 dm_2 (\mc/1.2 M_\odot)^{15/6}]^{5/16}$. We calibrate our metallicity-dependent mass distributions to metallicity-dependent binary evolution calculations presented in \cite{popsyn-LowMetallicityImpact2-StarTrackRevised-2012} and \cite{popsyn-LowMetallicityImpact2c-StarTrackRevised-2014}. For neutron stars, we adopt a fiducial neutron star mass of $1.4 M_\odot$ at all metallicities. For BH-NS binaries, we adopt a highly simplified model: the black hole masses are uniformly drawn from $5 M_\odot$ to the maximum black hole mass $M_{max}(Z)$ allowed by \emph{isolated} stellar evolution, as reported in prior work \citep[see,\,e.g.][and references therein]{gwastro-EventPopsynPaper-2016}. In agreement with much more detailed prior work \citep{popsyn-LowMetallicityImpact2c-StarTrackRevised-2014}, this simplified model yields typical chirp masses for BH-NS binaries that vary slightly as $Z/Z_\odot$ decreases, from a lower limit of $3 M_\odot$ near solar metallicity to an upper limit of $4.3 M_\odot$ in low-metallicity environments.
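As a rough illustration, the $\mc^{15/6}$ weighting in this volume estimate can be evaluated by Monte Carlo. The Python sketch below uses a toy stand-in for $p(m_1,m_2|Z)$, drawing equal-mass black holes from an assumed $m^{-2}$ law on $[5,30]\,M_\odot$; the mass range and power law are illustrative assumptions, not the calibrated distributions used in the text.

```python
import numpy as np

def chirp_mass(m1, m2):
    """Chirp mass Mc = (m1 m2)^(3/5) / (m1 + m2)^(1/5), in solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def sensitive_volume(m1, m2):
    """Monte-Carlo estimate of V = (4 pi/3)(445 Mpc)^3 <(Mc / 1.2 Msun)^(15/6)>,
    where the samples (m1, m2) stand in for draws from p(m1, m2 | Z)."""
    weight = np.mean((chirp_mass(m1, m2) / 1.2) ** (15.0 / 6.0))
    return 4.0 * np.pi / 3.0 * 445.0 ** 3 * weight  # in Mpc^3

rng = np.random.default_rng(0)
# Toy equal-mass BH population: p(m) ~ m^-2 on [5, 30] Msun via inverse-CDF sampling
a, b = 5.0, 30.0
u = rng.uniform(size=100_000)
m = 1.0 / (1.0 / a - u * (1.0 / a - 1.0 / b))

V_bbh = sensitive_volume(m, m)
V_bns = sensitive_volume(np.full_like(m, 1.4), np.full_like(m, 1.4))
print(V_bbh / V_bns)  # heavy binaries are detectable in a far larger volume
```

The ratio printed at the end is the point of the exercise: even a modest black hole population is seen over a volume orders of magnitude larger than binary neutron stars, which is why the metallicity dependence of the mass spectrum matters so much for detection-weighted rates.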
Not least because the average chirp mass for BH-NS simply cannot vary dramatically, given the functional form of $\mc$, our results for the BH-NS coalescence rate per unit galaxy mass do not depend sensitively on the choice of black hole mass distribution. Finally, for binary black holes, we assume comparable-mass binaries form ($m_1=m_2$), with the distribution of component masses chosen to be $\propto m_1^{-p}$ between $5 M_\odot$ and $M_{max}(Z)$ and zero otherwise, adopting a fiducial exponent $p=2$; see, e.g., \cite{popsyn-LowMetallicityImpact2b-StarTrackRevised-2013}. For binary black holes, this means the detection-weighted mass distribution [$\propto p(m_1,m_2) \mc^{15/6}$] depends weakly on mass [$ p(m_1,m_2) \mc^{15/6} \propto m_1^{-0.5} \delta(m_1-m_2)$]. In our simple model, the total and chirp mass distributions are qualitatively consistent with detailed binary evolution calculations \citep{popsyn-LowMetallicityImpact2b-StarTrackRevised-2013}. To characterize the frequency of different star-forming conditions, we use our cosmological simulations of Milky Way-like galaxies, which provide each galaxy's star formation rate $\dot{M}_*$ and metallicity distribution $p(\log Z|t)$ over time (see Eqn. \ref{eqn:rate}). To characterize how often compact binaries form and coalesce, we use the ansatz adopted in \abbrvPSellipticals{} and \abbrvPSgrb{}: for a star-forming parcel of mass $\Delta M$, the number of binaries born at time $0$ which are coalescing now is $\lambda \Delta M dP_t/dt$, where $\lambda$ is an overall efficiency per unit mass and $P_t(<t|Z)$ is a metallicity-dependent delay time distribution.\footnote{For simplicity, we assume the mass and delay time distributions are uncorrelated. Figure A9 in \abbrvPSgrb{} shows this approximation, while not strictly true, is an excellent approximation for merging BH-BH binaries at solar metallicity.
} In this paper we are investigating the relative contribution of different star-forming environments and galaxy evolutionary histories, not the overall normalization, so the overall scale of $\lambda$ is irrelevant. However, to account for the strong tendency of low-metallicity star-forming regions to produce many binary compact objects, we adopt a power law \begin{eqnarray} \label{eq:LambdaVersusZModel} \lambda(Z) &=& \lambda_0 \text{min}[(Z/Z_\odot)^{-a}, F_{max}] \end{eqnarray} with $a\in [0,3]$ and $F_{max}<10^3$ (see, e.g., \cite{popsyn-LIGO-SFR-2008}, \abbrvKBLowZa). For the purposes of illustration, we adopt a concrete scale factor $\lambda_0 = 10^{-3}/M_\odot $, a typical value suitable for neutron star compact binaries (see \abbrvPSgrb{}, \abbrvPSellipticals, and \abbrvKBLowZa). To calibrate the exponent, we rely on Tables 2 and 3 of \cite{popsyn-LowMetallicityImpact2-StarTrackRevised-2012} (model B): for binary black holes and black hole-neutron star binaries we adopt $a=1$, while for binary neutron stars we adopt $a=0$; see, e.g., their Table 1 and Figures 5-7. This choice of exponent provides an extremely conservative assessment of the impact of low Z \citep[see, e.g.,][]{2012CQGra..29n5011O,gwastro-EventPopsynPaper-2016}. For binaries containing neutron stars, for simplicity we adopt the universal delay time distribution \begin{eqnarray} \frac{dP_t(<t)}{dt} = \begin{cases} 0 & t<10 \unit{Myr} \\ \frac{1}{t \ln (13 \unit{Gyr}/10\unit{Myr})} & t \in [10\unit{Myr},13\unit{Gyr}] \end{cases} \end{eqnarray} \abbrvPSgrb{} and \abbrvPSellipticals{} show this distribution is a reasonable approximation to compact binary delay time distributions.\footnote{Simulations suggest the delay time distribution varies from model to model and with mass.
These variations have less impact on our results than the evolving metallicity distribution of star forming gas.} For binary black holes forming at metallicities $Z<0.25 Z_\odot$, we adopt the same prescription. For binary black holes formed near solar metallicity, the delay time distribution can favor long delays between birth and merger, as demonstrated in Figures 9 and 10 of \abbrvPSellipticals{}. [Figure 2 of \cite{2016arXiv160508783L} is an extreme example of this well-known trend.] To be qualitatively consistent with detailed binary evolution calculations at near-solar metallicity (e.g., \abbrvPSellipticals{} and \cite{gwastro-EventPopsynPaper-2016}), for black holes forming at metallicities $Z>0.25 Z_\odot$ we adopt a much more uniform delay time distribution, so coalescing black hole binaries have a nearly uniform delay time distribution between $100\unit{Myr}$ and $13\unit{Gyr}$; that said, our conclusions are not sensitive to this choice. For suitable choices of parameter, our phenomenological response function is qualitatively consistent with the results of detailed simulations of binary evolution \citep{2010ApJ...715L.138B,popsyn-LowMetallicityImpact2c-StarTrackRevised-2014,popsyn-LowMetallicityImpact2b-StarTrackRevised-2013,popsyn-LowMetallicityImpact2-StarTrackRevised-2012}. Therefore, up to an irrelevant overall scale factor, the present-day detection-weighted coalescence rate $r_D$ of binary compact objects formed within two similar galaxies can be calculated via \begin{eqnarray}\label{eqn:rate} r_D \propto \int d\log Z \int_0^{13 \unit{Gyr}} dt \, V(Z) \lambda(Z) \frac{dP_t}{dt}(t) \dot{M}_* p(\log Z|t). \end{eqnarray} \section{Compact object binary formation rate} \label{sec:results:BBH} Our four Milky Way-like galaxies have extremely similar star formation histories and metallicity evolution, particularly at late times.
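To make the structure of the rate integral in Eqn. \ref{eqn:rate} concrete, the Python sketch below discretizes it over time steps and metallicity bins. The galaxy history ($\dot{M}_*$ and $p(\log Z|t)$) and the $V$ and $\lambda$ kernels are toy placeholders of our own; only the form of the integrand follows the text.

```python
import numpy as np

def detection_weighted_rate(t, sfr, logZ, pZ, V_of_Z, lam_of_Z, t_now=13.0):
    """Discretized r_D ~ int dlogZ int dt V(Z) lam(Z) (dP/dt)(t_now - t) SFR(t) p(logZ|t).

    t    : formation times in Gyr; sfr : star formation rate at each time
    logZ : grid of log10(Z/Zsun); pZ[i, j] = p(logZ_j | t_i), normalized over j."""
    delay = t_now - t  # time elapsed since formation, Gyr
    # log-flat delay-time distribution: dP/dt = 1/(t ln(13 Gyr / 10 Myr)) above 10 Myr
    dPdt = np.where(delay > 0.01, 1.0 / (np.maximum(delay, 0.01) * np.log(13.0 / 0.01)), 0.0)
    kernel = V_of_Z(10.0 ** logZ) * lam_of_Z(10.0 ** logZ)  # per-metallicity weight
    integrand = sfr * dPdt * (pZ @ kernel)                  # marginalize over logZ
    return float(np.sum(integrand) * (t[1] - t[0]))         # Riemann sum over t

# Toy inputs (assumptions for illustration only)
lam = lambda Z: 1e-3 * np.minimum(Z ** -1.0, 1e3)  # Eq. (2) with a = 1, F_max = 10^3
V = lambda Z: 1.0 + 9.0 * (Z < 0.25)               # crude low-Z sensitivity boost
t = np.linspace(0.1, 12.9, 200)                    # Gyr
sfr = np.exp(-t / 5.0)                             # declining toy star formation history
logZ = np.linspace(-3.0, 0.0, 30)                  # log10(Z/Zsun)
mu = -2.0 + 1.8 * t / 13.0                         # toy chemical enrichment with time
pZ = np.exp(-0.5 * ((logZ[None, :] - mu[:, None]) / 0.3) ** 2)
pZ /= pZ.sum(axis=1, keepdims=True)
print(detection_weighted_rate(t, sfr, logZ, pZ, V, lam))
```

The overall scale is meaningless by construction; only ratios between galaxies (i.e., between different `sfr` and `pZ` histories fed through the same kernels) carry information, mirroring how the results below are reported.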
However, at early times, the \ExcitingGalaxy{} galaxy forms stars for $\simeq 2\unit{Gyr}$ at a lower characteristic metallicity ($Z\simeq 0.8 \times10^{-3}$) compared to the \BoringGalaxy{} galaxy ($Z\simeq 2\times 10^{-3}$). In this regime, the formation efficiency $\lambda$ and sensitive volume $V$ can depend sensitively on metallicity; for example, for binary black holes, the ratio $(\lambda V)_{\BoringGalaxy{}}/(\lambda V)_{\ExcitingGalaxy{}} \simeq 3$.\footnote{Adopting a more extreme exponent for the metallicity dependence ($a=2$) only changes this ratio by a factor of order $2$.} However, in this same epoch, the star formation rate in the \ExcitingGalaxy{} (low-metallicity) galaxy is smaller, by a factor of roughly 2. Therefore, because only a fraction of order $10\%$ of all star formation occurs in this epoch, by this order-of-magnitude argument, the overall number of present-day coalescing binary black holes in our two galaxies is expected to differ by of order ten percent. Using the concrete phenomenological calculations described above, we in fact find $(r_{D}/M)_{\BoringGalaxy{}}/(r_{D}/M)_{\ExcitingGalaxy{}}\simeq 0.9$. The close agreement between the two galaxies' binary black hole populations, determined by the anticorrelation between star formation rate and metallicity, may be one example of a broader trend. This anticorrelation could cause galaxies with similar present-day properties to always have similar present-day binary black hole populations, regardless of their detailed assembly histories. If low metallicity star formation makes up a small fraction of the total stellar mass, which is the case for each Milky Way-like galaxy we study here, then the quantity $V(Z) \lambda(Z) \dot{M}_* p(\log Z|t)$ must differ by about an order of magnitude for the binary black hole merger rates to differ strongly.
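The arithmetic of this order-of-magnitude argument can be written out explicitly. The sketch below uses only numbers quoted above (the two characteristic metallicities, the factor-of-2 star formation ratio, and the $\sim 10\%$ early-epoch share), ignores the volume factor $V$, and assumes a solar normalization $Z_\odot = 0.02$; it is a consistency check, not the full calculation.

```python
Z_sun = 0.02                      # assumed solar metallicity normalization
Z_h277, Z_h258 = 2e-3, 0.8e-3     # characteristic early-epoch metallicities (from the text)

lam = lambda Z: (Z / Z_sun) ** -1.0   # Eq. (2) with a = 1 (the F_max cap is not reached here)
lam_ratio = lam(Z_h258) / lam(Z_h277) # = 2.5: comparable to the ~3 quoted once V is included
sfr_ratio = 0.5                       # h258 forms stars ~2x more slowly in this epoch
early_fraction = 0.1                  # ~10% of all star formation occurs in this epoch

# Present-day BBH rate ratio h277/h258, assuming identical late-time histories
r_ratio = 1.0 / ((1.0 - early_fraction) + early_fraction * lam_ratio * sfr_ratio)
print(round(r_ratio, 2))  # 0.98: the same order as the ~0.9 found in the full calculation
```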
Such a scenario is possible if we alter our conservative choice of the exponent $a$ in the $\lambda(Z)$ function, but it is nonetheless rather unlikely that galaxies in this mass range will undergo drastically different early evolution. The effects described here are well within the scatter around the mass-metallicity relation. Marginally different realizations of these histories can easily produce factors of order unity difference in galaxies with otherwise indistinguishable present-day properties. More broadly, Table \ref{tab:Results} shows the results of our calculations for the three types of compact binaries described above. These calculations show just how dramatically {\em different} the compact binary populations of galaxies with different present-day {\em masses} could be. The dwarf galaxies have an exceptionally large fraction of low-metallicity star formation in their history \citep{Kirby13} relative to the more massive galaxies. The precise details of their chemical evolution can modify their present-day binary black hole populations by factors of order a few, despite adopting the conservative choices described above for the dependence of rate on metallicity. The progenitors (and, if present, electromagnetic counterparts) of future binary black hole gravitational wave events are overall much more likely to be located in nearby dwarf galaxies with a large population of low metallicity stars. To highlight how sensitively compact binaries depend on detailed evolutionary trajectories, Table \ref{tab:Results} shows results for binary neutron stars. By our construction, no metallicity dependence is included in the present-day event rate for neutron stars. Thus, the differences in present-day state between these two galaxy population models arise solely from the time distribution of past star formation.
In general, the present-day population often depends significantly (i.e., at the level of tens of percent) on the assembly history alone, even aside from any composition-dependent effects. \begin{table} \begin{centering} \begin{tabular}{llllc}\hline \text{Simulation} & \text{BHBH} & \text{BHNS} & \text{NSNS} & M$_* (\msun)$\\ \hline \hline \text{\BoringGalaxy} & 0.0224 & 0.000247 & 0.000206 & $4.24 \times 10^{10}$ \\ \text{\ExcitingGalaxy} & 0.0216 & 0.000297 & 0.000228 & $4.46 \times 10^{10}$ \\ \text{h239} & 0.023 & 0.000354 & 0.000268 & $4.50 \times 10^{10}$\\ \text{h285} & 0.0236 & 0.000284 & 0.000241 & $4.56 \times 10^{10}$ \\ \hline \text{\DwarfOne} & 0.0381 & 0.000308 & 0.000441 & $7.8 \times 10^9$ \\ \text{\DwarfTwo} & 0.0949 & 0.000257 & 0.000884 & $2.5 \times 10^8$ \\ \hline \end{tabular} \end{centering} \caption{\label{tab:Results}Event rates per unit mass $r_D/M$, in arbitrary units, for the three different types of binaries discussed, along with the stellar mass of each galaxy. } \end{table} \section{Implications for transient multimessenger astronomy} \label{sec:Discussion} While a gravitational wave detection provides detailed information about the merging objects (i.e., masses, spins, distance), we need further knowledge to understand the actual origins of the progenitors. Even when electromagnetic counterparts are available, the long delay times between binary formation and merger limit the prospects of examining the environment where the binary first formed. On the one hand, compact binaries are kicked by supernovae, moving substantially away from their birth position \citep{2013ApJ...776...18F,2014ARAA..52...43B}. On the other hand, particularly during the early epoch of galaxy formation, gas in galaxies is well mixed: a star and its present-day adjacent gas generally do not share the same chemical composition. These mixing effects have been previously recognized as an obstacle to interpreting transient event spectra.
For example, \citet{2010MNRAS.402.1523P} demonstrated that absorbing gas neighbouring transient events (there, long GRBs) would generally have high metallicity, even for low-metallicity progenitors. Using hydrodynamical simulations, the same authors showed that observed ambient metallicities (there, measured using damped Lyman $\alpha$ absorbers in the host) do not tightly constrain the metallicity distribution of the progenitor; see, e.g., their Fig 3. Thus, the metallicity of stars and gas adjacent to a specific merger event provides few direct, unambiguous clues to a compact binary merger's progenitors. Fortunately, with the advent of IFUs and position-resolved spectroscopy, observers can now probe the star formation history and metallicity of individual gas packets at different points in a galaxy. These highly-detailed probes will be essential tools to develop a comprehensive model of the galaxy's merger and chemical evolution history. Obtaining the galaxy-wide evolutionary history can help us infer compact binary formation conditions by identifying the lowest-metallicity formation events using stellar archaeological techniques. These techniques have been applied with great success to other transient events. For example, in several cases the metallicity of gas neighbouring a long GRB has been directly measured \citep{2008AJ....135.1136M,2010AJ....140.1557L}. The precise host offset can be compared to the distribution of light and star formation \citep{2010ApJ...708....9F}. Finally, on a host-by-host basis, the delay time between birth and merger has been constrained for short GRBs \citep{2010ApJ...725.1202L} and SN Ia \citep{2011MNRAS.412.1508M}; see, e.g., the review in \cite{2014ARAA..52...43B}.
Thus, despite the challenges involved in obtaining sufficient statistics, possibly requiring third-generation instruments to obtain sufficiently many associations, past experience suggests host galaxy associations will provide unique clues into the formation mechanism of compact binaries. Each host galaxy associates a merger with a unique star formation history and metallicity distribution. With many events, these associations can potentially determine the ``response function'' for compact binaries: how often star forming gas of a given metallicity evolves into merging compact binaries. \section{Conclusions} \label{sec:conclude} We examine the present-day populations of coalescing compact binaries in galaxies with different assembly histories. We combine detailed and state-of-the-art cosmological simulations of galaxies with a simple but robust phenomenological model for how compact binaries form in different environments. We demonstrate that galaxies which appear similar at $z = 0$ but have differing merger histories will have somewhat different detection-weighted compact binary coalescence rates. For binary black hole mergers in particular, we show that the present-day binary black hole coalescence rate for our two Milky-Way like galaxies is nearly identical, independent of their highly distinctive early-time formation histories. This result is perhaps somewhat surprising, considering the early differences in stellar and chemical evolution. Our calculations adopt the same framework as prior investigations, which demonstrated that black hole merger rates depend sensitively on low-metallicity environments \citep[see,\,e.g.][]{PSellipticals,popsyn-LowMetallicityImpact2b-StarTrackRevised-2013,gwastro-EventPopsynPaper-2016}. More broadly, because compact binaries can merge long after they form, their host galaxy can evolve substantially in composition between birth and merger.
Nonetheless, the apparent anticorrelation between the star formation rate and metallicity evolution of our galaxies has led to similar late-time populations, despite substantial differences early on. Further investigation is critical to assess whether this similarity is retained for more generic galaxy assembly histories and binary formation models. Now that we have introduced this method as a proof-of-concept here, it can be applied to large volume simulations, such as Illustris, EAGLE, or Romulus \citep{Illustris,EAGLE,Tremmel16}, in order to examine large numbers of galaxies and thus obtain statistically significant results. The detailed analysis of the compact binary populations formed through the assembly history of individual galaxies is complementary to the population-based approach reported in \cite{2016arXiv160508783L}. More broadly, our analysis reflects similar theoretical studies performed in the interpretation of, for example, long GRBs and their host galaxies. For example, \cite{2009ApJ...702..377K} demonstrated that a sufficiently strong bias towards low-metallicity star formation would predict that most events in the local universe occur in low-mass and dwarf galaxies. For less extreme metallicity biases, subsequent calculations by \citet{2011MNRAS.417..567N} demonstrated that the metallicity distribution within galaxies will usually lead to events in a wide range of host galaxies in the local universe. In our investigation, we have neglected the impact of population III stars. Previous studies suggest the first generation of stars may contribute a small but nonzero fraction of detectable events, potentially with distinctively high black hole masses \citep{2016MNRAS.460L..74H,2016PhRvL.117f1101S,2014MNRAS.442.2963K}. In short, in this work we have demonstrated that the confounding effects of host galaxy assembly history can in the near term complicate the interpretation of associations between GW sources and candidate host galaxies.
However, given sufficient statistics, as will inevitably become available with next-generation GW instruments, combined with large-scale multiband followup, these confounding challenges can be overcome and even turned into opportunities. In the far future, with hundreds of thousands to millions of events per year in networks like Cosmic Voyager, Einstein Telescope, and DECIGO, GW measurements could even provide complementary statistical probes of the past history of galaxy assembly and evolution. \section*{Acknowledgements} ROS acknowledges support from NSF award AST-1412449, via subcontract from the University of Wisconsin-Milwaukee, and PHY-1505629. JB acknowledges generous support from the Helen Gurley Brown Trust. A portion of this work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293.
\section{Introduction}\label{Ciaramella_mini_10_sec:intro} \vspace*{-4mm} The goal of this work is to study the asymptotic optimality of spectral coarse spaces for two-level iterative methods. In particular, we consider a linear system $A \mathbf{u} = \mathbf{f}$, where $A \in \mathbb{R}^{n \times n}$ and $\mathbf{f} \in \mathbb{R}^n$, and a two-level method that, given an iterate $\mathbf{u}^k$, computes the new vector $\mathbf{u}^{k+1}$ as \begin{align} \mathbf{u}^{k+1/2}&=G\mathbf{u}^{k}+M^{-1}\mathbf{f}, && \text{(smoothing step)} \label{Ciaramella_mini_10_eq:SM} \\ \mathbf{u}^{k+1}&=\mathbf{u}^{k+1/2}+ PA_c^{-1}R(\mathbf{f} - A\mathbf{u}^{k+1/2}). && \text{(coarse correction)} \label{Ciaramella_mini_10_eq:CC} \end{align} The smoothing step \eqref{Ciaramella_mini_10_eq:SM} is based on the splitting $A=M-N$, where $M$ is the preconditioner, and $G=M^{-1}N$ the iteration matrix. The correction step \eqref{Ciaramella_mini_10_eq:CC} is characterized by prolongation and restriction matrices $P \in \mathbb{R}^{n \times m}$ and $R=P^\top$, and a coarse matrix $A_c = RAP$. The columns of $P$ are linearly independent vectors spanning the coarse space $V_c := \mathrm{span} \, \{ \mathbf{p}_1 , \dots, \mathbf{p}_m \}$. The convergence of the one-level iteration \eqref{Ciaramella_mini_10_eq:SM} is characterized by the eigenvalues $\lambda_j$, $j=1,\dots,n$, of $G$ (sorted in descending order by magnitude). The convergence of the two-level iteration \eqref{Ciaramella_mini_10_eq:SM}-\eqref{Ciaramella_mini_10_eq:CC} depends on the spectrum of the iteration matrix $T$, obtained by substituting \eqref{Ciaramella_mini_10_eq:SM} into \eqref{Ciaramella_mini_10_eq:CC} and rearranging terms: \vspace*{-2mm} \begin{equation}\label{Ciaramella_mini_10_eq:2L} T = [ I - P ( RAP)^{-1} R A ] G.
\end{equation} \vspace*{-4mm} \noindent The goal of this short paper is to give a partial answer to the fundamental question: {\bf given an integer $m$, what is the coarse space of dimension $m$ which minimizes the spectral radius $\rho(T)$?} Since step \eqref{Ciaramella_mini_10_eq:CC} aims at correcting the error components that the smoothing step \eqref{Ciaramella_mini_10_eq:SM} is not able to reduce (or eliminate), it is intuitive to think that an optimal coarse space $V_c$ is obtained by defining $\mathbf{p}_j$ as the eigenvectors of $G$ corresponding to the $m$ largest (in modulus) eigenvalues. We call such a $V_c$ a \textit{spectral coarse space}. Following the idea of correcting the `badly converging' modes of $G$, several papers proposed new, and in some sense optimal, coarse spaces. In the context of domain decomposition methods, we refer, e.g., to \cite{gander2014new,GHS2018,gander2019song}, where efficient coarse spaces have been designed for parallel, restricted additive and additive Schwarz methods. In the context of multigrid methods, it is worth mentioning the work \cite{katrutsa2017deep}, where the interpolation weights are optimized using an approach based on deep neural networks. Fundamental results are presented in \cite{xu_zikatanov_2017}: for a symmetric $A$, it is proved that the coarse space of size $m$ that minimizes the energy norm of $T$, namely $\| T \|_A$, is the span of the $m$ eigenvectors of $\overline{M}A$ corresponding to the $m$ lowest eigenvalues. Here, $\overline{M} := M^{-1} + M^{-\top} - M^{-\top}AM^{-1}$ is symmetric and assumed positive definite. If $M$ is symmetric, a direct calculation gives $\overline{M}A=2M^{-1}A-(M^{-1}A)^2$. Using that $M^{-1}A=I-G$, one can show that the $m$ eigenvectors associated to the lowest $m$ eigenvalues of $\overline{M}A$ correspond to the $m$ slowest modes of $G$. Hence, the optimal coarse space proposed in \cite{xu_zikatanov_2017} is a spectral coarse space.
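The setting above is easy to reproduce numerically. The Python sketch below builds $T$ from \eqref{Ciaramella_mini_10_eq:2L} for a small illustrative model (a 1D Laplacian with damped Jacobi smoothing, so that $A$ and $G$ share eigenvectors; these concrete choices are ours, for illustration) and verifies that the spectral coarse space yields $\rho(T)=|\lambda_{m+1}|$:

```python
import numpy as np

n, m = 20, 3
# 1D Laplacian model problem and damped-Jacobi splitting A = M - N (illustrative)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
omega = 2.0 / 3.0
M = np.diag(np.diag(A)) / omega
G = np.eye(n) - np.linalg.solve(M, A)        # smoothing iteration matrix M^{-1} N

# Spectral coarse space: eigenvectors of G for the m largest eigenvalues in modulus
lam, V = np.linalg.eigh(G)                   # G is symmetric for this choice of M
order = np.argsort(-np.abs(lam))
P = V[:, order[:m]]
R = P.T

# Two-level iteration matrix T = [I - P (R A P)^{-1} R A] G
T = (np.eye(n) - P @ np.linalg.solve(R @ A @ P, R @ A)) @ G
rho_T = np.max(np.abs(np.linalg.eigvals(T)))
print(np.isclose(rho_T, np.abs(lam[order[m]])))  # rho(T) = |lambda_{m+1}|
```

Since $A$ and $G$ share an orthonormal eigenbasis here, the coarse correction annihilates the $m$ dominant modes exactly while leaving the remaining modes untouched, which is the behavior analyzed in the next section.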
The sharp result of \cite{xu_zikatanov_2017} provides a concrete optimal choice of $V_c$ minimizing $\| T\|_A$. This is generally an upper bound for the asymptotic convergence factor $\rho(T)$. As we will see in Section \ref{Ciaramella_mini_10_sec:pert}, choosing the spectral coarse space, one gets $\rho(T)=|\lambda_{m+1}|$. The goal of this work is to show that this is not necessarily the optimal asymptotic convergence factor. In Section \ref{Ciaramella_mini_10_sec:pert}, we perform a detailed optimality analysis for the case $m=1$. The asymptotic optimality of coarse spaces for $m\geq 1$ is studied numerically in Section \ref{Ciaramella_mini_10_sec:opt}. Interestingly, we will see that by optimizing $\rho(T)$ one constructs coarse spaces that lead to preconditioned matrices with better condition numbers. \vspace*{-2mm} \section{A perturbation approach}\label{Ciaramella_mini_10_sec:pert} \vspace*{-4mm} Let $G$ be diagonalizable with eigenpairs $(\lambda_j,\mathbf{v}_j)$, $j=1,\dots,n$. Suppose that $\mathbf{v}_j$ are also eigenvectors of $A$: $A \mathbf{v}_j = \widetilde{\lambda}_j \mathbf{v}_j$. Concrete examples where these hypotheses are fulfilled are given in Section \ref{Ciaramella_mini_10_sec:opt}. Assume that $\textrm{rank} \, P = m$ ($\textrm{dim}\, V_c=m$). For any eigenvector $\mathbf{v}_j$, we can write the vector $T \mathbf{v}_j$ as \vspace*{-3mm} \begin{equation}\label{Ciaramella_mini_10_eq:wT} T \mathbf{v}_j = \sum_{\ell=1}^n \widetilde{t}_{j,\ell} \mathbf{v}_\ell, \: j=1,\dots,n. \end{equation} \vspace*{-2mm} \noindent If we denote by $\widetilde{T} \in \mathbb{R}^{n \times n}$ the matrix of entries $\widetilde{t}_{j,\ell}$, and define $V:=[\mathbf{v}_1,\dots,\mathbf{v}_n]$, then \eqref{Ciaramella_mini_10_eq:wT} becomes $TV=V\widetilde{T}^\top$. Since $G$ is diagonalizable, $V$ is invertible, and thus $T$ and $\widetilde{T}^\top$ are similar. Hence, $T$ and $\widetilde{T}$ have the same spectrum. We can now prove the following lemma. 
\vspace*{-1mm} \begin{lemma}[Characterization of $\widetilde{T}$]\label{Ciaramella_mini_10_lemma:2} Let $\widetilde{m} \geq m$ be a given index and assume that $V_c := \mathrm{span} \, \{ \mathbf{p}_1 , \dots, \mathbf{p}_m \}$ satisfies \begin{equation}\label{Ciaramella_mini_10_eq:ass1} V_c \subseteq \mathrm{span}\, \{ \mathbf{v}_j \}_{j=1}^{\widetilde{m}} \text{ and } V_c \cap \{ \mathbf{v}_j \}_{j=\widetilde{m}+1}^n = \{ 0 \}. \end{equation} Then, it holds that \begin{equation}\label{Ciaramella_mini_10_eq:wTT} \begin{aligned}[c] \widetilde{T} = \begin{bmatrix} \widetilde{T}_{\widetilde{m}} & 0 \\ X & \Lambda_{\widetilde{m}}\\ \end{bmatrix}, \end{aligned} \qquad \begin{aligned}[c] &\Lambda_{\widetilde{m}} = \mathrm{diag}\, (\lambda_{\widetilde{m}+1},\dots,\lambda_{n}), \\ & \widetilde{T}_{\widetilde{m}} \in \mathbb{R}^{\widetilde{m} \times \widetilde{m}}, X \in \mathbb{R}^{(n-\widetilde{m}) \times \widetilde{m}}. \end{aligned} \end{equation} \end{lemma} \begin{proof} The hypothesis \eqref{Ciaramella_mini_10_eq:ass1} guarantees that $\mathrm{span}\, \{ \mathbf{v}_j \}_{j=1}^{\widetilde{m}}$ is invariant under the action of $T$. Hence, $T \mathbf{v}_j \in \mathrm{span}\, \{ \mathbf{v}_j \}_{j=1}^{\widetilde{m}}$ for $j=1,\dots,\widetilde{m}$, and, using \eqref{Ciaramella_mini_10_eq:wT}, one gets that $\widetilde{t}_{j,\ell}=0$ for $j=1,\dots,\widetilde{m}$ and $\ell=\widetilde{m}+1,\dots,n$. Now, consider any $j>\widetilde{m}$. A direct calculation using \eqref{Ciaramella_mini_10_eq:wT} reveals that $T\mathbf{v}_j = G\mathbf{v}_j - P (RAP)^{-1}RAG \mathbf{v}_j = \lambda_j \mathbf{v}_j - \sum_{\ell=1}^{\widetilde{m}} x_{j-\widetilde{m},\ell} \mathbf{v}_{\ell}$, where $x_{i,k}$ are the elements of $X\in \mathbb{R}^{(n-\widetilde{m})\times\widetilde{m} }$. Hence, the structure \eqref{Ciaramella_mini_10_eq:wTT} follows.
\end{proof} Notice that, if \eqref{Ciaramella_mini_10_eq:ass1} holds, then Lemma \ref{Ciaramella_mini_10_lemma:2} allows us to study the properties of $T$ using the matrix $\widetilde{T}$ and its structure \eqref{Ciaramella_mini_10_eq:wTT}, and hence $\widetilde{T}_{\widetilde{m}}$. Let us now turn to the questions posed in Section \ref{Ciaramella_mini_10_sec:intro}. Assume that $\mathbf{p}_j=\mathbf{v}_j$, $j=1,\dots,m$, namely $V_c = \mathrm{span} \, \{ \mathbf{v}_j\}_{j=1}^m$. In this case, \eqref{Ciaramella_mini_10_eq:ass1} holds with $\widetilde{m} = m$, and a simple argument\footnote{ Let ${\bf v}_j$ be an eigenvector of $A$ with $j \in \{1,\dots,m\}$. Denote by ${\bf e}_j \in \mathbb{R}^n$ the $j$th canonical vector. Since $P {\bf e}_j={\bf v}_j$, $RAP {\bf e}_j = RA {\bf v}_j$. This is equivalent to ${\bf e}_j = (RAP)^{-1} RA {\bf v}_j$, which gives $T\mathbf{v}_j = \lambda_j(\mathbf{v}_j - P (RAP)^{-1}RA \mathbf{v}_j) = \lambda_j( \mathbf{v}_j - P {\bf e}_j) =0$. } leads to $\widetilde{T}_{\widetilde{m}}=0$, $\widetilde{T} = \begin{bmatrix} 0 & 0 \\ X & \Lambda_{\widetilde{m}}\\ \end{bmatrix}$. The spectrum of $\widetilde{T}$ is $\{0,\lambda_{m+1},\dots,\lambda_n\}$. This means that $V_c \subset \textrm{kern} \, T$ and $\rho(T)=|\lambda_{m+1}|$. Let us now perturb the coarse space $V_c$ using the eigenvector $\mathbf{v}_{m+1}$, that is $V_c(\varepsilon) := \mathrm{span} \, \{ \mathbf{v}_j + \varepsilon \, \mathbf{v}_{m+1} \}_{j=1}^m$. Clearly, $\text{dim}\, V_c(\varepsilon) = m$ for any $\varepsilon \in \mathbb{R}$. In this case, \eqref{Ciaramella_mini_10_eq:ass1} holds with $\widetilde{m} = m+1$ and $\widetilde{T}$ becomes \begin{equation}\label{Ciaramella_mini_10_thm:TTTT} \widetilde{T}(\varepsilon) = \begin{bmatrix} \widetilde{T}_{\widetilde{m}}(\varepsilon) & 0 \\ X(\varepsilon) & \Lambda_{\widetilde{m}}\\ \end{bmatrix}, \end{equation} where we make explicit the dependence on $\varepsilon$. 
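Before analyzing $\widetilde{T}(\varepsilon)$, its behavior can be previewed numerically. The Python sketch below evaluates $\rho(T(\varepsilon))$ for $m=1$ on a small illustrative example in which $G$ and $A$ are diagonal (so the eigenvectors are orthonormal and $\gamma=0$) and $\lambda_1>0>\lambda_2$; the particular eigenvalues are our own toy choices:

```python
import numpy as np

# Toy example: G and A diagonal, sharing the canonical eigenvectors (gamma = 0)
lam_G = np.array([0.9, -0.8, 0.5, 0.1])   # lambda_1 > 0 > lambda_2, |lambda_3| = 0.5
lam_A = np.array([1.0, 2.0, 3.0, 4.0])    # corresponding eigenvalues of A
G, A = np.diag(lam_G), np.diag(lam_A)
n = len(lam_G)

def rho_T(eps):
    """Spectral radius of T for the coarse space span{v_1 + eps * v_2} (m = 1)."""
    p = np.zeros((n, 1))
    p[0, 0], p[1, 0] = 1.0, eps
    T = (np.eye(n) - p @ np.linalg.solve(p.T @ A @ p, p.T @ A)) @ G
    return np.max(np.abs(np.linalg.eigvals(T)))

eps_grid = np.linspace(-2.0, 2.0, 2001)
radii = [rho_T(e) for e in eps_grid]
print(rho_T(0.0))   # 0.8 = |lambda_2|: the unperturbed spectral coarse space
print(min(radii))   # drops to |lambda_3| = 0.5 for suitable eps != 0
```

The minimum over the grid falls strictly below $|\lambda_2|$, which is exactly the phenomenon the following theorem makes precise.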
Notice that $\varepsilon=0$ clearly leads to $\widetilde{T}_{\widetilde{m}}(0)=\mathrm{diag}\, (0,\dots,0,\lambda_{m+1}) \in \mathbb{R}^{\widetilde{m} \times \widetilde{m}}$, and we are back to the unperturbed case with $\widetilde{T}(0)=\widetilde{T}$ having spectrum $\{0,\lambda_{m+1},\dots,\lambda_n\}$. Now, notice that $\min_{\varepsilon \in \mathbb{R}} \rho(\widetilde{T}(\varepsilon)) \leq \rho(\widetilde{T}(0)) = | \lambda_{m+1} |$. Thus, it is natural to ask the question: is this inequality strict? Can one find an $\widetilde{\varepsilon} \neq 0$ such that $\rho(\widetilde{T}(\widetilde{\varepsilon}))=\min_{\varepsilon \in \mathbb{R}} \rho(\widetilde{T}(\varepsilon))<\rho(\widetilde{T}(0))$ holds? If the answer is positive, then we can conclude that choosing the coarse vectors equal to the dominating eigenvectors of $G$ is not an optimal choice. The next key result shows that, in the case $m=1$, the answer is positive. \begin{theorem}[Perturbation of $V_c$]\label{Ciaramella_mini_10_thm:perturb} Let $(\mathbf{v}_1,\lambda_1)$, $(\mathbf{v}_2,\lambda_2)$ and $(\mathbf{v}_3,\lambda_3)$ be three real eigenpairs of $G$, $G \mathbf{v}_j = \lambda_j \mathbf{v}_j$, such that $0<|\lambda_3|<|\lambda_2| \leq |\lambda_1|$ and $\| \mathbf{v}_j \|_2 =1$, $j=1,2$. Denote by $\widetilde{\lambda}_j \in \mathbb{R}$ the eigenvalues of $A$ corresponding to $\mathbf{v}_j$, and assume that $\widetilde{\lambda}_1\widetilde{\lambda}_2>0$. Define $V_c := \mathrm{span}\,\{ \mathbf{v}_1 + \varepsilon \mathbf{v}_2 \}$ with $\varepsilon \in \mathbb{R}$, and $\gamma := \mathbf{v}_1^\top \mathbf{v}_2 \in [-1,1]$.
Then \begin{itemize}\itemsep0em \item[{\rm (A)}]$\,$ The spectral radius of $\widetilde{T}(\varepsilon)$ is $\rho(\widetilde{T}(\varepsilon))=\max\{ |\lambda(\varepsilon,\gamma)| , | \lambda_3 | \}$, where \begin{equation}\label{Ciaramella_mini_10_thm:lam} \lambda(\varepsilon,\gamma) = \frac{\lambda_1 \widetilde{\lambda}_2 \varepsilon^2 + \gamma(\lambda_1 \widetilde{\lambda}_2 + \lambda_2 \widetilde{\lambda}_1)\varepsilon + \lambda_2 \widetilde{\lambda}_1}{\widetilde{\lambda}_2 \varepsilon^2 + \gamma (\widetilde{\lambda}_1+\widetilde{\lambda}_2)\varepsilon + \widetilde{\lambda}_1}. \end{equation} \item[{\rm (B)}]$\,$ Let $\gamma=0$. If $\lambda_1>\lambda_2>0$ or $0>\lambda_2>\lambda_1$, then $\min\limits _{\varepsilon \in \mathbb{R}} \rho(\widetilde{T}(\varepsilon)) = \rho(\widetilde{T}(0))$. \item[{\rm (C)}]$\,$ Let $\gamma=0$. If $\lambda_2>0>\lambda_1$ or $\lambda_1>0>\lambda_2$, then there exists an $\widetilde{\varepsilon} \neq 0$ such that $\rho(\widetilde{T}(\widetilde{\varepsilon})) = |\lambda_3| = \min\limits_{\varepsilon \in \mathbb{R}} \rho(\widetilde{T}(\varepsilon)) < \rho(\widetilde{T}(0))$. \item[{\rm (D)}]$\,$ Let $\gamma\neq 0$. If $\lambda_1>\lambda_2>0$ or $0>\lambda_2>\lambda_1$, then there exists an $\widetilde{\varepsilon} \neq 0$ such that $|\lambda(\widetilde{\varepsilon},\gamma)|<|\lambda_2|$ and hence $\rho(\widetilde{T}(\widetilde{\varepsilon})) = \max\{|\lambda(\widetilde{\varepsilon},\gamma)|,|\lambda_3|\} < \rho(\widetilde{T}(0))$. \item[{\rm (E)}]$\,$ Let $\gamma\neq 0$. If $\lambda_2>0>\lambda_1$ or $\lambda_1>0>\lambda_2$, then there exists an $\widetilde{\varepsilon} \neq 0$ such that $\rho(\widetilde{T}(\widetilde{\varepsilon})) = |\lambda_3| = \min\limits _{\varepsilon \in \mathbb{R}} \rho(\widetilde{T}(\varepsilon)) < \rho(\widetilde{T}(0))$.
\end{itemize} \end{theorem} \begin{proof} Since $m=1$, a direct calculation allows us to compute the matrix $$\widetilde{T}_{\widetilde{m}}(\varepsilon)=\begin{bmatrix} \lambda_1 - \frac{\lambda_1\widetilde{\lambda}_1(1+\varepsilon \gamma)}{g} & -\varepsilon \frac{\lambda_1\widetilde{\lambda}_1(1+\varepsilon \gamma)}{g} \\ - \frac{\lambda_2\widetilde{\lambda}_2(\varepsilon + \gamma)}{g} & \lambda_2 - \frac{(\varepsilon\lambda_2\widetilde{\lambda}_2)(\varepsilon + \gamma)}{g} \\ \end{bmatrix},$$ where $g=\widetilde{\lambda}_1 + \varepsilon \gamma[ \widetilde{\lambda}_1+\widetilde{\lambda}_2] + \varepsilon^2 \widetilde{\lambda}_2$. The spectrum of this matrix is $\{0, \lambda(\varepsilon,\gamma)\}$, with $\lambda(\varepsilon,\gamma)$ given in \eqref{Ciaramella_mini_10_thm:lam}. Hence, point ${\rm (A)}$ follows recalling \eqref{Ciaramella_mini_10_thm:TTTT}. To prove points ${\rm (B)}$, ${\rm (C)}$, ${\rm (D)}$ and ${\rm (E)}$ we use some properties of the map $\varepsilon \mapsto \lambda(\varepsilon,\gamma)$. First, we notice that \begin{equation}\label{Ciaramella_mini_10_thm:prop} \lambda(0,\gamma)=\lambda_2, \; \lim_{\varepsilon \rightarrow \pm \infty} \lambda(\varepsilon,\gamma) = \lambda_1, \; \lambda(\varepsilon,\gamma)=\lambda(-\varepsilon,-\gamma). \end{equation} Second, the derivative of $\lambda(\varepsilon,\gamma)$ with respect to $\varepsilon$ is \begin{equation}\label{Ciaramella_mini_10_thm:der} \frac{d \lambda(\varepsilon,\gamma)}{d \varepsilon} = \frac{(\lambda_1-\lambda_2)\widetilde{\lambda}_1\widetilde{\lambda}_2(\varepsilon^2+2\varepsilon/\gamma+1)\gamma}{(\widetilde{\lambda}_2 \varepsilon^2+\gamma(\widetilde{\lambda}_1+\widetilde{\lambda}_2)\varepsilon+\widetilde{\lambda}_1)^2}. \end{equation} Because of $\lambda(\varepsilon,\gamma)=\lambda(-\varepsilon,-\gamma)$ in \eqref{Ciaramella_mini_10_thm:prop}, we can assume without loss of generality that $\gamma \geq 0$. Let us now consider the case $\gamma=0$. 
In this case, the derivative \eqref{Ciaramella_mini_10_thm:der} becomes $\frac{d \lambda(\varepsilon,0)}{d \varepsilon} = \frac{2\varepsilon(\lambda_1-\lambda_2)\widetilde{\lambda}_1\widetilde{\lambda}_2}{(\widetilde{\lambda}_2 \varepsilon^2+\widetilde{\lambda}_1)^2}$. Moreover, since $\lambda(\varepsilon,0)=\lambda(-\varepsilon,0)$ we can assume that $\varepsilon \geq 0$. Case ${\rm (B)}$. If $\lambda_1>\lambda_2>0$, then $\frac{d \lambda(\varepsilon,0)}{d \varepsilon}>0$ for all $\varepsilon>0$. Hence, $\varepsilon \mapsto \lambda(\varepsilon,0)$ is monotonically increasing, $\lambda(\varepsilon,0) \geq 0$ for all $\varepsilon>0$ and, thus, the minimum of $\varepsilon \mapsto |\lambda(\varepsilon,0)|$ is attained at $\varepsilon = 0$ with $|\lambda(0,0)|=|\lambda_2|>|\lambda_3|$, and the result follows. Analogously, if $0>\lambda_2>\lambda_1$, then $\frac{d \lambda(\varepsilon,0)}{d \varepsilon}<0$ for all $\varepsilon>0$. Hence, $\varepsilon \mapsto \lambda(\varepsilon,0)$ is monotonically decreasing, $\lambda(\varepsilon,0) < 0$ for all $\varepsilon>0$ and the minimum of $\varepsilon \mapsto |\lambda(\varepsilon,0)|$ is attained at $\varepsilon = 0$. Case ${\rm (C)}$. If $\lambda_1>0>\lambda_2$, then $\frac{d \lambda(\varepsilon,0)}{d \varepsilon}>0$ for all $\varepsilon >0$. Hence, $\varepsilon \mapsto \lambda(\varepsilon,0)$ is monotonically increasing and such that $\lambda(0,0)=\lambda_2<0$ and $\lim_{\varepsilon \rightarrow \infty} \lambda(\varepsilon,0) = \lambda_1>0$. Thus, the continuity of the map $\varepsilon \mapsto \lambda(\varepsilon,0)$ guarantees the existence of an $\widetilde{\varepsilon}>0$ such that $\lambda(\widetilde{\varepsilon},0)=0$. Analogously, if $\lambda_2>0>\lambda_1$, then $\frac{d \lambda(\varepsilon,0)}{d \varepsilon}<0$ for all $\varepsilon>0$ and the result follows by the continuity of $\varepsilon \mapsto \lambda(\varepsilon,0)$. Let us now consider the case $\gamma>0$.
The sign of $\frac{d \lambda(\varepsilon,\gamma)}{d \varepsilon}$ is affected by the term $f(\varepsilon):=\varepsilon^2+2\varepsilon/\gamma+1$, which appears in the numerator of \eqref{Ciaramella_mini_10_thm:der}. The function $f(\varepsilon)$ is strictly convex, attains its minimum at $\varepsilon=-\frac{1}{\gamma}$, and is negative in $(\bar{\varepsilon}_1,\bar{\varepsilon}_2)$ and positive in $(-\infty,\bar{\varepsilon}_1)\cup(\bar{\varepsilon}_2,\infty)$, with $\bar{\varepsilon}_1,\bar{\varepsilon}_2=-\frac{1\pm \sqrt{1-\gamma^2}}{\gamma}$. Case ${\rm (D)}$. If $\lambda_1>\lambda_2>0$, then $\frac{d \lambda(\varepsilon,\gamma)}{d \varepsilon}>0$ for all $\varepsilon > \bar{\varepsilon}_2$. Hence, $\frac{d \lambda(0,\gamma)}{d \varepsilon}>0$, which means that there exists an $\widetilde{\varepsilon}<0$ such that $|\lambda(\widetilde{\varepsilon},\gamma)|<|\lambda(0,\gamma)|=|\lambda_2|$. The case $0>\lambda_2>\lambda_1$ follows analogously. Case ${\rm (E)}$. If $\lambda_1>0>\lambda_2$, then $\frac{d \lambda(\varepsilon,\gamma)}{d \varepsilon}>0$ for all $\varepsilon>0$. Hence, by the continuity of $\varepsilon \mapsto \lambda(\varepsilon,\gamma)$ (for $\varepsilon\geq 0$) there exists an $\widetilde{\varepsilon}>0$ such that $\lambda(\widetilde{\varepsilon},\gamma)=0$. The case $\lambda_2>0>\lambda_1$ follows analogously. \end{proof} Theorem \ref{Ciaramella_mini_10_thm:perturb} and its proof say that, if the two eigenvalues $\lambda_1$ and $\lambda_2$ have opposite signs (but they could be equal in modulus), then it is always possible to find an $\varepsilon \neq 0$ such that the coarse space $V_c := \mathrm{span}\{ \mathbf{v}_1 + \varepsilon \mathbf{v}_2 \}$ leads to a faster method than $V_c := \mathrm{span}\{ \mathbf{v}_1 \}$, even though both are one-dimensional subspaces. In addition, if $\lambda_3 \neq 0$ the former leads to a two-level operator $T$ with a larger kernel than the one corresponding to the latter.
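Point (C) can also be verified directly. The sketch below is a minimal example under stated assumptions: it again takes the two-level operator in the form $T(\varepsilon) = (I - P(RAP)^{-1}RA)G$ with $R = P^\top$ and $P$ spanning $V_c = \mathrm{span}\{\mathbf{v}_1 + \varepsilon \mathbf{v}_2\}$, and uses the undamped Jacobi iteration on a 1D Laplacian, for which $\lambda_1$ and $\lambda_2$ have opposite signs and $\gamma = 0$; the root $\widetilde{\varepsilon}$ of $\lambda(\varepsilon,0)$ is available in closed form from \eqref{Ciaramella_mini_10_thm:lam}:

```python
import numpy as np

n = 8
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
G = np.eye(n) - A / 2.0                               # Jacobi smoother, omega = 1

lam, V = np.linalg.eigh(G)
order = np.argsort(-np.abs(lam))                      # decreasing |lambda|
lam, V = lam[order], V[:, order]
v1, v2 = V[:, 0], V[:, 1]                             # lambda_1, lambda_2 of opposite signs
tlam1, tlam2 = v1 @ A @ v1, v2 @ A @ v2               # A-eigenvalues of v1, v2

def rho_T(eps):
    # two-level operator for V_c = span{v1 + eps*v2}, R = P^T
    P = (v1 + eps * v2).reshape(-1, 1)
    T = (np.eye(n) - P @ np.linalg.inv(P.T @ A @ P) @ P.T @ A) @ G
    return max(abs(np.linalg.eigvals(T)))

# root of lambda(eps, 0): eps^2 = -lambda_2*tlam_1 / (lambda_1*tlam_2)
eps_opt = np.sqrt(-lam[1] * tlam1 / (lam[0] * tlam2))
print(rho_T(0.0), rho_T(eps_opt), abs(lam[2]))        # rho drops to |lambda_3|
```

At $\varepsilon = 0$ the method contracts at rate $|\lambda_2|$, while at $\widetilde{\varepsilon}$ the perturbed one-dimensional coarse space already achieves $|\lambda_3|$.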
The situation is completely different if $\lambda_1$ and $\lambda_2$ have the same sign. In this case, the orthogonality parameter $\gamma$ is crucial. If $\mathbf{v}_1$ and $\mathbf{v}_2$ are orthogonal ($\gamma=0$), then one cannot improve the effect of $V_c:= \mathrm{span}\{ \mathbf{v}_1 \}$ by a simple perturbation using $\mathbf{v}_2$. However, if $\mathbf{v}_1$ and $\mathbf{v}_2$ are not orthogonal ($\gamma \neq 0$), then one can still find an $\varepsilon \neq 0$ such that $\rho(\widetilde{T}(\varepsilon)) < \rho(\widetilde{T}(0))$. Notice that, if $|\lambda_3|=|\lambda_2|$, Theorem \ref{Ciaramella_mini_10_thm:perturb} shows that one cannot obtain a $\rho(T)$ smaller than $|\lambda_2|$ using a one-dimensional perturbation. However, if one optimizes the entire coarse space $V_c$ (keeping $m$ fixed), then one can find coarse spaces leading to a better contraction factor of the two-level iteration, even though $|\lambda_3|=|\lambda_2|$. This is shown in the next section. \section{Optimizing the coarse-space functions}\label{Ciaramella_mini_10_sec:opt} \vspace*{-4mm} Consider the elliptic problem \vspace*{-2mm} \begin{equation}\label{Ciaramella_mini_10_eq:elliptic} - \Delta u + c \, (\partial_x u + \partial_y u) = f \; \text{ in $\Omega=(0,1)^2$},\quad u = 0 \; \text{ on $\partial \Omega$}. \end{equation} \vspace*{-2mm} \noindent Using a uniform grid of size $h$, the standard second-order finite-difference scheme for the Laplace operator and the central difference approximation for the advection terms, problem \eqref{Ciaramella_mini_10_eq:elliptic} becomes $A \mathbf{u} = \mathbf{f}$, where $A$ has constant and positive diagonal entries, $D=\mathrm{diag}(A)=4/h^2 I$. A simple calculation shows that, if $c\geq 0$ satisfies $c\leq 2/h$, then the eigenvalues of $A$ are real. The eigenvectors of $A$ are orthogonal if $c=0$ and non-orthogonal if $c>0$.
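One way to see that the spectrum of $A$ is real for $c \leq 2/h$ is that the 1D advection-diffusion factor is symmetrizable by a diagonal similarity transform whenever its sub- and super-diagonal entries have the same sign. The sketch below is a minimal check under this assumption (the function name and grid size are illustrative, not from the original experiments):

```python
import numpy as np

def advection_diffusion_matrix(n_int, c):
    # 2D finite differences on (0,1)^2: 5-point Laplacian + central advection
    h = 1.0 / (n_int + 1)
    I = np.eye(n_int)
    K = (2*np.eye(n_int) - np.eye(n_int, k=1) - np.eye(n_int, k=-1)) / h**2
    C = (np.eye(n_int, k=1) - np.eye(n_int, k=-1)) * c / (2*h)
    A1 = K + C                        # 1D factor: -u'' + c u'
    return np.kron(I, A1) + np.kron(A1, I)

n_int, c = 9, 10.0                    # h = 1/10, so c <= 2/h = 20
h = 1.0 / (n_int + 1)
A = advection_diffusion_matrix(n_int, c)

# 1D factor has sub-diagonal -1/h^2 - c/(2h) and super-diagonal -1/h^2 + c/(2h);
# for c < 2/h both are negative, hence D A D^{-1} is symmetric (real spectrum)
sub, sup = -1/h**2 - c/(2*h), -1/h**2 + c/(2*h)
d = np.sqrt(sup / sub) ** np.arange(n_int)
D = np.diag(np.kron(d, d))
S = D @ A @ np.linalg.inv(D)
print(np.allclose(S, S.T))            # similarity to a symmetric matrix
```

Since the 2D matrix is a Kronecker sum of two copies of the 1D factor, the symmetrizing diagonal is the Kronecker product of the 1D scalings.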
One of the most widely used smoothers for \eqref{Ciaramella_mini_10_eq:elliptic} is the damped Jacobi method: $\mathbf{u}^{k+1} = \mathbf{u}^k + \omega D^{-1}( \mathbf{f} - A \mathbf{u}^k)$, where $\omega \in (0,1]$ is a damping parameter. The corresponding iteration matrix is $G=I-\omega D^{-1} A$. Since $D=4/h^2 I$, the matrices $A$ and $G$ have the same eigenvectors. For $c=0$, it is possible to show that, if $\omega=1$ (classical Jacobi iteration), then the nonzero eigenvalues of $G$ have positive and negative signs, while if $\omega=1/2$, the eigenvalues of $G$ are all positive. Hence, the chosen model problem allows us to work in the theoretical framework of Section \ref{Ciaramella_mini_10_sec:pert}. To numerically validate Theorem \ref{Ciaramella_mini_10_thm:perturb}, we set $h=1/10$ and consider $V_c:=\mathrm{span}\left\{\mathbf{v}_1+\varepsilon \mathbf{v}_2 \right\}$. Figure \ref{Ciaramella_mini_10_eq:validate_thm} shows the dependence of $\rho(T(\varepsilon))$ and $|\lambda(\varepsilon,\gamma)|$ on $\varepsilon$ and $\gamma$. In the top-left panel, we set $c=0$ and $\omega=1/2$ so that the hypotheses of point (B) of Theorem \ref{Ciaramella_mini_10_thm:perturb} are satisfied, since $\gamma=0$ and $\lambda_1\geq \lambda_2>0$. As point (B) predicts, we observe that $\min\limits_{\varepsilon\in\mathbb{R}}\rho(T(\varepsilon))$ is attained at $\varepsilon=0$, i.e. $\min_{\varepsilon\in\mathbb{R}}\rho(T(\varepsilon))=\rho(T(0))=\lambda_2$. Hence, adding a perturbation does not improve the coarse space made only by $\mathbf{v}_1$. Next, we consider point (C) by setting $c=0$ and $\omega=1$. Through a direct computation we get $\lambda_1=-0.95$, $\lambda_2=-\lambda_1$ and $\lambda_3=0.90$.
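The claimed sign pattern of the Jacobi eigenvalues for $c=0$ is easy to verify directly; a minimal sketch (grid size illustrative):

```python
import numpy as np

n_int = 9                             # h = 1/10, interior points per dimension
h = 1.0 / (n_int + 1)
K = (2*np.eye(n_int) - np.eye(n_int, k=1) - np.eye(n_int, k=-1)) / h**2
A = np.kron(np.eye(n_int), K) + np.kron(K, np.eye(n_int))   # c = 0

spectra = {}
for omega in (1.0, 0.5):
    G = np.eye(n_int**2) - omega * (h**2 / 4.0) * A          # G = I - omega D^{-1} A
    spectra[omega] = np.linalg.eigvalsh(G)
for omega, mu in spectra.items():
    print(omega, mu.min(), mu.max())
# omega = 1.0: nonzero eigenvalues of both signs; omega = 0.5: all positive
```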
The top-right panel shows, on the one hand, that for several values of $\varepsilon$, $\rho(T(\varepsilon))=\lambda_3<\lambda_2$, that is, with a one-dimensional perturbed coarse space, we obtain the same contraction factor we would have with the two-dimensional spectral coarse space $V_c=\text{span}\left\{\mathbf{v}_1,\mathbf{v}_2\right\}$. On the other hand, we observe that there are two values of $\varepsilon$ such that $\rho(\widetilde{T}_{\widetilde{m}}(\varepsilon))=0$, which (recalling \eqref{Ciaramella_mini_10_eq:wT} and \eqref{Ciaramella_mini_10_eq:wTT}) implies that $T$ is nilpotent over $\mathrm{span}\{\mathbf{v}_1,\mathbf{v}_2\}$. To study point (D), we set $c=10$, $\omega=1/2$, which leads to $\lambda_1=0.92$, $\lambda_2=\lambda_3=0.90$. The bottom-left panel confirms that there exists an $\varepsilon^*<0$ such that $|\lambda(\varepsilon^*,\gamma)|\leq \lambda_2$, which implies $\rho(T(\varepsilon^*))\leq \lambda_2$. Finally, we set $c=10$ and $\omega=1$. Point (E) is confirmed by the bottom-right panel, which shows that $|\lambda(\varepsilon,\gamma)|<|\lambda_2|$, and thus $\min_{\varepsilon}\rho(T(\varepsilon))=|\lambda_3|$, for some values of $\varepsilon$. \begin{figure}[t] \centering \includegraphics[scale=0.32]{FigA-eps-converted-to.pdf} \includegraphics[scale=0.32]{FigB-eps-converted-to.pdf} \includegraphics[scale=0.32]{FigC-eps-converted-to.pdf} \includegraphics[scale=0.32]{FigD-eps-converted-to.pdf} \caption{Behavior of $|\lambda(\varepsilon,\gamma)|$ and $\rho(T(\varepsilon))$ as functions of $\varepsilon$ for different $c$ and $\gamma$.}\label{Ciaramella_mini_10_eq:validate_thm} \end{figure} We have shown both theoretically and numerically that the spectral coarse space is not necessarily the one-dimensional coarse space minimizing $\rho(T)$. Now, we wish to go beyond this one-dimensional analysis and optimize the entire coarse space $V_c$ (keeping its dimension $m$ fixed).
This is equivalent to optimizing the prolongation operator $P$ whose columns span $V_c$. Thus, we consider the optimization problem \begin{equation}\label{Ciaramella_mini_10_eq:optimization_problem} \min_{P \in\mathbb{R}^{n\times m}} \rho(T(P)). \end{equation} To approximately solve \eqref{Ciaramella_mini_10_eq:optimization_problem}, we follow the approach proposed in \cite{katrutsa2017deep}. Due to the Gelfand formula $\rho(T)=\lim_{k\rightarrow \infty} \sqrt[k]{\|T^k\|_F}$, we replace \eqref{Ciaramella_mini_10_eq:optimization_problem} with the simpler optimization problem $\min_{P} \|T(P)^k\|^2_F$ for some positive integer $k$. Here, $\|\cdot\|_F$ is the Frobenius norm. We then consider the unbiased stochastic estimator \cite{hutchinson1989stochastic} \[\|T^k\|^2_F=\text{trace}\left((T^k)^\top T^k\right)=\mathbb{E}_{\mathbf{z}}\left[ \mathbf{z}^\top (T^k)^\top T^k \mathbf{z}\right]=\mathbb{E}_{\mathbf{z}}\left[ \|T^k \mathbf{z}\|^2_2\right] ,\] where $\mathbf{z}\in\mathbb{R}^n$ is a random vector with Rademacher distribution, i.e. $\mathbb{P}(\mathbf{z}_i=\pm 1)=1/2$. Finally, we rely on a sample average approach, replacing the unbiased stochastic estimator with its empirical mean such that \eqref{Ciaramella_mini_10_eq:optimization_problem} is approximated by \begin{equation}\label{Ciaramella_mini_10_eq:optimization_problem_emp} \min_{P \in\mathbb{R}^{n\times m}} \frac{1}{N}\sum_{i=1}^N\|T(P)^k \mathbf{z}_i\|^2_2, \end{equation} where the $\mathbf{z}_i$ are independent Rademacher-distributed random vectors. The action of $T$ onto the vectors $\mathbf{z}_i$ can be interpreted as the feed-forward process of a neural net, where each layer represents one specific step of the two-level method, that is the smoothing step, the residual computation, the coarse correction and the prolongation/restriction operations.
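The Hutchinson estimator above can be tested in isolation. A minimal sketch (the matrix is a random stand-in for $T$, and the sample size is chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 30, 3, 20000
T = rng.standard_normal((n, n)) / np.sqrt(n)       # stand-in for the error operator
Tk = np.linalg.matrix_power(T, k)

exact = np.linalg.norm(Tk, 'fro')**2               # ||T^k||_F^2
Z = rng.choice([-1.0, 1.0], size=(n, N))           # Rademacher vectors z_i
estimate = np.mean(np.sum((Tk @ Z)**2, axis=0))    # empirical mean of ||T^k z_i||_2^2
print(exact, estimate)                             # estimate approaches exact as N grows
```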
In our setting, the weights of most layers are fixed and given, and the optimization is performed only on the weights of the layer representing the prolongation step. The restriction layer is constrained to have as weights the transpose of the weights of the prolongation layer. We solve \eqref{Ciaramella_mini_10_eq:optimization_problem_emp} for $k=10$ and $N=n$ using TensorFlow \cite{tensorflow2015-whitepaper} and its stochastic gradient descent algorithm with learning rate 0.1. The weights of the prolongation layer are initialized with a uniform distribution. Table \ref{Ciaramella_mini_10_eq:Tab} reports both $\rho(T(P))$ and $\|T(P)\|_A$ using a spectral coarse space and the coarse space obtained solving \eqref{Ciaramella_mini_10_eq:optimization_problem_emp}. \begin{table}[t] \centering \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{5pt} \begin{tabular}{ l |c c | c c c c} & $c$ & $\omega$ & $m=1$ & $m =5$ & $m=10$ & $m=15$\\ \hline \parbox[b][-24pt][c]{8pt}{\rotatebox[origin=c]{90}{$\rho(T)$}} & 0 & 1/2 & 0.95 - 0.95 & 0.90 - 0.90 & 0.82 - 0.83 & 0.76 - 0.78 \\ & 0 & 1 & 0.95 - 0.90 & 0.90 - 0.80 & 0.80 - 0.65 & 0.74 - 0.53 \\ & 10 & 1/2 & 0.90 - 0.90 & 0.85 - 0.82 & 0.79 - 0.74 & 0.73 - 0.68 \\ & 10 & 1 & 0.85 - 0.80 & 0.80 - 0.67 & 0.71 - 0.55 & 0.66 - 0.37 \\ \hline \parbox[b][-7pt][c]{8pt}{\rotatebox[origin=c]{90}{$\|T\|_{A}$}} & 0 & 1/2 & 0.95 - 0.95 & 0.90 - 0.90 & 0.82 - 0.84 & 0.76 - 0.77 \\ & 0 & 1 & 0.95 - 0.95 & 0.90 - 0.94 & 0.80 - 0.88 & 0.74 - 0.88 \\ \hline \parbox[b][-8pt][c]{8pt}{\rotatebox[origin=c]{90}{$\kappa_2$}} & 0 & 1 & 46.91 - 29.45 & 18.48 - 14.40 & 9.37 - 8.22 & 6.69 - 8.53 \\ & 10 & 1 & 27.25 - 23.98 & 22.44 - 12.36 & 17.34 - 11.35 & 13.06 - 9.71 \\ \end{tabular} \caption{Values of $\rho(T)$, $\|T\|_{A}$ and condition number $\kappa_2$ of the matrix $A$ preconditioned by the two-level method for different $c$ and $\omega$ and using either a spectral coarse space (left number), or the coarse space obtained solving
\eqref{Ciaramella_mini_10_eq:optimization_problem_emp} (right number).}\label{Ciaramella_mini_10_eq:Tab} \end{table} We can clearly see that there exist coarse spaces, hence matrices $P$, corresponding to values of the asymptotic convergence factor $\rho(T(P))$ much smaller than the ones obtained by spectral coarse spaces. Hence, Table \ref{Ciaramella_mini_10_eq:Tab} confirms that a spectral coarse space of dimension $m$ is not necessarily a (global) minimizer for $\min\limits_{P \in \mathbb{R}^{n\times m}}\rho(T(P))$. This can be observed not only in the case $c=0$, for which the result of \cite[Theorem 5.5]{xu_zikatanov_2017} states that (recall that $M$ is symmetric) the spectral coarse space minimizes $\|T(P)\|_A$, but also for $c> 0$, which corresponds to a nonsymmetric $A$. Interestingly, the coarse spaces obtained by our numerical optimizations lead to preconditioned matrices with better condition numbers, as shown in the last row of Table \ref{Ciaramella_mini_10_eq:Tab}, where the condition number $\kappa_2$ of the matrix $A$ preconditioned by the two-level method (and different coarse spaces) is reported. \bibliographystyle{plain}
\section{Introduction} \label{sec:introduction} Routing of multiple vehicles is an important and difficult problem with applications in the logistics domain~\cite{schmid2013rich}, especially in the area of customer servicing~\cite{Flatberg2007}. In postal services, after-sales services, and in business-to-business delivery or pick-up services, one or more vehicles have to be efficiently routed towards customers. If customers can request services over time, the problem becomes dynamic: besides a set of fixed customers, new requests can appear at any point in time. Of course, it is desirable that as many customers as possible are serviced while the tour of any vehicle is kept short. However, it is usually infeasible (due to human resources, labor regulations, or other constraints) to service all customer requests. And clearly, the fewer customers are left unserviced, the longer the tours become. Thus, the problem is inherently multi-objective. Any efficient solution (smallest maximum tour across all vehicles) is a compromise between the desire to service as many customers as possible (e.g. maximize revenue) and the necessity to keep vehicle routes short (minimize costs). At the same time, the dynamic appearance of new customer requests may significantly change the scenario over and over again: new but ignored requests negatively contribute to the objective of visiting as many customers as possible, while the inclusion of new customers (usually) increases tour length and thus changes the compromises on which a selection of a route was originally made by a decision maker (DM). This dynamic problem has been studied by Bossek et al.~\cite{BGMRT2019BiObjective} for the special case of a single vehicle which answers all requests and travels (in an open tour) from a start to an end depot.
The authors devised a dynamic evolutionary approach based on an interactive algorithmic framework that incorporated an evolutionary multi-objective optimization algorithm (EMOA) applied in eras and repeated decision making. However, the applied EMOA is rather unrealistically based on the assumption that only one vehicle is available. In this work, we will reuse the framework proposed by Bossek et al.~\cite{BGMRT2019BiObjective} but replace the internal EMOA~\cite{BGMRT2018} by an adapted algorithm that is capable of considering multiple vehicles. The inclusion of multiple vehicles changes the problem (and thus the algorithm) considerably: Instead of a single tour (single open TSP), as many tours as considered vehicles have to be optimized simultaneously. This implies changes in problem encoding, in information transfer between generations, and in variation operators. At the same time, the number of vehicles is explicitly not considered as additional objective. To keep the scenario realistic, the number of vehicles can neither be changed during the optimization process nor in each era. A dynamic change in the number of used vehicles during the process would require highly flexible (and thus costly) human resources and is therefore usually infeasible for a company.\footnote{Visits at single or few customers (including direct travel from and return to a depot) would immediately contribute high costs to the total tour costs. } Additionally, a third objective would turn the originally bi-objective problem into a more complex-to-handle decision scenario for the DM. The goal and contribution of this work is twofold: \begin{enumerate} \item The dynamic multi-objective vehicle routing problem (MO-VRP) and the dynamic solution approach are extended towards a more realistic scenario by including multiple vehicles. We introduce a significantly changed algorithm within the interactive framework proposed in~\cite{BGMRT2019BiObjective}.
\item We analyze the benefit of multiple vehicles in dynamic vehicle routing and compare our approach to an (extended) version of an a-posteriori evolutionary solution approach for this problem~\cite{GMT+2015}. This approach unrealistically knows of all service requests in advance (clairvoyant) and thus needs no dynamic decision making during optimization. Further, we compare the multi-vehicle approach to the dynamic approach by Bossek et al.~\cite{BGMRT2019BiObjective} and investigate the activities of vehicles. The individual activities of vehicles provide information on whether each vehicle contributes to the solution or whether some vehicles stay idle. This evaluation can ultimately justify our decision not to include the number of vehicles as third objective. \end{enumerate} The work is structured as follows: the next section briefly reviews related work, while Section~\ref{sec:probform} formally introduces the dynamic multi-objective problem as described by Bossek et al.~\cite{BGMRT2019BiObjective}. Section~\ref{sec:demoa} then details the algorithmic extensions. The experimental setup as well as empirical results are described and discussed in Sections~\ref{sec:experiments} and \ref{sec:experimental_results}. Section~\ref{sec:conclusion} finally concludes the work and highlights perspectives for future research. \section{Related work} \label{sec:related} As the traveling salesperson problem (TSP) is a major sub-problem of the here considered dynamic and multi-objective vehicle routing problem, this paper is naturally related to work on special TSPs, where not all customer locations (or cities) have to be visited. In research, these problems are sometimes referred to as orienteering problems~\cite{GLV87}, selective TSP~\cite{GLS98a,LM90}, or as TSP with profits~\cite{FDG05}.
However, most of these problems are discussed as single-objective problems~\cite{DMV95}, although some early work already recognized the (at least) bi-objective character of these problems~\cite{KG88}. Only later work started to solve the orienteering problem in a bi-objective way using an $\epsilon$-constraint approach~\cite{BGP09} or approximation schemes~\cite{FS13} that produce Pareto-$\varepsilon$-approximations of the efficient solution set. While both aforementioned approaches are based on repeated single-objective optimization, some authors \cite{Ombuki2006,JGL08,tan2006,kang2018enhanced,wang2018bi} explicitly solve the bi-objective variants of the orienteering problem using an evolutionary algorithm, however, excluding service requests over time or considering the problem as a-posteriori (non-dynamic). Many of these approaches~\cite{Ombuki2006,tan2006,kang2018enhanced} introduce the number of vehicles as an objective to be minimized while simultaneously minimizing the tour length. A related a-posteriori variant of the here considered dynamic problem is described by Grimme et al.~\cite{GMT+2015}, who propose an NSGA-II-based EMOA. This work has been extended later on by the integration of local search mechanisms~\cite{MGB2015} and the analysis of local search effects~\cite{BGMRT2018}. These works only allow one vehicle (as also described in \cite{GLV87,VSV11}) but include the number of visited customers (revenue) as a second objective besides tour length (costs). While considering only one vehicle seems to be unrealistic, the inclusion of the number of vehicles as objective is only feasible in the a-posteriori and non-dynamic case. When problem instances change constantly due to customer requests (related examples from logistics and other domains can be found in \cite{GFK17,pinedo2012scheduling,MPGL06}), decision making is also a repetitive process over time.
Over time, however, decisions are constantly renewed, building on past decisions which of course cannot be changed. In vehicle routing, one or more vehicles start at a depot and travel initially decided tours. Later on, new decisions have to take into account the current location as well as newly received or not yet serviced customer requests~\cite{BGMRT2019BiObjective}. Rewinding of previous decisions (i.e. visits of customers) is impossible. As such, the initial decision for a fixed number of vehicles could only be changed by sending vehicles home or activating additional ones. This, however, causes additional traveling costs and contradicts (in the real world) human resources' availability or labor regulations. Thus, it is most realistic not to consider the number of vehicles as an additional objective in the dynamic case. In general, dynamic vehicle routing is usually addressed by designing online decision rules, see e.g.~\cite{Pillac2013,Meisel2011}. According to Braekers et al.~\cite{Braekers2016}, only little work is available on dynamic multi-objective problems. In their survey they mention authors who consider dynamics in service time windows and changing structures of the network~\cite{Wen2010,Lorini2011,Khouadjia2012,Hong2012,Barkaoui:2013}. \section{Problem notation} \label{sec:probform} The here considered dynamic multi-objective VRP can be denoted as follows: we consider a set of customer locations, which can be partitioned into three disjoint sets, $C = M \cup D \cup \{N-1, N\}$. The subset $M$ contains all \underline{m}andatory customer locations that are initially known and have to be visited, while the subset $D$ contains all \underline{d}ynamic customers that appear over time and are not known to the algorithm beforehand. The third subset $\{N-1,N\}$ denotes the locations of the start and the end depot. Note that we consider the more general case here, in which start and end depot can be different.
The more common special case of start and end depot being at the same location is of course included. We consider two objectives in a minimization problem. The first objective aims for the minimization of the maximum tour length for all vehicles. Let $n_v$ be the number of vehicles and $x$ a solution to the problem. Then we denote the tour length for each vehicle $i\in\{1,\dots,n_v\}$ as $L_i(x)$ and determine $f_1(x)=\max_i L_i(x)$. By using the maximum tour length as objective, we expect a balanced usage of vehicles in any solution. The second objective $f_2$ minimizes the number of unserved dynamic customers\footnote{Note that we consider the number of unserved dynamic customers to realize a minimization of all objectives. Clearly, the second objective is equivalent to the maximization of served dynamic customers.}. Clearly, the objectives are in conflict and we need to adopt the notion of Pareto-optimality and dominance to describe compromise solutions for the resulting multi-objective optimization problem. For two solutions $x$ and $y$, we denote $x \prec y$ ($x$ dominates $y$) if $x$ is not worse in any objective and better in at least one objective than $y$. The set of all non-dominated solutions in search space is called \emph{Pareto set}; its image in objective space is called \emph{Pareto front}~\cite{Coello2006}. As we consider a dynamic problem, anytime a dynamic customer requests service, the Pareto set would have to be recomputed for the still unvisited mandatory and dynamic customers and a desired solution needs to be selected. As this is usually infeasible in practice, we discretize time and define $\neras$ intervals of length $\Delta \in \mathbb{R}_{\ge 0}$ called \emph{eras} that partition dynamic requests and subsequent decision making into phases~\cite{RY13}. At the onset $t = (j-1)\cdot \Delta$ of each era $j$, new dynamic customer requests may have appeared.
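The dominance relation defined above translates directly into code. A minimal sketch for the minimization setting (the function name and the toy objective vectors are illustrative):

```python
def dominates(x, y):
    """Return True if objective vector x dominates y (minimization):
    x is no worse than y in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

# objective vectors (f1: max tour length, f2: unserved dynamic customers)
print(dominates((10.0, 2), (12.0, 2)))   # True: shorter tours, same f2
print(dominates((10.0, 3), (12.0, 2)))   # False: incomparable trade-off
```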
Based on this set and the remaining (not yet visited) mandatory customers in $M$, we can consider the problem as a static multi-objective optimization problem (MOP) and apply an EMOA to approximate the current Pareto set. Then, a decision maker (DM) is provided with the compromises and allowed to pick a solution (i.e. a set of $n_v$ tours) which will be realized until the onset of the next era. Note that in each era $j > 0$ the vehicles have started to realize their tours and possibly have already visited mandatory and/or dynamic customers. Naturally, already realized parts of earlier picked solutions are not reversible anymore. Thus, decisions made in earlier eras may have significant influence on later solutions. We address this challenge by introducing an automated decision making process as proposed in~\cite{BGMRT2019BiObjective} and evaluating different configurations and decision chains later on. \section{A Dynamic Multi-Objective Evolutionary Algorithm} \label{sec:demoa} Next we dive into the working principles and algorithmic details of the proposed DEMOA. The algorithm is a natural extension of the DEMOA proposed in \cite{BGMRT2019} for the single-vehicle version of the considered bi-objective problem. The algorithmic steps are outlined in Alg.~\ref{alg:demoa}. The algorithm requires the following parameters: the problem instance comprising the subsets $M$ of mandatory and $D$ of dynamic customers. Further parameters control the number and the length of eras ($\neras$ and $\Delta$), the number of vehicles $n_v$ and EA-specific arguments like the population size $\mu$ and parameters controlling the strength of mutation (details discussed later). We now describe the DEMOA procedure in general and discuss implementation details (initialization and variation) subsequently. \begin{algorithm}[htb] \caption{DEMOA} \label{alg:demoa} \begin{algorithmic}[1] \Require{\textbf{a)} Customer sets $M$, $D$, \textbf{b)} nr.
of eras $\neras$, \newline{}\textbf{c)} era length $\Delta$, \textbf{d)} nr. of vehicles $n_v$, \textbf{e)} population size $\mu$, \newline{}\textbf{f)} prob. to swap $p_\text{swap}$, \textbf{g)} nr. of swaps $n_\text{swap}$} \State $t \gets 0$ \Comment{current time} \State $P \gets \emptyset$ \Comment{population (initialized below)} \State $T \gets$ list of tours \Comment{empty at the beginning of 1st era} \For{$i$ $\gets$ 1 to $\neras$} \Comment{era loop} \label{algline:eraloop} \State $T^{\leq t} \gets$ list of $n_v$ partial tours already driven by vehicles at time $t$ extracted from list $T$ \Comment{empty in 1st era} \State $P \gets \Call{initialize}{\mu, T^{\leq t}, t, P}$ \Comment{see Alg.~\ref{alg:initialize}; pass last population of previous era as template}\label{algline:call_initialize} \While{stopping condition not met} \Comment{EMOA loop} \State $Q \gets \{\Call{mutate}{x, T^{\leq t}, p_\text{swap}, n_\text{swap}} \,|\, x \in P\}$ \Comment{Alg.~\ref{alg:mutate}} \State $Q \gets \{\Call{localsearch}{x} \, | \, x \in Q\}$ \State $P \gets \Call{select}{Q \cup P}$ \Comment{NSGA-II survival-selection} \EndWhile \State $T \gets \Call{choose}{P}$ \Comment{DM makes choice $\leadsto$ list of $n_v$ tours} \State $t \gets t + \Delta$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \caption{INITIALIZE} \label{alg:initialize} \begin{algorithmic}[1] \Require{\textbf{a)} pop.
size $\mu$, \textbf{b)} initial tours $T^{\leq t}$, \textbf{c)} time $t$,\newline{} \textbf{d)} template population $P$} \State $Q \gets \emptyset$ \For{$j \gets 1$ to $\mu$} \If{$P$ is empty} \Comment{1st era; no template given} \State $x.v_i \gets$ random vehicle from $\{1, \ldots, n_v\}$ for all $i \in C$ \State $x.a_i \gets 1$ for all $i \in M$ \State $x.a_i \gets 0$ for all $i \in D$ \State $x.p \gets$ random permutation of $C = M \cup D$ \Else \Comment{repair template} \State $x \gets P_j \in P$ \State $x.a_i \gets 1$ for all $i \in T^{\leq t}$ \State $x.v_i \gets $ vehicle nr. assigned to $i$ in $T^{\leq t}$ \State In $x.p$ move the sub-sequence of the driven tour in $T^{\leq t}$ to the beginning for each vehicle. \State $x.a_i \gets 1$ for each $i$ in random subset of $\CD^{\text{new}}$ \EndIf \State $Q \gets Q \cup \{x\}$ \EndFor \Return{Q} \end{algorithmic} \end{algorithm} \subsection{General (D)EMOA} Initialization steps (Alg.~\ref{alg:demoa}, lines 1-3) consist of declaring a population $P$ and a list $T$ where $T_v$ contains the tour of the corresponding vehicle $v \in \{1, \ldots, n_v\}$, i.e., $T$ stores the solution the decision maker picked at the end of the previous era. Before the first era begins, these tours are naturally empty since no planning was conducted at all. Line \ref{algline:eraloop} iterates over the eras. Here, the actual optimization process starts. The first essential step in each era is -- given the passed time $t$ -- to determine for each vehicle $v \in \{1, \ldots, n_v\}$ the initial tour already realized by vehicle $v$. This information is extracted from the list of tours $T$ and stored in the list $T^{\leq t}$. Note that again, in the first era the initial tours are empty, as is $T$, since the vehicles are all located at the start depot. Next, the population is initialized in line \ref{algline:call_initialize} and a static EMOA runs in lines $7-10$.
Here, offspring solutions are generated by mutation followed by a sophisticated genetic local search procedure with the aim of reducing the tour lengths of solutions. Finally, following a $(\mu + \lambda)$-strategy the population is updated. Therein, the algorithm relies on the survival selection mechanism of the NSGA-II algorithm~\cite{Deb02}. Once a stopping condition is met, e.g., a maximum number of generations is reached, the era ends and the final solution set is presented to a decision maker who needs to choose exactly one solution~(Alg.~\ref{alg:demoa}, line~11). This choice (the list $T$ of $n_v$ tours) determines the (further) order of customers to be visited by the respective service vehicles. \begin{algorithm}[ht] \caption{MUTATE} \label{alg:mutate} \begin{algorithmic}[1] \Require{\textbf{a)} individual $x$, \textbf{b)} initial tours $T^{\leq t}$, \textbf{c)} swap prob. $p_\text{swap}$, \textbf{d)} $n_\text{swap}$} \State $D_{av} \gets$ dyn. customers available at time $t$ and \underline{not in} $T^{\leq t}$ \State $C_{av} \gets$ all customers \underline{av}ailable at time $t$ and \underline{not in} $T^{\leq t}$ \State $p_a \gets 1 / |D_{av}|$ \State $p_v \gets 1 / |C_{av}|$ \State Flip $x.a_i$ with prob. $p_a$ for all $i \in D_{av}$ \State Change vehicle $x.v_i$ with prob. $p_v$ for all $i \in C_{av}$ \If{random number in $(0, 1) \leq p_\text{swap}$} \State Swap $n_\text{swap}$ pairs of nodes from $C_{av}$ in $x.p$ \EndIf \Return{x} \end{algorithmic} \end{algorithm} \subsection{Initialization} We strongly advise the reader to consult Alg.~\ref{alg:initialize} and in particular Fig.~\ref{fig:encoding} in the course of reading the following explanations for visual support. Each individual is built of three vectors $x.v$, $x.a$ and $x.p$ of length $N-2$, each of which stores information on the \underline{v}ehicles assigned, the \underline{a}ctivation status of each customer and a \underline{p}ermutation of all customers.
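To make the encoding concrete, the following sketch (Python as a stand-in for the paper's R implementation; the dict-based layout and function name are illustrative) reproduces the individual from Fig.~\ref{fig:encoding} and extracts the per-vehicle tours whose lengths $L_v(x)$ are evaluated during fitness computation:

```python
def tour_of(x, v):
    """Sub-sequence of the permutation x['p'] restricted to active customers
    assigned to vehicle v -- the tour whose length L_v(x) is evaluated."""
    return [i for i in x['p'] if x['a'][i] == 1 and x['v'][i] == v]

# individual from Fig. encoding (customers 1..6; the two depots are implicit)
x = {'v': {1: 1, 2: 2, 3: 1, 4: 1, 5: 1, 6: 2},   # vehicle assignment
     'a': {1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 1},   # activation status
     'p': [4, 1, 6, 5, 3, 2]}                      # permutation of customers

print(tour_of(x, 1))  # [1, 5, 3]  (= T_1 in the figure)
print(tour_of(x, 2))  # [6]        (= T_2 in the figure)
```

Inactive customers (here 2 and 4) are skipped during extraction, which is why deactivation alone suffices to drop a customer from a tour without touching the permutation.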
In the first era there are no already visited customers, i.e. both $P$ and $T^{\leq t}$ are empty, and the algorithm does not need to take these into account. Hence, in lines 4-7 each individual $x \in P$ is created from scratch as follows: each customer $i \in (M \cup D)$ is assigned a vehicle $v \in \{1, \ldots, n_v\}$ uniformly at random. This information is stored in the vector $x.v$. Next, all mandatory customers $i \in M$ are activated by setting the value of a binary string $x.a \in \{0, 1\}^{N-2}$ to 1 which means \enquote{active}. In contrast, all dynamic customers $i \in D$ are deactivated ($x.a_i = 0$) since they did not ask for service so far. The final step is to store a random permutation of all customers in the permutation vector $x.p$. Note that during fitness function evaluation in order to calculate the individual tour lengths $L_v(x)$ for each vehicle $v$ only the sub-sequence of positions $i \in \{1, ..., N-2\}$ in $x.p$ is considered for which $x.a_i = 1$ and $x.v_i = v$ holds. Initialization in later eras is different and more complex. Now, the parameter $P$ passed to Alg.~\ref{alg:initialize} -- the final population of the previous era -- is non-empty and its solutions serve as templates for the new population. We aim to transfer as much information as possible. However, usually the majority of individuals $x \in P$ is in need of repair. This is because as time advances (note that additional $\Delta$ time units have passed) further customers $i \in C$ may already have been visited by the vehicle fleet, but it is possible that some of these are inactive in $x$ (i.e., $x.a_i = 0$). Hence, (a) all customers which have already been visited, stored in the list $T^{\leq t}$, are activated and assigned to the responsible vehicle (here we use an overloaded element-of relation on lists in the pseudo-codes for convenience) and (b) furthermore their order in the permutation string $x.p$ is repaired. 
The latter step is achieved by moving the sub-sequence of visited customers before the remaining customers assigned to the corresponding vehicle in the permutation string. This step completes the repair procedure and the resulting individual is guaranteed to be feasible. Subsequent steps involve randomly activating customers that asked for service within the last $\Delta$ time units. \begin{figure}[htb] \centering \scalebox{0.92}{ \begin{tikzpicture}[scale=1,label distance=-6mm] \begin{scope}[every node/.style={draw=white}] \node (enc) at (0, 0) { \begin{tabular}{lcccccc} \multicolumn{7}{l}{\textbf{Customer sets}} \\ \multicolumn{7}{l}{$M = \{1,5,6\}, D = \{2, 3, 4\}$} \\ \multicolumn{7}{l}{$\CD^{\leq t} = \{3,4\}$} \\ \multicolumn{7}{l}{} \\ \multicolumn{7}{l}{\textbf{(Partial) tours}} \\ \multicolumn{7}{l}{$T_1 = (1,5,3), T_2 = (6)$} \\ \multicolumn{7}{l}{$T^{\leq t}_1 = (1,5), T^{\leq t}_2 = (6)$} \\ \multicolumn{7}{l}{} \\ \multicolumn{7}{l}{\textbf{Encoding of solution $x$}} \\ $i$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \midrule $x.v$ & 1 & 2 & 1 & 1 & 1 & 2 \\ $x.a$ & 1 & 0 & 1 & 0 & 1 & 1 \\ $x.p$ & 4 & 1 & 6 & 5 & 3 & 2 \\ \end{tabular} }; \end{scope} \begin{scope}[every node/.style={circle, draw=black, inner sep=4pt}] \node[fill=black!80, label={[yshift=-1.2cm]start depot}] (depot1) at (3.2, -1.5) {\textcolor{white}{$7$}}; \node[fill=gray!20, above = 1cm of depot1] (v4) {$4$}; \node[right = 1cm of v4] (v5) {$5$}; \node[right = 1cm of v5] (v6) {$6$}; \node[above = 1cm of v4] (v1) {$1$}; \node[fill=gray!20, opacity = 0.2, above = 1cm of v5] (v2) {$2$}; \node[fill=gray!20, above = 1cm of v6] (v3) {$3$}; \node[fill=black!80, below = 1cm of v6, label={[yshift=-1.2cm]end depot}] (depot2) {\textcolor{white}{$8$}}; \draw (depot1) edge[-latex, ultra thick, bend left=25] (v1); \draw (v1) edge[-latex, ultra thick] (v5); \draw (v5) edge[-latex] (v3); \draw (v3) edge[-latex, bend left=25] (depot2); \draw (depot1) edge[-latex, dashed, ultra thick, bend right=25] (v6); \draw (v6) 
edge[-latex, dashed] (depot2); \end{scope} \end{tikzpicture} } \caption{Illustration of the encoding of an individual $x$. Here, customers $i \in \{1, 3, 5, 6\}$ are active ($x.a_i = 1$) while customers $i \in \{2, 4\}$ are inactive ($x.a_i = 0$); customer 4, however, has already asked for service since $4 \in \CD^{\leq t}$. In contrast, customer $2$ did not ask for service so far (illustrated by reduced opacity in the plot). The vehicles already visited a subset of customers (illustrated with thick edges): vehicle 1 serviced customers 1 and 5 (thus $T^{\leq t}_1 = (1, 5)$) while customer 6 was visited by vehicle 2 (thus $T^{\leq t}_2 = (6)$).} \label{fig:encoding} \end{figure} \subsection{Offspring generation} The mutation operator (see Alg.~\ref{alg:mutate}) is designed to address all three combinatorial aspects of the underlying problem, i.e., vehicle re-assignment, customer (de)activation and tour permutation. Here, special attention has to be paid to not producing infeasible individuals. Therefore, mutation operates on the subset of customers which have asked for service until now and have not yet been visited. More precisely, each dynamic customer $i \in D_{av} = (\CD^{\leq t} \setminus T^{\leq t})$ which has not yet been visited is (de)activated with a small probability $p_a$ (note that we treat the list $T^{\leq t}$ as a set here for convenience). Likewise, each of the customers $i \in C_{av} = (\CA^{\leq t} \setminus T^{\leq t})$ is independently assigned another vehicle with equal probability $p_v$. The mutation probabilities $p_a$ and $p_v$ are set dynamically such that in expectation only one (de)activation or (re)assignment happens; small changes are preferred. Finally, with probability $p_\text{swap}$ the permutation vector $x.p$ undergoes $n_\text{swap}$ sequential exchange/swap operations (limited to customers which are not fixed so far).
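A minimal Python sketch of this mutation step (mirroring Alg.~\ref{alg:mutate}; the dict encoding, the explicit `n_v` argument and the `rng` parameter are illustrative choices not present in the pseudocode):

```python
import random

def mutate(x, D_av, C_av, n_v, p_swap, n_swap, rng=random):
    """(De)activate dynamic customers, reassign vehicles and swap free
    permutation positions, in the spirit of Alg. MUTATE."""
    p_a = 1.0 / len(D_av)   # in expectation: one (de)activation
    p_v = 1.0 / len(C_av)   # in expectation: one vehicle reassignment
    for i in D_av:
        if rng.random() < p_a:
            x['a'][i] = 1 - x['a'][i]
    for i in C_av:
        if rng.random() < p_v:
            # assign another vehicle, chosen among the remaining ones
            x['v'][i] = rng.choice([w for w in range(1, n_v + 1)
                                    if w != x['v'][i]])
    if rng.random() <= p_swap:
        # swap only positions of customers that are not fixed yet
        free = [k for k, c in enumerate(x['p']) if c in C_av]
        for _ in range(n_swap):
            j, k = rng.sample(free, 2)
            x['p'][j], x['p'][k] = x['p'][k], x['p'][j]
    return x
```

Restricting swaps to positions of customers in $C_{av}$ guarantees that already realized tour prefixes stay untouched, so mutation can never produce an infeasible individual.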
At certain generations, a local search (LS) procedure is applied to each individual $x \in P$ (see line~9 in Alg.~\ref{alg:demoa}). The LS takes the vehicle mapping $x.v$ and customer activation $x.a$ as fixed and aims to improve the individual path length by means of the sophisticated solver EAX~\cite{nagata_powerful_2013} for the Traveling-Salesperson-Problem (TSP). To accomplish this goal, given a solution $x \in P$ and a vehicle $v \in \{1, \ldots, n_v\}$, all customers assigned to $v$ in $x$ (appending start and end depot) and their pairwise distances are extracted (nodes 1, 3, 5, 7, 8 for vehicle~1 in Fig.~\ref{fig:encoding}). Next, since the optimization of each vehicle tour is a Hamilton-Path-Problem (HPP) on the assigned customers (no round-trip tour), a sequence of distance matrix transformations is necessary such that the TSP solver EAX can be used to find an approximate solution to the HPP~(see \cite{jonker:transforming} for details). Again, since partial tours might already have been realized, the HPP-optimization starts at the last node visited by the corresponding vehicle (node 5 in Fig.~\ref{fig:encoding} for vehicle~1, since $T^{\leq t}_1 = (1,5)$ is fixed already and not subject to changes). \section{Experimental Methodology} \label{sec:experiments} Our benchmark set consists of 50 instances in total, each with $n=100$ customers, taken from \cite{MGB2015}. There are 10 instances with points spread uniformly at random in the Euclidean plane and 10 clustered instances for each number of clusters $n_c \in \{2, 3, 5, 10\}$. The cluster centers are placed by space-filling Latin-Hypercube-Sampling to ensure good spread. Subsequently, $\lfloor n/n_c\rfloor$ nodes are placed around each cluster center, ensuring cluster segregation with no overlap. We refer the reader to~\cite{MGB2015} for more details on the instance generation process.
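For intuition, a toy generator in this spirit could look as follows (Python; illustrative only and not the exact procedure of \cite{MGB2015} -- the function name, the Gaussian cluster spread and the unit square are assumptions):

```python
import random

def clustered_instance(n=100, n_c=5, spread=0.03, seed=1):
    """Toy sketch: n_c cluster centers from a 2-D Latin-Hypercube sample,
    floor(n / n_c) customers scattered around each center in the unit square."""
    rng = random.Random(seed)
    cols = rng.sample(range(n_c), n_c)   # one center per row/column stratum
    centers = [((i + rng.random()) / n_c, (cols[i] + rng.random()) / n_c)
               for i in range(n_c)]
    clip = lambda z: min(max(z, 0.0), 1.0)
    return [(clip(cx + rng.gauss(0, spread)), clip(cy + rng.gauss(0, spread)))
            for cx, cy in centers for _ in range(n // n_c)]
```

The Latin-Hypercube step places exactly one center per row and column stratum of the unit square, which is what yields the good spread mentioned above.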
The proportion of dynamic customers is $50\%$ and $75\%$ for each half of the instance set, enabling the study of highly dynamic scenarios. \begin{table}[htb] \centering \caption{DEMOA parameter settings.} \label{tab:demoa_parameter_settings} \begin{tabular}{rl} \toprule {\bf Parameter} & {\bf Value} \\ \midrule Nr. of function evaluations per era & $65\,000$ \\ Population size $\mu$ & 100 \\ Swap probability $p_\text{swap}$ & $0.6$ \\ Nr. of swaps $n_\text{swap}$ & $10$ \\ Local Search at generations & first, half-time, last \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth, trim=0 6pt 0 0, clip]{images/eraplot.pdf} \caption{Exemplary Pareto-front approximations for two instances with $75\%$ dynamic requests colored by era and split by problem type (columns) for a single vehicle. Solutions selected by the respective decision maker (\underline{\textbf{0.25-strategy}}) are highlighted and labeled with the era number. Dashed horizontal lines represent the maximum number of dynamic requests which can remain unserved in the respective era.} \label{fig:eraplot_examples} \end{figure} \subsection{Dynamic aspects and decision making} We fix $\neras = 7$ eras and set the era-length to $\Delta = \lceil\max_{i \in D} r(i) / \neras\rceil$ where $r(i)$ is the request time of customer $i$. $\Delta$ is consistently $\approx 150$ time units across all instances. Naturally, one could start a new era once a new customer requests service. This would result in 50 and 75 eras respectively on our benchmark instances\footnote{Note that the benchmark set contains instances with $N=100$ customers (\underline{including two depots}) and $\{50\%, 75\%\}$ dynamic customers.}. However, we argue that in a real-world scenario it is more realistic to make decisions after chunks of requests have come in rather than after every single request.
For computational experimentation we automate the decision-making process by considering three different decision maker (DM) strategies. To do so, at the end of each era, we sort the final DEMOA population $P$ in ascending order of the first objective (tour length), i.e. $P_{(1)} \leq P_{(2)} \leq \ldots \leq P_{(\mu)}$ where the $\leq$-relation is with respect to $f_1$. Note that in the bi-objective space this sorting results in a descending order with respect to the number of unvisited dynamic customers, our second objective $(f_2)$. The automatic DM now picks the solution $P_{(k)}$ with $k = \lceil d \cdot \mu \rceil$, $d \in [0, 1]$ where increasing $d$-values correspond to stronger \enquote{customer-greediness}, i.e. higher emphasis on keeping the number of unvisited customers low. In our study we cover $d \in \{0.25, 0.5, 0.75\}$ to account for different levels of greediness and refer to such a policy as a $d$-strategy in the following. Certainly, in real-world scenarios, the DM might change his strategy throughout the day reacting to specific circumstances. However, for a systematic evaluation and to keep our study within feasible ranges, we stick to this subset of decision policies. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth, trim=0 6pt 0 0, clip]{images/eraplot_final_only_by_dmstrategy.pdf} \caption{Exemplary visualization of final DM-decisions (i.e., in last era) for three representative instances with $75\%$ dynamic requests colored by decision maker strategy. The data is split by instance (columns) and number of vehicles used (rows).} \label{fig:eraplot_by_dmstrategy} \end{figure} \subsection{Further parameters} The further parameter settings of the DEMOA stem from preliminary experimentation and are gathered in Table~\ref{tab:demoa_parameter_settings}. 
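The automated $d$-strategy described above amounts to an index selection on the sorted final population; a minimal Python sketch (solutions abstracted to hypothetical objective pairs $(f_1, f_2)$):

```python
import math

def choose(front, d):
    """d-strategy: sort the final population by tour length f1 (ascending)
    and pick the ceil(d * mu)-th solution (1-indexed, as in the text)."""
    front = sorted(front, key=lambda s: s[0])
    k = math.ceil(d * len(front))
    return front[k - 1]

front = [(25, 1), (10, 8), (19, 3), (14, 6)]  # mu = 4 illustrative solutions
print(choose(front, 0.25))  # (10, 8): short tours, many unvisited customers
print(choose(front, 0.75))  # (19, 3): greedier w.r.t. unvisited customers
```

Since the sorted front is descending in $f_2$, larger $d$ indeed trades tour length for fewer unvisited customers.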
For each number of vehicles $n_v \in \{1, 2, 3\}$ and each DM strategy $d \in \{0.25, 0.5, 0.75\}$ we run the DEMOA 30 times on each instance for $\neras$ eras following the $d$-strategy. Moreover, for a baseline comparison, we run the clairvoyant EMOA multiple times for each $n_v$ on each instance with a stopping condition of $20\,000\,000$ function evaluations. The clairvoyant EMOA works the same way the DEMOA does. However, it has complete knowledge of the request times of dynamic customers a-priori and treats the problem as a static problem.\footnote{Consider the clairvoyant EMOA as the proposed DEMOA (see Alg.~\ref{alg:demoa}) run for one era with $t=0$ and all customers available from the beginning accounting for request times in the tour length calculations.} This idea was originally introduced in \cite{GMT+2015} for an a-posteriori evaluation of decision making for the single-vehicle variant of the considered problem. We use the Pareto-front approximations of the clairvoyant EMOA as a baseline for performance comparison. We provide the R implementation of our algorithm in an accompanying repository~\cite{implementation}. \section{Experimental results} \label{sec:experimental_results} In the following, we analyze the proposed dynamic approach (DEMOA) for one to three vehicles with an adapted clairvoyant implementation of the original approach by Bossek et al.~\cite{BGMRT2018}. For a fair comparison, that approach has been extended to deal with multiple vehicles and is denoted as EMOA in the following results. \begin{figure*}[htb] \centering \includegraphics[width=\textwidth, trim=0 6pt 0 0, clip]{images/HV_all.pdf} \caption{Boxplots of hypervolume-indicator (lower is better) for all instances split by problem type and fleet size. Results are shown for the 0.5-strategy for visual clarity, but the omitted results show the same patterns. 
Hypervolume values are calculated by (1) instance-wise calculation of the minimal upper bound for the number of unvisited dynamic customers in the last era across all independent runs of the DEMOA, (2) determining the reference point based on the union of all Pareto-front approximations of DEMOA and EMOA chopped at the bound and (3) calculating the HV-value based on this reference set.} \label{fig:HV_all} \end{figure*} In a first step, we show exemplary results of our dynamic approach to visually introduce the era concept. In continuation of the approach of Bossek et al.~\cite{BGMRT2019BiObjective}, we also briefly investigate the influence of decision making to final decision location, when different greediness preferences are considered. Then, we compare the performance of the EMOA and the proposed DEMOA with respect to three different performance measures. In detail, we investigate the overall performance with respect to problem type and fleet size, the performance with respect to different decision strategies, and the overall performance gain induced by using multiple vehicles. Finally, we zoom into dedicated solution instances (and their dynamic evolution process) to learn about the behavior of vehicles on clustered and uniform problem instances. \subsection{Pareto-front approximations and decisions in the dynamic scenario} \label{sec:exp_dynamic} The dynamic nature of the problem and online decision making imply that we do not have a single point in time at which the algorithm performance can be evaluated. In the beginning only mandatory customers are available and the tour planning task is equivalent to (multi-vehicle and open) TSP solving. However, as dynamic customer requests appear over time, the initially planned tour(s) must be modified to allow compromises between tour length and number of visited customers. From this point on, a Pareto-set of solutions has to be considered. 
Following the era concept of decision making, at a dedicated point in time, a compromise is chosen for realization by a DM. From that time onward, realization starts and vehicles travel the decided tour. Of course, new dynamic requests appear over time. These are considered at the end of the next era and form a set of new compromises. However, that set of compromises has to consider the already realized partial tour of the vehicles, which cannot be reverted. Consequently, already visited (formerly) dynamic customers reduce the upper bound of unserved customers in compromise sets for future decisions. The effect of repeated decision making and continuous realization of decisions is exemplarily shown in Figure~\ref{fig:eraplot_examples}. The non-dominated fronts for decisions in all seven eras are shown for a uniform and a clustered instance, respectively. For visual comparison, the clairvoyant EMOA results are also shown. Horizontal dashed lines denote the upper bound of unserved dynamic customers in each stage of decision making. Clearly, in the last era (7, brown points), more customers have been visited by the traveling vehicles than in the first era. Thus, the upper bound has decreased. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth, trim=0 6pt 0 0, clip]{images/f1measure_all.pdf} \caption{Boxplots of the $f_1$-measure (tour length of DEMOA solution minus tour length of best clairvoyant EMOA solution with the same number of unvisited customers) calculated on basis of all solutions in the last era.} \label{fig:f1measure_all} \end{figure*} The upper bound and the range of possible decisions is also depending on the decision strategy. A greedy strategy, which aims to reduce the number of unserved customers will favor solutions with many served customers and thus influence realization of longer tours. Less greedy strategies will favor realizations with shorter tours and less visited customers. 
This behavior was already observed by Bossek et al.~\cite{BGMRT2019BiObjective} for a single vehicle. In Figure~\ref{fig:eraplot_by_dmstrategy}, we confirm an analogous behavior also for multiple vehicles and our algorithmic approach. Therein, visualizations of the results of different strategies (0.25, 0.5, and 0.75 priority of the second objective) and multiple runs for different topologies (uniform and clustered) as well as for different numbers of vehicles are shown. We find, that decision strategies are reflected in the final decision locations. The less greedy strategy produces solutions with more unserved customers than very greedy strategies - independent of the vehicle number. \subsection{Dynamic and clairvoyant performance and the influence of multiple vehicles} In order to evaluate the approximation quality of the DEMOA, the dynamic nature of the problem has to be respected. From the decision maker's perspective, the last era (and thus the last non-dominated front) includes all previous decisions and can be compared with the clairvoyant results. \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{images/tours_uniform.pdf} \includegraphics[width=\textwidth]{images/tours_clustered.pdf} \caption{Exemplary tours in eras 1, 3 and 7 (final) with $0.75$-strategy for a uniform instance (top rows) and a clustered instance (bottom rows). For each instance we show the tours the DM picked for a scenario of a single vehicle (left-most column), two vehicles (columns 2 and 3) and three columns (remaining three columns). Bold edges represent the irreversible tour parts which already have been realized by the corresponding vehicle (columns) in the respective era (rows). } \label{fig:examplary_tours} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth, trim=0 6pt 0 0, clip]{images/HV_vehicle_comparison.pdf} \caption{Classical dominated hypervolume distributions (higher is better) of representative instances. 
The trend is the same for all 50 benchmark instances.} \label{fig:hv_vehicle_comparison} \end{figure} However, due to the continuously decreasing upper bounds of unserved customers during the optimization process (see~\ref{sec:exp_dynamic}), the EMOA approximation covers a wider range of solutions than the approximated Pareto-front of the final era. This is considered by our comparison of equivalent ranges of the DEMOA and EMOA results. In Figure~\ref{fig:HV_all}, we compare the Hypervolume indicator~\cite{ZDT00} of the DEMOA and EMOA by (1) determining instance-wise the minimal upper bound for the number of unserved customers (objective $f_2$) in the last approximated Pareto-front and for all independent runs of the DEMOA. Then (2), we reduce the solutions of the EMOA to those below the before determined upper bound for $f_2$. From the union of the reduced EMOA results and the DEMOA results, we (3) compute a reference point for Hypervolume computation. Here we compare both algorithms for the medium greedy $0.5$ strategy. We find, that the DEMOA results outperform the EMOA results for uniform problem instances with both $50\%$ and $75\%$ dynamic customers. For clustered instances, performance is in the same range but seemingly depending on the specific instance topology and dynamic customer ratio and service request times. However, we can conclude that the results of the DEMOA for the more realistic dynamic scenario are not necessarily worse than those of the clairvoyant approach. While the clairvoyant EMOA approach knows about the request times of all dynamic customers at $t=0$ and considers all potential customers in compromise generation, the DEMOA optimizes tours of era $i+1$ based on partially realized (and unchangeable) tours from era $i$. This often reduces the size of the tour planning problem significantly and allows to gain very good solutions - some of those even outperform the EMOA solutions in uniform instances. 
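For reference, the dominated hypervolume of a bi-objective minimization front with respect to a reference point can be computed with a simple sweep; a minimal Python sketch (not the implementation used in our experiments):

```python
def hypervolume_2d(front, ref):
    """Area dominated by `front` (list of (f1, f2) pairs to be minimized)
    and bounded by the reference point `ref`: sweep in ascending f1 order
    and accumulate the rectangle each non-dominated point contributes."""
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:          # dominated points add nothing
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

print(hypervolume_2d([(1, 2), (2, 1)], ref=(3, 3)))  # 3.0
```

Points outside the region bounded by the reference point are discarded first, which corresponds to the chopping of fronts at the $f_2$-bound described above.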
Figure~\ref{fig:HV_all} also shows that the aforementioned advantage of problem complexity reduction due to dynamism becomes negligible when both algorithms consider multiple vehicles. With increasing number of vehicles, the EMOA tends to outperform the approximation quality of the DEMOA. While we focused on a single decision strategy in the previous discussion, Figure~\ref{fig:f1measure_all} investigates multiple strategies together with tour length properties. Instead of the hypervolume, we now focus on the tour length objective ($f_1$) and measure the distance of solutions to the clairvoyant solution. The idea behind this measure is that (in the range of the common objective space of DEMOA and EMOA) usually any number of unvisited dynamic customers is covered by the approximated Pareto-front. The convergence quality of a solution set can thus also be expressed by the difference of DEMOA tour length and EMOA tour length. If DEMOA outperforms EMOA, the respective value is negative. The equilibrium of solution quality is denoted by a gray vertical line at $0$ in Figure~\ref{fig:f1measure_all}. While we are able to confirm the general observations from above, we now have detailed insights into the effect of decision strategies. As a clear trend, we find better tour lengths per instance topology and dynamism when greediness in decision making w.r.t. the reduction of the number of unserved customers ($f_2$) is increased. This holds for the single and multiple vehicle case. The reason for this observation is similar to the argument discussed before. The more customers have to be serviced, the more complex the tour planning problem becomes for the clairvoyant EMOA, while more and more customers are fixed in the dynamic scenario and do not have to be considered for tour planning anymore. A dedicated view on the effect of using multiple vehicles in the scenario (online and offline) is provided in Figure~\ref{fig:hv_vehicle_comparison}.
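The $f_1$-measure itself can be sketched in a few lines (Python; fronts abstracted to $(f_1, f_2)$ pairs with hypothetical values, negative results meaning the DEMOA solution is better):

```python
def f1_measure(demoa_front, emoa_front):
    """Per DEMOA solution: its tour length minus the best clairvoyant EMOA
    tour length at the same number of unvisited dynamic customers (f2)."""
    best = {}
    for f1, f2 in emoa_front:
        best[f2] = min(best.get(f2, float('inf')), f1)
    return [f1 - best[f2] for f1, f2 in demoa_front if f2 in best]

print(f1_measure([(12, 3), (20, 1)], [(11, 3), (13, 3), (22, 1)]))  # [1, -2]
```

Only $f_2$-values covered by both fronts enter the comparison, matching the restriction to the common objective space described above.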
We find, for all investigated cluster configurations and the uniform distribution of customers, that multiple vehicles are advantageous regarding classical hypervolume comparison\footnote{Here the reference point is determined for each problem (independent of the number of vehicles and the strategy). The dominated hypervolume is then calculated for each algorithm, DM-strategy and number of vehicles.}; the results are statistically highly significant with respect to Wilcoxon-Mann-Whitney tests at significance level $\alpha = 0.001$ in $100\%$ of the cases, i.e. two vehicles are significantly better than one and three better than two. At the same time, we observe that the highest gain in solution quality is associated with the step from one to two vehicles. By including a third vehicle, only little is gained. This is probably rooted in the overhead associated with the distances travelled from the start depot to the first customer and from the last customer to the end depot. These distances occur for each vehicle and have to be travelled no matter how short the remaining tour becomes. Thus, this overhead naturally limits the number of vehicles that can reasonably be deployed. \subsection{Exemplary analysis of vehicle tours} In this section, we briefly investigate representative examples of generated solutions. Figure~\ref{fig:examplary_tours} details the evolution of tours for a single vehicle as well as two and three vehicles for a uniform and a clustered instance in eras one, three, and seven. In the single vehicle case, we can observe the dynamic adaptation of the planned tour (thin line) towards the realized (and irreversible) tour (bold lines) over time. In the two and three vehicle scenarios, we can nicely observe how the vehicles automatically partition the customer space. It is obvious that no vehicle stays idle. Moreover, we find that each vehicle is assigned a similar workload. This behavior is partly rooted in the design of our algorithm.
As objective $f_1$ minimizes the maximum tour length across all vehicles, selection pressure forces a similar workload onto each vehicle. Future research has to clarify whether this feature, desirable from a real-world application perspective, is always advantageous from the optimization perspective. \section{Conclusion and Outlook} \label{sec:conclusion} In this work, we successfully extended an already high-performing single-vehicle approach to the more realistic dynamic multi-vehicle scenario, and we proposed two measures for comparing DEMOA quality to the performance of the related clairvoyant EMOA variant. We find that the algorithmic enhancements ensure an even distribution of workload (in terms of tour lengths and number of customers served) between the involved vehicles without the necessity of explicitly optimizing for this kind of balance. At the same time, and especially on instances with random uniformly distributed locations, the DEMOA can even outperform the offline EMOA variant with full knowledge of request times. Due to the concurrent realization of planned tours, Pareto-front approximations of the DEMOA's decision eras naturally concentrate on constantly shrinking problem sizes. These reduced problems can then be solved more effectively than the complete (offline) problem. Also, variations of decision makers' preferences and decision chains were investigated. With an increasing degree of \enquote{greediness}, i.e. a stronger focus on minimizing the number of unserved customers, overall tour lengths naturally increase, both in the single- and in the multi-vehicle scenario. The informative visualization approaches presented here may facilitate decision processes along the DEMOA run and offer perspectives for future dynamic, tool-based decision support systems.
\bibliographystyle{unsrt} \subsection{General (D)EMOA} Initialization steps (Alg.~\ref{alg:demoa}, lines 1-3) consist of declaring a population $P$ and a list $T$ where $T_v$ contains the tour of the corresponding vehicle $v \in \{1, \ldots, n_v\}$, i.e., $T$ stores the solution the decision maker picked at the end of the previous era. Before the first era begins these tours are naturally empty since no planning was conducted at all. Line \ref{algline:eraloop} iterates over the eras. Here, the actual optimization process starts. The first essential step in each era is -- given the passed time $t$ -- to determine for each vehicle $v \in \{1, \ldots, n_v\}$ the initial tour already realized by vehicle $v$. This information is extracted from the list of tours $T$ and stored in the list $T^{\leq t}$. Note that again, in the first era the initial tours are empty, as is $T$, since the vehicles are all located at the start depot. Next, the population is initialized in line \ref{algline:call_initialize} and a static EMOA does his job in lines $7-10$. Here, offspring solutions are generated by mutation followed by a sophisticated genetic local search procedure with the aim to reduce the tour lengths of solutions. Finally, following a $(\mu + \lambda)$-strategy the population is updated. Therein, the algorithm relies on the survival selection mechanism of the NSGA-II algorithm~\cite{Deb02}. Once a stopping condition is met, e.g., a maximum number of generations is reached, the era ends and the final solution set is presented to a decision maker who needs to choose exactly one solution~(Alg.~\ref{alg:demoa}, line~11). This choice (the list $T$ of $n_v$ tours) determines the (further) order of customers to be visited by the respective service vehicles. \begin{algorithm}[ht] \caption{MUTATE} \label{alg:mutate} \begin{algorithmic}[1] \Require{\textbf{a)} individual $x$, \textbf{b)} initial tours $T^{\leq t}$, \textbf{c)} swap prob. 
$p_\text{swap}$, \textbf{d)} $n_\text{swap}$} \State $D_{av} \gets$ dyn. customers available at time $t$ and \underline{not in} $T^{\leq t}$ \State $C_{av} \gets$ all customers \underline{av}ailable at time $t$ and \underline{not in} $T^{\leq t}$ \State $p_a \gets 1 / |D_{av}|$ \State $p_v \gets 1 / |C_{av}|$ \State Flip $x.a_i$ with prob. $p_a$ for all $i \in D_{av}$ \State Change vehicle $x.v_i$ with prob. $p_v$ for all $i \in C_{av}$ \If{random number in $(0, 1) \leq p_\text{swap}$} \State Exchange $n_\text{swap}$ times each two nodes from $C_{av}$ in $x.p$ \EndIf \Return{x} \end{algorithmic} \end{algorithm} \subsection{Initialization} We strongly advise the reader to consult Alg.~\ref{alg:initialize} and in particular Fig.~\ref{fig:encoding} in the course of reading the following explanations for visual support. Each individual consists of three vectors $x.v, x.a$ and $x.p$ of length $N-2$ each, which store information on the \underline{v}ehicles assigned, the \underline{a}ctivation status of each customer and a \underline{p}ermutation of all customers. In the first era there are no already visited customers, i.e. both $P$ and $T^{\leq t}$ are empty, and the algorithm does not need to take these into account. Hence, in lines 4-7 each individual $x \in P$ is created from scratch as follows: each customer $i \in (M \cup D)$ is assigned a vehicle $v \in \{1, \ldots, n_v\}$ uniformly at random. This information is stored in the vector $x.v$. Next, all mandatory customers $i \in M$ are activated by setting the value of a binary string $x.a \in \{0, 1\}^{N-2}$ to $1$, which means \enquote{active}. In contrast, all dynamic customers $i \in D$ are deactivated ($x.a_i = 0$) since they have not asked for service so far. The final step is to store a random permutation of all customers in the permutation vector $x.p$. 
Note that during fitness function evaluation, in order to calculate the individual tour lengths $L_v(x)$ for each vehicle $v$, only the sub-sequence of positions $i \in \{1, \ldots, N-2\}$ in $x.p$ is considered for which $x.a_i = 1$ and $x.v_i = v$ holds. Initialization in later eras is different and more complex. Now, the parameter $P$ passed to Alg.~\ref{alg:initialize} -- the final population of the previous era -- is non-empty and its solutions serve as templates for the new population. We aim to transfer as much information as possible. However, usually the majority of individuals $x \in P$ are in need of repair. This is because as time advances (note that additional $\Delta$ time units have passed) further customers $i \in C$ may already have been visited by the vehicle fleet, but it is possible that some of these are inactive in $x$ (i.e., $x.a_i = 0$). Hence, (a) all customers which have already been visited, stored in the list $T^{\leq t}$, are activated and assigned to the responsible vehicle (here we use an overloaded element-of relation on lists in the pseudo-codes for convenience) and (b) furthermore their order in the permutation string $x.p$ is repaired. The latter step is achieved by moving the sub-sequence of visited customers before the remaining customers assigned to the corresponding vehicle in the permutation string. This step completes the repair procedure and the resulting individual is guaranteed to be feasible. Subsequent steps involve randomly activating customers that asked for service within the last $\Delta$ time units. 
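The tour-extraction rule and the era-transition repair described above can be sketched in Python; the function names \texttt{extract\_tour} and \texttt{repair} are ours for illustration and not taken from our implementation:

```python
def extract_tour(v, a, p, vehicle):
    """Tour of `vehicle`: the sub-sequence of the permutation p restricted
    to customers that are active (a[i] = 1) and assigned to this vehicle."""
    return [i for i in p if a[i] == 1 and v[i] == vehicle]

def repair(v, a, p, visited):
    """Era-transition repair: activate already-visited customers, assign them
    to the responsible vehicle and move them before the remaining customers
    in the permutation (`visited` maps vehicle -> realized partial tour).
    Simplification: all realized customers are moved to the global front of p;
    the per-vehicle extraction order is nevertheless preserved."""
    for vehicle, tour in visited.items():
        for i in tour:
            a[i] = 1
            v[i] = vehicle
    realized = [i for tour in visited.values() for i in tour]
    p[:] = realized + [i for i in p if i not in realized]
    return v, a, p
```

On the example of Fig.~\ref{fig:encoding}, \texttt{extract\_tour} reproduces $T_1 = (1,5,3)$ and $T_2 = (6)$ from the vectors $x.v$, $x.a$ and $x.p$ shown there.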
\begin{figure}[htb] \centering \scalebox{0.92}{ \begin{tikzpicture}[scale=1,label distance=-6mm] \begin{scope}[every node/.style={draw=white}] \node (enc) at (0, 0) { \begin{tabular}{lcccccc} \multicolumn{7}{l}{\textbf{Customer sets}} \\ \multicolumn{7}{l}{$M = \{1,5,6\}, D = \{2, 3, 4\}$} \\ \multicolumn{7}{l}{$\CD^{\leq t} = \{3,4\}$} \\ \multicolumn{7}{l}{} \\ \multicolumn{7}{l}{\textbf{(Partial) tours}} \\ \multicolumn{7}{l}{$T_1 = (1,5,3), T_2 = (6)$} \\ \multicolumn{7}{l}{$T^{\leq t}_1 = (1,5), T^{\leq t}_2 = (6)$} \\ \multicolumn{7}{l}{} \\ \multicolumn{7}{l}{\textbf{Encoding of solution $x$}} \\ $i$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \midrule $x.v$ & 1 & 2 & 1 & 1 & 1 & 2 \\ $x.a$ & 1 & 0 & 1 & 0 & 1 & 1 \\ $x.p$ & 4 & 1 & 6 & 5 & 3 & 2 \\ \end{tabular} }; \end{scope} \begin{scope}[every node/.style={circle, draw=black, inner sep=4pt}] \node[fill=black!80, label={[yshift=-1.2cm]start depot}] (depot1) at (3.2, -1.5) {\textcolor{white}{$7$}}; \node[fill=gray!20, above = 1cm of depot1] (v4) {$4$}; \node[right = 1cm of v4] (v5) {$5$}; \node[right = 1cm of v5] (v6) {$6$}; \node[above = 1cm of v4] (v1) {$1$}; \node[fill=gray!20, opacity = 0.2, above = 1cm of v5] (v2) {$2$}; \node[fill=gray!20, above = 1cm of v6] (v3) {$3$}; \node[fill=black!80, below = 1cm of v6, label={[yshift=-1.2cm]end depot}] (depot2) {\textcolor{white}{$8$}}; \draw (depot1) edge[-latex, ultra thick, bend left=25] (v1); \draw (v1) edge[-latex, ultra thick] (v5); \draw (v5) edge[-latex] (v3); \draw (v3) edge[-latex, bend left=25] (depot2); \draw (depot1) edge[-latex, dashed, ultra thick, bend right=25] (v6); \draw (v6) edge[-latex, dashed] (depot2); \end{scope} \end{tikzpicture} } \caption{Illustration of the encoding of an individual $x$. Here, customers $i \in \{1, 3, 5, 6\}$ are active ($x.a_i = 1$) while customers $i \in \{2, 4\}$ are inactive ($x.a_i = 0$); customer 4 however asked for service already since $4 \in \CD^{\leq t}$. 
In contrast, customer $2$ has not asked for service so far (illustrated by reduced opacity in the plot). The vehicles have already visited a subset of customers (illustrated with thick edges): vehicle one serviced customers 1 and 5 (thus $T^{\leq t}_1 = (1, 5)$) while customer 6 was visited by vehicle 2 (thus $T^{\leq t}_2 = (6)$).} \label{fig:encoding} \end{figure} \subsection{Offspring generation} The mutation operator (see Alg.~\ref{alg:mutate}) is designed to address all three combinatorial aspects of the underlying problem, i.e., vehicle re-assignment, customer (de)activation and tour permutation. Here, special attention has to be paid not to produce infeasible individuals. Therefore, mutation operates on the subset of customers which have asked for service until now and have not yet been visited. More precisely, each dynamic customer $i \in D_{av} = (\CD^{\leq t} \setminus T^{\leq t})$ which has not yet been visited is (de)activated with a small probability $p_a$ (note that we treat the list $T^{\leq t}$ as a set here for convenience). Likewise, each of the customers $i \in C_{av} = (\CA^{\leq t} \setminus T^{\leq t})$ is assigned another vehicle independently with equal probability $p_v$. The mutation probabilities $p_a$ and $p_v$ are set dynamically such that in expectation only one (de)activation or (re)assignment happens; small changes are preferred. Finally, with probability $p_\text{swap}$ the permutation vector $x.p$ undergoes $n_\text{swap}$ sequential exchange/swap operations (limited to customers which are not fixed so far). At certain generations, a local search (LS) procedure is applied to each individual $x \in P$ (see line~9 in Alg.~\ref{alg:demoa}). The LS takes the vehicle mapping $x.v$ and customer activation $x.a$ as fixed and aims to improve the individual path length by means of the sophisticated solver EAX~\cite{nagata_powerful_2013} for the Traveling-Salesperson-Problem (TSP). 
To accomplish this goal, given a solution $x \in P$ and a vehicle $v \in \{1, \ldots, n_v\}$, all customers assigned to $v$ in $x$ (together with the start and end depot) and their pairwise distances are extracted (nodes 1, 3, 5, 7, 8 for vehicle~1 in Fig.~\ref{fig:encoding}). Next, since the optimization of each vehicle tour is a Hamilton-Path-Problem (HPP) on the assigned customers (no round-trip tour), a sequence of distance matrix transformations is necessary such that the TSP solver EAX can be used to find an approximate solution to the HPP~(see \cite{jonker:transforming} for details). Again, since partial tours might already have been realized, the HPP-optimization starts at the last node visited by the corresponding vehicle (node 5 in Fig.~\ref{fig:encoding} for vehicle~1, since $T^{\leq t}_1 = (1,5)$ is fixed already and not subject to changes). \subsection{Benchmark instances} Our benchmark set consists of 50 instances in total, each with $n=100$ customers, taken from \cite{MGB2015}. There are 10 instances with points spread uniformly at random in the Euclidean plane and 10 clustered instances for each number of clusters $n_c \in \{2, 3, 5, 10\}$. The cluster centers are placed by space-filling Latin-Hypercube-Sampling to ensure good spread. Subsequently, $\lfloor n/n_c\rfloor$ nodes are placed around each cluster center assuring cluster segregation with no overlap. We refer the reader to~\cite{MGB2015} for more details on the instance generation process. \begin{table}[htb] \caption{DEMOA parameter settings.} \label{tab:demoa_parameter_settings} \begin{tabular}{rl} \toprule {\bf Parameter} & {\bf Value} \\ \midrule Nr. of function evaluations per era & $65\,000$ \\ Population size $\mu$ & 100 \\ Swap probability $p_\text{swap}$ & $0.6$ \\ Nr. 
of swaps $n_\text{swap}$ & $10$ \\ Local Search at generations & first, half-time, last \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth, trim=0 6pt 0 0, clip]{images/eraplot.pdf} \caption{Exemplary Pareto-front approximations for two instances with $75\%$ dynamic requests colored by era and split by problem type (columns) for a single vehicle. Solutions selected by the respective decision maker (\underline{\textbf{0.25-strategy}}) are highlighted and labeled with the era number. Dashed horizontal lines represent the maximum number of dynamic requests which can remain unserved in the respective era.} \label{fig:eraplot_examples} \end{figure} The proportion of dynamic customers is $50\%$ and $75\%$ for each half of the instance set, enabling the study of highly dynamic scenarios. \subsection{Dynamic aspects and decision making} We fix $\neras = 7$ eras and set the era-length to $\Delta = \lceil\max_{i \in D} r(i) / \neras\rceil$ where $r(i)$ is the request time of customer $i$. $\Delta$ is consistently $\approx 150$ time units across all instances. Naturally, one could start a new era once a new customer requests service. This would result in 50 and 75 eras respectively on our benchmark instances\footnote{Note that the benchmark set contains instances with $N=100$ customers (\underline{including two depots}) and $\{50\%, 75\%\}$ dynamic customers.}. However, we argue that in a real-world scenario it is more realistic to make decisions after chunks of requests have come in rather than after every single request. For computational experimentation we automate the decision-making process by considering three different decision maker (DM) strategies. To do so, at the end of each era, we sort the final DEMOA population $P$ in ascending order of the first objective (tour length), i.e. $P_{(1)} \leq P_{(2)} \leq \ldots \leq P_{(\mu)}$ where the $\leq$-relation is with respect to $f_1$. 
Note that in the bi-objective space this sorting results in a descending order with respect to the number of unvisited dynamic customers, our second objective $(f_2)$. The automatic DM now picks the solution $P_{(k)}$ with $k = \lceil d \cdot \mu \rceil$, $d \in [0, 1]$ where increasing $d$-values correspond to stronger \enquote{customer-greediness}, i.e. higher emphasis on keeping the number of unvisited customers low. In our study we cover $d \in \{0.25, 0.5, 0.75\}$ to account for different levels of greediness and refer to such a policy as a $d$-strategy in the following. Certainly, in real-world scenarios, the DM might change their strategy throughout the day, reacting to specific circumstances. However, for a systematic evaluation and to keep our study within feasible ranges, we stick to this subset of decision policies. \begin{figure}[htb] \centering \includegraphics[width=0.95\columnwidth, trim=0 6pt 0 0, clip]{images/eraplot_final_only_by_dmstrategy.pdf} \caption{Exemplary visualization of final DM-decisions (i.e., in last era) for three representative instances with $75\%$ dynamic requests colored by decision maker strategy. The data is split by instance (columns) and number of vehicles used (rows).} \label{fig:eraplot_by_dmstrategy} \end{figure} \subsection{Further parameters} The further parameter settings of the DEMOA stem from preliminary experimentation and are gathered in Table~\ref{tab:demoa_parameter_settings}. For each number of vehicles $n_v \in \{1, 2, 3\}$ and each DM strategy $d \in \{0.25, 0.5, 0.75\}$ we run the DEMOA 30 times on each instance for $\neras$ eras following the $d$-strategy. Moreover, for a baseline comparison, we run the clairvoyant EMOA multiple times for each $n_v$ on each instance with a stopping condition of $20\,000\,000$ function evaluations. The clairvoyant EMOA works the same way the DEMOA does. 
However, it has complete knowledge of the request times of dynamic customers a priori and treats the problem as a static problem.\footnote{Consider the clairvoyant EMOA as the proposed DEMOA (see Alg.~\ref{alg:demoa}) run for one era with $t=0$ and all customers available from the beginning, accounting for request times in the tour length calculations.} This idea was originally introduced in \cite{GMT+2015} for an a posteriori evaluation of decision making for the single-vehicle variant of the considered problem. We use the Pareto-front approximations of the clairvoyant EMOA as a baseline for performance comparison. We provide the R implementation of our algorithm in an accompanying repository~\cite{implementation}. \section{Experimental results} \label{sec:experimental_results} In the following, we analyze the proposed dynamic approach (DEMOA) for one to three vehicles and compare it with an adapted clairvoyant implementation of the original approach by Bossek et al.~\cite{BGMRT2018}. For a fair comparison, that approach has been extended to deal with multiple vehicles and is denoted as EMOA in the following results. \begin{figure*}[htb] \centering \includegraphics[width=\textwidth, trim=0 6pt 0 0, clip]{images/HV_all.pdf} \caption{Boxplots of hypervolume-indicator (lower is better) for all instances split by problem type and fleet size. Results are shown for the 0.5-strategy for visual clarity, but the omitted results show the same patterns. Hypervolume values are calculated by (1) instance-wise calculation of the minimal upper bound for the number of unvisited dynamic customers in the last era across all independent runs of the DEMOA, (2) determining the reference point based on the union of all Pareto-front approximations of DEMOA and EMOA chopped at the bound and (3) calculating the HV-value based on this reference set.} \label{fig:HV_all} \end{figure*} In a first step, we show exemplary results of our dynamic approach to visually introduce the era concept. 
In continuation of the approach of Bossek et al.~\cite{BGMRT2019BiObjective}, we also briefly investigate the influence of decision making on the final decision location when different greediness preferences are considered. Then, we compare the performance of the EMOA and the proposed DEMOA with respect to three different performance measures. In detail, we investigate the overall performance with respect to problem type and fleet size, the performance with respect to different decision strategies, and the overall performance gain induced by using multiple vehicles. Finally, we zoom into dedicated solution instances (and their dynamic evolution process) to learn about the behavior of vehicles on clustered and uniform problem instances. \subsection{Pareto-front approximations and decisions in the dynamic scenario} \label{sec:exp_dynamic} The dynamic nature of the problem and online decision making imply that we do not have a single point in time at which the algorithm performance can be evaluated. In the beginning only mandatory customers are available and the tour planning task is equivalent to (multi-vehicle and open) TSP solving. However, as dynamic customer requests appear over time, the initially planned tour(s) must be modified to allow compromises between tour length and number of visited customers. From this point on, a Pareto-set of solutions has to be considered. Following the era concept of decision making, at a dedicated point in time, a compromise is chosen for realization by a DM. From that time onward, realization starts and vehicles travel the decided tour. Of course, new dynamic requests appear over time. These are considered at the end of the next era and form a set of new compromises. However, that set of compromises has to consider the already realized partial tour of the vehicles, which cannot be reverted. Consequently, already visited (formerly) dynamic customers reduce the upper bound of unserved customers in compromise sets for future decisions. 
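The era bookkeeping described above can be sketched compactly; the following is an illustrative sketch with hypothetical helper names, where the upper bound counts only requests known by the end of the respective era (one plausible reading of the dashed bounds in the era plots):

```python
import math

def era_length(request_times, n_eras):
    """Era length Delta = ceil(max_i r(i) / n_eras), as defined in the setup."""
    return math.ceil(max(request_times.values()) / n_eras)

def unserved_upper_bound(request_times, visited, era, delta):
    """Upper bound on unserved dynamic customers at the end of era `era`:
    dynamic customers that have requested service by t = era * delta and
    are not yet part of an irreversibly realized tour."""
    t = era * delta
    return sum(1 for i, r in request_times.items() if r <= t and i not in visited)
```

As realized tours grow over the eras, the set `visited` grows and the bound shrinks, which is exactly the effect visible in Figure~\ref{fig:eraplot_examples}.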
The effect of repeated decision making and continuous realization of decisions is exemplarily shown in Figure~\ref{fig:eraplot_examples}. The non-dominated fronts for decisions in all seven eras are shown for a uniform and a clustered instance, respectively. For visual comparison, the clairvoyant EMOA results are also shown. Horizontal dashed lines denote the upper bound of unserved dynamic customers in each stage of decision making. Clearly, in the last era (7, brown points), more customers have been visited by the traveling vehicles than in the first era. Thus, the upper bound has decreased. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth, trim=0 6pt 0 0, clip]{images/f1measure_all.pdf} \caption{Boxplots of the $f_1$-measure (tour length of DEMOA solution minus tour length of best clairvoyant EMOA solution with the same number of unvisited customers) calculated on the basis of all solutions in the last era.} \label{fig:f1measure_all} \end{figure*} The upper bound and the range of possible decisions also depend on the decision strategy. A greedy strategy, which aims to reduce the number of unserved customers, will favor solutions with many served customers and thus lead to the realization of longer tours. Less greedy strategies will favor realizations with shorter tours and fewer visited customers. This behavior was already observed by Bossek et al.~\cite{BGMRT2019BiObjective} for a single vehicle. In Figure~\ref{fig:eraplot_by_dmstrategy}, we confirm an analogous behavior also for multiple vehicles and our algorithmic approach. Therein, visualizations of the results of different strategies (0.25, 0.5, and 0.75 priority of the second objective) and multiple runs for different topologies (uniform and clustered) as well as for different numbers of vehicles are shown. We find that decision strategies are reflected in the final decision locations. 
The less greedy strategy produces solutions with more unserved customers than very greedy strategies, independent of the number of vehicles. \subsection{Dynamic and clairvoyant performance and the influence of multiple vehicles} In order to evaluate the approximation quality of the DEMOA, the dynamic nature of the problem has to be respected. From the decision maker's perspective, the last era (and thus the last non-dominated front) includes all previous decisions and can be compared with the clairvoyant results. \begin{figure*}[htb] \centering \includegraphics[width=0.495\textwidth]{images/tours_uniform.pdf} \hfill \includegraphics[width=0.495\textwidth]{images/tours_clustered.pdf} \vspace{-0.1cm} \caption{Exemplary tours in eras 1, 3 and 7 (final) with $0.75$-strategy for a uniform instance (left) and a clustered instance (right). For each instance we show the tours the DM picked for a scenario of a single vehicle (left-most column), two vehicles (columns 2 and 3) and three vehicles (remaining three columns). Bold edges represent the irreversible tour parts which already have been realized by the corresponding vehicle (columns) in the respective era (rows). } \label{fig:examplary_tours} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth, trim=0 6pt 0 0, clip]{images/HV_vehicle_comparison.pdf} \caption{Classical dominated hypervolume distributions (higher is better) of representative instances. The trend is the same for all 50 benchmark instances.} \label{fig:hv_vehicle_comparison} \end{figure} However, due to the continuously decreasing upper bounds of unserved customers during the optimization process (see Section~\ref{sec:exp_dynamic}), the EMOA approximation covers a wider range of solutions than the approximated Pareto-front of the final era. This is considered by our comparison of equivalent ranges of the DEMOA and EMOA results. 
In Figure~\ref{fig:HV_all}, we compare the Hypervolume indicator~\cite{ZDT00} of the DEMOA and EMOA by (1) determining instance-wise the minimal upper bound for the number of unserved customers (objective $f_2$) in the last approximated Pareto-front across all independent runs of the DEMOA. Then (2), we reduce the solutions of the EMOA to those below the previously determined upper bound for $f_2$. From the union of the reduced EMOA results and the DEMOA results, we (3) compute a reference point for Hypervolume computation. Here we compare both algorithms for the medium-greedy $0.5$ strategy. We find that the DEMOA results outperform the EMOA results for uniform problem instances with both $50\%$ and $75\%$ dynamic customers. For clustered instances, performance is in the same range but seemingly depends on the specific instance topology, the dynamic customer ratio and the service request times. However, we can conclude that the results of the DEMOA for the more realistic dynamic scenario are not necessarily worse than those of the clairvoyant approach. While the clairvoyant EMOA approach knows about the request times of all dynamic customers at $t=0$ and considers all potential customers in compromise generation, the DEMOA optimizes tours of era $i+1$ based on partially realized (and unchangeable) tours from era $i$. This often reduces the size of the tour planning problem significantly and allows very good solutions to be obtained, some of which even outperform the EMOA solutions on uniform instances. Figure~\ref{fig:HV_all} also shows that the aforementioned advantage of problem complexity reduction due to dynamism becomes negligible when both algorithms consider multiple vehicles. With increasing number of vehicles, the EMOA tends to outperform the approximation quality of the DEMOA. 
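For reference, the dominated hypervolume of a bi-objective (minimization) point set w.r.t. a reference point can be computed by a simple sweep over the $f_1$-sorted front; the following sketch is for illustration only and is not the indicator implementation used in our experiments:

```python
def hypervolume_2d(points, ref):
    """Dominated hypervolume of bi-objective minimization points w.r.t.
    the reference point `ref`, via a sweep in ascending f1 order."""
    # keep only points strictly better than the reference point in both objectives
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # dominated points contribute no additional area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For instance, the front $\{(1,2), (2,1)\}$ with reference point $(3,3)$ dominates an area of $3$; adding a dominated point leaves the value unchanged.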
While we focused on a single decision strategy in the previous discussion, Figure~\ref{fig:f1measure_all} investigates multiple strategies together with tour length properties. Instead of the hypervolume, we now focus on the tour length objective ($f_1$) and measure the distance of solutions to the clairvoyant solution. The idea behind this measure is that (in the range of the common objective space of DEMOA and EMOA) usually any number of unvisited dynamic customers is covered by the approximated Pareto-front. The convergence quality of a solution set can thus also be expressed by the difference of DEMOA tour length and EMOA tour length. If DEMOA outperforms EMOA, the respective value is negative. The equilibrium of solution quality is denoted by a gray vertical line at $0$ in Figure~\ref{fig:f1measure_all}. While we are able to confirm the general observations from above, we now have detailed insights into the effect of decision strategies. As a clear trend, we find better tour lengths per instance topology and dynamism when greediness in decision making w.r.t. the reduction of the number of unserved customers ($f_2$) is increased. This holds for the single and multiple vehicle case. The reason for this observation is similar to the argument discussed before. The more customers have to be serviced, the more complex the tour planning problem becomes for the clairvoyant EMOA, while more and more customers are fixed in the dynamic scenario and do not have to be considered for tour planning anymore. A dedicated view on the effect of using multiple vehicles in the scenario (online and offline) is provided in Figure~\ref{fig:hv_vehicle_comparison}. We find, for all investigated cluster configurations and the uniform distribution of customers, that multiple vehicles are advantageous regarding classical hypervolume comparison\footnote{Here the reference point is determined for each problem (independent of the number of vehicles and the strategy). 
The dominated hypervolume is then calculated for each algorithm, DM-strategy and number of vehicles.}; the results are statistically highly significant with respect to Wilcoxon-Mann-Whitney tests at significance level $\alpha = 0.001$ in $100\%$ of the cases, i.e. two vehicles are significantly better than one and three better than two. At the same time, we observe that the highest gain in solution quality is associated with the step from one to two vehicles. By including a third vehicle, only little is gained. This is probably rooted in the overhead associated with the distances travelled from the start depot to the first customer and from the last customer to the end depot. These distances occur for each vehicle and have to be travelled no matter how short the remaining tour becomes. Thus, this overhead will naturally bound the number of vehicles that can reasonably be deployed. \subsection{Exemplary analysis of vehicle tours} In this section, we briefly investigate representative examples of generated solutions. Figure~\ref{fig:examplary_tours} details the evolution of tours for a single vehicle as well as two and three vehicles for a uniform and clustered instance in eras 1, 3 and 7. In the single vehicle case, we can just observe the dynamic adaptation of the planned tour (thin line) towards the realized (and irreversible) tour (bold lines) over time. In the two and three vehicle scenarios, we can nicely observe how the vehicles automatically partition the customer space. It is obvious that no vehicle stays idle. Moreover, we find that each vehicle is assigned similar workload. This behavior is partly rooted in the design of our algorithm.
\section{Hermite analysis over $\R^n$} \label{sec:hermite} We consider functions $f : \R^n \to \R$, where we think of the inputs $x$ to $f$ as being distributed according to the standard $n$-dimensional Gaussian distribution $N(0,1)^n$. In this context we view the space of all real-valued square-integrable functions as an inner product space with inner product $\langle{f},{h} \rangle= \E_{\bx \sim N(0,1)^n}[f(\bx)h(\bx)]$. In the case $n = 1$, there is a sequence of Hermite polynomials $h_0(x) \equiv 1, h_1(x) = x, h_2(x) = (x^2 -1)/\sqrt{2},\ldots$ that form a complete orthonormal basis for the space. These polynomials can be defined via $\exp(\lambda x-\lambda^2/2)=\sum_{d=0}^{\infty}(\lambda^d/\sqrt{d!}) h_d(x)$. In the case of general $n$, we have that the collection of $n$-variate polynomials $\{H_S(x) := \prod_{i=1}^{n} h_{S_i}(x_i)\}_{S \in \N^n}$ forms a complete orthonormal basis for the space. Given a square-integrable function $f : \R^n \to \R$ we define its Hermite coefficients by $\tilde{f}(S) = \langle{f},{H_S}\rangle,$ for $S\in \N^n$ and we have that $f(x) = \sum_{S}\tilde{f}(S)H_S(x)$ (with the equality holding in $\calL^2$). Plancherel's and Parseval's identities are easily seen to hold in this setting, i.e.~for square-integrable functions $f,g$ we have $\E_{\bx \sim N(0,1)^n}[f(\bx)g(\bx)] = \sum_{S \in \N^n} \tilde{f}(S) \tilde{g}(S)$ and as a special case $\E_{\bx \sim N(0,1)^n}[f(\bx)^2] = \sum_{S \in \N^n} \tilde{f}(S)^2$. For $S \in \N^n$ we write $|S|$ to denote $S_1 + \cdots + S_n$. Since $\tilde{f}(0^n)=\E_{\bg \sim N(0,1)^n}[f(\bg)]$, we observe that $\sum_{|S| \geq 1} \tilde{f}(S)^2 = \Var[f(\bg)]$. 
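As a quick numerical sanity check of this normalization (the $1/\sqrt{d!}$ factor in the generating function corresponds to $h_d = \mathrm{He}_d/\sqrt{d!}$ for the probabilists' Hermite polynomials $\mathrm{He}_d$), orthonormality under $N(0,1)$ can be verified with NumPy's Gauss-Hermite$_e$ quadrature; this snippet is ours, for illustration only:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials He_d

def hermite_inner_product(i, j, deg=40):
    """<h_i, h_j> = E_{g ~ N(0,1)}[h_i(g) h_j(g)] with h_d = He_d / sqrt(d!),
    computed exactly (for polynomial integrands) by Gauss-Hermite_e quadrature,
    whose weight function is exp(-x^2 / 2)."""
    x, w = He.hermegauss(deg)  # quadrature nodes and weights; sum(w) = sqrt(2*pi)
    hi = He.hermeval(x, [0] * i + [1]) / math.sqrt(math.factorial(i))
    hj = He.hermeval(x, [0] * j + [1]) / math.sqrt(math.factorial(j))
    return float(np.dot(w, hi * hj)) / math.sqrt(2 * math.pi)
```

With `deg=40` the quadrature is exact for polynomial integrands of degree up to $79$, so the computed inner products equal $\delta_{ij}$ up to floating-point error.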
For $j=0,1,\dots$ we write $\mathsf{W}^{=j}[f]$ to denote the level-$j$ Hermite weight of $f$, i.e.~$\sum_{S\in \N^n, |S| = j} \tilde{f}(S)^2.$ We similarly write $\mathsf{W}^{\leq j}[f]$ to denote $\sum_{S\in \N^n, |S| \leq j} \tilde{f}(S)^2.$ \section{Fourier weight of monotone functions} \label{app:hermite-weight} For completeness we give a proof of the following well-known result in the analysis of Boolean functions: \begin{claim} Let $f: \bn \rightarrow \bits$ be a monotone function. Then the squared Fourier weight at levels $0$ and $1$ is $\Omega(\frac{\log^2 n}{n})$, i.e. we have \[ \sum_{S \subseteq [n], |S| \leq 1} \hat{f}(S)^2 = \Omega\left(\frac{\log^2 n}{n}\right). \] This lower bound is best possible up to constant factors. \end{claim} \begin{proof} We first show that the $\Omega(\frac{\log^2 n}{n})$ lower bound on the level $0$ and $1$ Fourier weight cannot be asymptotically improved by considering the so-called \textsf{TRIBES} function. This is a simple read-once monotone DNF (see Section~4.2 of \cite{ODBook} for the exact definition). In particular, the $n$-variable function $f=\textsf{TRIBES}_n$ has the following properties: \begin{enumerate} \item $f$ is monotone (and hence the influence $\Inf_i(f)$ of variable $i$ on $f$ equals the degree-1 Fourier coefficient $\hat{f}(i)$); \item $\Pr_{\bx \in \bn} [f(\bx)=1] = \frac{1}{2} \pm O \big( \frac{\log n}{n}\big)$ (see Proposition~4.12 in \cite{ODBook}); \item For all $1 \le i \le n$, the influence of variable $i$ on $f$ is $O \big( \frac{\log n}{n}\big)$ (see Proposition~4.13 in \cite{ODBook}). \end{enumerate} Together, Items~1, 2 and~3 imply that the \textsf{TRIBES} function is indeed a tight example for our claim. We now prove a lower bound on the squared Fourier weight of any monotone $f$. This is done via a case analysis: \begin{enumerate} \item If $|\widehat{f}(0)| \geq 0.01$, then the squared Fourier weight at level $0$ is at least $10^{-4}$. 
\item We can now assume $|\widehat{f}(0)| < 0.01$. This implies that $\Var[f] > 0.99$. Now suppose that $\Inf(f) = \sum_{i=1}^n \widehat{f}(i)$ is at least $\frac{\log n}{C}$, where $C$ is some absolute constant which is fixed in Step 3. Then, by the Cauchy-Schwarz inequality, the squared level-$1$ weight is at least \[ \sum_{i=1}^n \widehat{f}(i)^2 \ge \frac{1}{n} \cdot \big( {\sum_{i=1}^n \widehat{f}(i)} \big)^2 = \frac{\Inf(f)^2}{n} = \Omega\bigg(\frac{\log^2 n}{n}\bigg). \] \item The only remaining case is that $\Var[f] >0.99$ and $\Inf(f) < \frac{\log n}{C}$. By choosing $C>0$ to be large enough, Friedgut's junta theorem (see Section~9.6 of \cite{ODBook}) guarantees that there is a set $J \subseteq [n]$ such that (i) $|J| \le \sqrt{n}$ and (ii) the Fourier spectrum of $f$ has total mass at most $0.01$ outside of the variables in $J$. Since $\Inf_i(f) = \sum_{S \ni i} \widehat{f}(S)^2$, and since $f$ is $\pm 1$-valued (so its total Fourier weight is $1$) with $\widehat{f}(\emptyset)^2 < 10^{-4}$, we get \[ \sum_{i \in J} \widehat{f}(i) = \sum_{i \in J} \Inf_i(f) \ge \sum_{\emptyset \neq S \subseteq J} \widehat{f}(S)^2 \geq 1 - 0.01 - 10^{-4} \ge 0.98. \] Since $|J| \le \sqrt{n}$, Cauchy-Schwarz again gives $$ \sum_{i} \widehat{f}(i)^2 \ge \sum_{i \in J} \widehat{f}(i)^2 \ge \frac{1}{|J|} \bigg(\sum_{i \in J} \widehat{f}(i)\bigg)^2 \ge \frac{0.98^2}{\sqrt{n}}, $$ and the proof is complete. \end{enumerate} \end{proof} \section{Correlation of a fixed vector with a random unit vector} In this section, we prove the following lemma. \begin{lemma}~\label{lem:corr} Let $v \in \mathbb{S}^{n-1}$ be a fixed vector and $\bu \in \mathbb{S}^{n-1}$ be a uniformly drawn element of $\mathbb{S}^{n-1}$. For $0 < \eps < 1$ and $1/2 \ge \beta \ge \alpha >\frac{1}{\sqrt{n}}$ such that $\beta = (1+ \epsilon) \alpha$, we have \[ 1 \le \frac{\Pr[|\langle v, \bu \rangle| \ge \alpha]}{\Pr[|\langle v, \bu \rangle| \ge \beta]} \le 1 + O(n \alpha^2 \epsilon) \] provided that $n\cdot \alpha^2 \cdot \epsilon \le \frac{1}{8e^2}$. \end{lemma} \begin{proof} It is well-known (see~\cite{Baum:90}) and easy to verify that \begin{equation} \label{eq:baum} \Pr[\langle v, \bu \rangle \ge \alpha] = \frac{A_{n-2}}{A_{n-1}} \int_{z=\alpha}^{1} (1-z^2)^{\frac{n-3}{2}}dz. \end{equation} Here $A_{n-1}$ is the surface area of the sphere $\mathbb{S}^{n-1}$. By symmetry, this implies that \begin{equation} \frac{\Pr[|\langle v, \bu \rangle| \ge \alpha]}{\Pr[|\langle v, \bu \rangle| \ge \beta]} = \frac{\int_{z=\alpha}^{1} (1-z^2)^{\frac{n-3}{2}}dz}{\int_{z=\beta}^{1} (1-z^2)^{\frac{n-3}{2}}dz}. \end{equation} Define $F(\alpha)$ as \[ F(\alpha) = (1-\alpha^2)^{\frac{n-3}{2}}. \] Define $\Delta = \frac{1}{n \alpha}$. Observe that $\Delta \le \alpha$ (for our choice of $\alpha$) and $\Delta \alpha = \frac{1}{n}$. Using this, we have \[ (1-(\alpha + \Delta)^2) \ge (1-\alpha^2) (1-4 \alpha \Delta). \] This implies \begin{equation}~\label{eq:Falphadelta} F(\alpha + \Delta) = (1-(\alpha + \Delta)^2)^{\frac{n-3}{2}} \ge (1-\alpha^2)^{\frac{n-3}{2}} \cdot (1-4\alpha \Delta)^{\frac{n-3}{2}} \geq F(\alpha) \cdot \frac{1}{e^2}. \end{equation} Then, using \eqref{eq:Falphadelta}, \begin{eqnarray}~\label{eq:Falphadelta2} \int_{z=\alpha}^{1} (1-z^2)^{\frac{n-3}{2}}dz \ge \int_{z=\alpha}^{z=\alpha + \Delta} (1-z^2)^{\frac{n-3}{2}}dz \ge \frac{\Delta}{e^2} F(\alpha). \end{eqnarray} On the other hand, since $F$ is non-increasing, \begin{equation}~\label{eq:Falphadelta3} \int_{z=\alpha}^{z=\beta} (1-z^2)^{\frac{n-3}{2}} dz \le (\beta-\alpha) F(\alpha) = \epsilon \cdot \alpha \cdot F(\alpha). \end{equation} Note that the assumption $n \alpha^2 \epsilon \le 1/(8e^2)$ translates to $\epsilon \alpha \le \frac{\Delta}{8e^2}$. 
Combining \eqref{eq:Falphadelta3}, \eqref{eq:Falphadelta2} and this observation, we get \[ 1 \le \frac{\Pr[|\langle v, \bu \rangle| \ge \alpha]}{\Pr[|\langle v, \bu \rangle| \ge \beta]} \le 1 + {\frac {\eps \alpha}{\Delta/e^2 - \eps \alpha}} \le 1 + O(n \alpha^2 \epsilon). \] \end{proof} \section*{Acknowledgement} We thank Ran Raz for alerting us to an error in an earlier proof of \Cref{claim:raz} and for telling us about the Borel-Kolmogorov paradox. \section{Kruskal-Katona for convex sets} \label{sec:main} In this section we give formal statements and proofs of our main structural results, \Cref{lem:key,lem:key-general} below, which are analogues of the Kruskal-Katona theorem for convex and centrally symmetric convex sets. To do this, we first recall the definition of the shell density function $\alpha_K (\cdot)$ from \Cref{eq:shell-density-def}: for $r \geq 0$, \[ \alpha_K(r) := \Prx_{\bx \in \mathbb{S}^{n-1}_r} [\bx \in K]. \] So $\alpha_K(r)$ is equal to the fraction of the origin-centered radius-$r$ sphere which lies in $K$. (A view which will be useful later is that it is the probability that a random Gaussian-distributed point $\bg \sim N(0,1)^n$ lies in $K$, conditioned on $\|\bg\|=r.$) An easy fact about the function $\alpha_K(\cdot)$ is the following: \begin{fact}~\label{fact:convex-decreasing} If $K$ is convex and $0^n \in K$ then $\alpha_K(\cdot)$ is non-increasing. \end{fact} \begin{proof} By convexity and since $0^n \in K$, if $x \in K$ then $\lambda x \in K$ for any $\lambda \in [0,1]$.
This immediately implies that $ \Prx_{\bx \in \mathbb{S}^{n-1}_r} [\bx \in K] \leq \Prx_{\bx \in \mathbb{S}^{n-1}_{\lambda r}} [\bx \in K]$ and consequently $\alpha_K(\cdot)$ is non-increasing. \end{proof} We begin with our analogue of the Kruskal-Katona theorem for centrally-symmetric convex sets, since it is somewhat easier to state. The following is a more general version of \Cref{thm:informal-centrally-symmetric-density-increment}: \begin{theorem} [Kruskal-Katona for centrally symmetric convex sets] \label{lem:key} Let $K \subset \mathbb{R}^n$ be a centrally symmetric convex body and let $r>0$ be such that $\alpha _K(r) \in (0,1)$. Let $0 < \kappa <1/10$. Then \[ \alpha_K (r (1-\kappa)) \geq \alpha_K(r) + \kappa \cdot \Theta((\alpha_K(r)(1-\alpha_K(r)))^2). \] \end{theorem} Intuitively, the above theorem says that at any input $r$ where the shell density function $\alpha_K(r)$ is not too close to 0 or 1, slightly decreasing the input $r$ will cause the shell density function to noticeably increase. As was noted earlier, such a density increment statement does not hold for general convex bodies $K$ that contain the origin (for example, if $K = \{x: x_1 \ge 0\}$ then $\alpha_K(r) =1/2$ for all $r>0$). The next theorem establishes a density increment for general convex bodies that contain an origin-centered ball (and implies \Cref{thm:informal-convex-density-increment}): \begin{theorem} [Kruskal-Katona for general convex sets] \label{lem:key-general} Let $K \subset \mathbb{R}^n$ be a convex body that contains the radius-$r_{\mathsf{small}}$ origin-centered ball $B(0^n, r_{\mathsf{small}})$. Let $r$ be such that $\alpha _K(r) \in (0,1)$ and let $0 < \kappa <1/10$.
Then \begin{eqnarray*} \alpha_K (r (1-\kappa)) \geq \begin{cases} \alpha_K(r) + \Theta\big(\kappa \cdot \alpha_K(r ) \cdot \frac{ r_{\mathsf{small}}}{r}\big) \ &\textrm{if } \ 0 < \alpha_K(r) \le 1/2 \\ \alpha_K(r) + \Theta\big( \kappa \cdot (1-\alpha_K(r)) \cdot \min \big\{1-\alpha_K(r), \frac{r_{\mathsf{small}}}{r} \big\} \big) \ &\textrm{if } \ 1/2 < \alpha_K(r) \le 1. \\ \end{cases} \end{eqnarray*} \end{theorem} Note that the increment in the above theorem is linearly dependent on $r_{\mathsf{small}}/r$ (and thus vanishes when no origin-centered ball is contained in $K$). It is not difficult to see that this dependence is best possible by considering some fixed $r_{\mathsf{small}}>0$ and the convex set $K = \{x \in \mathbb{R}^n: x_1 + r_{\mathsf{small}}\ge 0\}$. \subsection{Proof of \Cref{lem:key}} \label{sec:proof-of-key-lemma} The proofs of \Cref{lem:key} and \Cref{lem:key-general} have a substantial overlap; in particular, the first parts of the two proofs are identical. To avoid repetition we will explicitly note the places where the two proofs diverge. We now start with the proof of \Cref{lem:key}. Note that because $K$ contains the origin, $\alpha_K(\cdot)$ is a non-increasing function. Set $\beta = \min \{\alpha_K(r), 1- \alpha_K(r)\}$ so that $0 < \beta \leq 1/2$ and thus $\alpha_K(r) \in [\beta, 1-\beta]$. For simplicity of exposition, we now rescale the convex body by a factor of $1/r$; after this rescaling we have that $\alpha_K(1) \in [\beta, 1-\beta]$, and thus we need to prove a lower bound on $\alpha_K(1-\kappa)$.
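As an aside (a numerical illustration of ours, not used anywhere in the argument), the density increment promised by \Cref{lem:key} can be observed for a concrete centrally symmetric body. The sketch below takes $K$ to be the slab $\{x \in \mathbb{R}^n : |x_1| \le a\}$, for which $\alpha_K(r)$ reduces to a one-dimensional integral: the rescaled first coordinate of a uniform point on the radius-$r$ sphere has density proportional to $(1-z^2)^{(n-3)/2}$ on $[-1,1]$. All parameter choices below are illustrative.

```python
def shell_density_slab(a, r, n, steps=20000):
    # alpha_K(r) for the slab K = {x in R^n : |x_1| <= a}: the fraction of
    # the origin-centered radius-r sphere lying inside K.  The rescaled
    # first coordinate z = x_1 / r of a uniform point on the sphere has
    # density proportional to (1 - z^2)^((n-3)/2) on [-1, 1].
    t = min(a / r, 1.0)

    def integral(lo, hi):
        h = (hi - lo) / steps  # midpoint rule
        return h * sum((1.0 - (lo + (k + 0.5) * h) ** 2) ** ((n - 3) / 2.0)
                       for k in range(steps))

    return integral(-t, t) / integral(-1.0, 1.0)

n, a, kappa = 20, 0.3, 0.05
alpha_r = shell_density_slab(a, 1.0, n)               # alpha_K(1)
alpha_shrunk = shell_density_slab(a, 1.0 - kappa, n)  # alpha_K(1 - kappa)
print(alpha_r, alpha_shrunk)
```

Shrinking the radius by the factor $1-\kappa$ visibly increases the shell density, as the theorem predicts.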
Let $C_1 := K \cap \mathbb{S}^{n-1}$, and let us write\footnote{We include the subscript $1$ on $\mu$ because we will soon be considering spheres of radii other than 1.} $\mu_1(C_1)$ to denote the measure of $C_1$ as a fraction of $\mathbb{S}^{n-1}$, so $\mu_1(C_1)$ satisfies $\mu_1(C_1) \in [\beta, 1-\beta].$ In other words, under the Haar measure (i.e.~the uniform distribution) on the unit sphere, $\mu_1(C_1)$ is the probability that a randomly drawn point lies in $C_1$. Our argument makes crucial use of a variant of a lemma due to Raz~\cite{raz1999exponential}. In particular, Raz showed that for any subset $A \subset \mathbb{S}^{n-1}$ with $\mu_1(A)$ bounded away from 0 and 1 and a random subspace $\bV$ of $\mathbb{R}^n$ of dimension $\approx 1/\epsilon^2$, the Haar measure of $A \cap \bV$ (as a fraction of the unit sphere in $\bV$) is $\epsilon$-close to $\mu_1(A)$ with high probability. We adapt Raz's arguments to show a variant of this result. Roughly speaking, our variant implies that under the above conditions, the measure of $A \cap \bV$ as a fraction of the unit sphere in $\bV$ is bounded away from $0$ and $1$ with non-negligible probability \emph{even if $\bV$ is a random subspace of dimension only $2$}. This variant is useful for us because once the ambient dimension is $2$, we can use elementary geometric arguments to prove \Cref{lem:key}. We now state our variant of Raz's lemma: \begin{claim} [Variant of the main lemma of \cite{raz1999exponential}] \label{claim:raz} Let $\bV$ be a uniform random 2-dimensional subspace of $\R^n$ and let $C$ be a subset of $\mathbb{S}^{n-1}$ such that $\mu_1(C) \in [\beta,1/2]$ for $0 < \beta \le 1/2$. Then $$ \Prx_{\bV} [\mu_{\bV,1}(C \cap \bV) \in [\beta/4, 9/10]] \ge \frac{\beta}{2}.
$$ Similarly, if $\mu_1(C) \in [1/2,1-\beta]$, then $$ \Prx_{\bV} [\mu_{\bV,1}(C \cap \bV) \in [1/10, 1-\beta/4]] \ge \frac{\beta}{2}. $$ Here $\mu_{\bV,1}(C \cap \bV)$ denotes the measure of $C \cap \bV$ as a fraction of the unit sphere $\bV \cap \mathbb{S}^{n-1}$ of the $2$-dimensional subspace $\bV$. \end{claim} We defer the proof of \Cref{claim:raz} to \Cref{sec:claim:raz:proof} and continue with the proof of \Cref{lem:key} assuming \Cref{claim:raz}. For $ 0 < \kappa <1$ let $\mathbb{S}^{n-1}_{1-\kappa}$ denote the origin-centered sphere of radius $1-\kappa$ in $\mathbb{R}^n$. Define $C_{1-\kappa} := K \cap \mathbb{S}^{n-1}_{1-\kappa}$ and let $\mu_{1-\kappa}(C_{1-\kappa})$ denote the fractional density of $C_{1-\kappa}$ in $\mathbb{S}^{n-1}_{1-\kappa}$. Note that $\alpha_K(1) = \mu_1(C_1)$ and $\alpha_K(1-\kappa) = \mu_{1-\kappa}(C_{1-\kappa})$. For $V$ any $2$-dimensional subspace of $\mathbb{R}^n$, define $K_V := K \cap V$, $C_{V, 1} := K \cap V \cap \mathbb{S}^{n-1},$ and $C_{V, 1-\kappa} := K \cap V \cap \mathbb{S}^{n-1}_{1-\kappa}$. We further define $\mu_{V,1}(C_{V, 1})$ (respectively $\mu_{V,1-\kappa}(C_{V, 1-\kappa})$) as the measure of $C_{V, 1}$ (respectively $C_{V, 1-\kappa}$) as a fraction of $\mathbb{S}^{n-1}_{1}\cap V$ (respectively $\mathbb{S}^{n-1}_{1-\kappa}\cap V$). Note that $\mathbb{S}^{n-1}_{1}\cap V$ (respectively $\mathbb{S}^{n-1}_{1-\kappa}\cap V$) is the origin-centered circle of radius $1$ (respectively $1-\kappa$) inside the $2$-dimensional subspace $V$. \Cref{lem:key} and \Cref{lem:key-general} are essentially lower bounds on $ \mu_{1-\kappa}(C_{1-\kappa}) - \mu_1(C_1)$.
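Before proceeding, we record a numerical sanity check (ours, purely illustrative and not part of the argument) of the averaging step used in the proof below: the density of $K$ on the unit sphere equals the expected density of $K$ on the unit circle of a uniformly random two-dimensional subspace. The sketch estimates both quantities for the slab $K = \{x : |x_1| \le a\}$; on a subspace with orthonormal basis $(u,w)$, the first coordinate of $\cos(t)u + \sin(t)w$ is $R\cos(t-\phi)$ with $R = \sqrt{u_1^2 + w_1^2}$, so the cross-sectional density is $(2/\pi)\arcsin(\min\{a/R, 1\})$.

```python
import math
import random

random.seed(0)

def circle_fraction_in_slab(a, n):
    # Fraction of the unit circle of a uniformly random 2-dim subspace
    # V = span(g1, g2) lying in the slab K = {x : |x_1| <= a}.
    g1 = [random.gauss(0.0, 1.0) for _ in range(n)]
    g2 = [random.gauss(0.0, 1.0) for _ in range(n)]
    # Gram-Schmidt: orthonormal basis (u, w) of V.
    norm1 = math.sqrt(sum(x * x for x in g1))
    u = [x / norm1 for x in g1]
    d = sum(x * y for x, y in zip(u, g2))
    w_raw = [y - d * x for x, y in zip(u, g2)]
    norm2 = math.sqrt(sum(x * x for x in w_raw))
    w = [x / norm2 for x in w_raw]
    # First coordinate of cos(t)*u + sin(t)*w is R*cos(t - phi).
    R = math.hypot(u[0], w[0])
    return (2.0 / math.pi) * math.asin(min(a / R, 1.0))

def sphere_point_in_slab(a, n):
    # Indicator that a uniform point of the unit sphere lies in the slab.
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in g))
    return 1.0 if abs(g[0]) / norm <= a else 0.0

n, a, trials = 20, 0.3, 20000
avg_cross_section = sum(circle_fraction_in_slab(a, n)
                        for _ in range(trials)) / trials
direct = sum(sphere_point_in_slab(a, n) for _ in range(trials)) / trials
print(avg_cross_section, direct)
```

The two Monte Carlo estimates agree up to sampling error, as the exactness of the averaging identity requires.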
To establish these lower bounds, we first observe that the density of $K$ in an $n$-dimensional sphere is an average of two-dimensional ``cross-sectional'' densities; more precisely, for $\bV$ a uniform random $2$-dimensional subspace of $\R^n$, we have that \begin{equation}~\label{eq:avg-1} \mu_1(C_1) = \Ex_{\bV} [\mu_{\bV,1}(C_{\bV,1})] \quad \quad \textrm{and} \quad \quad \mu_{1-\kappa}(C_{1-\kappa}) = \Ex_{\bV} [\mu_{\bV,1-\kappa}(C_{\bV,1-\kappa})]. \end{equation} Another simple but crucial observation is that for any fixed $2$-dimensional subspace $V$, it follows directly from \Cref{fact:convex-decreasing} that \begin{equation}~\label{eq:avg-2} \mu_{V,1}(C_{V, 1}) \leq \mu_{V,1-\kappa}(C_{V, 1-\kappa}). \end{equation} The high level idea of our argument is to strengthen \Cref{eq:avg-2} to a strict inequality for a non-negligible fraction of subspaces $V$ and thereby by \Cref{eq:avg-1} obtain an overall density increment. Towards this end, let us partition $C_{V,1-\kappa}$ into two sets $A_{K,V}$ and $B_{K,V} = C_{V,1-\kappa} \setminus A_{K,V}$ as follows: \[ A_{K,V} := \bigg\{z \in \mathbb{S}^{n-1}_{1-\kappa} \cap V: \frac{1}{1-\kappa} \cdot z \in C_{V, 1} \bigg\}. \] We observe that $\mu_{V,1-\kappa}(A_{K,V}) = \mu_{V,1}(C_{V, 1})$, and hence we have that \begin{eqnarray}~\label{eq:diff-reexp} \mu_{V,1-\kappa}(C_{V, 1-\kappa})- \mu_{V,1}(C_{V, 1}) = \mu_{V,1-\kappa}(B_{K,V}). \end{eqnarray} The next claim proves a lower bound on $\mu_{V,1-\kappa}(B_{K,V})$. We note that this is the first and essentially the only point of departure between the proofs of \Cref{lem:key} and \Cref{lem:key-general}. \begin{claim}~\label{clm:two-d-increment} Let $\mu_{V,1}(C_{V, 1}) =p \in (0,1)$. Then for all $0 \le \kappa \le \frac{1}{10}$, we have that \[ \mu_{V,1-\kappa}(B_{K,V}) \ge \frac{\kappa \cdot (1-p)}{2} \cdot \sin (\pi \cdot p/2).
\] \end{claim} \begin{proof} In this part of the proof we will refer to $\mathbb{S}_1^{n-1} \cap V$ as ``the unit circle'' and to $\mathbb{S}_{1-\kappa}^{n-1} \cap V$ as ``the circle of radius $1-\kappa$.'' Observe that $C_{V, 1}$ is a subset of the unit circle, and let us partition $\overline{C_{V, 1}}$ into a collection of disjoint arcs (whose end points belong to $C_{V, 1}$). Now, for any such arc $\mathcal{F}$, define $\mathcal{F}_{1-\kappa} := \{z (1-\kappa) : z\in \mathcal{F}\}$. Now we note three simple but crucial facts: (i) $\mathcal{F}_{1-\kappa} \subseteq B_{K,V}$; (ii) if $\mathcal{F}$ and $\mathcal{G}$ are disjoint arcs then so are $\mathcal{F}_{1-\kappa}$ and $\mathcal{G}_{1-\kappa}$; and (iii) the angle of any such arc $\mathcal{F} \subseteq \overline{C_{V, 1}}$ is strictly less than $\pi$. (The last fact holds because $C_{V, 1}$ is symmetric and $\mu_{V,1}(C_{V, 1}) >0$.) To finish the proof of \Cref{clm:two-d-increment}, we need the following useful claim: \begin{claim}~\label{clm:angle-include} Suppose that the angular measure of an arc $\mathcal{F} \subseteq \overline{C_{V, 1}}$ whose end points belong to $C_{V, 1}$ is $0 < t < \pi$. Then the angular measure of the arc $\mathcal{F}_{1-\kappa} \cap K$ is at least $(t \kappa \cdot \cos (t/2)) /2$. \end{claim} \begin{proof} Without loss of generality assume that the center of the unit circle is $(0,0)$ and that the two endpoints of the arc $\mathcal{F}$ are located at $(\cos 0, \sin 0)$ and $(\cos t, \sin t)$. By definition both endpoints are in the set $C_{V, 1}$ and hence the line segment $L$ joining $(\cos 0, \sin 0)$ and $(\cos t, \sin t)$ is in the convex set $K$.
Using this and the fact that the origin lies in $K$, it follows from a simple geometric argument that the angular measure of $\mathcal{F}_{1-\kappa}$ inside $K$ is exactly \begin{equation}~\label{eq:lb-ang-measure} \begin{cases} = t &\textrm{if} \ \cos(t/2) > 1-\kappa \\ = 2 \bigg( \frac{t}{2} - \arccos \bigg(\frac{\cos (t/2)}{1-\kappa}\bigg)\bigg) &\textrm{if} \ \cos(t/2) \leq 1-\kappa \end{cases} \end{equation} If $\cos(t/2) > 1-\kappa$ then by \eqref{eq:lb-ang-measure} we are done, since $t \geq (t \kappa \cdot \cos (t/2)) /2.$ If $\cos(t/2) \le 1-\kappa$, then we recall the fact that if $0 \le x,\Delta x$ and $x + \Delta x \le 1$ then \[ \arccos(x) - \arccos(x+ \Delta x) \ge \Delta x. \] Applying this inequality with $x=\cos(t/2)$ and $x + \Delta x = {\frac {\cos(t/2)}{1-\kappa}}$, we get that the angular measure of $\mathcal{F}_{1-\kappa}$ inside $K$ is at least ${\frac {2 \kappa}{1-\kappa}} \cos(t/2)$, which is easily seen to be at least $(t \kappa \cdot \cos (t/2)) /2$ since ${\frac 2 {1-\kappa}} \geq 2 \geq \pi/2 \geq t/2.$ \end{proof} Armed with \Cref{clm:angle-include}, we can now prove \Cref{clm:two-d-increment}. In particular, suppose $\overline{C_{V, 1}}$ is a union of disjoint arcs $\{\mathcal{F}^{(i)}\}_{i \in \mathbb{N}}$ of lengths $\{t_i\}_{i \in \mathbb{N}}$. We now make two observations: \begin{enumerate} \item Since $\mu_{V,1}(C_{V, 1}) = p$, the total angular measure of the arcs, $\sum_{i \in \mathbb{N}} t_i$, is $2\pi (1-p)$. \item Since $C_{V, 1}$ (and hence its complement) is centrally symmetric, each $t_i$ is at most $\pi(1-p)$.
\end{enumerate} For each arc $\mathcal{F}^{(i)}$, by \Cref{clm:angle-include} the angular measure of $\mathcal{F}^{(i)}_{1-\kappa} \cap K$ is at least $\frac{t_i \kappa \cos (t_i/2)}{2}$. This means that the total angular measure of all the arcs $\mathcal{F}^{(i)}_{1-\kappa}$ is at least \[ \sum_i \frac{t_i \kappa \cos (t_i/2)}{2} \geq \frac{(1-p) \cdot 2\pi \cdot \kappa}{2} \cdot \cos (\pi(1-p)/2) = \frac{(1-p) \cdot 2\pi \cdot \kappa}{2} \cdot \sin (\pi \cdot p/2), \] where the inequality follows from items 1 and 2 above and the fact that the cosine function is monotonically decreasing in the interval $[0,\pi)$. Translating from the total angular measure of all the arcs $\mathcal{F}^{(i)}_{1-\kappa}$ to $\mu_{V,1-\kappa}(B_{K,V})$ via facts (i) and (ii) from the beginning of the proof, we get \Cref{clm:two-d-increment}. \end{proof} To finish the proof of \Cref{lem:key}, we consider two cases: (I) when $\mu_{1}(C_1) \le 1/2$, and (II) when $\mu_{1}(C_1) > 1/2$. In case (I) we set $\beta = \mu_1(C_1) $ and in case (II) we set $1-\beta = \mu_1(C_1)$, so in both cases it holds that $\beta \le 1/2$. We now define a two-dimensional subspace $V \subset \R^n$ to be \emph{good} if \begin{enumerate} \item In case (I), $\beta /4 \le \mu_{V,1}(C_{V, 1}) \le 9/10$; \item In case (II), $1/10 \le \mu_{V,1}(C_{V, 1}) \le 1-\beta/4$. \end{enumerate} Note that by \Cref{claim:raz}, in both cases $\Prx_{\bV} [\bV \textrm{ is good}] \ge \beta/2$. 
We thus have that \begin{align} \mu_{1-\kappa}(C_{1-\kappa}) &= \mathbf{E}_{\bV}[\mu_{\bV, 1-\kappa}(C_{\bV,1-\kappa})] \ \tag{by \eqref{eq:avg-1}}\nonumber \\ &= \mathbf{E}_{\bV}[\mu_{\bV, 1-\kappa}(C_{\bV,1-\kappa}) \ | \ \bV \textrm{ is not good}] \cdot \Pr[\bV \textrm{ is not good}] \nonumber \\ & \ \ \ \ + \mathbf{E}_{\bV}[\mu_{\bV, 1-\kappa}(C_{\bV,1-\kappa}) \ | \ \bV \textrm{ is good}] \cdot \Pr[\bV \textrm{ is good}] \nonumber \\ &\geq \mathbf{E}_{\bV}[\mu_{\bV, 1}(C_{\bV,1}) \ | \ \bV \textrm{ is not good}] \cdot \Pr[\bV \textrm{ is not good}] \nonumber \\ & \ \ \ \ + \mathbf{E}_{\bV}[\mu_{\bV, 1-\kappa}(C_{\bV,1-\kappa}) \ | \ \bV \textrm{ is good}] \cdot \Pr[\bV \textrm{ is good}] \ \ \ \ \ \ \ \text{(by \eqref{eq:avg-2})}\label{eq:ineq-Ck-1} \end{align} By applying \eqref{eq:diff-reexp} and \Cref{clm:two-d-increment}, we get that if $V$ is {good}, then \[ \mu_{V,1-\kappa}(C_{V, 1-\kappa}) \ge \mu_{V,1}(C_{V, 1}) + \Theta(\kappa \beta). \] Using \Cref{claim:raz}, we have that $\Prx_{\bV} [\bV \textrm{ is {good}}] \ge \beta/2$. Combining these two inequalities with \eqref{eq:ineq-Ck-1}, we get that \begin{align*} \mu_{1-\kappa}(C_{1-\kappa}) &\ge \mathbf{E}_{\bV}[\mu_{\bV, 1}(C_{\bV,1}) \ | \ \bV \textrm{ is not good}] \cdot \Pr[\bV \textrm{ is not good}] \\ & \ \ \ \ + \mathbf{E}_{\bV}[\mu_{\bV, 1}(C_{\bV,1}) \ | \ \bV \textrm{ is good}] \cdot \Pr[\bV \textrm{ is good}] + \Theta (\kappa \cdot \beta^2) \\ &\ge \mathbf{E}_{\bV}[\mu_{\bV, 1} (C_{\bV,1})] + \Theta (\kappa \cdot \beta^2) = \mu_{1}(C_1) + \Theta (\kappa \cdot \beta^2), \end{align*} where the last inequality again uses \eqref{eq:avg-1}. The proof of \Cref{lem:key} is complete modulo the proof of \Cref{claim:raz}, which we give below. \subsubsection{Proof of \Cref{claim:raz}} \label{sec:claim:raz:proof} Recall that $C \subset \mathbb{S}^{n-1}$ is such that $\mu_1(C)$ (the measure of $C$ as a fraction of $\mathbb{S}^{n-1}$) satisfies $0 < \min\{\mu_1(C) , 1 - \mu_1(C) \} = \beta \leq 1/2$. 
For conciseness let $c$ denote $\mu_1(C),$ so $c \in [\beta,1-\beta].$ We follow the general structure of Raz's original argument with some careful modifications. Let $\by^{(1)},\by^{(2)}$ be independent uniform random elements of $\mathbb{S}^{n-1}$. Let $\bY$ be the number of elements of $\{\by^{(1)},\by^{(2)}\}$ that lie in $C$, so $\bY$ is supported in $\{0,1,2\}$. Then we have \begin{equation}~\label{eq:Yprob} \Pr[\bY=2]=c^2, \ \ \Pr[\bY=0] = (1-c)^2. \end{equation} Given vectors $u,v \in \R^n$ let $\mathrm{span}(\{u,v\})$ denote the span of $u$ and $v$. We record a few easy but subtle facts about the distribution of independent uniform random $\by^{(1)},\by^{(2)}$ and their span: \begin{fact}~\label{fact:paradoxical} \begin{enumerate} \item For $\by^{(1)},\by^{(2)}$ chosen as above, with probability $1$ the vector space $\mathrm{span}(\{\by^{(1)},\by^{(2)}\})$ is uniform random over all 2-dimensional subspaces of $\mathbb{R}^n$. \item For any fixed 2-dimensional subspace $V'$, conditioned on $\mathrm{span}(\{\by^{(1)},\by^{(2)}\}) = V'$, each of $\by^{(1)},\by^{(2)}$ is uniformly randomly distributed over $V' \cap \mathbb{S}^{n-1}$. \end{enumerate} \end{fact} \begin{remark} We remark that conditioned on $\mathrm{span}(\{\by^{(1)},\by^{(2)}\}) = V'$, the random variables $\by^{(1)}$ and $\by^{(2)}$ are no longer independent. An earlier draft of this paper gave an argument that $\by^{(1)}$ and $\by^{(2)}$ are independent conditioned on $\mathrm{span}(\{\by^{(1)},\by^{(2)}\}) = V'$, but there was a subtle flaw in the argument, which was pointed out to us by Raz in a personal communication~\cite{raz2019pc}, arising from the Borel-Kolmogorov paradox~\cite{Kolmogorov-Borel:wikipedia}. The purpose of this remark is to highlight the fact that subtle issues can arise when conditioning on measure zero events (such as $\mathrm{span}(\{\by^{(1)},\by^{(2)}\}) = V'$). \end{remark} We are now ready to prove \Cref{claim:raz}. We start with the case when $\mu_1(C) = c = \beta \leq 1/2$. Later we will consider the case when $\mu_1(C) = c = 1-\beta > 1/2$. \medskip \noindent \textbf{Case I: $\mu_1(C) = c = \beta \le 1/2$.} Our aim is to bound the probabilities $\Prx_{\bV}[\mu_{\bV,1}(C \cap \bV) > 9/10]$ and $\Prx_{\bV}[\mu_{\bV,1}(C \cap \bV) < \beta/4]$. First, fix a two-dimensional subspace $V'$ such that $\mu_{V',1}(C \cap V') > 9/10.$ By the second item in \Cref{fact:paradoxical}, we have that \[ \Prx_{\by^{(1)},\by^{(2)}}[\bY=2 \ | \ \mathrm{span}(\{\by^{(1)},\by^{(2)}\}) = V'] > \frac{8}{10}. \] Since this is true for any such subspace $V'$, we have that \[ \Prx_{\by^{(1)},\by^{(2)}}[\bY=2 \ | \ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\})) > 9/10] > \frac{8}{10}. \]
We thus have \begin{align*} c^2 &= \Prx_{\by^{(1)},\by^{(2)}}[\bY=2] \geq \Prx_{\by^{(1)},\by^{(2)}}[\bY = 2 \ \& \ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\})) > 9/10]\\ &=\Prx_{\by^{(1)},\by^{(2)}}[\bY = 2 \ | \ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\})) > 9/10] \cdot\\ & \ \ \ \ \Prx_{\by^{(1)},\by^{(2)}}[ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\})) > 9/10]\\ &>\frac{8}{10} \cdot \Prx_{\by^{(1)},\by^{(2)}}[ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\})) > 9/10], \end{align*} which gives \[ \Prx_{\by^{(1)},\by^{(2)}}[ \mu_{\mathrm{span}(\{\by^{(1)},\by^{(2)}\}),1}(C \cap \mathrm{span}(\{\by^{(1)},\by^{(2)}\}))> 9/10] < {c^2 \cdot \frac{10}{8}}. \] Since by the first item of \Cref{fact:paradoxical}, the vector space $\mathrm{span}(\{\by^{(1)},\by^{(2)}\})$ is with probability 1 a uniform random two-dimensional subspace $\bV$ of $\R^n$, we may restate this last bound as \begin{equation}~\label{eq:bv-lb1} \Prx_{\bV}[ \mu_{\bV,1}(C \cap \bV)> 9/10] < {c^2 \cdot \frac{10}{8}}. \end{equation} An identical argument (now using the event $\bY=0$ rather than the event $\bY=2$) gives \begin{equation}~\label{eq:bv-lb2} \Prx_{\bV}[ \mu_{\bV,1}(C \cap \bV)<\beta/4] < {\frac {(1-c)^2}{1-\beta/2}}. \end{equation} Combining (\ref{eq:bv-lb1}) and (\ref{eq:bv-lb2}) (and using $c=\beta$ in this case), we get \begin{equation}~\label{eq:bv-lb-f1} \Prx_{\bV} [\mu_{\bV,1}(C \cap \bV)\in [\beta/4, 9/10]] \ge 1- {\beta^2 \cdot \frac{10}{8}} - {\frac {(1-\beta)^2}{1-\beta/2}}.
\end{equation} \medskip \noindent \textbf{Case II: $\mu_1(C) = c = 1-\beta >1/2$.} Applying the above analysis \emph{mutatis mutandis} in this setting, we get \begin{equation}~\label{eq:bv-lb-f2} \Prx_{\bV} [\mu_{\bV,1}(C \cap \bV)\in [1/10, 1-\beta/4]] \ge 1- {\beta^2 \cdot \frac{10}{8}} - {\frac {(1-\beta)^2}{1-\beta/2}}. \end{equation} Elementary calculus shows that for all $0 \le \beta \le 1/2$, \[ 1- {\beta^2 \cdot \frac{10}{8}} - {\frac {(1-\beta)^2}{1-\beta/2}} \ge \frac{\beta}{2}. \] Combining the above inequality with \eqref{eq:bv-lb-f1} (respectively \eqref{eq:bv-lb-f2}) gives Case I (respectively~Case II) of the claim, and the proof of \Cref{claim:raz} is complete. \qed \subsection{Proof of Theorem~\ref{lem:key-general}} The proof of \Cref{lem:key-general} is almost exactly the same as the proof of \Cref{lem:key} up to the statement of \Cref{clm:two-d-increment} (including exactly the same definitions). The only difference is that now as we rescale to set $r=1$, the guarantee for $K$ is that $B(0^n, r_{\mathsf{inner}}) \subseteq K$ where $r_{\mathsf{inner}} = r_{\mathsf{small}} / r$. Finally, we can assume in the current context that $1-\kappa \ge r_{\mathsf{inner}}$, since otherwise the conclusion trivially holds. Instead of \Cref{clm:two-d-increment}, we now have the following claim. \begin{claim}~\label{clm:2d-increment-new} Let $\mu_{V,1}(C_{V, 1}) = p \in (0,1)$. Then, for all $0 \le \kappa \le 1/20$, \[ \mu_{V,1-\kappa}(B_{K,V}) \ge \min \bigg\{ \frac{\kappa \cdot r_{\mathsf{inner}}}{2\pi}, \frac{(1-p) \cdot \kappa}{4} \bigg\}. \] \end{claim} \begin{proof} As in \Cref{clm:two-d-increment}, we will refer to $\mathbb{S}_1^{n-1} \cap V$ as ``the unit circle'' and $\mathbb{S}_{1-\kappa}^{n-1} \cap V$ as ``the circle of radius $1-\kappa$.'' As before, $C_{V, 1}$ is a subset of the unit circle and we partition $\overline{C_{V, 1}}$ into a collection of disjoint arcs (whose end points belong to $C_{V, 1}$).
As before, for any such arc $\mathcal{F}$, define $\mathcal{F}_{1-\kappa} = \{z (1-\kappa) : z\in \mathcal{F}\}$. Now we make two observations: (i) $\mathcal{F}_{1-\kappa} \subseteq B_{K,V}$, (ii) if $\mathcal{F}$ and $\mathcal{G}$ are disjoint arcs, then so are $\mathcal{F}_{1-\kappa}$ and $\mathcal{G}_{1-\kappa}$. Note that unlike \Cref{clm:two-d-increment}, now it is possible for a single arc to have measure more than $\pi$. We deal with ``large arcs'' through the following claim (recall that in our setting we have $B(0^n, r_{\mathsf{inner}}) \subseteq K$): \begin{claim}~\label{clm:angle-include2} Suppose the angular measure of an arc $\mathcal{F} \subseteq \overline{C_{V, 1}}$ whose end points belong to $C_{V, 1}$ is $\pi/2 \le t < 2\pi$. Then the angular measure of the arc $\mathcal{F}_{1-\kappa} \cap K$ is at least ${\kappa \cdot r_{\mathsf{inner}}}$. \end{claim} \begin{proof} Without loss of generality we may suppose that one of the two endpoints of the arc $\mathcal{F}$ is $y=(\cos 0, \sin 0)$ and the other is $y' = (\cos t, \sin t)$. Define $\mathcal{F}_{r_{\mathsf{inner}}}$ to be the arc of the radius-$r_{\mathsf{inner}}$ circle corresponding to $\mathcal{F}$, i.e.~$\mathcal{F}_{r_{\mathsf{inner}}} = \{z: z/r_{\mathsf{inner}} \in \mathcal{F}\}$. Next, define $z$ to be the point on $\mathcal{F}_{r_{\mathsf{inner}}}$ such that the tangent at $z$ passes through $y$. (Such a point $z$ must exist because the angular measure of the arc $\mathcal{F}$ is at least $\pi/2$.) Recalling that $B(0^n, r_{\mathsf{inner}}) \subseteq K$, we have that the point $z$, the origin, and $y$ all lie in $K$.
By elementary trigonometry, it follows that the angular measure of $\mathcal{F}_{1-\kappa}$ inside $K$ is at least \begin{eqnarray} \arccos ({r_{\mathsf{inner}}}) -\arccos \bigg(\frac{r_{\mathsf{inner}}}{1-\kappa}\bigg) &\ge& \arcsin \bigg(\frac{r_{\mathsf{inner}}}{1-\kappa}\bigg) -\arcsin ({r_{\mathsf{inner}}}) \nonumber \\ &\ge& \bigg(\frac{r_{\mathsf{inner}}}{1-\kappa}\bigg) - r_{\mathsf{inner}}. \nonumber \end{eqnarray} The last inequality uses the simple fact that $\arcsin x - \arcsin y \ge x-y$ when $0\le y \le x\le 1$. Since $\frac{r_{\mathsf{inner}}}{1-\kappa} - r_{\mathsf{inner}} = \frac{\kappa}{1-\kappa} \cdot r_{\mathsf{inner}} \ge \kappa \cdot r_{\mathsf{inner}}$, we get \Cref{clm:angle-include2}. \end{proof} To prove \Cref{clm:2d-increment-new}, we consider two possibilities. The first is that there is a single arc contained in $\overline{C_{V, 1}}$ whose angular length is at least $\pi/2$; in this case we get \Cref{clm:2d-increment-new} using \Cref{clm:angle-include2}. The other possibility is that $\overline{C_{V, 1}}$ is split into arcs $\{\mathcal{F}^{(i)}\}$ of lengths $\{t_i \}$ where each $t_i \le \pi/2$. Note that the total angular measure of the arcs is $\sum t_i = 2\pi (1-p)$. For each such arc $\mathcal{F}^{(i)}$, by \Cref{clm:angle-include}, we get that the angular measure of $\mathcal{F}^{(i)}_{1-\kappa} \cap K$ is at least $\frac{t_i \kappa \cos (t_i/2)}{2}$. Thus, the total angular measure of the intersection of $K$ with all the arcs $\mathcal{F}^{(i)}_{1-\kappa}$ is at least \[ \sum_i \frac{t_i \kappa \cos(t_i/2)}{2} \ge \sum_i \frac{t_i \kappa }{4} = \frac{2\pi \kappa (1-p)}{4}, \] where the inequality holds because each $\cos(t_i/2)$ is at least $1/2.$ Translating from angular measure of $K \cap \cup_i \mathcal{F}^{(i)}_{1-\kappa}$ to $\mu_{V,1-\kappa}(B_{K,V})$, we get the stated claim. \end{proof} As with the proof of \Cref{lem:key}, we split the analysis into two cases: (i) when $\mu_{1}(C_1) \le 1/2$, and (ii) when $\mu_{1}(C_1) > 1/2$.
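As an aside (a numerical check of ours, not part of the argument), the three elementary inequalities invoked in the proofs of \Cref{clm:angle-include}, \Cref{clm:angle-include2} and \Cref{claim:raz}, namely (1) $\arccos(x) - \arccos(x+\Delta x) \ge \Delta x$, (2) $\arcsin(x) - \arcsin(y) \ge x - y$, and (3) $1 - \frac{10}{8}\beta^2 - \frac{(1-\beta)^2}{1-\beta/2} \ge \frac{\beta}{2}$ on $[0,1/2]$, can all be verified on a grid:

```python
import math

# (1) arccos(x) - arccos(x + dx) >= dx whenever 0 <= x and x + dx <= 1.
ok1 = all(
    math.acos(x) - math.acos(min(x + dx, 1.0)) >= dx - 1e-9
    for x in (i / 100.0 for i in range(101))
    for dx in (j / 100.0 for j in range(101))
    if x + dx <= 1.0
)

# (2) arcsin(x) - arcsin(y) >= x - y whenever 0 <= y <= x <= 1.
ok2 = all(
    math.asin(x) - math.asin(y) >= (x - y) - 1e-9
    for x in (i / 100.0 for i in range(101))
    for y in (j / 100.0 for j in range(101))
    if y <= x
)

# (3) 1 - (10/8) b^2 - (1-b)^2 / (1 - b/2) >= b/2 on [0, 1/2]
# (the "elementary calculus" step in the proof of the variant of Raz's lemma).
ok3 = all(
    1.0 - 1.25 * b * b - (1.0 - b) ** 2 / (1.0 - b / 2.0) >= b / 2.0 - 1e-9
    for b in (i / 1000.0 for i in range(501))
)

print(ok1, ok2, ok3)
```

The first two inequalities follow from the fact that $\arccos$ and $\arcsin$ have derivative of absolute value at least $1$ on $[0,1]$; the grid check above is only a safeguard against sign or constant errors.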
In case (i) we set $\mu_1(C_1) = \beta$ and in case (ii) we set $\mu_1(C_1) = 1-\beta$ so that in both cases $\beta \le 1/2$. We now define a two-dimensional subspace $V$ to be \emph{good} if \begin{enumerate} \item In case (i), $\beta /4 \le \mu_{V,1}(C_{V, 1}) \le 9/10$. \item In case (ii), $1/10 \le \mu_{V,1}(C_{V, 1}) \le 1-\beta/4$. \end{enumerate} Note that in both cases $\Prx_{\bV} [\bV \textrm{ is good}] \ge \beta/2$ (using \Cref{claim:raz}). Recall that \eqref{eq:ineq-Ck-1} says that \begin{eqnarray} \mu_{1-\kappa}(C_{1-\kappa}) &\geq& \mathbf{E}_{\bV}[\mu_{\bV, 1}(C_{\bV,1}) | \bV \textrm{ is not good}] \cdot \Pr[\bV \textrm{ is not good}] \nonumber \\ &+& \mathbf{E}_{\bV}[\mu_{\bV, 1-\kappa}(C_{\bV,1-\kappa}) | \bV \textrm{ is good}] \cdot \Pr[\bV \textrm{ is good}] \ \nonumber \end{eqnarray} When $V$ is good, by applying \eqref{eq:diff-reexp} and \Cref{clm:2d-increment-new}, we get \begin{eqnarray} \mu_{V,1-\kappa}(C_{V, 1-\kappa}) &\ge& \mu_{V,1}(C_{V, 1}) + \Theta(\kappa r_{\mathsf{inner}}) \ \ \textrm{in case (i);} \nonumber \\ \mu_{V,1-\kappa}(C_{V, 1-\kappa}) &\ge& \mu_{V,1}(C_{V, 1}) + \Theta(\kappa \min\{r_{\mathsf{inner}}, \beta\}) \ \ \textrm{in case (ii).} \nonumber \end{eqnarray} Using the fact that $\Prx_{\bV}[\bV \textrm{ is good}] \ge \beta/2$ and doing exactly the same calculation as at the end of the proof of \Cref{lem:key}, we get \begin{eqnarray*} \mu_{1-\kappa}(C_{1-\kappa}) &\ge& \mu_1(C_1) + \Theta (\kappa r_{\mathsf{inner}} \beta) \ \ \textrm{ in case (i);} \\ \mu_{1-\kappa}(C_{1-\kappa}) &\ge& \mu_1(C_1) + \Theta (\kappa \beta \min\{r_{\mathsf{inner}} , \beta\}) \ \ \textrm{ in case (ii).} \end{eqnarray*} Plugging in $r_{\mathsf{inner}} = r_{\mathsf{small}}/r$, we get \Cref{lem:key-general}. \section{Hermite mass at low weight levels for convex sets}~\label{sec:hermite-convex} In this section we prove lower bounds on the Hermite weight of convex sets at levels $0$, $1$ and $2$.
As mentioned in the introduction, this immediately implies a lower bound on the noise stability of convex sets at large noise rates. We begin by proving \Cref{thm:centrally-symmetric-weight}, which gives a lower bound on the Hermite weight of centrally symmetric convex sets up to level $2$. \Cref{thm:centrally-asymmetric-weight} extends this to general convex sets, though the bound is quadratically worse than for centrally symmetric convex sets. Finally, \Cref{sec:symmetric-fourier-weight-tightness} shows that \Cref{thm:centrally-symmetric-weight} is tight up to polylogarithmic factors. Throughout the section we can and do assume that $n$ is at least some sufficiently large absolute constant. \subsection{Hermite mass at low weight levels for centrally symmetric convex sets} \label{sec:hermite-centrally-symmetric} In this subsection we prove \Cref{thm:centrally-symmetric-weight}. This will follow as an immediate consequence of the following two lemmas, by instantiating \Cref{lem:constant-bias-Hermite-weight} with $\delta = \frac{c}{\sqrt{n}}$ where $c$ is the constant appearing in \Cref{lem:unbias-Hermite-weight}: \begin{lemma}~\label{lem:constant-bias-Hermite-weight} Let $\delta>0$ and $K \subseteq \mathbb{R}^n$. If $|\vol(K) - 1/2| \ge \delta$, then $\mathsf{W}^{= 0}[K] \ge 4 \delta^2$. \end{lemma} \begin{lemma}~\label{lem:unbias-Hermite-weight} There exists an absolute constant $c>0$ such that the following holds: Let $K \subseteq \mathbb{R}^n$ be a centrally symmetric convex set. If $|\vol(K) - 1/2| \le \frac{c}{\sqrt{n}}$, then $\mathsf{W}^{=2}[K] = \Omega(1/n)$. \end{lemma} \Cref{lem:constant-bias-Hermite-weight} follows immediately from the fact that $\mathsf{W}^{= 0}[K] = 4 \left| \vol(K) - 1/2 \right|^2$. In the rest of this subsection we prove \Cref{lem:unbias-Hermite-weight}. To do this, we recall the definitions of the functions $r(\cdot)$ and $\beta(\cdot)$ from \Cref{sec:wl-given-kgl}.
Namely, for $r: [0,1) \rightarrow [0, \infty)$, $r(\nu)$ is defined as \[ \Prx_{\br \sim \chi(n)} [\br \leq r(\nu)] = \nu, \] and $\beta: [0,1) \rightarrow [0,1]$ is defined as \[ \beta(\nu) = \Prx_{x \in \mathbb{S}^{n-1}_{r(\nu)}} [x \in K]. \] Let us now define the function $\overline{\beta} : [0,1) \rightarrow [-1,1]$ as $\overline{\beta}(\nu) = \beta(\nu) -\vol(K)$. By performing an arbitrarily small perturbation of $K$, we may assume that $\beta(\cdot)$ is a continuous function; since $\overline{\beta}$ is non-increasing, there exists a value $\nu_\ast \in [0,1)$ such that $\overline{\beta}(\nu_\ast)=0$. Let us define $r_{\ast} := r(\nu_\ast)$. Now we define $p_{r_\ast}: [0,\infty) \rightarrow \mathbb{R}$ and $\overline{p}_{r_\ast}: \mathbb{R}^n \rightarrow \mathbb{R}$ as \[ p_{r_\ast}(r) := r_\ast^2 -r^2 \quad \quad \text{and} \quad \quad \overline{p}_{r_\ast}(x) := p_{r_\ast}(\Vert x \Vert_2). \] The next claim uses our Kruskal-Katona theorem for centrally symmetric convex sets (\Cref{lem:key}) to prove upper and lower bounds on $r_\ast$: \begin{claim}~\label{clm:r-ast-bound1} There exists an absolute constant $c>0$ such that the following holds: Let $K$ be a symmetric convex set such that $|\vol(K) - 1/2| \le c$. Then, \[ \frac{n}{4} \le r_\ast^2 \le 4n. \] \end{claim} \begin{proof} We will show $r_\ast \le 2\sqrt{n}$. The other direction is similar. Towards a contradiction, assume that $r_\ast > 2\sqrt{n}$. Define $r_{\ast,\mathsf{outer}} := \sqrt{2n}$. Since $r_{\ast,\mathsf{outer}} < r_\ast/\sqrt{2}$, by applying \Cref{lem:key} we get that \begin{equation}\label{eq:inc-r-ast-r} \alpha_K(r_{\ast,\mathsf{outer}}) \ge \alpha_K(r_\ast) + \kappa = \vol(K) + \kappa, \end{equation} for an absolute constant $\kappa>0$. 
Next, we have \begin{eqnarray} \vol(K) = \int_{r=0}^{\infty} \chi(n,r) \alpha_K(r) dr \ge \int_{r=0}^{r_{\ast,\mathsf{outer}}} \chi(n,r) \alpha_K(r) dr \ge \alpha_K(r_{\ast,\mathsf{outer}}) \cdot \int_{r=0}^{r_{\ast,\mathsf{outer}}} \chi(n,r) dr, \label{eq:bias-calc1} \end{eqnarray} where the last inequality uses that $\alpha_K(\cdot)$ is non-increasing. By applying \Cref{lem:johnstone}, we have \begin{equation}~\label{eq:kappa-gap2} \int_{r=0}^{r_{\ast,\mathsf{outer}}} \chi(n,r) dr = 1 -\Prx_{\bg \sim N(0,1)^n}[\Vert \bg \Vert_2 \ge r_{\ast, \mathsf{outer}}] \ge 1- e^{-\frac{3n}{64}}. \end{equation} Since as stated earlier we may assume that $n \ge \frac{64}{3} \ln (4/\kappa)$, the right hand side is at least $1-\frac{\kappa}{4}$. Plugging this back into \eqref{eq:bias-calc1} and applying \Cref{eq:inc-r-ast-r}, we get \begin{eqnarray}~\label{eq:kappa-gap1} \vol(K) \ge \alpha_K(r_{\ast,\mathsf{outer}}) \cdot (1-\kappa/4) \ge (\vol(K) + \kappa) \cdot (1-\kappa/4) \ge \vol(K) + \frac{\kappa}{2}. \end{eqnarray} This contradiction implies that we must have $r_{\ast} \le 2\sqrt{n}$. As stated earlier, the proof of the other direction is similar. \end{proof} The main ingredient in the proof of \Cref{lem:unbias-Hermite-weight} is the following: \begin{claim}~\label{lem:integral-lower-bound} There is an absolute constant $c>0$ such that the following holds: Let $K \subset \R^n$ be a symmetric convex body such that $|\vol(K) -1/2| \le c$. Then \[ \Ex_{\bg \sim N(0,1)^n}[(K(\bg)-\vol(K)) \cdot {\overline{p}_{r_\ast}}(\bg)] = \Omega(1). \] \end{claim} \begin{proof} Applying \Cref{clm:Hermite-deg2}, we have that \begin{equation}~\label{eq:expect-equiv-beta1} \Ex_{\bg \sim N(0,1)^n}[(K(\bg)-\vol(K)) \cdot {\overline{p}_{r_\ast}}(\bg)] = \int_{\nu=0}^{1} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu. \end{equation} A crucial observation is that the value of $\overline{\beta}(\nu)$ is positive (respectively, negative) only if $\nu < \nu_\ast$ (respectively, $\nu > \nu_\ast$). 
Similarly, $p_{r_\ast}(r(\nu))$ is positive if and only if $r(\nu) \le r_\ast$, which holds if and only if $\nu \le \nu_\ast$. Thus we have that \begin{equation}~\label{eq:c-positive1} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) \ge 0 \text{~for all~} \nu \in [0,1]. \end{equation} We define the values $r_{\ast, \downarrow}< r_\ast$ and $r_{\ast, \uparrow}>r_\ast$ as \begin{equation}~\label{eq:r-star-up-down} r_{\ast, \downarrow} := r_\ast \cdot \bigg( 1 - \frac{1}{10 \sqrt{n}} \bigg) \quad \quad \textrm{and} \quad \quad r_{\ast, \uparrow} := r_\ast \cdot \bigg( 1 + \frac{1}{10 \sqrt{n}} \bigg). \end{equation} We also choose $\nu_{\ast,\downarrow}$ and $\nu_{\ast,\uparrow}$ to be such that $r(\nu_{\ast,\downarrow}) = r_{\ast,\downarrow}$ and $r(\nu_{\ast,\uparrow}) = r_{\ast,\uparrow}$. By \Cref{lem:key}, it follows that \begin{align} \alpha_K(r_{\ast,\downarrow} ) &\ge \alpha_K(r_{\ast}) + \frac{\Theta(1)}{\sqrt{n}} =\vol(K) + \frac{\Theta(1)}{\sqrt{n}} \textrm{ and } \nonumber \\ \alpha_K(r_{\ast,\uparrow} ) &\le \alpha_K(r_{\ast}) - \frac{\Theta(1)}{\sqrt{n}} = \vol(K) - \frac{\Theta(1)}{\sqrt{n}}, \label{eq:alpha-k-bounds} \end{align} where $\Theta(1)$ is an absolute positive constant (independent of $c$ in the statement of the claim). Recalling the definition of $\overline{\beta}$, this implies that \begin{equation}~\label{eq:c-gap1} \overline{\beta}(\nu_{\ast, \downarrow}) \ge \frac{{c_1}}{\sqrt{n}} \quad \quad \textrm{and} \quad \quad \overline{\beta}(\nu_{\ast, \uparrow}) \leq -\frac{{c_1}}{\sqrt{n}} \end{equation} {for an absolute constant $c_1>0$.} Likewise, \Cref{clm:r-ast-bound1} implies that $r_\ast = \Theta(\sqrt{n})$.
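For concreteness, here is the elementary computation behind the first bound in the next display (our own elaboration; it uses only \eqref{eq:r-star-up-down}, the monotonicity of $r(\cdot)$, and the bound $r_\ast^2 \ge n/4$ from \Cref{clm:r-ast-bound1}): for $\nu \le \nu_{\ast,\downarrow}$ we have $r(\nu) \le r_{\ast,\downarrow}$, and hence

```latex
\[
p_{r_\ast}(r(\nu)) \;=\; r_\ast^2 - r(\nu)^2 \;\ge\; r_\ast^2 - r_{\ast,\downarrow}^2
\;=\; r_\ast^2 \left( \frac{2}{10\sqrt{n}} - \frac{1}{100 n} \right)
\;\ge\; \frac{r_\ast^2}{10\sqrt{n}} \;\ge\; \frac{\sqrt{n}}{40}.
\]
```

The bound for $\nu \ge \nu_{\ast,\uparrow}$ is symmetric.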
Using this, we have that \begin{equation}~\label{eq:r-gap1} \text{for all~} \nu \le \nu_{\ast, \downarrow}, \ \ p_{r_{\ast}}(r(\nu)) \ge {c_2}\sqrt{n} \quad \quad \textrm{and} \quad \quad \text{for all~} \nu \ge \nu_{\ast, \uparrow}, \ \ p_{r_{\ast}}(r(\nu)) \le -{c_2}\sqrt{n} \end{equation} {for an absolute constant $c_2>0$.} We can now infer that \begin{align} \Ex_{\bg \sim N(0,1)^n}[(K(\bg) -\vol(K)) \cdot \overline{p}_{r_\ast}(\bg)] &= \int_{\nu=0}^{1} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu \ \tag{using \eqref{eq:expect-equiv-beta1}} \nonumber \\ &\ge \int_{\nu=0}^{\nu_{\ast, \downarrow}} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu +\int_{\nu_{\ast, \uparrow}}^1 \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu \ \tag{using \eqref{eq:c-positive1}}\nonumber \\ &\ge {c_1 c_2}\nu_{\ast, \downarrow} + {c_1 c_2}(1-\nu_{\ast, \uparrow}), \label{eq:lower-bound-K-f-corr1} \end{align} where the last line is using \eqref{eq:c-gap1} and \eqref{eq:r-gap1}. Now we observe that combining \Cref{clm:r-ast-bound1} and the definition of $r_{\ast, \downarrow}$ and $r_{\ast, \uparrow}$, we have that \[ r_{\ast}-r_{\ast,\downarrow} \le \frac{1}{5} \ \ \textrm{and} \ \ r_{\ast,\uparrow} -r_{\ast}\le \frac{1}{5}, \quad \quad \text{and hence} \quad \quad r_{\ast,\uparrow} - r_{\ast,\downarrow} \le \frac25. \] By \Cref{fact:chi-squared-2}, this implies that \[ \nu_{\ast,\uparrow} - \nu_{\ast,\downarrow} \le \frac25. \] It follows that \eqref{eq:lower-bound-K-f-corr1} is at least ${c_1 c_2}\cdot (3/5),$ which establishes the claim. \end{proof} Now it is straightforward to prove \Cref{lem:unbias-Hermite-weight}: \begin{proofof}{\Cref{lem:unbias-Hermite-weight}} For notational convenience let us define the function $K'(g) := K(g)-\vol(K)$. We first observe that $\Ex_{\bg \sim N(0,1)^n}[K'(\bg)]=0$, which means that the constant Hermite coefficient $\widetilde{K'}(0^n)$ is zero.
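Before proceeding, we record the routine computation behind the variance fact used next (a standard check, stated here for completeness): for $\bg \sim N(0,1)^n$, using independence of the coordinates and $\Ex[\bg_1^4]=3$,

```latex
\[
\Var\big[\Vert \bg \Vert_2^2\big]
\;=\; \sum_{i=1}^n \Var[\bg_i^2]
\;=\; n \left( \Ex[\bg_1^4] - \Ex[\bg_1^2]^2 \right)
\;=\; n\,(3-1) \;=\; 2n,
\]
```

and $\overline{p}_{r_\ast}(\bg) = r_\ast^2 - \Vert \bg \Vert_2^2$ differs from $-\Vert \bg \Vert_2^2$ by a constant, so it has the same variance.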
We further observe that $\Var[\overline{p}_{r_\ast}(\bg)]$ is equal to the variance of the chi-squared distribution with $n$ degrees of freedom, which is $2n$. Since $\overline{p}_{r_\ast}(x) = (r_\ast^2 - (x_1^2 + \ldots + x_n^2))$ is a linear combination of degree-$0$ and degree-$2$ Hermite polynomials, the fact that $\Var[\overline{p}_{r_\ast}(\bg)]=2n$ can be rephrased in Hermite terms as $\sum_{|S|=2} \widetilde{\overline{p}_{r_\ast}}(S)^2=2n.$ We thus get that \begin{align} \Ex_{\bg \sim N(0,1)^n}[K' (\bg) \cdot {\overline{p}_{r_\ast}}] &= \sum_{|S| =0,2} \widetilde{K'}(S) \widetilde{\overline{p}_{r_\ast}}(S)\tag{Plancherel} \nonumber\\ &= \sum_{|S| = 2} \widetilde{K'}(S) \widetilde{\overline{p}_{r_\ast}}(S) \tag{since $\widetilde{K'}(0^n)=0$} \nonumber\\ &\leq \sqrt{\mathsf{W}^{= 2}[K] \cdot \sum_{|S| = 2} \widetilde{\overline{p}_{r_\ast}}(S)^2} = \sqrt{\mathsf{W}^{= 2}[K] \cdot 2n}. \tag{Cauchy-Schwarz} \nonumber \end{align} Recalling that $K' (\bg)=K(\bg)-\vol(K)$, \Cref{lem:integral-lower-bound} shows that the left-hand side is $\Omega(1)$; squaring and dividing by $2n$ gives $\mathsf{W}^{= 2}[K] = \Omega(1/n)$, finishing the proof. \end{proofof} \subsection{Hermite mass at low weight levels for general convex sets}~\label{sec:hermite-centrally-asymmetric} In this section we prove \Cref{thm:centrally-asymmetric-weight}, which will be a consequence of the following three lemmas. The first is \Cref{lem:constant-bias-Hermite-weight} which we repeat below for convenience: \begin{replemma}{lem:constant-bias-Hermite-weight} Let $\delta>0$ and $K \subseteq \mathbb{R}^n$. If $|\vol(K) - 1/2| \ge \delta$, then $\mathsf{W}^{= 0}[K] \ge 4 \delta^2$. \end{replemma} \begin{lemma}~\label{lem:W1-large} There exist positive constants {$0 < \tau < 10^{-4}, c\geq \tau$} such that the following holds: Let $K \subset \mathbb{R}^n$ be a convex set such that $|\vol(K) - 1/2| \le c/n$ and there is a point $x \not \in K$ with $\Vert x \Vert_2 \le \tau$. Then $\mathsf{W}^{\le 1}[K] \ge \frac{1}{18}$.
\end{lemma} \begin{lemma}\label{lem:W2-large-asym} {For the constants $c,\tau$ in \Cref{lem:W1-large}} the following holds: Let $K \subset \mathbb{R}^n$ be a convex set such that $|\vol(K)-1/2| \leq c/n$ and $B(0^n, \tau) \subseteq K$. Then $\mathsf{W}^{\le 2}[K] =\Omega(1/n^2)$. \end{lemma} (As the above three lemmas suggest, the ideas in this section are reminiscent of those in \Cref{sec:weak-learner-general-convex}.) We first prove \Cref{lem:W1-large} followed by \Cref{lem:W2-large-asym}. \begin{proofof}{\Cref{lem:W1-large}} The proof of this lemma is quite similar to the proof of \Cref{lem:halfspace-learning}. In particular, exactly as in \Cref{lem:halfspace-learning}, using the supporting hyperplane theorem we get that there is a halfspace $s(x) = \sign(\ell \cdot x -\theta)$ such that \begin{enumerate} \item $K \subseteq s^{-1}(1)$; \item $\ell$ is a unit vector and $|\theta| \le \tau$. \end{enumerate} Now, {using $\tau \le c$}, following the same calculation that gave \eqref{eq:false-pos}, we get that \begin{equation}~\label{eq:false-pos-2} \Prx_{\bg \sim N(0,1)^n} [K(\bg) = s(\bg) ] \ge 1- 2\tau - \frac{2c}{n} \ge 1- 4 \tau, \end{equation} {where the last inequality holds for $n$ sufficiently large}. Next, we have \begin{eqnarray}\label{eq:bounding-deg1-corr} \Ex_{\bg \sim N(0,1)^n} [K(\bg) \cdot (\ell \cdot \bg - \theta)] = \Ex_{\bg \sim N(0,1)^n} [s(\bg) \cdot (\ell \cdot \bg - \theta)] - \Ex_{\bg \sim N(0,1)^n} [h(\bg) \cdot (\ell \cdot \bg - \theta)], \end{eqnarray} where $h: \mathbb{R}^n \rightarrow \{-2,0,2\}$ is defined as $h(x) := s(x) - K(x)$. We now bound the two expectations in \eqref{eq:bounding-deg1-corr} individually.
For the first, we have that \begin{equation}~\label{eq:Khintchine} \Ex_{\bg \sim N(0,1)^n} [s(\bg) \cdot (\ell \cdot \bg - \theta)] = \Ex_{\bg \sim N(0,1)^n} [|(\ell \cdot \bg - \theta)| ] = \Ex_{\bg_1 \sim N(0,1)}[|\bg_1 - \theta|] \ge \Ex_{\bg_1 \sim N(0,1)} [| \bg_1| ] =\sqrt{\frac{2}{\pi}}. \end{equation} (The second equality uses that $\ell$ is a unit vector, so $\ell \cdot \bg$ is distributed as a standard normal; the inequality holds since $\Ex_{\bg_1 \sim N(0,1)}[|\bg_1 - \theta|]$ is minimized at $\theta=0$.) On the other hand, by Cauchy-Schwarz, we have \begin{eqnarray}~\label{eq:CS-bound} \Ex_{\bg \sim N(0,1)^n} [h(\bg) \cdot (\ell \cdot \bg - \theta)] &\le& \sqrt{\Ex_{\bg \sim N(0,1)^n}[h^2(\bg)]} \cdot \sqrt{\Ex_{\bg \sim N(0,1)^n}[(\ell \cdot \bg - \theta)^2]}. \end{eqnarray} Since $\Pr_{\bg \sim N(0,1)^n} [K(\bg) \not = s(\bg)] \le 4 \tau$ (by \Cref{eq:false-pos-2}) and $|h|=2$ when $s \neq K$, we have that $\Ex_{\bg \sim N(0,1)^n}[h^2(\bg)] \le 16 \tau$. For the other expectation on the right-hand side of \eqref{eq:CS-bound}, since $\ell$ is a unit vector we have \begin{equation} \label{eq:square-lin} \Ex_{\bg \sim N(0,1)^n}[(\ell \cdot \bg - \theta)^2] = 1+\theta^2. \end{equation} Plugging this back into \eqref{eq:CS-bound}, {observing that $|\theta|\le \tau \le 1$}, we get that \[ \Ex_{\bg \sim N(0,1)^n} [h(\bg) \cdot (\ell \cdot \bg - \theta)] \le 4\sqrt{\tau} \cdot \sqrt{1+\theta^2} \le 8 \sqrt{\tau}. \] Using this together with \eqref{eq:Khintchine} in \eqref{eq:bounding-deg1-corr}, we obtain that \begin{equation} \label{eq:onethird} \Ex_{\bg \sim N(0,1)^n} [K(\bg) \cdot (\ell \cdot \bg - \theta)] \ge \sqrt{\frac{2}{\pi}} - 8 \sqrt{\tau} \ge \frac{1}{3}, \end{equation} where the last inequality uses $\tau \le 10^{-4}$.
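As a quick numerical sanity check of that last step (our own arithmetic, using $\sqrt{2/\pi} \ge 0.79$ and $\tau \le 10^{-4}$):

```latex
\[
\sqrt{\frac{2}{\pi}} - 8\sqrt{\tau} \;\ge\; 0.79 - 8\sqrt{10^{-4}} \;=\; 0.79 - 0.08 \;=\; 0.71 \;>\; \frac{1}{3}.
\]
```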
Next, we observe that by the linearity of $\ell \cdot \bg - \theta$, Plancherel's identity, and Cauchy-Schwarz, we get that (writing $v(g)$ for the function $\ell \cdot g - \theta$) \begin{align} \Ex_{\bg \sim N(0,1)^n}[K (\bg) \cdot (\ell \cdot \bg - \theta)] = \sum_{|S| \leq 1} \widetilde{K}(S) \widetilde{v}(S) &\leq \sqrt{\mathsf{W}^{\le 1}[K]} \cdot \sqrt{\sum_{|S| \le 1} \widetilde{v}(S)^2} \nonumber\\ &\le \sqrt{\mathsf{W}^{\le 1}[K]} \cdot \sqrt{\Ex_{\bg \sim N(0,1)^n}[(\ell \cdot \bg - \theta)^2]}.\label{eq:plancherel} \end{align} Finally, we can combine the above ingredients to get that \begin{eqnarray}\nonumber \mathsf{W}^{\le 1}[K] \ge \frac{\Ex_{\bg \sim N(0,1)^n}[K (\bg) \cdot (\ell \cdot \bg - \theta)]^2}{\Ex_{\bg \sim N(0,1)^n}[ (\ell \cdot \bg - \theta)^2]} \ge \frac{1}{9(1+\theta^2)} \ge \frac{1}{18}, \end{eqnarray} where the first inequality is from \Cref{eq:plancherel}, the second is from \Cref{eq:square-lin} and \Cref{eq:onethird}, and the final inequality again uses $|\theta| \le 1$. This finishes the proof of \Cref{lem:W1-large}. \end{proofof} \begin{proofof}{\Cref{lem:W2-large-asym}} The proof of this lemma is essentially the same as the proof of \Cref{lem:unbias-Hermite-weight}; the main difference is that we apply \Cref{lem:key-general} instead of \Cref{lem:key}. In particular, we define the functions $r(\cdot)$, $\beta(\cdot)$, $\overline{\beta}(\cdot)$, as well as the quantities $\nu_{\ast}$ and $r_{\ast}$, exactly as in the proof of \Cref{lem:unbias-Hermite-weight} (recall that all these quantities are defined right before \Cref{clm:r-ast-bound1}). The following claim is analogous to \Cref{clm:r-ast-bound1}: \begin{claim}~\label{clm:r-ast-asymmetric} {For the constants $c,\tau$ in \Cref{lem:W1-large}} the following holds: Let $K$ be a convex set such that $|\vol(K) - 1/2| \le c$ and ${B}(0^n, \tau) \subseteq K$.
Then \[ \frac{n}{4} \le r_\ast^2 \le 4n. \] \end{claim} \begin{proof} The proof is essentially the same as the proof of \Cref{clm:r-ast-bound1}, so we just indicate the changes vis-a-vis the proof of \Cref{clm:r-ast-bound1}. Similar to \Cref{clm:r-ast-bound1}, we will show that $r_\ast \le 2\sqrt{n}$, and again the other direction is similar. Towards a contradiction, assume that $r_\ast > 2\sqrt{n}$ and define $r_{\ast,\mathsf{outer}} = \sqrt{2n}$. Then, by applying \Cref{lem:key-general}, we get that \begin{equation}\label{eq:inc-r-ast-r1} \alpha_K(r_{\ast,\mathsf{outer}}) \ge \alpha_K(r_\ast) + \kappa = \vol(K) + \kappa \end{equation} where $\kappa = \Theta(\tau/\sqrt{n})$. Note that in contrast with \Cref{eq:inc-r-ast-r}, in which $\kappa$ is an absolute constant, here $\kappa$ is $\Theta(1/\sqrt{n})$. We observe that \eqref{eq:bias-calc1} and \eqref{eq:kappa-gap2} both continue to hold. Further, as long as $n$ is sufficiently large, $n \ge \frac{64}{3} \ln (4/\kappa)$ continues to hold. Thus, exactly as in \eqref{eq:kappa-gap1}, we get that \begin{equation} \vol(K) \ge \alpha_K(r_{\ast,\mathsf{outer}}) \cdot (1-\kappa/4) \ge (\vol(K) + \kappa) \cdot (1-\kappa/4) \ge \vol(K) + \frac{\kappa}{2}. \end{equation} This contradiction implies that $r_{\ast} \le 2\sqrt{n}$. As in the proof of \Cref{clm:r-ast-bound1}, the proof of the other direction is similar. \end{proof} The following claim is analogous to \Cref{lem:integral-lower-bound}: \begin{claim}~\label{clm:integral-lowerb} {For the constants $c,\tau$ in \Cref{lem:W1-large}} the following holds: Let $K$ be a convex set such that $|\vol(K) - 1/2| \le c$ and ${B}(0^n, \tau) \subseteq K$. Then \[ \Ex_{\bg \sim N(0,1)^n}[(K(\bg) - \vol(K)) \cdot \overline{p}_{r_\ast}(\bg)] \ge \Theta(n^{-1/2}). \] \end{claim} \begin{proof} The proof is essentially the same as the proof of \Cref{lem:integral-lower-bound}, so we just indicate the changes vis-a-vis the proof of \Cref{lem:integral-lower-bound}. 
Equations \eqref{eq:expect-equiv-beta1} and \eqref{eq:c-positive1} hold exactly as before. We now define $r_{\ast, \downarrow}$ and $r_{\ast, \uparrow}$ as in \eqref{eq:r-star-up-down}, i.e. \[ r_{\ast, \downarrow} = r_\ast \cdot \bigg( 1 - \frac{1}{10 \sqrt{n}} \bigg) \ \textrm{and} \ r_{\ast, \uparrow} = r_\ast \cdot \bigg( 1 + \frac{1}{10 \sqrt{n}} \bigg). \] As before, we define $\nu_{\ast,\downarrow}$ and $\nu_{\ast,\uparrow}$ to be such that $r(\nu_{\ast,\downarrow}) = r_{\ast,\downarrow}$ and $r(\nu_{\ast,\uparrow}) = r_{\ast,\uparrow}$. Now applying \Cref{lem:key-general}, we get that \begin{align} \alpha_K(r_{\ast,\downarrow} ) &\ge \alpha_K(r_{\ast}) + \frac{\Theta(\tau)}{{n}} =\vol(K) + \frac{\Theta(\tau)}{{n}} \textrm{ and } \nonumber \\ \alpha_K(r_{\ast,\uparrow} ) &\le \alpha_K(r_{\ast}) - \frac{\Theta(\tau)}{{n}} = \vol(K) - \frac{\Theta(\tau)}{{n}}.\label{eq:alpha-k-bounds1} \end{align} Note that in contrast to \eqref{eq:alpha-k-bounds}, where the gap between $\alpha_K(r_{\ast,\downarrow} )$ (or $\alpha_K(r_{\ast,\uparrow} )$) and $\alpha_K(r_{\ast})$ was $\Theta(1/\sqrt{n})$, now the gap is only $\Theta(1/n)$. This implies that \begin{equation}~\label{eq:c-gap2} \overline{\beta}(\nu_{\ast, \downarrow}) \ge \frac{{c_1}\tau}{{n}} \ \textrm{and} \ \overline{\beta}(\nu_{\ast, \uparrow}) \leq -\frac{{c_1\tau}}{{n}} \end{equation} {for an absolute constant $c_1>0$.} Now, by applying \Cref{clm:r-ast-asymmetric}, we get that \begin{equation}~\label{eq:r-gap2} \text{for all~} \nu \le \nu_{\ast, \downarrow}, \ \ p_{r_{\ast}}(r(\nu)) \ge {c_2}\sqrt{n} \quad \quad \textrm{and} \quad\quad \text{for all~} \nu \ge \nu_{\ast, \uparrow}, \ \ p_{r_{\ast}}(r(\nu)) \le -{c_2}\sqrt{n} \end{equation} {for an absolute constant $c_2>0$.} This is exactly the same as \eqref{eq:r-gap1} except that we applied \Cref{clm:r-ast-asymmetric} to get this instead of \Cref{clm:r-ast-bound1} as was done in the proof of \Cref{lem:integral-lower-bound}.
As before, we can now infer that \begin{align} \Ex_{\bg \sim N(0,1)^n}[(K(\bg) -\vol(K)) \cdot \overline{p_{r_\ast}}(\bg)] &=\int_{\nu=0}^{1} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu \ \tag{using \eqref{eq:expect-equiv-beta1}} \nonumber \\ &\ge \int_{\nu=0}^{\nu_{\ast, \downarrow}} \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu +\int_{\nu_{\ast, \uparrow}}^1 \overline{\beta}(\nu) p_{r_\ast}(r(\nu)) d\nu \ \tag{using \eqref{eq:c-positive1}}\nonumber \\ &\ge \frac{{c_1 c_2}\tau}{\sqrt{n}} \nu_{\ast, \downarrow} + \frac{{c_1 c_2}\tau}{\sqrt{n}} (1-\nu_{\ast, \uparrow}), \label{eq:lower-bound-K-f-corr2} \end{align} where the last line is using \eqref{eq:c-gap2} and \eqref{eq:r-gap2}. Now we observe that combining \Cref{clm:r-ast-asymmetric} and the definition of $r_{\ast, \downarrow}$ and $r_{\ast, \uparrow}$, as before we have that \[ r_{\ast}-r_{\ast,\downarrow} \le \frac{1}{5} \ \ \textrm{and} \ \ r_{\ast,\uparrow} -r_{\ast}\le \frac{1}{5} \ \Rightarrow r_{\ast,\uparrow} - r_{\ast,\downarrow} \le \frac25, \] which implies (using \Cref{fact:chi-squared-2}) that \[ \nu_{\ast,\uparrow} - \nu_{\ast,\downarrow} \le \frac25. \] Plugging the above into \eqref{eq:lower-bound-K-f-corr2}, \Cref{clm:integral-lowerb} is proved. \end{proof} The rest of the proof of \Cref{lem:W2-large-asym} follows exactly the lines of the proof of \Cref{lem:unbias-Hermite-weight}. As before the polynomial $\overline{p}_{r_\ast}(x) = (r_\ast^2 - (x_1^2 + \ldots + x_n^2))$ is a linear combination of degree-$0$ and $2$ Hermite polynomials, $\Ex[(K(\bg) - \vol(K)) ] =0$, and $\mathsf{Var}( \overline{p}_{r_\ast}(\bg)) = 2n$, so the exact same argument as before, but now using \Cref{clm:integral-lowerb} instead of \Cref{lem:integral-lower-bound}, finishes the proof. 
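Spelling out that final quantitative step (a sketch of our own, mirroring the end of the proof of \Cref{lem:unbias-Hermite-weight}): by Plancherel and Cauchy-Schwarz, exactly as before,

```latex
\[
\Theta(n^{-1/2}) \;\le\; \Ex_{\bg \sim N(0,1)^n}\big[(K(\bg)-\vol(K)) \cdot \overline{p}_{r_\ast}(\bg)\big]
\;\le\; \sqrt{\mathsf{W}^{\le 2}[K] \cdot 2n}.
\]
```

Squaring both sides and dividing by $2n$ yields $\mathsf{W}^{\le 2}[K] = \Omega(1/n^2)$, as \Cref{lem:W2-large-asym} claims.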
\end{proofof} \subsection{\Cref{thm:centrally-symmetric-weight} is almost tight} \label{sec:symmetric-fourier-weight-tightness} Recall that \Cref{thm:centrally-symmetric-weight} says that any centrally symmetric convex body $K$ (viewed as a function to $\bits$) has $\mathrm{W}^{\leq 2}[K] \geq {\Omega(1/n)}.$ In this section we show that this lower bound is best possible up to polylogarithmic factors: \begin{fact} \label{fact:cube} There is a centrally symmetric convex body $K$ (in fact, an intersection of $2n$ halfspaces) in $\R^n$ which, viewed as a function to $\bits$, has $\mathrm{W}^{\leq 2}[K] \leq O({\frac {\log^2 n} {n}}).$ \end{fact} \begin{proof} The body $K$ is simply an origin-centered axis-aligned cube of size chosen so that $\vol(K)=1/2$ (and hence the constant Hermite coefficient $\tilde{K}(0^n)$ is precisely $0$). In more detail, let $c =c(n) > 0$ be the unique value such that \begin{equation} \label{eq:def-of-c} \Prx_{\bg \sim N(0,1)}[|\bg|\leq c] = (1/2)^{1/n} = 1 - {\frac {\Theta(1)}{n}} \end{equation} (so by standard bounds on the tails of the Gaussian distribution we have $c = \Theta(\sqrt{\log n})$). Let $a: \R \to \{0,1\}$ be the indicator function of the interval $[-c,c]$, so $a(t) := \mathbf{1}[|t| \leq c]$, and let $K_1: \R^n \to \{0,1\}$ be the indicator function of the corresponding $n$-dimensional cube, so \[ K_1(x_1,\dots,x_n) = \prod_{i=1}^n a(x_i) \] and $K: \R^n \to \bits$ (the $\bits$-valued version of $K_1$) is $K :=2K_1-1.$ We have that $\tilde{K}(0^n) = 2 \tilde{K_1}(0^n) - 1$ and $\tilde{K}(\overline{i}) = 2\tilde{K_1}(\overline{i})$ for every $\overline{i} \in \N^n \setminus \{0^n\}$, so up to constant factors it suffices to analyze the Hermite spectrum of $K_1.$ Since $K_1$ has such a simple structure (product of univariate functions $a(x_1),\dots,a(x_n)$) this is happily simple to do; details are below. By construction we have that $\tilde{K_1}(0^n) = 1/2$ and hence $\tilde{K}(0^n)=0$ as desired.
Since $a$ is an even function we have $\tilde{a}(1)=0$ and hence $\mathrm{W}^{=1}(K_1)=0$. Since $\tilde{a}(1)=0$ the only nonzero degree-2 Hermite coefficients of $\tilde{K_1}$ are the coefficients indexed by $2e_i$, $i=1,\dots,n$, all of which are the same, so $\mathrm{W}^{= 2}(K_1)$ is equal to \begin{equation} \label{eq:goal} \mathrm{W}^{= 2}(K_1) = n \cdot \left( \tilde{a}(0)^{n-1} \cdot \tilde{a}(2)\right)^2 = n \cdot \left( \left( \Ex_{\bg \sim N(0,1)}[a(\bg)]\right)^{n-1} \cdot \tilde{a}(2) \right)^2 = \Theta(n) \cdot \tilde{a}(2)^2. \end{equation} Recalling that the degree-2 univariate Hermite basis polynomial is $h_2(x) = {\frac {x^2 - 1}{\sqrt{2}}}$, we have that \begin{align} \tilde{a}(2) &= \Ex_{\bg \sim N(0,1)}[ a(\bg) h_2(\bg)] = {\frac 1 {\sqrt{2}}} \cdot \Ex_{\bg \sim N(0,1)}[a(\bg)(\bg^2 - 1)]\nonumber \\ &= {\frac 1 {\sqrt{2}}} \int_{-c}^c {\frac {e^{-x^2/2}} {\sqrt{2\pi}}} (x^2 - 1) dx \nonumber \\ &= -{\frac {c} {\sqrt{\pi}}}\, e^{-c^2/2}.\label{eq:expression-for-degree-2-coefficient} \end{align} We now recall the following tail bound on the normal distribution (Equation~2.58 of \cite{TAILBOUND}): \begin{equation} \label{eq:normal-tail} \phi(t) \left({\frac 1 t} - {\frac 1 {t^3}} \right) \leq \Prx_{\bg \sim N(0,1)}[\bg \geq t] \leq \phi(t) \left({\frac 1 t} - {\frac 1 {t^3}} + {\frac 3 {t^5}}\right), \end{equation} where $\phi(t) = {\frac 1 {\sqrt{2 \pi}}} e^{-t^2/2}$ is the density function of $N(0,1)$. Combining \Cref{eq:normal-tail}, \Cref{eq:expression-for-degree-2-coefficient} and \Cref{eq:def-of-c} we get that $|\tilde{a}(2)| = \Theta({\frac {\log n} n})$, which establishes the claimed fact by \Cref{eq:goal}.
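For the reader's convenience, here is how the pieces combine (a sketch with constants suppressed; it uses only \eqref{eq:def-of-c}, \eqref{eq:expression-for-degree-2-coefficient} and \eqref{eq:normal-tail}): by symmetry of the Gaussian,

```latex
\[
\Prx_{\bg \sim N(0,1)}[\bg \ge c] \;=\; \frac{1 - (1/2)^{1/n}}{2} \;=\; \Theta\!\left(\frac{1}{n}\right),
\quad\text{while \eqref{eq:normal-tail} gives}\quad
\Prx_{\bg \sim N(0,1)}[\bg \ge c] \;=\; \Theta\!\left(\frac{e^{-c^2/2}}{c}\right),
\]
```

so $e^{-c^2/2} = \Theta(c/n)$, and hence $|\tilde{a}(2)| = \Theta\big(c\, e^{-c^2/2}\big) = \Theta(c^2/n) = \Theta({\frac {\log n} n})$ since $c = \Theta(\sqrt{\log n})$; the claimed bound $O({\frac {\log^2 n} n})$ then follows by multiplying $\tilde{a}(2)^2$ by $\Theta(n)$ as in \eqref{eq:goal}.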
\end{proof} \section{Introduction} \label{sec:intro} Several results in Boolean function analysis and computational learning theory suggest an analogy between convex sets in Gaussian space and monotone Boolean functions\footnote{Recall that a function $f: \bn \to \bits$ is monotone if $f(x) \leq f(y)$ whenever $x_i \leq y_i$ for all $i \in [n].$} with respect to the uniform distribution over the hypercube. As an example, Bshouty and Tamon~\cite{BshoutyTamon:96} gave an algorithm that learns monotone Boolean functions over the $n$-dimensional hypercube to any constant accuracy in a running time of $n^{O(\sqrt{n} )}$. Much later, Klivans, O'Donnell and Servedio~\cite{KOS:08} gave an algorithm that learns convex sets over $n$-dimensional Gaussian space with the same running time. While the underlying technical tools in the proofs of correctness are different, the algorithms in \cite{KOS:08} and \cite{BshoutyTamon:96} are essentially the same: \cite{BshoutyTamon:96} (respectively \cite{KOS:08}) show that the Fourier spectrum (respectively Hermite spectrum\footnote{The Hermite polynomials form an orthonormal basis for the space of square-integrable real-valued functions over Gaussian space; the Hermite spectrum of a function over Gaussian space is analogous to the familiar Fourier spectrum of a function over the Boolean hypercube. See \Cref{sec:hermite} for more details.}) of monotone functions (respectively convex sets) is concentrated in the first $O(\sqrt{n})$ levels. Other structural analogies between convex sets and monotone functions are known as well; for example, an old result of Harris~\cite{harris1960lower} and Kleitman~\cite{kleitman1966families} shows that monotone Boolean functions over $\bits^n$ are positively correlated. The famous Gaussian correlation conjecture (now a theorem due to Royen~\cite{royen2014simple}) asserts the same for centrally symmetric convex sets under the Gaussian distribution. 
We note that while the assertions are analogous, the proof techniques are very different, and indeed the Gaussian correlation conjecture was open for more than half a century. Despite these analogies between convex sets and monotone functions, there are a number of prominent gaps in our structural and algorithmic understanding of convex sets when compared against monotone functions. We list several examples: \begin{enumerate} \item Nearly matching $\poly(n)$ upper and lower bounds are known for the query complexity of testing monotone functions over the $n$-dimensional Boolean hypercube~\cite{FLNRRS, KMS15, chakrabarty2016n, belovs2016polynomial, CDST15, CWX17}. However, the problem of convexity testing over the Gaussian space is essentially wide open, with the best known upper bound (in~\cite{chen2017sample}) being $n^{O(\sqrt{n})}$ queries and no nontrivial lower bounds being known. \item Kearns, Li and Valiant~\cite{KLV:94} showed that the class of all monotone Boolean functions over $\bn$ is \emph{weakly learnable} under the uniform distribution in polynomial time, meaning that the output hypothesis $h$ satisfies $\Pr_{\bx \in \bn} [h(\bx) = f(\bx) ] \ge 1/2 + 1/\poly(n)$, where $f: \bits^n \to \bits$ is the target monotone function. \cite{KLV:94} achieved an advantage of $\Omega(1/n)$ over $1/2$; this advantage was improved by Blum, Burch and Langford~\cite{BBL:98} to $\Omega(n^{-1/2})$ and subsequently by O'Donnell and Wimmer~\cite{OWimmer:09} to $\Omega(n^{-1/2}\log n )$ which is optimal up to constant factors for $\poly(n)$-time learning algorithms. On the other hand, prior to the current work, nothing non-trivial was known about weak learning convex sets under the Gaussian measure. 
\item Closely related to Item~2 is the folklore fact (see \Cref{app:hermite-weight}) that for every monotone function $f: \bn \rightarrow \bits$, the Fourier weight (sum of squared Fourier coefficients) at levels $0$ and $1$ is at least $\Omega({\frac {\log^2 n} n})$. On the other hand, prior to this work, it was consistent with the state of our knowledge that there is a convex set whose indicator function $f: \mathbb{R}^n \rightarrow \bits$ has zero Hermite weight (sum of squared Hermite coefficients) at levels $0,1,\dots,o(\sqrt{n})$. \end{enumerate} \paragraph*{Main contributions of this work.} The main technical contribution of this work is extending a fundamental result on monotone Boolean functions, called the Kruskal-Katona theorem~\cite{Kruskal:63, katona1968theorem}, to convex sets over Gaussian space. We use this result to address items~2 and 3 above. More precisely, we give a weak learning algorithm which achieves an accuracy of $1/2 + \Omega(n^{-1})$ for arbitrary convex sets, and we show that the Hermite weight at levels $0$, $1$ and $2$ of any convex set must be at least $\Omega(n^{-2})$.
For centrally symmetric convex sets, we give a weak learner with accuracy $1/2 + \Omega(n^{-1/2})$ and show that the Hermite weight at levels $0$ and $2$ must be at least $\Omega(1/n).$ We also show that, for centrally symmetric convex sets, both our weak learning result and our bound on the Hermite weight at low levels are optimal up to $\polylog(n)$ factors; it follows that the corresponding results for general convex sets are also optimal up to a quadratic factor. We now explain our analogue of the Kruskal-Katona theorem in more detail. \subsection{Background: the Kruskal-Katona theorem} We begin by recalling the Kruskal-Katona theorem over the Boolean hypercube.
Informally, the Kruskal-Katona theorem is a \emph{density increment} result --- it asserts that the density of the $1$-set of a monotone function must increase non-trivially over the successive slices of the hypercube. More precisely, let $f: \bn \rightarrow \bits$ be a monotone function and for any $0 \le k \le n$, let $\binom{[n]}{k}$ denote the $k$-th slice of the hypercube (the $n \choose k$-size set of strings that have exactly $k$ ones). Define $\mu_k(f)$ as $ \mu_k(f) := \Pr_{\bx \in \binom{[n]}{k}}[f(\bx)=1], $ i.e.~the density of $f$ restricted to the $k$-th slice of the hypercube. The Kruskal-Katona theorem states that for every monotone $f$ and every $k \in [0,n-1]$, the density $\mu_k(f)$ satisfies \begin{equation}~\label{eq:KK-cube} \mu_{k+1}(f) \ge \mu_k(f)^{1-\frac{1}{n-k}} \ge \mu_k(f) + \frac{\mu_k(f) \ln (1/\mu_k(f))}{n-k}. \end{equation} As an example, it is instructive to consider the following specific parameter settings: Suppose $k \in [n/2 - \sqrt{n}, n/2 + \sqrt{n}]$ and that $1/3 \le \mu_k(f) \le 2/3$ in this range of $k$. Then the theorem says that $ \mu_{k+1}(f) \ge \mu_k(f) + \Theta(1/n). $ Consequently, for $k_{\mathsf{up}} = n/2 + \sqrt{n}$ and $k_{\mathsf{down}} = n/2 -\sqrt{n}$, $ \mu_{k_{\mathsf{up}}}(f) \ge \mu_{k_{\mathsf{down}}}(f) + \Theta(n^{-1/2}). $ We mention here that the original result of Kruskal~\cite{Kruskal:63} and Katona~\cite{katona1968theorem} is stated in terms of the sizes of the upper and lower shadows of sets $A \subseteq \binom{[n]}{k}$, and that examples which are exactly extremal for the precise bounds given in those papers can be obtained by considering the ordering of the elements of $\binom{[n]}{k}$ in the so-called \emph{colexicographic order}. While their result is tight, it is often not as convenient to work with as the above formulation.
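The inequality \eqref{eq:KK-cube} is easy to check numerically. The following Python snippet is an illustrative sketch of ours (the particular monotone function is an arbitrary choice, not an object from this paper): it computes all slice densities of a small monotone function and verifies the Bollob{\'a}s--Thomason bound on every slice.

```python
from itertools import combinations
from math import comb

def slice_density(f, n, k):
    """Density of f's 1-set on the k-th slice of the n-dimensional hypercube."""
    ones = sum(1 for S in combinations(range(n), k) if f(set(S)) == 1)
    return ones / comb(n, k)

# An arbitrary monotone example: f = 1 iff the input has >= 5 ones or contains
# coordinate 0 (monotone: adding more ones can only turn f from 0 to 1).
def f(S):
    return 1 if (len(S) >= 5 or 0 in S) else 0

n = 9
mu = [slice_density(f, n, k) for k in range(n + 1)]

# Bollobas-Thomason form of Kruskal-Katona: mu_{k+1} >= mu_k^{1 - 1/(n-k)}.
for k in range(n):
    if 0 < mu[k] < 1:
        assert mu[k + 1] >= mu[k] ** (1 - 1 / (n - k)) - 1e-12
```

For this example the densities on slices $1,\dots,4$ are $k/9$ and the densities on slices $5,\dots,9$ are $1$, so the density increment is visible directly.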
The above version is due to Bollob{\'{a}}s and Thomason~\cite{BT87} and is the form most often used in computer science applications (for example, the weak learning results for monotone functions given in \cite{BBL:98} and \cite{OWimmer:09} used this version). For completeness, we mention that prior to \cite{BT87}, Lov{\'a}sz~\cite{lovasz2007combinatorial} also gave a simplified version of the Kruskal-Katona theorem, but in this paper, we will refer to \Cref{eq:KK-cube} as the Kruskal-Katona theorem. \subsection{Our main structural result: a Kruskal-Katona type theorem for convex sets} We now describe our main structural result for convex sets, which is closely analogous to the Kruskal-Katona theorem. In order to do this, we first need to identify an analogue of hypercube slices in the setting of Gaussian space. The most obvious choice is to consider spherical shells; namely, for $r>0$, define the radius-$r$ spherical shell to be $\mathbb{S}^{n-1}_r := \{x \in \mathbb{R}^n: \Vert x \Vert_2 =r\}$. Note that, analogous to slices of the hypercube, spherical shells are the level sets of the Gaussian distribution. Given a convex set $K \subseteq \mathbb{R}^n$, we define the \emph{shell-density function} $\alpha_ K: (0, \infty) \rightarrow [0,1]$ to be \begin{equation}~\label{eq:shell-density-def} \alpha_K(r) := \Prx_{\bx \sim \mathbb{S}^{n-1}_r} [\bx \in K]. \end{equation} Having defined $\alpha_K(\cdot)$, the most obvious way to generalize Kruskal-Katona to Gaussian space would be to conjecture that for $K$ a convex set $\alpha_K(\cdot)$ is a non-increasing function, and further, that as long as $\alpha_K(r)$ is bounded away from $0$ and $1$, it exhibits a non-trivial rate of decay as $r$ increases (similar to \eqref{eq:KK-cube}). 
However, a moment's thought shows that this conjecture cannot be true because of the following examples: \begin{enumerate} \item Let $K \subseteq \mathbb{R}^n$ be a convex body with positive Gaussian volume whose closest point to the origin is at some distance $t>0$. Then the shell density function $\alpha_K(r)$ is zero for $0 < r \leq t$ but subsequently becomes positive. Thus for $\alpha_K(\cdot)$ to be non-increasing, we require $0^n \in K$. In fact, it is easy to see that if $0^n \in K$ and $K$ is convex then $\alpha_K(\cdot)$ is in fact non-increasing (since by convexity the intersection of $K$ with any ray extending from the origin is a line segment starting at the origin). However, this does not mean that there is an actual decay in the value of $\alpha_K$, as witnessed by the next example: \item Let $K$ be an origin-centered halfspace, i.e.~$K = \{x : w \cdot x \ge 0\}$ for some nonzero $w \in \R^n$. $K$ is convex and $0^n \in K$, but $\alpha_K(r) =1/2$ for all $r >0$, and hence $\alpha_K(r)$ exhibits no decay as $r$ increases. \end{enumerate} The second example above shows that in order for $\alpha_K(\cdot)$ to have decay, it is not enough for the origin to belong to $K$; rather, what is needed is to have $B(0^n,s) \subseteq K$ for some $s>0$, where $B(0^n,s)$ is the ball of radius $s$ centered at the origin. Our Kruskal-Katona analogue, stated below in simplified form, shows that in fact the above examples are essentially the only obstructions to getting a decay for $\alpha_K(r)$. 
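Both obstructions are easy to see numerically. The Monte Carlo sketch below is our own illustration (the helper \texttt{shell\_density} and all parameter choices are ours, not from the paper): it estimates $\alpha_K(r)$ for an origin-centered halfspace, which has constant shell density $1/2$, and for a ball at distance $2$ from the origin, whose shell density is not non-increasing.

```python
import random, math

def shell_density(in_K, n, r, trials=20000, seed=0):
    """Monte Carlo estimate of alpha_K(r) = Pr_{x ~ S^{n-1}_r}[x in K]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        g = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(gi * gi for gi in g))
        if in_K([r * gi / norm for gi in g]):  # uniform point on the shell
            hits += 1
    return hits / trials

n = 5

# Example 2: an origin-centered halfspace has constant shell density 1/2.
halfspace = lambda x: x[0] >= 0
d_half = [shell_density(halfspace, n, r) for r in (0.5, 1.0, 2.0)]

# Example 1: a radius-1 ball centered at distance 2 from the origin.
# Its closest point to the origin is at distance 1, so alpha_K(0.9) = 0,
# while alpha_K(2) > 0: alpha_K is not non-increasing for this K.
ball = lambda x: (x[0] - 2.0) ** 2 + sum(xi * xi for xi in x[1:]) <= 1.0
d_ball_small = shell_density(ball, n, 0.9)
d_ball_mid = shell_density(ball, n, 2.0)

assert all(abs(d - 0.5) < 0.02 for d in d_half)
assert d_ball_small == 0.0 and d_ball_mid > 0.0
```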
In order to avoid a proliferation of parameters at this early stage, for now we only state a corollary of our more general result, \Cref{lem:key-general} (the more general result does not put any restriction on the value of $\alpha_K(r)$): \begin{theorem} [Kruskal-Katona for general convex sets, informal statement]\label{thm:informal-convex-density-increment} Let $K \subseteq \R^n$ be a convex set which contains the origin-centered ball of radius $r_{\mathsf{small}}$, i.e.~$B(0^n,r_{\mathsf{small}}) \subseteq K.$ Let $r> r_{\mathsf{small}}$ be such that $0.1 \le \alpha_K(r) \le 0.9$ and let $0 \le \kappa \le 1/10$. Then \[ \alpha_K((1-\kappa) r) \geq \alpha_K(r) + \Theta\bigg(\kappa \cdot \frac{ r_{\mathsf{small}}}{r} \bigg). \] \end{theorem} A convex set is \emph{centrally symmetric} if $x \in K$ iff $-x \in K$. For centrally symmetric convex sets we obtain a density increment result without requiring an origin-centered ball to be contained in $K$. As with Theorem~\ref{thm:informal-convex-density-increment}, below we give a special case of our main density increment theorem for centrally symmetric sets (see \Cref{lem:key} for the more general result): \begin{theorem} [Kruskal-Katona for centrally symmetric convex sets, informal statement]\label{thm:informal-centrally-symmetric-density-increment} Let $K \subseteq \R^n$ be a centrally symmetric convex set. Let $r>0$ be such that $0.1 \le \alpha_K(r) \le 0.9$ and let $0 \le \kappa \le 1/10$. Then \[ \alpha_K((1-\kappa) r) \geq \alpha_K(r) + \Theta(\kappa). \] \end{theorem} An important feature of our density increment theorems is that while the results are for convex sets in $\mathbb{R}^n$, the density increment statements are independent of $n$. We give an overview of the high-level ideas underlying our \Cref{thm:informal-convex-density-increment,thm:informal-centrally-symmetric-density-increment} in the next subsection. 
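For intuition, the $\Theta(\kappa)$ increment of \Cref{thm:informal-centrally-symmetric-density-increment} can be verified in closed form on a simple example of our own choosing: a centrally symmetric slab in $\mathbb{R}^3$, where Archimedes' hat-box theorem makes the shell density exact.

```python
# K = {x in R^3 : |x_1| <= c} is a centrally symmetric convex slab.
# By Archimedes' hat-box theorem, the first coordinate of a uniform point
# on the radius-r sphere in R^3 is uniform on [-r, r], so alpha_K is exact:
def alpha_slab(c, r):
    return min(1.0, c / r)

c, r = 0.5, 1.0
assert 0.1 <= alpha_slab(c, r) <= 0.9
for kappa in (0.01, 0.05, 0.10):
    gain = alpha_slab(c, (1 - kappa) * r) - alpha_slab(c, r)
    # gain = 0.5 * kappa / (1 - kappa), i.e. a Theta(kappa) density increment
    assert gain >= 0.1 * kappa
```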
\subsection{The ideas underlying the Kruskal-Katona type \Cref{thm:informal-convex-density-increment,thm:informal-centrally-symmetric-density-increment}} \label{sec:high-level-ideas} Let us provide an intuitive argument for why results of this sort should hold, focusing on \Cref{thm:informal-centrally-symmetric-density-increment} (\Cref{thm:informal-convex-density-increment} uses similar ideas). At the highest level, a probabilistic argument is used to reduce the $n$-dimensional geometric scenario to a two-dimensional scenario. In more detail, as described below, the proof essentially combines two extremely simple observations with two technical results. Recalling the setup of \Cref{thm:informal-centrally-symmetric-density-increment}, (after rescaling) we have a symmetric convex body $K \subset \R^n$ whose intersection with the unit sphere $\mathbb{S}^{n-1}_1$ is a certain fraction $\alpha_K(1)$ of $\mathbb{S}^{n-1}_1$. As stated in \Cref{thm:informal-centrally-symmetric-density-increment}, let us think of this fraction as being neither too close to 0 nor to 1. The goal is to argue that the intersection of $K$ with the slightly smaller sphere $\mathbb{S}^{n-1}_{1-\kappa}$ is a noticeably larger fraction of $\mathbb{S}^{n-1}_{1-\kappa}$. The first simple but crucial observation is that the density of $K$ in $\mathbb{S}^{n-1}_1$ is an average of two-dimensional ``cross-sectional'' densities, and the same is true for the density of $K$ in $\mathbb{S}^{n-1}_{1-\kappa}$. More precisely, the density of $K$ in $\mathbb{S}^{n-1}_1$ is the average over a random two-dimensional subspace $\bV$ of the density of the two-dimensional convex body $K \cap \bV$ in the two-dimensional unit circle obtained by intersecting $\mathbb{S}^{n-1}_1$ with $\bV$, and the same is true for $\mathbb{S}^{n-1}_{1-\kappa}.$ (See \Cref{eq:avg-1} for a precise formulation.) 
The next simple but crucial observation is that within any given specific cross-section (two-dimensional subspace $V$), the density of $K$ in the radius-$(1-\kappa)$ circle must be at least the density of $K$ in the radius-$1$ circle. In other words, within any specific cross-section, ``density is never lost'' by contracting from radius 1 to radius $1-\kappa.$ As mentioned already in the previous subsection, this is an immediate consequence of convexity and the fact that $K$ contains the origin. (See \Cref{fact:convex-decreasing}.) Now the first technical result mentioned above enters the picture: Fix a particular two-dimensional subspace $V$ and suppose that within $V$, the density of $K$ in the radius-$1$ circle is (like the original density of $K$ in the $n$-dimensional unit sphere $\mathbb{S}^{n-1}_1$) neither too close to 0 nor to 1. Then using elementary geometric arguments and the central symmetry of $K$, it can be shown that the density of $K$ in the radius-$(1-\kappa)$ circle must be ``noticeably higher'' than the density of $K$ in the radius-1 circle --- i.e.~``density is gained'' within this cross-section by contracting. See Figure~1 for an illustration and \Cref{clm:two-d-increment} for a precise formulation. 
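When the cross-section of $K$ is a symmetric strip, exactly as in the configuration of Figure~1 (strip half-width equal to half the outer radius), the two arc fractions can be computed in closed form; the short computation below is our own illustration of the density gain.

```python
import math

def circle_fraction(c, r):
    """Fraction of the radius-r circle lying inside the strip |y| <= c."""
    return 2 * math.asin(min(1.0, c / r)) / math.pi

c = 0.5                       # strip half-width, as in Figure 1
kappa = 0.1
frac_outer = circle_fraction(c, 1.0)          # green arcs: exactly 1/3
frac_inner = circle_fraction(c, 1.0 - kappa)  # strictly larger fraction
assert frac_inner > frac_outer                # density is gained by contracting
```

Here the gain $\mathtt{frac\_inner} - \mathtt{frac\_outer} \approx 0.042$ is indeed of order $\kappa$.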
\begin{figure}[t] \begin{tikzpicture} \centerarc[black, thick](20,20)(0:360:5); \centerarc[black, thick](20,20)(0:360:3); \centerarc[green, ultra thick](20,20)(0:30:5); \centerarc[green, ultra thick](20,20)(330:360:5); \centerarc[green, ultra thick](20,20)(150:210:5); \draw[ red, ultra thick] ({12},{20 + 5*sin(30)})--({28},{20 + 5*sin(30)}); \draw[ red, ultra thick] ({12},{20 - 5*sin(30)})--({28},{20 - 5*sin(30)}); \draw[ red, ultra thick] ({12},{20 - 5*sin(30)})--({12},{20 + 5*sin(30)}); \draw[ red, ultra thick] ({28},{20 - 5*sin(30)})--({28},{20 + 5*sin(30)}); \centerarc[blue, ultra thick](20,20)(30:{(asin(5/3 * sin(30)))}:3); \centerarc[blue, ultra thick](20,20)({360-(asin(5/3 * sin(30)))}:330:3); \centerarc[blue, ultra thick](20,20)({180-(asin(5/3 * sin(30)))}:150:3); \centerarc[blue, ultra thick](20,20)({210}:{180+(asin(5/3 * sin(30)))}:3); \draw[ black, dashed] ({20},{20})--({20 + 5*cos(30)},{20 + 5*sin(30)}); \draw[ black, dashed] ({20},{20})--({20 + 5*cos(30)},{20 - 5*sin(30)}); \draw[ black, dashed] ({20},{20})--({20 - 5*cos(30)},{20 - 5*sin(30)}); \draw[ black, dashed] ({20},{20})--({20 - 5*cos(30)},{20 + 5*sin(30)}); \end{tikzpicture} \caption{ Two concentric circles of radius $1$ and $1-\kappa$ and their intersections with a symmetric convex set. The boundary of the convex set is indicated in {\color{red}red}. The {\color{green}green} arcs are the portion of the radius-1 circle which intersects the convex set.
Observe that the fraction of the radius-$(1-\kappa)$ circle which intersects the convex set is larger than the fraction of the radius-$1$ circle which intersects the convex set (by the angular measure of the {\color{blue}blue} arcs).} \label{fig:1} \end{figure} Given this, a natural proof strategy suggests itself: Suppose that for a random two-dimensional subspace $\bV$, with non-negligible probability the density of $K$ in the two-dimensional circle $\mathbb{S}^{n-1}_1 \cap \bV$ is ``not too far'' from the density of $K$ in the $n$-dimensional sphere $\mathbb{S}^{n-1}_1$. Then, by the preceding paragraph, there would be a noticeable density gain on a non-negligible fraction of subspaces; since density is never lost and the overall density is an average of the density over subspaces, this would give the result. The second technical result, \Cref{claim:raz}, shows precisely that the above supposition indeed holds. We give some elaboration on this second technical result. It is a variant of a lemma of Raz \cite{raz1999exponential}, who showed that for any subset $A \subset \mathbb{S}^{n-1}$ with $\mu_1(A)$ bounded away from 0 and 1, with high probability a random subspace $\bV$ of $\mathbb{R}^n$ of dimension roughly $1/\epsilon^2$ is such that the density of $A$ in $\mathbb{S}^{n-1} \cap \bV$ is $\pm \eps$-close to the density of $A$ in $\mathbb{S}^{n-1}.$ We establish a variant of this result in a different parameter regime; our requirement is that the measure of $A \cap \bV$ as a fraction of the unit sphere in $\bV$ remain bounded away from $0$ and $1$ with non-negligible probability \emph{even if $\bV$ is a random subspace of dimension only $2$}. This requires some modification of Raz's original arguments, as we highlight in \Cref{sec:claim:raz:proof}.
\subsection{Applications and consequences of our Kruskal-Katona type results} \subsubsection{Weak learning under Gaussian distributions} In \cite{KOS:08} Klivans et al.~showed that convex sets are \emph{strongly learnable} (i.e.~learnable to accuracy $1-\epsilon$ for any $\eps > 0$) in time $n^{{O}(\sqrt{n}/\epsilon^2)}$ under the Gaussian distribution, given only random examples drawn from the Gaussian distribution. Up to a mildly better dependence on $\epsilon$, this matches the running time of the algorithm of \cite{BshoutyTamon:96} for learning monotone functions on the hypercube. However, there is a large gap in the state of the art between monotone Boolean functions on the cube and convex sets in the Gaussian space when it comes to \emph{weak learning}. In particular, while \cite{OWimmer:09} showed that monotone functions can be weakly learned to accuracy $1/2 + \Omega(n^{-1/2} \log n)$ in polynomial time, prior to this work nothing better than the $n^{\sqrt{n}}$ running time of~\cite{KOS:08} was known for weakly learning convex sets to any nontrivial accuracy (even accuracy $1/2 + \exp(-n)$). In particular, the \cite{KOS:08} result in and of itself does not imply anything about polynomial-time weak learning; the \cite{KOS:08} result is proved using Hermite concentration, but prior to this work it was conceivable that all of the Hermite weight of a convex body might sit at levels $\Omega(n^{1/2})$, which would necessitate an $n^{\Omega(\sqrt{n})}$ runtime for the \cite{KOS:08} algorithm. The main algorithmic contribution of this paper is to bridge this gap and give a polynomial-time weak learning algorithm for convex sets. 
We prove the following:\footnote{As stated below \Cref{thm:weak-learn-convex} is only for learning under the standard Gaussian distribution $N(0,1^n)$, but since convexity is preserved under affine transformations, the result carries over to weak learning with respect to any Gaussian distribution $N(\mu,\Sigma).$} \begin{theorem} [Weak learning convex sets] \label{thm:weak-learn-convex} There is a $\poly(n)$-time algorithm which uses only random samples from $N(0,1)^n$ and weak learns any unknown convex set $K \subseteq \R^n$ to accuracy $1/2 + \Omega(1/n)$ under $N(0,1^n)$. \end{theorem} For centrally symmetric convex sets we give a result which is stronger in two ways. We achieve a stronger advantage, and we show that one of three fixed hypotheses always achieves this stronger advantage: the empty set, all of $\R^n$, or the origin-centered ball of radius $r_{\median}$ where $r_{\median}$ is the median of the chi-distribution with parameter $n$. This result is as follows:\footnote{Similar to the previous footnote, since a centrally symmetric convex set remains centrally symmetric and convex under any linear transformation, \Cref{thm:weak-learn-centrally-symmetric} directly implies an analogous result for any origin-centered Gaussian distribution $N(0^n,\Sigma).$} \begin{theorem} [Weak learning centrally symmetric convex sets] \label{thm:weak-learn-centrally-symmetric} For any centrally symmetric convex set $K \subseteq \R^n$, one of the following three hypotheses $h$ has $\Pr_{\bx \sim N(0,1)^n}[h(\bx)=K(\bx)] \geq 1/2 + \Omega(1/\sqrt{n})$: $h=$ the empty set, $h=$ all of $\R^n$, or $h=$ the origin-centered ball of radius $r_{\median}$. \end{theorem} From \Cref{thm:weak-learn-centrally-symmetric} it is straightforward to get a $\poly(n)$-time learner for centrally symmetric convex sets with advantage $\Omega(1/\sqrt{n})$.
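The three-hypothesis phenomenon of \Cref{thm:weak-learn-centrally-symmetric} is easy to observe in simulation. The sketch below is our own illustration and not the paper's algorithm: the target set, the sample size, and the Wilson--Hilferty approximation of $r_{\median}$ are all illustrative choices of ours.

```python
import random, math

random.seed(1)
n, N = 10, 20000

# Median of the chi distribution with n degrees of freedom, approximated
# via the Wilson-Hilferty approximation to the chi-square median.
r_median = math.sqrt(n * (1 - 2 / (9 * n)) ** 3)

# Illustrative centrally symmetric convex target: a somewhat small ball.
def K(x):
    return 1 if sum(xi * xi for xi in x) <= (0.8 * r_median) ** 2 else -1

samples = [[random.gauss(0, 1) for _ in range(n)] for _ in range(N)]
labels = [K(x) for x in samples]

hyps = {  # the three fixed hypotheses from the theorem
    "empty": lambda x: -1,
    "all": lambda x: 1,
    "ball": lambda x: 1 if sum(xi * xi for xi in x) <= r_median ** 2 else -1,
}
acc = {name: sum(h(x) == y for x, y in zip(samples, labels)) / N
       for name, h in hyps.items()}
best = max(acc.values())
# One of the three hypotheses achieves accuracy 1/2 + Omega(1/sqrt(n)).
assert best >= 0.5 + 0.5 / math.sqrt(n)
```

Here the empty-set hypothesis wins because the target ball is smaller than the median ball; the theorem guarantees that for every centrally symmetric convex target, some hypothesis in this triple achieves the stated advantage.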
This is entirely analogous to the result of \cite{BBL:98}, who showed that for any monotone function $f: \bn \to \bits$ over the Boolean hypercube, one of the following three functions achieves an advantage of $\Omega( n^{-1/2})$ with respect to the uniform distribution: the constant $1$ function, the constant $-1$ function, or the majority function. We note that the main technical ingredient in proving \Cref{thm:weak-learn-convex} (respectively \Cref{thm:weak-learn-centrally-symmetric}) is \Cref{thm:informal-convex-density-increment} (respectively \Cref{thm:informal-centrally-symmetric-density-increment}). In particular, the proof of \Cref{thm:weak-learn-centrally-symmetric} is a modification of the argument of \cite{BBL:98} which uses the Kruskal-Katona theorem (over the hypercube) to show that one of the functions $\{+1,-1,\mathsf{MAJ}\}$ is a good weak hypothesis for any monotone Boolean function. \medskip \noindent {\bf A lower bound for weak learning convex sets.} We complement \Cref{thm:weak-learn-centrally-symmetric} with an information theoretic lower bound. 
This lower bound shows that any $\poly(n)$-time algorithm, even one which is allowed to query the target function on arbitrary inputs of its choosing, cannot achieve a significantly better advantage than our simple algorithm achieves for centrally symmetric convex sets: \begin{theorem} [Lower bound for weak learning centrally symmetric convex sets] \label{thm:our-BBL-lb} For sufficiently large $n$, for any $s \geq n$, there is a distribution ${\cal D}$ over centrally symmetric convex sets with the following property: for a target convex set $\boldf \sim {\cal D},$ for any membership-query (black box query) algorithm $A$ making at most $s$ many queries to $\boldf$, the expected error of $A$ (the probability over $\boldf \sim {\cal D}$, over any internal randomness of $A$, and over a random Gaussian $\bx \sim N(0,1^n)$, that the output hypothesis $h$ of $A$ predicts incorrectly on $\bx$) is at least $1/2 - {\frac {O(\log(s) \cdot \sqrt{\log n})}{n^{1/2}}}$. \end{theorem} Theorem~\ref{thm:our-BBL-lb} shows that the advantage of our weak learner for centrally symmetric convex sets (Theorem~\ref{thm:weak-learn-centrally-symmetric}) is tight up to a logarithmic factor for polynomial time algorithms. It follows that the advantage of our weak learner for general convex sets (Theorem~\ref{thm:weak-learn-convex}) is tight up to a quadratic factor. \subsubsection{Noise stability / low-degree Hermite weight of convex sets} A well-known structural fact about monotone functions is that they cannot have too little Fourier weight (sum of squared Fourier coefficients) at levels 0 and 1: every monotone function $f: \bn \to \bits$ has at least $\Omega({\frac {\log^2 n}{n}})$ amount of Fourier weight on levels 0 and 1, and further, this $\Omega({\frac {\log^2 n}{n}})$ lower bound is best possible. (For the sake of completeness we give a proof of this in~\Cref{app:hermite-weight}.)
In contrast, it is easy to see that a convex set can have zero Hermite weight (sum of squared Hermite coefficients) at levels 0 and 1: this is the case for any centrally symmetric set of Gaussian volume $1/2$ (having Gaussian volume $1/2$ implies that the degree-0 Hermite coefficient is zero, and central symmetry implies that each degree-1 Hermite coefficient is also zero.) It is also possible for a convex set to have zero Hermite weight at levels 0 and 2; indeed the set $\{x: x_1 \geq 0\}$ is one such set. However, we show that any convex set must have some non-negligible Hermite mass at levels 0, 1 and 2. In particular, we show that the Hermite level $0$-and-$2$ weight of any centrally symmetric convex set must be at least $\Omega(1/n)$: \begin{theorem}~\label{thm:centrally-symmetric-weight} Let $K$ be a centrally symmetric convex set (viewing it as a function $K: \mathbb{R}^n \rightarrow \bits$). Then the Hermite weight of $K$ at levels $0$ and $2$ is at least $\Omega(1/n)$. \end{theorem} Since a suitably scaled origin-centered cube has Hermite weight $O(\log^2(n)/n)$ at levels $0$ and $2$ (see \Cref{fact:cube}), our lower bound is tight up to logarithmic factors. For general convex sets we prove a quadratically weaker lower bound: \begin{theorem}~\label{thm:centrally-asymmetric-weight} Let $K$ be an arbitrary convex set (viewing it as a function $K: \mathbb{R}^n \rightarrow \bits$). Then the Hermite weight of $K$ at levels $0$, $1$ and $2$ is at least $\Omega(1/n^2)$. \end{theorem} \paragraph{Noise stability of convex sets at high noise rates.} One motivation for understanding the low-level Hermite weight of bodies in $\R^n$ comes from its connection to the notion of noise stability.
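As a concrete check of the tightness discussion around \Cref{fact:cube}, the level-2 Hermite weight of a volume-$1/2$ origin-centered cube can be computed in closed form. The derivation below is our own (not taken from the paper): for the $\pm 1$ indicator of the cube, the coefficient on the normalized Hermite polynomial $h_2(x_i) = (x_i^2-1)/\sqrt{2}$ equals $-2\sqrt{2}\,p^{n-1} c\,\varphi(c)$, where $p = 2^{-1/n}$ is the per-coordinate mass, $c$ the half side-length, and $\varphi$ the standard normal density.

```python
from statistics import NormalDist
import math

Z = NormalDist()

def cube_level2_weight(n):
    """Level-2 Hermite weight of the +-1 indicator of the origin-centered
    cube in R^n whose Gaussian volume is exactly 1/2."""
    p = 2 ** (-1 / n)              # per-coordinate mass, so that p^n = 1/2
    c = Z.inv_cdf((1 + p) / 2)     # half side-length of the cube
    # Coefficient on h_2(x_i) = (x_i**2 - 1)/sqrt(2); the mixed degree-2
    # coefficients E[f * x_i * x_j] vanish by central symmetry.
    coeff = -2 * p ** (n - 1) * math.sqrt(2) * c * Z.pdf(c)
    return n * coeff ** 2

for n in (20, 50, 100):
    w2 = cube_level2_weight(n)
    # The level-0 weight is 0 at volume 1/2, so levels {0,2} carry weight w2;
    # for these n the values are consistent with the Omega(1/n) lower bound
    # and the O(log^2(n)/n) upper bound for the cube.
    assert 1 / n <= w2 <= math.log(n) ** 2 / n
```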
Recall that for $t \ge 0$ the \emph{Ornstein-Uhlenbeck} operator $P_t$ in the Gaussian space is defined as follows: for $f: \mathbb{R}^n \rightarrow \mathbb{R}$ the function $P_t f: \mathbb{R}^n \rightarrow \mathbb{R}$ is defined as \[ P_tf(x) := \Ex_{\by \sim N(0,1)^n} [f(e^{-t}x + \sqrt{1-e^{-2t}} \by)]. \] With this definition of $P_t$, the noise stability of $f$ at noise rate $t$ (denoted by $\mathsf{Stab}_t(f)$) is defined to be \[ \mathsf{Stab}_t(f) := \Ex_{\bx}[f(\bx) P_t f(\bx)] = \Ex_{\bx, \by \sim N(0,1)^n} [f(\bx) f(e^{-t}\bx + \sqrt{1-e^{-2t}} \by)]. \] The quantity $\mathsf{Stab}_t(f)$ is a measure of how sensitive $f$ is to perturbation in its input. In particular, as $t$ becomes large, $\mathsf{Stab}_t(f)$ measures the correlation between the values of $f$ on a pair of positively but only very mildly correlated inputs. On the other hand, as $t\rightarrow 0$, for $\bits$-valued functions $f$ this quantity measures the so-called \emph{Gaussian surface area} of the region $f^{-1}(1)$ (denoted by $\mathsf{surf}(f)$). If the set $\mathcal{A} =f^{-1}(1)$ has a smooth or piecewise smooth boundary, $\mathsf{surf}(f)$ is defined as \[ \mathsf{surf}(f) = \int_{x \in\partial \mathcal{A}} \gamma_n(x) d\sigma(x), \] where $\gamma_n(x) = (2\pi)^{-n/2} \cdot \exp(-\Vert x\Vert_2^2/2)$ is the standard Gaussian measure, $\partial \mathcal{A}$ is the boundary of $\mathcal{A},$ and $d\sigma(x)$ is the standard surface measure of $\mathbb{R}^n$. Ledoux~\cite{Ledoux:94} (and implicitly, earlier Pisier~\cite{Pisier:86}) showed that for $t>0$, \begin{equation}~\label{eq:surfnoise} \Prx_{\bx, \by \sim N(0,1)^n} [f(\bx) = f(e^{-t} \bx + \sqrt{1-e^{-2t}} \by)] \ge 1- \frac{2\sqrt{t}}{\sqrt{\pi}} \mathsf{surf}(f). \end{equation} Hence when $t$ is small the surface area of $f$ provides a good lower bound on the noise stability of $f$ (and as $t\rightarrow 0$ this inequality in fact becomes tight). We refer to \cite{Janson:97} for a detailed discussion.
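As a concrete illustration of these definitions, the following Monte Carlo sketch (our own example; the target $f(x) = \mathrm{sign}(x_1)$, an origin-centered halfspace, is chosen because its noise stability is given exactly by Sheppard's formula) estimates $\mathsf{Stab}_t(f)$ directly from the definition above.

```python
import random, math

random.seed(7)
n, N, t = 5, 40000, 0.5
rho = math.exp(-t)                       # correlation of the Gaussian pair

def f(x):                                # f(x) = sign(x_1): halfspace through 0
    return 1 if x[0] >= 0 else -1

stab = 0.0
for _ in range(N):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [rho * xi + math.sqrt(1 - rho * rho) * random.gauss(0, 1) for xi in x]
    stab += f(x) * f(y)
stab /= N

exact = (2 / math.pi) * math.asin(rho)   # Sheppard's formula for sign(x_1)
assert abs(stab - exact) < 0.03
```

Since all the Hermite weight of $\mathrm{sign}(x_1)$ sits at odd levels, with weight $2/\pi$ at level $1$, this estimate also illustrates the lower bound via low-level Hermite weight discussed below: $\mathsf{Stab}_t(f) \ge e^{-t} \cdot (2/\pi) \approx 0.386$ here.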
In \cite{Ball:93} Ball showed that the Gaussian surface area of any convex set $K \subseteq \mathbb{R}^n$ is at most $O(n^{1/4})$. Consequently, we get that for any convex set $K : \mathbb{R}^n \rightarrow \{-1,1\}$, \begin{equation} \label{eq:stab} \mathsf{Stab}_t(K) \ge 1- O(\sqrt{t} \cdot n^{1/4}). \end{equation} While this bound is meaningful for $t = o(n^{-1/2})$, it is vacuous once $t$ exceeds $Cn^{-1/2}$. In fact, the above inequality can be extended (see \cite{KOS:08}) to show that $ \mathsf{Stab}_t(K) \ge \exp (- O(t\sqrt{n})); $ however this bound is still quite weak for $t= \omega(n^{-1/2})$. Theorems~\ref{thm:centrally-symmetric-weight} and~\ref{thm:centrally-asymmetric-weight} yield the first nontrivial noise stability lower bounds for convex sets for large $t$. These bounds follow from a simple and standard fact from Hermite analysis (see Proposition~11.37 of \cite{ODBook}) which is that $\mathsf{Stab}_t(f) \ge e^{-2t} W^{\le 2}[f]$ where $W^{\le 2}[f]$ is the weight at levels $0$, $1$ and $2$ of the Hermite spectrum of $f$. Combining this fact with \Cref{thm:centrally-symmetric-weight,thm:centrally-asymmetric-weight}, we get the following corollary: \begin{corollary}~\label{corr:noise-stab} For any $t \ge 0$ and any convex set $K \subseteq \R^n$, it holds that $\mathsf{Stab}_t(K) \ge \Omega(e^{-2t}/{n^2}).$ This bound can be improved to $\mathsf{Stab}_t(K) \ge \Omega(e^{-2t}/{n})$ if $K$ is centrally symmetric and convex. \end{corollary} We note that the bound given by \Cref{corr:noise-stab} is significantly better than the bound that follows from \cite{Ball:93} as described above for $t \gg \frac{\log n}{\sqrt{n}}$. Beyond the quantitative aspect, we feel that there is an interesting qualitative distinction between our noise stability bound and the noise stability bound that follows from~\cite{Ball:93} for convex sets (as well as several others in the literature, as we explain below).
Roughly speaking, one can analyze the noise stability of sets in two limiting cases of noise rates. The first (i) is the \emph{low noise rate (LNR) regime}: here the noise rate $t$ is close to zero and hence the correlation $e^{-t}$ is close to $1$. The second (ii) is the \emph{high noise rate (HNR) regime}: here the noise rate $t$ is large so the correlation $e^{-t}$ is close to $0$. Note that the noise stability bounds of Ball \eqref{eq:stab} are most interesting in the LNR regime, whereas ours (\Cref{corr:noise-stab}) are most interesting in the HNR regime. A number of other results in the literature, such as \cite{Nazarov:03, Kane14GL, DRST14, HKM12}, also show noise stability bounds for various families of functions (such as polytopes with few facets and low-degree polynomial threshold functions) in the LNR regime. The regime of applicability of noise stability bounds is closely connected to the methods used to prove those bounds. In particular, noise stability in the LNR regime is essentially controlled by the surface area of a set (this connection is made explicit in \Cref{eq:surfnoise}): if the surface area is low then the noise stability is high and vice versa. Not surprisingly, since surface area is a geometric quantity, methods of geometric analysis are used to show noise stability bounds in the LNR regime (as in \cite{Ball:93} and the other works cited above). However, this connection between surface area and noise stability breaks down in the HNR regime. For intuition on why this occurs, note that the noise stability of $K:\mathbb{R}^n \rightarrow \{-1,1\}$ at noise rate $t$ is captured by $\Pr[K(\bg) = K(\bg')]$ where $(\bg,\bg')$ is a correlated pair of $n$-dimensional Gaussians with variance $1$ and correlation $e^{-t}$ in each coordinate.
In the LNR regime, i.e., when $e^{-t}$ is close to $1$, $\bg$ and $\bg'$ can be visualized as tiny perturbations of each other, and it is intuitively plausible that the above probability can be understood by analyzing the geometry of $K$. However, in the HNR regime, $e^{-t}$ approaches $0$ and hence (at least at an intuitive level) the resulting geometric picture is essentially indistinguishable from the case when $e^{-t} =0$, i.e., $\bg$ and $\bg'$ are totally uncorrelated. Thus, it is not clear how geometric arguments can be used to prove lower bounds on noise stability in the HNR regime. In fact, \Cref{app:stability} gives an example of two functions which have the same surface area (and hence essentially the same noise stability in the LNR regime), but in the HNR regime the noise stability of the second function is exponentially worse than the first. (In fact, the proof of this separation relies on \Cref{corr:noise-stab} --- one of the functions is convex and the other is not.) Despite the above intuitions, somewhat surprisingly our noise stability lower bounds are in fact established using geometric arguments. Our geometric arguments do not directly analyze mildly correlated Gaussians; rather, we use the (easy to prove but) deep connection between noise stability in the HNR regime and correlation with low degree polynomials. The phenomenon of noise stability in the HNR regime being completely controlled by correlation with low degree polynomials is, in our view, a completely non-geometric one; it relies on the fact that Hermite polynomials are eigenfunctions of the noise operator, or in more detail, on the fact that for every vector $\alpha \in \N^n$ the Hermite polynomial $h_\alpha$ is an eigenfunction of the noise operator with $e^{-t|\alpha|}$ as the corresponding eigenvalue. (See \Cref{sec:hermite} for basic background on Hermite analysis). 
Equipped with this connection, we use geometric arguments to show that convex bodies are either correlated with a degree-$1$ polynomial or else with one special type of degree-$2$ polynomial, namely one corresponding to a Euclidean ball. We close this discussion by remarking that, as mentioned before, several papers~\cite{Nazarov:03, Kane14GL, DRST14, HKM12} have studied noise stability in the LNR regime, but much less appears to be known about noise stability in the HNR regime. We believe that studying the noise stability of Boolean-valued functions in the HNR regime is well motivated both from the vantage point of structural analysis and through algorithmic applications such as learning. Our results can be viewed as a step in this direction. \subsection{Directions for future work} Our results in this paper suggest a number of directions for future work; we close this introduction with a brief discussion of some of these. One natural goal is to establish quantitatively sharper versions of our results for general convex sets. While the weak learning results and bounds on low-degree Hermite concentration in this paper are essentially best possible (up to log factors) for centrally symmetric convex sets, there is potentially more room for improvement in our results for general convex sets. In a different direction, the basic Kruskal-Katona theorem for monotone Boolean functions has been extended in a number of different ways. Keevash \cite{Keevash:08} and O'Donnell and Wimmer \cite{OWimmer:09} have given incomparable ``stability'' results which extend the Kruskal-Katona theorem by giving information about the approximate structure of monotone functions for which the Kruskal-Katona density increment lower bound is close to being tight.
In particular, \cite{OWimmer:09} show that (under mild conditions) if a monotone function $f: \bn \to \bits$ is not noticeably correlated with any single coordinate when restricted to the $k$-th slice of $\bn$, then the density increment $\mu_{k+1}(f) - \mu_k(f)$ must be at least $\Omega({\frac {\log n}{n}})$, strengthening the $\Omega({\frac 1 { n}})$ lower bound which follows from the original Kruskal-Katona theorem. \cite{OWimmer:09} use this sharper result to give a $\poly(n)$-time weak learning algorithm for monotone functions that achieves advantage $\Omega({\frac {\log n}{\sqrt{n}}})$, which is the best possible by the lower bound of \cite{BBL:98}. As another extension, in \cite{Bukh12} Bukh proves a multidimensional generalization of the Kruskal-Katona theorem. An intriguing goal for future work is to investigate possible Gaussian space analogues of the \cite{Keevash:08,OWimmer:09,Bukh12} results. In particular, if a Gaussian space analogue of \cite{OWimmer:09} could be obtained, this might lead to a $\poly(n)$-time weak learner for centrally symmetric convex sets achieving advantage $\Omega({\frac {\log n}{\sqrt{n}}})$, which would be quite close to optimal by our lower bound result \Cref{thm:our-BBL-lb}. A last goal is to obtain quantitatively stronger weak learning results for convex sets that have a ``simple structure.'' In addition to giving an $n^{O(\sqrt{n})}$-time strong learning algorithm for general convex sets, \cite{KOS:08} also gave an $n^{O(\log k)}$-time strong learning algorithm for convex sets that are intersections of $k$ halfspaces. Is there a $\poly(n)$-time weak learning algorithm that achieves accuracy $1/2 + o(1/\sqrt{n})$ for intersections of a small number of halfspaces? 
\section{Lower bounds} In this section we prove \Cref{thm:our-BBL-lb}, which we restate here for the convenience of the reader: \begin{reptheorem}{thm:our-BBL-lb} For sufficiently large $n$, for any $s \geq n$, there is a distribution ${\cal D}_{\actual}$ over centrally symmetric convex sets with the following property: for a target convex set $\boldf \sim {\cal D}_{\actual},$ for any membership-query (black box query) algorithm $A$ making at most $s$ many queries to $\boldf$, the expected error of $A$ (the probability over $\boldf \sim {\cal D}_{\actual}$, over any internal randomness of $A$, and over a random Gaussian $\bx \sim N(0,1)^n$, that the output hypothesis $h$ of $A$ predicts incorrectly on $\bx$) is at least $1/2 - {\frac {O(\log(s) \cdot \sqrt{\log n})}{n^{1/2}}}$. \end{reptheorem} We note that this lower bound holds even in the membership query (hereafter abbreviated as MQ) model. In this model the learning algorithm has query access to a black-box oracle for the unknown target function $\boldf$; note that a learning algorithm in this model can simulate, with no overhead, a learning algorithm in the model where the algorithm receives only random labeled examples of the form $(\bx, \boldf(\bx))$ (with $\bx \sim N(0,1)^n$). Thus a lower bound in the MQ model holds \emph{a fortiori} for the random examples model (which is the model that our algorithms use). In particular, by instantiating $s= \poly(n)$ in the above theorem, we get that no algorithm which receives $\poly(n)$ samples (and hence no algorithm running in $\poly(n)$ time) can achieve an advantage of $\frac{\omega(\log^{3/2} n)}{\sqrt{n}}$ over random guessing for learning centrally symmetric convex sets. Thus, our algorithm for weak learning of centrally symmetric convex sets, i.e., Theorem~\ref{thm:weak-learn-centrally-symmetric}, achieves an optimal advantage (up to an $O(\log^{3/2} n)$ factor).
Since the proof of \Cref{thm:our-BBL-lb} is somewhat involved we begin by explaining its general strategy: \begin{enumerate} \item We start by constructing a ``hard'' distribution ${\cal D}_{\ideal}$ over centrally symmetric convex subsets of $\mathbb{R}^n$ (note that ${\cal D}_{\ideal}$ is different from the final distribution ${\cal D}_{\actual}$). We then analyze the case in which the learning algorithm is not allowed to make \emph{any} queries to the target function $\boldf \sim {\cal D}_{\ideal}$. It is easy to see that the maximum possible accuracy of any \red{zero-query} learning algorithm is achieved by the so-called \emph{Bayes optimal classifier} (which we denote by $BO_{{\cal D}_{\ideal}}$), which labels each $x \in \mathbb{R}^n$ as follows: \[ BO_{{\cal D}_{\ideal}} (x) = \begin{cases} 1 \ &\textrm{if} \ \Pr_{\boldf \sim {\cal D}_{\ideal}}[\boldf(x) = 1] \ge 1/2 \\ \red{0} \ &\textrm{otherwise}. \end{cases} \] We show that for ``most'' $\bx$ sampled from $N(0,1)^n$, the accuracy of $BO_{{\cal D}_{\ideal}} (\bx)$ is close to $1/2$ and in fact, the average advantage over $1/2$ for $\bx\sim N(0,1)^n$ is bounded by ${\frac {O(\log(s) \cdot \sqrt{\log n})}{n^{1/2}}}$. \item The distribution ${\cal D}_{\ideal}$ is a continuous distribution defined in terms of a so-called \emph{Poisson point process}. While the construction of ${\cal D}_{\ideal}$ is particularly well-suited to the analysis of a zero-query learner, i.e.~of the Bayes optimal classifier (indeed this is the motivation for our introducing ${\cal D}_{\ideal}$), it becomes tricky to analyze ${\cal D}_{\ideal}$ when the learning algorithm is actually allowed to make queries to the target function $\boldf$. To deal with this, we ``discretize'' the distribution ${\cal D}_{\ideal}$ to construct the actual hard distribution ${\cal D}_{\actual}$ (which is finitely supported).
The discretization is carefully done to ensure that for ``most'' $\bx$ (again sampled from $N(0,1)^n$), $\Pr_{\boldf \sim {\cal D}_{\ideal}}[\boldf(\bx)=1]$ is close to $\Pr_{\boldf \sim {\cal D}_{\actual}}[\boldf(\bx)=1]$. This implies that the average advantage of the Bayes optimal classifier for $\boldf \sim {\cal D}_{\actual}$ \red{(corresponding to the best possible zero-query learning algorithm)}, denoted by $BO_{{\cal D}_{\actual}}$, remains bounded by ${\frac {O(\log(s) \cdot \sqrt{\log n})}{n^{1/2}}}$. \item Finally, we consider the case when the learning algorithm is allowed to make $s$ queries to the unknown target function $\boldf$. Roughly speaking, we show that for any choice of $s$ query points $\overline{y} = (y_1,\ldots, y_s)$, with high probability over both $\boldf \sim {\cal D}_{\actual}$ and $\bx \sim N(0,1)^n$, the advantage of the optimal classifier is close to that achieved by $BO_{{\cal D}_{\actual}}$ (see \Cref{sec:queries}). The techniques used to prove this crucially rely on the specific construction of ${\cal D}_{\actual}$, so we refrain from giving further details here. However, using this and the upper bound on the advantage of $BO_{{\cal D}_{\actual}}$, we obtain Theorem~\ref{thm:our-BBL-lb}. \end{enumerate} We note that the strategy outlined above (in particular, steps~2 and 3 and the general flavor of the analysis used to establish those steps) closely follows the lower bound approach of Blum, Burch and Langford~\cite{BBL:98}, who showed that no $s$-query algorithm in the MQ model can achieve an advantage of $\omega(\frac{\log s}{\sqrt{n}})$ over random guessing for learning monotone functions under the uniform distribution on $\bn$.
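For intuition on Step~1, the role of the Bayes optimal classifier can be illustrated with a small numerical sketch. This is a toy one-dimensional stand-in (a random interval $[-r,r]$ in place of a random convex body), not the construction used in the proof; the radii and trial counts below are arbitrary illustrative choices.

```python
import random

rng = random.Random(0)
radii = [0.5, 1.0, 1.5]  # toy "hard distribution": the target set is [-r, r], r uniform in radii

def D(x):
    # D(x) = Pr_{f ~ D_toy}[f(x) = 1], the quantity the Bayes optimal rule thresholds.
    return sum(r >= abs(x) for r in radii) / len(radii)

def bayes_opt(x):
    # Predict 1 exactly when a random target covers x with probability >= 1/2.
    return 1 if D(x) >= 0.5 else 0

def mc_error(h, trials=100_000):
    # Monte Carlo estimate of Pr_{r, x}[h(x) != 1[|x| <= r]] with x ~ N(0, 1).
    err = 0
    for _ in range(trials):
        r = rng.choice(radii)
        x = rng.gauss(0.0, 1.0)
        err += h(x) != (1 if abs(x) <= r else 0)
    return err / trials

bo_err = mc_error(bayes_opt)
# No other fixed (zero-query) predictor beats the Bayes optimal rule (up to MC error).
for c in [0.25, 0.75, 1.25, 2.0]:
    assert bo_err <= mc_error(lambda x, c=c: 1 if abs(x) <= c else 0) + 0.01
print(round(bo_err, 3))
```

The error estimated here for the thresholded rule is exactly $\Ex[\min\{D(\bx), 1-D(\bx)\}]$; in the proof the analogous quantity for ${\cal D}_{\ideal}$ is controlled via the Poisson point process structure.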
Of course, the choice of the \emph{hard distribution} is quite different in our work than in \cite{BBL:98}; in particular, a draw from ${\cal D}_{\ideal}$ is essentially a random symmetric polytope with $\mathsf{poly}(s)$ facets where the hyperplane defining each facet is at distance $O(\sqrt{\log s})$ from the origin. The distribution ${\cal D}_{\actual}$ is obtained by essentially discretizing ${\cal D}_{\ideal}$ while retaining some crucial geometric properties. In contrast, the hard distribution in \cite{BBL:98} is constructed in one step and is essentially a random monotone DNF of width $O(\log s + \log n)$ with roughly $s$ terms. Another significant difference between our argument and that of \cite{BBL:98} lies in the technical challenges that arise in our case from dealing with a continuous domain and the resulting discretization that we have to perform. \red{Finally, we note that in the proof of \Cref{thm:our-BBL-lb}, which we give below, we may assume that $s = 2^{O(\sqrt{n}/\sqrt{\log n})}$, since otherwise the claimed bound trivially holds.} \subsection{The idealized distribution ${\cal D}_{\ideal}$ and the Bayes optimal classifier for it} We will define the distribution ${\cal D}_{\actual}$ by first defining a related distribution ${\cal D}_{\ideal}$. As mentioned earlier, the distribution ${\cal D}_{\actual}$ will be obtained by discretization of ${\cal D}_{\ideal}$. To define ${\cal D}_{\ideal}$, we need to recall the notion of a spatial Poisson point process; we specialize this notion to the unit sphere $\mathbb{S}^{n-1}$, though it is clear that an analogue of the definition we give below can be given over any bounded measurable set $B \subseteq \mathbb{R}^n$. \begin{definition} A \emph{point process $\bX$} on the \emph{carrier space $\mathbb{S}^{n-1}$} is a stochastic process such that a draw from this process is a sequence of points $\bx_1, \ldots, \bx_{\boldN} \in \mathbb{S}^{n-1}$.
\red{(Note that each individual point $\bx_i$ as well as the number of points $\boldN$ are all random variables as described below.) } A \emph{spatial Poisson point process with parameter $\lambda$ on $\mathbb{S}^{n-1}$} is a point process on $\mathbb{S}^{n-1}$ with the following two properties: \begin{enumerate} \item For any subset $B \subseteq \mathbb{S}^{n-1}$, let $\boldN(B)$ denote the number of points which fall in $B$. Then the distribution of $\boldN(B)$ is $\mathsf{Poi}(\lambda \mu(B))$, where $\mu(B)$ is the fractional density of $B$ inside $\mathbb{S}^{n-1}$. \item If $B_1, \ldots, B_k \subseteq \mathbb{S}^{n-1}$ are pairwise disjoint sets, then $\boldN(B_1), \ldots, \boldN(B_k)$ are mutually independent. \end{enumerate} Finally, we note that the spatial Poisson point process with parameter $\lambda$ can be realized as follows: Sample $\boldN \sim \mathsf{Poi}(\red{\lambda}),$ and output $\boldN$ points $\bx_1, \ldots, \bx_{\boldN}$ that are chosen uniformly and independently at random from $\mathbb{S}^{n-1}$. \end{definition} We refer the reader to \cite{last2017lectures} and \cite{daley2007introduction} for details about Poisson point processes. We next choose $d>0$ so that for any unit vector $v$, \begin{equation}~\label{eq:choose-d} \mathop{\Pr}_{\bu \sim \mathbb{S}^{n-1}} \bigg[| v \cdot \bu | \ge \frac{d}{\sqrt{n}} \bigg] = \frac{1}{s^{100}}. \end{equation} Note that by symmetry the choice of $v$ is immaterial. We also recall the following fundamental fact about inner products with random unit vectors (which is easy to establish using e.g.~\Cref{eq:baum}): \begin{claim}~\label{clm:inner-product-random} Let $v \in \mathbb{S}^{n-1}$. \red{For any $0 < t < 1/2$,} \[ \mathop{\Pr}_{\bu \sim \mathbb{S}^{n-1}} [| v \cdot \bu | \ge t ] = e^{-\Theta(\red{t^2 n})}. \] \end{claim} \red{Since we have $s = 2^{O(\sqrt{n/\log n})}$,} it follows from this fact that $d = \Theta(\sqrt{\log s})$ in \eqref{eq:choose-d}.
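As a quick numerical sanity check (not part of the formal argument), the equivalence between the two descriptions above (Poisson counts in every region versus a Poisson number of independent uniform points) can be probed by simulation; the dimension, intensity, and test cap below are arbitrary small choices:

```python
import math, random

rng = random.Random(0)

def sphere_point(n):
    # Uniform point on S^{n-1}: normalize a standard Gaussian vector.
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(v * v for v in g))
    return [v / s for v in g]

def poisson(lam):
    # Knuth's inversion method; adequate for moderate lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ppp(n, lam):
    # The realization described above: N ~ Poi(lam) i.i.d. uniform sphere points.
    return [sphere_point(n) for _ in range(poisson(lam))]

n, lam, t = 10, 30.0, 0.5
in_cap = lambda z: z[0] >= t  # test region B: a spherical cap

# mu(B), the fractional density of the cap, by direct Monte Carlo:
mu_B = sum(in_cap(sphere_point(n)) for _ in range(50_000)) / 50_000

# Property 1 predicts N(B) ~ Poi(lam * mu(B)); check mean and variance agree.
counts = [sum(in_cap(z) for z in ppp(n, lam)) for _ in range(5_000)]
mean_count = sum(counts) / len(counts)
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
print(mean_count, var_count, lam * mu_B)  # all three should be close
```

The agreement of the empirical mean and variance of $\boldN(B)$ with $\lambda\mu(B)$ reflects the defining Poisson property of the counts.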
Next, for any unit vector $z \in \mathbb{S}^{n-1}$, we define the ``slab" function \[ \mathrm{slab}_z(x) \coloneqq \mathds{1}\left[-d \leq z \cdot x \leq d\right]. \] It is clear that for any $z$ the function $\mathrm{slab}_z(\cdot)$ defines a centrally symmetric convex set. Finally, we define the parameter $\Lambda$ to be \begin{equation}~\label{eq:def-Lambda} \Lambda := s^{100} \cdot \ln 2. \end{equation} A function $\boldf$ is sampled from ${\cal D}_{\ideal}$ as follows: \begin{itemize} \item Sample $\bz_1, \ldots, \bz_{\boldN}$ from the spatial Poisson point process on $\mathbb{S}^{n-1}$ with parameter $\Lambda$. \item Set $\boldf$ to be \[ \boldf(x) = \bigwedge_{i=1}^{\boldN} \mathrm{slab}_{\bz_i}(x). \] \end{itemize} We have the following observation (whose proof is immediate from the construction): \begin{observation}~\label{obs:lb-const-obs} \begin{enumerate} \item Any $\boldf \sim {\cal D}_{\ideal}$ defines a centrally symmetric convex set. \item For any point $x \in \mathbb{R}^n$, the value of ${\cal D}_{\ideal}(x) \coloneqq\Pr_{\boldf \sim {\cal D}_{\ideal}} [\boldf(x) =1]$ is determined by $\Vert x \Vert_2$, \red{the distance of $x$ from the origin.}\ignore{ (if $\Lambda$ and $d$ are fixed). } \end{enumerate} \end{observation} \subsubsection{Analyzing the Bayes optimal classifier for ${\cal D}_{\ideal}$} We now bound the advantage of the Bayes optimal classifier (denoted by $BO_{{\cal D}_{\ideal}}$) for ${\cal D}_{\ideal}$, which, \red{as stated earlier, corresponds to the best possible learning algorithm that} makes zero queries to the unknown target function $\boldf \sim {\cal D}_{\ideal}$. Observe that on input $x \in \R^n$, the classifier $BO_{{\cal D}_{\ideal}}(x)$ outputs 1 if ${\cal D}_{\ideal}(x) \geq 1/2$ and outputs 0 on $x$ if ${\cal D}_{\ideal}(x) < 1/2$. 
Thus, the expected error of $BO_{{\cal D}_{\ideal}}$ is \begin{align*} \opt({\cal D}_{\ideal}) &:= \Ex_{\bx \sim N(0,1)^n}[\min\{{\cal D}_{\ideal}(\bx),1-{\cal D}_{\ideal}(\bx)\}], \end{align*} \red{and the expected advantage of $BO_{{\cal D}_{\ideal}}$ is $1/2 - \opt({\cal D}_{\ideal})$.} The next lemma bounds $\opt({\cal D}_{\ideal})$ \red{and completes Step~1 of the proof outline given earlier:} \begin{lemma}~\label{lem:BO-accuracy} We have \[ {\frac 1 2} - \opt({\cal D}_{\ideal}) = \frac{O(\log s \cdot \sqrt{\log n})}{\sqrt{n}}. \] \end{lemma} \begin{proof} Define the set $\mathrm{S}_{\mathrm{med}} = \{x \in \mathbb{R}^n: |\Vert x \Vert_2^2 -n | \le \red{8} \sqrt{n \red{\ln n}} \}$. \red{ Intuitively, this is the set of points whose distance from the origin is ``roughly typical'' for the Gaussian distribution; more formally,} by \Cref{lem:johnstone}, we have that \begin{equation}~\label{eq:relevant-x} \Prx_{\bg \sim N(0,1)^n} [\bg \red{\notin} \mathrm{S}_{\mathrm{med}}] \le \frac{1}{n^5}. \end{equation} We will show that the value of ${\cal D}_{\ideal}(x)$ is close to $1/2$ for every $x \in \mathrm{S}_{\mathrm{med}}$, which easily implies the lemma. To do this, we define $\mathrm{Region}(x)$ to be \red{the set of those unit vectors $z$ such that $x$ does \emph{not} lie within the slab defined by $z$, i.e.} \[ \mathrm{Region} (x) := \{ z \in \mathbb{S}^{n-1}: | z \cdot x | > d\}. \] Observe that the fractional density of $\mathrm{Region}(x)$ inside $\mathbb{S}^{n-1}$, which we denote by $\mu_1(\mathrm{Region}(x))$, is determined by $\Vert x \Vert_2$. \red{We would like to analyze $\mu_1(\mathrm{Region}(x))$ for all points $x \in \mathrm{S}_{\mathrm{med}}$; to do this, we first analyze it for points at distance exactly $\sqrt{n}$ from the origin.} So choose any point $a_0 \in \mathbb{R}^n$ such that $\Vert a_0 \Vert_2=\sqrt{n}$.
By the definition of $d$ in \eqref{eq:choose-d} and observing that $a_0/\sqrt{n}$ is a unit vector, we have \begin{equation}~\label{eq:density-root-n} \mu_1(\mathrm{Region}(a_0)) = \Prx_{\bu \sim \mathbb{S}^{n-1}} \left[ \left| {\frac {a_0}{\sqrt{n}}} \cdot \bu \right| \ge \frac{d}{\sqrt{n}} \right] = \frac{1}{s^{100}}. \end{equation} Next, consider any $b_0 \in \mathrm{S}_{\mathrm{med}}$, and note that $\Vert b_0\Vert_2 = \sqrt{n} (1+\delta)$ where $|\delta| =O(\sqrt{\frac{\log n}{n}})$. Hence \[ \mu_1(\mathrm{Region}(b_0)) = \Prx_{\bu \sim \mathbb{S}^{n-1}} \left[ \left| {\frac {b_0}{\sqrt{n} (1+\delta)}} \cdot \bu \right| \ge \frac{d}{\sqrt{n}(1+\delta)} \right], \] \red{where ${\frac {b_0}{\sqrt{n} (1+\delta)}}$ is a unit vector.} Recalling that we can assume $\log s \cdot \sqrt{\log n} \le c_0 \sqrt{n}$ for a sufficiently small positive constant $c_0>0$ \red{and that $d=\Theta({\sqrt{\log s}})$,} we can apply \Cref{lem:corr} to get that \begin{equation}~\label{eq:density-perturbation} \bigg| \frac{\mu_1(\mathrm{Region}(a_0))}{\mu_1(\mathrm{Region}(b_0))} - 1 \bigg| = O\bigg(d^2 \cdot \frac{\sqrt{\log n}}{\sqrt{n}} \bigg) = O\bigg(\frac{\log s \sqrt{\log n}}{\sqrt{n}} \bigg). \end{equation} From \eqref{eq:density-perturbation} and \eqref{eq:density-root-n}, we get that every $x \in \mathrm{S}_{\mathrm{med}}$ satisfies \begin{equation}~\label{eq:bound-region} \mu_1(\mathrm{Region}(x)) = \frac{1}{s^{100}} \cdot \bigg( 1+ O\bigg(\frac{\log s \sqrt{\log n}}{\sqrt{n}} \bigg)\bigg). \end{equation} To finish the proof, we observe that sampling $\boldf \sim {\cal D}_{\ideal}$ is equivalent to sampling $\bz_1, \ldots, \bz_{\boldN}$ from the spatial Poisson point process on $\mathbb{S}^{n-1}$ with parameter $\Lambda$.
Let $\mathbf{{Num}}_x$ be the random variable defined as \red{$|\{\bz_i\}_{i=1}^{\boldN} \cap \mathrm{Region}(x)|$}. Observe that \begin{enumerate} \item $\boldf(x) =1$ iff $\mathbf{{Num}}_x =0$; \item $\mathbf{{Num}}_x$ is distributed as $\mathsf{Poi}(\Lambda \cdot \mu_1(\mathrm{Region}(x)))$. \end{enumerate} Putting these two items together with \eqref{eq:bound-region} and \eqref{eq:def-Lambda}, we get that for $x \in \mathrm{S}_{\mathrm{med}}$, \[ \mathop{\Pr}_{\boldf \sim {\cal D}_{\ideal}} [\boldf(x) =1] = \Pr[\mathsf{Poi}(\Lambda \cdot \mu_1(\mathrm{Region}(x)))=0] = e^{-\Lambda \cdot \mu_1(\mathrm{Region}(x))} = \frac{1}{2} + O \bigg( \frac{\log s \cdot \sqrt{\log n}}{\sqrt{n}}\bigg). \] Combining the above equation with \eqref{eq:relevant-x}, we get \Cref{lem:BO-accuracy}. \end{proof} \subsection{Discretizing ${\cal D}_{\ideal}$ to obtain ${\cal D}_{\actual}$, and the Bayes optimal classifier for ${\cal D}_{\actual}$} We now discretize the distribution ${\cal D}_{\ideal}$ to construct the distribution ${\cal D}_{\actual}$. We begin by recalling some results which will be useful for this construction. \begin{definition} Let ${\cal X}_1$, ${\cal X}_2$ be two distributions supported on $\mathbb{R}^n$. The \emph{Wasserstein distance} between ${\cal X}_1$ and ${\cal X}_2$, denoted by $\mathrm{d}_{\mathrm{W},1}({\cal X}_1, {\cal X}_2)$, is defined to be \[ \mathrm{d}_{\mathrm{W},1} ({\cal X}_1, {\cal X}_2) = \min_{{\cal Z}}\mathbf{E}_{{\cal Z}} [\Vert {\cal Z}_1 -{\cal Z}_2 \Vert_1], \] where the minimum is over couplings $\mathcal{Z}= (\mathcal{Z}_1, \mathcal{Z}_2)$ of $\mathcal{X}_1$ and $\mathcal{X}_2$. \end{definition} The following fundamental result is due to Dudley~\cite{dudley1969speed}: \begin{theorem}~\label{thm:Dudley} Let ${\cal X}$ be any compactly supported measure on $\mathbb{R}^n$. Let $\bx_1, \ldots, \bx_M$ be $M$ random samples from ${\cal X}$ and let $\bX_M$ be the resulting empirical measure.
Then \[ \mathbf{E}[\mathrm{d}_{\mathrm{W},1} ({\cal X}, \bX_M)] = O(M^{-1/n}). \] \end{theorem} Let $\mathrm{U}_{\mathbb{S}^{n-1}}$ denote the Haar measure (i.e., the uniform measure) on $\mathbb{S}^{n-1}$. Instantiating \Cref{thm:Dudley} with $\mathrm{U}_{\mathbb{S}^{n-1}}$, we get the following corollary: \begin{corollary}~\label{corr:unif-approx} For any error parameter $\zeta>0$, there exists $M_{n,\zeta}$ such that for any $M \ge M_{n,\zeta}$, there is a distribution $U_{M, \mathrm{emp}}$ which satisfies the following: \begin{enumerate} \item $\mathrm{d}_{\mathrm{W},1}(U_{M, \mathrm{emp}}, \mathrm{U}_{\mathbb{S}^{n-1}}) \le \zeta$. \item The distribution $U_{M, \mathrm{emp}}$ is uniform over its $M$-element support, which we denote by $S_{\actual}$. \end{enumerate} \end{corollary} We are now ready to construct the distribution ${\cal D}_{\actual}$. We fix parameters $\zeta$, $p$ and $M$ as follows: \begin{equation}\label{eq:set-params} \zeta \sqrt{\log(1/\zeta)}\coloneqq \frac{1}{\Lambda \cdot\sqrt{n}}, \quad \quad \quad M := \max \left\{ M_{n,\zeta}, \frac{\Lambda^2}{\zeta}\right\}, \quad \quad \quad p :=\frac{\Lambda}{M}. \end{equation} \begin{definition}~\label{def:calD} A draw of a function $\boldf \sim {\cal D}_{\actual}$ is sampled as follows: For each $z$ in $S_{\actual}$, define an independent Bernoulli random variable $\bW_z$ which is $1$ with probability $p$. The function $\boldf$ is \[ \boldf(x) := \bigwedge_{z \in S_{\actual}: \bW_z=1} \mathrm{slab}_z(x). \] Given such a $\boldf$, we define $\mathrm{Rel} (\boldf) := \{z \in S_{\actual}: \bW_z=1\}$ \end{definition} For intuition, $\mathrm{Rel}(\boldf)$ can be viewed as the set of those elements of $S_{\actual}$ that are ``relevant'' to $\boldf$. With the definition of ${\cal D}_{\actual}$ in hand, we define ${\cal D}_{\actual}(x)$ (analogous to ${\cal D}_{\ideal}(x)$) as follows: \[ {\cal D}_{\actual}(x) = \Prx_{\boldf \sim {\cal D}_{\actual}} [\boldf(x) =1]. 
\] Similar to ${\cal D}_{\ideal}$, we now consider the Bayes optimal classifier $BO_{{\cal D}_{\actual}}(x)$, which corresponds to the output of the best zero-query learning algorithm for an unknown target function $\boldf \sim {\cal D}_{\actual}$. The expected error of $BO_{{\cal D}_{\actual}}$ is given by \[ \opt({\cal D}_{\actual}) := \Ex_{\bx \sim N(0,1)^n} [\min\{{\cal{D}}_{\actual}(\bx), 1- {\cal{D}}_{\actual}(\bx)\}]. \] The next lemma is the main result of this subsection, and the rest of this subsection is devoted to its proof. It relates $\opt({\cal D}_{\actual})$ to $\opt({\cal D}_{\ideal})$ \red{and completes Step~2 of the outline given earlier:} \begin{lemma}\label{lem:rel-D-ideal-D-actual} For ${\cal D}_{\actual}$ and ${\cal D}_{\ideal}$ as defined above and parameters $\zeta$, $M$ and $p$ as set in \eqref{eq:set-params}, \[ |\opt({\cal D}_{\actual}) - \opt({\cal D}_{\ideal})| = O(n^{-1/2}). \] \end{lemma} The proof of \Cref{lem:rel-D-ideal-D-actual} requires several claims. \begin{claim}\label{clm:vector-diff} Let vectors $z, z' \in \mathbb{S}^{n-1}$ satisfy $\Vert z- z'\Vert_2\le 1/3$. Then \[ \Prx_{\bx \sim N(0,1)^n}[\mathrm{slab}_z(\bx) \not = \mathrm{slab}_{z'}(\bx)] \le 5 \Vert z-z'\Vert_2 \sqrt{\ln \bigg(\frac{1}{\Vert z-z'\Vert_2}\bigg)}. \] \end{claim} \begin{proof} Define $\mathsf{Bd}_\kappa := \{ y \in \R: ||y| - d| \le \kappa\}$. For any parameter $t>0$ and any $x \in \mathbb{R}^n$, observe that \begin{equation}~\label{eq:bounding-diff-vector} \mathrm{slab}_z(x) \not = \mathrm{slab}_{z'}(x) \ \textrm{only if} \ (| (z-z') \cdot x | \ge t \Vert z-z' \Vert_2) \ \textrm{or} \ ( z \cdot x \in \mathsf{Bd}_{t \Vert z-z' \Vert_2}). \end{equation} Let us write $\mathsf{erfc}(t)$ to denote $\Pr_{\bx \sim N(0,1)}[|\bx| \ge t]$.
Recalling that $\mathsf{erfc}(t) \le (e^{-t^2} + e^{-2t^2})/2$ (e.g., see equation 10 in \cite{chiani2003new}), we have that \[ \mathop{\Pr}_{\bx \sim N(0,1)^n}[| (z-z') \cdot \bx | \ge t \Vert z-z' \Vert_2] \le \frac{e^{-t^2} + e^{-2t^2}}{2}. \] Likewise, using the fact that the density of the standard normal is bounded by $1$ everywhere, we have that \[ \mathop{\Pr}_{\bx \sim N(0,1)^n}[ z \cdot \bx \in \mathsf{Bd}_{t \Vert z-z' \Vert_2}] \le 4t \Vert z-z' \Vert_2. \] Plugging the last two equations back into \eqref{eq:bounding-diff-vector}, we have that \begin{align*} \mathop{\Pr}_{\bx \sim N(0,1)^n} [\mathrm{slab}_z(\bx) \not = \mathrm{slab}_{z'}(\bx)] &\le \min_{t>0} \left\{ \frac{e^{-t^2} + e^{-2t^2}}{2} + 4t \Vert z-z' \Vert_2\right\}\\ & \le 5 \Vert z-z'\Vert_2 \sqrt{\ln \bigg(\frac{1}{\Vert z-z'\Vert_2}\bigg)}, \end{align*} giving \Cref{clm:vector-diff}. \end{proof} The next (standard) claim relates the Poisson point process over a finite set $\mathcal{A}$ to the process of sampling each element independently (with a fixed probability) from $\mathcal{A}$. \begin{claim}~\label{clm:Poisson-binomial} Let $\mathcal{A}$ be any set of size $M$ and let $\Lambda>0$. Consider the following two stochastic processes (a draw from the first process outputs a subset of $\mathcal{A}$ while a draw from the second process outputs a multiset of elements from $\mathcal{A}$): \begin{enumerate} \item The process $\mathsf{Indsample}(\mathcal{A}, \Lambda)$ produces a subset $\mathcal{B}_{b} \subseteq \mathcal{A}$ where each element $a \in \mathcal{A}$ is included independently with probability $p= \Lambda/|\mathcal{A}|$. \item The process $\mathsf{Poisample}(\mathcal{A}, \Lambda)$ produces a multiset $\mathcal{B}_{p}$ of elements from $\mathcal{A}$ where we first draw $\bL \sim \mathsf{Poi}(\Lambda)$ and then set $\mathcal{B}_p$ to be a multiset consisting of $\bL$ independent uniform random elements from $\mathcal{A}$ (drawn with replacement). 
\end{enumerate} Then the statistical distance $\Vert \mathsf{Indsample}(\mathcal{A}, \Lambda) -\mathsf{Poisample}(\mathcal{A}, \Lambda) \Vert_1$ is at most $2\Lambda^2/M$. \end{claim} \begin{proof} A draw of $\mathcal{B}_p$ from $\mathsf{Poisample}(\mathcal{A}, \Lambda)$ can equivalently be generated as follows: for each $a \in \mathcal{A}$, sample $\bx_a \sim \mathsf{Poi}(p)$ independently at random and then include $\bx_a$ many copies of $a$ in $\mathcal{B}_p$. For $0 \le q \le 1$, let $\mathsf{Bern}(q)$ denote a Bernoulli random variable with expectation $q$. Recalling that $\Vert \mathsf{Poi}(q) - \mathsf{Bern}(q) \Vert_1 \le 2q^2$, applying this bound to every $a \in \mathcal{A}$ and summing (using the subadditivity of statistical distance over independent coordinates), we have that \[ \Vert \mathsf{Indsample}(\mathcal{A}, \Lambda) -\mathsf{Poisample}(\mathcal{A}, \Lambda) \Vert_1 \le \sum_{a \in \mathcal{A}}2 p^2 = 2 \frac{\Lambda^2}{M^2} \cdot M = \frac{2 \Lambda^2}{M}. \] \end{proof} Finally, to prove Lemma~\ref{lem:rel-D-ideal-D-actual}, we will use an intermediate distribution of functions defined as follows: \begin{definition}~\label{def:inter-D} For the parameter $\Lambda$ defined earlier, we define the distribution ${\cal D}_{\mathrm{inter}}$ as follows: to sample a draw $\boldf \sim {\cal D}_{\mathrm{inter}}$, we (i) first sample $\bL \sim \mathsf{Poi}(\Lambda)$, and (ii) then sample $\bz_1, \ldots, \bz_{\bL} \sim U_{M, \mathrm{emp}}$ independently. The function $\boldf$ is \[ \boldf (x) := \bigwedge_{i=1}^{\bL} \mathrm{slab}_{\bz_i}(x). \] \end{definition} As with ${\cal D}_{\actual}$ and ${\cal D}_{\ideal}$, we define ${\cal D}_{\mathrm{inter}}(x)$ and $\opt({\cal D}_{\mathrm{inter}})$ as \[ {\cal D}_{\mathrm{inter}}(x) := \mathop{\Pr}_{\boldf \sim {\cal D}_{\mathrm{inter}}} [\boldf(x) =1], \quad \quad \quad \opt({\cal D}_{\mathrm{inter}}) := \Ex_{\bx \sim N(0,1)^n}[\min\{{\cal D}_{\mathrm{inter}}(\bx), 1-{\cal D}_{\mathrm{inter}}(\bx)\} ].
\] Now we are ready for the proof of \Cref{lem:rel-D-ideal-D-actual}: \begin{proofof}{\Cref{lem:rel-D-ideal-D-actual}} We begin with the following easy claim which \red{shows that ${\cal D}_{\mathrm{inter}}(x)$ is very close to ${\cal D}_{\actual}(x)$ for every $x$:} \begin{claim}~\label{clm:D-Dinter} For any $x \in \mathbb{R}^n$, \[ |{\cal D}_{\mathrm{inter}}(x) - {\cal D}_{\actual}(x)| \leq \frac{2\Lambda^2}{M}. \] \end{claim} \begin{proof} Observe that $\boldf_{\mathrm{inter}} \sim {\cal D}_{\mathrm{inter}}$ ($\boldf_{\actual} \sim {\cal D}_{\actual}$, respectively) can be sampled as follows: Sample $(\bz_1, \ldots, \bz_{\bL}) \sim \mathsf{Poisample} (S_{\actual}, \Lambda)$ ($(\by_1, \ldots, \by_{\boldQ}) \sim \mathsf{Indsample} (S_{\actual}, \Lambda)$, respectively), and set \[ \boldf_{\mathrm{inter}} (x)= \bigwedge_{i=1}^{\bL} \mathrm{slab}_{\bz_i}(x) \quad \quad \quad \text{and} \quad \quad \quad \boldf_{\actual} (x)= \bigwedge_{i=1}^{\boldQ} \mathrm{slab}_{\by_i}(x). \] It follows from \Cref{clm:Poisson-binomial} that $\Vert \mathsf{Poisample} (S_{\actual}, \Lambda) - \mathsf{Indsample} (S_{\actual}, \Lambda) \Vert_1 \le 2\Lambda^2/M$ and consequently $\Vert {\cal D}_{\mathrm{inter}} - {\cal D}_{\actual} \Vert_1 \le 2\Lambda^2/M$. This implies that \[ |{\cal D}_{\mathrm{inter}}(x) - {\cal D}_{\actual}(x)| = \left| \mathop{\Pr}_{\boldf_{\mathrm{inter}} \sim {\cal D}_{\mathrm{inter}}}[ \boldf_{\mathrm{inter}}(x)=1] - \mathop{\Pr}_{\boldf_{\actual} \sim {\cal D}_{\actual}}[ \boldf_{\actual}(x)=1]\right| \le \frac{2\Lambda^2}{M}.
\] \end{proof} Next we relate the average value of ${\cal D}_{\mathrm{inter}}$ (for $\bx \sim N(0,1)^n$) to the average value of ${\cal D}_{\ideal}$: \begin{claim}~\label{clm:d-inter-d-ideal} \[ \Ex_{\bx \sim N(0,1)^n}[|{\cal D}_{\mathrm{inter}}(\bx) -{\cal D}_{\ideal}(\bx)|] = O(\Lambda \zeta \sqrt{\log(1/\zeta)}). \] \end{claim} \begin{proof} Recall that by \Cref{corr:unif-approx} there exists a coupling $\bZ=(\bz_1, \bz_2)$ between $U_{M, \mathrm{emp}}$ and $\mathrm{U}_{\mathbb{S}^{n-1}}$ such that $\Ex [\Vert \bz_1 - \bz_2 \Vert_1] \le \zeta$. We consider the following coupling between ${\cal D}_{\mathrm{inter}}$ and ${\cal D}_{\ideal}$: \begin{enumerate} \item Sample $\bL \sim \mathsf{Poi}(\Lambda)$. \item Sample $\{(\bz_1^{(j)}, \bz_2^{(j)})\}_{1 \le j \le \bL}$ independently from $\bZ^{\bL}$. \item Define \[ \boldf_{\mathsf{in}}(x) = \bigwedge_{j=1}^{\bL} \mathrm{slab}_{\bz_1^{(j)}}(x) \quad \quad \quad \text{and} \quad \quad \quad \boldf_{\mathsf{id}}(x) = \bigwedge_{j=1}^{\bL} \mathrm{slab}_{\bz_2^{(j)}}(x). \] \end{enumerate} Observe that $\boldf_{\mathsf{in}}$ follows the distribution ${\cal D}_{\mathrm{inter}}$ and $\boldf_{\mathsf{id}}$ follows the distribution ${\cal D}_{\ideal}$. Thus, the process above indeed describes a coupling between ${\cal D}_{\mathrm{inter}}$ and ${\cal D}_{\ideal}$. 
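Before carrying out the computation, we remark that the per-slab disagreement bound of \Cref{clm:vector-diff}, which drives it, is easy to sanity-check by simulation. The slab half-width, perturbation size, and trial count below are arbitrary choices; only the two-dimensional span of $z$ and $z'$ matters, so the sketch works there directly.

```python
import math, random

rng = random.Random(0)
d, eps = 0.8, 0.05              # slab half-width and ||z - z'||_2 (arbitrary choices)
theta = 2 * math.asin(eps / 2)  # rotation angle whose chord length is eps

def in_slab(t):
    return -d <= t <= d

# With z = e1 and z' = cos(theta) e1 + sin(theta) e2, the projections of
# x ~ N(0,1)^n are z.x = x1 and z'.x = cos(theta) x1 + sin(theta) x2.
trials, disagree = 200_000, 0
for _ in range(trials):
    x1, x2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    if in_slab(x1) != in_slab(math.cos(theta) * x1 + math.sin(theta) * x2):
        disagree += 1

est = disagree / trials
bound = 5 * eps * math.sqrt(math.log(1 / eps))
print(est, bound)  # the empirical disagreement rate sits well below the bound
```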
We consequently have \begin{align} |\opt({\cal D}_{\ideal}) - \opt({\cal D}_{\mathrm{inter}})| &\le \mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \left[\left| \mathop{\Pr}_{\boldf_{\mathsf{id}}}[\boldf_{\mathsf{id}}(\bx)=1] -\mathop{\Pr}_{\boldf_{\mathsf{in}}}[\boldf_{\mathsf{in}}(\bx)=1] \right| \right] \nonumber\\ &= \mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \left[\left| \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)} \mathop{\mathbf{E}}_{\bZ^{\bL}} \left[ \mathop{\wedge}_{i=1}^{\bL} \mathrm{slab}_{\bz_1^{(i)}}(\bx) - \mathop{\wedge}_{i=1}^{\bL} \mathrm{slab}_{\bz_2^{(i)}}(\bx) \right] \right| \right] \nonumber \\ &\le \mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)}\left[\left| \mathop{\mathbf{E}}_{\bZ^{\bL}} \left[ \mathop{\wedge}_{i=1}^{\bL} \mathrm{slab}_{\bz_1^{(i)}}(\bx) - \mathop{\wedge}_{i=1}^{\bL} \mathrm{slab}_{\bz_2^{(i)}}(\bx) \right] \right| \right] \nonumber \\ &\le \mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)}\left[\left| \mathop{\mathbf{E}}_{\bZ^{\bL}} \left[ \mathop{\sum}_{i=1}^{\bL} \mathrm{slab}_{\bz_1^{(i)}}(\bx) - \mathop{\sum}_{i=1}^{\bL} \mathrm{slab}_{\bz_2^{(i)}}(\bx) \right] \right| \right] \nonumber \\ &\le \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)} \mathop{\mathbf{E}}_{\bZ^{\bL}} \sum_{i=1}^{\bL} \bigg(\mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \left[\left| \mathrm{slab}_{\bz_1^{(i)}}(\bx)-\mathrm{slab}_{\bz_2^{(i)}}(\bx)\right|\right] \bigg). \label{eq:last-modify1} \end{align} Now, by \Cref{clm:vector-diff}, we have that \[ \bigg(\mathop{\mathbf{E}}_{\bx \sim N(0,1)^n} \left[\left| \mathrm{slab}_{\bz_1^{(i)}}(\bx)-\mathrm{slab}_{\bz_2^{(i)}}(\bx)\right|\right] \bigg) \le 5 \Vert \bz_1^{(i)}- \bz_2^{(i)} \Vert_2 \sqrt{\log\bigg(\frac{1}{\Vert \bz_1^{(i)}- \bz_2^{(i)} \Vert_2}\bigg) }. 
\] Plugging this back into \eqref{eq:last-modify1}, we have that \begin{eqnarray} |\opt({\cal D}_{\ideal}) - \opt({\cal D}_{\mathrm{inter}})| &\leq& \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)} \mathop{\mathbf{E}}_{\bZ^{\bL}} \sum_{i=1}^{\bL} \left[5 \Vert \bz_1^{(i)}- \bz_2^{(i)} \Vert_2 \sqrt{\log\bigg(\frac{1}{\Vert \bz_1^{(i)}- \bz_2^{(i)} \Vert_2}\bigg) } \right] \nonumber \\ &\leq& \mathop{\mathbf{E}}_{\bL \sim \mathsf{Poi}(\Lambda)} \sum_{i=1}^{\bL} \big [5 \cdot \zeta \sqrt{\log(1/\zeta)} \big] \nonumber \\ &\leq& 5 \Lambda \zeta \sqrt{\log(1/\zeta)}, \end{eqnarray} where the penultimate inequality used $\Ex [\Vert \bz_1 - \bz_2 \Vert_1] \le \zeta$ and the concavity of the function $x \sqrt{\log(1/x)}$. \end{proof} \Cref{lem:rel-D-ideal-D-actual} follows from \Cref{clm:D-Dinter} and \Cref{clm:d-inter-d-ideal}, recalling the values of the parameters set in \eqref{eq:set-params}. \end{proofof} \subsection{Analyzing query algorithms} \label{sec:queries} \Cref{lem:rel-D-ideal-D-actual} and \Cref{lem:BO-accuracy} together imply a bound on the accuracy of the Bayes optimal classifier for ${\cal D}_{\actual}$ when the algorithm makes zero queries to the target function $\boldf \sim {\cal D}_{\actual}$. To analyze the effect of queries, it will be useful to first consider an alternate combinatorial formulation of ${\cal D}_{\actual}(x)$. For any point $x \in \mathbb{R}^n$, define $S_{\actual} (x) = \{ z \in S_{\actual}: \mathrm{slab}_z(x)=0\}$. By definition of ${\cal D}_{\actual}$, we have that for any $x \in \R^n$, \begin{equation} \label{eq:dactual-useful} \mathop{\Pr}_{\boldf \sim {\cal D}_{\actual}} [ \boldf(x) =1] = (1-p)^{|S_{\actual} (x)|}.
\end{equation} Restated in these terms, \Cref{lem:BO-accuracy} and \Cref{lem:rel-D-ideal-D-actual} give us that \begin{equation}~\label{eq:relate-S(x)} \Ex_{\bx \sim N(0,1)^n} \big[\big| (1-p)^{| S_{\actual}(\bx)|} - 1/2 \big|\big] = O\bigg( \frac{\log s \cdot \sqrt{\log n}}{\sqrt{n}}\bigg). \end{equation} We return to our overall goal of analyzing the Bayes optimal classifier when the learning algorithm makes at most $s$ queries to the unknown target $\boldf$. While the actual MQ oracle, when invoked on $x \in \R^n$, returns the binary value of $\boldf(x)$, for the purposes of our analysis we consider an augmented oracle which provides more information and is described below. \subsubsection{An augmented oracle, and analyzing learning algorithms that use this oracle} Similar to \cite{BBL:98}, to keep the analysis as clean as possible it is helpful for us to consider an augmented version of the MQ oracle. (Note that this is in the context of ${\cal D}_{\actual}$, so the set $S_{\actual}$ is involved in what follows.) Fix an ordering of the elements in $S_{\actual}$, and let $f$ be a function in the support of ${\cal D}_{\actual}$. Recalling the definition of $\mathrm{Rel} (f)$ from \Cref{def:calD}, we observe that for any point $x \in \mathbb{R}^n$, \[ f(x) =1 \ \textrm{ if and only if } \ S_{\actual}(x) \cap \mathrm{Rel} (f) = \emptyset. \] This motivates the definition of our ``augmented oracle'' for $f$. Namely, \begin{enumerate} \item On input $x$, if $f(x)=1$ then the oracle returns $1$ (thereby indicating that $S_{\actual}(x) \cap \mathrm{Rel} (f) = \emptyset$).
\item On input $x$, if $f(x)=0$ then the oracle returns the first $z \in S_{\actual}$ (according to the above-described ordering on $S_{\actual}$) for which $z \in S_{\actual}(x) \cap \mathrm{Rel} (f)$.\footnote{We note that the need to define this ``first $z$'' is the main reason that we do not work with ${\cal D}_{\ideal}$ directly and instead discretize it to obtain ${\cal D}_{\actual}$.} \end{enumerate} It is clear that on any query point $x$, the augmented oracle for $f$ provides at least as much information as the standard oracle for $f$. Thus, it suffices to prove a query lower bound for learning algorithms which have access to this augmented oracle. At any point in the execution of the $s$-query learning algorithm, let $X$ represent the list of query-answer pairs that have been received thus far from this augmented oracle. Let ${\cal D}_{\actual, X}$ denote the conditional distribution of $\boldf \sim {\cal D}_{\actual}$ conditioned on the query-answer list given by $X$. As in \cite{BBL:98}, the distribution ${\cal D}_{\actual, X}$ is quite clean and easy to describe. To describe it, consider a vector $V_X$ whose entries are indexed by the elements of $S_{\actual}$. For $z \in S_{\actual}$, we define ${V}_{X}(z)$ as \[ {V}_{X}(z) := \mathop{\Pr}_{\boldf \sim {\cal D}_{\actual, X}} [z \in \mathrm{Rel} (\boldf)]. \] Let us also define the Bernoulli random variables $\{{\bW}_{X,z}\}_{z \in S_{\actual}}$, where ${\bW}_{X,z}$ is $1$ if $z \in \mathrm{Rel}(\boldf)$ for $\boldf \sim {\cal D}_{\actual, X}$. We begin by making the following observation: \begin{claim} When $X$ is the empty list (i.e. when zero queries have been made), each $V_X(z)$ is equal to $p$, and the Bernoulli random variables $\{{\bW}_{X,z}\}_{z \in S_{\actual}}$ are mutually independent. \end{claim} Let us consider what happens when the ``current'' query-answer list $X$ is extended with a new query $x$.
We can view the augmented oracle as operating as follows: it proceeds over each entry $z$ in $S_{\actual}(x)$ (according to the specified ordering), and: \begin{enumerate} \item If $V_X(z) =0$, this means that the query-answer pairs already in $X$ imply that $z \not \in \mathrm{Rel}(\boldf)$. Then the augmented oracle proceeds to the next $z$. \item If $V_X(z)=1$, this means that the query-answer pairs already in $X$ imply that $z \in \mathrm{Rel}(\boldf)$. In this case, the oracle stops and returns $z$ (recall that this is a vector in $\R^n$, specifically an element of $S_{\actual}$) to the algorithm. Note that this $z$ is the first $z \in S_{\actual}$ (in order) such that $\mathrm{slab}_z(x)=0$. \item Finally, if $V_X(z)=p$, then the oracle fixes $\bW_{X,z}$ to $1$ with probability $p$ and to $0$ with probability $1-p$. (Recall that the random variable $\bW_{X,z}$ corresponds to the event that $z \in \mathrm{Rel}(\boldf)$.) If $\bW_{X,z}$ is fixed to 0 then the oracle moves on to the next $z$, and if it is fixed to $1$ then the oracle stops and returns $z$. As in the previous case, this is then the first $z$ in $S_{\actual}$ such that $\mathrm{slab}_z(x)=0$. \end{enumerate} Finally, we augment $X$ with the query $x$ and the above-defined response from the oracle. Based on the above description of the oracle, it is easy to see that the following holds: \begin{claim}~\label{clm:conditional-independence} For any $X$, each entry of $V_X(z)$ is either $0$, $1$ or $p$. Further, for any $X$, the random variables $\bW_{X,z}$ are mutually independent. Consequently, we can sample $\boldf \sim {\cal D}_{\actual, X}$ as \[ \boldf(x) = \mathop{\bigwedge}_{z \in S_{\actual}: \bW_{X,z}=1} \mathrm{slab}_z(x). 
\] \end{claim} Next, we have the following two claims (which correspond respectively to Claim~1 and Claim~2 of \cite{BBL:98}): \begin{claim}~\label{clm:s-queries-bound1} If the learning algorithm makes $s$ queries, then the number of entries in $V_X(\cdot)$ which are set to $1$ is at most $s$. \end{claim} \Cref{clm:s-queries-bound1} is immediate from the above description of the oracle. The next claim is also fairly straightforward: \begin{claim}~\label{clm:s-queries-bound2} If the learning algorithm makes $s$ queries, then with probability at least $1-e^{-\frac{s}{4}}$, the number of zero entries in $V_X$ is bounded by $2s/p$. \end{claim} \begin{proof} Given any $X$, on a new query $x$ the oracle iterates over all $z \in S_{\actual}(x)$ and sets each not-yet-determined entry $V_X(z)$ to $0$ with probability $1-p$ and to $1$ with probability $p$, stopping this process as soon as (a) it sets the first $1$, or (b) it has finished iterating over all $z \in S_{\actual}(x)$, or (c) it reaches an entry $V_X(z)$ that was already set to $1$ in a previous round. Thus, given any $X$, the number of new zeros added to $V_X$ on a new query $x$ is stochastically dominated by $\mathsf{Geom}(p)$, the geometric random variable with parameter $p$. It follows that the (random variable corresponding to the) total number of zeros in $V_X$ is stochastically dominated by a sum of $s$ independent variables, each following $\mathsf{Geom}(p)$. We now recall the following standard tail bound for sums of geometric random variables \cite{janson2018tail}: \begin{theorem}~\label{thm:tail-bound-geometric} Let $\BR_1$, $\ldots$, $\BR_s$ be independent $\mathsf{Geom}(p)$ random variables. For $\lambda \ge 1$, \[ \Pr\Big[\BR_1 + \ldots + \BR_s \ge \frac{\lambda s}{p}\Big] \le e^{-s(\lambda -1 - \ln \lambda)}. \] \end{theorem} Substituting $\lambda = 2$ (and noting that $2 - 1 - \ln 2 > 1/4$), we get that the number of zeros in $V_X$ is bounded by $2s/p$ with probability at least $1- e^{-s/4}$. This finishes the proof.
\end{proof} \subsection{Proof of~\Cref{thm:our-BBL-lb}} All the pieces are now in place for us to finish our proof of \Cref{thm:our-BBL-lb}. The high-level idea is that thanks to \Cref{clm:s-queries-bound1} and \Cref{clm:s-queries-bound2}, the distribution ${\cal D}_{\actual,X}$ cannot be too different from ${\cal D}_{\actual}$ as far as the accuracy of the Bayes optimal classifier is concerned; this, together with \Cref{lem:rel-D-ideal-D-actual} and \Cref{lem:BO-accuracy}, gives the desired result. Let $\mathcal{E}$ be the event (defined on the space of all possible outcomes of $X$, the list of at most $s$ query-answer pairs) that the number of zero entries in $V_X$ is at most $2s/p$. Observe that $\Pr[\overline{\mathcal{E}}] \le e^{-s/4}$ by \Cref{clm:s-queries-bound2}. We now bound the performance of the Bayes optimal estimator for ${\cal D}_{\actual, X}$ conditioned on the event $\mathcal{E}$. Let $\mathcal{A}_1 = \{z \in S_{\actual} : V_X(z) =1\}$ and $\mathcal{A}_0 = \{z \in S_{\actual} : V_X(z) =0\}$. Using \Cref{clm:conditional-independence} and \Cref{clm:s-queries-bound1}, we have the following observations: \begin{align} & \bullet \ \ \ \textrm{If }x\in \mathbb{R}^n \ \textrm{is such that }S_{\actual}(x) \cap \mathcal{A}_1 \not =\emptyset, \text{~then~} \mathop{\Pr}_{\boldf \sim {\cal D}_{\actual, X}} [\boldf(x)=0] =1. \nonumber\\ & \bullet \ \ \ \textrm{If }x\in \mathbb{R}^n \ \textrm{is such that }S_{\actual}(x) \cap \mathcal{A}_1 =\emptyset, \text{~then~} \mathop{\Pr}_{\boldf \sim {\cal D}_{\actual, X}} [\boldf(x)=1] =(1-p)^{|S_{\actual}(x) \setminus \mathcal{A}_0|}. \nonumber \\ &\bullet \ \ \ \mathop{\Pr}_{\bx \sim N(0,1)^n}[S_{\actual}(\bx) \cap \mathcal{A}_1 \not = \emptyset] \le \sum_{z \in \mathcal{A}_1} \Pr[\mathrm{slab}_z(\bx)=0] \leq \frac{|\mathcal{A}_1|}{s^{100}} \le \frac{1}{s^{99}}.
\label{eq:negative-points} \end{align} The last inequality uses \Cref{clm:s-queries-bound1} to bound the size of $\mathcal{A}_1$ and the definition of $\mathrm{slab}_z(\cdot)$. Next, for any $z \in \mathcal{A}_0$, observe that \[ \Ex_{\bx \sim N(0,1)^n}[\mathds{1}[z \in S_{\actual}(\bx)]] = \mathop{\Pr}_{\bx \sim N(0,1)^n}[\mathrm{slab}_z(\bx)=0] = \frac{1}{s^{100}}. \] This immediately implies that \[ \Ex_{\bx \sim N(0,1)^n} \bigg[\sum_{z \in \mathcal{A}_0} \mathds{1} [z \in S_{\actual}(\bx)] \bigg] = \frac{|\mathcal{A}_0|}{s^{100}} \le \frac{2}{p \cdot s^{99}}. \] By Markov's inequality, this implies that \begin{equation}~\label{eq:Markov} \mathop{\Pr}_{\bx \sim N(0,1)^n} \bigg[ \sum_{z \in \mathcal{A}_0} \mathds{1} [z \in S_{\actual}(\bx)] \ge \frac{2}{ps^{98}}\bigg] \le \frac{1}{s}. \end{equation} Let us say that $x \in \mathbb{R}^n$ is \emph{good} if $S_{\actual}(x) \cap \mathcal{A}_1 = \emptyset$ and \[ \sum_{z \in \mathcal{A}_0} \mathds{1} [z \in S_{\actual}(x)] \le \frac{2}{ps^{98}}. \] By \eqref{eq:Markov} and \eqref{eq:negative-points}, we have that $\Pr_{\bx \sim N(0,1)^n}[\bx \textrm{ is not good}] \le 1/s + 1/s^{99}$. We observe that for any good $x$, we have \[ |S_{\actual}(x) | - \frac{2}{ps^{98}} \le |S_{\actual}(x) \setminus \mathcal{A}_0| \le |S_{\actual}(x) | . \] It follows that \[ (1-p)^{|S_{\actual}(x)|} \cdot (1-p)^{-\frac{2}{ps^{98}}} \ge (1-p)^{|S_{\actual}(x) \setminus \mathcal{A}_0|} \ge (1-p)^{|S_{\actual}(x)|} . \] Using the fact that $(1-p)^{-\frac{2}{ps^{98}}} \le 1 + \frac{4}{s^{98}}$, we have that \[ (1-p)^{|S_{\actual}(x)|} \cdot \bigg( 1+ \frac{4}{s^{98}}\bigg) \geq (1-p)^{|S_{\actual}(x) \setminus \mathcal{A}_0|} \ge (1-p)^{|S_{\actual}(x)|}. \] This implies that for any $x \in \R^n$ which is good, \begin{equation}~\label{eq:x-good} \bigg| {\cal D}_{\actual, X}(x) - \frac{1}{2} \bigg| = \bigg| (1-p)^{|S_{\actual}(x) \setminus \mathcal{A}_0|} - \frac{1}{2} \bigg| \le \bigg| (1-p)^{|S_{\actual}(x) |} - \frac{1}{2} \bigg| + \frac{4}{s^{98}}.
\end{equation} Combining this with \eqref{eq:relate-S(x)}, \eqref{eq:Markov} and \eqref{eq:negative-points}, we get that \[ \Ex_{\bx \sim N(0,1)^n} \bigg[ \bigg| {\cal D}_{\actual, X}(\bx) - \frac{1}{2} \bigg|\bigg] \le \frac{1}{s} + \frac{4}{s^{98}} + \frac{1}{s^{99}} + O\bigg( \frac{\log s \cdot \sqrt{\log n}}{\sqrt{n}}\bigg). \] This shows that, conditioned on $\mathcal{E}$, the error of the Bayes optimal classifier for ${\cal D}_{\actual,X}$ is at least $\frac12 - O(\log s \cdot \sqrt{\log n}/\sqrt{n})$. Observing that $\Pr[\mathcal{E}] \ge 1-e^{-s/4}$ and $s\ge n$, the proof of \Cref{thm:our-BBL-lb} is complete. \qed \section{Background results on the Gaussian distribution} In this brief section we give some technical preliminaries for the Gaussian distribution, which will be used in our weak learning and Hermite concentration results. We endow $\mathbb{R}^n$ with the standard Gaussian measure $N(0,1)^n$ (i.e. each coordinate is independently distributed as a standard normal). We define the Gaussian volume of a region $K \subseteq \R^n$, denoted $\vol(K)$, to be $\Pr_{\bg \sim N(0,1)^n}[K(\bg)=1]$. We note some basic but crucial properties of the chi-squared distribution with $n$ degrees of freedom. Recall that a non-negative random variable $\br^2$ is distributed according to the chi-squared distribution $\chi^2(n)$ if $\br^2 = \bg_1^2 + \cdots + \bg_n^2$ where $\bg \sim N(0,1)^n,$ and that a draw from the chi distribution $\chi(n)$ is obtained by making a draw from $\chi^2(n)$ and then taking the square root. We recall the following tail bound: \begin{lemma} [Tail bound for the chi-squared distribution \cite{Johnstone01}] \label{lem:johnstone} Let $\br^2 \sim \chi^2(n)$. Then we have \[\Prx\big[|\br^2-n| \geq tn\big] \leq e^{-(3/16)nt^2}\quad\text{for all $t \in [0, 1/2)$.}\] It follows that for $\br \sim \chi(n)$, \[ \Prx \big[ \sqrt{{n}/{2}} \le \br \le \sqrt{{3n}/{2}} \big] \ge 1- e^{-\frac{3n}{64}}.
\] \end{lemma} The following fact about the anti-concentration of the chi distribution will be useful: \begin{fact} \label{fact:chi-squared-2} For $n > 1$, the maximum value of the pdf of the chi distribution $\chi(n)$ is at most $1$, and hence for any interval $I=[a,b]$ we have $\Pr_{\br \sim \chi(n)}[\br \in [a,b]] \leq b-a.$ \end{fact} \section{Surface area and noise stability in the high noise rate regime}\label{app:stability} In this section we give a concrete example showing that surface area is not a good proxy for noise stability at high noise rates. We do this by exhibiting two functions $\Psi_1$ and $\Psi_2$ such that (i) they have the same surface area (up to a $\Theta(1)$ factor), but (ii) at noise rate $t= \Theta(1)$, the noise stability of $\Psi_1$ is exponentially smaller than that of $\Psi_2$. We now define the functions $\Psi_1$ and $\Psi_2$.
\begin{definition}~\label{def:Psi} Let functions $\Psi_1, \Psi_2: \mathbb{R}^n \rightarrow \{-1,1\}$ be defined as follows: \begin{enumerate} \item Let $T := n^{1/4}$ and define $\Psi_1(x) := \prod_{i=1}^T \sign(x_i)$. \item $\Psi_2: \mathbb{R}^n \rightarrow \{-1,1\}$ is the function defined by Nazarov in \cite{Nazarov:03}; it is the indicator function of a convex body with surface area $\Theta(n^{1/4})$. \end{enumerate} \end{definition} We observe that the boundary of $\Psi_1^{-1}(1)$ consists of $T$ hyperplanes that pass through the origin, and consequently we have that $\mathsf{surf}(\Psi_1) = T/\sqrt{2\pi},$ where the constant $1/\sqrt{2\pi}$ is the value of the pdf of a standard univariate $N(0,1)$ Gaussian at zero. Thus, we have that $\mathsf{surf}(\Psi_1) = \Theta(\mathsf{surf}(\Psi_2))$. However, the following claim shows that the noise stabilities of these two functions are very different from each other at large noise rates: \begin{claim}~\label{clm:noise-stability-gap} For $t \ge 0$, $\mathsf{Stab}_{t} (\Psi_1) = ({\frac 2 \pi} \cdot \arcsin(e^{-t}))^{n^{1/4}}$ and $\mathsf{Stab}_{t} (\Psi_2) = \Omega(n^{-2} \cdot e^{-2t})$. In particular, for $t = \Theta(1)$ we have that $\mathsf{Stab}_t(\Psi_1) = e^{-\Theta(n^{1/4})}$ and $\mathsf{Stab}_{t} (\Psi_2) = \Omega(1/n^2)$. \end{claim} \begin{proof} The lower bound on the noise stability of $\Psi_2$ is simply \Cref{corr:noise-stab}.
The noise stability of $\Psi_1$ can be computed as \begin{eqnarray} \mathsf{Stab}_{t} (\Psi_1) &=& \Ex_{\bg, \bg' \sim N(0,1)^n}[\Psi_1(\bg) \cdot \Psi_1(e^{-t} \bg + \sqrt{1-e^{-2t}} \bg')] \nonumber\\ &=& \prod_{i=1}^T \Ex_{\bg, \bg' \sim N(0,1)^n}[ \sign(\bg_i) \sign (e^{-t} \bg_i + \sqrt{1-e^{-2t}} \bg'_i)] \label{eq:stab-t-product} \end{eqnarray} The well known Sheppard's Formula (see e.g.~Example~11.19 of \cite{ODBook}) states that \[ \Ex_{\bg_i, \bg'_i \sim N(0,1)}[ {\sign}(\bg_i) {\sign} (e^{-t} \bg_i + \sqrt{1-e^{-2t}} \bg'_i)] = \frac{2}{\pi} \arcsin(e^{-t}). \] Plugging this back into \eqref{eq:stab-t-product}, we get the claim. \end{proof} \subsection{A weak learner for general convex sets} \label{sec:weak-learner-general-convex} In this section we prove \Cref{thm:weak-learn-convex}. The proof uses the fact that there are efficient ``weak agnostic'' learning algorithms for halfspaces under the Gaussian distribution. Several papers in the literature, including \cite{KKMS:08, DDFS14, ABL13, DKS18-nasty}, can be straightforwardly shown to yield a result which suffices for our purposes. For concreteness we will use the following: \begin{theorem} [Theorem~1.2 from \cite{DKS18-nasty}, taking ``$d=1$''] \label{thm:DKS18} There is an algorithm \textsf{Learn-halfspace} with the following guarantee: Let $f: \R^n \rightarrow \bits$ be a target halfspace such that the algorithm gets access to samples of the form $(\bg, h(\bg))$ where $\bg \sim N(0,1)^n$ and $h: \R^n \to \bits$ satisfies $\Pr_{\bg} [h(\bg) \not = f(\bg)] \le \epsilon$. Then \textsf{Learn-halfspace} runs in time $\mathsf{poly}(n,1/\epsilon)$ and outputs a hypothesis halfspace $f': \mathbb{R}^n \rightarrow \bits$ such that $\Pr_{\bg \sim N(0,1)^n} [f(\bg) \not = f'(\bg)] \le \eps^c$, where $c>0$ is an absolute constant. 
\end{theorem} We note that the type of noise in the above theorem statement is referred to in the literature as \emph{adversarial label noise} (see~\cite{KSS:94}); while we will not need this stronger guarantee, the algorithm of \cite{DKS18-nasty} in fact also works in the stronger \emph{nasty noise} model. An immediate corollary of \Cref{thm:DKS18} is the following. \begin{corollary}~\label{corr:DKS} There is a positive constant $\zeta>0$ such that the algorithm \textsf{Learn-halfspace} has the following guarantee: let $f: \mathbb{R}^n \rightarrow \bits$ be a target halfspace such that the algorithm gets access to samples of the form $(\bg, h(\bg))$ where $\bg \sim N(0,1)^n$ and $h: \R^n \to \bits$ satisfies $\Pr_{\bg} [h(\bg) \not = f(\bg)] \le \zeta$. Then \textsf{Learn-halfspace} runs in $\poly(n)$ time and outputs a halfspace $f': \mathbb{R}^n \rightarrow \bits$ such that $\Pr_{\bg \sim N(0,1)^n} [f(\bg) \not = f'(\bg)] \le 1/16$. \end{corollary} For this section we set $c=\min\{1/40,\zeta/8\}$ where $\zeta$ is the positive constant from \Cref{corr:DKS}. We also recall the definition of the functions $h_0(\cdot)$, $h_1(\cdot)$ and $h_{1/2}(\cdot)$ from \eqref{eq:def-h12}. Next, we recall \Cref{lem:dictator-learning} from \Cref{sec:wl-given-kgl} (which we state below for convenience): \begin{replemma}{lem:dictator-learning} If $|\vol(K) -1/2| > c \cdot n^{-1/2}$, then either $h=h_0$ or $h=h_1$ achieves \[ \Pr_{\bg \sim N(0,1)^n}[h(\bg) = K(\bg)] \ge \frac{1}{2} + \Theta(n^{-1/2}). \] \end{replemma} The next lemma (which is a key technical ingredient) gives a weak learner for convex sets if there is a point outside $K$ which is close to the origin: \begin{lemma}~\label{lem:halfspace-learning} Let $K$ be a convex body such that $|\vol(K)-1/2| \le c \cdot n^{-1/2}$ and for which there is a point $z^\ast \not \in K$ such that $\Vert z^\ast \Vert_2 \leq c$. 
Then the output of the algorithm \textsf{Learn-halfspace}, when given samples of the form $(\bg, K(\bg))$, is a halfspace $f': \mathbb{R}^n \rightarrow \bits$ such that \[ \Prx_{\bg \sim N(0,1)^n} [f'(\bg) = K(\bg)] \ge \frac{13}{16}. \] \end{lemma} \begin{proof} Using the supporting hyperplane theorem (see page~510 in \cite{luenberger1984linear}), there exists a unit vector $\ell \in \mathbb{R}^n$ and a threshold $\theta \in \mathbb{R}$ such that \begin{enumerate} \item $K \subseteq \{ x: \ell \cdot x - \theta > 0\}$, but \item $\ell \cdot z^\ast - \theta \leq 0$. \end{enumerate} Thus we get a halfspace $r (x) = \sign(\ell \cdot x - \theta)$ such that $r(z^\ast) =-1$ and $K \subseteq r^{-1}(1)$. Next, since the separating hyperplane may be taken to pass through $z^\ast$ (i.e.~$\theta = \ell \cdot z^\ast$), we get that $|\theta| \le \Vert z^\ast \Vert_2 \le c$. Hence (using the fact that the pdf of an $N(0,1)$ Gaussian is everywhere at most 1) we get that \begin{eqnarray*} \Prx_{\bg \sim N(0,1)^n}[r(\bg) = 1] \le \frac12 + c. \end{eqnarray*} This implies that \begin{eqnarray}~\label{eq:false-neg} \Prx_{\bg \sim N(0,1)^n}[K(\bg) =1| r(\bg)=1] \ge \frac{\frac12 - \frac{c}{\sqrt{n}}}{\frac12 +c} \ge 1-2c -\frac{2c}{\sqrt{n}}. \end{eqnarray} On the other hand, by construction of $r(\cdot)$, we have that $\Prx_{\bg \sim N(0,1)^n}[K(\bg) =-1| r(\bg)=-1]=1$. Combining this with \eqref{eq:false-neg}, we get that \begin{eqnarray}\label{eq:false-pos} \Prx_{\bg \sim N(0,1)^n}[K(\bg) = r(\bg)] \ge 1-2c -\frac{2c}{\sqrt{n}} \ge 1-4c. \end{eqnarray} Recalling that $4c \le \zeta/2$, if we run the algorithm \textsf{Learn-halfspace} on samples of the form $(\bg, K(\bg))$, then by \Cref{corr:DKS} the output $f'$ satisfies \[ \Prx_{\bg \sim N(0,1)^n}[r(\bg) = f'(\bg)] \ge \frac{15}{16}. \] Combining this with \eqref{eq:false-pos}, we get \[ \Prx_{\bg \sim N(0,1)^n}[K(\bg) = f'(\bg)] \ge 1-4c- \frac{1}{16} \ge \frac{13}{16} \] (recall that $c \le 1/40$, so $1 - 4c - 1/16 \ge 13/16$), and the proof of \Cref{lem:halfspace-learning} is complete.
\end{proof} The last lemma we need for this section is a variant of \Cref{lem:sphere-learning} from \Cref{sec:wl-given-kgl}. \begin{lemma}~\label{lem:sphere-learning-asymmetric} If $K \subseteq \mathbb{R}^n$ is a convex body such that (i) $|\vol(K) -1/2| \le c \cdot n^{-1/2}$ and (ii) $\mathcal{B}(0^n, c) \subseteq K$, then \[ \Prx_{\bg \sim N(0,1)^n}[h_{1/2}(\bg) = K(\bg)] \ge \frac{1}{2} + \Theta(n^{-1}). \] \end{lemma} \begin{proof} The proof is essentially the same as the proof of \Cref{lem:sphere-learning}. In particular, the proof including all the notation is the same up to and including \Cref{eq:rel-alpha}. The first and only point of departure from \Cref{lem:sphere-learning} is \Cref{eq:beta-dec-symmetric}. Using the fact that $3/4 \ge \beta(1/4) \ge 3/10$, \eqref{eq:rel-alpha} and applying \Cref{lem:key-general} (instead of \Cref{lem:key}), where the ``$\kappa$'' and ``$ \frac{r_{\mathsf{small}}}{r}$'' of \Cref{lem:key-general} are both $\Theta(n^{-1/2})$, it follows that \begin{equation}~\label{eq:beta-dec-symmetric-1} \beta(1/4) \ge \beta(3/4) - C \cdot n^{-1} \end{equation} for some absolute constant $C>0$. Note that \Cref{eq:beta-dec-symmetric-1} is the analogue of \Cref{eq:beta-dec-symmetric} in the current setting. Finally, substituting \Cref{eq:beta-dec-symmetric-1} for \Cref{eq:beta-dec-symmetric} and otherwise doing the same calculation as the one leading up to \eqref{eq:advantage-symmetric}, we get that \[ \Prx_{\bg \sim N(0,1)^n} [h_{1/2}(\bg) = K(\bg) ] \geq \frac{1}{2} + \frac{C}{4n}. \] This finishes the proof of \Cref{lem:sphere-learning-asymmetric}. \end{proof} Theorem~\ref{thm:weak-learn-convex} is now a straightforward combination of \Cref{lem:dictator-learning}, \Cref{lem:halfspace-learning} and \Cref{lem:sphere-learning-asymmetric}.
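In concrete terms, the weak learner obtained from this combination simply selects the best of a small set of candidate hypotheses. The following Python sketch is purely illustrative and is not part of the formal argument: the function names are ours, the \textsf{Learn-halfspace} branch is omitted (treated as a black box), and agreement with the membership oracle for $K$ is estimated by Monte Carlo sampling.

```python
import math
import random

def h0(x):
    # constant -1 hypothesis (indicator of the radius-0 ball)
    return -1

def h1(x):
    # constant +1 hypothesis (indicator of the radius-infinity ball)
    return 1

def make_h_half(n):
    # indicator of the origin-centered ball of (approximately) median radius;
    # sqrt(n - 2/3) is within an additive O(1) of the chi(n) median
    r_med_sq = n - 2.0 / 3.0
    return lambda x: 1 if sum(xi * xi for xi in x) <= r_med_sq else -1

def agreement(h, K, n, samples=20000, seed=0):
    # Monte Carlo estimate of Pr_{g ~ N(0,1)^n}[h(g) = K(g)]
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        hits += (h(g) == K(g))
    return hits / samples

def best_fixed_hypothesis(K, n):
    # return whichever of h_0, h_1, h_{1/2} agrees best with K
    # (the Learn-halfspace branch of the combination is omitted here)
    candidates = [h0, h1, make_h_half(n)]
    return max(candidates, key=lambda h: agreement(h, K, n))
```

For instance, when $K$ is itself the origin-centered ball of median radius (so that $\vol(K) \approx 1/2$), the procedure selects the $h_{1/2}$ candidate, which agrees with $K$ everywhere.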
\section{First application of our Kruskal-Katona theorems: Weak learning convex sets and centrally symmetric convex sets} \noindent {\bf Intuition.} Before entering into the detailed analysis of our weak learners we give some basic intuition for why a Kruskal-Katona type statement for convex sets should be useful for obtaining a weak learning result. In particular, below we give an informal explanation of why \Cref{lem:key} should be useful for weak learning. Let $K \subset \R^n$ be an unknown nonempty centrally symmetric convex body. For the purpose of this intuitive explanation let us suppose that there is a value $r_{1/2}$ such that $\alpha_K(r_{1/2})=1/2$.\footnote{In general the function $\alpha_K(\cdot)$ need not be continuous, but it can be made continuous by perturbing $K$ by an arbitrarily small amount, so this is essentially without loss of generality.} The high-level idea is that in this case the polynomial threshold function $f(x) := \sign \left((r_{1/2})^2 - \sum_{i=1}^n x_i^2 \right)$ (i.e. the indicator function of the origin-centered ball of radius $r_{1/2}$) must have some non-negligible correlation with $K$ and can serve as a weak hypothesis. To justify this claim, we first establish that the advantage of $f$ is non-negative. To see this, first observe that \[ \Prx_{\bg \sim N(0,1)^n}[K(\bg) = f(\bg)] = \Ex_{\br^2 \sim \chi^2(n)} \Prx_{\bx \sim \mathbb{S}^{n-1}_{\br}} [K(\bx) = f(\bx)], \] and next observe that for each $r>0$, by the choice of $r_{1/2}$ and the definition of $f$, we have that \[ \Prx_{\bx \sim \mathbb{S}^{n-1}_{r}} [K(\bx) = f(\bx)] = \begin{cases} \alpha_K(r) & \text{if~}r<r_{1/2}\\ 1 - \alpha_K(r) & \text{if~}r \geq r_{1/2}, \end{cases} \] which is at least $1/2$ in each case by \Cref{fact:convex-decreasing}.
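To spell out the routine computation behind this claim: assuming in addition that $\alpha_K(\cdot)$ is non-increasing (\Cref{fact:convex-decreasing}) and recalling that $\alpha_K(r_{1/2}) = 1/2$, the two cases above combine into the single identity \[ \Prx_{\bx \sim \mathbb{S}^{n-1}_{r}} [K(\bx) = f(\bx)] = \frac{1}{2} + \Big|\alpha_K(r) - \frac{1}{2}\Big| \quad \text{for every } r>0, \] and hence \[ \Prx_{\bg \sim N(0,1)^n}[K(\bg) = f(\bg)] = \frac{1}{2} + \Ex_{\br^2 \sim \chi^2(n)} \Big[ \Big|\alpha_K(\br) - \frac{1}{2}\Big| \Big]; \] that is, the advantage of $f$ is exactly the expected deviation of $\alpha_K(\br)$ from $1/2$.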
Extending this simple reasoning, it is easy to see that if we have \begin{equation} \label{eq:advantage} \Prx_{\br^2 \sim \chi^2(n)} \Big[\Big|\overbrace{\Prx_{\bx \sim \mathbb{S}^{n-1}_{\br}} [K(\bx)=1]}^{=\alpha_K(\br)} - 1/2\Big| \ge \beta\Big] \ge \gamma, \end{equation} for some $\beta,\gamma>0$, then $f$ is a weak hypothesis for $K$ with advantage $\Omega(\gamma \beta)$. Put another way, the only way that $f$ could fail to be a weak hypothesis with non-negligible advantage would be if the function $\alpha_K(\cdot)$ ``stayed very close to $1/2$'' for a ``wide range of values around $r_{1/2}$'' --- but this sort of behavior of $\alpha_K(\cdot)$ is precisely what is ruled out by our density increment result, \Cref{lem:key}. \subsection{A weak learner for centrally symmetric convex sets} \label{sec:wl-given-kgl} In this subsection we prove \Cref{thm:weak-learn-centrally-symmetric}, which gives a weak learner for centrally symmetric convex sets. In the next subsection we will prove \Cref{thm:weak-learn-convex}, which gives a weak learner for general convex sets. Since a major technical ingredient in proving \Cref{thm:weak-learn-convex} is a variant of \Cref{thm:weak-learn-centrally-symmetric}, we will explicitly note the places in the current proof where we use the central symmetry of $K$. Recall from \Cref{sec:intro} that $r_{\median}$ is the median value of $\chi(n)$. Let us define the function $r:[0,1) \rightarrow [0,\infty)$ by \[ \Pr_{\br \sim \chi(n)} [\br \le r(c)] =c. \] Observe that since the pdf of $\chi^2(n)$ is always positive, the function $r(c)$ is well-defined. Also, with this notation, we have that $r(1/2) = r_{\median}$.
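For intuition, the quantity $r_{\median}$ is easy to probe empirically. The short Python sketch below is illustrative only (the function names are ours): it estimates the median of $\chi(n)$ by simulation, and for moderate $n$ the estimate is within a small additive constant of $\sqrt{n}$, consistent with $|r_{\median} - \sqrt{n}| = O(1)$.

```python
import math
import random

def sample_chi(n, rng):
    # one draw from chi(n): the Euclidean norm of an n-dimensional
    # standard Gaussian vector
    return math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)))

def estimate_r_median(n, samples=8000, seed=1):
    # empirical median of chi(n); concentrates tightly around the
    # true median r_median
    rng = random.Random(seed)
    draws = sorted(sample_chi(n, rng) for _ in range(samples))
    return draws[samples // 2]
```

Running this for a few values of $n$ (say $n \in \{16, 64, 256\}$) gives estimates within a small additive constant of $\sqrt{n}$.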
\Cref{lem:johnstone} and \Cref{fact:chi-squared-2} together easily yield the following claim: \begin{claim}~\label{clm:chi-percentile} The median $r_{\median}$ of the $\chi(n)$ distribution satisfies $|r_{\median} - \sqrt{n}| = O(1)$.\footnote{In fact it is known that $r_\median \approx \sqrt{n} \cdot (1 - {\frac 2 {9n}})^{3/2}$, though we will not need this more precise bound.} Further, there exist positive constants $A, B \ge 1/4$ such that $r(1/4) = r_{\median} - A$ and $r(3/4) = r_{\median} + B$. \end{claim} Now we are ready to embark on the proof of \Cref{thm:weak-learn-centrally-symmetric}. As noted earlier, the high level structure of the proof is similar to the argument used in \cite{BBL:98} to show that one of the three functions $+1$, $-1$ or Majority is a good weak hypothesis for any monotone Boolean function. \paragraph{Proof of \Cref{thm:weak-learn-centrally-symmetric}.} \Cref{thm:weak-learn-centrally-symmetric} is an immediate consequence of \Cref{lem:dictator-learning} and \Cref{lem:sphere-learning} which are stated below. Before presenting these lemmas, let us define the following three functions: \begin{equation}~\label{eq:def-h12} h_{1/2}(x) := \sign (r_{\median}^2 -(x_1^2 +\ldots + x_n^2 )), \quad h_{0}(x):= -1, \quad \textrm{ and } \quad h_{1}(x) := 1. \end{equation} Note that one can interpret $h_0(x)$ (respectively $h_1(x)$) as the indicator function of the ball of radius $0$ (respectively $\infty$). We set $c:= 1/40$ for Lemmas \ref{lem:dictator-learning} and \ref{lem:sphere-learning} (the precise value is not important as long as it is positive and sufficiently small). Finally, recall that $\vol(K) = \Pr_{\bg \sim N(0,1)^n} [K(\bg)=1]$.
\begin{lemma}~\label{lem:dictator-learning} If $|\vol(K) -1/2| > c \cdot n^{-1/2}$ for $c$ defined above, then either $h=h_0$ or $h=h_1$ achieves \[ \Prx_{\bg \sim N(0,1)^n}[h(\bg) = K(\bg)] \ge \frac{1}{2} + \Theta(n^{-1/2}). \] \end{lemma} \begin{lemma}~\label{lem:sphere-learning} If $|\vol(K) -1/2| \le c \cdot n^{-1/2}$ for $c$ defined above, then \[ \Prx_{\bg \sim N(0,1)^n}[h_{1/2}(\bg) = K(\bg)] \ge \frac{1}{2} + \Theta(n^{-1/2}). \] \end{lemma} \Cref{lem:dictator-learning} is immediate, so it remains to prove \Cref{lem:sphere-learning}. \begin{proofof}{\Cref{lem:sphere-learning}} We begin by defining the function $\beta: [0,1) \rightarrow [0,1)$ as \[ \beta(c) := \Prx_{\bx \in \mathbb{S}^{n-1}_{r(c)}}[\bx \in K] = \alpha_K(r(c)). \] \begin{fact}~\label{fact:centre-decreasing} If $K$ is a convex body that contains the origin, then $\beta (\cdot)$ is a non-increasing function. \end{fact} \begin{proof} This holds since $r(\cdot)$ is strictly increasing and the function $\alpha_K(\cdot)$ is non-increasing when $0^n \in K$ (\Cref{fact:convex-decreasing}). \end{proof} Next, we prove the following claim (in the current section we will only need it for the case in which the function $p$ is identically $1$, but later we will need the more general version): \begin{claim}~\label{clm:Hermite-deg2} Let $p: [0, \infty) \rightarrow \mathbb{R}$ be a function and extend it to $\overline{p}: \mathbb{R}^n \rightarrow \mathbb{R}$ by defining $\overline{p}(x) = p(\Vert x \Vert_2)$. Let $\Gamma: \mathbb{R}^n \rightarrow \mathbb{R}$ and define $\beta_\Gamma: [0,1) \rightarrow \mathbb{R}$ as \[ \beta_\Gamma(\nu) := \Ex_{\bx \sim \mathbb{S}^{n-1}_{r(\nu)}} [\Gamma(\bx)]. \] Then \[ \Ex_{\bg \sim N(0,1)^n} [ \Gamma(\bg) \overline{p}(\bg)] = \int_{\nu=0}^1 \beta_{\Gamma}(\nu) p(r(\nu)) d\nu. \] \end{claim} \begin{proof} Let $\chi(n,r)$ denote the pdf of the $\chi$-distribution with $n$ degrees of freedom at $r$.
Then, \begin{align*} \Ex_{\bg \sim N(0,1)^n}[\Gamma(\bg) \cdot \overline{p}(\bg)] &= \int_{r=0}^{\infty} \chi(n,r) \bigg( \Ex_{x \sim \mathbb{S}^{n-1}_r}[ \Gamma(x) \overline{p}(x) ] \bigg) dr\\ &= \int_{r=0}^{\infty} \chi(n,r) p(r) \bigg(\Ex_{x \sim \mathbb{S}^{n-1}_r}[ \Gamma(x)]\bigg) dr. \end{align*} Substituting $r$ by $r(\nu)$ (as $\nu$ ranges from $0$ to $1$), we have \begin{equation}\label{eq:beta-correlation} \Ex_{\bg \sim N(0,1)^n}[\Gamma(\bg) \cdot \overline{p}(\bg)] = \int_{\nu=0}^1 \chi(n,r(\nu)) r'(\nu) p(r(\nu)) \beta_\Gamma(\nu) d\nu. \end{equation} Finally, by definition of $r(\nu)$, we have that \[ \int_{z=0}^{r(\nu)} \chi(n,z) dz= \nu. \] Taking the derivative of this with respect to $\nu$, we get that $\chi(n,r(\nu)) r'(\nu) =1$, and substituting this back into \eqref{eq:beta-correlation}, we get the claim. \end{proof} By instantiating Claim~\ref{clm:Hermite-deg2} with $p=1$ and $\Gamma(x)=\mathbf{1}_{x \in K}$, we have the following corollary. \begin{corollary}~\label{claim:area} $\int_{x \in [0,1)} \beta(x) dx = \vol(K)$. \end{corollary} Now we are ready to analyze $h_{1/2}$. The following claim says that if $\beta(1/4)$ is ``somewhat large'', then $h_{1/2}$ is a weak hypothesis with constant advantage: \begin{claim}~\label{clm:beta-large} If $\beta(1/4) \ge \frac{3}{4}$ then $\Pr_{\bg \sim N(0,1)^n} [h_{1/2}(\bg) = K(\bg) ] \ge \frac{1}{2} + \frac{1}{24}$. \end{claim} \begin{proof} Define $s= \int_{x=0}^{1/4} \beta(x) dx$ and $t = \int_{x=1/4}^{1/2} \beta(x) dx $. Using the fact that $\beta(\cdot)$ is non-increasing we have \begin{eqnarray} \textrm{(i)} \ \ s = \int_{x=0}^{1/4} \beta(x)dx \ge \frac{3}{4} \cdot \frac{1}{4} = \frac{3}{16}, \ \ \ \textrm{(ii)} \ \ t= \int_{x=1/4}^{1/2} \beta(x)dx \ge \frac{1}{3} \cdot \bigg( \int_{x=1/4}^1 \beta(x) dx\bigg) = \frac{\vol(K) -s}{3} ~\label{eq:two-items-bound} \end{eqnarray} (where Corollary~\ref{claim:area} was used for the last equality of (ii)).
We thus get \begin{eqnarray} \int_{x=0}^{1/2} \beta(x) dx - \int_{x=1/2}^1 \beta(x) dx = 2s + 2t - \vol(K) \ge \frac{4s}{3} - \frac{\vol(K)}{3} \ge \frac{1}{24}, \label{eq:st-bound} \end{eqnarray} where the first inequality above follows by item (ii) of \eqref{eq:two-items-bound} and the second inequality uses item (i) of \eqref{eq:two-items-bound} along with the hypothesis $|\vol(K) -1/2| \le c/\sqrt{n} \le 1/40$. Combining these bounds, we have \begin{align*} \Prx_{\bg \sim N(0,1)^n} [h_{1/2}(\bg) = K(\bg)] &= \int_{x=0}^{1/2} \beta(x) dx + \int_{x =1/2}^{1} (1-\beta(x)) dx \\ &\ge \frac{1}{2} + \int_{x=0}^{1/2} \beta(x) dx - \int_{x=1/2}^{1} \beta(x) dx \geq \frac{1}{2} + \frac{1}{24}, \tag{by \eqref{eq:st-bound}} \end{align*} and \Cref{clm:beta-large} is proved. \end{proof} Thus to prove \Cref{lem:sphere-learning}, it remains to consider the case that $\beta(1/4) \le 3/4$. By the monotonicity of $\beta(\cdot)$, Corollary~\ref{claim:area}, and the hypothesis of \Cref{lem:sphere-learning}, we have that \[ \frac{1}{2} - \frac{1}{40} \le \int_{x=0}^{1} \beta(x) dx \le \frac{1}{4} + \frac{3}{4} \cdot \beta (1/4), \] and hence $\beta(1/4) \ge 3/10$, so we subsequently assume that $3/10 \le \beta(1/4) \le 3/4$. Now, recall that \[ r(1/4) = r_{\median} - A \ \textrm{and} \ r(3/4) = r_{\median} + B, \] where $A, B \ge 1/4$ and $r_{\median} = \sqrt{n} \pm O(1)$. Thus \begin{equation}~\label{eq:rel-alpha} \beta(1/4) = \alpha_K(r_{\median}- A) \quad \quad \text{and} \quad \quad \beta(3/4) = \alpha_K(r_{\median}+B) \quad \quad \text{where~}A,B \geq 1/4. \end{equation} Using the fact that $3/10 \le \beta(1/4) \le 3/4$, \Cref{eq:rel-alpha}, and \Cref{lem:key}, it follows that \begin{equation}~\label{eq:beta-dec-symmetric} \beta(1/4) \ge \beta(3/4) + C \cdot n^{-1/2}, \end{equation} for some absolute constant $C>0$.
This implies that \begin{eqnarray} && \int_{x=0}^{1/2} \beta(x) dx - \int_{x=1/2}^1 \beta(x) dx \nonumber \\ &=& \int_{x=0}^{1/4} \beta(x) dx - \int_{x=3/4}^1 \beta (x) dx + \int_{x=1/4}^{1/2} \beta(x) dx - \int_{x=1/2}^{3/4} \beta (x) dx \nonumber \\ &\ge& \frac{C}{4 \sqrt{n}} + \int_{x=1/4}^{1/2} \beta(x) dx - \int_{x=1/2}^{3/4} \beta (x) dx \ge\frac{C}{4 \sqrt{n}}, \label{eq:beta-dec2} \end{eqnarray} where the penultimate inequality uses \Cref{eq:beta-dec-symmetric} and the last inequality uses the monotonicity of $\beta(\cdot)$. Applying this, we get \begin{eqnarray} \Prx_{\bg \sim N(0,1)^n} [h_{1/2}(\bg) = K(\bg) ] &=& \int_{x=0}^{1/2} \beta(x) dx + \int_{x=1/2}^1 (1-\beta(x)) dx \nonumber \\ &=& \frac12 + \int_{x=0}^{1/2} \beta(x) dx - \int_{x=1/2}^{1} \beta(x) dx \nonumber \\ &\geq& \frac12 + \frac{C}{4 \sqrt{n}}. \label{eq:advantage-symmetric} \end{eqnarray} This finishes the proof of \Cref{lem:sphere-learning} and hence also the proof of \Cref{thm:weak-learn-centrally-symmetric}. \end{proofof}
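The case analysis in the two lemmas above translates into a simple end-to-end selection rule: estimate $\vol(K)$ from labeled examples, output $h_1$ or $h_0$ if the estimate is far from $1/2$, and output $h_{1/2}$ otherwise. The Python sketch below implements this rule on a toy target (the test body, sample sizes, and the empirical estimate of $r_{\median}$ are illustrative choices of ours, without the sample-complexity bookkeeping of the formal result):

```python
import numpy as np

rng = np.random.default_rng(1)

def weak_hypothesis(xs, ys, c=1.0 / 40):
    """Select among h_0 = -1, h_1 = +1 and h_{1/2} (ball of radius r_median),
    mirroring the case split of the two lemmas."""
    n = xs.shape[1]
    vol_hat = np.mean(ys == 1)                   # empirical estimate of vol(K)
    # empirical estimate of the median of chi(n), via norms of Gaussian vectors
    r_med = np.median(np.linalg.norm(rng.standard_normal((200000, n)), axis=1))
    if vol_hat > 0.5 + c / np.sqrt(n):
        return lambda x: np.ones(len(x), dtype=int)               # h_1
    if vol_hat < 0.5 - c / np.sqrt(n):
        return lambda x: -np.ones(len(x), dtype=int)              # h_0
    return lambda x: np.where(np.linalg.norm(x, axis=1) <= r_med, 1, -1)  # h_{1/2}

# Toy target: K = centered cube of half-width 1.7 in R^10 (so vol(K) ~ 0.39 < 1/2)
n = 10
K = lambda x: np.where(np.max(np.abs(x), axis=-1) <= 1.7, 1, -1)
xs = rng.standard_normal((100000, n))
ys = K(xs)
h = weak_hypothesis(xs, ys)
x_test = rng.standard_normal((100000, n))
accuracy = np.mean(h(x_test) == K(x_test))       # noticeably above 1/2
```

Here the volume estimate falls below the threshold, so the rule returns $h_0$ and achieves accuracy roughly $1-\vol(K)$; a body with $\vol(K)$ near $1/2$ would instead exercise the $h_{1/2}$ branch.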
\section{Introduction} Polarization of starlight and of dust thermal emission are commonly used as observational tracers of interstellar magnetic field orientation, within the Milky Way as well as in external galaxies~\citep[see e.g.][]{Hiltner1949,Chapman2011,Sadavoy2018,Planck2018XII,Lopez-Rodriguez2019}. This polarization is produced by the dichroism of the solid phase of the interstellar medium (ISM), composed of elongated dust grains that are spinning and precessing around the local magnetic field. Different mechanisms have been proposed to explain how the spin axis of dust grains can become aligned with interstellar magnetic fields, overcoming the random torques produced by impinging gas particles, which tend to disalign them. Shortly after the discovery of starlight polarization \citep{Hall1949,Hiltner1949}, grain alignment was proposed to result from magnetic relaxation~\citep[][DG hereafter]{Davis1951}. The interstellar magnetic field strength is however too low for the DG mechanism to work in the diffuse ISM. Furthermore, grain alignment by magnetic relaxation works like a heat engine, which requires a temperature difference between gas and dust. It must fail in dense cores, where $T_{\mathrm{dust}} \approx T_{\mathrm{gas}}$. Hence, the DG mechanism cannot account for the observed level of dust polarization on lines of sight (LOS) passing through dense molecular regions. \cite{JonesSpitzer1967} demonstrated that these limitations of the DG mechanism could be overcome if grains had the superparamagnetic properties that the presence of ferromagnetic inclusions in the grain matrix provides. \cite{Purcell1979} found that the formation of molecular hydrogen on the grain surface might spin-up the grain to suprathermal velocities, allowing for grain alignment even though $T_{\mathrm{dust}} \approx T_{\mathrm{gas}}$. 
The radiative torques (RATs) exerted onto grains by the absorption and scattering of photons can also spin-up and align grains with the magnetic field, provided that grains have a certain asymmetry called helicity \citep{Dolginov1976,Draine_Weingartner1996,DraineWeingartner1997}. Through a number of papers, a model of grain alignment by RATs was constructed \citep{LazarianHoang2007,Lazarian2008,Hoang2008,Hoang2014,HoangLazarian2016,LH18}, opening the path to quantitative comparisons with observations \citep{Bethell2007,Seifried2019}. A number of studies have looked for the distinctive signatures of the RAT mechanism in polarization observations. The observed variations of the polarization fraction in the optical or in the submillimeter are found to be in qualitative agreement with what is expected from the RAT theory: a strong drop in starless cores \citep{Alves2014,Jones2015}, an increase with the radiation field intensity in dense clouds with embedded YSOs \citep{Whittet2008} or around a star \citep{Andersson2011}, a modulation by the angle between the magnetic field and the direction of anisotropy of the radiation field \citep{Andersson2010,VA15}, or a correlation with the wavelength $\lambda_{\rm max}$ where starlight polarization peaks \citep{AnderssonPotter2007}. For a review of observational constraints favouring grain alignment by RATs, see~\cite{Andersson2015}. On the contrary, studies where the polarization fraction was corrected for the effect of the magnetic field before the analysis do not find any drop in the grain alignment efficiency, whether in the diffuse and translucent ISM \citep{Planck2018XII} or in dense cores \citep{Kandori2018,Kandori2020}. Clearly, more work is needed to solve this discrepancy and reach conclusions that are statistically significant. 
The purpose of this paper is to confront the predictions of the RAT theory to observations in a quantitative way through synthetic dust polarized emission maps built from a magnetohydrodynamics (MHD) simulation of interstellar turbulence with state-of-the-art grain alignment physics and an accurate treatment of radiative transfer. In our new modelling we post-process the MHD simulation of~\cite{hennebelle_08} used in \cite{Planck2015XX} with the radiative transfer (RT) code \texttt{POLARIS}\footnote{\tt http://www1.astrophysik.uni-kiel.de/${\sim}$polaris/}~\citep{Reissl2016}, using a physical dust model designed to reproduce the mean extinction and polarization curves observed in the diffuse ISM. The \texttt{POLARIS}\ tool, which incorporates the detailed physics of the RAT alignment theory, was also applied to predict line emission including the Zeeman effect \citep[][]{Brauer2017A,Brauer2017B,Reissl2018,Pellegrini2019} as well as galactic radio observations \citep{Reissl2019}. Contrary to other dust emission codes, \texttt{POLARIS}\ is a full Monte-Carlo dust heating and polarization code solving the RT problem in the Stokes vector formalism for dichroic extinction and thermal re-emission by dust, simultaneously. Furthermore, \texttt{POLARIS}\ keeps track of each of the photon packages in order to simulate the radiation field in complex environments, allowing for the determination of the parameters required by the grain alignment physics. In essence, this paper is a follow-up of \citet{Planck2015XX} in which the modelling was done within the simplifying assumption of uniform grain alignment, and of \cite{Seifried2019} where grain alignment was properly computed with \texttt{POLARIS}\ but lacked a well-constrained dust model.
In this article, we use the numerical model of the RAT theory outlined by \cite{Hoang2014} to estimate the relative importance of the radiation field properties and of the gas pressure in establishing the level of grain alignment under physical conditions representative of the diffuse and translucent ISM. The alignment of dust grains with the magnetic field by mechanical torques \citep[MAT;][]{Lazarian2007C,Das2016,Hoang2018A} is also of great interest for our purpose. However, MAT is not yet a predictive theory like RAT is, and cannot therefore be part of our modelling. Still, we will discuss some implications of the possible grain alignment by mechanical torques. The paper is structured as follows. In Section~\ref{sect:MHDRT}, we introduce the MHD simulation used in this study and the radiative transfer that is applied to it. The modelling of dust is described in Section~\ref{sect:DustModel} and that of grain alignment in Section~\ref{sect:RATAlignment}. The output data cubes and maps from the \texttt{POLARIS}\ modelling are presented in Sects.~\ref{sec:ISRF} and~\ref{sec:RT_STAR} for two different setups of the radiation field. Our results are discussed in Section~\ref{sec:discussion} and summarized in Section~\ref{sect:summary}. \section{MHD simulations and radiative transfer} \label{sect:MHDRT} \subsection{The $\texttt{RAMSES}$ MHD simulation} \label{sect:MHDSimulation} \begin{figure*} \begin{center} \includegraphics[width=.36\textwidth]{dens_xy_isrf.pdf} \includegraphics[width=.31\textwidth]{pres_xy_isrf.pdf} \includegraphics[width=.31\textwidth]{tg_xy_isrf.pdf} \end{center} \caption{$\texttt{RAMSES}$ MHD parameters averaged cell by cell along the Z axis of the MHD cube: gas density $n_{\mathrm{gas}}$ (left), gas pressure $P_{\mathrm{gas}}$ (center), and gas temperature $T_{\mathrm{gas}}$ (right). The vector field shows the averaged magnetic $B_{\mathrm{x}}$ and $B_{\mathrm{y}}$ components and the colorbar shows $B=(B_{\mathrm{x}}^2+B_{\mathrm{y}}^2)^{1/2}$.
Contour lines indicate the logarithm of column density $N_{\mathrm{H}}$.} \label{fig:MHD_input} \end{figure*} As a model for a volume of neutral ISM material including both diffuse and dense gas on the way to forming molecular clouds, we consider a single snapshot from an MHD simulation computed with the adaptive-mesh-refinement code $\texttt{RAMSES}$~\citep[][]{Teyssier2002,Fromang2006}. This particular simulation of interstellar MHD turbulence is the same as the one used in \cite{Planck2015XX}, and we refer the reader to that paper and to \cite{hennebelle_08}, where the simulation was originally presented, for more detail. To give its essential characteristics, it follows the formation of structures of cold neutral medium gas (CNM, $n_{\mathrm{gas}} \sim 100\,\mathrm{cm}^{-3}$, $T_{\mathrm{gas}} \sim 50\,\mathrm{K}$) within head-on colliding flows of warm neutral medium (WNM, $n_{\mathrm{gas}} \sim 1\,\mathrm{cm}^{-3}$, $T_{\mathrm{gas}} \sim 8000\,\mathrm{K}$). The colliding flow setup provides a convenient way to form such a mixture of diffuse and dense structures reproducing several observational properties of turbulent molecular clouds~\citep[see, e.g.,][]{Hennebelle-Falgarone-2012}, although cloud formation may actually proceed through other mechanisms such as spiral density waves~\citep{Dobbs_2006}. The simulation volume is threaded by a magnetic field that is initially aligned with the direction of the flows. From this simulation, we extract data over a cubic subset $18\ \mathrm{pc}$ along each side, located near the center of the full $50\ \mathrm{pc}$ box. The extracted data comprise total gas density $n_{\mathrm{gas}}$, pressure $P_{\mathrm{gas}}$, and components $B_{\mathrm{x}}$, $B_{\mathrm{y}}$, and $B_{\mathrm{z}}$ of the magnetic field. 
Unlike in \cite{Planck2015XX}, however, we perform this extraction using the full resolution of the simulation ($0.05\,\mathrm{pc}$ per pixel) instead of the coarser $0.1\,\mathrm{pc}$ per pixel resolution that was used in \cite{Planck2015XX}. The average total gas density in the simulation cube is about $15\ \mathrm{cm}^{-3}$ leading to a total gas mass of $\approx 3400\,\mathrm{M}_{\odot}$, assuming a molecular weight $\mu=1.4$. The components of the magnetic field have a dispersion of $3\,\mu\mathrm{G}$ and a mean value of about $5\,\mu\mathrm{G}$ with a direction that is typically aligned with the flow~\citep[][Figure~15]{Planck2015XX}. We use this simulation first to allow for a direct comparison of our results with those obtained with the same simulation assuming a uniform alignment of dust grains along the magnetic field lines \citep{Planck2015XX}, and second because it is representative of the diffuse ISM, while still harboring dense cores ($n_{\mathrm{gas}} \sim 10^4$ cm$^{-3}$) where the drop in the grain alignment efficiency may be more pronounced. From this simulation we utilize the gas density $n_{\mathrm{gas}}$, the gas temperature $T_{\mathrm{gas}}$, and the magnetic field magnitude as well as its direction as input for our subsequent RT post-processing. In Figure \ref{fig:MHD_input} we show the gas density, temperature, pressure as well as the magnetic field direction. The maps show direct, unweighted average quantities over the LOS, i.e., along the $z$-axis of the simulation cube, for each direction. \subsection{Monte-Carlo propagation scheme of $\texttt{POLARIS}$} \label{sect:MCPropagation} The post-processing steps of the MHD data consist of two parts. First, the radiation field is calculated with a Monte-Carlo (MC) approach in order to derive the necessary quantities for dust heating and grain alignment. In a second step we create synthetic dust emission and polarization maps. 
For all the RT simulations we make use of the RT code \texttt{POLARIS}\ \citep[][]{Reissl2016}.\\ The local radiation field is determined by the 3D distribution of the dust and of the photon emitting sources. Commonly, the radiation field is quantified by the dimensionless parameter \citep[][]{Habing1968} \begin{equation} G_{\mathrm{0}}= \frac{1}{5.29\times 10^{-14}\ \mathrm{erg}\ \mathrm{cm^{-3}}} \int_{6\ \mathrm{eV}}^{13.6\ \mathrm{eV}} u_{E}\,\mathrm{d} E\, , \label{eq:G0} \end{equation} where $u_{E}$ is the spectral energy density of the radiation field within the energy band where photoelectric heating is most relevant. In this paper we consider two separate setups concerning the sources of radiation. For the setup ISRF we only use a parametrization of the spectral energy distribution (SED) as presented in \cite{Mathis1983} (see table \ref{tab:Setups}) for the MC sampling of wavelengths. Note that we keep track of both the wavelength and direction $\hat{k}$ of each photon package per grid cell. For the setup ISRF, photon packages are injected into the MHD simulation from a sphere surrounding the grid with a randomly sampled $\hat{k}$ unit vector. Since the considered grain alignment (see Section \ref{sect:RATAlignment}) is sensitive to the radiation field, we investigate a second case with an additional source of radiation. For this setup STAR we consider a star (see table \ref{tab:Setups}) at the very center of the grid, in addition to the ISRF radiation, in order to quantify the influence of the radiation field on dust heating and grain alignment. Here, the photon packages start with a random direction $\hat{k}$ from the very position of the star, whereas the wavelengths of the photons are sampled from the Planck function.
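As a concrete illustration of Equation~\eqref{eq:G0}, the following Python snippet evaluates $G_{\mathrm{0}}$ from a tabulated spectral energy density $u_E$; the flat spectrum used here is a made-up test input normalised to the Habing value, not the \cite{Mathis1983} SED used in the simulations:

```python
import numpy as np

ERG_PER_EV = 1.602176634e-12        # 1 eV in erg
U_HABING = 5.29e-14                 # erg cm^-3, normalisation of Eq. (G0)

def trapezoid(y, x):
    # simple trapezoidal quadrature on a tabulated grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E = np.linspace(6.0, 13.6, 200) * ERG_PER_EV     # photon energies [erg]
# Hypothetical flat spectral energy density u_E [erg cm^-3 / erg], chosen so
# that its band integral over 6-13.6 eV equals the Habing value exactly
u_E = np.full_like(E, U_HABING / ((13.6 - 6.0) * ERG_PER_EV))

G0 = trapezoid(u_E, E) / U_HABING   # band integral of u_E, in Habing units
```

By construction this test spectrum returns $G_{\mathrm{0}}=1$; in \texttt{POLARIS}\ the same band integral is accumulated cell by cell from the Monte-Carlo photon packages.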
We note that the setup STAR is not entirely self-consistent since the star is added in post-processing and does not form in the MHD simulation itself, so the magnetic field and the gas do not respond to the stellar feedback. Hence, our model lacks the expected density cavity and the deformation of field lines in the vicinity of the star. Nevertheless, we provide the STAR setup in order to explore the influence of the radiation field and subsequent RAT alignment on ISM polarization patterns in a controlled environment. This way, we can ensure that any deviation compared to the ISRF setup is purely due to radiation since magnetic field and gas properties remain the same. \begin{table*}[t] \centering \begin{tabular}[]{ |l | c |} \hline setup & description \\ \hline ISRF & diffuse and isotropic ISRF with the SED from \cite{Mathis1983} with $G_{\mathrm{0}}=1$\\ \hline STAR & ISRF plus one additional star at the very center of the grid with $R_{*}=15\ R_\odot$ and $T_{*}=15000\ \mathrm{K}$\\ \hline \end{tabular} \caption{Properties of the radiation field setups for the MC dust grain heating and alignment simulations.} \label{tab:Setups} \end{table*} \begin{table}[b] \centering \begin{tabular}[]{ |l | c |} \hline alignment & description \\ \hline FIXED & $a_{\mathrm{alig}}=100\,\mathrm{nm}$\\ \hline RAT & $a_{\mathrm{alig}}$ calculated by RATs (see Section \ref{sect:RATAlignment})\\ \hline \end{tabular} \caption{Definition of the considered grain alignment mechanisms.} \label{tab:Alignment} \end{table} All MC RT simulations are performed with 100 wavelength bins logarithmically distributed over a spectrum of ${\lambda \in [92\,\mathrm{nm} - 2\,\mathrm{mm}]}$.
For the photon package propagation scheme we apply a combination of the continuous absorption technique introduced by \cite{Lucy1999} to keep track of the photons per grid cell and the temperature correction of \cite{BjorkmanWood2001} to ensure the correct spectral shift when a photon package gets absorbed and re-emitted. Assuming thermal equilibrium between the absorbed and emitted energy, the dust temperature per cell can be calculated with \citep[see][for details]{Lucy1999,BjorkmanWood2001,Reissl2016} \begin{equation} \int \overline{C}_{\mathrm{abs},\lambda}\,J_{{\lambda}}\,\mathrm{d}\lambda = \int \overline{C}_{\mathrm{abs},\lambda}\,B_{\lambda}(T_{\mathrm{dust}})\,\mathrm{d}\lambda\;, \label{eq:Tdust} \end{equation} where $\overline{C}_{\mathrm{abs},\lambda}$ is the size averaged cross section of absorption. Here, we keep track of each of the temperatures corresponding to the distinct grain populations (silicate and graphite) individually as well as an average dust temperature. In detail, we solve Equation~\eqref{eq:Tdust} with ${\overline{C}_{\mathrm{abs},\lambda}=\overline{C}_{\mathrm{abs},\lambda, \mathrm{silicate}}}$, ${\overline{C}_{\mathrm{abs},\lambda}=\overline{C}_{\mathrm{abs},\lambda, \mathrm{graphite}}}$, and ${\overline{C}_{\mathrm{abs},\lambda}=\overline{C}_{\mathrm{abs},\lambda, \mathrm{silicate}}+\overline{C}_{\mathrm{abs},\lambda, \mathrm{graphite}}}$ separately once the radiation field per cell is known (see also Appendix \ref{app:RTequation}). Consequently, all the dust polarization simulations performed in this work are for an average dust grain and ignore the possible size dependence of the dust temperature. For the MC simulations of the radiation field we consider the dust grains to be spherical since the dust shape and orientation has, for moderate elongations, only a minor influence on the grain absorption and scattering cross-sections \citep[see \textit{e.g.}][Figure~2]{DraineFraisse2009}. 
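Equation~\eqref{eq:Tdust} defines $T_{\mathrm{dust}}$ only implicitly; since the emitted power increases monotonically with temperature, it can be solved by bisection. The following Python sketch does so for a toy cross-section and a diluted Planckian radiation field (both invented for illustration; \texttt{POLARIS}\ uses the tabulated $\overline{C}_{\mathrm{abs},\lambda}$ and the Monte-Carlo $J_\lambda$ of each cell):

```python
import numpy as np

h, c, k_B = 6.62607e-27, 2.99792e10, 1.38065e-16     # Planck, light speed, Boltzmann (CGS)

lam = np.logspace(np.log10(9.2e-6), np.log10(0.2), 600)   # 92 nm .. 2 mm [cm]

def planck(lam, T):
    """B_lambda(T) in CGS; the exponent is clipped to avoid overflow."""
    x = np.clip(h * c / (lam * k_B * T), None, 700.0)
    return 2.0 * h * c**2 / lam**5 / np.expm1(x)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

C_abs = 1e-9 * (1e-4 / lam)        # toy size-averaged absorption cross-section ~ 1/lambda
J = 1e-14 * planck(lam, 1.0e4)     # toy strongly diluted 10^4 K radiation field

absorbed = trapezoid(C_abs * J, lam)      # left-hand side of Eq. (Tdust)

def emitted(T):                            # right-hand side of Eq. (Tdust)
    return trapezoid(C_abs * planck(lam, T), lam)

lo, hi = 1.0, 2000.0                       # bisection on the energy balance
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if emitted(mid) < absorbed else (lo, mid)
T_dust = 0.5 * (lo + hi)
```

With these toy inputs the balance settles at a $T_{\mathrm{dust}}$ of order $10\,\mathrm{K}$, a reasonable diffuse-ISM value; the same one-dimensional root-find applies cell by cell, and per grain population, once the radiation field is known.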
The local spectral energy density, $u_\lambda$, is defined as the sum of the magnitudes of the intensities $\vec{J}_{\lambda}$ of all photon packages that crossed a particular grid cell, \begin{equation} u_{\lambda} = \frac{4\pi}{c}\sum \left| \vec{J}_{\lambda} \right|\;, \end{equation} and the spectral anisotropy factor of the radiation field, $\gamma_\lambda$, can be defined as the magnitude of the vector sum of all radiation normalized by the total energy density \begin{equation} \gamma_{\lambda} = \frac{\left|\sum \vec{J}_{\lambda} \right|}{\sum \left| \vec{J}_{\lambda} \right|}\, . \label{eq:gamma_lambda} \end{equation} Consequently, $\gamma_{\lambda}=1$ corresponds to fully anisotropic radiation at a given wavelength, i.e., a plane wave, while $\gamma_{\lambda}=0$ corresponds to a totally diffuse, i.e., fully isotropic, radiation field. The total energy density per cell, $u_{\mathrm{rad}}$, is then defined by \begin{equation} u_{\mathrm{rad}}=\int u_{\lambda}\,\mathrm{d}\lambda\,, \end{equation} where we integrate over the full wavelength range, from which we derive the normalized quantity $U_{\rm rad} = u_{\rm rad}/u_{\rm ISRF}$, where $u_{\rm ISRF}=8.64\times 10^{-13}\ \mathrm{erg\ cm}^{-3}$ is the total energy of the ISRF per unit volume in our solar neighborhood as introduced by \cite{Mezger1982}.
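The limiting behaviour of the anisotropy factor of Equation~\eqref{eq:gamma_lambda} can be checked directly; the following Python snippet evaluates $\gamma$ for two synthetic bundles of photon-package intensity vectors (the bundles are made up for the test):

```python
import numpy as np

rng = np.random.default_rng(2)

def anisotropy(J):
    """gamma = |sum_k J_k| / sum_k |J_k| for intensity vectors J (one row per package)."""
    return np.linalg.norm(J.sum(axis=0)) / np.linalg.norm(J, axis=1).sum()

# Isotropic bundle: unit vectors with uniformly random directions -> gamma ~ 0
iso = rng.standard_normal((100000, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)

# Plane wave: every package shares the same direction -> gamma = 1
plane = np.tile(np.array([1.0, 0.0, 0.0]), (1000, 1))

gamma_iso, gamma_plane = anisotropy(iso), anisotropy(plane)
```

For the isotropic bundle $\gamma$ only vanishes statistically, at a rate $\sim N^{-1/2}$ in the number of packages, which is one reason the Monte-Carlo photon count matters for the alignment quantities discussed below.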
For later analysis and discussion, we define for each position in the MHD cube the average anisotropy factor of the radiation field \citep[e.g.][]{Bethell2007,Tazaki2017} to be \begin{equation} \left\langle \gamma \right\rangle = \frac{1}{u_{\mathrm{rad}}} \int{ \gamma_\lambda \, u_{\lambda} \,\mathrm{d}\lambda } \end{equation} and the average $\cos{\vartheta}$ as \begin{equation} \left\langle \cos{\vartheta} \right\rangle = \frac{1}{u_{\mathrm{rad}}} \int{ \cos{\vartheta_\lambda} \,u_{\lambda} \,\mathrm{d}\lambda}\,, \label{eq:AvgCos} \end{equation} where ${ \vartheta_\lambda = \angle \left(\vec{k}_\lambda,\vec{B} \right) }$ is the angle between the direction $\vec{k}_\lambda$ of the radiation field per wavelength bin and the direction $\vec{B}$ of the local magnetic field lines. The quantification of polarized radiation can be done in the Stokes vector formalism with $S=(I,Q,U,V)^T$ where $I$ is the total intensity, $Q$ and $U$ are the components of the linear polarization, and $V$ is the circularly polarized part. For the subsequent ray-tracing we make use of the full set of RT equations in the Stokes vector formalism in order to carry the full information of dust emission and extinction including polarization through the grid. We use a Runge-Kutta solver to project the rays onto a detector that stores each of the Stokes components as well as optical depth and column density. For the intensity $I$ we handle the contribution of silicate and graphite grains separately. Finally, the fraction of linear polarization is defined to be \begin{equation} p=\frac{\sqrt{Q^2+U^2}}{I}\, . \end{equation} The orientation angle of the polarization vectors can be derived by \begin{equation} \psi = \frac{1}{2}\mathrm{atan2} \left(U,Q\right) \label{eq:PolAng} \end{equation} in the IAU convention for angles, as in \cite{PlanckXIX2015}. We use the polarization angle dispersion function $\S$ introduced by \cite{Serkowski1958} and \cite{Hildebrand2009}. 
This function is a measure of the local dispersion of magnetic field orientations within an annulus $\vec{\delta}$ around each position $\vec{r}$. The dispersion function reads \begin{equation} \S(\vec{r},\delta) = \sqrt{ \frac{1}{N} \sum_{i=1}^N \left[\psi(\vec{r}) - \psi(\vec{r}+\vec{\delta}_{\mathrm{i}})\right]^2} \label{eq:PolCorAngle} \end{equation} where $N$ is the number of pixels within the annulus. Following \cite{Planck2015XX}, we place the MHD cube at a distance of $D=100\ \mathrm{pc}$, use ${\delta ={\rm FWHM}/2}$, here with a FWHM of $5'$, and a pixellisation of 3 pixels per beam. \subsection{Radiative transfer post-processing of the $\texttt{RAMSES}$ simulation} \label{sect:RTPostProcessing} Since the MC method is based on stochastic sampling, derived quantities are inherently prone to noise \citep{Hunt1995}; a qualitative analysis of the noise is provided in Appendix \ref{app:MCNoise}. Hence, we perform the MC simulation with $5\times 10^8$ photon packages per wavelength for the ISRF setup. For the STAR simulation we apply $5\times 10^8$ photon packages per wavelength for the ISRF and $2\times 10^7$ photon packages for the star in the very center of the MHD cube, constituting a balance between noise reduction and run-time. The quantity $G_{\mathrm{0}}$ seems to be particularly sensitive to the number of photons. For low photon numbers $G_{\mathrm{0}}$ stays far below unity, whereas $G_0=1$ is expected at the borders of the MHD grid considering the \cite{Mathis1983} ISRF. These photons are emitted towards the computational domain from a sphere of radius twice the sidelength of the MHD cube. Only this combination of the number of photons and sphere radius guarantees a $G_{\mathrm{0}} \approx 1$ and an anisotropy factor $\gamma_\lambda \approx 0$ on average over all photons entering the simulation domain. Photons permanently scatter or become absorbed and are subsequently re-emitted in the $\texttt{POLARIS}$ RT simulations.
Photons newly injected into the grid may be deflected out of the grid already after a few such events and cannot carry their energy deeper inside. Consequently, the average energy density $u_{\mathrm{rad}}$ is about $2-5\ \%$ lower than the expected $u_{\rm ISRF}$ towards the center of the $\texttt{RAMSES}\ $ MHD domain. As we will discuss below, such a loss of energy will only result in a modification of the polarization fractions by a fraction of a per cent. \section{The dust model} \label{sect:DustModel} \subsection{Grain properties}\label{sec:grainprop} Dust models compatible with observational constraints in the diffuse ISM in extinction, emission and polarization require distinct dust populations \citep{DraineFraisse2009,Siebenmorgen2014,Guillet2018}: one population of very small grains to reproduce the UV bump and the mid-IR emission bands, one population of non-spherical silicate grains to account for the observed polarization in the optical, in the mid-IR silicate bands and in the FIR and sub-millimeter, and one population of carbonaceous grains (graphite or amorphous carbon, spherical or not) to complete the fit. To confront the predictions of the RAT theory \citep{LazarianHoang2007,Hoang2014} to the statistics of dust polarized emission at $353\ \mathrm{GHz}$ obtained by the {\it Planck}\ collaboration \citep{Planck2015XX,Planck2018XII}, we use a simplified dust model adapted to the \texttt{POLARIS}\ code. It is composed of two distinct size distributions of graphite ($\rho_{\rm G} = 2.24$ g.cm$^{-3}$) and silicate ($\rho_{\rm S} = 3.0$ g.cm$^{-3}$) grains. Graphite grains are assumed to be spherical, while silicate grains are spheroidal with an oblate shape. We denote by $a_\parallel$ (resp. $a_\perp$) the size of the oblate silicate grain along (resp. perpendicular to) its symmetry axis, and by $s = a_\parallel/a_\perp = 0.5$ its aspect ratio. The sphere of equal volume has a radius $a = a_\parallel^{1/3}\,a_\perp^{2/3}$.
Each size distribution follows a power-law of index $q$, with cut-offs at $a_{\mathrm{min}}$ and $a_{\mathrm{max}}$, and mass per H $m$ [g/H]: \begin{equation} \frac{\text{d}n\left(a\right)}{\text{d}a} = \frac{3m\left(q+4\right)}{4\pi\rho\left(a_{\mathrm{max}}^{q+4}-a_{\mathrm{min}}^{q+4}\right)} \,a^q, \end{equation} where $a$ is the radius of the grain for spherical grains, and the radius of the sphere of equal volume for spheroidal grains. The absorption, scattering and polarization coefficients of spheroidal grains are calculated with the \texttt{DDSCAT}\ 7.3 code \cite[][]{DraineFlatau2013}. \texttt{DDSCAT}\ provides the differential cross sections for extinction, absorption, and circular polarization required for an all-encompassing radiative transfer (RT) scheme, but has numerical limitations for large dust grains and small wavelengths. For this reason, we do not calculate those cross-sections for $\lambda < 0.25\,\mu$m, a domain of the UV that is not of interest for our study. The absorption and scattering coefficients for spherical dust grains are calculated on the fly at all wavelengths by \texttt{POLARIS}\ itself with Mie theory, based on the refractive indices of the silicate and graphite grains \citep[][]{Weingartner2001}. Following the RAT theory as outlined in \cite{LazarianHoang2007} and \cite{Hoang2014}, we will assume that grains larger than a certain threshold size, $a_{\mathrm{alig}}$, are aligned along magnetic field lines, while other grains are not aligned, \textit{i.e.} they do not present any preferred orientation. The value of $a_{\mathrm{alig}}$, which depends on the local physical conditions, will be determined using the RAT theory implemented in the \texttt{POLARIS}\ code.
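As a sanity check, the normalization of this power-law size distribution can be verified numerically: integrating the grain mass over the distribution must recover the input dust mass per H. The following sketch is illustrative only (it is not part of the \texttt{POLARIS}\ setup); the parameter values are borrowed from the silicate population of this model.

```python
import numpy as np

def size_distribution(a, q, a_min, a_max, m, rho):
    """dn/da for the power-law size distribution of the text,
    normalized so that the total dust mass per H equals m [g/H]."""
    norm = 3.0 * m * (q + 4.0) / (
        4.0 * np.pi * rho * (a_max**(q + 4.0) - a_min**(q + 4.0)))
    return norm * a**q

# Illustrative values close to the silicate population of this model
q, a_min, a_max = -3.5, 8e-7, 4e-5   # index, cut-offs in cm (8 nm to 400 nm)
m, rho = 0.0034, 3.0                 # dust mass per H [g/H], grain density [g cm^-3]

a = np.logspace(np.log10(a_min), np.log10(a_max), 2000)
dnda = size_distribution(a, q, a_min, a_max, m, rho)
integrand = 4.0 / 3.0 * np.pi * rho * a**3 * dnda   # grain mass times dn/da
mass = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))
print(mass)  # recovers the input dust mass per H, ~0.0034
```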
However, to be able to compare our results with those obtained ignoring variations in the grain alignment efficiency \citep{Planck2015XX}, we also define a FIXED alignment setup in which $a_{\mathrm{alig}}=100\ \mathrm{nm}$ for silicate grains throughout the cube, independently of the local physical conditions (see Table \ref{tab:Alignment}). We outline the physics of grain alignment in more detail in Section \ref{sect:RATAlignment}. \subsection{Radiative transfer with spherical grains} A few arguments favour using the optical properties of spherical grains, and not those of spheroidal grains, to compute the radiation transfer and dust temperature with \texttt{POLARIS}\ (see Section \ref{sect:RTPostProcessing} and \ref{sect:RATAlignment} for details). First, the dust shape and orientation have, for moderate elongations, only a minor influence on the grain absorption and scattering cross-sections \citep[see \textit{e.g.}][Figure~2]{DraineFraisse2009}. Second, only silicate grains are spheroidal in our model, contributing $\sim50\%$ of the dust extinction in the optical \citep{Weingartner2001}. Third, we are not able to correctly compute the radiation transfer by oblate grains in the far-UV ($\lambda<0.25\,\mu\mathrm{m}$), because the dust cross-sections for oblate grains could not be calculated for large grains at these wavelengths (see Section \ref{sec:grainprop}), which would impose an extrapolation of those cross-sections down to the geometrical limit. Fourth, in the RAT theory the properties of the radiation field in each cell must be known to determine the grain alignment efficiency. As a consequence, the \texttt{POLARIS}\ code must first make an assumption on the grain alignment (random or perfect alignment, for example) when computing the radiation transfer and dust temperature.
Therefore, for the radiation transfer calculations leading to the determination of the characteristics of the radiation field and dust temperature in each cell, and for this step only, we will replace oblate silicate grains by spherical grains of the same equivalent size. \subsection{Fitting extinction and polarization curves in the optical}\label{sec:fitdustmodel} \begin{figure} \includegraphics[width=0.49\textwidth]{psNH_aalig} \caption{Starlight polarization percentage for a column density $N_{\rm H} = 10^{21}\,$cm$^{-2}$, as a function of the wavelength, for increasing values of the alignment parameter $a_{\mathrm{alig}}$. Silicate grains are assumed to be aligned, while graphite grains are not. The dashed curve is the mean polarization curve observed in the diffuse ISM ($A_V\sim 1$), which constrains both the upper limit $a_{\rm max}^{\rm S}$ of the silicate size distribution and the minimal size of aligned grains $a_{\mathrm{alig}}$. Using our model, it is best fitted in the optical and NIR for $a_{\mathrm{alig}} = 100\,\mathrm{nm}$ (see Section \ref{sec:fitdustmodel} for a discussion of the poor fit in the UV). } \label{fig:p_lambda} \end{figure} Our dust model must reproduce the mean extinction and polarization curves observed in the diffuse interstellar medium for moderate extinction ($A_V \sim 1$). Polarization curves in extinction are usually modelled by the Serkowski law \citep{Serkowski1975,Whittet1992}, \begin{equation} p(\lambda) = p_{\rm max}\,\exp\left\{ -K[\ln({\lambda/\lambda_{\rm max}})]^2 \right\}\;, \end{equation} where $\lambda_{\rm max}$ is the wavelength at which $p(\lambda)$ peaks, and $K$ controls the width at half maximum of the curve. The mean values observed in the diffuse ISM are $\lambda_{\rm max}=0.55\,\mu$m, and $K=0.92$ \citep{Whittet1992}. The maximal value of $p_{\rm max}/E(B-V)$ was long considered to be 9\% \citep{Serkowski1975}.
It has been recently reevaluated to at least 13\% \citep{Planck2018XII,Panopoulou2019}, corresponding to a polarization fraction $p_V/\tau_V \simeq 4.5\%$, with $\tau_V=1.086\,A_V$. For simplicity, lacking more constraints, we fix the power-law index of the silicate size distribution to $q_{\rm S} = -3.5$ \citep{Mathis1977}. The value of $\lambda_{\rm max}$ severely constrains the minimal size of aligned grains $a_{\mathrm{alig}}$, while the value of $K$ provides a looser constraint on the upper cut-off $a_{\rm max}^{\rm S}$ of the silicate size distribution, which is here the only aligned population. We adapt $a_{\rm max}^{\rm S}$ and $a_{\mathrm{alig}}$ to reproduce the overall shape of Serkowski's curve. Figure~\ref{fig:p_lambda} shows that a reasonable fit is obtained for $a_{\rm max}^{\rm S}=400\,$nm and $a_{\mathrm{alig}} = 100\,$nm. A better correspondence could not be obtained in the UV ($\lambda < 400\,$nm) using oblate grains if we imposed a peak close to $550\,$nm. This weakness is however of little importance for our investigation, first because we do not study this part of the polarization spectrum, and second because the weak UV polarization in the mean polarization curve is known to be entirely produced by large ($a \ge 0.1\,\mu$m) aligned grains \citep{KimMartin1995}, whose abundance is constrained by the optical and NIR part of the spectrum. Therefore, we do not expect any change in our conclusions with a better fit of the UV polarization spectrum using more complex size distributions as per \cite{DraineFraisse2009}, or a power-law size distribution with prolate grains replacing oblate grains as per \cite{Guillet2018}. 
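For reference, the Serkowski parametrization is straightforward to evaluate numerically; below is a minimal Python sketch using the mean diffuse-ISM parameters quoted above ($\lambda_{\rm max}=0.55\,\mu$m, $K=0.92$) and the reevaluated upper envelope $p_{\rm max}/E(B-V)=13\%$.

```python
import numpy as np

def serkowski(lam, p_max, lam_max=0.55, K=0.92):
    """Serkowski law p(lam) = p_max * exp(-K * ln(lam/lam_max)^2),
    with wavelengths in micron and mean diffuse-ISM parameters as defaults."""
    return p_max * np.exp(-K * np.log(lam / lam_max)**2)

lam = np.linspace(0.3, 1.5, 1201)   # optical to NIR wavelengths [micron]
p = serkowski(lam, p_max=0.13)      # upper envelope p_max / E(B-V) ~ 13 %
print(lam[np.argmax(p)])            # the curve peaks at lam_max = 0.55 micron
```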
\begin{figure} \centering \begin{minipage}[c]{1.0\linewidth} \begin{flushleft} \includegraphics[width=1.0\textwidth]{dust_model.pdf} \end{flushleft} \end{minipage} \caption{Extinction curve from the UV to the mid-IR of the applied \texttt{POLARIS}\ dust models of spherical (red line) and non-spherical (blue line) grains in comparison with the measurements (black line) of \cite{Mathis1990}.} \label{fig:DiffuseDust} \end{figure} The remaining parameters can be constrained with the mean extinction curve observed in the diffuse ISM. We use the \cite{Mathis1990} extinction curve per hydrogen, between 0.1 and 1 $\mu$m, with $a_{\rm min}^{\rm S}= 8\,$nm, $a_{\rm max}^{\rm S}= 400\,$nm, $a_{\rm min}^{\rm G} = 10\,$nm, $a_{\rm max}^{\rm G} = 170\,$nm, $q_{\rm G}=-3.9$, $m_{\rm S} = 0.0034$ and $m_{\rm G}=0.0021$. This yields a total dust-to-gas mass ratio of ${m_{\mathrm{dust}}/m_{\mathrm{gas}} = 0.55\ \%}$. Figure~\ref{fig:DiffuseDust} shows a comparison of the resulting output of the \texttt{POLARIS}\ code with the \cite{Mathis1990} mean extinction curve per H. The fit is good in the UV and optical, but not in the NIR, as expected for a silicate-graphite model with power-law size distributions. Replacing spheres by oblate grains only marginally affects the resulting extinction curve (Figure~\ref{fig:DiffuseDust}). In Section \ref{sec:sizedist}, we further discuss the potential impact on our results of the extinction curve in the NIR used for this particular MHD simulation. \begin{figure} \includegraphics[width=0.49\textwidth]{PsI_pvstauv_aalig.pdf} \caption{(Left axis) Polarization fraction at $353\ \mathrm{GHz}$, $P/I$, as a function of the alignment parameter $a_{\mathrm{alig}}$. (Right axis) Same for the polarization fraction in the V band, $p_V/\tau_V$, and the polarization ratio $R_{\rm S/V} = P/I/(p_V/\tau_V)$.
The vertical dotted line indicates the value of $a_{\mathrm{alig}}$ needed to reproduce the mean value of the Serkowski's parameter $\lambda_{\rm max}$ of $0.55\,\mu$m observed in diffuse and translucent LOS. An empirical fit to the dependence of $P/I$ on $a_{\rm alig}$ is provided for convenience.} \label{fig:pst_PsI_lambda} \end{figure} Figure~\ref{fig:pst_PsI_lambda} presents the resulting dependence of the polarization fraction in the optical (V band) and at $353\ \mathrm{GHz}$ as a function of our alignment parameter, $a_{\mathrm{alig}}$. Figure~\ref{fig:p_lambda} demonstrated that the mean value of $\lambda_{\rm max}$ observed in the diffuse ISM is obtained with our model for $a_{\mathrm{alig}} \simeq 100$\,nm. According to Figure~\ref{fig:p_lambda}, the observed range of variation of $\lambda_{\rm max}$ through the ISM ($0.4 \le \lambda_{\rm max} \le 0.8\,\mu$m, \citealt{Whittet1992,Vosh2016}) translates into a range of values for $a_{\mathrm{alig}}$, between 75 and 150 nm. Between these two values of $\lambda_{\rm max}$, we expect a drop of the polarization fraction by a factor of 2 in the optical, but only by a factor of 1.4 at $353\ \mathrm{GHz}$. This figure makes it clear that the dependence of the polarization fraction on the grain alignment efficiency differs in emission at $353\ \mathrm{GHz}$ and in extinction in the optical. A drop in grain alignment would therefore be easier to observe in the optical than at $353\ \mathrm{GHz}$, because of the steeper dependence on $a_{\mathrm{alig}}$. For the mean $\lambda_{\rm max} = 0.55\,\mu$m, our model predicts $p_V/\tau_V = 4.8\%$, thereby reproducing the highest polarization fraction $p/E(B-V) > 13\%$ observed in the optical \citep{Planck2018XII,Panopoulou2019}. The situation is different in emission where, with $P/I=16.3\%$, our model is 20\% below the highest polarization fraction at $353\,\mathrm{GHz}$ \citep[$p_{\rm max} = 20-22\%$,][]{PlanckXIX2015,Planck2018XII}.
Correspondingly, the value of the polarization ratio $R_{S/V} = P/I/(p_V/\tau_V)\simeq 3.4$ is weaker than the value $R_{S/V} \simeq 4.2$ observed in the diffuse and translucent ISM \citep{Planck2018XII}, but, as expected, within the range of values obtained with compact astrosilicates \citep[see][for a discussion of dust optical properties adapted to {\it Planck}\ observations]{Guillet2018}. These small discrepancies will have no consequence on our conclusions, as we will not attempt to reproduce the absolute value of the polarization fraction at $353\ \mathrm{GHz}$, but its relative variations with the environment, and especially with the column density. \section{Insights on the radiative spin-up model} \label{sect:RATAlignment} Simulating dust polarization by means of extinction and emission not only requires non-spherical dust grains but also a detailed knowledge of the efficiency of grain alignment with the magnetic field orientation. Here, we focus explicitly on the RAT alignment physics as it is outlined in \cite{Hoang2014}. \subsection{The fiducial radiative torque (RAT) physics}\label{sec:RATphysics} Abandoning the idea of perfectly aligned dust grains requires modelling the physics of non-spherical spinning dust grains having their minor principal axis precessing around the magnetic field direction.
In the RATs framework, a non-spherical irregular dust grain of equivalent radius $a$ can gain angular momentum through the torques $\Gamma_{\mathrm{rad}}$ exerted by an anisotropic radiation field \citep{Hoang2014}: \begin{equation} \Gamma_{\mathrm{rad}} = \pi a^2\int \left(\frac{\lambda}{2\pi}\right)\, \gamma_\lambda\, \cos(\vartheta_\lambda) \,Q_\Gamma(a,\lambda)\, u_\lambda\,\mathrm{d}\lambda\, , \label{eq:JRAT} \end{equation} where $\gamma_\lambda$ is the spectral anisotropy factor (Equation~(\ref{eq:gamma_lambda})), and $Q_\Gamma$ is the RAT efficiency \citep{Draine_Weingartner1996,Hoang2014}: \begin{equation} Q_{\mathrm{\Gamma}} = \begin{cases} Q_{\Gamma}^{\rm ref} &\mbox{if } \lambda \leq 1.8\, a \\ Q_{\Gamma}^{\rm ref} \times\left(\frac{\lambda}{1.8\, a} \right)^{\alpha_Q} & \mbox{otherwise} \end{cases}\, . \label{eq:QGamma} \end{equation} $Q_{\Gamma}^{\rm ref}$ and $\alpha_Q$ are parameters that depend on the grain shape and grain material. We note that the exact values of the parameters $Q_{\Gamma}^{\rm ref}$ and $\alpha_Q$ are not well constrained. Numerical calculations show that $Q_{\Gamma}^{\rm ref}$ can present a range of values between 0.01 and 0.4, and that $\alpha_Q$ is between $-2.6$ and $-4$ \citep{Hoang2014,Herranen2019}. We take an average of $\alpha_Q = -3$ as a reference value, and the exact value of $Q_{\Gamma}^{\rm ref}$ is determined for our dust model in Section \ref{sect:Calibrating}. Spinning grains tend to be disaligned by the random angular momentum transferred in collisions with gas particles \citep{Davis1951}, as well as by the emission of IR photons \citep{DraineLazarian1998}.
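The piecewise RAT efficiency of Equation~\eqref{eq:QGamma} can be transcribed literally; the following Python sketch uses $\alpha_Q=-3$ and $Q_{\Gamma}^{\rm ref}=0.14$ (the reference values adopted in this work) as defaults.

```python
import numpy as np

def Q_gamma(lam, a, Q_ref=0.14, alpha_Q=-3.0):
    """RAT efficiency of Eq. (QGamma): constant Q_ref for lambda <= 1.8 a,
    power-law decline (lambda / (1.8 a))**alpha_Q beyond (lam and a in cm)."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam <= 1.8 * a, Q_ref, Q_ref * (lam / (1.8 * a))**alpha_Q)

a = 1.0e-5                   # a 100 nm grain, expressed in cm
print(Q_gamma(5.0e-6, a))    # flat regime: returns Q_ref
print(Q_gamma(1.0e-4, a))    # 1 micron: suppressed by (lambda / 1.8 a)^-3
```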
The gas drag on dust grains acts on a characteristic timescale of \begin{equation} \tau_{\mathrm{gas}} = \frac{3}{4\sqrt{\pi}}\frac{I_{\mathrm{||}}}{\mu m_{\mathrm{H}}\,n_{\mathrm{gas}}\,v_{\mathrm{th}} \,a^4 \,\Gamma_{\mathrm{||}}} = \frac{2\sqrt{\pi}}{5} \frac{\rho s^{-2/3}}{\mu m_{\mathrm{H}} \,\Gamma_{\mathrm{||}}} \frac{a}{n_{\mathrm{gas}}\,v_{\mathrm{th}}}\;, \label{eq:taugas} \end{equation} where $v_{\mathrm{th}}=(2k_{\mathrm{B}}T_{\mathrm{gas}}/(\mu m_{\mathrm{H}}))^{1/2}$ is the thermal velocity of the gas particles (of mean mass $\mu m_{\mathrm{H}}$), $\Gamma_{\mathrm{||}}\approx 1.1$ is a geometrical factor for oblate grains of aspect ratio $s=0.5$, and ${ I_{\mathrm{||}}=\frac{8\pi}{15}\rho_{\rm S} a_\parallel a_\perp^4 = \frac{8\pi}{15}\rho_{\rm S} s^{-2/3}\,a^5 }$ is the moment of inertia of the oblate grain with respect to the minor axis. The drag timescale $\tau_{\rm FIR}$ by IR photon emission can be accounted for by a single parameter $\mathrm{FIR} \equiv \tau_{\rm gas}/\tau_{\rm FIR}$ \citep[][]{DraineLazarian1998,LH18}: \begin{equation} \mathrm{FIR} = 0.4\left(\frac{0.1\,\mu{\rm m}}{a}\right)\left(\frac{30\,{\rm cm}^{-3}}{n_{\mathrm{gas}}}\right)\sqrt{\frac{100\,{\rm K}}{T_{\mathrm{gas}}}}\left(\frac{u_{\mathrm{rad}}}{u_{\rm ISRF}}\right)^{2/3}\;. \label{eq:FIR} \end{equation} Combining gas drag and IR photon emission leads to a total drag timescale for the dust grain of \begin{equation} \tau_{\mathrm{drag}}=\frac{\tau_{\mathrm{gas}}}{1+\mathrm{FIR}}\,. \end{equation} The alignment of dust grains with their minor axis parallel to the magnetic field direction is closely connected to overcoming the randomization of the rotation axis by gas bombardment and emission of IR photons.
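The FIR ratio and the resulting total drag timescale are simple to evaluate; a Python sketch of Equation~\eqref{eq:FIR} and the combined $\tau_{\mathrm{drag}}$, evaluated at the reference conditions appearing in Equation~\eqref{eq:FIR} (illustrative only):

```python
def fir_ratio(a, n_gas, T_gas, u_ratio):
    """FIR = tau_gas / tau_FIR following Eq. (FIR).
    a in micron, n_gas in cm^-3, T_gas in K, u_ratio = u_rad / u_ISRF."""
    return 0.4 * (0.1 / a) * (30.0 / n_gas) * (100.0 / T_gas)**0.5 * u_ratio**(2.0 / 3.0)

def tau_drag(tau_gas, fir):
    """Total rotational drag timescale: gas drag shortened by IR photon emission."""
    return tau_gas / (1.0 + fir)

fir = fir_ratio(a=0.1, n_gas=30.0, T_gas=100.0, u_ratio=1.0)
print(fir)                 # 0.4 at the reference CNM-like conditions
print(tau_drag(1.0, fir))  # the gas drag time is shortened by a factor 1.4
```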
In the absence of any aligning torques, dust grain rotation is at thermal equilibrium with the gas, leading to a grain angular momentum of \begin{equation} J_{\mathrm{th}}=\sqrt{k_{\mathrm{B}}T_{\mathrm{gas}} I_{\mathrm{||}}} \propto a^{2.5}\sqrt{T_{\mathrm{gas}}}\,. \label{eq:Jth} \end{equation} We note that the magnitude $J_{\mathrm{th}}$ remains constant once the dust grains thermalize with the gas, while the orientation remains randomized over time. In order to ensure the alignment of dust with the magnetic field direction, the spin-up process by RATs needs to dominate over gas collisions and IR photon emission and bring grains to suprathermal rotation \citep{Hoang2014}. Following \citet{Hoang2008}, we will assume that dust grains are aligned in a stable configuration for \begin{equation} \frac{J_{\mathrm{rad}}}{J_{\mathrm{th}}} = \frac{\tau_{\mathrm{drag}}\,\Gamma_{\mathrm{rad}}}{J_{\mathrm{th}}} \ge 3\,. \label{eq:JradoverJth} \end{equation} This condition defines the minimal grain size $a_{\mathrm{alig}}$ for dust grains to be aligned. If we use the approximate expression $Q_\Gamma \propto (\lambda/a)^{-2.7}$ \citep{Hoang2014}, and momentarily restrict our study to the case where the disaligning effect of IR photon emission can be neglected with respect to collisions with gas particles ($\mathrm{FIR} \ll 1$), this minimal grain size follows the scaling: \begin{equation} a_{\mathrm{alig}}^{\mathrm{FIR}\ll 1} \propto \left(\frac{Q_{\Gamma}^{\rm ref}\,\langle\gamma\rangle\,\langle\cos\vartheta\rangle\,U_{\rm rad}\,}{n_{\mathrm{gas}}\,T_{\mathrm{gas}}}\right)^{-1/3.2} \,. \label{eq:aalig_powerlaw} \end{equation} This expression shows that the grain alignment radius is a slowly varying function of the ratio between the gas pressure and an effective intensity $Q_{\Gamma}^{\rm ref}\,\langle\gamma\rangle\,\langle\cos\vartheta\rangle\,U_{\rm rad}$ of the anisotropic component of the radiation field.
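The slow variation implied by the $-1/3.2$ exponent of Equation~\eqref{eq:aalig_powerlaw} can be illustrated numerically; in the sketch below the normalization to $100\,$nm is an arbitrary illustrative choice, since only relative variations are meaningful in a proportionality relation.

```python
def a_alig_fir_small(spinup, spinup_ref=1.0, a_ref=100.0):
    """Relative alignment radius in the FIR << 1 limit, Eq. (aalig_powerlaw):
    a_alig propto (Q_ref <gamma> <cos theta> U_rad / (n_gas T_gas))**(-1/3.2).
    'spinup' is that combination, normalized so spinup_ref maps to a_ref [nm]."""
    return a_ref * (spinup / spinup_ref)**(-1.0 / 3.2)

# A factor-10 increase of the spin-up parameter shrinks a_alig by only ~2x,
# illustrating the slow variation noted in the text
print(a_alig_fir_small(10.0) / a_alig_fir_small(1.0))  # 10**(-1/3.2) ~ 0.487
```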
The final condition for grains to be aligned with the magnetic field direction requires a stable Larmor precession around the magnetic field. This condition can be estimated by comparing the Larmor precession timescale $\tau_{\rm larm}$ \citep{LazarianHoang2007} \begin{equation} \tau_{\rm larm} \propto \frac{a^2\,s^2\,\rho\,T_{\mathrm{dust}}}{\chi\,B}\;, \end{equation} accounting for the interplay of field strength and the paramagnetic properties of the grain, with the gas drag timescale $\tau_{\rm gas}$ (Equation~(\ref{eq:taugas})). If the grain can complete its precession before any gas-grain interaction significantly affects its angular momentum, it can be considered to be aligned with the magnetic field direction. Consequently, for $\tau_{\rm larm} < \tau_{\rm gas}$ a grain is considered to be aligned with the magnetic field direction, which defines the maximal grain radius $a_{\mathrm{larm}}$ beyond which grains cease to be aligned along the magnetic field lines \citep{LazarianHoang2007} \begin{equation} a_{\mathrm{larm}}=2.71\times 10^5 \frac{s^2\,\left( \chi/10^{-4}\right)\left(B/5\,\mu\mathrm{G}\right)} {\left(n_{\mathrm{gas}}/30\,\mathrm{cm}^{-3}\right)\left(T_{\mathrm{dust}}/15\, \mathrm{K}\right) \left(\sqrt{T_{\mathrm{gas}}/100\, \mathrm{K}}\right)}\, \mathrm{cm}\, . \label{eq:TimeLarmor} \end{equation} Here, $\chi$ is the paramagnetic susceptibility of the grain material. Graphite grains have a magnetic susceptibility of about $\chi=9.6\times 10^{-10}$ \citep[][]{Weingartner2006}, whereas for ordinary paramagnetic silicate we have $\chi=4.2\times 10^{-4}$ \citep[][]{Hunt1995,Hoang2014A}. In essence, graphite can barely perform a stable Larmor precession for the range of parameters of the $\texttt{RAMSES}$ simulation (see Equation \ref{eq:TimeLarmor}), due to this difference of about six orders of magnitude in magnetic susceptibility.
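The contrast between the two materials can be made explicit by evaluating Equation~\eqref{eq:TimeLarmor} for both susceptibilities; in the sketch below the prefactor is taken at face value from the equation, and only the silicate-to-graphite ratio matters for the argument.

```python
def a_larm(chi, B=5.0, s=0.5, n_gas=30.0, T_dust=15.0, T_gas=100.0):
    """Maximal radius for stable Larmor precession, Eq. (TimeLarmor) taken
    at face value (chi: paramagnetic susceptibility, B in micro-Gauss)."""
    return (2.71e5 * s**2 * (chi / 1e-4) * (B / 5.0)
            / ((n_gas / 30.0) * (T_dust / 15.0) * (T_gas / 100.0)**0.5))

chi_silicate, chi_graphite = 4.2e-4, 9.6e-10
ratio = a_larm(chi_silicate) / a_larm(chi_graphite)
print(ratio)  # ~4.4e5: silicates precess stably where graphite cannot
```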
Laboratory experiments also suggest that most of the iron is bound within the silicate component of the ISM dust \citep[see e.g.][]{Davoisne2006,Demyk2017}, providing super-paramagnetic properties to the silicate populations, for which a much better alignment is predicted \citep{JonesSpitzer1967}, if not perfect \citep{Lazarian2008,HoangLazarian2016}. In our model we will therefore assume that only silicate grains are aligned with the magnetic field direction, a choice that is consistent with observations of dust polarization \citep[][]{Mathis1986,Costantini2005,DraineFraisse2009,Vaillancourt2012}. The nutation of the grain during its precession will tend to reduce polarization. This can be quantified by the Rayleigh reduction factor ${R=\left\langle Q_{\mathrm{J}}Q_{\mathrm{X}} \right\rangle}$ \citep[][see also Appendix \ref{app:RTequation}]{Greenberg1968,Roberge1999}. $Q_{\mathrm{J}}$ characterizes the degree of alignment of the angular momentum $J$ with the magnetic field direction, whereas $Q_{\mathrm{X}}$ describes the internal degree of alignment between the minor principal axis $a_{||}$ of the dust grain and $J$. The average is taken over time and over the distribution function of the angle between the spin axis and the minor axis (for $Q_{\mathrm{X}}$) and of the angle between the spin axis and the magnetic field (for $Q_{\mathrm{J}}$). Radiative torques can align grains with the magnetic field $\vec{B}$ at two distinct attractor points: one characterized by $J \gg J_{\mathrm{th}}$ (highJ hereafter), and one where $J$ is of the same order as $J_{\mathrm{th}}$ (lowJ). While highJ corresponds to a perfect alignment, meaning $Q_{\mathrm{J}}\approx 1$ and $Q_{\mathrm{X}}\approx 1$, the lowJ attractor point is less well constrained.
For paramagnetic materials such as pure silicate without iron inclusions, the fraction of highJ to lowJ alignment, together with the values of $Q_{\mathrm{J}}$ and $Q_{\mathrm{X}}$ in the lowJ case, are not well determined by the RATs theory \citep{Hoang2014}. As discussed in \cite{HoangLazarian2016}, a significant fraction of dust grains in the lowJ attractor would prevent the model from reproducing the highest polarization fractions observed by {\it Planck} in the diffuse ISM \citep[$p_{\mathrm{max}}\simeq 20\%$,][]{PlanckXIX2015}. Alternatively, the polarization fraction could also be increased by introducing larger grains \citep[][]{Bethell2007}, because this would increase the mass fraction of aligned grains. However, the presence of a significant fraction of large grains would prevent the dust model introduced in Section \ref{sect:DustModel} from reproducing the mean Serkowski's law and extinction curve of the diffuse ISM. Thus, we make the assumption that silicate grains have ferromagnetic inclusions. Consequently, silicate grains align only at the highJ attractor point, and the Rayleigh reduction factor for RAT alignment is: \begin{equation} R(a) = \begin{cases} 1 & \mbox{if } a_{\mathrm{alig}} < a < a_{\mathrm{larm}} \\0 & \mbox{otherwise} \end{cases}\, . \label{eq:RayleighRAT} \end{equation} Assuming that silicates settle only at highJ would also prevent the so-called wrong alignment, that is the alignment of the major principal axis of the grain with the magnetic field direction \citep[see][]{LazarianHoang2007}. Thus, we do not model or discuss the implications of a possible wrong alignment of dust grains. Nevertheless, we note that $\texttt{POLARIS}$ is in principle able to calculate the internal alignment efficiency \citep[][]{Reissl2016} at lowJ.
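Under the highJ-only assumption, the Rayleigh reduction factor of Equation~\eqref{eq:RayleighRAT} reduces to a simple top-hat in grain size; a minimal Python sketch (the numerical sizes are illustrative):

```python
def rayleigh_reduction(a, a_alig, a_larm):
    """Top-hat Rayleigh reduction factor of Eq. (RayleighRAT):
    perfect alignment (R = 1) for a_alig < a < a_larm, none otherwise."""
    return 1.0 if a_alig < a < a_larm else 0.0

# Grain sizes in nm: only silicates above the alignment threshold contribute
print(rayleigh_reduction(200.0, a_alig=100.0, a_larm=1e7))  # 1.0 (aligned)
print(rayleigh_reduction(50.0, a_alig=100.0, a_larm=1e7))   # 0.0 (too small)
```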
Furthermore, $a_{\mathrm{larm}}$ is only of minor relevance for silicate grains in our dust model, since $a_{\mathrm{max}} \ll a_{\mathrm{larm}}$ even for ordinary paramagnetic grains, let alone superparamagnetic ones (see also Appendix \ref{app:Alignment}). Altogether, the exact parametrization of $R(a)$ (i.e. with or without internal alignment) is of minor relevance for the dust polarization calculations presented in this article, since more sophisticated assumptions would scale down the overall degree of linear polarization without affecting the polarization patterns at the smaller scales \citep[see e.g.][]{Brauer2016}. \subsection{Calibrating the RAT efficiency $Q_\Gamma$ on observational data} \label{sect:Calibrating} Our Equation~\eqref{eq:QGamma} involves the physical parameter $Q_{\Gamma}^{\rm ref}$ that controls the efficiency of radiative torques. The higher $Q_{\Gamma}^{\rm ref}$, the better grains are aligned. The value of $Q_{\Gamma}^{\rm ref}$ must be determined using numerical tools like \texttt{DDSCAT}\ by calculating the radiative torque efficiency for the particular grain shape and material that constitutes the aligned dust population, here oblate silicate grains of axis ratio $s=1/2$. \cite{Herranen2019} performed the most recent and extensive study of the dependence of $Q_\Gamma$ on the ratio $\lambda/a$, for various grain shapes and materials. Although $Q_{\Gamma}$ is not strictly constant at low $\lambda/a$ according to these calculations, a constant value for $Q_{\Gamma}^{\rm ref}$ between 0.05 and 0.4 appears to be a reasonable model owing to the scatter in the calculations presented for different shapes \citep[][Figure~20]{Herranen2019}. This theoretical value for $Q_{\Gamma}^{\rm ref}$ can be compared to the value that is needed to obtain an alignment parameter $a_{\mathrm{alig}}$ of $100\,$nm, the value necessary for our model to reproduce the mean Serkowski's curve observed in the diffuse and translucent ISM (see Section \ref{sect:DustModel}).
Using our \texttt{RAMSES}\ simulation with \texttt{POLARIS}, we find $Q_{\Gamma}^{\rm ref}=0.14$, a value that we will use from now on. \subsection{Phase diagram for the grain alignment efficiency}\label{sec:phasediagram} It has long been established that suprathermal rotation can allow for grain alignment \citep{Purcell1975,Purcell1979}. In the RAT modeling of \cite{Hoang2014}, grains are assumed to be aligned if the local physical conditions make them rotate three times faster than in thermal equilibrium with the gas. Given any efficient spin-up process \citep[][]{Lazarian2015}, this necessary prerequisite allows dust grains to align with the magnetic field direction because of paramagnetic effects acting on a microscopic level \citep[see e.g.][for details]{Barnett1915,Davis1951,JonesSpitzer1967,Purcell1979}. Figure~\ref{fig:aalig_modelHL14} presents a synthetic view of the dependence of $a_{\mathrm{alig}}$ on the local physical conditions in the \cite{Hoang2014} RAT theory, in the form of a phase diagram for the diffuse ISM. The $y$ axis is the spin-up parameter $\Qgammazeta\,\langle\gamma\rangle\,\langle\cos{\vartheta}\rangle\,U_{\rm rad}/(n_{\rm gas} T_{\rm gas})$ (see Eq.~\eqref{eq:aalig_powerlaw}). The $x$ axis is the FIR ratio (Eq.~\eqref{eq:FIR}) calculated for the reference value $a=100\,$nm. This phase diagram allows one to estimate the grain alignment radius predicted by RATs for any physical conditions, as long as the wavelength dependence of the radiation field can be reasonably described by the ISRF with a scaling factor $U_{\rm rad}$. At low FIR ratio, $\tau_{\mathrm{drag}}\simeq \tau_{\mathrm{gas}}$, and the alignment radius becomes independent of the FIR ratio. At high FIR ratio, $\tau_{\mathrm{drag}} \simeq \tau_{\mathrm{FIR}}\propto a^2\,U_{\rm rad}^{-2/3}$, and the alignment radius $a_{\mathrm{alig}}$ becomes independent of the gas density.
Arrows indicate how $a_{\mathrm{alig}}$ varies when the corresponding parameter increases by a factor 10. Because of the exponent $-1/3.2$ in Equation~\eqref{eq:aalig_powerlaw}, variations by orders of magnitude of any of these parameters are needed to significantly affect the value of $a_{\mathrm{alig}}$. \section{Grain alignment in the translucent and diffuse ISM}\label{sec:ISRF} In this section, we present the results of our calculations with \texttt{POLARIS}. Our MHD cube is representative of the diffuse and translucent ISM. We start by presenting the statistics of the radiation field in the MHD cube, which controls the radiative torque efficiency. Then, we look for the physical variables that, under these conditions, control the variations of the grain alignment efficiency under the RAT theory. Finally, we compare the dust polarization maps when grains are aligned by radiative torques, and when the grain alignment is uniform, to test if the alignment model leaves some imprint in the polarization maps calculated with this MHD simulation. \begin{figure*} \begin{center} \includegraphics[width=.45\textwidth]{aalig_xy_isrf.pdf} \includegraphics[width=.45\textwidth]{td_xy_isrf.pdf} \includegraphics[width=.45\textwidth]{G0_xy_isrf.pdf} \includegraphics[width=.45\textwidth]{urad_xy_isrf.pdf} \includegraphics[width=.45\textwidth]{th_xy_isrf.pdf} \includegraphics[width=.45\textwidth]{gamma_xy_isrf.pdf} \end{center} \caption{Quantities averaged along the LOS derived by \texttt{POLARIS}\ MC simulations for the ISRF-RAT setup. Here, direct averages are done, without any weighting by another parameter. 
The individual panels show the alignment radius $a_{\mathrm{alig}}$ (top left), average dust temperature $T_{\mathrm{dust}}$ (top right), $G_{\mathrm{0}}$ (middle left), the radiation field $U_{\rm rad}$ (middle right), the average angle $\left\langle\cos{\vartheta}\right\rangle$ (bottom left), and the anisotropy factor $\left\langle \gamma \right\rangle$ (bottom right), respectively.} \label{fig:POLARIS_ISRF_RAT} \end{figure*} \subsection{Characteristics of the radiation field} \label{sec:RT_ISRF} Figure~\ref{fig:POLARIS_ISRF_RAT} presents the set of derived MC quantities for the case ISRF-RAT. All maps show the average of grid cells along the $z$ axis of the MHD cube, i.e. along the LOS (histograms over the complete 3D domain are provided in Appendix \ref{app:3DDistribution}). For clarity, the maps of the parameters characterizing the radiation field are not weighted by any quantity (e.g. density); otherwise, characteristic features of the model ISRF would be modulated by the weighting, making it harder to discuss the different quantities individually. The map of the alignment radius $a_{\mathrm{alig}}$ has a range of ${80\,\mathrm{nm} - 145\,\mathrm{nm}}$. The map of $a_{\mathrm{alig}}$ clearly correlates with the pressure map presented in Figure \ref{fig:MHD_input}, not with the density map. The dust temperature is rather uniform in the entire simulation, between 16\,K and 17\,K. Here, we show the combined temperatures averaged over the silicate and graphite materials (individual temperatures are provided in Appendix \ref{app:3DDistribution}). Even with such a small temperature variation, a correlation with the column density stands out. Dust grains in the densest regions are colder because of the shielding by the surrounding dust. As expected, the small variations of $G_{\mathrm{0}}$ coincide with the column density structure, as photons are more likely to be absorbed in dense regions.
In contrast to $G_{\mathrm{0}}$, the total energy density $u_{\mathrm{rad}}$ is integrated over the entire spectrum and should be almost constant, independently of the density, because the total energy within the system remains conserved while being shifted towards longer wavelengths by dust emission. Still, we see that $u_{\mathrm{rad}}$ is slightly (2\% at most) smaller than $u_{\mathrm{ISRF}}$. This is a small artefact of the MC method associated with the loss of photons (see Section \ref{sect:MCPropagation}). The average angle $\left\langle \cos{\vartheta} \right\rangle $ between the radiation field and the magnetic direction paints the same picture of a mostly diffuse radiation field. The values of $\left\langle \cos{\vartheta} \right\rangle $ in Figure~\ref{fig:POLARIS_ISRF_RAT} cluster around a value of $0.5$. We acknowledge that the quantity $\left\langle \cos{\vartheta} \right\rangle $ does not strictly correspond to a particular angle $\vartheta$ but represents an average over an ensemble of angles weighted by the cosine function and the radiation field (see Equation \ref{eq:AvgCos}). However, we note that $\cos{\vartheta}=0.5$ would correspond to an angle of $\vartheta = 60^\circ$. This is exactly the value one obtains when averaging over a large ensemble of pairs of randomly oriented vectors. Hence, a value of $0.5$ is consistent with a mostly isotropic radiation field (see also Appendix \ref{app:3DDistribution}). Finally, the anisotropy factor $\left\langle \gamma \right\rangle $ shows a trend with higher values in denser regions and amounts to an average value of $0.11$, comparable to the value of $0.1$ usually given in the literature \citep[see e.g.][]{LazarianHoang2007,Hoang2014} for the ISM. We also ran test simulations with no dust at all, i.e. a ratio of ${m_{\mathrm{dust}}/m_{\mathrm{gas}} = 0\ \%}$, in the $\texttt{RAMSES}$ cube. These test simulations show that $\left\langle \gamma \right\rangle > 0$ (see Appendix \ref{app:3DDistribution}).
Even with more photons and for different radii of the source sphere, the anisotropy factor cannot be pushed below $\left\langle \gamma \right\rangle = 0.045$. We speculate that this may be a numerical limitation of the applied MC techniques. \subsection{What drives the variations of the grain alignment parameter $a_{\mathrm{alig}}$?}\label{sec:whatdrivesRAT} Figure~\ref{fig:aalig_POLARIS_ISRF} presents how the alignment parameter calculated by \texttt{POLARIS}\ for our \texttt{RAMSES}\ simulation depends on the local physical conditions, using the same phase diagram as in Figure~\ref{fig:aalig_modelHL14}. The density, temperature, and radiation field characterizing this simulation occupy only a small area of our phase diagram. The density of points in this phase diagram allows us to separate the WNM phase (high temperature, low density) from the CNM phase (high density, low temperature), where grains are not well aligned in a small fraction of cells (red points). Comparing Figure~\ref{fig:aalig_modelHL14} and Figure~\ref{fig:aalig_POLARIS_ISRF}, we see that our simple analytic derivation of $a_{\mathrm{alig}}$ (see Section~\ref{sect:RATAlignment}) reproduces the numerical results of \texttt{POLARIS}\ quite well. \begin{figure} \includegraphics[width=.49\textwidth]{spinup_FIRa_byaalig_POLARIS_rat_isrf.pdf} \caption{Alignment parameter $a_{\rm alig}$ calculated by \texttt{POLARIS}\ for the ISRF-RAT simulation, in the phase diagram of Figure~\ref{fig:aalig_modelHL14}.
Contour lines indicate the density of points, delimiting two valleys of points corresponding to the cold (CNM, upper branch) and warm (WNM, lower branch) phases in the simulation.} \label{fig:aalig_POLARIS_ISRF} \end{figure} \begin{figure} \includegraphics[width=.49\textwidth]{spinup_FIRa_byaalig_model_withUrad.pdf} \caption{Alignment parameter $a_{\rm alig}$ in $\mathrm{nm}$, as a function of the spin-up parameter $\Qgammazeta\,\langle\gamma\rangle\,\langle\cos{\vartheta}\rangle\,U_{\rm rad}/(n_{\rm gas} T_{\rm gas})$ (see Eq.~\eqref{eq:aalig_powerlaw}) and of the FIR ratio at $a=100\,\mathrm{nm}$ (Eq.~\eqref{eq:FIR}), following \cite{Hoang2014}. Black arrows indicate the displacement in that frame when the corresponding physical quantity increases by a factor 10.} \label{fig:aalig_modelHL14} \end{figure} Figure~\ref{fig:aalig_NH} shows the dependence of the mean, density-weighted, $a_{\mathrm{alig}}$ parameter on the column density, for all LOS along the three axes of the cube. The value of $\langle a_{\mathrm{alig}}\rangle$ is rather uniform over a large range of column densities, from $4\times 10^{20}$ to $2\times 10^{21}$ cm$^{-2}$, but increases at the lowest and highest column densities. A trend of similar shape is reported in \cite{Seifried2019} for the dependency of the alignment radius $a_{\mathrm{alig}}$ on the gas density $n_{\mathrm{gas}}$. However, their MHD data set has gas densities and temperatures about one order of magnitude lower than ours, and a $G_{\mathrm{0}}>0$, leading to values of $a_{\mathrm{alig}}$ up to a factor of 6.5 smaller than ours. To understand what drives grain alignment, we plot in Figure~\ref{fig:nT_RATs_NH} how the mean, density-weighted, gas pressure $n_{\rm gas}\,T_{\rm gas}$ (responsible for grain disalignment) and radiative torque $\Gamma_{\rm rad}$ (responsible for grain alignment), calculated for a grain size $a=0.1\,\mu$m, depend on the column density.
The comparison of Figure~\ref{fig:nT_RATs_NH} with Figure~\ref{fig:aalig_NH} makes it clear that, unlike what is usually assumed, it is the variations in the gas pressure that drive the variations of grain alignment, and not the variations of the radiative torques through dust extinction. The latter are almost constant, slightly decreasing with $N_{\rm H}$. The decrease of the radiative torque intensity therefore cannot be invoked to explain the decrease of the alignment efficiency within the range of column densities present in our simulation. \begin{table} \centering \begin{tabular}{c|c} $G_0$ & 0.21 \\ $1\,/\,\left(n_{\rm gas}\,T_{\rm gas}\right)$ & 0.91 \\ $G_0 \,/\,\left(n_{\rm gas}\,T_{\rm gas}\right)$ & 0.86 \\ $\left\langle\gamma\right\rangle\, \left\langle\cos{\vartheta}\right\rangle\, G_0 \,/\,\left(n_{\rm gas}\,T_{\rm gas}\right)$ & 0.90 \\ \end{tabular} \caption{Pearson coefficients for the correlation of $\log(a_{\mathrm{alig}})$ with the $\log$ of different physical quantities. In the diffuse and translucent ISM, $a_{\mathrm{alig}}$ is primarily driven by the gas pressure, not by the characteristics of the radiation field (direction, anisotropy factor, or intensity). An increasing intensity of the radiation field even tends to disalign grains by increasing FIR photon emission.} \label{tab:pearson} \end{table} \begin{figure} \includegraphics[width=0.49\textwidth]{maalig_NH.pdf} \caption{Mean, density-weighted, $a_{\rm alig}$ parameter for our ISRF case, as a function of the column density for our simulated cube, combining viewing angles along $x$, $y$, and $z$.} \label{fig:aalig_NH} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{mean_nT_RAT_NH_test.pdf} \caption{Mean, density-weighted, gas pressure $n_{\mathrm{gas}}\,T_{\mathrm{gas}}$ (left axis) and radiative torque $\Gamma_{\mathrm{rad}}$ (right axis), as a function of the column density for our simulated cube, combining viewing angles along $x$, $y$, and $z$.
To avoid a biased comparison, both axes share the same amplitude in log. This figure is to be compared with Figure~\ref{fig:aalig_NH}. A comparison with our simple model using $u_{\mathrm{rad}}$ (dotted) and $G_0$ (dashed) for the calculation of $\Gamma_{\mathrm{rad}}$ is overplotted.} \label{fig:nT_RATs_NH} \end{figure} Table~\ref{tab:pearson} quantifies this interpretation by presenting the Pearson correlation coefficient between the alignment parameter $a_{\mathrm{alig}}$ and different physical quantities characterizing the local ISM in our simulation, such as the density, temperature, gas pressure, and radiation field intensity. A positive (resp. negative) correlation coefficient means that an increase of the quantity tends to increase (resp. decrease) $a_{\mathrm{alig}}$, and therefore to disalign (resp. align) grains. The correlation between the radiation field intensity, as measured by $G_0$, and the grain alignment parameter $a_{\mathrm{alig}}$ is weak but, surprisingly, positive. This results from two competing effects of the radiation field on the RATs efficiency: the spin-up effect of radiative torques, expressed by Equation~\ref{eq:JradoverJth}, and the disaligning effect of FIR emission, described by Equation~\ref{eq:FIR}. In the WNM phase of the diffuse ISM, where the gas temperature is high and dust extinction remains weak everywhere, the latter effect dominates over the former. This implies that in the WNM, an increase in the radiation field intensity makes the grain alignment efficiency decrease, not increase, owing to the damping of grain rotation by the emission of IR photons. Grain alignment in the diffuse ISM is thus primarily driven by gas pressure, that is, by disalignment, while the alignment capacity of RATs is almost constant.
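The coefficients of Table~\ref{tab:pearson} are ordinary Pearson correlations computed between the logarithms of the quantities. As a minimal sketch of the procedure (run on synthetic cell data built to mimic the reported behaviour, not on the simulation itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_pearson(x, y):
    """Pearson correlation coefficient between log10(x) and log10(y)."""
    return float(np.corrcoef(np.log10(x), np.log10(y))[0, 1])

# Synthetic cells: a_alig driven mostly by gas pressure, with a nearly
# uniform radiation field, mimicking the trend described in the text.
n_cells = 10_000
pressure = 10 ** rng.normal(3.5, 0.5, n_cells)    # n_gas * T_gas (toy units)
g0 = 10 ** rng.normal(0.0, 0.05, n_cells)         # nearly uniform field
a_alig = (100.0 * (pressure / 10**3.5) ** 0.2
          * 10 ** rng.normal(0.0, 0.02, n_cells))  # toy power law + scatter

r_pressure = log_pearson(a_alig, pressure)  # strong correlation by construction
r_g0 = log_pearson(a_alig, g0)              # weak correlation by construction
```

The exponent and scatter above are arbitrary; the point is only that the correlation coefficient in log-log space cleanly separates the dominant driver (pressure) from a quantity that barely varies ($G_0$).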
The anisotropy of the radiation field $\gamma$, and the cosine of the angle $\vartheta$ between the radiation field anisotropy and the magnetic field, act only as secondary factors, which are not able to produce any significant pattern in the correlation of $a_{\mathrm{alig}}$ with $N_{\mathrm{H}}$. \subsection{Statistical analysis of dust polarization maps} In Figure~\ref{fig:PsI_map_ISRF} we show the resulting polarization maps for the ISRF setup with RAT alignment, and for the FIXED alignment setup. The general polarization pattern resembles the maps presented in \cite{Planck2015XX}, with peak values about $10\ \%$ lower. The RAT and FIXED cases are almost identical, with some minor amplification of the overall magnitude of the polarization fraction $p$ in the latter case. The characteristic hallmarks of RAT alignment (the dependence of $p$ on the angle between the radiation and magnetic field directions, as well as the increase of $p$ with a stronger radiation field) do not leave any signature in the polarization signal shown in Figure~\ref{fig:PsI_map_ISRF}. A comparison of the polarization vectors (rotated by 90$^\circ$) with the averaged magnetic field orientation presented in Figure \ref{fig:MHD_input} shows that they do not perfectly match over the entire map. This demonstrates that dust polarization patterns cannot be simply interpreted as a projection of the magnetic field direction onto a plane. Hence, quantitative interpretation requires modeling by means of RT simulations including proper dust alignment physics. \begin{figure} \includegraphics[width=0.49\textwidth]{detector_01_lam_000_pl_rat_isrf.pdf} \includegraphics[width=0.49\textwidth]{detector_01_lam_000_pl_pa_isrf.pdf} \caption{Simulated maps of the polarization fraction at $353\ \mathrm{GHz}$ for the RAT (top) and FIXED (bottom) alignment cases. The contour lines show the column density.
The white segments give the orientation of the magnetic field derived from the polarization angle.} \label{fig:PsI_map_ISRF} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{PsI_NH.pdf} \caption{Polarization fraction $p$ at $353\ \mathrm{GHz}$, at a resolution of 5 arcmin, as a function of the column density for the ISRF-RAT simulation. The mean trend is overplotted for the RAT alignment case (black) and for the FIXED alignment case (red).} \label{fig:PsI_NH} \end{figure} Figure~\ref{fig:PsI_NH} presents how the polarization fraction varies with the column density in our simulation, for all LOS along the three axes of the cube. The mean trend is compared for the RAT and FIXED alignment cases. The RAT case starts to depart from the FIXED case for $N_{\mathrm{H}} > 2\,10^{21}$\,cm$^{-2}$ (or $A_V=1$), predicting systematically lower polarization fractions. As discussed in Section~\ref{sec:whatdrivesRAT}, this is not due to dust extinction, but to the higher pressure encountered in denser environments. This departure is however quite small in the range of column densities covered with sufficient statistics by our simulation. \begin{figure} \includegraphics[width=0.49\textwidth]{S_p.pdf} \includegraphics[width=0.49\textwidth]{Sxp_NH.pdf} \caption{Top panel: dispersion of polarization angles $\S$ as a function of the polarization fraction $p$, both taken at $353\ \mathrm{GHz}$, combining viewing angles along $x$, $y$, and $z$. Bottom panel: same for the product $\S\times p$, considered as a tracer of the grain alignment efficiency. Mean trends for RAT alignment (black) and FIXED alignment (red) are overplotted.} \label{fig:Sp_NH} \end{figure} Figure~\ref{fig:Sp_NH} allows us to extend our analysis by studying the product $\S\times\PsI$, which was proposed by \cite{Planck2018XII} as a tracer of the grain alignment efficiency. The top panel shows that $\S$ and $p$ are anti-correlated, whether we align grains uniformly or following the RATs model.
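For reference, the quantities entering Figure~\ref{fig:Sp_NH} derive from the Stokes maps in the standard way. The sketch below evaluates a simplified $\S$ from the four neighbours at a fixed pixel lag (whereas the \textit{Planck}\ analysis averages over an annulus of half a beam width), on purely synthetic Stokes maps:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Stokes maps I, Q, U (arbitrary units, periodic toy grid).
shape = (32, 32)
I = 1.0 + 0.1 * rng.random(shape)
Q = 0.05 * rng.standard_normal(shape)
U = 0.05 * rng.standard_normal(shape)

# Polarization fraction and angle from the Stokes parameters.
p = np.sqrt(Q**2 + U**2) / I
psi = 0.5 * np.arctan2(U, Q)

def angle_dispersion(Q, U, lag=1):
    """Dispersion S of polarization angles at a given pixel lag, using the
    Q,U formulation of the angle difference that avoids the pi-ambiguity."""
    terms = []
    for dx, dy in [(lag, 0), (-lag, 0), (0, lag), (0, -lag)]:
        Qs = np.roll(Q, (dx, dy), (0, 1))
        Us = np.roll(U, (dx, dy), (0, 1))
        dpsi = 0.5 * np.arctan2(Qs * U - Q * Us, Q * Qs + U * Us)
        terms.append(dpsi**2)
    return np.sqrt(np.mean(terms, axis=0))

S = angle_dispersion(Q, U)
alignment_tracer = S * p   # the S x p product used as an alignment tracer
```

The map resolution, lag, and noise level here are illustrative only; they are not those of our simulated observations.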
The bottom panel, which presents the variations of the $\S\times\PsI$ product with the column density, confirms that the grain alignment predicted by RATs decreases with $N_{\mathrm{H}}$ in our simulation from $N_{\mathrm{H}}=2\,10^{21}$\,cm$^{-2}$ onwards. The value of the mean trend of $\S\times\PsI$ is however harder to interpret. As discussed in \cite{Planck2015XX}, this particular \texttt{RAMSES}\ simulation does not perfectly reproduce the observed inverse correlation $\S\propto 1/p$. As a consequence, and unlike in {\it Planck}\ data \citep[see][]{Planck2018XII}, we do not observe a constant $\S\times\PsI$ with $N_{\mathrm{H}}$. \section{Looking for signatures of RATs} \label{sec:RT_STAR} In this section, we modify the physical conditions in the cube so as to favour the observation of characteristic signatures of RATs, such as the dependence of the alignment on the radiation field intensity and on its angle with the magnetic field \citep{LazarianHoang2007}. \subsection{Results with a star at the center of the MHD simulation} In Figure~\ref{fig:POLARIS_STAR_RAT} we show the output of our MC simulation for the STAR setup, where a star is introduced at the center of the cube without changing the MHD simulation (see Section \ref{sect:RTPostProcessing}). For the STAR setup, the radiation field is clearly dominated by the central star, both in magnitude and direction. Consequently, the RAT alignment is most efficient in the center of the MHD cube, with a minimum of the alignment parameter $a_{\mathrm{alig}}$ down to $45\ \mathrm{nm}$, a maximum of about $250\ \mathrm{nm}$, and an average of $55\ \mathrm{nm}$. Here, the averaged map of $a_{\mathrm{alig}}$ barely shows any resemblance to the gas distribution. The only exception is at $X=4\ \mathrm{pc}$ and $Y=-2\ \mathrm{pc}$, where the clump with the highest density within the \texttt{RAMSES}\ simulation is situated. However, this effect is a result of the radiation from the star being shielded by the clump.
Here, we note a lane of minimal grain alignment size starting at this clump and going radially outwards. Such a shadowing effect is due to the extinction of radiation in the densest regions of the cube (compare with Figure~\ref{fig:MHD_input}). This shadowing is even more obvious in the average dust temperature $T_{\mathrm{dust}}$ map of Figure~\ref{fig:POLARIS_STAR_RAT}, where we highlight the densest regions and the resulting shadow by lines and arrows. As for the alignment efficiency, we report a decreased dust temperature in regions that are shielded from radiation. Several similar features can be observed, e.g. directly above the star. This shadowing effect can also be seen in the maps of $G_{\mathrm{0}}$, $u_{\mathrm{rad}}$, and $\left\langle\gamma \right\rangle$, respectively. In detail, the anisotropy factor $\left\langle\gamma \right\rangle$ reaches values up to $0.56$, meaning that the radiation field has a stronger unidirectional component compared to the ISRF setup, where we have $\left\langle\gamma \right\rangle \approx 0.1$ in the center of the map. The same is true of the quantity $\left\langle \cos{\vartheta} \right\rangle$: on average the alignment angles cluster around $\vartheta \approx 60^\circ$, but the STAR setup has much smaller values of $\left\langle \cos{\vartheta} \right\rangle$ along the $Y$-axis through the center, where the radiation is perpendicular to the direction of the large-scale magnetic field. Hence, radiation and magnetic field direction are not randomly oriented with respect to each other in that region, with an anisotropic radiation field that is much stronger at the center. This configuration of the STAR setup thus represents a significantly different set of radiation field parameters compared with the ISRF setup.
\begin{figure*} \begin{center} \includegraphics[width=.45\textwidth]{aalig_xy_both.pdf} \includegraphics[width=.45\textwidth]{td_xy_both.pdf} \includegraphics[width=.45\textwidth]{G0_xy_both.pdf} \includegraphics[width=.45\textwidth]{urad_xy_both.pdf} \includegraphics[width=.45\textwidth]{th_xy_both.pdf} \includegraphics[width=.45\textwidth]{gamma_xy_both.pdf} \end{center} \caption{The same as Figure \ref{fig:POLARIS_ISRF_RAT} for the STAR-RAT setup.} \label{fig:POLARIS_STAR_RAT} \end{figure*} \begin{figure} \centering \begin{minipage}[c]{1.0\linewidth} \includegraphics[width=1.0\textwidth]{both_ANG_pl.pdf} \end{minipage} \caption{Map of the polarization fraction in the STAR-RAT case, to be compared with the ISRF setup (Figure~\ref{fig:PsI_map_ISRF}, top panel). Black solid lines show the projected direction of the radiation field $\vec{k}$, the central yellow dot indicates the position of the star, and the contour lines represent the column density $N_{\mathrm{H}}$.} \label{fig:PsI_map_STAR} \end{figure} In Figure~\ref{fig:PsI_map_STAR} we show the resulting polarization map for the STAR setup with RAT alignment. The idealized direction of the radiation field is drawn on the map for later analysis (see Section \ref{sect:RAT_test}). Regarding the polarization pattern, Fig.~\ref{fig:PsI_map_STAR} does not significantly differ from Fig.~\ref{fig:PsI_map_ISRF}, or from the one presented in \cite{Planck2015XX}. We compared polarization angles pixel by pixel between all combinations of the ISRF and STAR setups with RAT or FIXED alignment (see Tables \ref{tab:Setups} and \ref{tab:Alignment}). Despite a significant change in the radiation field and subsequent RAT alignment between all these setups, the resulting polarization angles only differ by about $2^\circ$ on average. There is also no variation in $p$ that can be attributed to the shadowing effect observed in Figure \ref{fig:POLARIS_STAR_RAT}.
For the radiation field of the STAR setup, the magnitude of the polarization $p$ increases only by about $3\ \%$. However, this increase is a general trend throughout the $p$ map, and is not limited to the central region where the star is situated. We analyse and discuss this phenomenon in further detail in the following sections. \begin{figure} \includegraphics[width=.49\textwidth]{spinup_FIRa_byaalig_POLARIS_rat_both.pdf} \caption{Same as Figure~\ref{fig:aalig_POLARIS_ISRF} for the STAR-RAT case.} \label{fig:aalig_POLARIS_STAR} \end{figure} Figure~\ref{fig:aalig_POLARIS_STAR}, similarly to Figure~\ref{fig:aalig_POLARIS_ISRF}, illustrates how the alignment parameter $a_{\mathrm{alig}}$ varies in the phase diagram of Figure~\ref{fig:aalig_modelHL14}. With a star illuminating the cube, all physical quantities are driven toward the top-right corner of the phase diagram. As a consequence, the alignment efficiency is globally increased everywhere in the cube, increasing the mean value of the polarization fraction on any LOS (Figure~\ref{fig:PsI_map_STAR}) without modifying the patterns observed for the ISRF case (Figure~\ref{fig:PsI_map_ISRF}). \begin{figure} \centering \includegraphics[width=.49\textwidth]{PsI_dist_both.pdf} \includegraphics[width=.49\textwidth]{PsI_Bkangle_both.pdf} \caption{Polarization fraction as a function of the distance to the star projected on the plane of the sky (top) and of the angle between the projected magnetic field and starlight direction (bottom), for the STAR-RAT setup.} \label{fig:PsI_STAR_VG} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{Sxp_NH_both.pdf} \caption{$S\times p$ for the STAR case, considered as a tracer of grain alignment efficiency, as a function of the column density, with its mean trend overplotted.
} \label{fig:Sp_NH_STAR} \end{figure} Despite the presence of a strong radiation field emitted by the star at the center of the cube, Fig.~\ref{fig:PsI_STAR_VG} shows that we do not observe any of the systematic relations expected from the RATs theory, namely, a decrease of the polarization fraction with the distance to the star, or a sinusoidal dependence of the polarization fraction on the 2D angle $\theta_{\mathrm{pos}}= \angle(\vec{k},\vec{B})$ between the projected directions of the magnetic field $\vec{B}$ (estimated from the rotated polarization vectors) and the assumed radiation field $\vec{k}$. This is explained by two main factors. First, the physical quantities that characterize RATs alignment, namely the intensity, the direction, and the anisotropy of the radiation field, do not vary at small scales by a factor that is strong enough to dominate over the other factors affecting the polarization fraction: the structure of the magnetic field on the line of sight and within the beam, and the grain alignment randomization by gas collisions. Second, when the alignment is very efficient (as is the case when grains are irradiated by a star: $a_{\mathrm{alig}} \sim 10$\,nm, see Figure~\ref{fig:aalig_POLARIS_STAR}), strong variations in $a_{\mathrm{alig}}$ do not produce a correspondingly strong variation in the intrinsic polarization fraction of the dust polarized emission, because the dependence of $p$ on $a_{\mathrm{alig}}$ is not steep when $a_{\mathrm{alig}}$ is small (see Figure~\ref{fig:pst_PsI_lambda}). As a consequence, $p$ at $353\ \mathrm{GHz}$ does not trace the alignment efficiency very well even though the alignment is very efficient (low value of $a_{\mathrm{alig}}$). The polarization fraction reflects not only the variations in the alignment efficiency, but also the structure of the magnetic field. Studying the statistics of $\S\times\PsI$ instead of $p$, we can get rid of the influence of the magnetic field structure \citep{Planck2018XII}.
However, as presented in Figure~\ref{fig:PsI_STAR_VG}, the mean dependency of $\S\times\PsI$ on the distance and on the 2D angle $\theta_{\mathrm{pos}}$ does not show any of the expected systematic trends either. We conclude that, under normal circumstances, the angle-dependence or distance-dependence of dust polarization with respect to a star is not present in simulated observations. However, this only holds true for the diffuse ISM case presented here, whereas models of molecular clouds \citep[][]{Bethell2007,Hoang2014,Reissl2016} and circumstellar disks \citep[][]{Tazaki2017} do indeed reproduce the telltale signs of ongoing RAT alignment. \subsection{Optimal configuration for detecting the angle dependence of RATs} \label{sect:OptimalConfiguration} \begin{figure} \centering \includegraphics[width=.49\textwidth]{aalig_xy_constB.pdf} \includegraphics[width=.48\textwidth]{th_xy_constB.pdf} \includegraphics[width=.49\textwidth]{constB_ANG_pl.pdf} \caption{The same case as setup STAR-RAT, but with a uniform magnetic field along the $X$ direction. Top panel: projected alignment radius $a_{\mathrm{alig}}$. Middle panel: projected average angle $\left\langle\cos{\vartheta}\right\rangle$. Bottom panel: linear polarization fraction overlaid with polarization vectors (white) rotated by $90^\circ$, tracing the magnetic field orientation $\vec{B}$.} \label{fig:ConstField} \end{figure} \begin{figure} \includegraphics[width=.49\textwidth]{PsI_dist_uniB.pdf} \includegraphics[width=.49\textwidth]{PsI_Bkangle_uniB.pdf} \caption{Polarization fraction $p$ as a function of the distance in the plane of the sky (top) and as a function of the projected angle $\theta_{\mathrm{pos}}$ between the magnetic field and starlight (bottom), for the STAR-RAT case with a uniform magnetic field in the plane of the sky.
The image corresponds to Figure \ref{fig:ConstField} and is to be compared with Figure \ref{fig:PsI_STAR_VG}.} \label{fig:ConstField_VG} \end{figure} We pursue our analysis of the STAR case by studying a simple configuration where the distance and angle-dependence effects of RATs should be optimal. We run \texttt{POLARIS}\ for our \texttt{RAMSES}\ simulation, still with a star at the center, but replacing the magnetic field from the \texttt{RAMSES}\ simulation by a magnetic field everywhere uniform along the $X$ direction in the plane of the sky. Figure~\ref{fig:ConstField} presents the resulting maps of $a_{\mathrm{alig}}$, average angle $\left\langle\cos{\vartheta}\right\rangle$, and polarization $p$. Comparing these maps to those of the ISRF-RAT setup and the STAR-RAT setup presented in Figure \ref{fig:POLARIS_ISRF_RAT} and Figure \ref{fig:POLARIS_STAR_RAT}, respectively, the dust grains along $X = 0\ \mathrm{pc}$ become severely disaligned, with alignment radii $a_{\mathrm{alig}} \gtrapprox 100\ \mathrm{nm}$, while we find $a_{\mathrm{alig}} \lessapprox 90\ \mathrm{nm}$ for the rest of the map. Yet again we observe the characteristic shadowing effect at $X = 4\ \mathrm{pc}$ and $Y = -2\ \mathrm{pc}$ caused by the densest clump in the \texttt{RAMSES}\ simulation (see Figure \ref{fig:MHD_input}). The average angle between the direction of radiation and the magnetic field orientation, $\left\langle\cos{\vartheta}\right\rangle$, is also characteristic of RAT alignment, with lower values along the line $X=0\ \mathrm{pc}$. However, this influence is less obvious in the map of $p$, which results from physical quantities integrated along the LOS and also depends on other quantities such as the magnetic field orientation (see Appendix \ref{app:RTequation}). Overall, the magnitude of $p$ in Figure \ref{fig:ConstField} shows fewer variations compared to those in Figure \ref{fig:POLARIS_ISRF_RAT} and Figure \ref{fig:POLARIS_STAR_RAT}.
This demonstrates that a good part of the depolarization is a result of the turbulent component of the magnetic field, and not of grain alignment physics itself. This finding is also consistent with the interpretation of synthetic dust polarization maps presented in \cite{Seifried2019}. The dependence of $p$ on the distance and on the angle $\theta_{\mathrm{pos}}$, presented in Figure~\ref{fig:ConstField_VG}, does indeed show the small trends expected from RATs. However, the decrease of $p$ with the distance, as well as its sinusoidal modulation by $\theta_{\mathrm{pos}}$, are so small ($1\%$ and $2\%$, respectively) that they would most probably not be observable once noise and background contamination are added, even in this optimal configuration of the magnetic field. \section{Discussion} \label{sec:discussion} In this section, we discuss the implications and limits of our model, as well as the observational possibilities of testing alignment theories. \subsection{Impact of the fitted size distributions on our results}\label{sec:sizedist} In Section \ref{sec:fitdustmodel}, we mentioned that our simple oblate\footnote{Using prolate grains instead of oblate grains would require computing the grain optical properties integrated over the grain spinning dynamics \citep[see][for a detailed description]{Guillet2018}.} grain shape and size distributions (power-laws) do not allow for a precise fit to the polarization and extinction curves (see Figures~\ref{fig:p_lambda} and \ref{fig:DiffuseDust}). Let us first discuss the NIR extinction, which is not well reproduced by our dust model for $\lambda > 1.5\,\mu$m. According to Figure~\ref{fig:DiffuseDust}, we systematically underestimate the NIR extinction by a factor $\sim2$. From the same figure, we see that NIR extinction is significant ($\tau \ge 1$) only for column densities higher than $10^{22}$\,cm$^{-2}$ at $\lambda=2\,\mu$m, and higher than $5\,10^{22}$\,cm$^{-2}$ at $\lambda=4\,\mu$m.
For these LOS, our calculations \emph{overestimate} the number of NIR photons that are present. Our model therefore tends to overestimate grain alignment at the highest column densities, in the densest clumps of our simulation, which are rare. Second, we inferred a maximal size $a_{\rm max}^{\rm S}=400\,$nm for the silicate distribution from a fit of the polarization curve, particularly its NIR part. A lower (resp. higher) value for $a_{\rm max}^{\rm S}$ would have increased (resp. decreased) the mass of dust grains above the mean alignment radius $a_{\mathrm{alig}}$ in the diffuse ISM, which is of the order of 100\,nm. As a consequence, a loss of alignment would have had more (resp. less) impact on the local polarization fraction, i.e. the relation between $p$ and $a_{\mathrm{alig}}$ would have been steeper (resp. less steep) than described in Figure~\ref{fig:pst_PsI_lambda}. All together, our model may slightly overestimate the alignment of grains by RATs, and certainly does not underestimate it. \subsection{Can the angle-dependence of the RATs alignment efficiency be tested observationally?} \label{sect:RAT_test} Section \ref{sec:RT_STAR} has demonstrated that one of the characteristic effects expected from the RATs theory, namely the angle-dependence of the grain alignment efficiency, is too weak to be observed in realistic conditions. However, \cite{VA15} claimed to detect this effect, analyzing polarization data for the OMC-1 ridge with a star at its center: the IRc2 source. Their Figure~2, which shows how the polarization fraction varies with the angle $\theta_{\mathrm{pos}}$, indeed exhibits a sinusoidal variation that looks like what we expect from the RATs theory. We propose an alternative explanation for these observations. It has long been established that the maximal polarization fraction, whether in extinction or in emission, tends to systematically decrease with the column density \citep[\textit{e.g.}][]{Jones1989}.
The origin of this effect, whether it is due to magnetic field tangling or to a drop in the alignment efficiency, is still debated and varies between authors. More recently, \cite{Planck2016XXXV} demonstrated a systematic variation of the orientation of the magnetic field with respect to the gas structures, from parallel in the diffuse ISM to rather perpendicular to dense filaments. Such variations were observed in almost all regions of the Gould Belt. Both of these effects, which are observed all through the ISM, are present in the OMC-1 polarization maps of \cite{VA15}. If we combine these two effects and start our analysis at the position of the heating source IRc2, we can predict, without invoking any RAT physics, that the polarization fraction observed along the direction of the ridge will be weak and will correspond to $\theta_{\mathrm{pos}}$ close to $90^\circ$, while the polarization fraction observed perpendicular to the ridge, and therefore toward the less dense ISM, will be higher and correspond to $\theta_{\mathrm{pos}}$ closer to $0^\circ$. We speculate that such a correlation should also be observed toward dense filaments even without embedded stars, as long as the external magnetic field is observed to be perpendicular to the filaments, as is the case for the Musca filament \citep{Pereyra2004} or for the B213 filament in Taurus \citep{Chapman2011}. In summary, the characteristic effects of RAT alignment seem to be usually too weak to be observed, and can be mimicked by other physical effects, in particular those deriving from the orientation of the magnetic field with respect to the gas filaments. We note that there is an additional factor that needs to be taken into account when testing the RAT theory observationally. In \cite{HoangLazarian2016} it was demonstrated that for superparamagnetic grains of size $a>100\ \mathrm{nm}$ the angular dependency on $\vartheta$ may get lost completely.
The criterion for the loss of the angular dependency of RAT alignment goes as $1/(n_{\mathrm{gas}} T_{\mathrm{gas}}^{1/2}) > C$, where $C$ is some constant \citep[see][for details]{HoangLazarian2016}. Hence, a dependency on $\vartheta$ can still be expected in dense molecular clouds, while in the DISM it may become void. However, we can already barely report any angular dependency in our setups ISRF-RAT and STAR-RAT (see Fig. \ref{fig:ConstField_VG}), so that this additional criterion is of minor relevance within the scope of this paper. In essence, to test the angle-dependency of RATs, one should use optimal conditions such as a uniform $\vec{B}$ in the plane of the sky around a hot star, and avoid dense regions where other effects may dominate. We also suggest testing this effect in the optical, where dust models predict steeper variations of the polarization fraction with the grain alignment efficiency (see Fig~\ref{fig:pst_PsI_lambda}). \subsection{Testing grain alignment theories in dense cores} \label{sect:TestingAlignment} This article aims at demonstrating that it is necessary to provide quantitative tests of the RATs theory, and not only qualitative evidence, as is usually done. The dependence of the polarization fraction on the dust properties or on the magnetic field structure is so degenerate that it is hard to disentangle the different effects at work using only maps of the polarization fraction. In \cite{Planck2018XII}, we advocated that using the statistics of the polarization angles, through the quantities $\S$ and $\S\times p$, could be useful for that purpose. In the present article, we have demonstrated that the efficiency of the radiative torques is constant in the diffuse and translucent ISM, and that all variations of the alignment efficiency are solely due to variations of the disalignment by gas collisions, measured by the gas pressure, and not to the decrease of the radiation field intensity by dust extinction.
The ISRF is dominated in energy by NIR ($\sim 1\,\mu$m) photons \citep[\textit{e.g.}][their Figure~1]{Mathis1983}. Comparing equation~\ref{eq:JradoverJth} with equation~\ref{eq:JRAT}, in particular its factor $\lambda\times u_{\lambda}$, shows that it is the total number of photons, not their total energy, that is involved in grain alignment by RATs. UV photons are unimportant for RAT alignment in the diffuse ISM, both in energy and, even more so, in numbers. As a consequence, the efficiency of the aligning torque will be rather constant under the ISRF radiation field as long as extinction in the NIR is not important. This could justify why no dependence of alignment on the grain temperature could be found in {\it Planck}\ diffuse ISM data~\citep{PlanckXIX2015,Planck2018XII}. On the contrary, the disaligning torques exerted by gas collisions will vary a lot through the diffuse and translucent ISM, because of pressure variations. In particular, the pressure increases by orders of magnitude between the diffuse and dense ISM, as soon as the gas temperature stabilizes around a few tens of Kelvin. This aspect is underestimated when one interprets the difference in the polarization patterns in distinct environments through the prism of the radiation field alone. To test the decrease of the RAT efficiency, we therefore need to move to very dense environments where extinction in the NIR starts to be significant ($N_{\mathrm{H}} \gg 10^{22}$ cm$^{-2}$). The key issue will remain to explain the level of polarization observed in dense cores, where we expect a huge increase in pressure combined with a severe drop in the RAT efficiency due to extinction of the optical and NIR photons which drive the grain alignment.
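As a back-of-the-envelope illustration of the point above (a sketch with assumed round-number wavelengths, not the actual ISRF spectrum): since a photon of wavelength $\lambda$ carries energy $hc/\lambda$, the photon number associated with a given energy density scales as $\lambda u_\lambda$, so for equal energy density NIR photons vastly outnumber UV photons in the budget driving RATs.

```python
# Illustrative only: compare the number of photons carried by one joule
# of radiation at a UV (0.1 micron) and a NIR (1 micron) wavelength.
# Photon energy E = h*c/lambda, so photons per joule ~ lambda.

h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]

def photons_per_joule(wavelength_m):
    """Number of photons carrying one joule at this wavelength."""
    return 1.0 / (h * c / wavelength_m)

uv = photons_per_joule(0.1e-6)   # 0.1 micron (UV)
nir = photons_per_joule(1.0e-6)  # 1 micron (NIR)

# For equal energy density, the NIR band supplies ten times more photons.
print(nir / uv)
```

The factor of ten is simply the wavelength ratio; convolved with the actual ISRF spectrum (which peaks in energy near $1\,\mu$m) the dominance of NIR photon numbers is even stronger.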
Such data analysis is not possible with the 5 arcmin resolution of {\it Planck}, but is accessible to the new generation of polarization instruments working at subarcmin resolutions such as JCMT/SCUBA-2/POL-2 \citep[][]{Holland2013}, SOFIA/HAWC+ \citep[][]{Dowell2010,Harper2018}, or NIKA2 \citep[][]{Monfardini2011,Monfardini2014,Calvo2016}. Maintaining a high level of grain alignment by RATs in cores requires significant grain growth \citep[\textit{e.g.}][]{Pelkonen2009}. This hypothesis, though reasonable, ignores the fact that grain growth will automatically change both the grains' shapes and optical properties, and therefore their polarization capabilities. Altogether, modeling such scenarios requires completing our understanding of grain alignment physics and dust evolution (in particular grain-grain coagulation), and comparing observations with numerical results obtained with MHD simulations and tools like \texttt{POLARIS}. In this paper we focused on the spin-up of dust grains by RATs. Alternatively, irregularly-shaped dust grains may spin up by means of mechanical torques \citep[MATs,][]{Hoang2018A}. Originally, such a theory was proposed for regular grain shapes by \cite{Gold1952A,Gold1952B}. It was later extended to a magneto-mechanical alignment theory by \cite{Lazarian1995MNRASL,Lazarian1997CL}. Here, a supersonic gas-dust drift velocity is required. In principle, such a drift may be driven by cloud-cloud collisions, winds \citep[e.g.][]{Habing1994}, or MHD turbulence \citep[][]{Yan2003}. Although cloud-cloud collisions and winds cannot account for the large-scale alignment of grains, MHD turbulence seems to be ubiquitous in the ISM \citep[][]{XuZhang2016}. However, it remains to be seen whether MHD turbulence can provide a supersonic drift. More recent studies indicate that mechanical grain alignment may be efficient for helical grains even in the case of a subsonic drift \citep[][]{Lazarian2007C,Das2016,Hoang2018A}.
In the MAT theory, the mechanical torque efficiency is proportional to the gas pressure \citep{Das2016,Hoang2018A}. This means that the grain alignment radius will be independent of the gas pressure, and therefore of the gas density, unlike for RATs. We suggest that this property, which implies a high level of grain alignment in dense cores (though not systematically a high level of polarization, because of magnetic field tangling and possible dust coagulation), could be used to disentangle between alignment by RATs and alignment by MATs. \section{Summary} \label{sect:summary} In this paper, we presented a quantitative analysis of the impact of RAT alignment on dust polarimetry. This particular alignment theory predicts a sensitivity of the grain alignment efficiency to the magnitude of the radiation field, as well as an angular dependency on the direction of the radiation with respect to the magnetic field orientation. We aimed to model these dependencies for the diffuse and translucent ISM. For this we used an MHD cube representative of the diffuse ISM simulated with the $\texttt{RAMSES}$ code. We post-processed the MHD data with the RT code $\texttt{POLARIS}$ to produce synthetic dust polarization observations. The latest version of the $\texttt{POLARIS}$ code solves the full four-Stokes-parameter matrix equation of the RT problem, including RAT alignment, simultaneously. For the dust, we developed a best-fit model consisting of two populations of silicate and graphite grains following a power-law size distribution, which reproduces the mean Serkowski law as well as the mean extinction curve in the diffuse ISM. We first performed Monte-Carlo dust heating and grain alignment calculations assuming a diffuse ISRF. The resulting radiation field and grain alignment efficiency are consistent with the RAT alignment theory.
We analyzed the polarization maps and reproduced the anti-correlation of polarization fraction with gas column density, as well as with the angular dispersion, known from Planck observations. However, we cannot trace any of the characteristic predictions of RAT alignment in the synthetic polarization data. Our scientific findings are summarized as follows: \begin{enumerate}[(i)] \item Correlating the different parameters relevant for RATs reveals that the grain alignment efficiency in the diffuse and translucent ISM is primarily driven by the gas pressure (which tends to disalign grains, and varies by orders of magnitude through the ISM), and not by the radiation field intensity (which varies only moderately in the diffuse and translucent ISM). \item The anisotropy $\left\langle \gamma \right\rangle$ of the radiation field and its orientation $\left\langle \cos(\vartheta) \right\rangle$ with respect to the magnetic field have only a minor effect on grain alignment in the diffuse ISM. \item Despite the local drop of grain alignment in denser regions due to the increase in the gas pressure, the RAT alignment mechanism leaves no trace in the anti-correlation of gas column density $N_{\mathrm{H}}$ with polarization fraction $p$, nor in the anti-correlation of the angular dispersion $\S$ with $p$, the possible signposts of RATs being washed out by line-of-sight integration and by variations of the magnetic field structure along the line of sight and within the beam. \end{enumerate} We then considered a second setup to investigate the RAT alignment behavior for different variations of the radiation field, by placing a B-type star in the very center of the $\texttt{RAMSES}\ $ MHD cube in addition to the ISRF, and repeating our RT simulations. We find that the grain alignment efficiency is highest in close proximity to the star, in accordance with RAT theory.
Our findings in that case are the following: \begin{enumerate}[(i)] \item Even under optimal conditions, fingerprints of RATs would be barely observable. In particular, the predicted dependency of grain alignment by RATs on the angle between the radiation field and the magnetic field direction would not be detectable by observations of dust emission. \item Even close to a star, the variations in the magnetic structure along the LOS and within the beam are much more important for dust polarization than the variations in the characteristics of the radiation field. \end{enumerate} Altogether, our modeling of synthetic dust polarization observations indicates that the effects of RAT alignment are barely detectable in the diffuse and translucent ISM, but are predicted to be stronger in the optical (i.e. on starlight polarization) than in submillimetre polarized emission.
\begin{document}

\title{\bf Fluctuations and magnetoresistance oscillations near the half-filled Landau level}
\author{Amartya Mitra}
\author{Michael Mulligan}
\affil{\small \it Department of Physics and Astronomy, University of California, Riverside, CA 92511, USA}
\date{\today}

\maketitle

\begin{abstract}
We study theoretically the magnetoresistance oscillations near a half-filled lowest Landau level ($\nu = 1/2$) that result from the presence of a periodic one-dimensional electrostatic potential.
We use the Dirac composite fermion theory of Son [\href{http://dx.doi.org/10.1103/PhysRevX.5.031027} {Phys. Rev. X 5 031027 (2015)}], where the $\nu=1/2$ state is described by a $(2+1)$-dimensional theory of quantum electrodynamics. We extend previous work that studied these oscillations in the mean-field limit by considering the effects of gauge field fluctuations within a large flavor approximation. A self-consistent analysis of the resulting Schwinger--Dyson equations suggests that fluctuations dynamically generate a Chern-Simons term for the gauge field and a magnetic field-dependent mass for the Dirac composite fermions away from $\nu=1/2$. We show how this mass results in a shift of the locations of the oscillation minima that improves the comparison with experiment [Kamburov et~al., \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.196801} {Phys.~Rev.~Lett.~113, 196801 (2014)}]. The temperature-dependent amplitude of these oscillations may enable an alternative way to measure this mass. This amplitude may also help distinguish the Dirac and Halperin, Lee, and Read composite fermion theories of the half-filled Landau level. \end{abstract} \bigskip \newpage \tableofcontents \newpage \vskip 1cm \section{Introduction and summary} \subsection{Motivation} In recent years, there has been a renewed debate about how effective descriptions of the non-Fermi liquid state at a half-filled lowest Landau level ($\nu = 1/2$) of the two-dimensional electron gas might realize an emergent Landau level particle-hole (PH) symmetry \cite{PhysRevLett.50.1219, girvin1984}, found in electrical Hall transport \cite{Shahar1995, Wong1996, Pan2019} and numerical \cite{rezayi2000, Geraedtsetal2015} experiments.
The seminal theory of the half-filled Landau level of Halperin, Lee, and Read (HLR) \cite{halperinleeread}, which has received substantial experimental support \cite{Willett97}, describes the $\nu=1/2$ state in terms of non-relativistic composite fermions in an effective magnetic field that vanishes at half-filling (see \cite{Jainbook, Fradkinbook} for pedagogical introductions). However, the HLR theory appears to treat electrons and holes asymmetrically \cite{kivelson1997, BMF2015}. For instance, it is naively unclear how composite fermions in zero effective magnetic field might produce the Hall effect $\sigma_{xy}^{\rm cf} = - {1 \over 4\pi}$ that PH symmetry requires \cite{kivelson1997}. (We use the convention $k_B = c = \hbar = e = 1$.) Two lines of thought point towards a possible resolution. The first comes by way of an a priori different composite fermion theory, introduced by Son \cite{Son2015}. In this Dirac composite fermion theory, the half-filled Landau level is described by a $(2+1)$-dimensional theory of quantum electrodynamics in which PH symmetry is a manifest invariance. This theory is part of a larger web of $(2+1)$-dimensional quantum field theory dualities \cite{2018arXiv181005174S}. On the other hand, it has recently been shown that HLR mean-field theory {\it can} produce PH symmetric electrical response, if quenched disorder is properly included in the form of a precisely correlated random chemical potential and magnetic flux \cite{2017PhRvX...7c1029W, 2018arXiv180307767K, PhysRevB.98.115105}. (Mean-field theory means that fluctuations of an emergent gauge field coupling to the composite fermion are ignored.) Furthermore, both composite fermion theories yield identical predictions for a number of observables in mean-field theory \cite{Son2015, PhysRevLett.117.216403, 2017PhRvX...7c1029W, PhysRevB.95.235424, PhysRevB.99.205151}, e.g., thermopower at half-filling and magnetoroton spectra away from half-filling.
These results suggest that the HLR and Dirac composite fermion theories may belong to the same universality class. To what extent do these results extend beyond the mean-field approximation? How do alternative experimental probes constrain the description of the $\nu=1/2$ state? The aim of this paper is to address both of these questions within the Dirac composite fermion theory. Prior work has identified observables that may possibly differ in the two composite fermion theories: Son and Levin \cite{LevinSon2016} have derived a linear relation between the Hall conductivity and susceptibility that any PH symmetric theory must satisfy; Wang and Senthil \cite{PhysRevB.94.245107} have determined how PH symmetry constrains the thermal Hall response of the HLR theory; using the microscopic composite fermion wave function approach, Balram, Toke, and Jain \cite{BalramRifmmodeCsabaJain2015} found that Friedel oscillations in the pair-correlation function are symmetric about $\nu=1/2$. \subsection{Weiss oscillations and the $\nu=1/2$ state} Here, we study theoretically commensurability oscillations in the magnetoresistance near $\nu=1/2$, focusing on those oscillations that result from the presence of a periodic one-dimensional static potential \cite{Willett97}. These commensurability oscillations are commonly known as Weiss oscillations \cite{Weissfirst, gerhardtsweissklitzing, winkler1989landau, Weiss1990}. For a free two-dimensional Fermi gas, the locations of the Weiss oscillation minima, say, as a function of the transverse magnetic field $b$, satisfy \begin{align} \label{weissformulafreefermions} \ell_{b}^2 = {d \over 2 k_F}\Big(p + \phi \Big),\quad p = 1, 2, 3, \ldots, \end{align} where $\ell_b = 1/\sqrt{|b|}$ is the magnetic length; $d$ is the period of the potential; $k_F$ is the Fermi wave vector; $\phi = +1/4$ for a periodic vector potential, while $\phi = -1/4$ for a periodic scalar potential \cite{peetersvasilopoulos1992scalar, zhanggerhardts}. 
(Expressions for the oscillation minima when both potentials are present can be found in Refs.~\cite{peetersvasilopoulosmagnetic, gerhardts1996}.) Early experiments \cite{Willett97} saw $p=1$ Weiss oscillation minima about $\nu=1/2$ due to an electrostatic {\it scalar} potential, upon identifying, in Eq.~\eqref{weissformulafreefermions}, $b = B - 4 \pi n_e$ with the effective magnetic field experienced by composite fermions ($B$ is the external magnetic field and $n_e$ is the electron density) and $k_F = \sqrt{4 \pi n_e}$ with the composite fermion Fermi wave vector, and choosing $\phi = + 1/4$. These results, along with other commensurability oscillation experiments \cite{Willett97}, provided strong support for the general picture of the $\nu=1/2$ state suggested by the HLR theory. In particular, the phenomenology near the $\nu=1/2$ state could be well described by an HLR mean-field theory in which composite fermions respond to an electronic scalar potential as a {\it vector} potential. Recent improvements in sample quality and experimental design have allowed for an unprecedented refinement of these measurements. Through a careful study of the oscillation minima corresponding to the first three harmonics ($p = 1, 2, 3$), Kamburov et al.~\cite{Kamburov2014} came to a remarkable conclusion that is in apparent disagreement with the above hypothesis (see \cite{shayeganreview2019} for a review of these and related experiments): Weiss oscillation minima are well described by Eq.~\eqref{weissformulafreefermions} upon taking $k_F = \sqrt{4 \pi n_e}$ for $\nu < 1/2$, as before; but for $\nu > 1/2$, the inferred Fermi wave vector, $k_F = \sqrt{4 \pi ({B \over 2\pi} - n_e)}$, is determined by the density of holes. In both cases, $\phi = +1/4$. Might a theory of the $\nu=1/2$ state require two {\it different} composite fermion theories \cite{Kamburov2014, BMF2015}, a theory of composite electrons for $\nu < 1/2$ and a theory of composite holes for $\nu > 1/2$? 
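As a numerical illustration of Eq.~\eqref{weissformulafreefermions} with the composite fermion identifications discussed above (a sketch in units with $\hbar = e = 1$, using hypothetical sample parameters rather than the actual values of Ref.~\cite{Kamburov2014}):

```python
import math

# Hypothetical illustrative parameters (not the experimental ones).
n_e = 1.0e15                      # electron density [m^-2]
B = 4.2 * math.pi * n_e           # field above the half-filling value B_{1/2} = 4 pi n_e
d = 200e-9                        # period of the 1D potential [m]
phi = 0.25                        # periodic vector potential case

def weiss_minima(k_F, p_max=3):
    """|b| at the first Weiss minima, from ell_b^2 = (d / 2 k_F)(p + phi)
    with ell_b = 1/sqrt(|b|), i.e. |b_p| = 2 k_F / (d (p + phi))."""
    return [2.0 * k_F / (d * (p + phi)) for p in range(1, p_max + 1)]

# Fermi wave vector set by the electron density (the fit used for nu < 1/2) ...
k_F_electrons = math.sqrt(4.0 * math.pi * n_e)
# ... versus set by the hole density (the fit used for nu > 1/2).
k_F_holes = math.sqrt(4.0 * math.pi * (B / (2.0 * math.pi) - n_e))

m_e = weiss_minima(k_F_electrons)
m_h = weiss_minima(k_F_holes)

# Minima approach b = 0 as the harmonic index p grows, and the two
# identifications of k_F yield distinct sets of minima away from nu = 1/2.
assert m_e[0] > m_e[1] > m_e[2]
assert m_e != m_h
```

This makes concrete why the experiment can discriminate between the electron- and hole-based identifications of $k_F$: away from half-filling the two predicted sets of minima do not coincide.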
If $k_F = \sqrt{4 \pi n_e}$ is instead taken for $1/2 < \nu < 1$, there is a roughly 2\% mismatch between the location of the $p=1$ minimum obtained from Eq.~\eqref{weissformulafreefermions} and the nearest observed minimum; this discrepancy between theory and experiment decreases in magnitude as $p$ increases \cite{Kamburov2014}. While the mismatch is small, it is systematic: it persists in a variety of different samples of varying mobilities and densities, as well as in two-dimensional hole gases, which typically have larger effective masses (and also near half-filling of other Landau levels \cite{shayeganreview2019}). (This mismatch is of the same magnitude as the difference between the electrical Hall conductivities produced by an HLR theory with $\sigma^{\rm cf}_{xy} = 0$ and an HLR theory with $\sigma^{\rm cf}_{xy} = -1/4\pi$, the composite fermion Hall conductivity required by PH symmetry; an equal value of the dissipative resistance \cite{Willett97} is assumed in both cases for this comparison. See Eq.~(48) of \cite{kivelson1997}.) The hypothesis that composite fermions respond to an electric scalar potential as a purely magnetic one approximates HLR mean-field theory. In fact, an electric scalar potential generates both a scalar and a vector potential in the HLR theory. (This observation by Wang et al.~\cite{2017PhRvX...7c1029W} is crucial for obtaining PH symmetric electrical Hall transport within HLR mean-field theory.) However, the magnitude of the scalar potential is suppressed relative to the vector potential by a factor of $\ell_B/d \approx 1/50$ \cite{BMF2015}. Cheung et al.~\cite{PhysRevB.95.235424} found that upon including the effects of the scalar potential in HLR mean-field theory, there is a slight correction to the expected locations of the oscillation minima {\it both} above and below $\nu=1/2$.
The nature of the corrections is such that HLR mean-field theories of composite electrons or composite holes that take either $k_F = \sqrt{4 \pi n_e}$ or $k_F = \sqrt{4 \pi ({B \over 2\pi} - n_e)}$ produce identical results. In addition, the shifted oscillation minima are in agreement with the mean-field predictions of the Dirac composite fermion theory (at least within the regime of electronic parameters probed by experiment). Unfortunately, the small disagreement between composite fermion mean-field theory and experiment persists, in this case for all values of $0 < \nu < 1$: for a given $p$, the observed oscillation minima are shifted inwards relative to the theoretical prediction by an amount that decreases as $\nu = 1/2$ is approached---see Fig.~\ref{weissoscillations}. \subsection{Outline} In this paper, we consider the mismatch from the point of view of the Dirac composite fermion theory. In perturbation theory about mean-field theory, we argue that the comparison with experiment can be improved if the effects of gauge field fluctuations are considered. Our strategy is to include their effects by determining the fluctuation corrections to the mean-field Hamiltonian. We obtain this corrected Hamiltonian through an approximate large $N$ flavor analysis of the Schwinger--Dyson equations \cite{Itzykson:1980rh} for the Dirac composite fermion theory. The resulting Dirac composite fermion propagator specifies the input parameters, namely, the chemical potential and mass, of the corrected mean-field Hamiltonian. We then follow the analysis by Cheung et al.~\cite{PhysRevB.95.235424} to determine the corrected Weiss oscillation curves. Our results are summarized in Fig.~\ref{weissoscillations}.
\begin{figure} \center \includegraphics[scale=0.47]{weiss_combined_fixed_n} \caption{Weiss oscillations of the Dirac composite fermion theory at fixed electron density $n_e$ and varying magnetic field $B$ about half-filling $B_{1/2}$ ($\ell_{B_{1/2}}/d = 0.03$ and $k_B T = 0.3\sqrt{2 B_{1/2}}$). The blue curve corresponds to Dirac composite fermion mean-field theory \cite{PhysRevB.95.235424}. The orange curve includes the effects of a Dirac composite fermion mass $m \propto |B - 4\pi n_e|^{1/3} B^{1/6}$ induced by gauge fluctuations. Vertical lines correspond to the observed oscillation minima \cite{Kamburov2014}.} \label{weissoscillations} \end{figure} To understand our results, it is helpful to reinterpret Eq.~\eqref{weissformulafreefermions} as a measure of a Dirac fermion density $n$ by replacing $k_F \mapsto \sqrt{4 \pi n}$ (we set the Fermi velocity to unity). Any decrease in the density induces an inward shift of the Weiss oscillation minima determined by Eq.~\eqref{weissformulafreefermions} towards $b = 0$. Dirac fermions of mass $m$, placed at chemical potential $\mu$, have a density $n = (\mu^2 - m^2)/4\pi$. Our leading-order analysis of the Schwinger--Dyson equations indicates that gauge fluctuations generate a mass $m$ away from $\nu=1/2$, while the chemical potential is unchanged. Such dynamical mass generation in a non-zero magnetic field is known to occur in various (2+1)-dimensional theories of Dirac fermions (see \cite{Miransky:2015ava} for a review). For example, in the theory of a free Dirac fermion at zero density, a uniform magnetic field sources a vacuum expectation value for the mass operator. Short-ranged attractive interactions then induce a non-zero mass term in its effective Lagrangian \cite{Gusynin:1995nb}. We show how a similar phenomenon occurs in the Dirac composite fermion theory. This effect is also expected from the point of view of symmetry: PH symmetry forbids a Dirac composite fermion mass (see \S\ref{DiracCFreview}).
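The inward shift described above can be sketched numerically: replacing $k_F \mapsto \sqrt{\mu^2 - m^2}$ in Eq.~\eqref{weissformulafreefermions} reduces the Fermi wave vector at fixed chemical potential, so $|b|$ at each minimum moves toward zero (a minimal sketch with hypothetical values, in units with the Fermi velocity set to one):

```python
import math

mu = 1.0            # Dirac chemical potential (hypothetical units)
d = 50.0            # potential period, same units
phi = 0.25          # periodic vector potential case

def minimum_b(m, p):
    """|b| at the p-th Weiss minimum for Dirac fermions of mass m:
    density n = (mu^2 - m^2) / (4 pi), so k_F = sqrt(mu^2 - m^2)."""
    k_F = math.sqrt(mu**2 - m**2)
    return 2.0 * k_F / (d * (p + phi))

b_massless = minimum_b(0.0, 1)
b_massive = minimum_b(0.3, 1)

# A nonzero mass lowers the density, hence k_F, and moves the minimum inward.
assert b_massive < b_massless
```

Since the induced mass grows with $|B - 4\pi n_e|$, the shift is larger farther from half-filling, qualitatively matching the trend in Fig.~\ref{weissoscillations}.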
(Manifest PH symmetry is the essential advantage that the Dirac composite fermion theory confers to our analysis.) Away from $\nu=1/2$, PH symmetry is broken and so all terms, consistent with the broken PH symmetry, are expected to be present in the effective Lagrangian. Note there is no symmetry preventing corrections to the Dirac composite fermion chemical potential; rather, it is found to be unaltered to leading order within our analysis. We also comment upon the finite-temperature behavior of quantum oscillations near $\nu = 1/2$. This behavior is interesting to consider because at finite temperatures, away from the long wavelength limit, differences in the HLR and Dirac composite fermion theories should appear. We discuss how the temperature dependence of the Weiss oscillation amplitude might exhibit subtle differences between the two theories. The remaining sections are organized as follows. In \S\ref{DiracCFreview}, we review the Dirac composite fermion theory. In \S\ref{SDsection}, we obtain an approximate solution to the Schwinger--Dyson equations. In \S\ref{weisssection}, we use the chemical potential and mass of the resulting Dirac composite fermion propagator as input parameters for the ``fluctuation-improved'' mean-field Hamiltonian and determine the resulting Weiss oscillations. We discuss a few consequences of this analysis in \S\ref{discussion} and we conclude in \S\ref{conclusion}. Appendix \ref{integralappendix} contains details of calculations summarized in the main text.
\section{Dirac composite fermions: review} \label{DiracCFreview} Electrons in the lowest Landau level near half-filling can be described by a Lagrangian of a 2-component Dirac electron $\Psi_e$ \cite{Son2015}: \begin{align} {\cal L}_e = \overline{\Psi}_e \gamma^\alpha (i \partial_\alpha + A_\alpha) \Psi_e - m_e \overline{\Psi}_e \Psi_e + {1 \over 8 \pi} \epsilon^{\alpha \beta \sigma} A_\alpha \partial_\beta A_\sigma + \ldots, \end{align} where $A_\alpha$ with $\alpha \in \{0, 1, 2 \}$ is the background electromagnetic gauge field, $\overline{\Psi}_e = \Psi_e^\dagger \gamma^0$, the $\gamma$ matrices $\gamma^0 = \sigma^3$, $\gamma^1 = i \sigma^1$, $\gamma^2 = i \sigma^2$ satisfy the Clifford algebra $\{\gamma^\alpha, \gamma^\beta\} = 2 \eta^{\alpha \beta}$ with $\eta^{\alpha \beta} = {\rm diag}(+1, -1, -1)$, and the anti-symmetric symbol $\epsilon^{0 1 2} = 1$. The benefit of the Dirac formulation is that the limit of infinite cyclotron energy $\omega_c = B/m_e$ can be smoothly achieved at fixed external magnetic field $B = \partial_1 A_2 - \partial_2 A_1 > 0$ by taking the electron mass $m_e \rightarrow 0$. The $\ldots$ include additional interactions, e.g., the Coulomb interaction and coupling to disorder. The electron density, \begin{align} n_e = \Psi^\dagger_e \Psi_e + {B \over 4 \pi}. \end{align} Consequently, when $\nu \equiv {2\pi n_e/B} = 1/2$, the Dirac electrons half-fill the zeroth Landau level. For $m_e = 0$ and $\nu = 1/2$, the Dirac Lagrangian is invariant under the anti-unitary ($i \mapsto - i$) PH transformation that takes $(t, x, y) \mapsto (-t, x, y)$, \begin{align} \label{electronPH} \Psi_e & \mapsto - \gamma^0 \Psi_e^\ast, \cr (A_0, A_1, A_2) & \mapsto (- A_0, A_1, A_2), \end{align} and shifts the Lagrangian by a filled Landau level ${\cal L}_e \mapsto {\cal L}_e + {1 \over 4 \pi} \epsilon^{\alpha \beta \sigma} A_\alpha \partial_\beta A_\sigma$. 
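As a quick numerical consistency check (not part of the derivation), the representation $\gamma^0=\sigma^3$, $\gamma^1=i\sigma^1$, $\gamma^2=i\sigma^2$ quoted above can be verified to satisfy the Clifford algebra $\{\gamma^\alpha,\gamma^\beta\}=2\eta^{\alpha\beta}$ with $\eta^{\alpha\beta}={\rm diag}(+1,-1,-1)$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [s3, 1j * s1, 1j * s2]      # gamma^0, gamma^1, gamma^2
eta = np.diag([1.0, -1.0, -1.0])    # mostly-minus metric

# Check {gamma^a, gamma^b} = 2 eta^{ab} * identity for all index pairs.
I2 = np.eye(2, dtype=complex)
for a in range(3):
    for b in range(3):
        anticomm = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anticomm, 2.0 * eta[a, b] * I2)
print("Clifford algebra verified")
```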
Son \cite{Son2015} conjectured that ${\cal L}_e$ is dual to the Dirac composite fermion Lagrangian, \begin{align} \label{CFlag} {\cal L} = \overline{\psi} \gamma^\alpha (i \partial_\alpha + a_\alpha) \psi - m \overline{\psi} \psi - {1 \over 4 \pi} \epsilon^{\alpha \beta \sigma} a_\alpha \partial_\beta A_\sigma + {1 \over 8 \pi} \epsilon^{\alpha \beta \sigma} A_\alpha \partial_\beta A_\sigma - {1 \over 4 g^2} f_{\alpha \beta}^2 + \ldots, \end{align} where $\psi$ is the electrically-neutral Dirac composite fermion; $a_\alpha$ is a dynamical $U(1)$ gauge field with field strength $f_{\alpha \beta} = \partial_\alpha a_\beta - \partial_\beta a_\alpha$ and coupling $g$; and $m \propto m_e$ is the Dirac composite fermion mass. $A_\alpha$ remains a non-dynamical gauge field, whose primary role in ${\cal L}$ is to determine how electromagnetism enters the Dirac composite fermion theory. As before, the $\ldots$ represent additional interactions, which can now involve the gauge field $a_\alpha$. The duality between ${\cal L}_e$ and ${\cal L}$ obtains in the low-energy limit when $g \rightarrow \infty$. See \cite{MetlitskiVishwanath2016, WangSenthilfirst2015, KMTW2015, Geraedtsetal2015, MrossAliceaMotrunichexplicitderivation2016, MurthyShankar2016halfull, Seiberg:2016gmd, PhysRevX.6.031043, 2018arXiv181111367S} for additional details about this duality and \cite{2018arXiv181005174S} for a recent review. At weak coupling, the $a_0$ equation of motion implies the Dirac composite fermion density, \begin{align} \label{a0constraint} \psi^\dagger \psi = {B \over 4 \pi}. \end{align} At strong coupling, the right-hand side of Eq.~\eqref{a0constraint} receives corrections from the $\ldots$ in ${\cal L}$ and should be replaced by $- {\delta {\cal L} \over \delta a_0} + \psi^\dagger \psi$. 
In the Dirac composite fermion theory, the electron density, \begin{align} \label{edensityDiracCF} n_e = {1 \over 4 \pi}(- b + B), \end{align} where the effective magnetic field $b = \partial_1 a_2 - \partial_2 a_1$. In the Dirac composite fermion theory, the PH transformation takes $(t, x, y) \mapsto (-t, x, y)$, \begin{align} \label{DiracCFPH} \psi & \mapsto \gamma^2 \psi, \cr (a_0, a_1, a_2) & \mapsto (a_0, - a_1, - a_2), \cr (A_0, A_1, A_2) & \mapsto (- A_0, A_1, A_2), \end{align} and shifts the Lagrangian by a filled Landau level. Intuitively, the PH transformation acts on the dynamical fields of ${\cal L}$ like a time-reversal transformation. As such, PH symmetry requires $m=0$ and forbids a Chern-Simons term for $a_\alpha$. Away from half-filling, PH symmetry is necessarily broken since Eq.~\eqref{edensityDiracCF} implies the effective magnetic field $b = B - 4 \pi n_e \neq 0$. Consequently, we can no longer exclude any PH breaking term allowed by symmetry. In particular, we generally expect a Dirac mass to be induced by fluctuations. Scaling implies the mass $m = \sqrt{B} f(\nu)$, where $f(\nu)$ is a scaling function of the filling fraction $\nu$. Unbroken PH symmetry at half-filling requires $f(\nu = 1/2) = 0$; away from $\nu=1/2$, it is possible that $m$ can have a non-trivial dependence on $B$ and $n_e$, as determined by $f(\nu)$. In the next section, we study the Schwinger--Dyson equations to determine how fluctuations generate a mass $m$ away from $\nu = 1/2$ within an expansion where the number of Dirac composite fermion flavors $N \rightarrow \infty$. \section{Dynamical mass generation in an effective magnetic field} \label{SDsection} Beginning with the works of Schwinger \cite{Schwinger:1951nm} and Ritus \cite{Ritus:1978cj}, there have been a number of studies on the effects of a background magnetic field on quantum electrodynamics in various dimensions. 
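The scaling argument above, $m = \sqrt{B}\, f(\nu)$ with $f(1/2)=0$, can be checked against the specific mass $m \propto |B - 4\pi n_e|^{1/3} B^{1/6}$ appearing in Fig.~\ref{weissoscillations}: the two agree with $f(\nu) \propto |1-2\nu|^{1/3}$. A short numerical sketch (the overall prefactor is set to one):

```python
import math

def m_proposed(B, n_e):
    """m ~ |B - 4 pi n_e|^(1/3) B^(1/6), overall constant set to one."""
    return abs(B - 4.0 * math.pi * n_e)**(1.0 / 3.0) * B**(1.0 / 6.0)

def m_scaling(B, nu):
    """Scaling form m = sqrt(B) f(nu) with f(nu) = |1 - 2 nu|^(1/3)."""
    return math.sqrt(B) * abs(1.0 - 2.0 * nu)**(1.0 / 3.0)

# The two expressions agree once nu = 2 pi n_e / B is substituted.
for B in (1.0, 2.5, 7.0):
    for nu in (0.4, 0.45, 0.6):
        n_e = nu * B / (2.0 * math.pi)
        assert math.isclose(m_proposed(B, n_e), m_scaling(B, nu), rel_tol=1e-12)

# f(1/2) = 0: the mass vanishes at half-filling, as PH symmetry requires.
assert m_scaling(4.0, 0.5) == 0.0
```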
In this paper, we rely most heavily on Refs.~\cite{Gorbar:2013upa, watson2014quark, 2015EPJC...75..167K}; see Ref.~\cite{Miransky:2015ava} for an excellent introduction to this formalism and for additional references. We first summarize the relevant aspects of this formalism. Then, we analyze the Schwinger--Dyson equations for the Dirac composite fermion theory away from half-filling when the fluctuations of the emergent gauge field $a_\alpha$ about a uniform $b \neq 0$ are considered. \subsection{Dirac fermions in a magnetic field} \label{Diracsinbfield} At tree-level, i.e., in mean-field theory, the time-ordered real-space propagator $G_{0}(x,y)$ for a massive Dirac fermion in a uniform magnetic field $(\overline{a}_0, \overline{a}_1, \overline{a}_2) = (0, 0, b x_1)$ can be written in the form, \begin{align} \label{treerealprop} G_{0}(x,y) = e^{i \Phi(x,y)} \int {d^3 p \over (2\pi)^3} e^{i p_\alpha (x-y)^\alpha} G_{0}(p), \end{align} where the Schwinger phase, \begin{align} \Phi(x,y) = - {b \over 2} (x_2 - y_2) (x_1 + y_1). \end{align} The tree-level pseudo-momentum-space propagator, \begin{align} - i G_{0}(p) & = i \int_0^\infty d s e^{i s\Big((p_0 + \mu_0 + i \epsilon_{p_0})^2 - m_0^2 + i \delta - {p_1^2 + p_2^2 \over b s} \tan(b s) \Big)} \cr & \times \Big[(p_\alpha + \mu_0 \delta_{\alpha, 0})\gamma^\alpha - i b \Big((p_0 + \mu_0) \mathbb{I} + m_0 \gamma^0 \Big) \tan(b s) + p_i \gamma^i \tan^2(b s) \Big], \end{align} where the pseudo-momenta $p = (p_0, p_1, p_2)$ are analogous to the conserved momenta in a translationally-invariant system, $\mu_0$ is a chemical potential, $m_0$ is a mass, $\epsilon_{p_0} = {\rm sign}(p_0) \epsilon$ with the infinitesimal $\epsilon > 0$ ensures the Feynman pole prescription is satisfied, $\delta > 0$ is an infinitesimal included for convergence of the $s$ integral, and $\mathbb{I}$ is the $2 \times 2$ identity matrix. 
Expanding in $b$: \begin{align} \label{treeprop} - i G_{0}(p) & \equiv {(p_\alpha + \mu_0 \delta_{\alpha, 0}) \gamma^\alpha + m_0 \mathbb{I} \over (p_0 + \mu_0 + i \epsilon_{p_0})^2 - p_i^2 - m_0^2} + b {(p_0 + \mu_0) \mathbb{I} + m_0 \gamma^0 \over \Big((p_0 + \mu_0 + i \epsilon_{p_0})^2 - p_i^2 - m_0^2\Big)^2} + {\cal O}(b^2). \end{align} We imagine applying this formalism to the vicinity of $\nu=1/2$ when the effective magnetic field $b$ is small. As such, we drop all ${\cal O}(b^2)$ and higher terms in the pseudo-momentum-space propagator. For convenience, we use $G_{0}(p)$ to denote the linear expansion in Eq.~\eqref{treeprop} with higher order in $b$ terms excluded. The tree-level inverse propagator $G^{-1}_{0}(x,y)$ satisfies \begin{align} \int d^3 y\ G^{-1}_{0}(x,y) G_{0}(y, z) = \delta^{(3)}(x-z). \end{align} It takes a particularly simple form: \begin{align} \label{treepropinverse} i G^{-1}_{0}(x,y) = e^{i \Phi(x,y)} \int {d^3 p \over (2\pi)^3} e^{i p_\alpha (x - y)^\alpha} \Big((p_\alpha + \mu_0 \delta_{\alpha, 0})\gamma^\alpha - m_0 \mathbb{I} \Big). \end{align} In contrast to $G_{0}(x,y)$, the magnetic field dependence is entirely parameterized by the Schwinger phase in $G^{-1}_{0}(x,y)$. Both the propagator and its inverse are obtained after performing an infinite sum over all Landau levels. Thus, $G_{0}(x,y)$ and $G_0^{-1}(x,y)$ in Eqs.~\eqref{treerealprop} and \eqref{treepropinverse} allow for a straightforward expansion about their translationally-invariant forms at $b=0$; see \cite{watson2014quark} for further discussion. In the Dirac composite fermion theory, $G_0^{-1}(x,y)$ defines the mean-field Lagrangian, from which the Hamiltonian readily follows; the Schwinger phase $\Phi(x,y)$ reminds us to include a non-zero magnetic field by the Peierls substitution. We use the following ansatz for the exact real-space propagator: \begin{align} G(x,y) = e^{i \Phi(x,y)} \int {d^3 p \over (2 \pi)^3} e^{i p_\alpha (x-y)^\alpha} G(p). 
\end{align} For the exact pseudo-momentum propagator $G(p)$, we write \begin{align} \label{diracpropexpansion} - i G(p) = - i G^{(0)}(p) - i G^{(1)}(p), \end{align} where \begin{align} \label{exactzero} - i G^{(0)}(p) & = {\Big(p_\alpha + \mu_0 \delta_{\alpha, 0} - \Sigma_\alpha(p)\Big) \gamma^\alpha + \Sigma_m(p) \mathbb{I} \over (p_0 + \mu_0 - \Sigma_0(p) + i \epsilon_{p_0})^2 - (p_i - \Sigma_i(p))^2 - \Sigma_m^2(p)}, \\ \label{exactone} - i G^{(1)}(p) & = b {\Big(p_0 + \mu_0 - \Sigma_0(p)\Big) \mathbb{I} + \Sigma_m(p) \gamma^0 \over \Big((p_0 + \mu_0 - \Sigma_0(p) + i \epsilon_{p_0})^2 - (p_i - \Sigma_i(p))^2 - \Sigma_m^2(p)\Big)^2}. \end{align} In contrast to the tree-level pseudo-momentum propagator, $G_0(p)$, both $G^{(0)}(p)$ and $G^{(1)}(p)$ are expected to depend on $b$ through the self-energies $\Sigma_m(p)$ and $\Sigma_\alpha(p)$, in addition to the explicit linear dependence that appears in $G^{(1)}(p)$. We write the exact inverse propagator as \begin{align} \label{exactinversedirac} i G^{-1}(x,y) = e^{i \Phi(x,y)} \int {d^3 p \over (2 \pi)^3} e^{i p_\alpha (x - y)^\alpha} \left( \left( p_\alpha + \mu_0 \delta_{\alpha, 0} - \Sigma_\alpha \left( p \right) \right) \gamma^\alpha - \Sigma_m \left( p \right) \mathbb{I} \right). \end{align} In $G(p)$ and $G^{-1}(p)$, we set the tree-level mass $m_0 = 0$; this is consistent with the assumption of unbroken PH symmetry at $\nu = 1/2$. The ansatze for the exact propagator and its inverse are simplifications of that which symmetry allows for a Dirac fermion in a magnetic field \cite{watson2014quark}. Nevertheless, our ansatze are consistent to leading order in a $1/N$ analysis of the Schwinger--Dyson equations described in the next section. In general, the self-energies $\Sigma_m(p)$ and $\Sigma_\alpha(p)$ are non-trivial functions of the pseudo-momenta $p$. We expect the low-energy dynamics of the fermions to be dominated by fluctuations about the Fermi surface. 
Thus, we replace the self-energies as follows: \begin{align} \label{massselfenergy} \Sigma_m(p_{\rm FS} + \delta p) & \mapsto \Sigma_m(p_{\rm FS}), \\ \label{momentaselfenergy} \Sigma_\alpha(p_{\rm FS} + \delta p) & \mapsto \delta_{0 \alpha} \Sigma_0(p_{\rm FS}) + \delta p_\alpha \Sigma'_\alpha(p_{\rm FS}), \end{align} where $p_{\rm FS} = (0, p_i)$ lies on the Fermi surface (in mean-field theory, this is defined by $p_i^2 = \mu_0^2$ and $p_0 = 0$), $|\delta p_\alpha| \ll \mu_0$, $\Sigma'_{\alpha}(p_{\rm FS}) = \partial_{p_\alpha} \Sigma_\alpha(p = p_{\rm FS})$, and there is no sum over $\alpha$ in Eq.~\eqref{momentaselfenergy}. $G^{-1}(x,y)$ determines the ``fluctuation-corrected'' Dirac composite fermion mean-field Hamiltonian. The tree-level chemical potential and mass are corrected by the fermion self-energies $\Sigma_\alpha$ and $\Sigma_m$. We define the physical mass, \begin{align} \label{dynamicalmass} m = {\Sigma_m(p_{\rm FS}) \over 1 - \Sigma'_0(p_{\rm FS})} \equiv {\Sigma_m \over 1 - \Sigma'_0}, \end{align} and chemical potential, \begin{align} \label{correctedchemicalpotential} \mu = {\mu_0 - \Sigma_0 \over 1 - \Sigma'_0}. \end{align} The Schwinger phase $\Phi(x,y)$ in $G^{-1}(x,y)$ reminds us to include the effective magnetic field $b$ via the Peierls substitution. \subsection{Schwinger--Dyson equations: setup} The Schwinger--Dyson equations \cite{Itzykson:1980rh} are a set of coupled integral equations that relate the exact fermion and gauge field propagators to one another by way of the exact cubic interaction vertex $\Gamma^\alpha$ coupling the Dirac composite fermion current to $a_\alpha$. We will not solve the equations exactly; rather, we seek an approximate solution that one obtains within a large flavor generalization of the Dirac composite fermion theory. We hope this approximate solution reflects the qualitative behavior of the Dirac composite fermion theory.
Specifically, we consider the Lagrangian, \begin{align} \label{largeNgeneneral} {\cal L}_N = \overline{\psi}_n \gamma^\alpha (i \partial_\alpha + a_\alpha) \psi_n - {N \over 4 \pi} \epsilon^{\alpha \beta \sigma} a_\alpha \partial_\beta A_\sigma + {N \over 8 \pi} \epsilon^{\alpha \beta \sigma} A_\alpha \partial_\beta A_\sigma - {1 \over 4 g^2} f_{\alpha \beta}^2, \end{align} where the different fermion flavors are labeled by $n = 1, \ldots, N$. When $N = 1$, we recover the Dirac composite fermion theory. In ${\cal L}_N$, $n_e = {\delta {\cal L}_N/\delta A_0} = {N \over 4 \pi} (B - b)$; thus, in our large $N$ theory, half-filling means $\nu=N/2$. To make contact with the formalism of \S\ref{Diracsinbfield}, we introduce an $SU(N)$-invariant chemical potential $\mu_0 = \sqrt{B}$ and we separate the uniform effective magnetic field $(\overline{a}_0, \overline{a}_1, \overline{a}_2) = (0, 0, b x_1)$, which is generated away from half-filling, from the dynamical fluctuations of the emergent gauge field $a_\alpha$. Setting $A_\alpha = 0$, Eq.~\eqref{largeNgeneneral} becomes \begin{align} \label{largeNgeneneralconstrained} {\cal L}_N = \overline{\psi}_n \gamma^\alpha (i \partial_\alpha + \overline{a}_\alpha + a_\alpha) \psi_n + \mu_0 \psi^\dagger_n \psi_n - {1 \over 4 g^2} f_{\alpha \beta}^2. \end{align} This is the large $N$ theory that we analyze. To leading order in $N$, the Ward identity implies that there are no corrections to the cubic interaction vertex at $\nu = 1/2$ \cite{2018arXiv180802140R}.\footnote{Furthermore, there are no corrections to this vertex if the Dirac composite fermion is given a non-zero bare mass $m_0^2 \ll \mu_0^2$ at $b=0$. We thank N. Rombes and S.
Chakravarty for correspondence on this point.} Taking $\Gamma^\alpha = \gamma^\alpha$, the Schwinger--Dyson equations for ${\cal L}_N$ become: \begin{align} \label{realspaceSDfermion} i G^{-1}(x,y) - i G^{-1}_{0}(x,y) & = \gamma^\alpha G(x,y) \gamma^\beta \Pi^{-1}_{\alpha \beta}(x - y), \\ \label{realspaceSDgauge} i \Pi^{\alpha \beta}(x-y) - i \Pi^{\alpha \beta}_0(x-y) & = N {\rm tr}\Big[ \gamma^\alpha G(x,y) \gamma^\beta G(y,x) \Big], \end{align} where $\Pi^{\alpha \beta}(x-y)$ is the gauge field self-energy, $\Pi_0^{\alpha \beta}(x-y)$ is the kinetic term for $a_\alpha$ contributed by its Maxwell term, and we have taken the fermion propagator $G_{n, n'}(x,y) = G(x,y) \delta_{n, n'}$ to be diagonal in flavor space. $G(x,y)$ and $G_0(x,y)$ are defined in Eqs.~\eqref{diracpropexpansion} and \eqref{treeprop}. The factor of $N$ in Eq.~\eqref{realspaceSDgauge} arises from the $N$ flavors in the fermion loop. Upon substituting the Fourier transform $\Pi^{\alpha \beta}(p)$, defined by \begin{align} \Pi^{\alpha \beta}(x-y) = \int {d^3 p \over (2 \pi)^3} e^{i p_\sigma (x-y)^\sigma} \Pi^{\alpha \beta}(p), \end{align} and Eqs.~\eqref{treepropinverse}, \eqref{diracpropexpansion}, and \eqref{exactinversedirac} into the Schwinger--Dyson equations, \eqref{realspaceSDfermion} and \eqref{realspaceSDgauge} become \cite{watson2014quark} \begin{align} \label{SDfermion} i \Sigma_\alpha(q) \gamma^\alpha + i \Sigma_m(q) \mathbb{I} = \int {d^3 p \over (2 \pi)^3} \gamma^{\alpha} G(p+q) \gamma^\beta \Pi^{-1}_{\alpha \beta}(p), \\ \label{SDgauge} i \Pi^{\alpha \beta}(\delta q) = N \int {d^3 p \over (2 \pi)^3} {\rm tr}\Big[\gamma^\alpha G(p) \gamma^\beta G(p + \delta q) \Big], \end{align} where $q = q_{\rm FS} + \delta q$. We aim to solve these equations. Our ansatz for the fermion self-energies is motivated by similar studies of $(2+1)$-dimensional quantum electrodynamics at zero density \cite{PhysRevD.29.2423, AppelquistNashWijewardhanaQED3, PhysRevLett.62.3024}.
We consider the $1/N$ expansion for the fermion self-energies, \begin{align} \label{explicitexpansion} \Sigma_\alpha & = \Sigma_\alpha^{(1)} + \Sigma_\alpha^{(2)} + \ldots, \cr \Sigma_m & = \Sigma_m^{(1)} + \Sigma_m^{(2)} + \ldots. \end{align} All terms and all ratios of successive terms in Eq.~\eqref{explicitexpansion} vanish as $N\rightarrow \infty$. Ignoring the terms $\Sigma_\alpha^{(i)}$ and $\Sigma_m^{(i)}$ with $i \geq 2$, we set $\Sigma_\alpha = \Sigma_\alpha^{(1)} = 0$ and $\Sigma_m = \Sigma_m^{(1)}$, and find a self-consistent solution to the Schwinger--Dyson equation in terms of $\Sigma_m^{(1)}$ and $\Pi^{\alpha \beta}$. This choice is consistent with the Ward identity, to leading order in $1/N$. From Eqs.~\eqref{dynamicalmass} and \eqref{correctedchemicalpotential}, the resulting solution implies $m = \Sigma_m^{(1)}$ and $\mu = \mu_0$ to leading order in $1/N$. We then calculate the leading perturbative correction $\Sigma^{(2)}_\alpha$ to $\Sigma_\alpha$ and verify that $\Sigma^{(2)}_\alpha/\Sigma^{(1)}_m \rightarrow 0$ as $N \rightarrow \infty$. \subsection{Gauge field self-energy} \label{gaugeselfenergy} The gauge field self-energy decomposes into PH-even and PH-odd parts: \begin{align} \label{gaugeinvariance} \Pi^{\alpha \beta}(q) = \Pi^{\alpha \beta}_{\rm even}(q) + \Pi^{\alpha \beta}_{\rm odd}(q). \end{align} As the PH transformation acts like time-reversal, $\Pi^{\alpha \beta}_{\rm even}(q)$ contains the Maxwell term for $a_\alpha$, while $\Pi^{\alpha \beta}_{\rm odd}(q)$---which can only be non-zero when PH symmetry is broken---can contain a Chern-Simons term for $a_\alpha$.
To leading order in $b$, we substitute $G(p) = G^{(0)}(p)$ into Eq.~\eqref{SDgauge} and first compute \begin{align} \Pi^{\alpha \beta}_{\rm odd}(\delta q) = i \epsilon^{\alpha \beta \sigma} \delta q_\sigma \Pi_{\rm odd}(\delta q) = - i N \Big\{ \int {d^3 p \over (2\pi)^3} {\rm tr}\Big[ \gamma^\alpha G^{(0)}(p) \gamma^\beta G^{(0)}(p + \delta q)\Big] \Big\}_{\rm odd}, \end{align} where $\{ \cdot \}_{\rm odd}$ indicates that the PH-odd term is isolated. We find \begin{align} \label{inducedCS} \Pi_{\rm odd}(0) & = {N \over 4 \pi} \Big(\Theta(|\Sigma_m| - \mu_0) {\Sigma_m \over |\Sigma_m|} + \Theta(\mu_0 - |\Sigma_m|) {\Sigma_m \over \mu_0} \Big), \end{align} where $\Theta(x)$ is the step function. See Appendix \ref{gaugefieldselfenergyappendix} for details. Additional momentum dependence in $\Pi_{\rm odd}(q)$ is subdominant at low energies. For $\mu_0 > |\Sigma_m|$, Eq.~\eqref{inducedCS} implies that an effective Chern-Simons term for $a_\alpha$ with level, \begin{align} \label{cslevel} k = {N \over 2} {\Sigma_m \over \mu_0}, \end{align} is generated if $\Sigma_m \neq 0$. (This non-quantized Chern-Simons level is reminiscent of the anomalous Hall effect \cite{PhysRevLett.93.206602}.) Next, consider \begin{align} \Pi^{\alpha \beta}_{\rm even}(\delta q) - \Pi^{\alpha \beta}_{0}(\delta q) = - i N \Big\{ \int {d^3 p \over (2\pi)^3} {\rm tr}\Big[ \gamma^\alpha G^{(0)}(p) \gamma^\beta G^{(0)}(p + \delta q)\Big] \Big\}_{\rm even}, \end{align} where $\{ \cdot \}_{\rm even}$ indicates that the PH-even term is isolated and we have again substituted $G(p) = G^{(0)}(p)$. The Maxwell kinetic term is \begin{align} \Pi^{\alpha \beta}_{0}(q) = q^2 \eta^{\alpha \beta} - q^\alpha q^\beta.
\end{align} Ref.~\cite{Miransky:2001qs} finds: \begin{align} \Pi^{00}_{\rm even}(q_0, q_i) - \Pi^{0 0}_{0}(q) & = \Pi_l(q_0, q_i), \cr \Pi^{0 i}_{\rm even}(q_0, q_i) - \Pi^{0 i}_{0}(q) & = q_0 {q^i \over q_i^2} \Pi_l(q_0, q_i), \cr \Pi^{i j}_{\rm even}(q_0, q_i) - \Pi^{i j}_{0}(q) & = (\delta^{i j} - {q^i q^j \over q_k^2}) \Pi_t(q_0, q_i) + {q_0^2 q^i q^j \over (q_k^2)^2} \Pi_l(q_0, q_i), \end{align} where \begin{align} \Pi_l(q_0, q_i) & = \mu_0 N \Big(\sqrt{{q_0^2 \over q_0^2 - q_i^2}} - 1 \Big), \cr \Pi_t(q_0, q_i) & = \mu_0 N - {q_0^2 - q_i^2 \over q_k^2} \Pi_l(q_0, q_i). \end{align} We have simplified the expressions for $\Pi_l$ and $\Pi_t$ by taking $q_0^2 - q_i^2 > 0$ and by setting the common proportionality constant to unity. The precise behaviors of $\Pi_l$ and $\Pi_t$ and their effects on $a_\alpha$ depend upon whether $|q_0| < |q_i|$ or $|q_i| < |q_0|$. For instance, when $|q_0| < |q_i|$ (small frequency transfers, but potentially large $\sim 2k_F$ momenta transfers) and in the absence of $\Pi_{\rm odd}^{\alpha \beta}$, $\Pi_l$ gives rise to the usual Debye screening of the ``electric'' component of $a_\alpha$ and $\Pi_t$ results in the Landau damping of the ``magnetic'' component of $a_\alpha$ \cite{Miransky:2001qs}, familiar from Fermi liquid theory \cite{PhysRevB.8.2649}. These corrections dominate the tree-level Maxwell term for $a_\alpha$ at low energies. In our analysis of the fermion self-energy in the next section, we focus on the regime $|q_i| \leq |q_0|$. In this case, $\Pi_l$ and $\Pi_t$ provide non-singular corrections to the Maxwell term for $a_\alpha$ and will be ignored. At low energies ($g \rightarrow \infty$), the effects of the Maxwell term are suppressed compared with the Chern-Simons term \cite{1999tald.conf..177D}.
Thus, to find the effective gauge field propagator $\Pi^{-1}_{\alpha \beta}(q)$ for use in Eq.~\eqref{SDfermion}, we drop $\Pi_{\rm even}^{\alpha \beta}(q)$, add the covariant gauge fixing term $- {1 \over 2 \xi} q^\alpha q^\beta$ to $\Pi^{\alpha \beta}_{\rm odd}(q)$, and invert. Choosing the Landau gauge $\xi = 0$, we obtain: \begin{align} \label{gaugepropagator} \Pi_{\alpha \beta}^{-1}(q) = {2 \pi \over k} {\epsilon_{\alpha \beta \sigma} q^\sigma \over q^2}, \end{align} where $k$ is given in Eq.~\eqref{cslevel}. It is with this gauge field propagator that we find a self-consistent solution to the Schwinger--Dyson equation for the fermion self-energy $\Sigma_m$ in \S\ref{fermionselfenergy}. Instantaneous density-density interactions between electrons give rise to additional gauge field kinetic terms in ${\cal L}$. Such terms, which should therefore be included in the tree-level Lagrangian ${\cal L}_N$, generally contribute to $\Pi_0^{\alpha \beta} \subset \Pi_{\rm even}^{\alpha \beta}$. To understand their possible effects in the kinematic regime $|q_i| \leq |q_0|$, we set $a_0 = 0$ and decompose the spatial components of the gauge field in terms of its longitudinal and transverse modes: \begin{align} a_i(q) = - i \hat{q}_i a_L(q) - i \epsilon_{j i} \hat{q}_j a_T(q), \end{align} where the normalized spatial momenta $\hat{q}_i = q_i/|\vec{q}\,|$. An un-screened Coulomb interaction dualizes to a term in ${\cal L}$ proportional to $|\vec{q}\,|^{z-1} a_T(-q) a_T(q)$ with $z=2$; a short-ranged interaction gives $z=3$ (see Sec.~3.4 of \cite{KMTW2015}). (We are working in momentum space for this analysis.) On the other hand, the effective Chern-Simons term is proportional to $i q_0 a_L(-q) a_T(q)$; there is no $a_L - a_L$ or $a_T - a_T$ Chern-Simons coupling. We consider $z > 2$ in our analysis below.
In this regime, the effects of any such screened interaction are expected to be subdominant compared with those of the Chern-Simons term, as such interactions correspond to higher-order terms in the derivative expansion. \subsection{Fermion self-energy} \label{fermionselfenergy} We now study Eq.~\eqref{SDfermion} for the $\Sigma_m$ and $\Sigma_0$ components of the Dirac composite fermion self-energy using the effective gauge field propagator in Eq.~\eqref{gaugepropagator}. \subsubsection{$\Sigma_m$} Taking the trace of both sides of Eq.~\eqref{SDfermion} and setting $\delta q_\alpha = 0$, we find: \begin{align} \label{massselfenergystart} i \Sigma_m(q_{\rm FS}) & = i {\cal M}^{(0)}(q_{\rm FS}) + i {\cal M}^{(1)}(q_{\rm FS}), \end{align} where \begin{align} i {\cal M}^{(0)}(q_{\rm FS}) & = {1 \over 2} \int {d^3 p \over (2\pi)^3} {\rm tr} \Big[ \gamma^\alpha G^{(0)}(p + q_{\rm FS}) \gamma^\beta \Big({2 \pi \over k} {\epsilon_{\alpha \beta \sigma} p^\sigma \over p^2} \Big) \Big], \\ i {\cal M}^{(1)}(q_{\rm FS}) & = {1 \over 2} \int {d^3 p \over (2\pi)^3} {\rm tr} \Big[ \gamma^\alpha G^{(1)}(p + q_{\rm FS}) \gamma^\beta \Big({2 \pi \over k} {\epsilon_{\alpha \beta \sigma} p^\sigma \over p^2} \Big) \Big], \end{align} and $G^{(0)}(p)$ and $G^{(1)}(p)$ are given in Eqs.~\eqref{exactzero} and \eqref{exactone}. Recall that we set $\Sigma_\alpha = 0$ and only retain $\Sigma_m$ when using $G^{(0)}(p)$ and $G^{(1)}(p)$ to evaluate ${\cal M}^{(0)}$ and ${\cal M}^{(1)}$. The details of our evaluation of ${\cal M}^{(0)}$ and ${\cal M}^{(1)}$ are given in Appendix \ref{fermionselfenergyappendix}. Here, we quote the results: \begin{align} {\cal M}^{(0)} & = - {2 \mu_0 {\rm sign}(\Sigma_m) \over N}, \\ {\cal M}^{(1)} & = {2 \over 3} {b \mu_0^2 \over N |\Sigma_m|^3}. \end{align} Thus, $\Sigma_m$ solves: \begin{align} \label{massselfequation} \Sigma_m = - {2 \mu_0 {\rm sign}(\Sigma_m) \over N} + {2 \over 3} {b \mu_0^2 \over N |\Sigma_m|^3}.
\end{align} When $b=0$, the only solution is $\Sigma_m = 0$, consistent with our expectation that PH symmetry is unbroken at $\nu=1/2$. Dimensional analysis and $1/N$ scaling implies \begin{align} \Sigma_m = {\mu_0 \over N} f\Big({b N^3 \over \mu_0^2}\Big). \end{align} We find that $\Sigma_m$ has the following asymptotics: for fixed $|b|/\mu_0^2 \approx 10^{-1}$, \begin{align} \label{sigmalargeN} \Sigma_m & = \mu_0 {\rm sign}(b) \Big({|b| \over \mu_0^2 N} \Big)^{1/4} \Big[c_1 + c_2 \Big({\mu_0^2\over |b| N^3}\Big)^{1/4} + \ldots \Big], \end{align} where $c_1 \approx 0.9$, $c_2 \approx -0.5$, and the $\ldots$ are suppressed as $N \rightarrow \infty$; while for fixed $N$, \begin{align} \label{sigmasmallb} \Sigma_m = \mu_0 {\rm sign}(b) \Big({|b| \over \mu_0^2}\Big)^{1/3}\Big[c_3 + c_4 \Big({|b| N^3 \over \mu_0^2}\Big)^{1/3} + \ldots \Big], \end{align} where $c_3 \approx 0.69$, $c_4 \approx -0.08$, and the $\ldots$ vanish as $|b|/\mu_0^2 \rightarrow 0$. \subsubsection{$\Sigma_0$} We now consider the leading perturbative correction to $\Sigma_0$. This allows us to calculate the corrections to $\Sigma'_0$ and the chemical potential $\mu_0$. To evaluate the leading correction to $\Sigma_0$ that one obtains when $G(p) = G^{(0)}(p)$, we multiply both sides of Eq.~\eqref{SDfermion} by $\gamma^0$ on the left and take the trace to find: \begin{align} i \Sigma_0(q) = {1 \over 2} \int {d^3 p \over (2\pi)^3} {\rm tr} \Big[\gamma^0 \gamma^\alpha G^{(0)}(p + q) \gamma^\beta \Big({2 \pi \over k} {\epsilon_{\alpha \beta \sigma} p^\sigma \over p^2} \Big) \Big], \end{align} where $q^\alpha = q^\alpha_{\rm FS} + q_0 \delta^{\alpha 0}$. As detailed in Appendix \ref{fermionselfenergyappendix}, we find the leading correction $\Sigma_0^{(2)}$ to $\Sigma_0$ (see Eq.~\eqref{explicitexpansion}) for $|q_0|/\mu_0 \ll \Sigma^2_m/\mu_0^2$, \begin{align} \label{wfrenorm} i \Sigma_0^{(2)}(q_{\rm FS}) = - i {2 \mu_0 \over 3 N |\Sigma_m|} (q_0 + \mu_0). 
\end{align} At large $N$, we use Eq.~\eqref{sigmalargeN} for $\Sigma_m$ to find $\Sigma_0 \propto \Sigma'_0 \propto N^{-3/4}$. This vanishes by a factor of $N^{-1/2}$ {\it faster} than $\Sigma_m$ and so it is relatively suppressed as $N \rightarrow \infty$. Next-order terms in $\Sigma_\alpha$ and $\Sigma_m$ are obtained by self-consistently solving the Schwinger--Dyson equations with propagators corrected by the leading self-energy corrections. We have checked that the other components of $\Sigma_\alpha$ are likewise suppressed at large $N$; as such and because they do not enter our subsequent calculations, we will not discuss them further. Because $\Sigma_m$ vanishes at half-filling, we may only ignore $\Sigma'_0$ for sufficiently large $|b|/\mu_0^2$ at large $N$. \subsubsection{Dynamically-generated mass and corrected chemical potential} We are now ready to evaluate Eq.~\eqref{dynamicalmass} for the dynamically-generated mass. We extrapolate our large $N$ solution for $\Sigma_m$ to $N=1$ using Eq.~\eqref{sigmasmallb}: \begin{align} \label{mass} m = {\Sigma^{(1)}_m \over 1 - \Sigma'^{(2)}_0} \approx 0.69\, {\rm sign}(b) |b|^{1/3} B^{1/6}, \end{align} where we set $\mu_0 = \sqrt{B}$. The specific behavior of the mass $m$, away from $\nu=1/2$, depends on whether the electron density $n_e$ or external magnetic field $B$ is fixed. At fixed $B$, the magnitude of $m$ is symmetric as a function of $n_e$ about half-filling; on the other hand, $|m|$ is asymmetric for fixed $n_e$ and varying $B$. Using Eqs.~\eqref{correctedchemicalpotential} and \eqref{explicitexpansion}, the chemical potential, \begin{align} \mu = {\mu_0 - \Sigma^{(2)}_0 \over 1 - \Sigma'^{(2)}_0} = \sqrt{B}. \end{align} These results imply that the Dirac composite fermion density and mass are corrected in such a way that the chemical potential is unaffected.
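The quoted asymptotic coefficients can be checked by solving Eq.~\eqref{massselfequation} numerically. The following standalone Python sketch (illustrative only; it works in units of $\mu_0$ and uses the shorthand {\tt beta} $= b/\mu_0^2$) recovers $c_1 \approx 0.90$ in the large $N$ limit and $c_3 \approx 0.69$ in the small $b$ limit:

```python
import math

def sigma_m(N, beta):
    """Solve Eq. (massselfequation) for x = Sigma_m/mu_0 > 0 (taking b > 0):
    x = -2/N + (2/3) * beta / (N x^3), with beta = b/mu_0^2."""
    f = lambda x: x + 2.0 / N - 2.0 * beta / (3.0 * N * x**3)
    lo, hi = 1e-12, 10.0        # f(lo) < 0, f(hi) > 0, and f is monotone increasing
    for _ in range(200):        # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Large-N limit at fixed beta: Sigma_m/mu_0 -> c1 (beta/N)^(1/4)
print(sigma_m(N=1e8, beta=0.1) / (0.1 / 1e8) ** 0.25)   # ~ 0.90
# Small-b limit at N = 1: Sigma_m/mu_0 -> c3 beta^(1/3)
print(sigma_m(N=1, beta=1e-9) / 1e-3)                   # ~ 0.69
```

The two printed ratios are numerically consistent with $(2/3)^{1/4}$ and $3^{-1/3}$, matching the asymptotics in Eqs.~\eqref{sigmalargeN} and \eqref{sigmasmallb}.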
In our analysis of the Weiss oscillations in the next section, we ignore all higher-order in $1/N$ corrections and assume that a mass term is the dominant correction to the Dirac composite fermion mean-field Hamiltonian away from $\nu=1/2$. The chemical potential for this fluctuation-improved mean-field Hamiltonian will be taken to be $\mu = \sqrt{B}$. \section{Weiss oscillations of massive Dirac composite fermions} \label{weisssection} Following earlier work \cite{matulispeetersweiss2007, tahirsabeehmagneticweiss2008, 2016arXiv161004068B, PhysRevB.95.235424}, we now study the effect of the field-dependent mass of Eq.~\eqref{mass} on the Weiss oscillations near $\nu=1/2$ using the fluctuation-improved Dirac composite fermion mean-field theory. We find that a non-zero mass results in an inward shift of the locations of the oscillation minima toward half-filling. \subsection{Setup} We are interested in determining the quantum oscillations in the electrical resistivity near $\nu=1/2$ that result from a one-dimensional periodic scalar potential. In the Dirac composite fermion theory, the dc electrical conductivity, \begin{align} \label{conductivitydictionary} \sigma_{ij} = {1 \over 4\pi} \Big(\epsilon_{ij} - {1 \over 2} \epsilon_{i k} (\sigma^\psi)_{kl}^{-1} \epsilon_{l j} \Big), \end{align} where the (dimensionless) dc Dirac composite fermion conductivity, \begin{align} \sigma^\psi_{ij} = \lim_{q_0 \rightarrow 0} {\langle \overline{\psi}\gamma_i \psi(- q_0) \overline{\psi}\gamma_j \psi(q_0) \rangle \over i q_0}. \end{align} This equality is true at weak coupling; at strong coupling, $\langle \overline{\psi}\gamma_i \psi(- q_0) \overline{\psi}\gamma_j \psi(q_0) \rangle$ should be replaced by the exact gauge field $a_\alpha$ self-energy, evaluated at $q_1 = q_2 = 0$. Thus, the longitudinal electrical resistivity, \begin{align} \label{resistivity} \rho_{i i} \propto |\epsilon_{i j}| \sigma_{j j}^\psi, \end{align} where there is no sum over repeated indices.
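Eqs.~\eqref{conductivitydictionary} and \eqref{resistivity} can be checked with elementary matrix algebra: build $\sigma_{ij}$ from a trial $\sigma^\psi$, invert, and compare. In the sketch below (illustrative numbers, with the components of $\sigma^\psi$ taken small compared with the Chern-Simons contribution $1/4\pi$), $\rho_{xx}$ tracks $\sigma^\psi_{yy}$, the proportionality constant working out to $8\pi$ in this convention:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # epsilon_{ij}

def electron_resistivity(sigma_psi):
    """Electron resistivity tensor from the composite fermion conductivity
    via Eq. (conductivitydictionary) and matrix inversion."""
    sigma = (eps - 0.5 * eps @ np.linalg.inv(sigma_psi) @ eps) / (4.0 * np.pi)
    return np.linalg.inv(sigma)

s = 1.0e-3                                   # trial (small) composite fermion conductivity
rho = electron_resistivity(np.diag([s, s]))
rho_pert = electron_resistivity(np.diag([s, 1.1 * s]))   # perturb sigma^psi_yy only

print(rho[0, 0] / (8.0 * np.pi))             # ~ s: rho_xx ~ 8 pi sigma^psi_yy
print((rho_pert[0, 0] - rho[0, 0]) / (8.0 * np.pi))      # ~ 0.1 s: tracks the perturbation
```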
When a one-dimensional periodic scalar potential, $A_0 = V \cos(K x_1)$ with $K = 2\pi/d$, is applied to the electronic system, the $a_2$ equation of motion following from the Dirac composite fermion Lagrangian \eqref{CFlag} implies \begin{align} \overline{\psi} \gamma^2 \psi = - {K V \over 4\pi} \sin(K x_1). \end{align} We accommodate this current modulation within Dirac composite fermion mean-field theory by turning on a modulated perturbation to the emergent vector potential, \begin{align} \label{backgroundvector} \delta \vec{a} = \Big(0, W \sin(K x_1)\Big), \end{align} where $W = W(V)$ vanishes when $V = 0$. (Fluctuations will also generate a modulation in the Dirac composite fermion chemical potential; we ignore such effects here.) Putting together Eqs.~\eqref{resistivity} and \eqref{backgroundvector}, our goal in this section is to determine the correction to $\sigma_{jj}^\psi$ due to $\delta \vec{a}$, \begin{align} \label{dictionaryoscillation} \Delta \rho_{ii} \propto |\epsilon_{ij}| \Delta \sigma^\psi_{jj}. \end{align} In Dirac composite fermion mean-field theory, corrected by Eq.~\eqref{mass}, the calculation of $\Delta \sigma_{ij}^\psi$ simplifies to the determination of the conductivity of a free {\it massive} Dirac fermion.
We use the Kubo formula \cite{charbonneauvVV1982} to find the conductivity correction: \begin{align} \label{kubo} \Delta \sigma_{ij}^\psi = {1 \over L_1 L_2} \sum_M \Big( - \partial_{E_M} f_D(E_M) \Big) \tau(E_M) v_i^M v_j^M, \end{align} where $L_1$ ($L_2$) is the length of the system in the $x_1$-direction ($x_2$-direction), $\beta^{-1} = T$ is the temperature, $M$ denotes the quantum numbers of the single-particle states, $f^{-1}_D(E) = 1 + \exp(\beta(E - \mu))$ is the Fermi-Dirac distribution function with chemical potential $\mu = \sqrt{B}$, $\tau(E_M)$ is the scattering time for states at energy $E_M$, and $v_i^M = \partial_{p_i} E_M$ is the velocity correction in the $x_i$-direction of the state $M$ due to the periodic vector potential. As before, the Fermi velocity is set to unity. Assuming constant $\tau(E) = \tau \neq 0$, we only need to calculate how the energies $E_M$ are affected by $\delta \vec{a}$, which in turn will determine the velocities $v_i^M$. We will show that the leading correction in $W$ to $E_M$ only contributes to $v_2^M$. Calling $x_1 = x$ and $x_2 = y$, this implies the dominant correction is to $\Delta \rho_{xx} \propto \Delta \sigma_{yy}^\psi$. There are generally oscillatory corrections to $\rho_{yy}$ and $\rho_{xy}$; however, their amplitudes are typically less prominent and so we concentrate on $\Delta \rho_{xx}$ here. \subsection{Dirac composite fermion Weiss oscillations} The Dirac composite fermion mean-field Hamiltonian, corrected by Eq.~\eqref{mass}, \begin{equation}\label{eq:2} \begin{split} H&=\vec\sigma\cdot\Big(-i\frac{\partial}{\partial\vec x}+\vec a\Big)+m \sigma_3\\ \end{split}, \end{equation} where \begin{align} \vec{a} = \Big(0, b x_1 + W \sin(K x_1) \Big). \end{align} To zeroth order in $W$, $H$ has the particle spectrum, \[ E_n^{\left(0\right)} = \begin{cases} \sqrt{2n |b|+ m^2},\quad n =1,2,\ldots, \cr \cr |m|,\quad n =0.
\end{cases} \] with the corresponding eigenfunctions, \[ \psi_{n,p_2}(\vec{x}) = \begin{cases} \mathcal{N}e^{ip_2 x_2} \begin{pmatrix} -i\Phi_{n-1}\big(\frac{x_1+x_b}{l_b}\big)\\ \frac{\sqrt{m^2+2n|b|}-m}{\sqrt{2n|b|}}\Phi_{n}\big(\frac{x_1+x_b}{l_b}\big)\\ \end{pmatrix} & \text{for $n = 1, 2, \ldots$}, \\ \\ \mathcal{N}e^{ip_2 x_2} \begin{pmatrix} 0\\ \Phi_0\big(\frac{x_1+x_b}{l_b}\big)\\ \end{pmatrix} & \text{for $n=0$}, \end{cases} \] where the normalization constant, \begin{equation*}\label{eq:normalization} \mathcal{N}=\sqrt{\frac{n |b|}{l_bL_2(m^2+2n |b|-m\sqrt{m^2+2n |b|})}}, \end{equation*} $p_2 \in {2 \pi \over L_2} \mathbb{Z}$ is the momentum along the $x_2$-direction ($L_2 \rightarrow \infty$), $x_b(p_2) \equiv x_b = p_2 l_b^2$, $l_b^{-2} = |b|$, and $\Phi_n(z) = {e^{-z^2/2} \over \sqrt{2^n n! \sqrt{\pi}}} H_n(z)$ for the $n$-th Hermite polynomial $H_n(z)$. Thus, the states are labeled by $M = (n, p_2)$. We are interested in how the periodic vector potential in Eq.~\eqref{backgroundvector} lifts the degeneracy of the flat Landau level spectrum and contributes to the velocity $v_i^M$. (Finite dissipation has already been assumed in using a finite, non-zero scattering time $\tau$ in our calculation of the oscillatory component of $\rho_{xx}$.) First order perturbation theory gives the energy level corrections, \begin{equation} \begin{split} E_{n,p_2}^{(1)} = W\frac{\sqrt{2n}}{Kl_b}\Bigg[\sqrt{\frac{2n |b|}{m^2+2n |b|}}\Bigg]\cos(Kx_b)e^{-z/2}\Big[L_{n-1}(z)-L_n(z)\Big]\\ \end{split}, \end{equation} where $L_n(z)$ is the $n$-th Laguerre polynomial, $z = K^2 l_b^2/2$, and terms suppressed as $L_1, L_2 \rightarrow \infty$ have been dropped. Thus, to leading order, $v_1^{n, p_2} = 0$ and \begin{equation} v_2^{n,p_2}=\frac{\partial E_{n,p_2}^{(1)}}{\partial p_2}=-W l_b\sqrt{2n}\Bigg[\sqrt{\frac{2n |b|}{m^2+2n |b|}}\Bigg]\sin(Kx_b)\mathrm{e}^{-z/2}\Big[L_{n-1}(z)-L_n(z)\Big].
\end{equation} We substitute these $v_i^{n, p_2}$ into the Kubo formula \eqref{kubo} to find $\Delta \sigma_{yy}^\psi$. To perform the integral over $p_2$, we approximate the Fermi-Dirac distribution function by substituting in the zeroth order energies $E_n^{(0)}$ (which are independent of $p_2$). Thus, we obtain the periodic potential correction to the Dirac composite fermion conductivity: \begin{equation}\label{eq:del_sig_1} \begin{split} \Delta\sigma_{yy}^{\psi} & \approx W^2\widetilde{\tau}\beta \sum_{n=0}^\infty\Bigg(\frac{2n |b|}{m^2+2n |b|}\Bigg)\frac{n\exp(\beta(E_{n}^{(0)}-\mu))}{\Big[1+\exp(\beta(E_{n}^{(0)}-\mu))\Big]^2}\mathrm{e}^{-z}\Big[L_{n-1}(z)-L_n(z)\Big]^2, \end{split} \end{equation} where $\tilde{\tau} \propto \tau$ has absorbed non-universal ${\cal O}(1)$ constants. $\Delta \sigma_{yy}^\psi$ in Eq.~\eqref{eq:del_sig_1} exhibits both Shubnikov--de Haas (for large $|b|$) and Weiss oscillations (for smaller $|b|$). We are interested in extracting an analytic expression that approximates Eq.~\eqref{eq:del_sig_1} at low temperatures near $\nu = 1/2$, following the earlier analysis in \cite{peetersvasilopoulosmagnetic}. In the weak field limit, $|b|/\mu^2 \ll 1$, a large number of Landau levels are filled ($n\to\infty$). Thus, we express the Laguerre polynomials $L_n$ as \begin{equation} L_n\left(z\right) \xrightarrow[n \to \infty]{} \mathrm{e}^{z/2}\frac{\cos\left(2\sqrt{nz}-\frac{\pi}{4}\right)}{\left(\pi^2nz\right)^{1/4}} + {\cal O}({1 \over n^{3/4}}). 
\end{equation} Next, we take the continuum approximation for the summation over $n$ by substituting \begin{equation*} n \rightarrow {l_b^2 \over 2}\Big(E^2 - m^2 \Big), \quad \sum_{n}\rightarrow l_b^2 \int\ EdE, \end{equation*} into Eq.~\eqref{eq:del_sig_1}: \begin{align} \label{approximatedconductivity} \Delta\sigma_{yy}^{\psi} = {\cal C} \int_{- \infty}^{\infty} dE\ {\beta e^{\beta (E - \mu)} \over (1 + e^{\beta(E - \mu)})^2} \sin^2\Big(l_b^2 K \sqrt{E^2 - m^2} - {\pi \over 4} \Big), \end{align} where ${\cal C} = W^2 {\tilde \tau} l_b^2 K$ and we have approximated $2n |b|/(m^2 + 2 n |b|)$ by unity. (The substitution for $n$ is motivated by the zeroth order expression for the energy of the Dirac composite fermion Landau levels.) Anticipating that at sufficiently low temperatures the integrand in Eq.~\eqref{approximatedconductivity} is dominated by ``energies" $E$ near the Fermi energy $\mu$, we write: \begin{align} E = \mu + s T \end{align} so that Eq.~\eqref{approximatedconductivity} becomes for $|s| T \ll \mu = \sqrt{B}$: \begin{align} \label{thirdform} \Delta\sigma_{yy}^{\psi} = {\cal C} \int_{- \infty}^{\infty} ds\ {e^{s} \over (1 + e^{s})^2} \sin^2\Big(l_b^2 K \sqrt{B - m^2} + {s T l_b^2 K \over \sqrt{1 - {m^2 \over B}}} - {\pi \over 4} \Big). \end{align} Performing the integral over $s$, we find the Weiss oscillations (see Eq.~\eqref{dictionaryoscillation}): \begin{align} \label{finiteTresult} \Delta \rho_{xx} \propto 1 - {T/T_D \over \sinh(T/T_D)} \Big[1 - 2 \sin^2\Big({2 \pi l_b^2\sqrt{B - m^2} \over d} - {\pi \over 4}\Big)\Big], \end{align} where \begin{align} T_D^{-1} = {4 \pi^2 l_b^2 \over d} {1 \over \sqrt{1 - {m^2 \over B}}}, \end{align} we have substituted $K = 2\pi/d$, $l_b^2 = |b|^{-1}$, and the proportionality constant is controlled by the longitudinal resistivity at $\nu = 1/2$. Eq.~\eqref{finiteTresult} constitutes the primary result of this section. 
The minima of $\Delta \rho_{xx}$ occur when \begin{align} {1 \over |b|} = {d \over 2 \sqrt{B - m^2}}\Big(p + {1 \over 4}\Big), p = 1, 2, 3, \ldots, \end{align} where $m$ is given in Eq.~\eqref{mass}. For either fixed electron density $n_e$ or fixed external field $B$, the locations of the oscillation minima for a given $p$ (either $B(p)$ or $n_e(p)$) are shifted inwards towards $\nu = 1/2$. This is shown in Fig.~\ref{weissoscillations} for fixed $n_e$ and in Fig.~\ref{weissoscillationsfixedB} for fixed $B$. The magnitude of this shift is symmetric for fixed $B$, but asymmetric for fixed $n_e$, given the form of the mass in Eq.~\eqref{mass}. Mass dependence also appears in the temperature-dependent prefactor ${T/T_D \over \sinh(T/T_D)}$. In principle, this mass dependence could be extracted from the finite-temperature scaling of $\Delta \rho_{xx}$ at the oscillation extrema. \begin{figure} \center \includegraphics[scale=0.47]{weiss_combined_fixed_B} \caption{Weiss oscillations of the Dirac composite fermion theory at fixed magnetic field $B$ and varying electron density $n_e$ about half-filling $n_{1/2} = B_{1/2}/4\pi$ ($\ell_{B_{1/2}}/d = 0.03$ and $k_B T = 0.3\sqrt{2 B_{1/2}}$). The blue curve corresponds to Dirac composite fermion mean-field theory \cite{PhysRevB.95.235424}. The orange curve includes the effects of a Dirac composite fermion mass $m \propto |B - 4\pi n_e|^{1/3} B^{1/6}$ induced by gauge fluctuations. 
Vertical lines correspond to the observed oscillation minima \cite{Kamburov2014}.} \label{weissoscillationsfixedB} \end{figure} \section{Comparison to HLR mean-field theory at finite temperature} \label{discussion} \subsection{Shubnikov--de Haas oscillations} In \cite{Manoharan1994}, Shayegan et al.~found the Shubnikov--de Haas (SdH) oscillations near half-filling to be well described over two orders of magnitude in temperature by the formula, \begin{align} \label{normalsdh} {\Delta \rho_{xx} \over \rho_0} \propto {\xi_{NR} \over \sinh(\xi_{NR})} \cos(2 \pi \nu - \pi), \end{align} where $\xi_{NR} = {2 \pi^2 T \over \omega_c}$, $\omega_c = |b|/m^\ast$, $m^\ast$ is an effective mass, $\nu$ is the electron filling fraction, and $\rho_0$ is the longitudinal resistivity at half-filling (measured at the lowest accessible temperature). (Note that these experiments were performed without any background periodic potential and so no Weiss oscillations were present.) Recall that we are using units where $k_B = \hbar = e = c = 1$. In particular, it was found that $m^\ast \propto \sqrt{B}$ for sufficiently large $|b| = |B - 4 \pi n_e|$ and that $m^\ast$ appeared to diverge as half-filling was approached. Interpreted within the HLR composite fermion framework, $m^\ast$ corresponds to the composite fermion effective mass. The $\sqrt{B}$ behavior of the composite fermion effective mass is consistent with the theoretical expectation \cite{halperinleeread, read1996cfs} that the composite fermion mass scale at $\nu=1/2$ is determined entirely by the characteristic energy of the Coulomb interaction. (Away from $\nu=1/2$, scaling implies the effective mass can be a scaling function of $B$ and $n_e$.) 
Applying previous treatments of SdH oscillations in graphene \cite{PhysRevB.71.125124, PhysRevB.78.245406} to the Dirac composite fermion theory, the temperature dependence of the SdH oscillations is controlled by \begin{align} {\Delta \rho_{xx} \over \rho_0} \propto {\xi_{D} \over \sinh(\xi_D)}, \end{align} where $\xi_D = {2 \pi^2 T \sqrt{B} \over |b|}$. Thus, $\xi_{NR} \propto \xi_D$ if $m^\ast \propto \sqrt{B}$. Consequently, the Dirac composite fermion theory is consistent with the observed temperature scaling with $\sqrt{B}$. We cannot account for the divergence at small $|b|$ attributed to $m^\ast$ in our treatment. \subsection{Weiss oscillations} In \cite{PhysRevB.95.235424}, it was shown that the locations of the Weiss oscillation minima obtained from Dirac and HLR composite fermion mean-field theories coincide to $0.002\%$. This result provides evidence that the two composite fermion theories may belong to the same universality class. However, the (possible) equivalence of the two theories only occurs at long distances and so the finite-temperature behavior of the two theories will generally differ. In HLR {\it mean-field theory}, the temperature dependence of the Weiss oscillations enters in the factor \cite{peetersvasilopoulosmagnetic}, \begin{align} \Delta \rho_{xx} \propto {T/T_{NR} \over \sinh(T/T_{NR})}, \end{align} where the characteristic temperature scale, \begin{align} T^{-1}_{NR} = {4 \pi^2 l_b^2 \over d} {m^\ast \over \sqrt{4 \pi n_e}}. \end{align} Assuming the effective mass $m^\ast \propto \sqrt{B}$, the characteristic temperatures $T_D$ and $T_{NR}$ generally have very different behaviors as functions of $B$ and $n_e$. It would be interesting to study the effects of fluctuations in HLR theory, along the lines of the study presented here, and compare with our result in Eq.~\eqref{finiteTresult}. 
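The consistency statement for the SdH comparison above, namely $\xi_{NR} = \xi_D$ whenever $m^\ast = \sqrt{B}$, together with the way the $\xi/\sinh\xi$ factor suppresses oscillations at high temperature, can be verified in a few lines (plain Python, illustrative parameter values in natural units; function names are ours):

```python
import math

def xi_nr(T, b, m_star):
    """Nonrelativistic SdH damping argument: 2 pi^2 T / omega_c with omega_c = |b|/m*."""
    return 2.0 * math.pi ** 2 * T * m_star / abs(b)

def xi_d(T, b, B):
    """Dirac composite fermion counterpart: 2 pi^2 T sqrt(B) / |b|."""
    return 2.0 * math.pi ** 2 * T * math.sqrt(B) / abs(b)

# With m* = sqrt(B) the two damping arguments coincide identically.
T, b, B = 0.01, 0.05, 2.0
assert abs(xi_nr(T, b, math.sqrt(B)) - xi_d(T, b, B)) < 1e-12

# The factor xi/sinh(xi) is near unity for xi << 1 and suppresses the
# oscillations once xi is of order a few.
damp = lambda x: x / math.sinh(x)
assert damp(0.1) > 0.99
assert damp(5.0) < 0.07
```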
\section{Conclusion} \label{conclusion} In this paper, we studied theoretically commensurability oscillations about $\nu=1/2$ that are produced by a one-dimensional scalar potential using the Dirac composite fermion theory. Through an approximate large $N$ analysis of the Schwinger--Dyson equations, we considered how corrections to Dirac composite fermion mean-field theory affect the behavior of the predicted oscillations. We focused on corrections arising from the exchange of an emergent gauge field whose low-energy kinematics satisfy $|\vec{q}\,| \leq |q_0|$. In addition, we only considered screened electron-electron interactions. Remarkably within this restricted parameter regime, we found a self-consistent solution to the Schwinger--Dyson equations in which a Chern-Simons term for the gauge field and mass for the Dirac composite fermion are dynamically generated. The Dirac mass resulted in a correction to the locations of the commensurability oscillation minima which improved comparison with experiment. There are a variety of directions for future exploration. It would be interesting to consider the effects of the exchange of emergent gauge fields with $|q_0| < |\vec{q}\,|$. In this regime, Landau damping of the ``magnetic" component of the gauge field propagator is expected to result in IR dominant Dirac composite fermion self-energy corrections \cite{SSLee2009OrderofLimits, MetlitskiSachdev2010Part1, Mross2010}. In particular, it would be interesting to understand this regime when a dynamically-generated Chern-Simons term for the gauge field is present. These studies are expected to be highly sensitive to the nature of the electron-electron interactions. At $\nu=1/2$ when the effective magnetic field vanishes, single-particle properties depend upon whether this interaction is short or long ranged \cite{KimFurusakiWenLee1994}. 
It is important to understand the interplay of this physics with a non-zero effective magnetic field that is generated away from $\nu=1/2$ and its potential observable effects. The corrections to the predicted commensurability oscillations relied on a solution to the Schwinger--Dyson equations, obtained in a large $N$ flavor approximation, that was extrapolated to $N=1$. The study of higher-order in $1/N$ effects may provide additional insight into the validity of this extrapolation. Alternatively, study of the 't Hooft large $N$ limit of the Dirac composite fermion theory dual conjectured in \cite{PhysRevB.99.125135} may complement our analysis. Recent works \cite{2017PhRvX...7c1029W, 2018arXiv180307767K, PhysRevB.98.115105} have shown that PH symmetry at $\nu=1/2$ and reflection symmetry about $\nu=1/2$ rely on precisely correlated electric and magnetic perturbations. (This correlation is implemented by the Chern-Simons gauge field in the HLR theory.) Specifically, a periodic scalar potential $V({\bf x})$ generates a periodic magnetic flux $b({\bf x})$ via \begin{align} \label{slaving} b({\bf x}) = - 2 m^\ast V({\bf x}). \end{align} How might fluctuations about HLR mean-field theory affect Eq.~\eqref{slaving} and potentially modify its predicted commensurability oscillations and other observables? \section*{Acknowledgments} We thank Hamed Asasi, Sudip Chakravarty, Pak Kau Lim, Leonid Pryadko, Srinivas Raghu, Nicholas Rombes, and Mansour Shayegan for useful conversations and correspondence. M.M. is supported by the Department of Energy Office of Basic Energy Sciences contract DE-SC0020007. M.M. and A.M. are supported by the UCR Academic Senate and the Hellman Foundation. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.

\section{Introduction} More than 15 years ago, cosmologists had a satisfying picture of how the universe worked, based on three main hypotheses: it was assumed that the universe is homogeneous and isotropic on large scales, that energy and momentum exist in the form of ``ordinary'' matter and radiation plus Cold Dark Matter (CDM), and that gravity is described by Einstein's General Relativity (GR). This Standard Cosmological model was successful in explaining several aspects of our universe, and predicted that the expansion of the universe must decelerate. In fact, by the second Friedmann equation, the scale factor of the Universe $a(t)$ obeys $\ddot{a}/a \propto - (\rho + 3 p)$ (where $\rho$ is the energy density, $p$ is the pressure and a dot indicates a time derivative), and the right hand side of the previous relation is negative for ordinary matter, radiation and CDM. The discovery of the present acceleration of the expansion \cite{Riess98, Perlmutter98}, subsequently confirmed by a variety of observations, forces us to abandon some of the assumptions mentioned above. We could relax the assumption of homogeneity, which, unlike isotropy, is not well tested, and consider Lema\^{i}tre-Tolman-Bondi models where the Earth is placed inside a large void. Another possibility is that the backreaction of the deviations from homogeneity and isotropy on the evolution of the scale factor is not negligible, and neglecting this effect may cause a misinterpretation of the observational data. If instead we enforce homogeneity, we could postulate that the universe is filled by a source (which we don't observe in the lab or in the solar system) which has negative pressure: the simplest option is the Cosmological Constant ($\Lambda$), but we may also consider new dynamical degrees of freedom (the \emph{dark energy} paradigm). 
Finally, it may be that the cosmological observations are simply signalling that General Relativity is not the correct description of gravity at very large scales (the \emph{modified gravity} approach). See \cite{EllisNicolaiDurrerMaartens, DurrerMaartens} and references therein for a more detailed discussion of these approaches. The most straightforward solution of this problem seems to be to allow for a non-zero cosmological constant (\emph{$\Lambda$CDM models}). However, fitting the observational data gives a value for $\Lambda$ which is really puzzling. If $\Lambda$ is considered as a new scale for gravity, then the universe is very fine tuned (coincidence problem), while if it is considered as the semi-classical manifestation of vacuum energy then the mismatch with the theoretical prediction is dramatic (``old'' cosmological constant problem) \cite{WeinbergCC}. Therefore, it seems worthwhile to pursue alternative approaches, and we choose to investigate the modified gravity approach. \section{Modified gravity and Cascading DGP} Modifying gravity is, however, potentially dangerous, since in general it introduces extra degrees of freedom compared to GR. On the one hand, these extra degrees of freedom have to be screened on terrestrial and astrophysical scales, otherwise they would be detected via fifth-force experiments and precise tests of gravity in the solar system. On the other hand, when studying the perturbative stability of certain solutions of the equations of motion, quite often some perturbation modes turn out to have negative kinetic energy and therefore lead to ghost instabilities. In fact, the presence of fields with negative kinetic energy (ghosts) renders the Hamiltonian unbounded from below, and causes the system to be unstable with respect to the simultaneous excitation of ghost and non-ghost fields (see \cite{Sbisa:2014pzo} and references therein). 
Several modified gravity theories have been proposed so far, including $f(R)$ gravity, massive gravity, and braneworlds (see \cite{ModifiedGravityAndCosmologyHugeReview} for a review). In particular, braneworld models are appealing from a fundamental point of view, since the presence of extra dimensions and branes is crucial in string theory. In these theories, the spacetime has more than four dimensions and matter and radiation (as well as the strong, weak and electro-magnetic interactions) are confined on structures (branes) whose dimensionality is lower than the dimensionality of the ambient spacetime (bulk). Only gravity can propagate in the extra dimensions, which is consistent with the fact that in string theory gravity is described by closed strings. The codimension of a brane is defined as the difference between the dimension of the bulk and the dimension of the brane, and the embedding functions are those functions which indicate the position in the bulk of the points belonging to the brane. In general the confinement is not sharp, so matter, radiation and the gauge fields are distributed around the brane within a characteristic distance, which is determined by the properties of the system and of the confinement mechanism. This characteristic length scale is called the thickness of the brane, while the details of the distribution constitute the internal structure of the brane. Concerning the late time acceleration problem, a promising idea is that of brane induced gravity, whose first and best-known realization is the DGP model \cite{DGP00}. Braneworld theories with induced gravity are characterized by the inclusion, in the part of the action which describes the dynamics of the brane, of an Einstein-Hilbert term built from the metric induced on the brane. 
This term can be introduced at the classical level purely on phenomenological grounds, but can also be understood as a contribution coming from loop corrections in the low energy effective action of a quantum description where matter is confined on the brane \cite{DGP00}. The DGP model has the intriguing property of admitting self-accelerating cosmological solutions, which open the possibility of explaining the cosmic acceleration by geometrical means. On the other hand, it has been shown that these solutions have a perturbative ghost instability, and that the DGP model fits the cosmological data significantly worse than $\Lambda$CDM. See \cite{Sbisa:2014dwa} and references therein for a discussion on the DGP model and its problems. A natural idea to go beyond the DGP model is to increase the codimension, while still having infinite extra dimensions and brane induced gravity. Such models also offer the possibility of addressing the Cosmological Constant problem, since they evade Weinberg's no-go theorem \cite{ArkaniHamed:2002fu, Dvali:2002pe} and may realize the degravitation mechanism \cite{Dvali:2007kt}. However, it is not clear if increasing the codimension helps with the ghost problem, since there are contradictory results \cite{Dubovsky:2002jm, Kolanovic:2003am, Berkhahn:2012wg}. Moreover, the gravitational field on a brane of codimension higher than one diverges when the thickness of the brane tends to zero \cite{Cline:2003ak, Vinet:2004bk, Bostock:2003cv}, and it is necessary to take its internal structure explicitly into account. \subsection{The Cascading DGP model} Both of the aforementioned problems were claimed to be solved in a fairly recent class of models, the Cascading DGP \cite{deRham:2007xp}. In these models there is a recursive embedding of branes into branes of increasing dimensionality, each equipped with an appropriate induced gravity term. 
In the minimal set-up, a four-dimensional (4D) brane (our universe) is embedded inside a 5D brane which in turn is embedded in the 6D bulk. This model has three parameters, the masses $M_6$, $M_5$ and $M_4$, where $M_6^4$ controls the strength of the bulk action, $M_5^3$ controls the strength of the induced gravity term on the 5D brane and $M_4^2$ controls the strength of the induced gravity term on the 4D brane. The requirement that the model reproduces Einstein gravity on small scales fixes $M_4$ to be equal to the Planck mass, so this model has in truth two free parameters, the mass scales $m_6 = M_6^4/M_5^3$ and $m_5 = M_5^3/M_4^2$. They respectively control the relative strength between the bulk action and the induced gravity term on the 5D brane ($m_6$), and between the induced gravity term on the 5D brane and the induced gravity term on the 4D brane ($m_5$). It was shown that, moving from large distances to small distances, weak gravity ``cascades'' $6D \to 5D \to 4D$ when $m_6 \ll m_5$ while there is a direct transition $6D \to 4D$ when $m_6 \gg m_5$. Moreover, it was claimed that the presence of the codimension-1 brane with induced gravity renders the gravitational field finite on the codimension-2 brane even when the thickness of the latter tends to zero \cite{deRham:2007xp, deRham:2007rw}. A very important class of configurations are those that correspond to a source term given simply by vacuum energy ($\bar{\lambda}$) on the 4D brane (pure tension solutions). Importantly, it was claimed that there exists a critical tension $\bar{\lambda}_c$ such that a pure tension configuration is free of ghost instabilities (at first order in perturbations) if $\bar{\lambda} > \bar{\lambda}_c$ while it has ghosts if $\bar{\lambda} < \bar{\lambda}_c$ \cite{deRham:2007xp,deRham:2010rw}. 
\section{The nested brane realization of the Cascading DGP model} \label{regularization choice} To probe the dynamics of the internal structure of a brane, we need to excite it with amounts of energy roughly of the order of the inverse of the thickness (in ``natural units'' $\hbar = c = 1$). If we are interested only in what happens outside of the brane, and want to focus on energy scales lower than the inverse of the thickness, it is usual to consider a ``thin limit description'' in which the thickness of the brane is sent to zero while keeping fixed the amount of energy and momentum on the brane. In this case the brane is said to be ``thin''. While this is extremely useful for codimension-1 branes, it was proved that the thin limit of branes of codimension higher than one is not well-defined \cite{GerochTraschen}. This result is very likely to be true also for more elaborate constructions such as in the Cascading case. Therefore, to obtain a well-defined formulation of the Cascading DGP model, it is necessary to give some information on the internal structure of the branes. If we assume that there is a hierarchy between the thickness of the branes, namely that the codimension-1 brane is much thinner than the codimension-2 brane, we can describe the system as if a ``ribbon'' codimension-2 brane was present inside a thin codimension-1 brane. Furthermore, it can be shown that the thin limit of the ribbon brane (inside the already thin codimension-1 brane) is well defined \cite{Sbisa':2014uza}. It follows that, with this assumption about the hierarchy of the thicknesses, we can indeed work with thin branes and forget the internal structures. We refer to these set-ups as the \emph{nested branes realization} of the Cascading DGP. \subsection{Perturbations around pure tension solutions} To investigate the phenomenon which gives rise to the critical tension, we study perturbations at first order around solutions where pure tension is placed on the codimension-2 brane. 
These background solutions are most naturally expressed in a bulk-based approach, where the bulk metric is flat and the codimension-1 embedding has a cusp at the codimension-2 brane \cite{Sbisa':2014uza}. The complete space time is the product of a 4D Minkowski space and a 2D Riemannian cone, whose deficit angle is proportional to the tension. Therefore there exists a maximum tension $\bar{\lambda}_M = 2 \pi M_{6}^{4}$ which corresponds to a deficit angle of $2 \pi$ (when the cone becomes degenerate). To study perturbations, we leave both the bulk metric and the codimension-1 embedding free to fluctuate. To deal with the issue of gauge invariance, we consider a 4D scalar-vector-tensor decomposition, and work with gauge-invariant variables \cite{Mukohyama:2000ui}. In particular, in the scalar sector it is possible to express the equations in terms of two master variables, the trace part $\pi$ of the bulk metric perturbations, and the normal component of the codimension-1 bending $\delta \! \varphi_{\!_{\perp}}$ \cite{Sbisa':2014uza} (we call \emph{bending modes} the perturbation of the embedding functions). Notably, if we focus on the behaviour of the fields near the codimension-2 brane, it is possible to eliminate $\delta \! \varphi_{\!_{\perp}}$ and obtain a master equation for $\pi$ alone. This equation however contains the derivative of $\pi$ normally to the codimension-1 brane, so to know the behaviour of $\pi$ on the codimension-2 brane it is still necessary to solve the full 6D problem. This difficulty can be overcome by considering a ``4D limit'', which gives the following (local) equation on the codimension-2 brane \cite{Sbisa:2014vva} \begin{equation} 3 M_4^2 \, \bigg[ \, 1 - \frac{3}{2} \, \frac{m_5}{m_6} \, \tan \bigg( \frac{\bar{\lambda}}{4 M_6^4} \bigg) \bigg] \,\, \Box \, \pi = \mathcal{T} \end{equation} where $\Box$ indicates the four-dimensional D'Alembert operator and $\mathcal{T}$ is the trace of the energy-momentum tensor. 
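The coefficient of $\Box \, \pi$ in this equation changes sign when $\tan \big( \bar{\lambda}/4 M_6^4 \big) = 2 m_6/3 m_5$, i.e. at $\bar{\lambda} = 4 M_6^4 \arctan \big( 2 m_6/3 m_5 \big)$. A minimal numerical sketch (plain Python, illustrative units $M_6 = 1$ so that $\bar{\lambda}_M = 2\pi$; the function names are ours) verifying the sign change and that it occurs below the maximum tension:

```python
import math

def pi_kinetic_coeff(lam, m5_over_m6):
    """Bracket multiplying Box(pi) in the 4D-limit equation, units M_6^4 = 1:
    1 - (3/2)(m5/m6) tan(lam/4)."""
    return 1.0 - 1.5 * m5_over_m6 * math.tan(lam / 4.0)

ratio = 0.7                                    # m5/m6, illustrative value
lam_c = 4.0 * math.atan(2.0 / (3.0 * ratio))   # predicted sign-change location

# positive for lam just below lam_c (the range identified as ghostly in the
# text), negative just above it (the healthy range)
assert pi_kinetic_coeff(lam_c - 1e-3, ratio) > 0.0
assert pi_kinetic_coeff(lam_c + 1e-3, ratio) < 0.0

# the sign change always happens below the maximum tension lam_M = 2*pi
assert 0.0 < lam_c < 2.0 * math.pi
```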
This equation indicates that $\pi$ is an effective 4D ghost if $0 < \bar{\lambda} < \bar{\lambda}_c$ while it is a healthy field if $\bar{\lambda}_c < \bar{\lambda} < \bar{\lambda}_M$, where the \emph{critical tension} reads $\bar{\lambda}_c \equiv 4 M_6^4 \, \arctan \big( 2 m_6/3 m_5 \big)$. \section{Geometrical interpretation and ghost-free regions in parameter space} Having identified the critical tension, we can study the coupled dynamics of the fields $\pi$ and $\delta \! \varphi_{\!_{\perp}}$ to interpret its existence geometrically. Considering the 4D limit mentioned above, it can be shown that the trace of the energy-momentum tensor $\mathcal{T}$ excites $\pi$ via two separate channels. It does so directly, because of the 4D induced gravity term, and indirectly via the bending of the codimension-1 brane, because of the 5D induced gravity term. Crucially, the first channel excites $\pi$ in a ghostly and $\bar{\lambda}$-independent way, while the second channel excites $\pi$ in a healthy and $\bar{\lambda}$-dependent way \cite{Sbisa:2014vva}. The existence of the critical tension is due to the competition between these two channels: whether the field $\pi$ is a ghost or not is decided by the first channel being more or less efficient than the second. Note that the existence of the second channel is entirely due to the higher dimensional structure of the theory, and in particular to the presence of the induced gravity term on the codimension-1 brane. Our result for the critical tension is at odds with \cite{deRham:2007xp,deRham:2010rw}, which found the value $\bar{\lambda}_{c}^{dRKT} = 8 \, m_6^2 \, M_4^2/3$. These two results coincide when $m_6 \ll m_5$ but differ significantly when $m_6 \gtrsim m_5$ and are dramatically different when $m_6 \gg m_5$. 
To see why this difference is crucial, note that the value we found for the critical tension is always smaller than the maximum tension $\bar{\lambda}_{M}$, so for every value of $m_5$ and $m_6$ we can find a range of values for the background tension such that $\pi$ is a healthy field. However, $\bar{\lambda}_{c}^{dRKT}$ is smaller than $\bar{\lambda}_{M}$ only when $m_6 \lesssim m_5$, so the results of \cite{deRham:2007xp,deRham:2010rw} imply that half of the phase space of the theory is plagued by ghosts, and so is phenomenologically ruled out. It is therefore very important to establish why two different results are obtained, and which of the two is correct. \subsection{Numerical check} A priori we may wonder if, referring to the discussion in section \ref{regularization choice}, the hypotheses used in \cite{deRham:2007xp,deRham:2010rw} about the internal structure of the branes are different from the ones we use, and therefore whether in truth we are considering different models. However, this is not the case, since our analysis can be mapped into that of \cite{deRham:2010rw} by a coordinate transformation \cite{Sbisa:2014vva}. In fact, it can be shown that the difference lies in how the junction conditions at the codimension-2 brane are derived, which is linked to which hypotheses are made on the behaviour of the fields near the codimension-2 brane in the thin limit. Roughly speaking, we assume that the embedding functions remain continuous in the thin limit (``the codimension-1 brane does not break''), while the result of \cite{deRham:2010rw} is reproduced by assuming that the normal component of the bending remains continuous. These two conditions cannot both be satisfied at the same time, since in the background solutions the normal vector becomes discontinuous in the thin limit \cite{Sbisa:2014vva}. 
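The comparison between the two critical tensions and the maximum tension can be made concrete in illustrative units $M_6 = 1$, in which $\bar{\lambda}_M = 2\pi$ and, using $M_5^3 = M_6^4/m_6$ and $M_4^2 = M_5^3/m_5$, the value of \cite{deRham:2007xp,deRham:2010rw} becomes $8 m_6/(3 m_5)$. A short sketch (plain Python; the helper names are ours) checking the statements of this section:

```python
import math

LAM_MAX = 2.0 * math.pi   # maximum tension (deficit angle 2*pi), units M_6^4 = 1

def lam_c(r):
    """Critical tension found here, r = m6/m5: 4 arctan(2 r / 3)."""
    return 4.0 * math.atan(2.0 * r / 3.0)

def lam_c_drkt(r):
    """Value of de Rham et al.: 8 m6^2 M4^2 / 3. With M_6 = 1 one has
    M_5^3 = 1/m6 and M_4^2 = 1/(m5*m6), so this equals 8 r / 3."""
    return 8.0 * r / 3.0

# our critical tension is always below the maximum tension ...
assert all(lam_c(r) < LAM_MAX for r in (1e-3, 0.1, 1.0, 10.0, 1e3))
# ... whereas the dRKT value exceeds it once m6 is a few times m5,
assert lam_c_drkt(10.0) > LAM_MAX
# and the two expressions coincide in the cascading regime m6 << m5
assert abs(lam_c(1e-4) / lam_c_drkt(1e-4) - 1.0) < 1e-6
```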
To decide which of the two results is correct, we consider a case, namely the pure tension perturbation case, in which the internal structure of the codimension-2 brane is exactly solvable. In this case it is possible to derive the codimension-2 junction conditions by performing numerically the integration of the codimension-1 junction conditions across the codimension-2 brane (pillbox integration). Since the configuration of the fields is known explicitly also inside the codimension-2 brane, it is not necessary to make any hypothesis on the behaviour of the fields to do so. The idea is to perform the numerical pillbox integration for a sequence of configurations with different thickness, and study the asymptotic behaviour of the outcome when the thickness tends to zero. If our analysis or the analysis of \cite{deRham:2007xp,deRham:2010rw} is correct, then it has to agree with the numerical outcome in the thin limit. The outcomes of the numerical integrations are plotted in figure \ref{PillboxIntegrationfigure} (the parameter $n$ is inversely proportional to the thickness). It is evident that the points corresponding to our codimension-2 junction conditions (squares) converge to the points corresponding to the numerical integration (circles), while the points corresponding to the codimension-2 junction conditions which reproduce the result of \cite{deRham:2010rw} (diamonds) remain significantly distant from the former. \begin{figure}[htp!] \begin{center} \includegraphics{PillboxIntegration2.eps} \caption[Numerical results of the pillbox integration]{Plot of the numerical results of the pillbox integration} \label{PillboxIntegrationfigure} \end{center} \end{figure} \section{Conclusions} The numerical check above puts our analysis of the nested branes realization of the 6D Cascading DGP on a firm footing, and strongly supports our claim that the correct value of the critical tension is $\bar{\lambda}_c \equiv 4 M_6^4 \, \arctan \big( 2 m_6/3 m_5 \big)$. 
In particular, it strongly suggests that models where gravity displays a direct transition 6D $\to$ 4D ($m_6 > m_5$) are also phenomenologically viable. We conclude that braneworld models with infinite volume extra dimensions and induced gravity might be a powerful tool to tackle the late time acceleration and the Cosmological Constant problems. In particular, the Cascading DGP seems a promising candidate to overcome the problems of the DGP model whilst preserving its good features. On the other hand, the singular structure of the geometry is very subtle at the codimension-2 brane, thus indirectly confirming the belief that the singular structure of branes of codimension higher than one is in general more complex than that of codimension-1 branes. Regarding future directions of research, a lot of work remains to be done, including studying the existence of the ghost at the full non-linear level by performing a Hamiltonian analysis. More generally, it is important to derive the codimension-2 junction conditions at the non-perturbative level and to derive cosmological solutions, as well as to verify whether the model passes the high-precision solar system tests. \ack This talk is based on work done in collaboration with Kazuya Koyama. \section*{References} \providecommand{\newblock}{}
\section{Introduction} Bandit algorithms~\cite{thompson1933likelihood,robbins1952some,bubeck2012regret,slivkins2019introduction,lattimore2020bandit} provide an attractive model of learning for online platforms, and they are now widely used to optimize retail, media streaming, and news-feed. Each round of bandit learning corresponds to an interaction with a user, where the algorithm selects an arm (e.g. product, song, article), observes the user's response (e.g. purchase, stream, read), and then updates its policy. Over time, the bandit algorithm thus learns to maximize the user responses, which are often well aligned with the objective of the online platform (e.g. profit maximization, engagement maximization). While maximizing user responses may arguably be in the interest of the platform and its users at least in the short term, there is now a growing understanding that it can also be problematic in multiple respects. In this paper, we focus on the fact that this objective ignores the interests of the items (i.e. arms), which also derive utility from the interactions. In particular, sellers, artists and writers have a strong interest in the exposure their items receive, as it affects their chance to get purchased, streamed or read. It is well understood that algorithms that maximize user responses can be unfair in how they allocate exposure to the items~\cite{singh2018fairness}. For example, two items with very similar merit (e.g. click-through rate) may receive substantially different amounts of exposure --- which is not only objectionable in itself, but can also degrade the long-term objectives of the platform (e.g. seller retention~\cite{mehrotra2018towards}, anti-discrimination~\cite{noble2018algorithms}, anti-polarization~\cite{epstein2015search}). To illustrate the problem, consider a conventional (non-personalized) stochastic multi-armed bandit algorithm that is used to promote new music albums on the front-page of a website. 
The bandit algorithm will quickly learn which album draws the largest click-through rate and keep displaying this album, even if other albums are almost equally good. This promotes a winner-takes-all dynamic that creates superstars \cite{mehrotra2018towards}, and may drive many deserving artists out of business. Analogously, a (personalized) contextual bandit for news-feed recommendation can polarize a user by quickly learning which type of articles the user is most likely to read, and then exclusively recommend such articles instead of a portfolio that is more reflective of the user's true interest distribution. To overcome these problems of the conventional bandit objective, we propose a new formulation of the bandit problem that implements the principle of Merit-based Fairness of Exposure~\cite{singh2018fairness,BiegaGW18,Wang/etal/21b}. For brevity, we call this the \FairX\ bandit problem. It incorporates the additional fairness requirement that each item/arm receives a share of exposure that is proportional to its merit. We define the merit of an arm as an increasing function of its mean reward, and the exposure as the probability of being selected by the bandit policy at each round. Based on these quantities, we then formulate the reward regret and the fairness regret so that minimizing these two regrets corresponds to maximizing responses while minimizing unfairness to the items. For the \FairX\ bandit problem, we present a fair upper confidence bound (UCB) algorithm and a fair Thompson sampling (TS) algorithm in the stochastic multi-armed bandits (MAB) setting, as well as a fair linear UCB algorithm and a fair linear TS algorithm in the stochastic linear bandits setting. We prove that all algorithms achieve fairness regret and reward regret with sub-linear dependence on the number of rounds, while the TS-based algorithms have computational advantages. 
The fairness regret of these algorithms also depends on the minimum merit of the arms and a bounded Lipschitz constant of the merit function, and we provide fairness regret lower bounds based on these quantities. Beyond the theoretical analysis, we also conduct an empirical evaluation that compares these algorithms with conventional bandit algorithms and more naive baselines, finding that the fairness-aware algorithms can effectively allocate exposure to the different arms in a fair manner while maximizing user responses. \section{Stochastic Multi-Armed Bandits in the \FairX\ Setting} We begin by introducing the \FairX\ setting for stochastic MAB, including our new formulation of fairness and reward regret. We then develop two algorithms, called \FairXUCB\ and \FairXTS, and bound their fairness and reward regret. In the subsequent section, we will extend this approach to stochastic linear bandits. \subsection{\FairX\ Setting for Stochastic MAB} A stochastic MAB instance can be represented as a collection of reward distributions $\BanditEnv = (\RewardDist_\Action:\Action\in[\NumActions])$, where $\RewardDist_\Action$ is the reward distribution of arm $\Action$ with mean $\Params^\star_\Action = \mE_{\Reward\sim\RewardDist_\Action}\left[\Reward\right]$. The learner interacts with the environment sequentially over $\TimeSteps$ rounds. In each round $t\in[\TimeSteps]$, the learner has to choose a policy $\Policy_t$ over the $\NumActions$ arms based on the interaction history before round $t$. The learner then samples an arm $\Action_t\sim\Policy_t$. In response to the selected arm $\Action_t$, the environment samples a reward $\Reward_{t,\Action_t}\in\mathbb{R}$ from the reward distribution $\RewardDist_{\Action_t}$ and reveals it to the learner.
The history $\History_t = \left(\Policy_1, \Action_1,\Reward_{1,\Action_1},\ldots, \Policy_{t-1}, \Action_{t-1},\Reward_{t-1,\Action_{t-1}} \right)$ consists of all the deployed policies, chosen arms, and their associated rewards. Conventionally, the goal of learning is to maximize the cumulative expected reward $ \sum_{t=1}^{\TimeSteps}\mE_{\Action_t\sim\Policy_t}\Params^\star_{\Action_t}$. Thus conventional bandit algorithms converge to a policy that deterministically selects the arm with the largest expected reward. As many have pointed out in other contexts \cite{singh2018fairness,mehrotra2018towards,BiegaGW18,beutel2019fairness,geyik2019fairness,abdollahpouri2020multistakeholder}, such winner-takes-all allocations can be considered unfair to the items in many applications and can lead to undesirable long-term dynamics. Bringing this insight to the task of bandit learning, we propose to incorporate merit-based fairness-of-exposure constraints~\cite{singh2018fairness} into the bandits objective. Specifically, we aim to learn a policy $\pi^\star$ which ensures that each arm receives an amount of exposure proportional to its merit, where merit is quantified through an application-dependent merit function $\Merit(\cdot)>0$ that maps the expected reward of an arm to a positive merit value. \[ \frac{\Policy^\star(\Action)}{\Merit(\Params^\star_\Action)} = \frac{\Policy^\star(\Action')}{\Merit(\Params^\star_{\Action'})}\quad \forall \Action, \Action' \in [\NumActions]. \] The merit function $\Merit$ is an input to the bandit algorithm, and it provides a design choice that permits tailoring the fairness criterion to different applications. The following theorem shows that there is a unique policy that satisfies the above fairness constraints. 
\begin{theorem}[Optimal Fair Policy] \label{theo:unique_optimal_fair_policy} For any mean reward parameter $\Params^\star$ and any choice of merit function $\Merit(\cdot)>0$, there exists a unique policy $\Policy^\star$ of the form \[ \Policy^{\star}(\Action) = \frac{\Merit(\Params^\star_\Action)}{\sum_{\Action'}\Merit(\Params^\star_{\Action'}) }\quad\forall \Action\in[\NumActions], \] that fulfills the merit-based fairness of exposure constraints. \end{theorem} We refer to $\Policy^\star$ as the optimal fair policy. All the proofs of the theorems are in Appendix~\ref{section:proof_theorems}. When the bandit converges to this optimal fair policy $\Policy^\star$, its expected reward also converges to the expected reward of the optimal fair policy. We thus define the {\em reward regret} $\RewardRegret_\TimeSteps$ at round $\TimeSteps$ as the gap between the expected reward of the deployed policy and the expected reward of the optimal fair policy $\Policy^\star$ \begin{equation} \begin{split} \RewardRegret_\TimeSteps &=\sum_{t=1}^\TimeSteps\sum_{\Action}\Policy^\star(\Action)\Params^\star_\Action - \sum_{t=1}^\TimeSteps\sum_{\Action}\Policy_t(\Action)\Params^\star_\Action. \end{split} \end{equation} While this reward regret quantifies how quickly the reward is optimized, we also need to quantify how effectively the algorithm learns to enforce fairness. We thus define the following {\em fairness regret} $\FairnessRegret_\TimeSteps$, which measures the cumulative $\ell^1$ distance between the deployed policy and the optimal fair policy at round $\TimeSteps$ \begin{equation} \FairnessRegret_\TimeSteps = \sum_{t=1}^{\TimeSteps}\sum_\Action \lvert\Policy^{\star}(\Action) - \Policy_t(\Action)\rvert. \end{equation} The fairness regret and the reward regret depend both on the randomly sampled rewards and on the arms randomly sampled from the policy. They are thus random variables, and we aim to minimize the regrets with high probability.
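To make these definitions concrete, the following minimal sketch (our own illustrative Python with NumPy, not code from our experiments; the function names are ours) computes the optimal fair policy of Theorem~\ref{theo:unique_optimal_fair_policy} and evaluates both regrets for a sequence of deployed policies:

```python
import numpy as np

def optimal_fair_policy(means, merit):
    # pi*(a) proportional to f(mu*_a): merit-based fairness of exposure
    merits = merit(np.asarray(means, dtype=float))
    return merits / merits.sum()

def reward_regret(policies, means, merit):
    # cumulative gap in expected reward relative to the optimal fair policy
    means = np.asarray(means, dtype=float)
    pi_star = optimal_fair_policy(means, merit)
    return sum((pi_star - np.asarray(pi)) @ means for pi in policies)

def fairness_regret(policies, means, merit):
    # cumulative l1 distance to the optimal fair policy
    pi_star = optimal_fair_policy(means, merit)
    return sum(np.abs(pi_star - np.asarray(pi)).sum() for pi in policies)
```

For example, with means $(0.2, 0.4)$ and the identity merit function, the optimal fair policy is $(1/3, 2/3)$, so deploying the uniform policy for one round incurs fairness regret $1/3$.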
To prepare for the theoretical analysis, we introduce the following two conditions on the merit function $\Merit$ to suitably characterize a \FairX\ bandit problem. \begin{condition}[Minimum Merit] \label{condi:min_merit} The merit of each arm is positive, i.e. $\min_{\Params}\Merit(\Params)\geq\MeritMin$ for some positive constant $\MeritMin>0$. \end{condition} \begin{condition}[Lipschitz Continuity] \label{condi:lip_cont} The merit function $\Merit$ is $\LipConst$-Lipschitz continuous, i.e. $\forall$ $\Params_1,\Params_2$, $\lvert\Merit(\Params_1)-\Merit(\Params_2)\rvert\leq\LipConst\lvert\Params_1-\Params_2\rvert$ for some positive constant $\LipConst>0$. \end{condition} The following two theorems show that neither of the two conditions can be dropped, if we want to obtain bandit algorithms with fairness regret that is sub-linear in the number of rounds $\TimeSteps$. \begin{theorem}[Lower Bound on Fairness Regret is Linear without Minimum-Merit Condition] \label{theo:lower_bound_min_merit} For time horizon $\TimeSteps>0$, there exists a $1$-Lipschitz continuous merit function $f$ where $\min_{\Params} \Merit(\Params) = 1/\sqrt{\TimeSteps}$, such that for any bandit algorithm, there must exist a MAB instance such that the expected fairness regret is at least $\mE\left[\FairnessRegret_\TimeSteps\right] \geq 0.015 \TimeSteps$. \end{theorem} \begin{theorem}[Lower Bound on Fairness Regret is Linear without Bounded Lipschitz-Continuity Condition] \label{theo:lower_bound_lip_cont} For time horizon $\TimeSteps>0$, there exists a $\sqrt{\TimeSteps}$-Lipschitz continuous merit function $f$ with minimum merit $1$, such that for any bandit algorithm, there must exist a MAB instance such that the expected fairness regret is at least $\mE[\FairnessRegret_\TimeSteps] \geq 0.015 \TimeSteps$. 
\end{theorem} \subsection{\FairXUCB\ Algorithm} \begin{algorithm}[tbh] \begin{algorithmic}[1] \STATE {\bf input: }{$\NumActions$, $\TimeSteps$, $\Merit$, $\ConfiWidth$} \FOR{$t=1$ to $\TimeSteps$} \STATE $\forall\Action$ $\SelectedTimes_{t,\Action} = \sum_{\tau=1}^{t-1}\Indicator\{\Action_\tau=\Action\}$ \STATE $\forall\Action$ $\hat{\Params}_{t,\Action} =\sum_{\tau=1}^{t-1}\Indicator\{\Action_\tau=\Action\}\Reward_{\tau,\Action_\tau}/\SelectedTimes_{t,\Action}$ \STATE $\forall\Action$ $w_{t,\Action} = \ConfiWidth/\sqrt{\SelectedTimes_{t,\Action}}$ \STATE $\ConfidenceRegion_t = \left(\Params:\forall \Action\:\Params_\Action\in\left[ \hat{\Params}_{t,\Action}-w_{t,\Action}, \hat{\Params}_{t,\Action}+ w_{t,\Action} \right]\right)$ \STATE \label{alg:fair_UCB:optimization} $\Params_t = \argmax_{\Params\in\ConfidenceRegion_t}\sum_\Action\frac{ \Merit(\Params_\Action)}{\sum_{\Action'}\Merit(\Params_{\Action'})}\Params_\Action$ \STATE Construct policy $\Policy_t(\Action) = \frac{\Merit(\Params_{t,\Action})}{\sum_{\Action'} \Merit(\Params_{t,\Action'})}$ \STATE Sample arm $\Action_t\sim\Policy_t$ \STATE Observe reward $\Reward_{t,\Action_t}$ \ENDFOR \end{algorithmic} \caption{\FairXUCB\ Algorithm} \label{alg:fair_UCB} \end{algorithm} The first algorithm we introduce is called \FairXUCB\ and it is detailed in Algorithm~\ref{alg:fair_UCB}. It utilizes the idea of optimism in the face of uncertainty. At each round $t$, the algorithm constructs a confidence region $\ConfidenceRegion_t$ which contains the true parameter $\Params^\star$ with high probability. Then the algorithm optimistically selects a parameter $\Params_t \in \mathbb{R}^{K}$ within the confidence region $\ConfidenceRegion_t$ that maximizes the estimated expected reward subject to the constraint that we construct a fair policy as if the selected parameter is the true parameter. 
Compared to the conventional UCB algorithm, which deterministically selects the arm with the largest upper confidence bound in each round, the proposed \FairXUCB\ algorithm selects arms stochastically to ensure fairness. Finally, we apply the constructed policy $\Policy_t$, observe the feedback, and update the confidence region. The following two theorems characterize the fairness and reward regret upper bounds of the \FairXUCB\ algorithm. \begin{theorem}[\FairXUCB\ Fairness Regret] \label{theo:fair_UCB_FR} Under Conditions~\ref{condi:min_merit} and~\ref{condi:lip_cont}, suppose $\forall t,\Action: \Reward_{t,\Action}\in[-1,1]$. When $\TimeSteps>\NumActions$, for any $\delta\in(0,1)$, setting $\ConfiWidth = \sqrt{2\ln\left(4\TimeSteps\NumActions/\delta\right)}$, the fairness regret of the \FairXUCB\ algorithm is $\FairnessRegret_\TimeSteps =\widetilde{O}\left( \LipConst\sqrt{\NumActions\TimeSteps}/\MeritMin\right)$ with probability at least $1-\delta$. \end{theorem} \begin{theorem}[\FairXUCB\ Reward Regret] \label{theo:fair_UCB_RR} Suppose $\forall t,\Action: \Reward_{t,\Action}\in[-1,1]$. When $\TimeSteps>\NumActions$, for any $\delta\in(0,1)$, setting $\ConfiWidth = \sqrt{2\ln\left(4\TimeSteps\NumActions/\delta\right)}$, the reward regret of the \FairXUCB\ algorithm is $\RewardRegret_\TimeSteps =\widetilde{O}\left( \sqrt{\NumActions\TimeSteps}\right)$ with probability at least $1-\delta$. \end{theorem} Here $\widetilde{O}$ denotes $O$ with logarithmic factors suppressed. Note that the well-known $\Omega\left(\sqrt{\NumActions\TimeSteps}\right)$ reward regret lower bound~\cite{auer2002nonstochastic} developed for the conventional bandit problem also holds in the \FairX\ setting, because the conventional stochastic MAB problem that only minimizes the reward regret is a special case of the \FairX\ setting in which the merit function $\Merit$ is an infinitely steep increasing function.
Since the reward regret upper bound we proved for \FairXUCB\ does not depend on Conditions~\ref{condi:min_merit} and~\ref{condi:lip_cont} on the merit function $\Merit$, our reward regret upper bound for the \FairXUCB\ algorithm is tight up to logarithmic factors. The fairness regret has the same dependence on the number of arms $\NumActions$ and the number of rounds $\TimeSteps$ as the reward regret. It further depends on the minimum merit constant $\MeritMin$ and the Lipschitz continuity constant $\LipConst$, which we treat as absolute constants due to Theorem~\ref{theo:lower_bound_min_merit} and Theorem~\ref{theo:lower_bound_lip_cont}. Compared to the Fair\_SD\_TS algorithm proposed in~\cite{liu2017calibrated}, our proposed \FairXUCB\ algorithm targets the fairness and reward regret accumulated across rounds instead of enforcing a smooth fairness constraint in each individual round. This allows \FairXUCB\ to achieve improved fairness and reward regret ($\sqrt{\NumActions\TimeSteps}$ compared to $\left(\NumActions\TimeSteps\right)^{2/3}$). In addition, \FairXUCB\ works for general reward distributions and merit functions, while Fair\_SD\_TS only works for Bernoulli reward distributions and the identity merit function. One challenge in implementing Algorithm~\ref{alg:fair_UCB} lies in Step~\ref{alg:fair_UCB:optimization}, since finding the most optimistic parameter is a non-convex constrained optimization problem. We solve this optimization problem approximately with projected gradient descent in our empirical evaluation. In the next subsection, we will introduce the \FairXTS\ algorithm that avoids this optimization problem.
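To illustrate the projected-gradient approximation concretely, here is a simplified Python sketch of \FairXUCB\ (our own rendering, not the implementation used in our experiments): Step~\ref{alg:fair_UCB:optimization} is approximated by projected gradient ascent with a finite-difference gradient over the box-shaped confidence region, and a play-each-arm-once initialization is added to avoid empty counts.

```python
import numpy as np

def fair_policy(theta, merit):
    # merit-proportional exposure: pi(a) = f(theta_a) / sum_a' f(theta_a')
    m = merit(theta)
    return m / m.sum()

def optimistic_theta(theta_hat, width, merit, lr=0.01, steps=20):
    # Approximate the inner optimization of FairXUCB: maximize the fair
    # policy's expected reward over the box [theta_hat - w, theta_hat + w]
    # by projected gradient ascent (a heuristic for the non-convex problem).
    lo, hi = theta_hat - width, theta_hat + width
    theta, eps = theta_hat.copy(), 1e-6

    def objective(t):
        return fair_policy(t, merit) @ t

    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):               # finite-difference gradient
            tp, tm = theta.copy(), theta.copy()
            tp[i] += eps
            tm[i] -= eps
            grad[i] = (objective(tp) - objective(tm)) / (2 * eps)
        theta = np.clip(theta + lr * grad, lo, hi)  # ascent step + box projection
    return theta

def fairxucb(K, T, merit, alpha, env, rng):
    # env(a) samples a reward for arm a; returns the deployed policies.
    counts, sums, policies = np.zeros(K), np.zeros(K), []
    for _ in range(T):
        if counts.min() == 0:                     # initialization rounds
            a = int(np.argmin(counts))
            pi = np.eye(K)[a]
        else:
            theta_hat = sums / counts
            width = alpha / np.sqrt(counts)
            theta = optimistic_theta(theta_hat, width, merit)
            pi = fair_policy(theta, merit)
            a = int(rng.choice(K, p=pi))
        r = env(a)
        counts[a] += 1
        sums[a] += r
        policies.append(pi)
    return policies
```

The hyper-parameters of the inner solver (learning rate, number of steps) mirror the design choice discussed in the experiments section, but the values here are illustrative.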
\subsection{\FairXTS\ Algorithm} \begin{algorithm}[t] \begin{algorithmic}[1] \STATE{\bf input: }{$\Merit$, $\mathcal{V}_1$} \FOR{$t=1$ to $\infty$} \STATE Sample parameter from posterior $\Params_t\sim \mathcal{V}_t$ \STATE Construct policy $\Policy_t(\Action)=\frac{\Merit(\Params_{t,\Action})}{\sum_{\Action'}\Merit(\Params_{t,\Action'})}$ \STATE Sample arm $\Action_t\sim\Policy_t$ \STATE Observe reward $\Reward_{t,\Action_t}$ \STATE Update posterior $\mathcal{V}_{t+1}=\text{Update}(\mathcal{V}_1, \History_{t+1})$ \ENDFOR \end{algorithmic} \caption{\FairXTS\ Algorithm} \label{alg:fair_TS} \end{algorithm} Another approach to designing stochastic bandit algorithms that has proven successful both empirically and theoretically is Thompson Sampling (TS). We find that this approach can also be applied to the \FairX\ setting. In particular, our \FairXTS\ as shown in Algorithm~\ref{alg:fair_TS} uses posterior sampling similar to a conventional TS bandit. The algorithm puts a prior distribution $\mathcal{V}_1$ on the expected reward of each arm $\Params^\star$. For each round $t$, the algorithm samples a parameter $\Params_t$ from the posterior $\mathcal{V}_t$, and constructs a fair policy $\Policy_t$ from the sampled parameter to deploy. Finally, the algorithm observes the feedback and updates the posterior distribution of the true parameter. Following~\cite{russo2014learning}, we analyze the Bayesian reward and fairness regret of the algorithm. The Bayesian regret framework assumes that the true parameter $\Params^\star$ is sampled from the prior, and the Bayesian regret is the expected regret taken over the prior distribution \begin{equation} \text{Bayes}\RewardRegret_\TimeSteps = \mE_{\Params^\star}\left[\mE[\RewardRegret_\TimeSteps\vert\Params^\star]\right] \end{equation} \begin{equation} \text{Bayes}\FairnessRegret_\TimeSteps = \mE_{\Params^\star}\left[\mE[\FairnessRegret_\TimeSteps\vert\Params^\star]\right]. 
\end{equation} In the following two theorems we provide bounds on both the Bayesian reward regret and the Bayesian fairness regret of the \FairXTS\ algorithm. \begin{theorem}[\FairXTS\ Fairness Regret] \label{theo:fair_TS_FR} Under Conditions~\ref{condi:min_merit} and~\ref{condi:lip_cont}, suppose the mean reward $\Params^\star_a$ of each arm $a$ is independently sampled from the standard normal distribution $\Normal(0,1)$, and $\forall t,\Action$ $\Reward_{t,\Action}\sim\Normal(\Params^\star_\Action,1)$. Then the Bayesian fairness regret of the \FairXTS\ algorithm at any round $\TimeSteps$ is $\text{Bayes}\FairnessRegret_\TimeSteps=\widetilde{O}\left( \LipConst\sqrt{\NumActions\TimeSteps}/\MeritMin \right)$. \end{theorem} \begin{theorem}[\FairXTS\ Reward Regret] \label{theo:fair_TS_RR} Suppose the mean reward $\Params^\star_a$ of each arm $a$ is independently sampled from the standard normal distribution $\Normal(0,1)$, and $\forall t,\Action$ $\Reward_{t,\Action}\sim\Normal(\Params^\star_\Action,1)$. Then the Bayesian reward regret of the \FairXTS\ algorithm at any round $\TimeSteps$ is $\text{Bayes}\RewardRegret_\TimeSteps=\widetilde{O}\left( \sqrt{\NumActions\TimeSteps} \right)$. \end{theorem} Note that these regret bounds are of the same order as the fairness and reward regret of the \FairXUCB\ algorithm. However, \FairXTS\ relies on sampling from the posterior and thus avoids the non-convex optimization problem that makes the use of \FairXUCB\ more challenging. \section{Stochastic Linear Bandits in the \FairX\ Setting} In this section, we extend the two algorithms introduced in the MAB setting to the more general stochastic linear bandits setting, where the learner is provided with contextual information for making decisions. We discuss how the two algorithms can be adapted to this setting to achieve both sub-linear fairness and reward regret.
\subsection{\FairX\ Setting for Stochastic Linear Bandits} In stochastic linear bandits, each arm $\Action$ at round $t$ comes with a context vector $\Context_{t,\Action}\in\mathbb{R}^\Dimension$. A stochastic linear bandits instance $\BanditEnv=(\RewardDist_{\Context}:\Context\in\mathbb{R}^\Dimension)$ is a collection of reward distributions, one for each context vector. The key assumption of stochastic linear bandits is that there exists a true parameter $\Params^\star$ such that, regardless of the interaction history $\History_t$, the mean reward of arm $\Action$ at round $t$ is the inner product of the context vector and the true parameter, $\mE_{\Reward\sim\RewardDist_{\Context_{t,\Action}}}[\Reward\vert\History_t] =\Params^\star\cdot\Context_{t,\Action}$ for all $t, \Action$. The noise sequence \[ \eta_t = \Reward_{t,\Action_t} - \Params^{\star}\cdot\Context_{t,\Action_t} \] is thus a martingale difference sequence, since \[ \mE[\eta_t|\History_t] = \mE_{\Action\sim\Policy_t}[ \mE_{\Reward\sim\RewardDist_{\Context_{t,\Action}}}[\Reward|\History_t] - \Params^{\star}\cdot\Context_{t,\Action}]=0. \] At each round $t$, the learner is given a set of context vectors $\Actions_t\subset\mathbb{R}^\Dimension$ representing the arms, and it has to choose a policy $\Policy_t$ over these $\NumActions$ arms based on the interaction history $\History_t = (\Actions_1,\Policy_1, \Action_1,\Reward_{1,\Action_1},\ldots, \Actions_{t-1}, \Policy_{t-1}, \Action_{t-1},\Reward_{t-1,\Action_{t-1}})$. We focus on problems where the number of available arms is finite, $\forall t: \lvert\Actions_t\rvert=\NumActions$, but where $\NumActions$ could be large.
Again, we want to ensure that the policy provides each arm with an amount of exposure proportional to its merit \[ \frac{\Policy^\star_t(\Action)}{\Merit(\Params^\star\cdot\Context_{t,\Action})} = \frac{\Policy^\star_{t}(\Action')}{\Merit(\Params^\star\cdot\Context_{t,\Action'})}\quad \forall t, \Context_{t,\Action}, \Context_{t,\Action'} \in \Actions_t, \] where $\Merit$ is the merit function that maps the mean reward of the arm to a positive merit value. Since the set of arms changes over time, the optimal fair policy $\Policy_t^\star$ at round $t$ is time-dependent \[ \Policy^\star_t(\Action) = \frac{\Merit(\Params^{\star}\cdot \Context_{t,\Action})}{\sum_{\Action'}\Merit(\Params^{\star}\cdot \Context_{t,\Action'})}\quad\forall t, \Action. \] Analogous to the MAB setting, we define the reward regret as the expected reward difference between the optimal fair policy and the deployed policy \begin{equation} \RewardRegret_{\TimeSteps} = \sum_{t=1}^{\TimeSteps}\sum_\Action \Policy^\star_t(\Action)\Params^{\star}\cdot\Context_{t,\Action} - \sum_{t=1}^{\TimeSteps}\sum_\Action \Policy_t(\Action)\Params^{\star}\cdot\Context_{t,\Action}, \end{equation} and fairness regret as the cumulative $\ell^1$ distance between the optimal fair policy and the deployed policy \begin{equation} \FairnessRegret_\TimeSteps = \sum_{t=1}^{\TimeSteps}\sum_\Action \lvert\Policy^{\star}_t(\Action) - \Policy_t(\Action)\rvert. \end{equation} The lower bounds on the fairness regret derived in Theorem~\ref{theo:lower_bound_min_merit} and Theorem~\ref{theo:lower_bound_lip_cont} in the MAB setting also apply to the stochastic linear bandit setting, since we can easily convert a MAB instance into a stochastic linear bandits instance by constructing $\NumActions$ $\NumActions$-dimensional basis vectors, each representing one arm. 
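This reduction is immediate to write down: with one-hot contexts, the linear mean reward $\Params^{\star}\cdot\Context_\Action$ recovers the MAB mean reward of arm $\Action$ exactly (a two-line illustration, our own):

```python
import numpy as np

# A K-armed MAB instance embedded as a linear bandit: arm a gets the
# K-dimensional basis vector e_a, so theta* . e_a = theta*_a.
theta_star = np.array([0.2, 0.5, 0.8])   # MAB mean rewards
contexts = np.eye(3)                     # one-hot context per arm
mean_rewards = contexts @ theta_star     # identical to theta_star
```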
Thus we again employ Condition~\ref{condi:min_merit} and~\ref{condi:lip_cont} to design algorithms that have fairness regret with sub-linear dependence on the horizon $\TimeSteps$. \subsection{\FairXLinUCB\ Algorithm} \begin{algorithm}[t] \begin{algorithmic}[1] \STATE{\bf input: } { $\beta_t$, $\Merit$, $\lambda$} \STATE{\bf initialization:} { $\CovMatrix_1 = \lambda\mathbf{I}_\Dimension$, $\B_1 = \mathbf{0}_\Dimension$ } \FOR { $t=1$ to $\infty$ } \STATE Observe contexts $\Actions_t =$ $(\Context_{t,1},$ $ \Context_{t,2},$ $\ldots,$ $\Context_{t,\NumActions})$ \STATE $\hat{\Params}_t = \CovMatrix_t^{-1}\B_t$ \{The ridge regression solution\} \STATE $\ConfidenceRegion_t=(\Params: \|\Params-\hat{\Params}_t\|_{\CovMatrix_t}\leq \sqrt{\beta_t} )$ \STATE \label{alg:fair_LinUCB_optimization}$\Params_t = \argmax_{\Params\in \ConfidenceRegion_t} \sum_{\Action}\frac{\Merit(\Params\cdot\Context_{t,\Action})}{ \sum_{\Action'} \Merit(\Params\cdot\Context_{t,\Action'}) }\Params\cdot\Context_{t,\Action}$ \STATE Construct policy $\Policy_t(\Action)=\frac{\Merit(\Params_t\cdot\Context_{t,\Action})}{\sum_{\Action'}\Merit(\Params_t\cdot\Context_{t,\Action'})}$ \STATE Sample arm $\Action_t\sim\Policy_t$ \STATE Observe reward $\Reward_{t,\Action_t}$ \STATE $\CovMatrix_{t+1} = \CovMatrix_t + \Context_{t,\Action_t}\Context_{t,\Action_t}^\top$ \STATE $\B_{t+1} = \B_t + \Context_{t,\Action_t} r_{t,\Action_t}$ \ENDFOR \end{algorithmic} \caption{\FairXLinUCB\ Algorithm} \label{alg:fair_LinUCB} \end{algorithm} Similar to the \FairXUCB\ algorithm, the \FairXLinUCB\ algorithm constructs a confidence region $\ConfidenceRegion_{t}$ of the true parameter $\Params^\star$ at each round $t$. The center of the confidence region $\hat{\Params}_t$ is the solution of a ridge regression over the existing data, which can be updated incrementally. The radius of the confidence ball $\beta_t$ is an input to the algorithm. 
The algorithm proceeds by repeatedly selecting a parameter $\Params_t$ that is optimistic about the expected reward within the confidence region, subject to the constraint that we construct a fair policy from the parameter. We prove the following upper bounds on the fairness regret and reward regret of the \FairXLinUCB\ algorithm. \begin{theorem}[\FairXLinUCB\ Fairness Regret] \label{theo:fair_LinUCB_FR} Under Conditions~\ref{condi:min_merit} and~\ref{condi:lip_cont}, suppose $\forall t,\Action$ $\|\Context_{t,\Action}\|_2\leq 1$, $\eta_t$ is $1$-sub-Gaussian, and $\|\Params^{\star}\|_2\leq 1$. Setting $\lambda=1$ and with a proper choice of $\beta_t$, the fairness regret at any round $\TimeSteps>0$ is $\FairnessRegret_\TimeSteps=\widetilde{O}\left(\LipConst\Dimension\sqrt{\TimeSteps}/{\MeritMin}\right)$ with high probability. \end{theorem} \begin{theorem}[\FairXLinUCB\ Reward Regret] \label{theo:fair_LinUCB_RR} Suppose $\forall t,\Action$ $\|\Context_{t,\Action}\|_2\leq 1$, $\eta_t$ is $1$-sub-Gaussian, and $\|\Params^{\star}\|_2\leq 1$. Setting $\lambda=1$ and with a proper choice of $\beta_t$, the reward regret at any round $\TimeSteps>0$ is $\RewardRegret_\TimeSteps=\widetilde{O}\left(\Dimension\sqrt{\TimeSteps}\right)$ with high probability. \end{theorem} Both the fairness and the reward regret have a square root dependence on the horizon $\TimeSteps$ and a linear dependence on the feature dimension $\Dimension$, and the fairness regret depends on the absolute constants $\LipConst$ and $\MeritMin$. Note that the reward regret is not tight in terms of $\Dimension$: there exist algorithms~\cite{chu2011contextual,lattimore2020bandit} that achieve reward regret $\widetilde{O}(\sqrt{\Dimension\TimeSteps})$. However, these algorithms are based on the idea of arm elimination and thus will likely not achieve low fairness regret. LinUCB is also a much more practical option than arm-elimination-based algorithms \citep{chu2011contextual}.
The optimization Step~\ref{alg:fair_LinUCB_optimization} in Algorithm~\ref{alg:fair_LinUCB}, where we need to find an optimistic parameter $\Params_t$ that maximizes the estimated expected reward within the confidence region $\ConfidenceRegion_t$ subject to the fairness constraint, is again a non-convex constrained optimization problem. We use projected gradient descent to find approximate solutions in our empirical evaluation. \subsection{\FairXLinTS\ Algorithm} \begin{algorithm}[t] \begin{algorithmic}[1] \STATE{\bf input: }{ $\Merit$, $\mathcal{V}_1$} \FOR{$t=1$ to $\infty$} \STATE Observe contexts $\Actions_t = (\Context_{t,1}, \Context_{t,2}, \ldots, \Context_{t,\NumActions})$ \STATE Sample parameter from posterior $\Params_t\sim \mathcal{V}_t$ \STATE Construct policy $\Policy_t(\Action)=\frac{\Merit(\Params_{t}\cdot\Context_{t,\Action})}{\sum_{\Action'}\Merit(\Params_{t}\cdot\Context_{t,\Action'})}$ \STATE Sample arm $\Action_t\sim\Policy_t$ \STATE Observe reward $\Reward_{t,\Action_t}$ \STATE Update posterior $\mathcal{V}_{t+1} = \text{Update}(\mathcal{V}_1, \History_{t+1})$ \ENDFOR \end{algorithmic} \caption{\FairXLinTS~Algorithm} \label{alg:fair_LinTS} \end{algorithm} To avoid the difficult optimization problem of \FairXLinUCB, we again explore the use of Thompson sampling. Algorithm~\ref{alg:fair_LinTS} shows our proposed \FairXLinTS. At each round $t$, the algorithm samples a parameter $\Params_t$ from the posterior distribution $\mathcal{V}_t$ of the true parameter $\Params^\star$ and derives a fair policy $\Policy_t$ from the sampled parameter. Then the algorithm deploys the policy and observes the feedback for the selected arm. Finally, the algorithm updates the posterior distribution of the true parameter given the observed data. Note that sampling from the posterior is efficient for a variety of models (e.g. the normal distribution), as opposed to solving the non-convex optimization problem in \FairXLinUCB.
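As a concrete instance of the posterior update, assume a $\Normal(\mathbf{0}, \lambda^{-1}\mathbf{I})$ prior on $\Params^\star$ and unit-variance Gaussian noise; the posterior given the observed data is then $\Normal(\CovMatrix_t^{-1}\B_t, \CovMatrix_t^{-1})$ with $\CovMatrix_t$ and $\B_t$ maintained as in Algorithm~\ref{alg:fair_LinUCB}. A minimal Python sketch of \FairXLinTS\ under these assumptions (our own rendering, with illustrative names, not the paper's code):

```python
import numpy as np

def fairx_lints(contexts_seq, merit, env, rng, lam=1.0):
    # FairXLinTS sketch with a N(0, I/lam) prior and unit-variance noise:
    # posterior is N(V^{-1} b, V^{-1}), V = lam*I + sum x x^T, b = sum x r.
    d = contexts_seq[0].shape[1]
    V, b, policies = lam * np.eye(d), np.zeros(d), []
    for X in contexts_seq:                    # X: (K, d) contexts this round
        V_inv = np.linalg.inv(V)
        theta = rng.multivariate_normal(V_inv @ b, V_inv)  # posterior sample
        m = merit(X @ theta)
        pi = m / m.sum()                      # merit-proportional fair policy
        a = int(rng.choice(len(pi), p=pi))
        r = env(X[a])                         # observe reward of chosen arm
        V += np.outer(X[a], X[a])             # incremental posterior update
        b += X[a] * r
        policies.append(pi)
    return policies
```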
Appropriately extending our definition of the Bayesian reward regret and fairness regret \begin{equation} \text{Bayes}\RewardRegret_\TimeSteps = \mE_{\Params^\star}\left[\mE[\RewardRegret_\TimeSteps\vert\Params^\star]\right] \end{equation} \begin{equation} \text{Bayes}\FairnessRegret_\TimeSteps = \mE_{\Params^\star}\left[\mE[\FairnessRegret_\TimeSteps\vert\Params^\star]\right] , \end{equation} we can prove the following regret bounds for the \FairXLinTS\ algorithm. \begin{theorem}[\FairXLinTS\ Fairness Regret] \label{theo:fair_LinTS_FR} Under Conditions~\ref{condi:min_merit} and~\ref{condi:lip_cont}, suppose each dimension of the true parameter $\Params^\star$ is independently sampled from the standard normal distribution $\Normal(0,1)$, $\forall t,\Action$ $\|\Context_{t,\Action}\|_2\leq 1$, and $\eta_t$ is sampled from the standard normal distribution $\Normal(0,1)$. Then the Bayesian fairness regret of the \FairXLinTS\ algorithm is $\text{Bayes}\FairnessRegret=\widetilde{O}\left(\LipConst\sqrt{\Dimension\TimeSteps}/\MeritMin\right)$. \end{theorem} \begin{theorem}[\FairXLinTS\ Reward Regret] \label{theo:fair_LinTS_RR} Suppose each dimension of the true parameter $\Params^\star$ is independently sampled from the standard normal distribution $\Normal(0,1)$, $\forall t,\Action$ $\|\Context_{t,\Action}\|_2\leq 1$, and $\eta_t$ is sampled from the standard normal distribution $\Normal(0,1)$. Then the Bayesian reward regret of the \FairXLinTS\ algorithm is $\text{Bayes}\RewardRegret=\widetilde{O}\left( \Dimension\sqrt{\TimeSteps} \right)$. \end{theorem} Similar to the \FairXTS\ algorithm in the MAB setting, the Bayesian fairness regret of \FairXLinTS\ assumes a normal prior. Note that the Bayesian fairness regret of \FairXLinTS\ differs by a factor of order $\sqrt{\Dimension}$ from the non-Bayesian fairness regret of the \FairXLinUCB\ algorithm.
The Bayesian setting and the normal prior assumption enable us to explicitly bound the total variation distance between our policy and the optimal fair policy, which allows us to avoid going through the UCB-based analysis of the LinUCB algorithms as in the conventional way of proving Bayesian regret bound~\cite{russo2014learning}. \begin{figure*}[!htb] \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/histgram_exp_short_L4.pdf} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=\linewidth]{figures/histgram_exp_long_L4.pdf} \end{subfigure} \caption{The average exposure distribution of different algorithms on the yeast dataset in the MAB setting after $2,000$ rounds (left) and $2,000,000$ rounds (right). ($c=4$) } \label{fig:hist} \end{figure*} \section{Experiments} While the theoretical analysis provides worst case guarantees for the algorithms, we now evaluate empirically how the algorithms perform on a range of tasks. We perform this evaluation both on synthetic and real-world data. The synthetic data allows us to control properties of the learning problem for internal validity, and the real-world data provides a data-point for external validity of the analysis. \subsection{Experiment Setup} \label{sec:exp_setup} For the experiments where we control the properties of the synthetic data, we derive bandit problems from the multi-label datasets \emph{yeast}~\cite{horton1996probabilistic} and \emph{mediamill}~\cite{snoek2006challenge}. The yeast dataset consists of $2,417$ examples. Each example has $103$ features and belongs to one or multiple of the $14$ classes. We randomly split the dataset into two sets, 20\% as the validation set to tune hyper-parameters and 80\% as the test set to test the performance of different algorithms. For space reasons, the details and the results of the mediamill dataset are in Appendix~\ref{appendix:exp}. 
To simulate the bandit environment, we treat classes as arms and their labels ($0$ or $1$) as rewards. For each round $t$, the bandit environment randomly samples an example from the dataset. Then the bandit algorithm selects an arm (class), and its reward (class label) is revealed to the algorithm. To construct context vectors for the arms, we generate $50$-dimensional random Fourier features~\cite{rahimi2007random} from the outer product between the features of the example and the one-hot representation of the arms. For the experiments on real-world data, we use data from the Yahoo! Today Module~\cite{li2010contextual}, which contains user click logs from a news-article recommender system that was fielded for $10$ days. Each day logged around $4.6$ million events from a bandit that selected articles uniformly at random, which allows the use of the replay methodology~\cite{li2010contextual} for unbiased offline evaluation of new bandit algorithms. We use the data from the first day for hyper-parameter selection and report the results on the data from the second day. The results using all the data are presented in Appendix~\ref{appendix:exp}. Each article and each user is represented by a $6$-dimensional feature vector respectively. Following~\cite{li2010contextual}, we use the outer product between the user features and the article features as the context vector. To calculate the fairness and reward regret, we determine the optimal fair policy as follows. For MAB experiments, we use the empirical mean reward of each arm as the mean parameter for each arm. For linear bandit experiments, we fit a linear least square model that maps the context vector of each arm to its reward. Note that the linearity assumption does not necessarily hold for any of the datasets, and that rewards are known to change over time for the Yahoo! data. This realism adds a robustness component to the evaluation. 
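The replay methodology mentioned above can be sketched in a few lines (our own simplified Python; \texttt{policy\_fn} and \texttt{update\_fn} are illustrative names): logged events from the uniformly random logging policy are streamed through the evaluated bandit, and only the events where the bandit's sampled arm matches the logged arm are counted and used for updates, which keeps the estimate unbiased.

```python
import numpy as np

def replay_evaluate(policy_fn, update_fn, logged_events, rng):
    # Replay evaluation on logs collected by a uniformly random policy:
    # an event counts only when the evaluated bandit picks the logged arm.
    matched, total_reward = 0, 0.0
    for contexts, logged_arm, reward in logged_events:
        pi = policy_fn(contexts)            # evaluated bandit's policy
        a = int(rng.choice(len(pi), p=pi))
        if a == logged_arm:                 # matched event: count and learn
            matched += 1
            total_reward += reward
            update_fn(contexts[a], reward)
    return total_reward / max(matched, 1)   # average-reward estimate
```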
We also add straightforward \FairX-variants of the $\epsilon$-greedy algorithms to the empirical analysis, which we call \FairXEG\ and \FairXLinEG. The algorithms are identical to their conventional $\epsilon$-greedy counterparts, except that they construct their policies according to $\Policy_t(\Action)=\frac{\Merit(\hat{\Params}_{t,\Action})}{\sum_{\Action'}\Merit(\hat{\Params}_{t,\Action'})}$ or $\Policy_t(\Action)=\frac{\Merit(\hat{\Params}_{t}\cdot\Context_{t,\Action})}{\sum_{\Action'}\Merit(\hat{\Params}_{t}\cdot\Context_{t,\Action'})}$, where $\hat{\Params}_{t}$ is the estimated parameter at round $t$. While $\epsilon$-greedy has weaker guarantees already in the conventional bandit setting, it is well known that it often performs well empirically, and we thus add it as a reference for the more sophisticated algorithms. In addition to the \FairX\ algorithms, we also include the fairness regret of the conventional UCB, TS, LinUCB, and LinTS bandit algorithms. We use merit functions of the form $\Merit(\Params) = \exp(c\Params)$, since the choice of the constant $c$ provides a straightforward way to explore how the algorithms perform for steeper vs. flatter merit functions. In particular, the choice of $c$ varies the value of $\LipConst/\MeritMin$. For both \FairXUCB\ and \FairXLinUCB, we use projected gradient descent to solve the non-convex optimization problem in each round. We set the learning rate to $0.01$ and the number of steps to $10$. For \FairXLinUCB, we use a fixed $\beta_t = \beta$ for all rounds. In general, we use grid search to tune hyper-parameters to minimize the fairness regret on the validation set and report the performance on the test set. We grid search over $\ConfiWidth$ for \FairXUCB\ and UCB; the prior variance and reward variance for \FairXTS, TS, \FairXLinTS, and LinTS; $\lambda$ and $\beta$ for \FairXLinUCB\ and LinUCB; $\epsilon$ for \FairXEG; and $\epsilon$ and the regularization parameter of the ridge regression for \FairXLinEG.
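To see how $c$ controls this trade-off, note that for $\Merit(\Params)=\exp(c\Params)$ on rewards in $[0,1]$, say, both the Lipschitz constant and the ratio $\LipConst/\MeritMin$ grow with $c$, so a larger $c$ concentrates the optimal fair exposure on the best arms. A small self-contained check (with illustrative mean rewards of our choosing):

```python
import numpy as np

def exposure_under_exp_merit(means, c):
    # optimal fair exposure under the exponential merit f(theta) = exp(c * theta)
    m = np.exp(c * np.asarray(means, dtype=float))
    return m / m.sum()

means = [0.2, 0.5, 0.8]
flat = exposure_under_exp_merit(means, c=1)    # gentle merit: exposure stays spread out
steep = exposure_under_exp_merit(means, c=10)  # steep merit: near winner-takes-all
```

With $c=1$ the three arms receive roughly comparable exposure, while with $c=10$ the best arm receives about $95\%$ of it.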
We run each experiment $10$ times and report the mean and the standard deviation. \subsection{How unfair are conventional bandit algorithms?} We first verify that conventional bandit algorithms indeed violate merit-based fairness of exposure, and that our \FairX\ algorithms specifically designed to ensure fairness do indeed perform better. \autoref{fig:hist} shows the average exposure that each arm received across rounds under the conventional UCB and TS algorithms for a typical run, and it compares this to the exposure allocation under the \FairXUCB\ and \FairXTS\ algorithms. The plots show the average exposure after 2,000 (left) and 2,000,000 (right) rounds, and they also include the optimally fair exposure allocation. Already after 2,000 rounds, the conventional algorithms under-expose many of the arms. After 2,000,000 rounds, they focus virtually all exposure on arm 11, even though arm 12 has only slightly lower merit. Both \FairXUCB\ and \FairXTS\ track the optimal exposure allocation substantially better, and they converge to the optimally fair solution. This verifies that \FairX\ algorithms like \FairXUCB\ and \FairXTS\ are indeed necessary to enforce merit-based fairness of exposure. The following sections further show that conventional bandit algorithms consistently suffer from much larger fairness regret compared to \FairX\ algorithms across different datasets and merit functions in both the MAB and the linear bandits settings. \subsection{How do the \FairX\ algorithms compare in the MAB setting?} \begin{figure}[!htb] \begin{subfigure}{.235\textwidth} \centering \includegraphics[width=\linewidth]{figures/yeast_L10_fr.jpeg} \end{subfigure} \begin{subfigure}{.235\textwidth} \centering \includegraphics[width=\linewidth]{figures/yeast_L10_rr.jpeg} \end{subfigure} \caption{Fairness regret and reward regret of different MAB algorithms on the yeast dataset.
($c=10$)} \label{fig:yeast_MAB} \end{figure} \autoref{fig:yeast_MAB} compares the performance of the bandit algorithms on the yeast dataset. The fairness regret converges roughly at the rate predicted by the bounds for \FairXUCB\ and \FairXTS, and \FairXEG\ shows a similar behavior as well. In terms of reward regret, all \FairX\ algorithms perform substantially better than their worst-case bounds suggest. Note that \FairXUCB\ does particularly well in terms of reward regret, but also note that part of this is due to violating fairness more than \FairXTS. Specifically, in the \FairX\ setting, an unfair policy can get better reward than the optimal fair policy, making a negative reward regret possible. While \FairXEG\ wins neither on fairness regret nor on reward regret, it nevertheless does surprisingly well given the simplicity of the exploration scheme. We conjecture that \FairXEG\ benefits from the implicit exploration that results from the stochasticity of the fair policies. Results for other merit functions are given in Appendix~\ref{appendix:exp}, and we find that the algorithms perform more similarly the flatter the merit function. \subsection{How do the \FairX\ algorithms compare in the linear bandits setting?} \begin{figure}[!htb] \begin{subfigure}{.235\textwidth} \centering \includegraphics[width=\linewidth]{figures/yeast_lin_L3_fr.jpeg} \end{subfigure} \begin{subfigure}{.235\textwidth} \centering \includegraphics[width=\linewidth]{figures/yeast_lin_L3_rr.jpeg} \end{subfigure} \caption{Fairness regret and reward regret of different linear bandit algorithms on the yeast dataset. 
($c=3$) } \label{fig:yeast_linear} \end{figure} \begin{figure*}[!tbh] \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{figures/yahoo_L10_fr.jpeg} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{figures/yahoo_L10_rr.jpeg} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{figures/yahoo_lin_L10_fr.jpeg} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\textwidth]{figures/yahoo_lin_L10_rr.jpeg} \end{subfigure} \caption{Experiment results on the Yahoo! dataset for both the MAB and the linear bandits setting ($c=10$ for both settings).} \label{fig:yahoo} \end{figure*} We show the fairness regret and the reward regret of the bandit algorithms on the yeast dataset in \autoref{fig:yeast_linear}. Results for other merit functions are in Appendix~\ref{appendix:exp}. Similar to the MAB setting, there is no clear winner between the three \FairX\ algorithms. Again we see some trade-offs between reward regret and fairness regret, but all three \FairX\ algorithms show a qualitatively similar behavior. One difference is that the fairness regret no longer seems to converge. This can be explained with the misspecification of the linear model, as the estimated ``optimal'' policy that we use in our computation of regret may differ from the policy learned by the algorithms due to selection biases. Nevertheless, we conclude that the fairness achieved by the \FairX\ algorithms is still highly preferable to that of the conventional bandit algorithms. \subsection{How do the \FairX\ algorithms compare on the real-world data?} To validate the algorithms on a real-world application, \autoref{fig:yahoo} provides fairness and reward regret on the Yahoo! dataset for both the MAB and the linear bandits setting. Again, all three types of \FairX\ algorithms perform comparably and have reward regret that converges quickly. 
Note that even the MAB setting now includes some misspecification of the model, since the reward distribution changes over time. This explains the behavior of the fairness regret. However, all \FairX\ algorithms perform robustly in both settings, even though the real data does not exactly match the model assumptions. \section{Conclusions} We introduced a new bandit setting that formalizes merit-based fairness of exposure for both the stochastic MAB and the linear bandits setting. In particular, we define fairness regret and reward regret with respect to the optimal fair policy that fulfills the merit-based fairness of exposure, develop UCB and Thompson sampling algorithms for both settings, and prove bounds on their fairness and reward regret. An empirical evaluation shows that these algorithms provide substantially better fairness of exposure to the items, and that they are effective across a range of settings. \section*{Acknowledgements} This research was supported in part by NSF Awards IIS-1901168 and IIS-2008139. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. \section{Related Work} The bandit problem was first introduced by Thompson~\cite{thompson1933likelihood} to efficiently conduct medical trials. Since then, it has been extensively studied in different variants, and we refer to these books~\cite{bubeck2012regret,slivkins2019introduction,lattimore2020bandit} for a comprehensive survey. We focus on the classic stochastic MAB setting where each arm has a fixed but unknown reward distribution, as well as the stochastic linear bandits problem where each arm is represented as a $\Dimension$-dimensional vector and its expected reward is a linear function of its vector representation. 
In both stochastic MAB and stochastic linear bandits, some of the algorithms we designed leverage the idea of optimism in the face of uncertainty behind the UCB algorithm~\cite{lai1985asymptotically}, while other algorithms leverage the idea of posterior sampling behind the TS~\cite{thompson1933likelihood} algorithm. The theoretical results of the proposed fair UCB and fair linear UCB algorithms borrow some ideas from several prior finite-time analyses of the conventional UCB and linear UCB algorithms~\cite{auer2002using, dani2008stochastic, abbasi2011improved}. We adopt the Bayesian regret framework~\cite{russo2014learning} for our theoretical analysis of the fair TS and the fair linear TS algorithms. Algorithmic fairness has been extensively studied in binary classification~\cite{HardtPNS16,chouldechova2017fair,KleinbergMR17,agarwal2018reductions}. These works propose statistical criteria to test algorithmic fairness that often operationalize definitions of fairness from political philosophy and sociology. Several prior works~\cite{blum2018preserving,blum2019advancing,bechavod2019equal} study how to achieve these fairness criteria in online learning. These algorithms achieve fairness to the incoming users. We, in contrast, achieve fairness to the arms. Joseph et al. \cite{NIPS2016_6355,joseph2016fair,joseph2018meritocratic} study fairness in bandits that ensure a better arm is always selected with no less probability than a worse arm. Different from our definition of fairness, their optimal policy is still the one that deterministically selects the arm with the largest expected reward while giving zero exposure to all the other arms. Another type of fairness definition in bandits is to ensure a minimum and/or maximum amount of exposure to each arm or group of arms~\cite{heidari2018preventing,wen2019fairness,schumann2019group,li2019combinatorial,celis2018algorithmic,claure2020multi,patil2020achieving,chen2020fair}.
However, they do not take the merit of the items into consideration. Gillen et al.~\cite{gillen2018online} propose to optimize individual fairness defined in \cite{dwork2012fairness} in the adversarial linear bandits setting, where the difference between the probabilities that any two arms are selected is bounded by the distance between their context vectors. They require additional feedback of fairness-constraint violations. We work in the stochastic bandits setting and we do not require any additional feedback beyond the reward. We also ensure that similar items obtain similar exposure, but we focus on similarity of merit, which corresponds to closeness in mean reward conditioned on context. The most relevant work may arguably be~\cite{liu2017calibrated}, which considers fairness in stochastic MAB problems where the reward distribution is Bernoulli. They aim to achieve calibrated fairness where each arm is selected with probability equal to that of its reward being the largest, while satisfying a smoothness constraint where arms with similar merit should receive similar exposure. They propose a TS-based algorithm that achieves fairness regret with a $T^{2/3}$ dependence on the time horizon $T$. Our formulation is more general in the sense that we consider arbitrary reward distributions and merit functions, with their formulation as a special case. What is more, our proposed algorithms achieve fairness regret with a $\sqrt{T}$ dependence on $T$. In addition, we further study the more general setting of stochastic linear bandits. Our definition of fairness has connections to the fair division problem~\cite{steihaus1948problem,brams1996fair,procaccia2013cake}, where the goal is to allocate a resource to different agents in a fair way. In our problem, we aim to allocate the users' attention among the items in a fair way.
Our definition of fairness ensures proportionality, one of the key desiderata in the fair division literature to ensure each agent receives its fair share of the resource. Recently, merit-based fairness of exposure has been studied for ranking problems in the statistical batch learning framework~\cite{singh2018fairness,singh2019policy}. We build upon this work, and extend merit-based fairness of exposure to the online-learning setting.
\section{Introduction} There have been consistent advancements in automatic text summarization due to its importance and relevance in modern natural language processing (NLP) applications. The task of single document summarization (SDS) remains a challenge for which various extractive and abstractive approaches have been researched. Further challenges arise in the task of multi-document summarization (MDS), the generation of a summary from a cluster of related documents as opposed to a single source document, due to the increased search space and the increased potential for redundancy. While various summarization models may be applicable to MDS document clusters as flattened documents, training data and pretrained models specific to MDS remain relatively limited, increasing the usefulness of few-shot and zero-shot approaches. In a few-shot application, the model or system being utilized has limited prior information regarding the target data. In this context, few-shot summarization involves the use of a system which is only partially pretrained on the corpus being summarized. In zero-shot applications, there is no prior knowledge of the target data and the system is not pretrained on the corpus at all. Promising advancements in extractive summarization have been made, such as the use of text-matching \cite{zhong-etal-2020-extractive}. There is also a growing interest in the use of reinforcement learning (RL) for text summarization, particularly for extractive summarization approaches. These approaches include the incorporation of simple embedding features in an RL summarization approach \cite{lee-lee-2017-automatic} and the use of RL for sentence-ranking \cite{narayan-etal-2018-ranking}.
Additionally, some approaches to extractive summarization involve the use of simple statistical methods to improve summaries, such as the various statistical models that Daiya and Singh \shortcite{daiya-singh-2018-using} combined with various semantic models for MDS. Another of these statistical methods being utilized for summarization is the maximal marginal relevance (MMR) algorithm, which Mao et al. \shortcite{mao-etal-2020-multi} incorporate into a reinforcement learning approach for MDS. While extractive approaches remain effective in producing accurate, low-cost summaries from the source documents themselves, abstractive methods allow for more comprehensive summaries using novel terms from outside the source documents. Recent advancements in both extractive and abstractive text summarization methods have involved the development and use of innovative language models based on various architectures. While neural networks remain effective summarization model architectures with potential for advancements \cite{see-etal-2017-get,khatri2018abstractive,cohan-etal-2018-discourse}, there is also a substantial increase in summarization approaches using models based on the transformer architecture proposed by Vaswani et al. \shortcite{vaswani2017attention}. Recent state-of-the-art models based on the transformer architecture include BERT \cite{devlin-etal-2019-bert}, RoBERTa \cite{liu2019roberta}, BART \cite{lewis-etal-2020-bart}, UniLM \cite{dong2019unified}, GPT-2 \cite{Radford2019LanguageMA}, PEGASUS \cite{zhang2020pegasus}, LED \cite{beltagy2020longformer}, ALBERT \cite{lan2020albert}, T5 \cite{raffel2020exploring}, ProphetNet \cite{qi-etal-2020-prophetnet}, XLNet \cite{yang2020xlnet}, and TED \cite{yang-etal-2020-ted}. State-of-the-art summarization models, including those which are not pretrained on multi-document datasets, can produce promising MDS results. However, the potential for oversized or redundant input data remains.
It seems possible that if a summarization approach could utilize the diversity of outputs produced by pretrained models while minimizing their redundancy, the combined outputs could represent an improvement in the task of MDS, particularly in few-shot and zero-shot applications. To address the models' limitations with respect to few-shot and zero-shot MDS, we propose an abstractive-extractive MDS approach which combines state-of-the-art model outputs in a manner which improves state-of-the-art MDS performance. To achieve this combination of outputs, we employ the MMR algorithm with a strong bias towards MMR query relevance among model-generated outputs. We test this approach in both a few-shot and zero-shot context. \section{Method} Our framework makes use of the output of multiple pretrained models. Some of these models are pretrained on the MDS dataset being summarized, while some of the models are pretrained on a SDS dataset. Training all models on the MDS dataset would be expensive and unrelated to the goals of our approach, which we intend to prove effective without the need for optimal pretraining conditions. For any model which is trained on the MDS dataset being summarized, the source document cluster is used as a single-document input for the model. This is the input method for which these models were pretrained. For every model including those trained on the MDS dataset, each document cluster is split into single documents, for which a summary is produced. Preliminary research indicates that models which are not pretrained on the given MDS dataset produce more accurate summaries on the single documents individually than on the combined document cluster. \subsection{MMR} MMR is a measure of relevance which is dependent on the amount of new information which is provided by each input document \cite{101145/290941291025}. 
The formula for MMR is as follows: \begin{equation} \resizebox{8cm}{!} { $ Arg \; \max_{D_{i \notin S}} [\lambda (Sim1(D_{i},Q)) - (1 - \lambda) \; (\max_{D_{j \in S}} Sim2(D_{i},D_{j}))] $ } \end{equation} In the MMR formula, the lambda constant $\lambda$ is a number between 0 and 1 which determines the degree to which the calculation prioritizes relevance or diversity. \textit{S} is the set of documents which have already been selected, \textit{$D_i$} is the given candidate document or sentence which is not selected, \textit{$D_j$} is the given previously-selected document to which the candidate document's similarity is compared, and \textit{Q} is the query document to which the relevance of each candidate document is computed. Given a desired number of documents to select, the MMR calculation iterates through the unselected documents and selects the desired number of documents that are the most relevant or the most diverse, depending on the lambda constant. \textit{Sim1} and \textit{Sim2} are similarity measurements between documents. However, given the $(1 - \lambda)$ factor preceding \textit{Sim2}, the right-hand side of the calculation effectively becomes a maximization of diversity rather than similarity. A higher lambda constant increases relevance to the query, while a lower lambda constant increases diversity among the selected documents. If only one document is passed through the algorithm, only the relevance portion of the algorithm is used, as is also the case with the first document selection when running the MMR algorithm. We intentionally choose a sentence from our best-performing model as the first selected document. \subsection{Approach Overview} A visual representation of our current base approach can be seen in Figure ~\ref{fig:approach}. The \textit{k} constant is the number of documents we take from the cluster for SDS generation. 
For datasets with only a few documents per cluster such as Multi-News, \textit{k} is usually all documents in the cluster. For datasets which contain 100 or more documents per cluster such as WCEP, \textit{k} is usually smaller than the number of documents in the cluster. \textit{m} is the desired output length of the final summary in sentences, ultimately defined as \textit{n}, the number of sentences in the model-generated summary with the highest MMR score, plus \textit{l}, the optimal number of additional sentences to have. \textit{l} can have a range of values, but we found it optimal to set \textit{l} to \textit{n} * \textit{p}, with \textit{p} being the percentage of final output sentences to be extracted using MMR in the \textit{m} * \textit{p} portion of the approach. The available sentences from each model's output are reduced by subtracting the maximum number of MMR-removed sentences \textit{r} from the given model output's number of sentences \textit{d}. The MMR reduction is further explained in Section~\ref{mmrreduction}. For example, this approach gives the option to specify a final output of length \textit{n + 2} and a \textit{p} value of 0.9, which would reconstitute 90\% of the final output using MMR, with the remainder of the \textit{n + 2} output being composed of unchanged output from the best-scoring model. For optimization in our case, we set \textit{l} to \textit{max(1, n*p)}, which means the entire best model output of \textit{n} length is unchanged, but these sentences are concatenated with at least 1 sentence which is extracted using MMR, and as many as \textit{n*p} sentences. Our use of a max expression ensures that the final output is never a simple repetition of the best-scoring model's output, as the intention of our research is to explore the improvement of model output using MMR.
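The MMR selection that underlies these steps can be sketched as follows. This illustration substitutes a plain bag-of-words cosine similarity for the TF-IDF and Doc2Vec measures used in our experiments, and the lambda value and example sentences are illustrative:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two texts over bag-of-words counts.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_select(candidates, query, k, lam=0.7):
    # Iteratively pick k sentences maximizing
    # lam * Sim1(candidate, query) - (1 - lam) * max Sim2(candidate, selected).
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda s: lam * cosine(s, query)
                   - (1 - lam) * max((cosine(s, t) for t in selected), default=0.0))
        selected.append(best)
        pool.remove(best)
    return selected

sentences = ["the storm hit the coast", "heavy rain flooded the coast",
             "a new phone was released", "officials reported storm damage"]
print(mmr_select(sentences, query="storm coast damage", k=2))
```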
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{diagramrevised.pdf} \caption{Overview of our MMR-based approach.} \label{fig:approach} \end{figure} After each model produces its SDS summary, as well as its MDS summary if applicable, each model's summary is split into sentences. By handling each model's output at the level of whole sentences, we preserve the readability and coherence of the resulting summary without the need for further language processing. If the outputs were instead processed as individual words or phrases, the rearranged words would lose coherence unless additional parameters were added to this approach, which is beyond the scope of this research. The SDS outputs for the given model are then concatenated into a single output from the model. However, this step of the process requires a limit to be placed on the number of summaries combined into the final SDS output in the case of a large imbalance between cluster size and document length in the dataset. For example, if the dataset has 100 documents per cluster and 1 sentence per document, then the concatenation of all SDS summaries would clearly result in an output too long to be comparable to the expected gold summary. Therefore, the number of SDS summaries returned per model for our approach is at maximum the average number of sentences per document. To extract the best given number of SDS outputs from all of the model's SDS outputs, we use the MMR algorithm with a focus on relevance. \subsection{MMR Reduction} \label{mmrreduction} In order to reduce the chance of corrupt or irrelevant information being included in our final MMR calculation while increasing the relevance, we also make use of MMR to reduce the sentences of each model's output by a certain percentage if the model is not pretrained on the given dataset.
Early experimentation revealed that this additional usage of MMR improves the results when there are a large number of sentences to select from, potentially revealing a minor shortcoming in the use of LDA topic word queries. However, this reduction is only performed in the case of datasets that produce numerous sentences per summary. When there are sufficient sentences per summary, each model's summary is reduced by MMR to a size of max(1, $d - r$), where \textit{d} is the size of the summary in sentences and \textit{r} is the number of sentences to reduce by. \textit{r} is defined by max(1, $d \cdot R$), with \textit{R} being the percentage to reduce by, which can be as small as 1\%. In our experiments, this reduction percentage is small enough that each summary is reduced by 1, although in datasets with more sentences a different optimal percentage might be found. Preliminary research indicated that this simple reduction of sentences improves overall summarization performance. It was also found that some model outputs may benefit from the use of certain similarity measurement methods, such as the use of TF-IDF vectorization with cosine similarity for one model and Doc2Vec vectorization with cosine similarity for another. This use of MMR for model output reduction essentially selects the $(d-1)$ most-relevant sentences from each model's output, where \textit{d} is the number of sentences originally generated by the given model. In the case of datasets that produce summaries of only one or two sentences, no MMR reduction is used. \subsection{MMR Query Document} For the query document in our implementation of the MMR formula, we use a small sequence of words generated via Latent Dirichlet Allocation (LDA) topic modeling. LDA is a statistical model which incorporates Bayesian calculations to infer the most probable topic words from the given corpus \cite{Blei03latentdirichlet}.
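As a concrete sketch of the reduction arithmetic from Section~\ref{mmrreduction}: remove $r = \max(1, d \cdot R)$ sentences and keep at least one. Rounding $d \cdot R$ up is an assumption here, since the text does not specify the rounding:

```python
import math

def mmr_reduction_sizes(d, R=0.01):
    # d: sentences in a model's output; R: reduction percentage.
    # Remove r = max(1, ceil(d * R)) sentences, but always keep at least one.
    r = max(1, math.ceil(d * R))
    kept = max(1, d - r)
    return r, kept

for d in (1, 10, 40):
    print(d, mmr_reduction_sizes(d))
```

With the 1\% reduction used in our experiments, every summary of realistic length loses exactly one sentence, matching the behavior described above.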
We also use a low lambda constant in our MMR calculations to prioritize diversity among the input documents over relevance to the query document. \subsection{Final Output} After applying this MMR calculation to reduce the model output if applicable, we combine all summaries into a single sequence. We then select the \textit{m} best sentences from this single sequence using our MMR calculation, where \textit{m} is the desired final number of sentences in our approach's output and the best sentences are those which best satisfy the diversity and relevance requirements set by the lambda constant. This new sequence of the \textit{m} best sentences is our final summary. We expect our MMR-based, diversity-focused combination approach to increase the final summary's similarity to human output without sacrificing the coherence and readability of the output already generated by the state-of-the-art models. \section{Experiments} We conduct two experiments to test our proposed approach: 1) The few-shot Multi-News experiment, for which one model was pretrained on the dataset, and 2) The zero-shot WCEP experiment, for which no models were pretrained on the dataset. These two experiments demonstrate the performance of our approach in few-shot and zero-shot applications, respectively. Additionally, the WCEP experiment demonstrates how our approach handles larger and potentially-unclean data. Our implementations for both experiments use the PyTorch numerical framework \cite{NEURIPS20199015}. We ran both of our experiments on Google's Colaboratory Pro platform using their GPUs \footnote{https://colab.research.google.com}. The GPUs available on the Colaboratory platform include Nvidia K80s, T4s, P4s, and P100s, although the exact GPU being used cannot be selected by the end user \cite{ColabFAQ}.
In each experiment, we run an optimization library for the parameters of our approach, including the lambda constant determining relevance versus diversity, the percentage of the final output to be MMR-generated with a minimum of one sentence, the number of LDA topics to generate, and the number of words to generate per LDA topic. The LDA parameters determine the size and diversity of the query document to be used for the MMR calculation, with the number of topics primarily determining the diversity of information covered by the query document, and the number of words per topic allowing the topic phrases to be expounded upon or reworded via a longer string of words. For both experiments, we compare our final MMR-generated summaries, as well as the intermediary model-generated summaries, with the human-generated summaries from the given dataset using ROUGE metrics \cite{lin-2004-rouge}. This set of metrics compares the co-occurrence of text sequences of various lengths to produce the precision and recall of the generated summaries, as well as the combined F-measure score. We use these metrics because they are sufficient approximations for summarization performance, particularly in experiments such as ours in which more expensive evaluation methods such as human evaluation are not available. \subsection{Multi-News Experiment} The Multi-News Experiment is a few-shot application, meaning some of the models have been pretrained on the dataset in question, and the system has some familiarity with the information to be processed. The purpose of this experiment is to determine the utility of this approach when a model is available which is pretrained on the dataset in question. \subsection{Multi-News Dataset} For our corpus in this experiment, we use the Multi-News dataset, which consists of news article document clusters paired with professionally-written summaries, all of which are in the English language \cite{fabbri-etal-2019-multi}.
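As a concrete illustration of the ROUGE comparison described above, a simplified ROUGE-1 computation over unigrams is shown below (a minimal sketch, not the official ROUGE implementation; stemming and stopword handling are omitted):

```python
from collections import Counter

def rouge_1(candidate, reference):
    # ROUGE-1: unigram overlap between a generated and a reference summary.
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum(min(c[w], r[w]) for w in c)
    precision = overlap / max(1, sum(c.values()))
    recall = overlap / max(1, sum(r.values()))
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = rouge_1("the storm damaged the coast", "a storm hit the coast")
print(round(p, 3), round(r, 3), round(f, 3))  # 0.6 0.6 0.6
```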
There are 2 to 10 documents in each document cluster in this dataset, with each cluster containing an average of 82.73 sentences \cite{fabbri-etal-2019-multi}. The 5622 multi-document clusters from the testing subset of this dataset are split for single-document pretrained models using the dataset's special separation token. \subsection{Models Used for Multi-News} For the models used to generate our summaries for combination, we employ three extractive models and five abstractive models. One of the extractive models is the matching model used by Zhong et al. \cite{zhong-etal-2020-extractive} in their implementation of summarization as semantic text-matching, known as MatchSum. This extractive model is pretrained on the Multi-News dataset, and produces state-of-the-art results. The other two extractive models were the XLNet\textsubscript{base} model \cite{yang2020xlnet} and the GPT-2\textsubscript{base} model \cite{Radford2019LanguageMA}, for which we use the model versions available through the Bert Extractive Summarizer library \footnote{https://pypi.org/project/bert-extractive-summarizer/}. This library was developed for an extractive summarization approach for lectures \cite{miller2019leveraging}. The models available through this library are simply placed within the architecture with no need for finetuning \cite{miller2019leveraging}. Multiple models are available through this library, although they were not included in the library's seminal research paper. The five abstractive models we use for summarization are: the PEGASUS\textsubscript{multi-news} model \cite{zhang2020pegasus}, the BART\textsubscript{large-cnn} model \cite{lewis-etal-2020-bart}, the T5\textsubscript{large} model \cite{raffel2020exploring}, the ProphetNet\textsubscript{large-cnndm} model \cite{qi-etal-2020-prophetnet}, and the LED model \cite{beltagy2020longformer}. 
For simple implementation of these models, we employ the HuggingFace Transformers library where possible \cite{wolf2020huggingfaces}. For models which are not trained on the Multi-News dataset, we either use the large base pretrained model as in the case of T5, the base pretrained model as in the case of GPT-2, XLNet and LED, or the models pretrained on the CNN-DM dataset \cite{hermann2015teaching,nallapati-etal-2016-abstractive-alt} as in the case of BART and ProphetNet. In accordance with our proposed approach, due to the sufficient number of sentences in the Multi-News dataset, the outputs of the models which are not pretrained on the Multi-News dataset are reduced to the maximum of 1 and the model output length minus the MMR-reduction constant. \subsection{Multi-News Experiment Parameters} We considered the use of two similarity measures for our Multi-News experiment: our TF-IDF similarity measure, which combines cosine similarity with a TF-IDF vectorizer, and our Doc2Vec similarity measure, which combines cosine similarity with a Doc2Vec vectorizer. More specifically, we use the Distributed Bag-of-Words (DBOW) model implementation within Gensim’s Doc2Vec architecture \cite{le2014distributed}, which uses predictions from randomly-sampled words in the document. We did not alter the seed of the vectorizer's random sampling. Because we did not need to use single documents from the clusters, and because word mover's distance (WMD) was found to be inferior for the tasks of both MMR reduction and the final MMR combination, we did not use the WMD similarity measure in this experiment. Preliminary testing revealed that the vectorization method significantly affects model output selection, particularly with LED model outputs. For the final MMR combination method, we found the Doc2Vec similarity measure to be most effective. This method was also found to be most effective for the MMR-reduction of all model outputs with the exception of LED, for which we used the TF-IDF similarity measure.
Our MMR parameters were consistent across all usages, and were optimized using the black box optimization library Scikit-Optimize \footnote{https://scikit-optimize.github.io/stable/}. The optimal lambda constants, MMR percentages, LDA topic amounts, and LDA word amounts can be seen in Table~\ref{tab:mnews-optimized-parameters}. \begin{table}[h] \centering \begin{tabular*}{\linewidth}{l|c|c} \hline \textbf{Parameter} & \textbf{TF-IDF value} & \textbf{Doc2Vec value}\\ \hline lambda constant & 0.975 & 0.808\\ MMR percentage & 0.107 & 0.106 \\ LDA topics & 3 & 5\\ Words per topic & 7 & 6\\ \hline \end{tabular*} \caption{Optimized MMR combination and LDA parameters for Multi-News.} \label{tab:mnews-optimized-parameters} \end{table} \subsection{WCEP Experiment} The WCEP Experiment is a zero-shot application, meaning none of the models are pretrained on the given dataset, and the system as a whole has no familiarity with the data to be processed. The purpose of this experiment is to determine the usefulness of this approach when no model pretrained on the given dataset is available. \subsection{WCEP Dataset} For this experiment, we use the Wikipedia Current Events Portal (WCEP) dataset, which consists of short, human-written summaries of news events, the articles for which are all extracted from the Wikipedia Current Events Portal \cite{gholipour-ghalandari-etal-2020-large}. Each document cluster contains a large quantity of automatically-extracted articles. There are two primary versions of this dataset: the full version (WCEP-total) and the truncated version (WCEP-100). The full version consists of 2.39 million articles, while the truncated version consists of 650,000 articles. Also, each article cluster is limited to 100 articles in the truncated version, while the full version can contain as many as 8411 articles per cluster. 
The test split of the WCEP-100 dataset consists of 1,022 article clusters, of which we perform our experiment on 500 due to cost limitations. We use the WCEP-100 dataset version available through the WCEP dataset authors' GitHub repository \footnote{https://github.com/complementizer/wcep-mds-dataset}. \subsection{Special Considerations} The large number of documents per cluster introduced two major differences relative to our Multi-News experiment. First, it would be prohibitively expensive for each model to generate SDS output for each of the 100 documents in each cluster. Therefore, the k constant from our proposed approach is more relevant in this experiment, as the dataset clusters contain more than the desired number of documents. Only the 10 most relevant documents from each cluster are summarized, and these are selected using MMR. Our selection method also reduces the amount of irrelevant, overly brief, unintelligible, or otherwise bad data selected from the large document clusters of the dataset. Second, each model generates SDS outputs only, and there are more documents per cluster than sentences per document even after our MMR selection. Therefore, in accordance with our method previously described, the summary selection method rather than the concatenation method is used to construct the best model output to append our MMR-extracted sentences to. This use of summary selection means that each model's output is represented by 10 smaller summaries rather than 1 larger summary, which would be incomparable to the much smaller expected summaries of the dataset. The best summary of the best model is selected using MMR with a focus on relevance. Additionally, we faced the problem of which of the SDS outputs of the best model to choose to append our MMR-selected output to. We wanted to select the best summary from the best-performing model without a priori evaluation, and with a suitable final number of sentences for comparison with the dataset's reference summaries. 
To select the best summary, we use the cosine similarity measure with the Doc2Vec vectorizer, which is equivalent to the MMR algorithm with an output size of 1 sentence. We considered appending the MMR output to each of the best model's single-document summaries prior to selecting the best one. While this method did perform better than the baseline models, it was not optimal, and it increased the likelihood of an unreadable final summary being generated. We therefore chose to append the MMR output only after the best summary to append to was selected using our Doc2Vec similarity measure. Also, because the documents in the WCEP dataset often contain only one or two sentences, and because the summaries generated from it are similarly brief, the use of MMR to reduce each model's output would be redundant. Therefore, no MMR reduction is performed in this experiment, in accordance with the procedures of our approach. \subsection{Models Used for WCEP} For this experiment, we used 7 abstractive models and 3 extractive models. The model selection was slightly different in this experiment due to the different dataset. The MatchSum extractive model used in our Multi-News experiment was not available for this dataset, so we instead included both extractive and abstractive implementations of the XLNet, BART, and GPT-2 models. Additionally, we included the abstractive implementations of the ProphetNet, GPT-2, LED, T5, XLNet, PEGASUS, and BART models. As was the case with our Multi-News experiment, all of the abstractive implementations used were taken from the HuggingFace platform \footnote{https://huggingface.co/} created by Wolf et al. \cite{wolf2020huggingfaces}. As there was no MatchSum model in this experiment, all of the extractive implementations we used were taken from the Bert Extractive Summarizer library \footnote{https://pypi.org/project/bert-extractive-summarizer/} developed by Miller \cite{miller2019leveraging}. 
We ensured that the model versions used in the extractive implementations were the same as the model versions used in the abstractive implementations. \subsection{WCEP Experiment Parameters} For this experiment, we consider all three of our similarity measures: TF-IDF, Doc2Vec, and WMD. As in the Multi-News experiment, we used Gensim's DBOW Doc2Vec vectorizer with the default random sampling seed. Given the added need for similarity-based selection in this experiment, with its larger dataset and smaller summaries, we consider the three uses of similarity in this implementation: Sim\textsubscript{0}, Sim\textsubscript{1}, and Sim\textsubscript{2}. Sim\textsubscript{0} is the similarity measure used to select the best summary from the best model to append our MMR output to. Sim\textsubscript{1} and Sim\textsubscript{2} are the similarity measures used in the query relevance and diversity calculations of the MMR algorithm, respectively. Note that the Sim\textsubscript{2} similarity measure is rarely used in this implementation because of the small number of returned sentences as well as the high lambda constant and relevance prioritization. Our WCEP experiment parameters were optimized using the black box optimization library Optuna \footnote{https://optuna.org/}. The optimized similarity measures we used, as well as the optimal MMR and LDA parameters, are included in Table~\ref{tab:wcep-optimized-parameters}. 
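As a concrete illustration, the Sim\textsubscript{0} step reduces to a single nearest-neighbour query over the candidate summaries. The sketch below uses a plain bag-of-words cosine as a stand-in for the Doc2Vec vectors (an assumption for illustration; function names are ours, not library API):

```python
import math
from collections import Counter

def bow_cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_best_summary(candidate_summaries, query_doc):
    """Sim0: return the candidate most similar to the query document.

    Equivalent to running MMR with an output size of one, since the
    diversity term never applies to the first selection."""
    qvec = Counter(query_doc.lower().split())
    return max(candidate_summaries,
               key=lambda s: bow_cosine(Counter(s.lower().split()), qvec))
```

The MMR-selected sentences are then appended only to this single winning summary, rather than to every candidate.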
\begin{table}[h] \centering \begin{tabular*}{\linewidth}{l|c} \hline \textbf{Parameter} & \textbf{Value}\\ \hline lambda constant & 0.997 \\ MMR percentage & 0.298 \\ LDA topics & 5 \\ LDA words per topic & 2 \\ Sim\textsubscript{0} & Doc2Vec \\ Sim\textsubscript{1} & WMD \\ Sim\textsubscript{2} & TF-IDF \\ \hline \end{tabular*} \caption{Optimized WCEP MMR combination and LDA parameters.} \label{tab:wcep-optimized-parameters} \end{table} \section{Results} The resulting F-measure ROUGE scores of both experiments for our approach can be seen in Table~\ref{tab:results}. Because the Multi-News Experiment is a few-shot application rather than a zero-shot application, a comparison with the Multi-News results of an additional pretrained model, MatchSum, is included. \begin{table}[ht] \centering \begin{tabular*}{\linewidth}{l|c|c|c} \hline \multicolumn{4}{c}{\textbf{Multi-News}} \\ \hline \textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL}\\ \hline ProphetNet\textsubscript{large-cnndm} & 32.01 & 10.43 & 16.46 \\ GPT-2\textsubscript{base} & 32.56 & 9.87 & 16.34 \\ XLNet\textsubscript{base} & 32.76 & 9.97 & 16.38 \\ T5\textsubscript{large} & 32.95 & 10.27 & 16.62 \\ LED\textsubscript{base} & 33.27 & 10.83 & 16.84 \\ BART\textsubscript{large-cnn} & 36.64 & 12.09 & 18.19 \\ MatchSum\textsubscript{BERT-base, multi-news} & 43.98 & 15.91 & 20.94 \\ PEGASUS\textsubscript{multi-news}(SDS) & 40.25 & 16.26 & 19.43\\ PEGASUS\textsubscript{multi-news}(MDS) & 45.79 & \textbf{18.48} & \textbf{24.27}\\ MMR-combination (ours) & \textbf{46.23} & 18.30 & 21.25 \\ \hline \multicolumn{4}{c}{\textbf{WCEP}} \\ \hline \textbf{Model} & \textbf{R1} & \textbf{R2} & \textbf{RL}\\ \hline XLNet\textsubscript{base} & 12.89 & 2.70 & 9.48 \\ ProphetNet\textsubscript{large-cnndm} & 14.19 & 2.36 & 10.56 \\ GPT-2\textsubscript{small} & 15.89 & 3.08 & 11.18 \\ T5\textsubscript{large} & 24.14 & 5.98 & 16.95 \\ LED\textsubscript{base} & 24.38 & 8.55 & 16.72 \\ BART\textsubscript{large-cnn}(EXT) & 27.17 & 
9.39 & 19.39 \\ PEGASUS\textsubscript{xsum} & 27.33 & 8.38 & 19.19 \\ GPT-2\textsubscript{small}(EXT) & 27.33 & 9.42 & 19.52 \\ XLNet\textsubscript{base}(EXT) & 27.41 & \textbf{9.49} & 19.48 \\ BART\textsubscript{large-cnn} & 28.12 & 8.84 & 19.71 \\ MMR-combination (ours) & \textbf{30.74} & 9.18 & \textbf{21.57}\\ \hline \end{tabular*} \caption{Resulting F1-scores of ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) metrics from test sets of Multi-News and WCEP datasets. (SDS) denotes that the model was used to summarize the individual documents from the document cluster, and (MDS) denotes that the model was used to summarize the complete document cluster as a single flattened document.} \label{tab:results} \end{table} As shown in Table~\ref{tab:results}, we observe that the ROUGE-1 score of our MMR-based combination approach is higher than that of every model used in both experiments. However, in the Multi-News experiment, the ROUGE-2 and ROUGE-L scores of our approach are lower than those of the MDS PEGASUS model implementation. While lower than those of PEGASUS, the ROUGE-2 and ROUGE-L scores of our MMR-based approach remain higher than those of all other models. This relative discrepancy in ROUGE scores seems to indicate that, while our approach does not focus on the properties of multiple-word strings within the summaries, it nevertheless preserves state-of-the-art performance in n-gram inclusion. The increased ROUGE-1 score suggests at least a limited direct correlation between ROUGE scores and the single-term relevance of summary terms to the LDA query document. Furthermore, these scores represent the use of only two models which are pretrained on the correct dataset, the combination of which was determined via preliminary experimentation to decrease all ROUGE scores relative to PEGASUS. 
We can therefore infer that the increase in ROUGE-1 score is due to the inclusion of models which are not pretrained on the Multi-News dataset. If these models were pretrained on the Multi-News dataset, it is possible that the ROUGE-2 and ROUGE-L scores would not decrease as much relative to the leading pretrained model. We see an additional increase in the ROUGE-L score in the zero-shot WCEP experiment, in which no models were pretrained on the given dataset. We also observe that the relative improvement of our approach over state-of-the-art methods is larger in the zero-shot WCEP experiment than in the few-shot Multi-News experiment, indicating that our approach is particularly effective in fewer-shot and zero-shot MDS applications. Due to our method of extracting whole sentences rather than words and phrases, readability and coherence are largely preserved. These results indicate that our approach is useful in fewer-shot and zero-shot applications, such as applications in which the available data is too scarce or the cost of finetuning is too high. Furthermore, these results suggest the possibility that our approach grows more useful as the availability of finetuned models decreases, although this possibility would need to be tested more thoroughly. \section{Conclusion} We present a novel MMR-based framework that improves upon the performance of state-of-the-art pretrained models for the task of MDS, particularly in fewer-shot and zero-shot applications where the cost of finetuning is too high or the available training data is too scarce. This framework increases single-term similarity to human-generated summaries by increasing the single-term relevance of the summaries generated by pretrained models, and further combining the summaries using MMR. These MMR-generated summaries maintain state-of-the-art quality and readability in addition to the improved single-term similarity results. 
Our approach serves to further demonstrate the potential of MMR as an effective tool for the task of automatic text summarization. We also demonstrate the potential for diversity among summary words to improve aspects of summary quality. It is unclear whether an MMR-based approach that prioritizes n-gram relevance in addition to single-term relevance would produce superior results, and this uncertainty could be the basis for future research. \section{Bibliographical References}\label{reference}
\section{Introduction} \label{s_intro} \addtocounter{footnote}{4} Star formation in other galaxies is typically identified by looking at tracers of young stellar populations, including photoionizing stars, ultraviolet-luminous stars, and supernovae. The most commonly-used star formation tracers are ultraviolet continuum emission; H$\alpha$ (6563~\AA) and other optical and near-infrared recombination lines; mid- and far-infrared continuum emission; and radio continuum emission. However, each of these tracers has disadvantages when used to measure star formation rates (SFRs). Ultraviolet continuum and optical recombination line emission directly trace the young stellar populations, but dust obscuration typically affects the SFRs from these tracers. Near-infrared recombination line emission is less affected by dust obscuration, but it is still a concern in very dusty starburst galaxies. Dust continuum emission in the infrared is unaffected by dust obscuration except in extreme cases, but since this emission is actually a tracer of bolometric stellar luminosity and not just the younger stellar population, it may yield an overestimate of the SFR if many evolved stars are present. Radio continuum emission traces a combination of free-free continuum emission from photoionized gas and synchrotron emission from supernova remnants, so proper spectral decomposition is needed to accurately convert radio emission to SFR. Additionally, the cosmic rays that produce synchrotron emission will travel significant distances through the ISM, making radio emission appear diffused relative to star formation on scales of $\sim$100~pc \citep{murphy06a, murphy06b}. Higher-order recombination line emission at millimetre and submillimetre wavelengths, which is produced by the same photoionized gas that produces H$\alpha$ and other optical and near-infrared recombination lines, can also be used to measure SFRs. 
Unlike ultraviolet, optical, and near-infrared star formation tracers, these millimetre and submillimetre recombination lines are not affected by dust extinction, but unlike infrared and radio synchrotron emission, the recombination lines directly trace the photoionizing stars. Recombination line emission can also be observed at centimetre and longer wavelengths, but the line emission at these longer wavelengths is generally affected by a combination of masing effects and opacity issues in the photoionized gas, while the millimetre and submillimetre lines are not \citep{gordon90}. The primary reason why these millimetre and submillimetre recombination lines are not used more frequently as star formation tracers is that the emission is very faint. Before the Atacama Large Millimeter/submillimeter Array (ALMA) became operational, millimetre and submillimetre recombination line emission had only been detected in three galaxies: M82 \citep{seaquist94, seaquist96}, NGC 253 \citep{puxley97}, and Arp 220 \citep{anantharamaiah00}. ALMA, however, has the sensitivity to potentially detect this emission in many more nearby galaxies, including many nearby luminous infrared galaxies and other starbursts \citep{scoville13}. Detections of recombination line emission from the first three cycles of ALMA observations include measurements of the H40$\alpha$ (99.02~GHz) emission from the centre of NGC~253 \citep{meier15, bendo15b} and H42$\alpha$ (85.69~GHz) emission from the centre of NGC~4945 \citep{bendo16} as well as a marginal detection of H26$\alpha$ (353.62~GHz) in Arp 220 \citep{scoville15}. The NGC 253 and 4945 analyses included comparisons of SFRs from ALMA recombination line and free-free emission to other star formation tracers in radio and infrared emission. The results illustrated some of the challenges in measuring star formation rates in other wavebands. 
SFRs from radio continuum emission calculated using one of the conversions given by \citet{condon92} or \citet{murphy11} yielded results that differed significantly from the ALMA data. SFRs from recombination line emission at centimetre or longer wavelengths were often much less accurate. For NGC~253 specifically, some of the SFRs from recombination line emission at these longer wavelengths were $\sim$3$\times$ lower than what was determined from the ALMA data, which indicated that the longer wavelength lines may be affected by gas opacity effects. The SFR from the mid-infrared data for the central starburst in NGC~4945 was $\sim$10$\times$ lower than the SFR from the H42$\alpha$ or free-free emission, but the SFR from the total infrared flux was consistent with the ALMA-based SFRs. This suggests that the centre of NGC~4945 is so dusty that even the mid-infrared light from the dust itself is heavily obscured. While the results for NGC 253 and 4945 have revealed some of the limitations of other star formation tracers, the analyses have mainly focused on radio or infrared data. The dust extinction is so high in the central starbursts in both galaxies that comparisons of millimetre recombination line emission to ultraviolet or H$\alpha$ emission are not worthwhile. Such comparisons need to be performed using a less dusty object. NGC 5253 is a nearby blue compact dwarf galaxy within the M83/Centaurus A Group \citep{karachentsev07} that hosts a starburst nucleus. Most published distances for the galaxy range from 3 to 4~Mpc; we use a distance of 3.15$\pm$0.20~Mpc \citep{freedman01}. Because the starburst is both very strong and relatively nearby, it has been intensely studied at multiple wavelengths, including H$\alpha$, infrared, and radio emission, making it an ideal object to use in a comparison of SFRs from millimetre recombination line data to SFRs from data at other wavelengths. 
Radio recombination line emission from NGC~5253 has been detected previously by \citet{mohan01} and \citet{rodriguezrico07} at lower frequencies, but they indicated that adjustments for masing and gas opacity would need to be taken into account to calculate SFR. As millimetre recombination line emission is unaffected by these issues, it can provide more accurate measurements of the SFR. We present here ALMA observations of the H30$\alpha$ line at a rest frequency of 231.90~GHz, with which we derive a SFR that directly traces the photoionizing stars while not being affected by dust attenuation. We compare the ALMA-based SFR to SFRs from other wavebands to understand their effectiveness, and we also examine the efficacy of SFRs based on combining H$\alpha$ emission (a tracer of unobscured light from star forming regions) with infrared emission (a tracer of light absorbed and re-radiated by dust). The ALMA observations and the SFRs derived from the line are presented in Section~\ref{s_alma}. SFRs from optical, infrared, and radio emission as well as composite SFRs based on multiple wavebands are derived and compared to the H30$\alpha$ values in Section~\ref{s_othersfr}. Section~\ref{s_conclu} provides a summary of the results. \section{ALMA data} \label{s_alma} \subsection{Observations and data processing} Observations were performed with both the main (12~m) array and the Morita (7~m) Array with baselines ranging from 8 to 1568~m. The observations with both arrays consist of two pointings centered on positions 13:39:55.9 -31:38:26 and 13:39:56.8 -31:38:32 in J2000 coordinates, which were used to map both emission from the compact central starburst and more diffuse, extended emission from gas to the southeast of the centre (see Miura et al. in preparation). 
Additional data are also available from ALMA total power observations, but since the H30$\alpha$ emission is from a very compact source, the total power data will not substantially improve the detection of the line emission. Moreover, inclusion of the total power data will complicate the data processing procedure, and these data currently cannot be used for continuum imaging. Hence, we did not include the total power data in our analysis. General information about each execution block used to construct the H30$\alpha$ image cube and continuum image is listed in Table~\ref{t_alma_obs}. Information about the spectral set-up for each spectral window in each execution block is given in Table~\ref{t_alma_spw}. \begin{table*} \centering \begin{minipage}{149mm} \caption{ALMA observation information.} \label{t_alma_obs} \begin{tabular}{@{}lcccccccc@{}} \hline Array & Unique & Observing & On-source & Usable & uv & Bandpass & Flux & Phase \\ & identifier & dates & observing & antennas & coverage & calibrator & calibrator & calibrator \\ & & & time (min) & & (m) & & & \\ \hline Morita & A002/X8440e0/X29c6 & 15 Jun 2014 & 30.7 & 9 & 8-48 & J1427-4206 & Ceres & J1342-2900 \\ Morita & A002/X9652ea/X5c3 & 10 Dec 2014 & 30.7 & 9 & 9-45 & J1337-1257 & Callisto & J1342-2900 \\ 12~m & A002/X966cea/X25de & 11 Dec 2014 & 8.5 & 37 & 13-336 & J1337-1257 & Callisto & J1342-2900 \\ 12~m & A002/Xa5df2c/X50ce & 18 Jul 2015 & 13.2 & 39 & 14-1512 & J1337-1257 & J1427-4206 & J1342-2900 \\ 12~m & A002/Xa5df2c/X52fa & 18 Jul 2015 & 15.8 & 39 & 14-1568 & J1337-1257 & Titan & J1342-2900 \\ \hline \end{tabular} \end{minipage} \end{table*} \begin{table*} \centering \begin{minipage}{94mm} \caption{ALMA spectral window settings.} \label{t_alma_spw} \begin{tabular}{@{}lcccc@{}} \hline Array & Unique & Frequency range$^a$ & Bandwidth & Number of \\ & identifier & (GHz) & (GHz) & channels \\ \hline Morita & A002/X8440e0/X29c6 & 229.216 - 231.208 & 1.992 & 2040 \\ & & 230.577 - 232.569 & 1.992 & 2040 \\ & 
& 243.594 - 245.586 & 1.992 & 2040 \\ Morita & A002/X9652ea/X5c3 & 229.247 - 231.239 & 1.992 & 2040 \\ & & 230.608 - 232.601 & 1.992 & 2040 \\ & & 243.627 - 245.619 & 1.992 & 2040 \\ 12~m & A002/X966cea/X25de & 229.306 - 231.181 & 1.875 & 1920 \\ & & 230.667 - 232.542 & 1.875 & 1920 \\ & & 243.686 - 245.561 & 1.875 & 1920 \\ 12~m & A002/Xa5df2c/X50ce & 229.270 - 231.145 & 1.875 & 1920 \\ & & 230.630 - 232.505 & 1.875 & 1920 \\ & & 243.647 - 245.522 & 1.875 & 1920 \\ 12~m & A002/Xa5df2c/X52fa & 229.269 - 231.144 & 1.875 & 1920 \\ & & 230.630 - 232.505 & 1.875 & 1920 \\ & & 243.647 - 245.522 & 1.875 & 1920 \\ \hline \end{tabular} $^a$ These values are the observed (sky) frequencies and not rest frequencies. \end{minipage} \end{table*} The visibility data for each execution block were calibrated separately using the Common Astronomy Software Applications ({\sc CASA}) version 4.7.0. To begin with, we applied amplitude corrections based on system temperature measurements to all data, and we applied phase corrections based on water vapour radiometer measurements to the 12~m data. For the data with baselines greater than 1000~m, we also applied antenna position corrections. Following this, we visually inspected the visibility data and flagged data with noisy or abnormal phase or amplitude values as well as data affected by the atmospheric line centered at 231.30~GHz. Next, we derived and applied calibrations for the bandpass, phase, and amplitude. The Butler-JPL-Horizons 2012 models were used to obtain the flux densities for Callisto and Titan. J1427-4206 is one of the 43 quasars routinely monitored for flux calibration purposes as described within the ALMA Technical Handbook\footnote{https://almascience.eso.org/documents-and-tools/cycle4/alma-technical-handbook} \citep{magnum16}. 
The {\sc getALMAFlux} task within the Analysis Utilities software package was used to calculate the flux density of J1427-4206 based on measurements in the ALMA Calibrator Source Catalogue\footnote{https://almascience.eso.org/alma-data/calibrator-catalogue}. The version of the Butler-JPL-Horizons 2012 models implemented in {\sc CASA} 4.7.0 is known to produce inaccurate results for Ceres, so the data for the bandpass calibrator J1427-4206 and the estimated flux density from the {\sc getALMAFlux} task were used to flux calibrate the 15 June 2014 observation. The uncertainty in band 6 flux calibration is specified in the ALMA Proposer's Guide\footnote{https://almascience.eso.org/documents-and-tools/cycle4/alma-proposers-guide} \citep{andreani16} as 10\%, but because the uncertainty in the estimated flux densities for J1427-4206 was 15\%, we used 15\% as the final calibration uncertainty. The H30$\alpha$ image was created after subtracting the continuum from the visibility data in the spectral window containing the line. The continuum was determined by fitting the visibility data at approximately 230.7-231.2~GHz and 231.7-232.5~GHz (in the Barycentric frame of reference) with a linear function; this frequency range avoids not only the H30$\alpha$ line but also the atmospheric absorption feature centered at 231.30~GHz. After this, we concatenated the continuum-subtracted data and then created a spectral cube using {\sc clean} within {\sc CASA}. We used Briggs weighting with a robust parameter of 0.5, which is the standard weighting used when producing ALMA images of compact sources like the H30$\alpha$ source in NGC~5253. After creating the image, we applied a primary beam correction. The spectral channels in the cube have a width of 10~MHz (equivalent to a velocity width of 12.9 km s$^{-1}$) and range from sky frequencies of 231.40 to 231.80 GHz in the Barycentric frame. The pixels have a size of 0.05~arcsec, and the imaged field was 100$\times$100~arcsec. 
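The continuum fit described above can be illustrated with a simplified sketch that fits and subtracts a linear baseline over line-free windows of a single real-valued spectrum. This is only a schematic stand-in: the actual fit was performed on the complex visibilities within {\sc CASA}, and the function names and window arguments here are illustrative.

```python
def fit_linear_continuum(freqs, amps, line_free_windows):
    """Least-squares fit of amp = slope * freq + intercept using only
    channels that fall inside the given line-free frequency windows."""
    pts = [(f, a) for f, a in zip(freqs, amps)
           if any(lo <= f <= hi for lo, hi in line_free_windows)]
    n = float(len(pts))
    sx = sum(f for f, _ in pts)
    sy = sum(a for _, a in pts)
    sxx = sum(f * f for f, _ in pts)
    sxy = sum(f * a for f, a in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def subtract_continuum(freqs, amps, line_free_windows):
    """Return the spectrum with the fitted linear continuum removed,
    leaving only the line emission (plus noise)."""
    slope, intercept = fit_linear_continuum(freqs, amps, line_free_windows)
    return [a - (slope * f + intercept) for f, a in zip(freqs, amps)]
```

Excluding the windows containing the H30$\alpha$ line and the atmospheric feature from the fit is what prevents the line itself from biasing the continuum estimate.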
The reconstructed beam has a size of 0.21$\times$0.19~arcsec. We also created a 231.6~GHz (observed frame) continuum image using the data from 230.7-231.2~GHz and 231.7-232.5~GHz. We could have used the data from the spectral windows covering the CO and CS lines. However, the potentially steep slope of the continuum at these frequencies (see the discussion in Section~\ref{s_images}), problems with observations on the shortest baselines in the spectral window covering the 243.6-245.6~GHz frequency range, and the relatively broad CO line emission made it difficult to create reliable continuum images using all spectral windows. Because we found evidence for extended continuum emission, we used natural weighting when creating the final image. The image dimensions are the same as for the H30$\alpha$ image cube. The beam size is 0.28$\times$0.25~arcsec. As a test of the flux calibration, we imaged J1427-4206 and J1337-1257 (another quasar monitored by ALMA for flux calibration purposes) using the same parameters that we used for creating the H30$\alpha$ image cube. The measured flux densities differ by $<$10\% from the {\sc getALMAFlux} estimates, which is below our assumed flux calibration uncertainty of 15\%. We also checked the astrometry of each of the two longest-baseline observations by imaging the astrometry check source (J1339-2620) using the same parameters as for the continuum image, fitting the peak with a Gaussian function, and comparing the position to the coordinates in the ALMA Calibrator Source Catalogue. The positions match to within one pixel, or 0.05~arcsec. Equations in the ALMA Technical Handbook \citep{magnum16} yield a smaller value, but we will use 0.05~arcsec as the astrometric uncertainty. 
\subsection{H30$\alpha$ and 231.6 GHz continuum images} \label{s_images} \begin{figure} \epsfig{file=bendogj_fig01.ps,width=8.5cm} \caption{Images of the central 8$\times$8~arcsec of NGC~5253 in 231.6~GHz (sky frequency) continuum emission (top) and H30$\alpha$ line emission (bottom). The contours in each image show the 2$\sigma$, 3$\sigma$, 5$\sigma$, 10$\sigma$, and 100$\sigma$ detection levels in each image, where $\sigma$ is 1.0 Jy arcsec$^{-2}$ for the 231.6~GHz image and 0.69 Jy km s$^{-1}$ arcsec$^{-2}$ for the H30$\alpha$ image. The red ovals at the bottom right of each panel show the FWHM of the beam (0.28$\times$0.25~arcsec for the 231.6~GHz image and 0.21$\times$0.19~arcsec for the H30$\alpha$ image).} \label{f_map} \end{figure} Figure~\ref{f_map} shows the 231.6~GHz (sky frequency) continuum and H30$\alpha$ spectral line images of the central 8$\times$8~arcsec of NGC~5253. The H30$\alpha$ image is the integral of the continuum-subtracted flux between sky frequencies of 231.55 and 231.65~GHz. Both sources show a very bright central peak at a right ascension of 13:39:55.965 and declination of -31:38:24.36 in J2000 coordinates. No significant H30$\alpha$ emission is observed outside of the central peak. The best-fitting Gaussian function to the H30$\alpha$ image has a FWHM of 0.27$\times$0.21~arcsec, indicating that the H30$\alpha$ source may have a deconvolved angular size of $\sim$0.15~arcsec. In the continuum image, a second unresolved source to the northeast of the centre is clearly detected at the 10$\sigma$ level. Several other compact sources are detected in continuum at the 3-5$\sigma$ level, including a few sources outside the region shown in Figure~\ref{f_map}, and some diffuse emission near the 3$\sigma$ level is seen around the central peak, most notably immediately to the south of the centre. 
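The quoted deconvolved size follows from the standard assumption that, for Gaussian sources and beams, the observed FWHM is the quadrature sum of the intrinsic and beam FWHMs. A minimal sketch of this estimate (axis alignment between source and beam is assumed for illustration):

```python
import math

def deconvolved_fwhm(obs_major, obs_minor, beam_major, beam_minor):
    """Deconvolved Gaussian FWHM per axis, assuming the source and beam
    axes are aligned so that widths subtract in quadrature; unresolved
    axes are clipped to zero."""
    major = math.sqrt(max(obs_major ** 2 - beam_major ** 2, 0.0))
    minor = math.sqrt(max(obs_minor ** 2 - beam_minor ** 2, 0.0))
    return major, minor

# The fitted FWHM of 0.27 x 0.21 arcsec with the 0.21 x 0.19 arcsec beam
# gives roughly 0.17 x 0.09 arcsec.
```

The resulting major axis of roughly 0.17~arcsec is consistent with the $\sim$0.15~arcsec scale quoted above; the minor axis is only marginally resolved.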
\begin{figure} \epsfig{file=bendogj_fig02.ps,width=8.5cm} \caption{ A map showing the positions of several sources detected in different bands in the centre of NGC~5253. The H30$\alpha$ source is identified in green, and the ellipse represents the FWHM of a Gaussian function fit to the data, which is 0.27$\times$0.21~arcsec. The blue circles identify the locations of the two brightest H$\alpha$ and Pa$\alpha$ sources imaged by \citet{calzetti15}, and the diameters of the circles represent the 0.20~arcsec astrometric uncertainty in the data. The deconvolved size (0.099 $\times$ 0.039~arcsec) of the 43~GHz core imaged by \citet{turner04} is shown in red; this source also corresponds to the central source imaged at 15 and 23~GHz by \citet{turner00}. A secondary 23~GHz source identified by \citet{turner04} is shown in magenta, and the ellipse diameter matches the 0.22 $\times$ 0.08~arcsec FWHM of the beam.} \label{f_map_beam} \end{figure} Figure~\ref{f_map_beam} shows the location of the H30$\alpha$ source (which coincides with the location of the peak in 231.6~GHz emission) relative to other sources detected in observations with comparable angular resolutions. The H30$\alpha$ source lies within 0.04~arcsec of the peak in 43~GHz emission measured by \citet{turner04}, which has the same position as the brightest 15 and 23~GHz sources detected in the high angular resolution radio observations presented by \citet{turner00}. This offset is smaller than the astrometric uncertainty of 0.05~arcsec that we are using for the ALMA data. \citet{turner00} also reported the detection of a second 23 GHz source at a location 0.21~arcsec east of the H30$\alpha$ source, although this source is not detected at 15~GHz, and it is also either a very weak detection or a non-detection in the 43~GHz images shown by \citet{turner04}. The difference between the peak fluxes of the primary and secondary 23~GHz sources is $\sim$6$\times$. 
If this secondary source has a corresponding H30$\alpha$ source, it would be difficult to separate its emission from that of the brighter H30$\alpha$ source, given the small angular separation between the sources compared to the FWHM of the ALMA beam and the much lower flux expected from the secondary source. Additional millimetre or radio observations with better angular resolution and better sensitivity would be needed to confirm that the secondary source is present. The two brightest H$\alpha$ and Pa$\alpha$ (1.876~$\mu$m) sources identified by \citet{calzetti15} using the {\it Hubble} Space Telescope straddle the H30$\alpha$ source. These sources, which were labelled clusters 5 and 11, have counterparts among the sources identified by \citet{calzetti97}, \citet{alonsoherrero04}, \citet{harris04}, and \citet{degrijs13}. \citet{calzetti15} suggested that the brightest radio continuum source actually corresponds to cluster 11 (as originally proposed by \citealt{alonsoherrero04}), that the secondary 23~GHz source detected by \citet{turner00} corresponds to cluster 5, and that the offset between the radio and optical/near-infrared data is related to systematic effects in the astrometry systems used to create the images. It is therefore possible that the H30$\alpha$ source detected in the ALMA data corresponds to cluster 11. However, clusters 5 and 11 are separated by a distance of 0.46~arcsec, so they should have been resolved into two separate components in the H30$\alpha$ map. The Pa$\alpha$ fluxes measured by \citet{calzetti15} differ by only $\sim$25\%, but after they attempt to correct for dust absorption using models that account for the intermixing of the stars and dust, the line fluxes are expected to differ by $\sim$6$\times$. 
The peak of the H30$\alpha$ source is detected by ALMA at the 18$\sigma$ level, so a second photoionization region with at least one-sixth the flux of the main source would be detected at the 3$\sigma$ level. It is possible that the second source is even fainter than expected relative to the primary H30$\alpha$ source, which could occur if the primary source is more obscured than expected. Alternatively, it is possible that both sources lie at the ends of a larger star forming complex that is heavily obscured in the optical and near-infrared bands, although the 0.46~arcsec separation between the two optical/near-infrared sources is larger than the $\sim$0.15~arcsec size of the H30$\alpha$ source we derived, which makes this second scenario less likely. The coincidence of the H30$\alpha$ emission with the 231.6~GHz continuum emission provides additional support for the possibility that the central star forming region is very heavily obscured. As discussed below, the 231.6~GHz continuum emission is expected to contain a combination of dust and free-free emission, although the relative contributions of each may be very uncertain. If the 231.6~GHz emission is primarily from dust, then we would expect the optical and near-infrared light to be heavily obscured where most of the photoionizing stars are located and the Pa$\alpha$ and H$\alpha$ emission to be more easily detected at the fringes of the region. In fact, \citet{calzetti15} noted that the Pa$\alpha$ emission is offset relative to the H$\alpha$ emission in their cluster 11, which would be consistent with this interpretation of the millimetre continuum and recombination line emission. However, if the 231.6~GHz emission includes substantial free-free emission, then we would expect to easily detect a second photoionization region, even one that is 20$\times$ fainter than the central photoionizing region. 
\begin{figure} \epsfig{file=bendogj_fig03a.ps} \epsfig{file=bendogj_fig03b.ps} \caption{Plots of the integrated 231.6~GHz continuum flux density (top) and integrated H30$\alpha$ flux (bottom) as a function of the radius of the measurement aperture. The uncertainties in the continuum measurement related to random noise in the image are smaller than the width of the line.} \label{f_curveofgrowth} \end{figure} To understand the distribution of the continuum and H30$\alpha$ emission, we measured integrated fluxes within apertures with radii varying from 0.1 arcsec, which is equivalent to the radius of the beam, to 9 arcsec, which is the radius at which we begin to measure artefacts related to the negative sidelobes of the central source. These curve-of-growth profiles are shown in Figure~\ref{f_curveofgrowth}. Most of the H30$\alpha$ flux from the unresolved central source falls within a radius of 0.3~arcsec, so we will use the measurements of the line flux from within that radius as the total flux. The integrated continuum emission peaks at a radius of $\sim$8.5~arcsec. Emission on angular scales larger than 17~arcsec is either resolved out, strongly affected by the negative sidelobes, or affected by the high noise at the edge of the primary beam. While differences in the signal-to-noise ratio may explain why the continuum emission is detected at a larger radius than the H30$\alpha$ line, it is also possible that a significant fraction of the continuum emission may originate from dust within NGC~5253 that is distributed more broadly than the photoionized gas. 
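As an illustrative aside, the statement that essentially all of the flux of an unresolved source is captured within a 0.3~arcsec aperture can be checked with a toy curve-of-growth calculation for a circular Gaussian source. The 0.24~arcsec FWHM below is roughly the geometric mean of the fitted 0.27$\times$0.21~arcsec profile, and the grid parameters are purely illustrative:

```python
import math

# Toy curve-of-growth for a circular Gaussian source. The FWHM of
# 0.24 arcsec is roughly the geometric mean of the fitted 0.27 x 0.21
# arcsec profile; the grid set-up is purely illustrative.
fwhm = 0.24
sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def enclosed_fraction(radius, sigma):
    """Analytic enclosed-flux fraction of a circular 2-D Gaussian."""
    return 1.0 - math.exp(-radius**2 / (2.0 * sigma**2))

def enclosed_fraction_grid(radius, sigma, half=1.0, n=801):
    """Same quantity from direct summation over a square pixel grid
    spanning +/- `half` arcsec with n x n pixels."""
    step = 2.0 * half / (n - 1)
    inside = total = 0.0
    for i in range(n):
        x = -half + i * step
        for j in range(n):
            y = -half + j * step
            w = math.exp(-(x * x + y * y) / (2.0 * sigma**2))
            total += w
            if x * x + y * y <= radius * radius:
                inside += w
    return inside / total

frac = enclosed_fraction(0.3, sigma)
print(f"enclosed within 0.3 arcsec: {frac:.3f}")  # ~0.99
```

The analytic and grid-summed fractions agree to better than a per cent, and both show that a 0.3~arcsec aperture captures nearly all of the flux of a source with this FWHM.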
\begin{table*} \centering \begin{minipage}{109mm} \caption{Continuum measurements of NGC~5253 at 230-250~GHz.} \label{t_contcomp} \begin{tabular}{@{}lccccc@{}} \hline Reference & Telescope & Frequency & Beam & Flux Density & Aperture \\ & & (GHz) & FWHM & (mJy) & Diameter\\ & & & (arcsec) & & (arcsec)\\ \hline [this paper] & ALMA & 231.6 & 0.28$\times$0.25 & $104 \pm 16$ & 17 \\ \citet{meier02} & OVRO & 233.3 & $6.5 \times 4.5$ & $46 \pm 10$ & 20 \\ \citet{vanzi04} & SEST & 250 & 11 & $114 \pm 4$ & 11 \\ \citet{miura15} & SMA & 230 & $11 \times 4$ & $34 \pm 9$ & $11 \times 4$ \\ \hline \end{tabular} \end{minipage} \end{table*} The 231.6~GHz continuum flux density within a radius of 8.5~arcsec is 104$\pm$16~mJy. A few other continuum flux densities at comparable frequencies have been published. These measurements as well as details about the data are listed in Table~\ref{t_contcomp}. The ALMA measurement is very close to the Swedish-ESO Submillimetre Telescope (SEST) measurement from \citet{vanzi04}. While the Vanzi et al. number is for both a smaller aperture and a higher frequency than the ALMA measurement, adjustments for the measurement aperture and the spectral slope should largely cancel out. The \citet{meier02} flux density from the Owens Valley Radio Observatory (OVRO) and the \citet{miura15} flux density from the Submillimeter Array (SMA) are both lower than the ALMA measurement by 2-3$\times$. Both measurements have high uncertainties related to either calibration or detection issues, which could partly explain the mismatch between these data and ours. The \citet{miura15} measurement is also for a smaller area; when we use an aperture similar to their beam size, we obtain a flux density of 47$\pm$7~mJy, which is close to their measurement. However, it is also likely that the OVRO and SMA data were insensitive to the faint, extended continuum emission from this source because of a combination of limited uv coverage and the broader beam size. 
The extended emission observed by OVRO and SMA could have been smeared into the negative sidelobes of the central source or could have been redistributed on spatial scales larger than the largest angular scales measurable by the arrays. The ALMA observations, which used both short and long baselines, can recover structures on the same angular scales as OVRO and SMA while not spreading the emission onto scales where it is not recoverable by the interferometer, and since the ALMA data have a smaller beam, any negative sidelobes will not cover a significant fraction of the diffuse, extended emission within a radius of 8.5~arcsec of the centre. We therefore think the ALMA flux density, which is consistent with the SEST measurement, should be fairly reliable. Having said this, we emphasize that much of the continuum emission in our image has a surface brightness at $<$5$\sigma$; more sensitive measurements would be needed to confirm our results. The emission at 231.6~GHz could originate from a variety of sources. The amount of dust emission at 231.6~GHz can be estimated by extrapolating the modified blackbody function from \citet{remyruyer13} that was fit to the 100-500~$\mu$m {\it Herschel} Space Observatory data. This gives a flux density of 63~mJy, but the uncertainties in the parameters for the best fitting function indicate that the uncertainty is $\sim$2$\times$. Free-free emission as well as a few more exotic emission mechanisms could contribute to the emission at 231.6~GHz. While a more in-depth analysis of the SED is beyond the scope of this paper, we discuss this topic further in Section~\ref{s_discuss_mmradio}. \begin{figure} \epsfig{file=bendogj_fig04.ps} \caption{The continuum-subtracted spectrum of the centre of NGC~5253 showing the H30$\alpha$ line emission. This spectrum was measured within a region with a radius of 0.5~arcsec. 
The green line shows the best-fitting Gaussian function, which has a mean relativistic velocity in the Barycentric frame of 391$\pm$2~km~s$^{-1}$ and a FWHM of 68$\pm$3~km~s$^{-1}$.} \label{f_spec} \end{figure} Figure~\ref{f_spec} shows the portion of the continuum-subtracted spectrum that includes the H30$\alpha$ line emission. The line has a mean relativistic velocity in the Barycentric reference frame of 391$\pm$2~km~s$^{-1}$ and a FWHM of 68$\pm$3~km~s$^{-1}$. The integral of the line is 0.86~Jy~km~s$^{-1}$ with a measurement uncertainty of 0.04~Jy~km~s$^{-1}$ and a calibration uncertainty of 0.13~Jy~km~s$^{-1}$ (15\%). The spectral window covering the H30$\alpha$ line does not include any other detectable spectral lines. Miura et al. (in preparation) will discuss the CO (2-1) line emission and any other spectral lines detected in the other spectral window. \subsection{SFR from the H30$\alpha$ data} \label{s_alma_sfr} For this analysis, we assume that the nuclear starburst detected in the H30$\alpha$ data contains most of the photoionizing stars in the galaxy and that a SFR derived from it will be representative of the global SFR. While the source is near the brightest clusters found in H$\alpha$ and Pa$\alpha$ emission, fainter $<$5~Myr clusters and diffuse emission are found outside the central starburst \citep{calzetti04, calzetti15, harris04}. Although most of this emission should fall within the region imaged by ALMA, the lack of H30$\alpha$ emission detected from these fainter sources could cause the SFR from the H30$\alpha$ emission to be biased downwards. Given the central concentration of the H$\alpha$ and Pa$\alpha$ emission, however, the bias should not be severe enough to significantly affect comparisons to other star formation tracers. 
The H30$\alpha$ flux can be converted to a photoionizing photon production rate $Q$ using \begin{equation} \begin{split} \frac{\mbox{Q}(\mbox{H30}\alpha)}{\mbox{s}^{-1}}= 3.99\times10^{30} \left[\frac{\alpha_B}{\mbox{ cm}^3\mbox{ s}^{-1}}\right] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \times \left[\frac{\epsilon_\nu}{\mbox{erg s}^{-1}\mbox{ cm}^{-3}}\right]^{-1} \left[\frac{\nu}{\mbox{~GHz}}\right] \left[\frac{D}{\mbox{ Mpc}}\right]^{2} \left[\frac{\int f_\nu(\mbox{line}) dv}{\mbox{Jy km s}^{-1}}\right] \end{split} \label{e_q_h30a} \end{equation} based on equations from \citet{scoville13}. The effective recombination coefficient ($\alpha_B$) and emissivity ($\epsilon_\nu$) terms in this equation depend on the electron density and temperature. Both terms vary by less than 15\% for electron densities between $10^2$ and $10^5$~cm$^{-3}$. The terms do vary significantly with electron temperature between 3000 and 15000~K, and the resulting $Q$ can change by a factor of $\sim$2.4 depending on which temperature is selected. In analyses at lower frequencies, where the continuum emission is dominated by free-free emission, it is possible to use the line-to-continuum ratio to estimate electron temperatures. However, thermal dust emission comprises a significant yet poorly-constrained fraction of the 231.6~GHz continuum emission in the central starburst, so this method would produce questionable results. Instead, we use 11500~K for the electron temperature, which is based on [O{\small III}] measurements near the centres of the brightest optical recombination line sources within NGC~5253 from \citet{kobulnicky97}, \citet{lopezsanchez07}, \citet{guseva11}, and \citet{monrealibero12}. For the electron density, we use 600~cm$^{-3}$, which is the mean of the electron densities based on measurements of the [O{\small II}] and [S{\small II}] lines from these same regions as given by \citet{lopezsanchez07}, \citet{guseva11}, and \citet{monrealibero12}. 
Although the electron density measurements from these three studies vary by a factor of $\sim$3, probably because of issues related to the position and size of the measurement apertures as discussed by \citet{monrealibero12}, the resulting $Q$ values should be unaffected by the relatively large disagreement in these measurements. Dust extinction could have affected both the electron temperature and density estimates, but we do not expect extinction effects to alter the data to a degree that would affect our calculations. We interpolated among the $\alpha_B$ and $\epsilon_\nu$ terms published by \citet{storey95} to calculate specific values for these terms based on our chosen electron temperature and density. Based on these terms and our H30$\alpha$ flux, we calculated $Q$ to be (1.9$\pm$0.3)$\times$$10^{52}$~s$^{-1}$. The $Q$ value can be converted to SFR (in M$_\odot$ yr$^{-1}$) using a simple scaling term, but this term depends upon the characteristics of the stellar population within the star forming regions. Using {\sc Starburst99} \citep{leitherer99}, \citet{murphy11} derived a scaling term of $7.29\times10^{-54}$ for solar metallicity (defined in older versions of {\sc Starburst99} as $Z$=0.020) and a \citet{kroupa02} initial mass function (IMF)\footnote{The Kroupa IMF used in {\sc Starburst99} is defined as having an index of 2.3 between 100~M$_\odot$ (the IMF upper mass boundary) and 0.5~M$_\odot$ and an index of 1.3 between 0.5~M$_\odot$ and 0.1~M$_\odot$ (the IMF lower mass boundary).}. However, the conversion factor is dependent on metallicity. \citet{monrealibero12} reported that 12+log(O/H) for NGC~5253 is 8.26$\pm$0.04 (on a scale where solar metallicity corresponds to 12+log(O/H)=8.66 and $Z$=0.014), which is consistent with older results \citep{kobulnicky97, lopezsanchez07, guseva11}. Assuming that all other abundances scale with the O/H ratio, this oxygen abundance is equivalent to $Z$$\cong$0.0056. 
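As a numerical sanity check, Equation~\ref{e_q_h30a} can be evaluated directly. The $\alpha_B$ and $\epsilon_\nu$ values below are order-of-magnitude placeholders for $T_e\approx11500$~K and $n_e\approx600$~cm$^{-3}$ (the analysis above interpolates the actual values from \citealt{storey95}); the distance, rest frequency, line flux, and $Q$-to-SFR scaling terms are taken from the text:

```python
# Evaluate Equation (e_q_h30a). alpha_B and epsilon_nu are illustrative
# placeholders of roughly the right magnitude for T_e ~ 11500 K; the
# values used in the analysis were interpolated from Storey & Hummer (1995).
alpha_B    = 2.2e-13   # effective recombination coefficient, cm^3 s^-1 (assumed)
epsilon_nu = 9.0e-32   # H30alpha emissivity, erg s^-1 cm^-3 (assumed)
nu_ghz     = 231.90    # H30alpha rest frequency, GHz
D_mpc      = 3.15      # adopted distance to NGC 5253, Mpc
line_flux  = 0.86      # integrated H30alpha line flux, Jy km s^-1

Q = 3.99e30 * (alpha_B / epsilon_nu) * nu_ghz * D_mpc**2 * line_flux
print(f"Q ~ {Q:.2e} s^-1")  # close to the published (1.9 +/- 0.3)e52

# Scaling terms from the text: solar metallicity without rotation
# (Murphy et al. 2011) and Z = 0.0056 with stellar rotation.
sfr_solar = 7.29e-54 * Q
sfr_rot   = 4.62e-54 * Q
print(f"SFR ~ {sfr_solar:.3f} and {sfr_rot:.3f} Msun/yr")
```

With these placeholder atomic terms the script returns values near the tabulated SFRs of 0.14 and 0.087~M$_\odot$~yr$^{-1}$, but the exact output depends on the adopted $\alpha_B$ and $\epsilon_\nu$.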
The current version (7.0.1) of {\sc Starburst99} \citep{leitherer14} provides the Geneva 2012/13 evolutionary tracks, which include stellar rotation \citep{ekstrom12, georgy12}, as an option for the simulations. Stellar rotation enhances convection and also increases the metallicity at the surfaces of the stars, which makes stars hotter and therefore affects the conversion between $Q$ and SFR. Currently, {\sc Starburst99} includes tracks for no rotation and for rotation at 40\% of the break-up velocity. These two extremes are expected to bracket the actual rotation velocities of typical stellar populations. In addition, the current version of {\sc Starburst99} includes Geneva tracks with rotation only for $Z=0.002$ and $Z=0.014$. We assume that the logarithm of $Q$ scales linearly with $Z$ when deriving conversion factors. Based on {\sc Starburst99} results using the 1994 versions of the Geneva tracks \citep{shaller92, charbonnel93, schaerer93a, schaerer93b}, this interpolation should be accurate to within 5\%. We report three versions of the SFR: a version based on solar metallicity and no rotation using the \citet{murphy11} conversion; a version based on the Geneva 2012/13 tracks for $Z$=0.0056 with no stellar rotation; and a version based on the Geneva 2012/13 tracks for $Z$=0.0056 where we use the average of the model results for no rotation and for rotation at 40\% of the break-up velocity. The last two numbers are derived from {\sc Starburst99} simulations using a Kroupa IMF and should be applicable to scenarios where star formation has been continuous for $>$10~Myr. The three conversion factors are listed in Table~\ref{t_qsfrconversion}, with more general-purpose correction factors for metallicity and rotation effects listed in Table~\ref{t_sfrcorrfac}. The SFRs based on Equation~\ref{e_q_h30a} and the conversion factors between $Q$ and SFR are reported in Table~\ref{t_sfr}. 
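The assumption that $\log Q$ scales linearly with $Z$ between the two available rotating-track metallicities can be written out explicitly. The endpoint $\log_{10} Q$ values per unit SFR below are hypothetical placeholders; only the interpolation scheme follows the text:

```python
def interp_log_q(z, z_lo=0.002, z_hi=0.014, log_q_lo=53.35, log_q_hi=53.20):
    """Linear interpolation of log10(Q) as a function of Z between the
    two metallicities for which Starburst99 provides rotating Geneva
    tracks. The log_q endpoint values are hypothetical placeholders;
    lower Z gives hotter stars and hence more ionizing photons per
    unit SFR, so log_q_lo > log_q_hi."""
    frac = (z - z_lo) / (z_hi - z_lo)
    return log_q_lo + frac * (log_q_hi - log_q_lo)

# Evaluate at the metallicity adopted for NGC 5253:
log_q_ngc5253 = interp_log_q(0.0056)
print(f"log10(Q) at Z=0.0056: {log_q_ngc5253:.3f}")
```

Because $Z=0.0056$ lies 30 per cent of the way from $Z=0.002$ to $Z=0.014$, the interpolated value lies correspondingly between the two endpoints.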
Additionally, Figure~\ref{f_sfr} shows a graphical comparison between the SFR from the H30$\alpha$ data and the SFRs calculated from other star formation tracers as described in the following section. When metallicity effects are accounted for (using the new Geneva tracks), the resulting SFR decreases by $\sim$25\%. When rotation is incorporated, the SFR decreases by an additional $\sim$10\% relative to the solar value without rotation. \begin{table} \caption{Conversions between $Q$ and SFR.} \label{t_qsfrconversion} \begin{center} \begin{tabular}{@{}lc@{}} \hline Model & Conversion from $Q$ to SFR \\ & ( M$_\odot$ yr$^{-1}$ / (s$^{-1}$) )\\ \hline Solar metallicity, no stellar rotation & $7.29\times10^{-54}$$^a$\\ $Z$=0.0056$^b$, no stellar rotation & $5.40\times10^{-54}$\\ $Z$=0.0056$^b$, with stellar rotation & $4.62\times10^{-54}$\\ \hline \end{tabular} \end{center} $^a$ This conversion is from \citet{murphy11}.\\ $^b$ These metallicities are based on a scale where solar metallicity is 0.014. \end{table} \begin{table*} \centering \begin{minipage}{123mm} \caption{Corrections in SFR relative to the solar metallicity scenario with no stellar rotation.} \label{t_sfrcorrfac} \begin{tabular}{@{}cccc@{}} \hline General & Specific & \multicolumn{2}{c}{Correction factor$^a$} \\ waveband & star formation & $Z$=0.0056 & $Z$=0.0056 \\ & metrics & no stellar rotation$^b$ & with stellar rotation$^b$ \\ \hline Recombination lines & H$\alpha$, H30$\alpha$ & 0.74 & 0.63 \\ Infrared & 22~$\mu$m, 70~$\mu$m, 160~$\mu$m, total infrared & $0.90 \pm 0.03$ & $0.72 \pm 0.03$ \\ \hline \end{tabular} $^a$ Derived SFRs should be multiplied by these correction factors.\\ $^b$ On this scale, solar metallicity corresponds to $Z$=0.014, and $Z$=0.0056 is the approximate metallicity of NGC~5253 based on 12+log(O/H) measurements. 
\end{minipage} \end{table*} \begin{table*} \centering \begin{minipage}{119mm} \caption{SFRs calculated for NGC 5253.} \label{t_sfr} \begin{tabular}{@{}lccc@{}} \hline Waveband & \multicolumn{3}{c}{SFR (M$_\odot$~yr$^{-1}$)$^a$}\\ & Solar metallicity & $Z$=0.0056 & $Z$=0.0056 \\ & no stellar rotation & no stellar rotation & with stellar rotation \\ \hline H30$\alpha$ & $0.14 \pm 0.02$ & $0.102 \pm 0.015$ & $0.087 \pm 0.013$ \\ \hline 22~$\mu$m & $0.41 \pm 0.02$ & $0.37 \pm 0.02$ & $0.30 \pm 0.02$ \\ 70~$\mu$m & $0.095 \pm 0.004$ & $0.086 \pm 0.005$ & $0.068 \pm 0.004$ \\ 160~$\mu$m & $0.070 \pm 0.003$ & $0.063 \pm 0.003$ & $0.050 \pm 0.003$ \\ Total infrared & $0.148 \pm 0.009$ & $0.133 \pm 0.009$ & $0.107 \pm 0.008$ \\ \hline H$\alpha$ (corrected using Pa$\alpha$, Pa$\beta$ data)$^b$ & $0.11 \pm 0.03$ & $0.08 \pm 0.02$ & $0.07 \pm 0.02$ \\ H$\alpha$ (corrected using 22~$\mu$m data)& $0.221 \pm 0.012$ & $0.164 \pm 0.009$ & $0.139 \pm 0.008$ \\ H$\alpha$ (corrected using total infrared data) & $0.055 \pm 0.04$ & $0.041 \pm 0.003$ & $0.035 \pm 0.003$ \\ \hline \end{tabular} $^a$ All uncertainties incorporate measurement uncertainties and uncertainties related to the correction factors in Table~\ref{t_sfrcorrfac}. The uncertainties do not include the uncertainties in the multiplicative factors applied when converting from the fluxes in the individual bands to SFR.\\ $^b$ The extinction corrections for these data were calculated and applied by \citet{calzetti15}.\\ \end{minipage} \end{table*} \begin{figure*} \epsfig{file=bendogj_fig05.ps} \caption{A graphical depiction of the SFRs for NGC~5253. These are values calculated using corrections to account for the metallicity of NGC~5253 ($Z=0.0056$) and stellar rotation effects; they correspond to the values in the rightmost column in Table~\ref{t_sfr}. 
Uncertainties for the individual data points (when they are larger than the symbols in this plot) correspond to the measurement uncertainties and uncertainties related to the correction factors in Table~\ref{t_sfrcorrfac} but do not include uncertainties in the conversion factors between the fluxes and SFR. The grey band corresponds to the mean and 1$\sigma$ uncertainties in the SFR from the H30$\alpha$ data.} \label{f_sfr} \end{figure*} \section{Other star formation tracers} \label{s_othersfr} Having derived the SFR from the H30$\alpha$ data, we calculated SFRs using other publicly-available infrared and H$\alpha$ data. While the H30$\alpha$ emission primarily originates from stars younger than 5~Myr, other star formation tracers may trace stellar populations with different ages, so the derived SFRs could differ if the rate has changed significantly over time. SFRs based on most star formation tracers are also affected by IMF, metallicity, and stellar rotation effects. Most current conversions, including all of the ones we used, are calibrated for a Kroupa IMF. However, the SFR equations are usually calibrated for solar metallicities and do not account for stellar rotation. We therefore report SFRs both using the original equations for solar metallicity and using conversions modified for the metallicity of NGC~5253. For the lower metallicity scenario, we include SFRs with and without incorporating stellar rotation effects. All corrections are derived using version 7.0.1 of the {\sc Starburst99} models and are applicable to scenarios with a Kroupa IMF and continuous star formation older than 10~Myr. The corrections for stellar rotation are the average of the values for no rotation and for rotation at 40\% of the break-up velocity. \subsection{Comparisons of H30$\alpha$ results to other results from millimetre and radio data} \label{s_discuss_mmradio} The H30$\alpha$ results can be compared to some of the published $Q$ values based on analyses of other radio data. 
To start with, two papers have published analyses based on radio recombination lines. \citet{mohan01} published multiple values of $Q$ based on different models applied to H92$\alpha$ (8.31~GHz) data; these $Q$ values range from 0.9$\times$$10^{52}$ to 2$\times$$10^{52}$~s$^{-1}$ when rescaled for a distance of 3.15~Mpc. Using H53$\alpha$ (42.95~GHz) data, \citet{rodriguezrico07} obtained a $Q$ of 1.2$\times$$10^{52}$~s$^{-1}$. Most of these measurements are lower than the value of (1.9$\pm$0.3)$\times$$10^{52}$~s$^{-1}$ from the H30$\alpha$ data, although Model IV from \citet{mohan01} produced a slightly higher $Q$, and most other results differ by less than 2$\times$. Both the \citet{mohan01} and \citet{rodriguezrico07} $Q$ values are based on models that have attempted to account for masing and gas opacity effects, but it is possible that they were not always able to correct for these effects accurately. Nonetheless, some fine-tuning of the models of the lower frequency recombination line emission may yield more accurate $Q$ and SFR values from the higher order recombination lines. A series of analyses of the 23-231~GHz continuum emission published by \citet{turner00}, \citet{meier02}, \citet{turner04}, and \citet{miura15} treated the emission at these frequencies as originating mainly from free-free emission. Based on this assumption, these observations gave $Q$ values ranging from 2.4$\times$$10^{52}$ to 4.8$\times$$10^{52}$~s$^{-1}$ after being rescaled for a distance of 3.15~Mpc. Some of these values are significantly higher than the value of (1.9$\pm$0.3)$\times$$10^{52}$~s$^{-1}$ from the H30$\alpha$ line emission. As discussed in Section~\ref{s_images}, thermal dust emission could produce more than half of the observed emission at 231~GHz, and this probably resulted in the high $Q$ values calculated by \citet{meier02} and \citet{miura15} based on photometry at similar frequencies. 
The highest $Q$ from \citet{turner04} is based on 43~GHz emission measured within a region with a diameter of 1.2~arcsec. \citet{rodriguezrico07} determined that only $\sim$20\% of the 43~GHz emission originated from a region with a diameter of 0.4~arcsec, comparable to the region where we detected H30$\alpha$ emission. If we adjust the value of $Q$ from \citet{turner04} to account for the aperture effects, the value drops to $\sim$1.0$\times$$10^{52}$~s$^{-1}$, which is nearly 2$\times$ lower than what we measure from the H30$\alpha$ source. \citet{meier02} and \citet{rodriguezrico07} both indicate that optical thickness effects could alter the free-free emission, which complicates the conversion from continuum emission to $Q$. Additionally, it is possible that other emission sources that are not understood as well from a physical standpoint, such as the ``submillimetre excess'' emission that has been identified in many other low-metallicity dwarf galaxies \citep[e.g. ][]{galametz11,remyruyer13} or anomalous microwave emission \citep[e.g. ][]{murphy10, dickinson13}, could contribute to the emission in the millimetre bands. The data from the analysis in our paper are insufficient for determining how any of these phenomena affect the radio or millimetre emission from NGC~5253 or affect the calculation of $Q$ from radio or millimetre data. A deeper analysis of the SED, potentially including the reprocessing and analysis of ALMA 86-345~GHz data acquired in 2014 and later as well as a re-examination of archival radio data from the Very Large Array, would be needed to identify the various emission components in the 1-350~GHz regime as well as to determine how to convert the continuum emission to SFR. However, this is beyond the scope of our paper. 
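The aperture adjustment described above is straightforward arithmetic. Here we take the top of the quoted 23-231~GHz range, $4.8\times10^{52}$~s$^{-1}$, as the highest \citet{turner04} value, together with the $\sim$20 per cent flux fraction from \citet{rodriguezrico07}:

```python
q_turner04    = 4.8e52  # highest Q from the 23-231 GHz analyses, s^-1
flux_fraction = 0.20    # fraction of 43 GHz emission within 0.4 arcsec
q_h30a        = 1.9e52  # Q from the H30alpha line in this section, s^-1

q_adjusted = flux_fraction * q_turner04
print(f"aperture-adjusted Q ~ {q_adjusted:.1e} s^-1")          # ~1.0e52
print(f"H30alpha Q / adjusted Q ~ {q_h30a / q_adjusted:.1f}")  # ~2
```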
Aside from the calculations described above, a widely-used conversion between radio continuum emission and SFR relies upon the empirical correlation between 1.4~GHz emission, which is an easily-observable radio continuum band, and far-infrared emission. We can only find published 1.4~GHz flux densities integrated over an area much broader than the aperture used for the H30$\alpha$ measurement, and the radio emission is very extended compared to the central starburst \citep[e.g. ][]{turner98, rodriguezrico07}. Hence, any SFR calculated using the available globally-integrated flux density will also be affected by aperture effects, so it would be inappropriate to list it alongside the other SFRs in Table~\ref{t_sfr}. Having said this, if we use \begin{equation} \frac{\mbox{SFR}(\mbox{1.4 GHz})}{\mbox{M}_\odot\mbox{ yr}^{-1}} = 0.0760 \left[\frac{D}{\mbox{Mpc}}\right]^{2} \left[\frac{f_\nu(1.4 \mbox{ GHz})}{\mbox{Jy}}\right], \label{e_sfr_1.4ghz} \end{equation} which is based on the conversion from \citet{murphy11}, and the globally-integrated measurement of $84.7 \pm 3.4$~mJy from the 1.4~GHz NRAO VLA Sky Survey \citep{condon98}, we obtain a SFR of $0.064 \pm 0.003$~M$_\odot$~yr$^{-1}$. The SFR from the 1.4~GHz emission is lower than the SFR from the H30$\alpha$ data even though the 1.4~GHz emission is measured over a much larger area. The 1.4~GHz band has been calibrated as a star formation tracer using data for solar metallicity galaxies and relies upon a priori assumptions about the relative contributions of free-free and synchrotron emission to the SED. The ratio of these forms of emission can vary between spiral and low mass galaxies, but the relation between infrared and radio emission has generally been found to stay linear in low metallicity galaxies \citep[e.g. ][ and references therein]{bell03,wu07}. 
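Evaluating Equation~\ref{e_sfr_1.4ghz} with the NVSS flux density quoted above reproduces the stated SFR:

```python
D_mpc     = 3.15    # adopted distance to NGC 5253, Mpc
f_nvss_jy = 0.0847  # NVSS 1.4 GHz flux density (84.7 mJy) in Jy

# SFR(1.4 GHz) = 0.0760 * D^2 * f_nu, with D in Mpc and f_nu in Jy
sfr_14ghz = 0.0760 * D_mpc**2 * f_nvss_jy
print(f"SFR(1.4 GHz) ~ {sfr_14ghz:.3f} Msun/yr")  # ~0.064
```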
In NGC~5253 specifically, the relative contribution of synchrotron emission to the 1.4~GHz band is very low \citep{rodriguezrico07}, which is probably sufficient to cause the SFR from the globally-integrated 1.4~GHz measurement to fall below the SFR from the nuclear H30$\alpha$ measurement as well as the SFR from the total infrared flux (see Section~\ref{s_othersfr_ir}). In any case, it appears that Equation~\ref{e_sfr_1.4ghz} simply is not suitable for NGC~5253, as it evidently deviates from the empirical relations between radio emission and either infrared flux or SFR. An SFR based on the spectral decomposition of the SED and the analysis of its components would potentially yield more accurate results, but as we stated above, this analysis is beyond the scope of this paper. \subsection{Comparisons of H30$\alpha$ and infrared dust continuum results} \label{s_othersfr_ir} \subsubsection{SFR calculations} \label{s_othersfr_ircalc} For calculating SFRs based on infrared flux densities, we used globally-integrated values, mainly because the infrared emission appears point-like in most data and because no data exist with angular resolutions comparable to the ALMA data. Figures~\ref{f_map} and \ref{f_curveofgrowth} show that a significant fraction of the cold dust emission originates from an extended region outside the central starburst. However, the 8~$\mu$m image from \citet{dale09} shows that most of the emission in that band (a combination of hot dust and polycyclic aromatic hydrocarbon (PAH) emission as well as a small amount of stellar emission) originates from a central unresolved source with a diameter smaller than 2~arcsec. The emission at mid-infrared wavelengths should be very compact and therefore should be directly related to the luminosity of the central starburst seen in H30$\alpha$ emission. The SFR from the total infrared emission could be affected by diffuse dust heated by older stellar populations outside the centre and therefore could be higher. 
While we do not apply any corrections for this diffuse dust, we discuss the implications of this more in Section~\ref{s_othersfr_irdiscuss}. \begin{table} \caption{Infrared data for NGC 5253.} \label{t_data_ir} \begin{center} \begin{tabular}{@{}lc@{}} \hline Wavelength & Flux \\ ($\mu$m) & density$^a$ (Jy)\\ \hline 12 & $1.86 \pm 0.08$ \\ 22 & $12.4\pm0.7$ \\ 70 & $33.0 \pm 1.6$ \\ 100 & $33.2 \pm 1.6$ \\ 160 & $22.4 \pm 1.2$ \\ 250 & $7.3 \pm 0.6$ \\ 350 & $3.3 \pm 0.3$ \\ 500 & $1.04 \pm 0.09$\\ \hline \end{tabular} \end{center} \end{table} We calculated SFRs based on flux densities measured in individual bands as well as total infrared fluxes based on integrating the SED between 12 and 500~$\mu$m, which covers most of the dust continuum emission. Table~\ref{t_data_ir} shows the data we used in this analysis. For the 12 and 22~$\mu$m bands, we made measurements within Wide-field Infrared Survey Explorer \citep[WISE; ][]{wright10} images from the AllWISE data release. Although 24~$\mu$m flux densities based on {\it Spitzer} data have been published by \citet{engelbracht08} and \citet{dale09}, the centre of NGC~5253 is saturated in the 24~$\mu$m image \citep{bendo12}. Hence, we used the WISE 22~$\mu$m data instead and assumed that the conversions from WISE 22~$\mu$m flux densities to SFR will be the same as for {\it Spitzer} 24~$\mu$m data. The 12 and 22~$\mu$m flux densities were measured within circles with radii of 150 arcsec; this is large enough to encompass the optical disc of the galaxy as well as the beam from the central source, which has a FWHM of 12~arcsec at 22~$\mu$m \citep{wright10}. The backgrounds were measured within annuli with radii of 450 and 500~arcsec and subtracted before measuring the flux densities. Calibration uncertainties are 5\% at 12~$\mu$m and 6\% at 22~$\mu$m. 
Colour corrections from \citet{wright10}, which change the flux densities by $~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}}$10\%, are applied based on spectral slopes proportional to $\nu^{-3}$ at 12~$\mu$m and $\nu^{-2}$ at 22~$\mu$m, which are based on the spectral slopes between 12 and 22~$\mu$m and between 22 and 70~$\mu$m. For the 70, 100, 160, 250, 350, and 500~$\mu$m measurements, we used the globally-integrated flux densities measured from {\it Herschel} data by \citet{remyruyer13}. These data include no colour corrections, so we applied corrections equivalent to those for a point-like modified blackbody with a temperature of 30~K and an emissivity that scales as $\nu^2$, which is equivalent to the shape of the modified blackbody fit by \citet{remyruyer13} to the data. The 70, 100, and 160~$\mu$m colour corrections, which change the flux densities by $~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}}$5\%, are taken from \citet{muller11}\footnote{http://herschel.esac.esa.int/twiki/pub/Public/PacsCalibrationWeb\\ /cc\_report\_v1.pdf}, and the 250, 350, and 500~$\mu$m colour corrections, which change the flux densities by $\sim$10\%, are taken from \citet{valtchanov17}\footnote{http://herschel.esac.esa.int/Docs/SPIRE/spire\_handbook.pdf}. These colour-corrected data are listed in Table~\ref{t_data_ir}. While multiple methods have been derived for calculating the total infrared flux using a weighted sum of flux densities measured in multiple other individual bands \citep[e.g. ][]{dale02,boquien10,galametz13,dale14}, most of these derivations are calibrated for galaxies with SEDs similar to those of spiral galaxies. The dust in NGC~5253 is much hotter than in most spiral galaxies, and in particular, the 22/70~$\mu$m ratio of 0.38 in NGC~5253 is higher than the 24/70~$\mu$m ratio of 0.05-0.10 found in many spiral galaxies \citep[e.g. ][]{dale07,bendo12}. 
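Because these weighted-sum calibrations are unsuitable for such a hot SED, the total infrared flux can instead be estimated directly from the tabulated flux densities, for instance by treating each pair of adjacent bands as a power-law segment in frequency and integrating each segment analytically. The sketch below uses the colour-corrected flux densities from Table~\ref{t_data_ir}; the integration scheme is one plausible implementation rather than necessarily the exact one used for the published value:

```python
import math

# Colour-corrected flux densities for NGC 5253 (values copied from the
# table in this section); wavelengths in micron, flux densities in Jy.
wavelengths_um = [12.0, 22.0, 70.0, 100.0, 160.0, 250.0, 350.0, 500.0]
flux_jy       = [1.86, 12.4, 33.0, 33.2, 22.4, 7.3, 3.3, 1.04]

C_UM_HZ = 2.998e14  # speed of light in micron Hz

def total_infrared_flux(lams, fluxes):
    """Integrate f_nu over frequency, treating each pair of adjacent
    points as a power-law segment (linear in log f vs log nu)."""
    # Convert to frequency and sort in ascending nu.
    pts = sorted((C_UM_HZ / l, f) for l, f in zip(lams, fluxes))
    total = 0.0
    for (nu1, f1), (nu2, f2) in zip(pts, pts[1:]):
        alpha = math.log(f2 / f1) / math.log(nu2 / nu1)
        if abs(alpha + 1.0) < 1e-12:   # f ~ 1/nu: integral is logarithmic
            total += f1 * nu1 * math.log(nu2 / nu1)
        else:
            total += f1 * nu1 * ((nu2 / nu1) ** (alpha + 1.0) - 1.0) / (alpha + 1.0)
    return total  # Jy Hz

tir = total_infrared_flux(wavelengths_um, flux_jy)
print(f"total infrared flux ~ {tir:.2e} Jy Hz")  # ~3.2e14 Jy Hz
```

This simple scheme recovers a value consistent with the (3.2$\pm$0.2)$\times$$10^{14}$~Jy~Hz quoted in the text.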
We therefore calculated the total infrared flux by first linearly interpolating between the logarithms of the monochromatic flux densities as a function of the logarithm of the wavelength and then integrating under the curve. This gave a total infrared flux of (3.2$\pm$0.2)$\times$$10^{14}$~Jy~Hz, with the uncertainties derived using a Monte Carlo analysis. Most conversions from infrared emission to SFR work under the assumption that star forming regions produce most of the observed bolometric luminosity of galaxies and that dust absorbs and re-radiates virtually all of the emission from the star forming regions. Additionally, the conversion of measurements from individual infrared wavebands to SFR is based on the assumption that the SED does not change shape and that the flux densities in the individual bands scale linearly with the total infrared flux. For this analysis, we used \begin{equation} \frac{\mbox{SFR}(22\mu\mbox{m})}{\mbox{M}_\odot\mbox{ yr}^{-1}}= 2.44\times10^{-16} \left[\frac{D}{\mbox{Mpc}}\right]^{2} \left[\frac{\nu f_\nu(22\mu\mbox{m})}{\mbox{Jy Hz}}\right] \label{e_sfr_22} \end{equation} from \citet{rieke09}, \begin{equation} \frac{\mbox{SFR}(70\mu\mbox{m})}{\mbox{M}_\odot\mbox{ yr}^{-1}}= 7.04\times10^{-17} \left[\frac{D}{\mbox{Mpc}}\right]^{2} \left[\frac{\nu f_\nu(70\mu\mbox{m})}{\mbox{Jy Hz}}\right] \label{e_sfr_70} \end{equation} from \citet{calzetti10}, \begin{equation} \frac{\mbox{SFR}(160\mu\mbox{m})}{\mbox{M}_\odot\mbox{ yr}^{-1}}= 1.71\times10^{-16} \left[\frac{D}{\mbox{Mpc}}\right]^{2} \left[\frac{\nu f_\nu(160\mu\mbox{m})}{\mbox{Jy Hz}}\right] \label{e_sfr_160} \end{equation} from \citet{calzetti10}, and \begin{equation} \begin{split} \frac{\mbox{SFR}(\mbox{total infrared})}{\mbox{M}_\odot\mbox{ yr}^{-1}} &= 4.66\times10^{-17} \left[\frac{D}{\mbox{Mpc}}\right]^{2} \\ &\quad \times \left[\frac{f(\mbox{total infrared})}{\mbox{Jy Hz}}\right] \end{split} \label{e_sfr_tir} \end{equation} from \citet{kennicutt12} based on derivations from \citet{hao11} and \citet{murphy11}. The SFRs calculated using these equations are listed in Table~\ref{t_sfr}. While the conversion for total infrared flux is derived using {\sc Starburst99} for solar metallicity and older (but unspecified) stellar evolutionary tracks, the conversions for the 22, 70, and 160~$\mu$m bands are based in part upon empirical relations between the emission in those individual bands and other star formation tracers. However, all of these conversions are based upon the assumption that the infrared flux as measured in an individual band or as integrated over a broad wavelength range will scale with the bolometric luminosity. Based on both the Geneva 1994 and 2012/13 tracks implemented in {\sc Starburst99} version 7.0.1, the conversion factors in Equations~\ref{e_sfr_22}-\ref{e_sfr_tir} should be multiplied by 0.90 to correct for the lower metallicity in NGC~5253. To account for metallicity and rotation in the same way as we did for the recombination lines, the conversions need to be multiplied by 0.72. Versions of the SFRs with these corrections applied are listed in Table~\ref{t_sfr} alongside the values calculated assuming solar metallicity and no stellar rotation. \subsubsection{Discussion} \label{s_othersfr_irdiscuss} The 22~$\mu$m flux density yielded a SFR that was $\sim$3$\times$ higher than the H30$\alpha$ SFR and is also significantly higher than the SFRs calculated using most other methods. The aberrant SFR from the mid-infrared data is a consequence of the low metallicity of NGC~5253. As stated in the previous section, the conversions from flux densities in individual infrared bands to SFRs are based upon the key assumptions that the total infrared flux originates from light absorbed from star forming regions and that the individual bands will scale linearly with the total infrared flux.
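The interpolate-and-integrate calculation of the total infrared flux described above can be reproduced from the Table~\ref{t_data_ir} values. The following is a minimal sketch, not the original analysis code; the Monte Carlo step simply perturbs the flux densities by their quoted uncertainties.

```python
import numpy as np

# Table 1: wavelengths (um), flux densities and uncertainties (Jy)
lam = np.array([12.0, 22.0, 70.0, 100.0, 160.0, 250.0, 350.0, 500.0])
f_jy = np.array([1.86, 12.4, 33.0, 33.2, 22.4, 7.3, 3.3, 1.04])
err = np.array([0.08, 0.7, 1.6, 1.6, 1.2, 0.6, 0.3, 0.09])

lam_fine = np.logspace(np.log10(lam[0]), np.log10(lam[-1]), 20000)
nu_inc = (2.99792458e14 / lam_fine)[::-1]    # Hz, sorted into increasing order

def tir_flux(f):
    """Interpolate log f_nu linearly in log(lambda) (power-law segments)
    and integrate f_nu over frequency with the trapezoidal rule."""
    logf = np.interp(np.log10(lam_fine), np.log10(lam), np.log10(f))
    f_inc = (10.0 ** logf)[::-1]
    return np.sum(0.5 * (f_inc[1:] + f_inc[:-1]) * np.diff(nu_inc))

tir = tir_flux(f_jy)                         # ~3.2e14 Jy Hz
sfr_tir = 4.66e-17 * 3.15**2 * tir           # total-infrared conversion, D = 3.15 Mpc

# Monte Carlo uncertainty: rerun the integration with perturbed fluxes
rng = np.random.default_rng(0)
tir_err = np.std([tir_flux(rng.normal(f_jy, err)) for _ in range(500)])
```

The integral lands within the quoted (3.2$\pm$0.2)$\times$$10^{14}$~Jy~Hz, and the corresponding SFR is $\sim$0.15~M$_\odot$~yr$^{-1}$ before any metallicity or rotation rescaling.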
When the second condition is not met, the SFRs from individual infrared bands will be inaccurate. This problem had been anticipated for low metallicity galaxies like NGC~5253 \citep[e.g. ][]{calzetti10}. Low metallicity galaxies contain less interstellar dust, so the light from star forming regions is not attenuated as much as it is in larger galaxies. As a result, the dust that is present is irradiated by a relatively hard and strong radiation field, which makes the dust warmer than in spiral galaxies \citep{hunt05, engelbracht08, rosenberg08, hirashita09, remyruyer13}. The resulting change in the shape of the infrared SED biases the SFR from 22~$\mu$m data upwards. The 70~$\mu$m flux density yielded a SFR that was 1-2$\sigma$ smaller than the H30$\alpha$ value (depending on which versions in Table~\ref{t_sfr} are compared), while the SFR from the 160~$\mu$m data was approximately half of the H30$\alpha$ value. This change in the calculated SFR with increasing wavelength is clearly a consequence of the unusually hot dust within NGC~5253. The longer wavelength bands appear low in comparison to the total infrared flux, which is expected to scale with the bolometric luminosity, and the resulting SFRs from the 70 and 160~$\mu$m data are also lower. Additionally, longer wavelength infrared bands are typically expected to include emission from diffuse dust heated by older stars \citep[e.g. ][]{bendo15a}, which should affect the SFRs based on the data from these bands \citep{calzetti10}. However, if diffuse dust (particularly extended cold dust emission from outside the central starburst) were present, the SFRs should be biased upwards at longer wavelengths. The low SFRs from the 70 and 160~$\mu$m data indicate that the 70 and 160~$\mu$m bands contain relatively little cold, diffuse dust, at least compared to the galaxies used by \citet{calzetti10} to derive the relations between SFR and emission in these bands.
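The band-to-band behaviour discussed above follows directly from the monochromatic conversions in Equations~\ref{e_sfr_22}--\ref{e_sfr_160}. As a sketch, applying them to the Table~\ref{t_data_ir} flux densities at the 3.15~Mpc distance adopted in this paper (before any metallicity or rotation rescaling):

```python
C_UM = 2.99792458e14                  # speed of light in um Hz

def nu_f_nu(lam_um, f_jy):
    """nu * f_nu in Jy Hz for a flux density f_jy (Jy) at lam_um (um)."""
    return (C_UM / lam_um) * f_jy

D = 3.15                              # distance in Mpc
sfr_22 = 2.44e-16 * D**2 * nu_f_nu(22.0, 12.4)     # ~0.41 M_sun/yr
sfr_70 = 7.04e-17 * D**2 * nu_f_nu(70.0, 33.0)     # ~0.10 M_sun/yr
sfr_160 = 1.71e-16 * D**2 * nu_f_nu(160.0, 22.4)   # ~0.07 M_sun/yr
```

The declining trend with wavelength mirrors the hot SED of NGC~5253: relative to the total infrared flux, the 22~$\mu$m band is bright and the 160~$\mu$m band is faint.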
Notably, the difference between the SFRs from the total infrared and H30$\alpha$ fluxes is less than 1$\sigma$ or 5\% before any corrections for metallicity or stellar rotation are applied. This increases to 1.5-2$\sigma$ or 20-30\% when the corrections are applied, but given the number of assumptions behind the corrections as well as the calculation of the SFRs themselves, this match is actually very good. Aside from the potential calibration issues, the total infrared flux may also yield a slightly higher SFR because the dust emission could also include some energy absorbed from an older stellar population. However, given the low SFRs from the 70 and 160~$\mu$m bands, which would be more strongly affected by an older stellar population, it seems very unlikely that any such diffuse dust emission, especially extended dust emission outside the central starburst, contributes significantly to the total infrared flux. It is also possible that photoionizing photons are directly absorbed by dust grains before they are absorbed by the ionized gas, which could cause a slight difference in the SFRs. None the less, calibration issues are probably one of the main reasons for any discrepancies between the SFRs from the H30$\alpha$ and total infrared fluxes.
\subsection{Optical and near-infrared recombination line data} \subsubsection{SFR calculations} \label{s_othersfr_hacalc} \begin{table} \caption{H$\alpha$ fluxes for the centre of NGC~5253$^a$.} \label{t_ha} \begin{center} \begin{tabular}{@{}lc@{}} \hline Extinction Correction & Flux (erg cm$^{-2}$ s$^{-1}$) \\ \hline No correction$^b$ & $(8.9 \pm 2.0)\times10^{-13}$ \\ Correction by \citet{calzetti15} & $(1.7 \pm 0.5)\times10^{-11}$ \\ Correction using 22~$\mu$m flux density$^c$ & $(3.47 \pm 0.19)\times10^{-11}$\\ Correction using total infrared flux$^c$ & $(8.6 \pm 0.5)\times10^{-12}$\\ \hline \end{tabular} \end{center} $^a$ All H$\alpha$ fluxes are based on the sum of fluxes from clusters 5 and 11 from \citet{calzetti15}.\\ $^b$ These data include foreground dust extinction corrections but no corrections for intrinsic dust extinction.\\ $^c$ Infrared fluxes are based on the globally-integrated measurements, which primarily originate from a pointlike source; see Section~\ref{s_othersfr_ir} for details.\\ \end{table} While the two central H$\alpha$ and Pa$\alpha$ sources are the brightest optical and near-infrared recombination line sources seen in this galaxy, multiple other fainter sources are also detected by \citet{alonsoherrero04}, \citet{harris04}, and \citet{calzetti15}, and diffuse extended emission is also present in the H$\alpha$ image from \citet{dale09}. While a comparison could be made between the global H$\alpha$ and H30$\alpha$ emission, the inclusion of these fainter sources would cause some inaccuracies. We therefore compared the SFR from the H30$\alpha$ data to the SFR from the near-infrared and optical line data from \citet{calzetti15}. As indicated in Section~\ref{s_images}, it is possible that the two central clusters identified by \citet{alonsoherrero04} and \citet{calzetti15} are actually parts of one larger photoionization region, so we will use the sum of the optical and near-infrared fluxes from these two sources in our analysis.
\citet{calzetti15} published H$\alpha$, Pa$\alpha$, and Pa$\beta$ fluxes that have been corrected for foreground dust extinction but not for intrinsic dust extinction within the galaxy. Using these data, they then derive the intrinsic dust extinction for the sources as well as extinction-corrected H$\alpha$ fluxes. The corrections for cluster 5 assume a simple foreground dust screen, but cluster 11 is treated as a case where the line emission is attenuated both by a foreground dust screen and by dust intermixed with the stars. The sum of these corrected H$\alpha$ fluxes, which are listed in Table~\ref{t_ha}, is used for computing one version of SFRs. We also tested SFRs calculated by correcting H$\alpha$ for dust attenuation by adding together the observed H$\alpha$ emission (representing the unobscured emission) and infrared emission multiplied by a constant (representing the obscured H$\alpha$ emission). Such constants have been derived for multiple individual infrared bands, including the {\it Spitzer} 8 and 24~$\mu$m bands and the WISE 12 and 22~$\mu$m bands, as well as the total infrared flux \citep[e.g. ][]{calzetti05, calzetti07, kennicutt07, zhu08, kennicutt09, lee13}. To be concise, we restrict our analysis to the WISE 22~$\mu$m and total infrared flux. The relations we used for these corrections are \begin{equation} f(\mbox{H}\alpha)_{corr} = f(\mbox{H}\alpha)_{obs} + 0.020 \nu f_\nu(22 \mu\mbox{m}) \label{e_sfr_ha_22} \end{equation} and \begin{equation} f(\mbox{H}\alpha)_{corr} = f(\mbox{H}\alpha)_{obs} + 0.0024 f(\mbox{TIR}), \label{e_sfr_ha_tir} \end{equation} which are from \citet{kennicutt09}. Based on the dispersions in the data used to derive these relations, the uncertainties in the results from Equations~\ref{e_sfr_ha_22} and \ref{e_sfr_ha_tir} are 0.12 dex (32\%) and 0.089 dex (23\%), respectively.
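Both corrections amount to adding an infrared term to the observed H$\alpha$ flux, with all quantities in erg~s$^{-1}$~cm$^{-2}$ (1~Jy~Hz $= 10^{-23}$~erg~s$^{-1}$~cm$^{-2}$). A short sketch using the observed H$\alpha$ flux and the infrared measurements from Section~\ref{s_othersfr_ir}:

```python
JYHZ = 1e-23                          # 1 Jy Hz in erg s^-1 cm^-2

f_ha_obs = 8.9e-13                    # observed H-alpha flux (erg s^-1 cm^-2)
nu_f_22 = (2.99792458e14 / 22.0) * 12.4 * JYHZ   # nu*f_nu at 22 um
f_tir = 3.2e14 * JYHZ                             # total infrared flux

# Unobscured H-alpha plus an infrared proxy for the obscured component
f_ha_corr_22 = f_ha_obs + 0.020 * nu_f_22    # ~3.5e-11 erg s^-1 cm^-2
f_ha_corr_tir = f_ha_obs + 0.0024 * f_tir    # ~8.6e-12 erg s^-1 cm^-2

# Converting the dex scatter of the relations to fractional uncertainty
scatter_22 = 10**0.12 - 1             # ~32 per cent
scatter_tir = 10**0.089 - 1           # ~23 per cent
```

These reproduce the corrected fluxes in Table~\ref{t_ha} to within rounding.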
The sum of the uncorrected H$\alpha$ fluxes for clusters 5 and 11 from \citet{calzetti15} as well as versions of these fluxes corrected using Equations \ref{e_sfr_ha_22} and \ref{e_sfr_ha_tir} are listed in Table~\ref{t_ha}. The H$\alpha$ fluxes can be converted to SFR using \begin{equation} \frac{\mbox{SFR}(\mbox{H}\alpha)}{\mbox{M}_\odot\mbox{ yr}^{-1}}=6.43\times10^8 \left[\frac{D}{\mbox{ Mpc}}\right]^{2} \left[\frac{f(\mbox{H}\alpha)}{\mbox{erg s}^{-1}\mbox{ cm}^{-2}}\right] \label{e_sfr_ha} \end{equation} from \citet{murphy11}. This result is based on models from {\sc Starburst99} using a Kroupa IMF, solar metallicities, and older but unspecified stellar evolution tracks. The assumed $T_e$ is 10000~K, which is close to the measured value in NGC~5253, so we will make no modifications to this conversion. The SFRs can be rescaled to correct for metallicity and stellar rotation effects in the same way that the SFR from the H30$\alpha$ was rescaled. The SFRs based on the three different extinction-corrected H$\alpha$ fluxes are listed in Table~\ref{t_sfr}. \subsubsection{Discussion} The extinction-corrected H$\alpha$ fluxes calculated by \citet{calzetti15} using Pa$\alpha$ and Pa$\beta$ line data yield SFRs that fall within 25\% of the SFRs from the H30$\alpha$ data. Given that the extinction corrections changed the H$\alpha$ fluxes by $\sim$20$\times$, that relatively complex dust geometries were used in calculating the corrections, and that the uncertainties in the extinction-corrected H$\alpha$ fluxes are relatively high, this match is reasonably good. However, the fact that the SFR from the H30$\alpha$ is higher would indicate that the method of correcting the H$\alpha$ flux could still be improved. We discussed in Section~\ref{s_images} the non-detection of a second H30$\alpha$ source corresponding to the secondary star forming region seen in optical and near-infrared bands, which is labelled as cluster 5 by \citet{calzetti15}. 
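The prefactor in the H$\alpha$ conversion folds the $4\pi D^2$ flux-to-luminosity factor into the \citet{murphy11} luminosity calibration SFR~$= 5.37\times10^{-42}\,L(\mbox{H}\alpha)$. A sketch of both the prefactor and the SFR for the \citet{calzetti15}-corrected flux from Table~\ref{t_ha}:

```python
import math

MPC_CM = 3.0857e24                    # 1 Mpc in cm

# Recast the luminosity calibration for a flux in erg s^-1 cm^-2
# and a distance in Mpc: prefactor = 5.37e-42 * 4 pi (1 Mpc in cm)^2
prefactor = 5.37e-42 * 4.0 * math.pi * MPC_CM**2   # ~6.43e8

D = 3.15                              # Mpc
f_ha_corr = 1.7e-11                   # extinction-corrected H-alpha flux
sfr_ha = prefactor * D**2 * f_ha_corr # ~0.11 M_sun/yr
```

The result sits roughly 25 per cent below the H30$\alpha$-based SFR, as discussed above.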
One possible explanation for this is that the optical/near-infrared sources lie at the ends of a much larger star forming complex that is heavily obscured in its centre. If this is the case, then the ratio of the area of the optical/near-infrared sources to the area of the much larger star forming complex should be roughly similar to the ratio of the SFR from the extinction-corrected H$\alpha$ emission to the SFR from the H30$\alpha$ emission. \citet{calzetti15} do not list sizes for their sources, but they do state that they use measurement apertures with diameters of 0.25~arcsec, which we can treat as an upper limit on the source sizes. The hypothetical larger star forming complex would need to be 0.71~arcsec in size to encompass both optical/near-infrared regions. Based on the ratio of the area of the two smaller optical/near-infrared sources to the area of the hypothetical larger complex (which we can assume is spherical), the optical/near-infrared regions should yield a SFR that is only 25\% of that from the H30$\alpha$ emission, not the 75\% that we measure. This indicates that it is unlikely that both optical/near-infrared sources are part of one larger obscured complex centred on the H30$\alpha$ source. The other possible reason for the non-detection of a second H30$\alpha$ source corresponding to cluster 5 from \citet{calzetti15} is that the difference in brightness between it and the brighter source (cluster 11) is higher than the factor of $\sim$6$\times$ derived from the analysis of the optical and near-infrared data. The results here indicate that the extinction correction for the H$\alpha$ data applied by \citet{calzetti15} may indeed be too low, so it is possible that the difference in the extinction-corrected brightness between the two sources is much greater than implied by their analysis. In such a scenario, the second source would be too faint in recombination line emission to detect in the ALMA data at the 3$\sigma$ level.
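The $\sim$25 per cent figure above is just a ratio of projected areas. Under the stated assumptions (each source at most 0.25~arcsec across, the hypothetical complex 0.71~arcsec across):

```python
d_source = 0.25    # arcsec; upper-limit diameter of each optical/near-IR source
d_complex = 0.71   # arcsec; diameter needed to encompass both sources

# Combined projected area of the two small sources relative to the
# projected area of the larger complex (the pi/4 factors cancel)
area_ratio = 2 * d_source**2 / d_complex**2    # ~0.25
```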
The composite H$\alpha$ and 22~$\mu$m data yield a SFR that is the highest among the three based on the H$\alpha$ data, and it is also significantly higher than the SFR from the H30$\alpha$ flux. This is most likely a result of the unusually hot dust within NGC~5253, and the problem is similar to the direct conversion of 22~$\mu$m flux density to SFR discussed in Section~\ref{s_othersfr_ir}. The extinction correction in Equation~\ref{e_sfr_ha_22} was calibrated using spiral galaxies and, to some degree, relied on a linear relation between the amount of energy absorbed by dust and mid-infrared emission. Since that scaling relation does not apply well to NGC~5253, Equation~\ref{e_sfr_ha_22} overcorrected the extinction. In contrast, the composite H$\alpha$ and total infrared flux yielded a SFR that was too low compared to the H30$\alpha$ value or most other values listed in Table~\ref{t_sfr}. Again, this is probably because the relation was calibrated using spiral galaxies. The emission from cold, diffuse dust heated by evolved stars seems to be a relatively small fraction of the total infrared emission in NGC~5253 in comparison to what is found in spiral galaxies. If Equation~\ref{e_sfr_ha_tir} is effectively calibrated to account for this cold dust, then it may yield an underestimate of the extinction correction in galaxies like NGC~5253. Also note that any attempt to correct the total infrared flux for extended emission outside the central starburst will simply make the SFR from Equation~\ref{e_sfr_ha_tir} more discrepant compared to values calculated from the H30$\alpha$ data or most other bands. The relative sizes of the uncorrected H$\alpha$ and infrared components in Equations~\ref{e_sfr_ha_22} and \ref{e_sfr_ha_tir} provide some additional insights into these extinction corrections. In both cases, the infrared component is dominant.
In Equation~\ref{e_sfr_ha_22}, the observed H$\alpha$ flux is equivalent to 2.6\% of the 22~$\mu$m term, so using Equations~\ref{e_sfr_ha_22} and \ref{e_sfr_ha} to calculate a SFR is effectively an indirect conversion of the 22~$\mu$m flux density to SFR. Meanwhile, the uncorrected H$\alpha$ flux in Equation~\ref{e_sfr_ha_tir} is equal to 11.5\% of the total infrared flux term, so using the resulting corrected H$\alpha$ flux in Equation~\ref{e_sfr_ha} is not quite as much like indirectly converting the total infrared flux to SFR. Related to this, \citet{kennicutt12} and references therein describe how infrared flux could be used to correct ultraviolet flux densities in the same way as they correct H$\alpha$ fluxes. However, when we investigated using such equations with the ultraviolet flux densities measured for the central two star clusters, we found that the infrared terms were $>$100$\times$ higher than the ultraviolet terms, which meant that any SFR based on combining ultraviolet and infrared data would effectively be independent of the ultraviolet measurements. \section{Conclusions} \label{s_conclu} To summarize, we have used ALMA observations of H30$\alpha$ emission from NGC~5253, a low-metallicity blue compact dwarf galaxy, in a comparison with different methods of calculating SFR for the centre of this galaxy. We measure a $Q$ of (1.9$\pm$0.3)$\times$$10^{52}$~s$^{-1}$, which is based on using a distance of 3.15~Mpc. Accounting for the low metallicity of the galaxy and stellar rotation, we obtain a SFR of 0.087$\pm$0.013 M$_\odot$ yr$^{-1}$; with only the correction for metallicity, we obtain 0.102$\pm$0.015 M$_\odot$ yr$^{-1}$. In our analysis, we found three SFR measurements that best matched the H30$\alpha$ measurements and that seemed to be the least affected by the types of systematic effects that we could identify as causing problems with other bands.
The total infrared flux (calculated by integrating the SED between 12 and 500~$\mu$m) yielded a SFR that was very similar to the value from the H30$\alpha$ data. In other dusty low-metallicity starbursts like NGC~5253, the total infrared flux may yield the most accurate star formation rates as long as care is taken to account for the unusual shapes of the SEDs for these galaxies. The 70~$\mu$m band may be the best monochromatic infrared star formation tracer available, as it yielded a SFR that was closest to the SFRs from both the H30$\alpha$ and total infrared fluxes. However, given how the conversion of 70~$\mu$m flux to SFR depends on a linear relationship between emission in this band and the total infrared flux, it is not clear how reliable 70~$\mu$m emission would be for measuring SFR in other low metallicity galaxies where a relatively large but potentially variable fraction of the dust emission is at mid-infrared wavelengths. The SFR from the H$\alpha$ flux that was extinction corrected by \citet{calzetti15} using Pa$\alpha$ and Pa$\beta$ data was 25\% lower than but consistent with the SFR from the H30$\alpha$ data. However, as we noted in Section~\ref{s_images}, it is possible that some parts of the central star forming complex are completely obscured in the optical and near-infrared observations, which potentially illustrates the issues with examining star formation within dusty starbursts, even at near-infrared wavelengths. Most other star formation tracers that we examined seemed to be affected by systematic effects that cause problems when calculating SFRs. Previously-published versions of the SFR based on millimetre and radio data yielded star formation rates with a lot of scatter relative to each other and relative to the SFR from the H30$\alpha$ data. At least some of the SFRs from radio continuum measurements are affected by incorrect assumptions about the nature of the emission sources in these bands.
A new analysis of broadband archival radio and millimetre data is needed to produce better models of the SED and to convert emission from these bands into SFRs more accurately. The 22~$\mu$m flux density by itself and the combined H$\alpha$ and 22~$\mu$m metric produced SFRs that were exceptionally high compared to the value from the H30$\alpha$ data and compared to SFRs from other tracers. The main problem is that the dust in this low metallicity environment is thinner than in solar metallicity objects, so the dust that is present is exposed to a brighter and hotter radiation field. Consequently, the total dust emission is stronger in low metallicity environments, and the mid-infrared emission is stronger relative to total infrared emission. Based on these results, we strongly recommend not using any star formation tracer based on mid-infrared data for low metallicity galaxies. Infrared emission at 160~$\mu$m yields a very low SFR. This is probably because the dust temperatures within NGC~5253 are relatively high and because the conversion of emission in this band to SFR accounts for the presence of diffuse dust heated by older stars that is present in spiral galaxies but not in NGC~5253 and similar dwarf galaxies. The composite H$\alpha$ and total infrared star formation metric yielded a SFR that was too low. Again, this could be because the correction is calibrated using spiral galaxies that contain relatively higher fractions of diffuse, cold dust than NGC~5253. If such a metric were to be used for measuring star formation in low metallicity systems, it would need to be recalibrated. This analysis represents the first results from using ALMA observations of hydrogen millimetre recombination line emission to test SFR metrics based on optical data, and it is also one of the first comparisons of SFR metrics for a low metallicity galaxy that has involved ALMA data.
A more thorough analysis of star formation tracers observed in low metallicity galaxies is needed to understand whether the results obtained for NGC~5253 are generally applicable to similar objects. Further analysis of more ALMA recombination line observations combined with this and previous works on this subject \citep{bendo15b, bendo16} will allow us to create a broader picture of the reliability of various tracers of star formation in both nearby starbursts and more distant galaxies. \section*{Acknowledgments} We thank the reviewer for the constructive criticisms of this paper. GJB and GAF acknowledge support from STFC Grant ST/P000827/1. KN acknowledges support from JSPS KAKENHI Grant Number 15K05035. CD acknowledges funding from an ERC Starting Consolidator Grant (no.~307209) under FP7 and an STFC Consolidated Grant (ST/L000768/1). This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00210.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.
\section{Introduction} Circumventing exposure to an outbreak can require economic foresight and ambition. For managers of agricultural production facilities, biosecurity investment decisions can be challenging, as biosecurity practices are costly and the return on investment can be difficult to quantify (e.g., efficacy uncertainty). At what perceived levels of risk will individuals invest resources to mitigate their chances of becoming infected? For many, this decision-making process is multi-factored and dynamic, where many factors can change over time depending on past experiences \cite{tversky1981framing,caruso2008wrinkle}. In \cite{mankad2016psychological}, the psychosocial factors motivating biosecurity adoption with respect to risk attitudes, adherence, and resistance to behavioral change initiatives were shown to be an urgent topic for continued investigation and policy consideration. Studying risk mitigation behavioral strategies can help us understand how information and perceived risk affect the decision-making process. Livestock epidemics can cause substantial economic damage to agricultural industries; \cite{paarlberg2014updated} estimates a net annual cost of \$900 million to \$1.8 billion from PEDv outbreaks. Particularly transmissible pathogens, like Porcine Epidemic Diarrhea virus (PEDv), can spread quickly throughout supply chain networks resulting in high mortality rates \cite{schulz2015assessment,lee2015porcine} and can often reoccur even after an outbreak has been seemingly eradicated \cite{lee2014reemergence}. Failure to mitigate an outbreak as well as corresponding ``reemergence events'' can have serious fiscal consequences \cite{schulz2015assessment}. Biosecurity \cite{Bowman2015} can be defined as a set of tools, practices, disease prevention measures and sanitary regulations designed to attenuate the spread of disease. For example, truck washes have been shown to slow the spread of PEDv \cite{lowe2014role}.
After animal feed was identified as a vector for PEDv \cite{dee2014evaluation,pasick2014investigation}, sanitation of feed using thermal or chemical treatments helps prevent contamination \cite{huss2018physical}. In \cite{kim2017evaluation}, the efficacy of varying degrees of biosecurity for mitigating disease transmission was tested by comparing ``Low'', ``Medium'' and ``High'' biosecurity experimental groups, delineated by the amount of practices implemented. ``Medium'' and ``High'' levels of biosecurity were shown to outperform ``Low'' biosecurity treatments in attenuating disease spread. In realistic animal supply chain network conditions, investment in biosecurity cannot guarantee safety from an outbreak. Although biosecurity has become an expected norm in crop and animal based agriculture, full participation in these practices is not entirely widespread \cite{kristensen2011danish,mankad2016psychological}. Yet, increased biosecurity reduces the likelihood of disease transmission \cite{kim2017evaluation}. Each producer's risk of infection is also largely dependent upon their network \cite{rautureau2012structural, machado2019identifying, wiltshire2018using,wiltshire2019network}. Hence, a structural equilibrium exists for optimizing welfare by investing in biosecurity for outbreak mitigation. Our simulations focus on studying participants' aversion to risk in response to perceived economic danger associated with infectious diseases. When there is a disease outbreak, one natural strategy is to ``wait and see'' how the disease spreads before choosing to allocate resources for protection. These scenarios have been studied with respect to flu vaccines \cite{bhattacharyya2011wait,machado2019identifying}. ``Wait and See'' strategies were shown to exacerbate outbreaks if the vaccination rate among the population was low. Risk averse individuals, who might vaccinate early, could be perceived as protective shields for those going unvaccinated who are acting as free-riders.
This \emph{opportunistic} strategy weighs the perceived opportunity costs of vaccination versus the risk of infection. This behavior may benefit one's own facility, but can increase the chances for an epidemic along with more extreme economic consequences across the supply chain \cite{bucini2019risk}. Computational social science focuses on leveraging data to investigate questions regarding human decision making, behaviors, societies and systems \cite{lazer2009computational,doi:10.1002/wics.95, Conte2012, mann2016core, bond201261}. Many have examined decision-making using a behavioral economic lens. For example, the role of risk preferences in decision-making has been widely studied using survey methods such as multiple price lotteries \cite{holt2002risk,chakravarthy1986measuring}. One method for collecting decision-making data is through the use of serious games, which we refer to as digital field experiments. Digital field experiments that use performance-based incentives can increase salience and effort in the decision making process \cite{CheongWildfire,camerer1999effects}. Real monetary payments that scale with the payouts in the experimental choices provide an incentive for participants to act according to their true risk preferences. Incentive-compatible experiments are the gold standard for understanding real-world relevant behavior \cite{azrieli2018incentives,brookshire1987measuring}. These experiments can be hosted online to gather social interactions and behaviors from wide audiences \cite{parigi2017online, buhrmester2011amazon}. Digital field experiments have also been used to assess the efficacy of digital decision support systems. In regards to emergency response, digital field experiments have been shown to bolster preparedness and reduce economic damage \cite{mendonca2006designing}. Interactive simulations have been useful for studying human behavior in the face of an epidemic \cite{merrill2019message,lofgren2007untapped}. 
Gaming simulations have also been applied for business management decision assessment \cite{thavikulwat1997real} as well as conceptual training of entrepreneurial strategy \cite{thavikulwat1995computer}. In our experimental process, we intend to extract and compare behavioral strategies that emerge over the course of the simulation. We developed a \emph{biosecurity adoption} metric, based upon players' decisions to allocate resources to reduce their risk of infection. Here we define behavioral scores using the infection rate as our experimental variable. We calculated each participant's biosecurity adoption rating by tallying their level of protection (None, Low, Medium, or High) across each simulated year, consisting of 6 ``decision months''. Decisions to adopt biosecurity earlier in the year were implicitly weighted heavier than investing at the end of the year. Participants were informed of the probability of infection across several high risk and low risk scenarios, in which we also varied the amount of biosecurity in their simulated network. This allowed us to compare each player's risk aversion score for both low and high probabilities of infection as a way of measuring their behavioral response with respect to their perceived risk. Aggregated biosecurity adoption ratings were then categorized using clustering algorithms. Clustering algorithms \cite{xu2005survey} are unsupervised learning methods for grouping multi-dimensional data. These are useful tools for data exploration and can help us separate thematic behaviors across sampled participants' game-plays. Clustering allows us to group participants into distinct behavioral categories from their decisions throughout the simulation. Using these analyses, we can design experiments that can automatically group participants by their decisions, predict their behaviors, and even adapt the simulation based upon their group.
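The adoption rating described above can be illustrated with a toy tally. The numeric encoding of the protection levels and the example sequences below are hypothetical, not the study's actual scoring scheme; the point is that a player who waits accumulates less protection across the 6 decision months than one who invests early, which is how early adoption ends up implicitly weighted heavier.

```python
# Hypothetical encoding of the four protection levels
LEVELS = {"None": 0, "Low": 1, "Medium": 2, "High": 3}

def adoption_rating(monthly_choices):
    """Tally the protection level held in each of the 6 decision months
    of a simulated year; adopting earlier contributes to more months."""
    return sum(LEVELS[choice] for choice in monthly_choices)

early_adopter = adoption_rating(["High"] * 6)              # -> 18
wait_and_see = adoption_rating(["None", "None", "None",
                                "Low", "Medium", "High"])  # -> 6
```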
Several clustering algorithms have been validated for financial risk analysis \cite{kou2014evaluation} and have been applied to examine behavior in experimental games \cite{merrill2019decision}. Comparing player strategies using a clustering framework can help identify appropriate audiences for tailoring interventions or personalized messaging. We selected the K-Means algorithm \cite{hartigan1979algorithm} to cluster the biosecurity adoption ratings and categorize observed behaviors. Mitigation strategies, recorded from participants’ game-plays, spanned from minimally protective risk-tolerant behaviors to more cautious risk averse approaches. Here we present a framework for digitally simulating risk scenarios to identify behavioral strategies from sampled participants. These digital field experiments allow us to test the effects of various informational stimuli and their influence on the decision-making process. This can also help us understand in general how perceptions of risk and attitudes may differ across a sampled population. Our sampled audiences vary from industry professionals and stakeholders to the general public. Our overarching goal is to simulate complex decision mechanisms using digital representations of disease outbreak situations. We analyzed participant choices to study behavioral strategies employed under different scenarios. To reach our goal, we designed two experiments to quantify behavioral risk profiles associated with a biosecurity investment response to outbreak scenarios. Experiment One focused on identifying the most prominent behavioral strategies associated with risk mitigation in response to perceived economic danger. We designed a digital field experiment \cite{merrill2019decision} simulating the budgeting of a farm’s biosecurity over the course of a year during an outbreak. 
We recruited individuals to participate in our digital field experiment using Amazon Mechanical Turk (MTurk), an online survey marketplace recently applied for behavioral experiments \cite{rand2012promise,mason2012conducting}. In addition, specific treatments about visibility of infection and biosecurity in the producer network were implemented to test the following hypotheses: \\ \noindent (H1): More visibility in the number of infected sites increases risk aversion. \\ \noindent (H2): More visibility of the amount of biosecurity in the system increases risk taking behaviors. \\ In Experiment Two, we tested for differences between our audience of primary interest, industry specialists, and the sampled population from Amazon MTurk. Here we sought to delineate behavioral differences between an audience with extensive knowledge of the swine industry and those without relevant industry experience (i.e., online recruits). We hypothesize that industry professionals will internalize \cite{sellnow2017idea} and empathize with diseased animals and/or past experiences during outbreak situations and thus behave differently than an audience without industry experience: \\ \noindent (H3): Industry professionals' risk mitigation behaviors will differ from those of an audience without industry experience. \\ To test this hypothesis, we rented a booth at the 2018 World Pork Expo in order to recruit industry professionals and stakeholders and compare their decision-making strategies to an additional sample of online recruited participants without relevant industry experience. By examining behavioral differences between audiences, we may be able to determine how to leverage behaviors from an easily accessible population to gain insights that apply to a group that is logistically challenging to recruit. \section{Methods} Our digital field experiments simulated the management of a porcine facility's biosecurity in the face of a contagious disease.
Our simulation was modeled after \cite{merrill2019decision}. Our updated version follows a similar user interface (UI) and mechanics, with the added capability for online deployment. Procedures approved by the University of Vermont Institutional Review Board were followed for experiments using human participants (University of Vermont IRB \# CHRBSS-16-232-IRB). Instructional slideshows were presented to participants before they began play, and were identical for both experiments. The slideshows, describing the gaming mechanics and interface, are given in Appendix A. We conducted two experiments. \subsection{Experiment One} We recruited 1000 participants using Amazon Mechanical Turk, an online survey marketplace \cite{paolacci2010running}. The digital field experiment application was built in the Unity development platform and hosted with WebGL \cite{parisi2012webgl, buhrmester2011amazon}. In Figure \ref{fig:game}, the simulation interface conveys information about each neighboring facility's biosecurity level as well as information about the facilities’ disease infection status. We designed our digital field experiment to simulate the management of swine production facilities. Players made management decisions to adapt their facility's biosecurity during several outbreak scenarios. We hosted our simulation online. Each participant played 32 rounds, with each round consisting of up to 6 decisions. Each decision provided the opportunity to invest simulation funds in biosecurity. One investment choice was allotted per month at a cost of \$1,000 simulation dollars, and up to three investments could be made per simulated year: upgrading their status from ``None to Low'', ``Low to Medium'' or ``Medium to High''. The player was never economically impeded from investing in biosecurity (i.e., this option was always available, independent of their total score). If the player-owned facility became infected, the round would end and \$25,000 was subtracted from their total score.
At the end of the simulation, players were compensated at a rate of \$50,000 in game currency to \$1 USD. \begin{figure}[!htbp]\begin{center} \fbox{\includegraphics[width = .99\linewidth]{fig_PA.png}} \caption{ User interface. The red arrow marks an infected facility (red dot). The player's facility is enclosed by a triangle. Each round spans 6 decision months, where the player can remain at the current level of biosecurity or invest in increased biosecurity from None to Low, Medium, and High. An indicator for the infection status is presented to the participant.} \label{fig:game} \end{center} \end{figure} The network of facilities in each round includes one player-controlled facility and 49 simulation-controlled facilities. Each computer-controlled facility in the simulation was assigned a biosecurity value. The probability of a contaminated facility infecting other premises was scaled with distance, facility-specific biosecurity and a predefined infection rate (infection rates: Low (0.08) or High (0.3)). At the close of each decision month, transmission to each facility is determined by comparing its infection probability to a pseudorandom number drawn from a uniform distribution. Each round, consisting of up to six decision months, begins with a single infection. At the end of each month, information about the infection's progression is presented to the player along with the opportunity to invest experimental funds to increase their facility's biosecurity. Increasing biosecurity protects the player's facility by dampening the infection's transmission probability. The probability of infection must be inferred by the player using their information regarding the number of infections in the system, the amount of biosecurity implemented at neighboring facilities, and the infection rate, which is clearly displayed on the user interface. Player-installed biosecurity does not depreciate over time, nor are participants given the option to decrease their biosecurity level.
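The paper does not give the exact functional form of the transmission probability, only that it scales with the infection rate, distance, and the target facility's biosecurity. A minimal sketch of the monthly infection check, with an assumed inverse-distance scaling and a hypothetical per-level damping factor, might look like:

```python
import random

def infection_prob(base_rate, distance, biosecurity_level):
    # Hypothetical form: the paper states only that transmission scales
    # with the infection rate (0.08 Low / 0.3 High), distance, and the
    # facility's biosecurity; the damping and distance terms are assumed.
    damping = 1.0 / (1.0 + biosecurity_level)
    return base_rate * damping / max(distance, 1.0)

def month_step(infected_distances, biosecurity_level, base_rate,
               rng=random.random):
    # One uniform pseudorandom draw per infected neighbor at the close of
    # each decision month; any success infects the player's facility.
    return any(rng() < infection_prob(base_rate, d, biosecurity_level)
               for d in infected_distances)
```

Under this assumed form, raising the biosecurity level from 0 (None) to 3 (High) quarters the per-neighbor transmission probability, mirroring the dampening role biosecurity plays in the simulation.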
If the player’s facility becomes infected after a decision month, the player is alerted, \$25,000 experimental dollars are subtracted from their score, and the round ends immediately. The simulation then progresses to the next year with a new initial infection and spatial distribution of producers, and the player-owned facility’s biosecurity level is reset to ``None''. Our simulation tested the effects of concealing information about infection status as well as the amount of biosecurity in the system at each of the computer-generated production facilities. In this way, we injected information uncertainty into the decision-making process, exposing each participant to a variety of risk scenarios. This allows us to test for differences in behavior given more or less information regarding infection status and/or the amount of biosecurity present at neighboring farms. Participants played multiple rounds with treatments that varied the infection dynamics as well as the visibility of neighboring facilities’ infection status and biosecurity level. In one quarter of all rounds, the number of infections and the biosecurity status of each facility were fully visible to the participant. The remaining 75\% injected uncertainty into the decision-making process by concealing information regarding either the infection and/or biosecurity statuses across the production network. The neighboring facilities were all present on the user interface; however, either the infection indicators and/or biosecurity statuses remained `gray', indicating unknown status. These treatments were incorporated to consider how the presence of uncertainty and increased risk of infection can affect players’ decisions. Each treatment was played twice, once with biosecurity values drawn from a Low biosecurity distribution and once with values drawn from a High biosecurity distribution.
The Low biosecurity distribution generated biosecurity values for each facility drawn with 60\% chance for `None' rated at 0, a 32\% chance of `Low' (1), a 6\% chance at `Medium' (2), and a 2\% chance at `High' (3). The High distribution randomly pulled biosecurity values with a 60\% chance for `High' (3), a 32\% chance of `Medium' (2), a 6\% chance at `Low' (1), and a 2\% chance at `None' (0). Our treatments consisted of all permutations of Low, High biosecurity distributions against uncertainty in both infection information and amount of biosecurity in the system (i.e., 25\% full information, 25\% biosecurity obscured, 25\% infection obscured, 25\% no infection or biosecurity information). This accounted for 182,124 biosecurity investment decisions collected from the 1,000 participants. Our clustering analysis grouped each of these treatment scenarios in order to compare the overall comparative risk associated with each player’s choices. We then grouped decision data during high and low information obscurity to test our behavioral hypotheses (H1,H2). The particular effects of these treatments on the decision making process were further explored in \cite{merrill2019decision}. Each decision has an associated risk, based upon the amount of biosecurity implemented at the player’s facility and severity of the infection rate. The infection spreads to more facilities per month, each of which can infect the player's facility. Biosecurity investments reduce infection rates for the entire round, hence, earlier adoption of biosecurity leads to reduced risk throughout the six month round. Players can invest their simulated earnings to reduce their risk of infection, or take a chance with a lower disease protection and a possibly higher end-round payout. Each participant’s decisions were assigned biosecurity adoption ratings depending on the amount of biosecurity they implemented during the simulation. 
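The Low and High facility-level biosecurity distributions stated above can be sampled directly. A sketch (the helper name and the use of `random.choices` are ours, not the paper's):

```python
import random

# Stated level probabilities (0 = None, 1 = Low, 2 = Medium, 3 = High).
LOW_DIST  = {0: 0.60, 1: 0.32, 2: 0.06, 3: 0.02}  # mostly unprotected
HIGH_DIST = {0: 0.02, 1: 0.06, 2: 0.32, 3: 0.60}  # mostly protected

def sample_biosecurity(dist, n=49, seed=None):
    # Draw biosecurity levels for the 49 simulation-controlled facilities.
    rng = random.Random(seed)
    levels, weights = zip(*sorted(dist.items()))
    return rng.choices(levels, weights=weights, k=n)
```

Drawing once per round from the relevant distribution reproduces the two treatment environments: a network that is mostly unprotected versus one that is mostly well-protected.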
More risk averse strategies choose to increase biosecurity earlier within the simulated year, while risk tolerant strategies allocate fewer experimental dollars to biosecurity in a gamble for a higher payout. Every individual, $i$, was assigned a biosecurity adoption rating, $R_i$, computed using decisions, $d \in D_i$, from each simulated month. This rating is calculated by tallying the player facility’s level of biosecurity, $b_d \in \{0 = \text{``None''}, 1 = \text{``Low''}, 2 = \text{``Medium''}, 3 = \text{``High''}\}$, across each simulated year and then normalizing by the total number of player decisions, $\lvert D_i \rvert$: \begin{equation} R_i = \dfrac{1}{\lvert D_i \rvert} \mathlarger{\mathlarger{\mathlarger{\sum}}}_{d \in D_i} b_d \label{eq:risk} \end{equation} for each decision, $d$, of the $i^{th}$ player. The biosecurity adoption rating, $R_i$, increases as the player invests in more biosecurity, which decreases their risk of infection. Investments made in earlier months carry more weight because early investments increase a facility’s protection for the remainder of the round (i.e., each subsequent decision month). The most risk averse strategy is characterized by those participants that consistently invested substantial funds into biosecurity during the early months of each round. We clustered these strategies using a K-means clustering algorithm \cite{hartigan1979algorithm} implemented in the Python programming language \cite{van2011python}. Graphics were created using matplotlib \cite{Hunter:2007}. Here, the clustering coefficient, $K$, is the number of unique clusters assumed for each analysis. We chose $K=3$, using the elbow method (Figure \ref{fig:elbow}), as it optimizes the sum of the squared errors across cluster centers \cite{ng2012clustering,kodinariya2013review} and produces the most pronounced scheme of observed behavior.
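Equation (\ref{eq:risk}) and the assignment of a player to their nearest cluster center can be sketched in a few lines. The decision sequence and centroids below are illustrative examples, not the fitted values:

```python
def adoption_rating(decisions):
    # Eq. (1): mean biosecurity level (0=None .. 3=High) over a player's
    # decision months; early investment raises every later month's level,
    # so it implicitly carries more weight in the average.
    return sum(decisions) / len(decisions)

def nearest_centroid(point, centroids):
    # Assign a 2-D rating (R_low, R_high) to the closest K-means center,
    # as when re-categorizing players under each visibility treatment.
    def sq_dist(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    return min(range(len(centroids)),
               key=lambda k: sq_dist(point, centroids[k]))

# Illustrative: a player holding None for two months, then climbing to High,
# is recorded at monthly levels [0, 0, 1, 2, 3, 3], giving R_i = 1.5.
rating = adoption_rating([0, 0, 1, 2, 3, 3])
```

A player who buys all three upgrades in the first three months would instead log levels [1, 2, 3, 3, 3, 3] and score 2.5, the maximum attainable rating, which is why early adopters anchor the risk averse end of the scale.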
\begin{figure}[!htbp]\begin{center} \includegraphics[width = .95\linewidth]{BendKnee_K1-10_All-adoption-Full.png} \caption{ For each clustering coefficient, $K$, the sum of the squared errors is plotted using participants' Biosecurity Adoption Ratings, $\vec{R_i}$. } \label{fig:elbow} \end{center} \end{figure} The perceived risk attitudes of each recruited participant were calculated using each of their investment decisions. Each player's two-dimensional biosecurity adoption rating was calculated from investment decisions aggregated from two datasets: (1) decisions made in rounds with Low infection rates, and (2) decisions made in rounds with High infection rates, $\vec{R_i} = (R_{i(Low)},R_{i(High)})$. We tested the effects of restricting information regarding the presence of infection as well as the visibility of biosecurity statuses of neighboring facilities. This injected uncertainty into the decision-making process. Using the biosecurity adoption ratings as a measure of risk, we tested for a significant difference in the distributions using one-tailed Mann-Whitney U tests \cite{mann1947test}. This non-parametric test was chosen since each distribution of biosecurity adoption ratings failed D'Agostino and Pearson's test for normality \cite{d1971omnibus,d1973tests}. \subsection{Experiment Two} To test for a difference in strategy by audience, we directly compared decisions of two groups: Amazon Mechanical Turk and industry professional participants. For this analysis, we used a Biosecurity Investment version of our digital field experiment that featured a single infection rate across 32 rounds (i.e., simulated infection scenarios). We hosted a booth at the 2018 World Pork Expo \cite{PorkExpo} and recruited 50 participants with knowledge of the swine industry including business owners, production managers, laborers, animal health experts and enthusiasts.
In contrast, this simulation featured a constant infection rate (0.15), an intermediate value between the low (0.08) and high (0.3) infection rates tested in Experiment One. This allowed us to collect sufficient observations for a statistical comparison, while decreasing the total participation time, which aided in our enrollment. We also recruited 50 Amazon Mechanical Turk participants to play this digital field experiment. Each audience type played 32 rounds with 6 decision months per round, thus providing 9,600 decisions per audience type. Decisions by each group were compared using a two sample Kolmogorov-Smirnov (KS) test \cite{massey1951kolmogorov}. At the end of the simulation, MTurk recruits were compensated at a rate of \$23,500 in simulation currency to \$1 USD. We paid a higher rate for participants at the World Pork Expo in order to bolster participation: \$12,000 simulation dollars to \$1 USD. \section{Results} \subsection{Experiment One} In Figure \ref{fig:cluster}, we clustered each participant's biosecurity adoption ratings using K-means clustering with $K=3$. The circles represent each player’s two-dimensional risk attitude rating, $\vec{R_i} = (R_{i(Low)},R_{i(High)})$. The diamonds portray the center of each cluster. The x axis scores their decisions under the low infection rate (0.08), while the y axis represents scores from the high infection rate (0.3). Near the origin, $(0,0)$, players adopted very little biosecurity during the experiment. In the upper right corner, players adopted the most biosecurity for both infection rates. Each point's position shows how a player's decisions differed between the two treatments. Points close to the main diagonal (dotted black line along $y=x$) indicate players who do not modify their behavior in response to the game context (e.g., always or never investing in increased biosecurity), while points off the main diagonal show players who vary their behavior in response to in-game situations.
The bottom right quadrant is empty, as these scores represent nonsensical behavior (i.e., only adopting high biosecurity on low risk rounds and low biosecurity on high risk rounds). \vspace{1cm} \begin{figure}[!htbp]\begin{center} \includegraphics[width = 1.01\linewidth]{RiskCluster_K3_All-adoption-Full.png} \caption{Participant biosecurity adoption ratings are clustered using the K-means algorithm with $K=3$. The circles represent each player’s two-dimensional risk attitude rating, $\vec{R_i} = (R_{i(Low)},R_{i(High)})$. The diamonds portray the center of each cluster. The x axis scores their decisions under the low infection rate (0.08), while the y axis represents scores from the high infection rate (0.3). Near the origin, $(0,0)$, players adopted very little biosecurity during their game-play. In the upper right corner, players adopted the most biosecurity for both infection rates. Points close to the main diagonal indicate players who do not modify their behavior in response to the game context, while points off the main diagonal show players who vary their behavior in response to simulated opportunities.} \label{fig:cluster} \end{center} \end{figure} \vspace{1cm} Cluster 1 (\textcolor{green}{ $\blacklozenge$}) made the most \emph{risk averse} biosecurity investment decisions. They adopted the most biosecurity, for both low and high infection rates, in comparison to the other clusters. Cluster 2 (\textcolor{red}{ $\blacklozenge$}) took the opposite approach, adopting the least amount of biosecurity in both dimensions. These \emph{risk tolerant} participants attempted to maximize their payouts using a minimal biosecurity investment strategy. Cluster 3 (\textcolor{Goldenrod}{ $\blacklozenge$}), the \emph{opportunists}, adopted more biosecurity with a high infection rate and little to no biosecurity with the low rate. Some cautious members of this group purchased more biosecurity than the risk averse group (Cluster 1) during highly contagious rounds.
This group of players is characterized by a balance between risky behavior when the probability of transmission was dampened and more conservative choices when presented with a higher risk of infection. They behave similarly to the risk tolerant during low infection rates, and appear more risk averse during highly infectious rounds. For reference, the optimal or \emph{risk neutral} strategies can be quantified from the end-of-round earnings ($E=\$15,000$), the probability of infection ($p_i$), the cost associated with becoming infected ($C_i = \$25,000$) and the cumulative cost of biosecurity ($b_c = \$1,000$ per level purchased). We ran several hundred simulated trials at each biosecurity level to estimate the corresponding probability of infection. The expected return ($V$) for each biosecurity adoption choice was calculated in experimental dollars as: $V = (E - b_c)(1 - p_i) - p_i C_i$. In Table \ref{table:earnings} we show the expected return for each biosecurity adoption choice with respect to infection rate. We see that a minimum biosecurity status of ``Low'' during Low infection rates and a ``High'' biosecurity status during High infection rates are the optimal choices, producing the highest expected returns. \begin{table}[!htbp]\begin{tabular}{|c|c|c|} \cline{1-3} & Expected Earnings & Expected Earnings \\ Biosecurity Level & Low Infection Rate ($p_i$) & High Infection Rate ($p_i$)\\ \hline \hline None & \$12,204.30 (7\%) & -\$1,729.22 (41.8\%) \\ \hline Low & \$12,357.89 (4.2\%) & -\$2,044.30 (41.1\%) \\ \hline Medium & \$11,400.00 (4.2\%) & \$400.00 (33.2\%)\\ \hline High & \$11,406.42 (1.6\%) & \$4,857.91 (19.3\%)\\ \hline \hline \end{tabular} \caption{ \textbf{Biosecurity Level Expected Returns:} Expected returns in experimental dollars for each level of biosecurity adopted. The probability of infection ($p_i$) is estimated using several hundred trials at each specified biosecurity level.
Recall, the probability of infection depends on the infection rate and the distance to each infected facility. } \label{table:earnings} \end{table} In Figure \ref{fig:decisionhists} we highlight the differences in these observed behaviors with histograms of each cluster’s decisions to invest in biosecurity as a function of decision month. Here, the \emph{risk tolerant} adopt little biosecurity for both low and high infection rates, while \emph{risk averse} players tend to frequently increase protection. The \emph{opportunists} mirror risk tolerant behavior when the infection rate is low and appear more risk averse when the infection rate is high. We can quantify this by computing the Kullback–Leibler (KL) divergence \cite{joyce2011kullback} between the monthly distributions of biosecurity investment decisions, \emph{[No Biosecurity, Low, Medium, High]}. Comparing Opportunists (OP) to Risk Tolerant (RT) players during low infection rates, we found: \hspace{2.5mm} $D_{KL}( \textnormal{OP } \vert \vert \textnormal{ RT}) = [0.0002, 0.052, 0.0482, 0.045] $ Similarly, comparing Opportunists (OP) to the Risk Averse (RA) group during high infection rates, we find: \hspace{2.5mm} $D_{KL}( \textnormal{OP } \vert \vert \textnormal{ RA}) = [0.0005, 0.04, 0.0351, 0.0186]$ \noindent These small divergences support our intuition regarding the similarities between these groups under these conditions. To test hypotheses (H1,H2), we first investigated how the risk cluster distributions, \{Risk Averse (RA), Risk Tolerant (RT), Opportunistic (O)\}, may change with respect to each set of information visibility treatments. We calculated each participant’s two-dimensional risk scores, $(R_{i(Low)},R_{i(High)})$, for each group of visibility treatments and then re-categorized each participant’s biosecurity adoption rating using the centroids defined from the full decision space (see Figure \ref{fig:cluster}).
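The expected returns in Table \ref{table:earnings} can be reproduced directly, assuming the biosecurity cost is cumulative (\$1,000 per purchased level):

```python
END_EARNINGS   = 15_000  # payout for finishing a round uninfected
INFECTION_COST = 25_000  # penalty when the player's facility is infected
COST_PER_LEVEL = 1_000   # assumed cumulative cost per biosecurity level

def expected_return(level, p_infect):
    # Expected experimental dollars for holding `level` (0-3) all round,
    # given an empirical end-of-round infection probability p_infect.
    payout = END_EARNINGS - COST_PER_LEVEL * level
    return payout * (1.0 - p_infect) - p_infect * INFECTION_COST

# With the rounded probabilities from the table: level None at p = 0.07
# gives roughly $12,200, close to the tabulated $12,204.30 (which used
# the unrounded empirical estimate of p).
```

The same calculation with level 3 and $p_i = 0.193$ yields roughly \$4,859, matching the tabulated \$4,857.91 up to the rounding of $p_i$, which supports the cumulative-cost reading of $b_c$.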
We then applied a KS test to compare the differences in behavioral groups between treatments. For infection visibility treatments (H1) the change in clustered distributions was not significant (KS $D = 0.045$, $p = 0.257$, two-tailed). However, we did find more risk averse behaviors during full infection visibility treatments (395 RA, 207 RT, 398 O) compared to low infection visibility (350 RA, 290 RT, 360 O). See Figure SI 1 for additional graphs showing the clustered risk scores per visibility treatment. For biosecurity information treatments (H2), we found a significant difference (KS $D=0.08$, $p < 0.005$, two-tailed) in the clustered distributions when comparing treatments with full visibility of neighboring biosecurity (322 RA, 290 RT, 388 O) to low biosecurity visibility (402 RA, 207 RT, 391 O). This lends support to (H2), as we see more risk taking behaviors with more visibility of biosecurity in the system. \onecolumngrid \vspace{1cm} \begin{figure}[!htbp]\begin{center} \includegraphics[width = .85\linewidth]{FeatureHists_Month-AllTreatments_All.png} \caption{ Histograms of the proportions of all decisions to remain with no biosecurity (``None'') or increase from None to Low, Medium, and finally High as a function of decision month. Biosecurity can only increase one level per month. Less biosecurity was implemented when the infection rate, $p_{inf}$, was Low ($=0.08$); the number of decisions to invest in biosecurity increased with higher infection rates. The Risk Tolerant cluster (left column) implements the least biosecurity, while the Risk Averse cluster (right column) invests the most biosecurity under both infection rates. After attaining a High biosecurity level, no more decisions can be logged for the simulated year, which is why most of the decisions from the risk averse cluster are completed by the third decision month.
The Opportunistic cluster (middle column) behaves like the risk averse group under high infection rate scenarios, and implements less biosecurity during low infection rates.} \label{fig:decisionhists} \end{center} \end{figure} \pagebreak \twocolumngrid In order to formally investigate hypotheses (H1) and (H2), we computed each participant’s aggregate biosecurity adoption rating across both the low and high infection rates for each set of information treatments. To test (H1), we examined whether more visibility in the number of infected sites increased risk aversion. To accomplish this, we measured each participant’s biosecurity adoption rating during 16 treatments in which the infection status of neighboring facilities was visible \{median=1.40, $\mu=1.40$, $\sigma=0.62$, min = 0, max = 2.50\}, versus 16 treatments with hidden infection statuses \{median = 1.31, $\mu = 1.30$, $\sigma = 0.69$, min = 0, max = 2.50\}. Using the Mann-Whitney U test, we found the distributions of biosecurity adoption ratings differed significantly, with more biosecurity being implemented when the infection status was fully visible (Mann–Whitney $U = 541840.5$, $n_1 = n_2 = 1000$, $p < 0.001$, one-tailed). For (H2) we tested whether more visibility of the amount of biosecurity in the system increases risk taking behaviors. We similarly compared 16 treatments in which biosecurity was visible \{median=1.27, $\mu=1.27$, $\sigma=0.63$, min = 0, max = 2.50\} versus 16 treatments in which the neighboring biosecurity statuses were hidden \{median=1.45, $\mu=1.43$, $\sigma=0.66$, min = 0, max = 2.50\}. This difference in biosecurity adoption ratings between distributions was significant, with less biosecurity being implemented during treatments in which neighboring biosecurity statuses were visible (Mann–Whitney $U = 429424.0$, $n_1=n_2 = 1000$, $p<0.001$, one-tailed).
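For intuition, the U statistic reported above counts, over all cross-sample pairs of ratings, how often a rating from one sample exceeds a rating from the other (ties count one half). A naive $O(n_1 n_2)$ sketch:

```python
def mann_whitney_u(x, y):
    # U statistic for sample x versus y: number of pairs (xi, yj) with
    # xi > yj, plus 0.5 per tie. SciPy's scipy.stats.mannwhitneyu computes
    # the same statistic (with a normal approximation for the p-value at
    # sample sizes like n = 1000).
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

With $n_1 = n_2 = 1000$, U ranges from 0 to 1{,}000{,}000 and centers on 500{,}000 under the null, so the reported $U = 541{,}840.5$ reflects systematically higher adoption ratings when infection status was visible.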
\subsection{Experiment Two} To test (H3), we compared the decisions made by industry professionals recruited at the 2018 World Pork Expo to the decisions made by workers from Amazon Mechanical Turk. In Figure \ref{fig:PEcorr}, the risk aversion distributions are given for both the Amazon Mechanical Turk \{$\mu=1.41$, $\sigma =0.71$, median=1.34, min = 0.02, max = 2.50\} and World Pork Expo \{$\mu=1.36$, $\sigma=0.66$, median=1.31, min = 0.13, max = 2.50\} audiences. Using a two sample KS test, we found ($D =0.16$, $p=0.51$, $n_1 = n_2 = 50$), leading us to fail to reject the null hypothesis that the two samples are drawn from the same distribution. Our analysis did not detect a difference in the spectrum of behavioral strategies between sampled online participants and agricultural professionals under our risk aversion metric. For comparison, we also find the same result using a two-tailed Mann-Whitney U test ($U = 1180.0$, $p=0.63$, two-tailed, $n_1=n_2 = 50$). \begin{figure}[!htbp]\begin{center} \includegraphics[width = .99\linewidth]{RiskSummaryStats_PE-Turk-AllTreatments.png} \caption{ Biosecurity Adoption Ratings from 50 industry professionals who attended the 2018 World Pork Expo were compared to 50 participants recruited online from Amazon Mechanical Turk. Each participant's rating ($R_i$) was calculated from a single infection rate (0.15) across 192 decision months for a total of 19,200 choices. } \label{fig:PEcorr} \end{center} \end{figure} \section{Discussion} In this study and in other published works \cite{merrill2019message, merrill2019decision, bucini2019risk} we have demonstrated the potential value of using digital field experiments as a data gathering tool for categorizing behavioral strategies. Our experimental framework focuses on creating a digital representation of a complex decision mechanism in order to identify prevalent behavioral strategies.
Digital field experiments are readily applicable for designing controlled settings for collecting assessments of tasks in response to changing visual stimuli \cite{merrill2019message}. In Experiment One, we created a personalized metric for comparing each player's risk mitigation decision strategy and identified three distinct behaviors. We tested how risk communication and information accessibility of both the disease infection status and neighboring biosecurity protection efforts affect this decision-making process. In Experiment One we analyzed strategies from 1000 Amazon Mechanical Turk recruits using the K-means clustering algorithm. Derivative projects can consider implementing other clustering algorithms for comparison. We grouped the decisions of all participants without considering their demographics. Future studies can incorporate data collected from post-game interactive surveys, such as each participant’s age or sex. Gaming simulations for education, or edutainment, have been shown to outperform traditional learning methods for cognitive gain outcomes \cite{vogel2006computer}. Our framework may also be adapted to create educational tools featuring risk messaging and communication. Feedback on the consequences of the decision-making process can help a player learn from their past decisions. We can also use these educational tools for data gathering and for testing the effects of presenting risk communication strategies with respect to a complex decision mechanism. Since these applications can be hosted online, they allow us to collect inputs from a wide audience for computational social science research, while also providing educational value to the players. We tested the effects of information uncertainty with regard to the infection and biosecurity status of neighboring farms.
In (H1) we found that more visibility in the number of infected sites results in an increase in the biosecurity adoption rating distribution (i.e., more risk averse behaviors). This is intuitive, as there is more perceived risk when the participant can see the spread of the disease over the course of the year. This feedback appears to invoke more risk averse behaviors in our sample. In (H2) we found that more visibility of the amount of biosecurity implemented throughout the system increases risk tolerant behaviors. This could be due in part to a free-rider effect, in which some participants gamble for a higher payout by relying on their neighbors' biosecurity, perceived as a shield from the infection. While Amazon Mechanical Turk provided a tool for fast recruitment of many participants for our experiment, we assume that, based on the overall rate of employment in agriculture, most online participants were not currently working in an agricultural field. Therefore, their decisions may potentially differ from those of experienced industry professionals. Experiment Two focused on comparing the decisions of industry professionals and stakeholders to a random sample of participants recruited from Amazon Mechanical Turk. We did not find evidence to support (H3). No difference was detected in the proportion of observed risk scores between the two cohorts. This may be due in part to our relatively small sample size, approximately 20,000 decisions spanning 100 participants, which reflects the difficulty of obtaining decision-making data from industry professionals. Through survey methods, \cite{roe2015risk} found farm owners self-identified as generally more risk tolerant than the general public, but more risk averse than non-farm business owners.
Our simulation was framed specifically for outbreak mitigation and resource allocation, which may have influenced the risk tolerance of farm owners due to the internalization of the economic consequences of their decisions. Naturally, we may expect some differences in how an experienced industry professional approaches these simulated risk scenarios, given any real-world past experience mitigating disease risk. However, these data suggest that the underlying behavioral distributions are comparable under our risk aversion metric. Since experimental data from industry professionals are difficult and costly to gather at scale, these results lend credence to behavioral analyses based on convenience samples of online recruited participants, even when those participants lack insider knowledge of the industry being studied. Effective risk communication strategies are essential for crisis aversion and mitigation \cite{sellnow2008effective}. In particular, outreach messaging strategies may need constant adaptation to improve compliance at critical moments and minimize outbreaks \cite{dynamicRiskSellnow}. Online recruitment of participants can be used to rapidly gather data for testing the efficacy of risk messaging strategies, and comparing their decision strategies with those of industry professionals and stakeholders strengthens the generality of our findings. This framework can help us study the behavioral mechanisms leading to more proactive risk management. These interventions can then be further tested using simulation modeling, such as agent-based approaches, to forecast their effect on systemic contagion dynamics \cite{bucini2019risk}. Additionally, this framework can be leveraged to study risk aversion with respect to other behavioral strategies, or to investigate the sensitivity of behavioral responses and how they change over time (i.e., learning effects).
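The cohort comparison described above can be sketched concretely. The following is an illustrative computation with invented counts (our own sketch, not the study's data or analysis pipeline), showing how the cluster-membership proportions of two cohorts can be compared with a Pearson chi-square test of homogeneity:

```python
# Illustrative sketch only: the counts below are invented, not the
# study's data. We compare how two cohorts distribute over the three
# behavioral clusters with a Pearson chi-square test of homogeneity.

def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: cohorts; columns: risk averse, opportunist, risk tolerant.
counts = [
    [18, 21, 11],  # online (Amazon Mechanical Turk) cohort
    [15, 24, 11],  # industry-professional cohort
]

chi2 = chi_square_statistic(counts)
# With (2-1)*(3-1) = 2 degrees of freedom, the 5% critical value is 5.991;
# a statistic below it is consistent with finding no cohort difference.
print(f"chi2 = {chi2:.3f}; significant at 5% level: {chi2 > 5.991}")
```

With these invented counts the statistic falls well below the critical value, mirroring the null result reported for (H3); the real analysis would substitute the observed cluster counts for each cohort.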
Implications of these results for the industry itself include the need to appreciate the heterogeneity along the risk aversion and risk tolerance spectrum that is apparent not just in Amazon Mechanical Turk participants, but also in industry professionals. Risk communication and incentive strategies likely need to be tailored for specific populations of industry producers. In our study, we focused on analyzing risk aversion with respect to biosecurity investment and disease prevention, since this decision-making process can impact the wellbeing of these agricultural systems. Producers who tend to be more risk averse may only need basic information regarding disease risks and consequences to be incentivized into action. Other, more risk tolerant populations may require mandates to motivate change. Producers who take a more situational approach, essentially learning as they interpret changes in standard operating procedure over time, may benefit from extended learning and training opportunities. In future simulations, the use of incentives to nudge participants toward higher levels of biosecurity investment should be tested as a potential intervention for this population. Both the risk tolerant and the risk averse populations were reluctant to change strategies and may require larger incentives (higher penalties for a disease strike or higher monetary incentives to adopt biosecurity) or the use of regulatory power to improve resistance to disease incursion and ensure system resilience. \section{Conclusion} Digital field experiments can provide insights into a wide variety of social ecological systems, and the data collected can be used as inputs to digital decision support systems. Interfaces can be adapted for a wide variety of applications, and participants can be rapidly recruited using web hosted platforms.
Tailored interfaces that simulate real-world decisions can control information variability to capture the choices of individuals. Digital field experiments can improve upon traditional survey methods by using immersive simulations to track human behavior. Online recruitment tools, like Amazon Mechanical Turk, can expedite recruitment for digital field experiments. Our results support the use of online participants to accelerate behavioral studies; we affirm this by comparing their behaviors to those of a convenience sample of industry professionals, who can be more difficult to recruit. Further validation should be conducted to compare results from industry professionals to online recruited audiences. Applying a clustering algorithm to our risk aversion metric (Eq. \ref{eq:risk}) helped us identify the most prominent behaviors exhibited by our sampled participants. We uncovered three distinct strategic clusters, using a risk aversion scale to categorize participants' decision-making behavior. \emph{Risk averse} participants invested the most resources to protect their facility, regardless of the infection rate. \emph{Risk tolerant} players invested little to nothing in biosecurity, regardless of the communicated infection rate. \emph{Opportunists} took their chances with little protection when the infection rate was low, but invested in high protection when their perceived risk of infection increased. The opportunists were the most responsive to the information provided. Identifying these types of behaviors may be beneficial for more targeted information campaigns that promote more resilient and healthier systems \cite{bucini2019risk}. Our categorization of risk tolerant, opportunistic, and risk averse strategies can help us model agricultural decisions and the ramifications of interventions that seek to alter behavior. Digital field experiments can be applied to test the efficacy of risk communication information campaigns.
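The clustering step can be sketched in a few lines. The risk scores below are hypothetical stand-ins for the per-participant metric of Eq. \ref{eq:risk}, and the one-dimensional k-means routine is a minimal illustration of the approach rather than the exact analysis code:

```python
# Illustrative sketch: grouping per-participant risk aversion scores
# (hypothetical values on a 0-1 scale, NOT the study's data) into three
# behavioral clusters with a plain one-dimensional k-means.

def kmeans_1d(scores, centers, iterations=50):
    """Lloyd's algorithm on scalar data; returns final centers and labels."""
    labels = [0] * len(scores)
    for _ in range(iterations):
        # Assignment step: each score goes to its nearest center.
        labels = [min(range(len(centers)), key=lambda k: (s - centers[k]) ** 2)
                  for s in scores]
        # Update step: each center moves to the mean of its members
        # (an empty cluster keeps its previous center).
        for k in range(len(centers)):
            members = [s for s, lab in zip(scores, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

# Hypothetical risk scores: low = risk tolerant, intermediate = opportunist,
# high = risk averse.
scores = [0.05, 0.10, 0.12, 0.45, 0.50, 0.55, 0.88, 0.92, 0.95]
centers, labels = kmeans_1d(scores, centers=[0.1, 0.5, 0.9])
print("centers:", [round(c, 3) for c in centers])
print("labels:", labels)
```

With three well-separated groups of scores, the routine recovers centers near the low, middle, and high ends of the scale, matching the three strategic clusters described above.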
Incentives can be incorporated into our simulations to test the effects of \emph{nudging} \cite{thaler2009nudge} populations towards healthier risk management practices. Identifying realistic decision response distributions from tested human behaviors can help modeling approaches find system-wide optimal biosecurity resource allocations for outbreak mitigation. Behavioral clustering has value because it allows us to identify a wide spectrum of behaviors and to consider targeted interventions for groups who are more responsive to risk communication. People in the risk averse and risk tolerant clusters, which together make up 59.3\% of the tested population, do not appear to readily change behavior with additional information. These categories show recalcitrant behavior and may be indicative of individuals who are “set in their ways”. They are likely the most difficult populations to nudge towards alternate, more productive pathways; behavioral interventions targeting them will likely require substantial effort, with associated costs, and may not produce movement towards the desired outcomes. Yet the shifts produced by altering disease and biosecurity communication (H1 and H2) confirm that changes in behavior are possible. We suggest that interventions may be best targeted towards those identified as risk opportunists, because they are likely to change their behavior as information is modified. Further work can examine whether messaging differentially affects specific clusters and thus provide support for nuanced behavioral nudges designed to impact specific subsets of society. This work was supported by the USDA National Institute of Food and Agriculture, under award number 2015-69004-23273. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the USDA or NIFA.
The authors would like to thank Susan Moegenburg, Ph.D., for assisting in project framing, data collection, and managerial support for this research project. \bibliographystyle{te}
\section{Introduction} At the Princeton Bicentennial Conference in $1946$, Tibor Rad{\'o} posed the problem of estimating the number of compact minimal surfaces bounded by one or more given Jordan curves in Euclidean three-space. We will answer this question of Rad{\'o} under the assumption that the surfaces are embedded and stable, and that their boundary curves satisfy a natural asymptotic geometric constraint. We now describe our results and methods in more detail. Throughout this paper, $\cal A$ and $\cal B$ will each denote a collection of disjoint closed smooth Jordan curves in the plane ${P_0}:z=0$ of ${\R}^3$. We make the assumption that $\cal A$ and $\cal B$ intersect transversely in a finite collection of points, which we call {\em crossing points}. We denote by ${\cal B}(t)$ the vertical translation of $\cal B$ to the plane ${P_t}:z=t$. Consider the immersed $1$-cycle $Z={\cal A}{\cup}{\cal B}$ in $P_0$, and let ${\cal C}({\cal A}, {\cal B})$ denote the collection of the closures of the bounded components in ${P_0}{\backslash}Z$. Let ${\cal V}({\cal A}, {\cal B})$ denote the (finite) collection of integer multiplicity varifolds that can be represented by nonnegative integer multiple sums of the components in ${\cal C}({\cal A}, {\cal B})$, where the multiplicity changes by \emph{one} in passing from any component to an adjacent one, and at least one of the four components meeting at a crossing point has multiplicity zero (note that as a consequence these varifolds have $Z$ as their ${\Z}_2$-boundary). Let ${\cal S}(t)$ be the collection of compact stable embedded minimal surfaces bounded by ${\cal A}{\cup}{\cal B}(t)$. Our main theorem is: \begin{theorem}[Main Theorem] For values of $t$ sufficiently small, one can naturally associate to a varifold $V$ in ${\cal V}({\cal A}, {\cal B})$ a unique compact stable embedded minimal surface ${\Sigma}(V, t)$ in ${\cal S}(t)$ bounded by ${\cal A}{\cup}{\cal B}(t)$.
Furthermore, this association between varifolds in ${\cal V}({\cal A}, {\cal B})$ and surfaces in ${\cal S}(t)$ is a one-to-one correspondence. \end{theorem} In order to discuss further the geometry and topology of the stable minimal surface ${\Sigma}(V, t)$ associated to a varifold $V{\in}{\cal V}({\cal A}, {\cal B})$, we need to give a few definitions. Note that around each crossing point, $Z$ divides a sufficiently small neighborhood $D$ into four connected components; we order them according to the counterclockwise orientation of $D$, so that the multiplicity of the first component is zero. Now fix a $V{\in}{\cal V}({\cal A}, {\cal B})$. Our assumptions on $V$ imply that only the multiplicities $(0, 1, 0, 1)$ and $(0, 1, 2, 1)$ can occur at the crossing points. We let $v_1$ be the number of crossing points with multiplicity $(0, 1, 0, 1)$ in ${\cal A}{\cap}{\cal B}$; let $v_2$ be the number of crossing points with multiplicity $(0, 1, 2, 1)$ in ${\cal A}{\cap}{\cal B}$; let $f_1$ be the number of components of $V$ with multiplicity one; let $f_2$ be the number of components of $V$ with multiplicity two; let $e_1$ be the number of multiplicity one connected components of $Z{\backslash}{\{}crossing\ points{\}}$; and let $e_2$ be the number of multiplicity two connected components of $Z{\backslash}{\{}crossing\ points{\}}$. Then we have the following result. \begin{theorem} Let $V{\in}{\cal V}({\cal A}, {\cal B})$, and for $t$ sufficiently small let ${\Sigma}(V, t)$ be the surface given in the statement of theorem $1.1$. Then the Euler characteristic of ${\Sigma}(V, t)$ is equal to $({v_1} + 2{v_2}) - ({e_1} + 2{e_2}) + ({f_1} + 2{f_2})$.
\end{theorem} It follows that since there is exactly one varifold ${V_0}{\in}{\cal V}({\cal A}, {\cal B})$ having least area (namely no multiplicity $2$ components), there is exactly one ${\Sigma}({V_0}, t)$ in ${\cal S}(t)$ with Euler characteristic equal to $N - l + f$, where $N$ is the number of crossing points, $l$ is the number of connected components of $Z{\backslash}{\{}crossing\ points{\}}$, and $f$ is the number of multiplicity one components of $V_0$. This surface ${\Sigma}({V_0}, t)$ is the unique surface of least area in ${\cal S}(t)$. The first step in our proof of theorem $1.1$ is to show that, for any fixed varifold $V$, there exists a compact stable embedded minimal surface bounded by ${\cal A}{\cup}{\cal B}(t)$ corresponding to $V$, when $t$ is sufficiently small. More precisely, we prove the following theorem, which follows from an existence result proven by W. Meeks and S. T. Yau \cite{MeYa1}. \begin{theorem} Let $\cal A$ and ${\cal B}(t)$ be two collections of disjoint closed smooth Jordan curves in the planes ${P_0}$ and $P_t$ respectively, such that $\cal A$ intersects ${\cal B}={\cal B}(0)$ transversely in a finite number of points. Fix a varifold $V$ in ${\cal V}({\cal A}, {\cal B})$. Then, for every sufficiently small $t$, there exists a compact stable embedded minimal surface ${\Sigma}(t)$ with ${\partial}({\Sigma}(t)) = {\cal A}{\cup}{\cal B}(t)$, and such that ${\Sigma}(t)$ is ``close to'' $V$, in the sense that the family of minimal surfaces ${\{}{\Sigma}(t){\}}_t$ converges to $V$ as $t$ approaches $0$. \end{theorem} Next we describe geometrically the stable minimal surfaces obtained in the previous theorem, extending a previous result by W. Rossman \cite{Ro}. \begin{theorem} Let ${\cal A}{\cup}{\cal B}(t)$ be the boundary of compact stable embedded minimal surfaces ${\Sigma}(t)$ arising in theorem $1.3$.
Then, for $t$ sufficiently close to $0$, the surfaces ${\Sigma}(t)$ have the following properties: \begin{description} \item{1.} In the complement of small vertical cylindrical neighborhoods $N(i)$ of the crossing points, the components of ${\Sigma}(t){\backslash}{\bigcup}N(i)$ are graphs over their projections to the plane $P_0$. \item{2.} In $N(i)$, ${\Sigma}(t)$ is either approximately helicoidal, or is the union of two graphs over the plane $P_0$. \end{description} \end{theorem} Next we prove that if a minimal surface ${\Sigma}(t)$ possesses a description as in theorem $1.4$, then such a minimal surface must be stable, for $t$ sufficiently small. \begin{theorem} Let ${\Sigma}(t)$ be a minimal surface with boundary ${\cal A}{\cup}{\cal B}(t)$. Suppose that ${\Sigma}(t)$ is described as in theorem $1.4$. Then, for $t$ sufficiently close to $0$, $\Sigma(t)$ is stable. \end{theorem} The proof of the above theorem involves some technical lemmas; we give here a sketch of the proof of theorem $1.5$, referring the reader to section $4$ for details. A key step of the proof is the following observation: let $\Sigma$ be a minimal surface, given by the union ${{\Sigma}_1} {\cup} S {\cup} {{\Sigma}_2}$, and suppose that the first eigenvalue of the Jacobi operator on ${\Sigma}_1$ and ${\Sigma}_2$ is strictly positive, and that $S$ is a simply connected, sufficiently flat piece, in the sense that the diameter of its Gaussian image is very small. Then the first eigenvalue of the Jacobi operator on the minimal surface $\Sigma$ is also strictly positive. Our hypotheses allow us to apply some techniques developed by Kapouleas in \cite{Ka}, and used in his construction of constant mean curvature surfaces in ${\R}^3$. These results imply that if one glues two stable pieces $E_1$ and $E_2$ by a sufficiently flat bridge $S$, then no Jacobi vector fields arise on ${E_1} {\cup} S {\cup} {E_2}$, since none arise from $E_1$ or $E_2$, and no new ones arise from gluing $S$.
In the case corresponding to a surface ${\Sigma}(V, t)$ in theorem $1.5$, we start with $E_1$ and $E_2$ each being a small helicoidal minimal surface near a crossing point, and $S$ being one of the flat components of ${\Sigma}(t){\backslash}{\bigcup}N(i)$ adjacent to $E_1$ and $E_2$, which is the region that glues $E_1$ to $E_2$. We can do this by the description of ${\Sigma}(V, t)$ given in theorem $1.4$. A proof by induction on the number of helicoidal components shows that ${\Sigma}(V, t)$ is stable. This completes our brief sketch of the proof of theorem $1.5$. \begin{remark} This inspires a {\em Gauss-map bridge principle}, which will be discussed more extensively elsewhere.\end{remark} We now give a brief sketch of the proof of our Main Theorem. By theorem $1.3$, we can associate to each varifold $V{\in}{\cal V}({\cal A}, {\cal B})$ a minimal surface ${\Sigma}(V, t)$, for $t$ sufficiently small. The Main Theorem states that ${\Sigma}(V, t)$ is eventually unique. We now give a sketch of the proof of the uniqueness. Suppose that there are two sequences of stable minimal surfaces ${\Sigma_1}(t_i)$ and ${\Sigma_2}(t_i)$ which have ${\partial}({\Sigma_1}(t_i))={\partial}({\Sigma_2}(t_i))={\cal A}{\cup}{\cal B}(t_i)$, where ${t_i}$ approaches $0$ as $i$ tends to infinity. We show that if such sequences of minimal surfaces existed, then there would be two other sequences of embedded stable minimal surfaces with boundary given by ${\cal A}{\cup}{\cal B}(t_i)$, and these surfaces could be taken so that their interiors would be disjoint. So we may assume that ${\Sigma_1}(t_i)$ and ${\Sigma_2}(t_i)$ are disjoint in their interior, for all $t_i$, and that both sequences converge to a fixed varifold $V$, as ${t_i} \to 0$. Theorem $1.4$ implies that, for $i$ large, the surfaces ${\Sigma_1}(t_i)$ and ${\Sigma_2}(t_i)$ bound a product region $R(i)$, where the interior angles are less than $\pi$, and the surfaces are ambient isotopic in $R(i)$.
By a deep minimax theorem by Pitts and Rubinstein \cite{PiRu}, generalized to the case of nonempty boundary by Jost \cite{Jo}, there would exist an unstable embedded minimal surface $\Sigma^*(t_i)$ in $R(i)$, such that ${\partial}(\Sigma^*(t_i))={\partial}({\Sigma_1}(t_i))={\partial}({\Sigma_2}(t_i))={\cal A}{\cup}{\cal B}(t_i)$. Because ${\Sigma_1}(t_i)$ and ${\Sigma_2}(t_i)$ are expressed as normal graphs over each other, and by the definition of minimax, we are able to show that, away from the crossing points, $\Sigma^*(t_i)$ is a very flat graph. Then Nitsche's $4{\pi}$-theorem allows us to show that $\Sigma^*(t_i)$ is approximately helicoidal around the crossing points. Hence the unstable minimal surface $\Sigma^*(t_i)$ would possess the same geometric description as ${\Sigma_1}(t_i)$ and ${\Sigma_2}(t_i)$. Then theorem $1.5$ implies that, for $i$ sufficiently large, $\Sigma^*(t_i)$ would have to be stable. This produces a contradiction, and finishes the sketch of our proof of the Main Theorem. We then give an upper bound on the number of compact embedded stable minimal surfaces bounded by two given collections of curves. More precisely, we show the following. \begin{corollary} Once the limiting cycle $Z={\cal A}{\cup}{\cal B}$ in the plane $P_0$ is given, the number of stable compact minimal surfaces ${\Sigma}(t)$ such that ${\partial}({\Sigma}(t)){\to} Z$ as $t{\to}0$, is bounded above by $$ 2^{f^i_-} + 2^{f^o_-}, $$for $t$ sufficiently close to $0$, where $f^i_-$ is the number of bounded components of $P_0{\backslash}Z$ inside ${Region}({\cal A}) {\cap} {Region}{({\cal B})}$, and $f^o_-$ is the number of bounded components of $P_0{\backslash}Z$ outside ${Region}({\cal A}) {\cup} {Region}({\cal B})$. \end{corollary} Finally, we observe how the above results support a conjecture made by W.H. Meeks in \cite{Me1}, about the nonexistence of positive genus compact embedded minimal surfaces bounded by two convex curves in parallel planes of ${\R}^3$.
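As an illustration of the count in theorem $1.2$, we include a simple example worked out here for the reader's convenience. Suppose that $\cal A$ and $\cal B$ each consist of a single circle, meeting transversely in two crossing points, and let $V$ assign multiplicity $1$ to the two crescent-shaped components of ${P_0}{\backslash}Z$ and multiplicity $0$ to the lens-shaped component and to the unbounded one. Both crossing points then have type $(0, 1, 0, 1)$, so $v_1 = 2$ and $v_2 = 0$; each of the four arcs of $Z{\backslash}{\{}crossing\ points{\}}$ borders a multiplicity one component, so $e_1 = 4$ and $e_2 = 0$; and $f_1 = 2$, $f_2 = 0$. The formula gives $$ {\chi}({\Sigma}(V, t)) = (2 + 0) - (4 + 0) + (2 + 0) = 0, $$so ${\Sigma}(V, t)$ is an annulus, obtained from the two flat crescents by helicoidal gluing near the two crossing points. If instead $V$ assigns multiplicity $2$ to the lens and $1$ to the crescents, then $v_1 = 0$, $v_2 = 2$, $e_1 = e_2 = 2$, $f_1 = 2$, $f_2 = 1$, and the formula gives ${\chi} = 4 - 6 + 4 = 2$: the surface is a pair of disjoint disks, one spanning $\cal A$ and one spanning ${\cal B}(t)$.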
\setcounter{chapter}{2} \setcounter{section}{-1} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{claim}{0} \setcounter{corollary}{0} \setcounter{condition}{0} \setcounter{question}{0} \setcounter{example}{0} \setcounter{remark}{0} \section{Preliminary notions} In this section we recall some known facts about minimal surfaces. \section{Minimal surfaces are locally area-minimizing} There are several possible ways to define minimal surfaces in ${\R}^3$. A surface $S$ in ${\R}^3$ is {\it minimal} if for every point $p$ of $S$ it is possible to find a neighborhood $U_p$ of $p$, contained in $S$, which is the unique surface of least area among all surfaces with boundary ${\partial}U_p$. Notice that $S$ could have infinite area, for instance, if $S$ is a plane in ${\R}^3$. A minimal surface can also be defined as a surface for which every compact subdomain $C$ is a critical point for the area functional among all surfaces having boundary equal to ${\partial}C$. From the point of view of local geometry, this is equivalent to the condition that the mean curvature $H$ be identically zero. \begin{theorem}[Meusnier] A regular surface $\Sigma$ immersed in ${\R}^3$ is a critical point for the area functional if and only if the mean curvature vanishes identically on $\Sigma$. \end{theorem} \section{Stability: eigenvalues of the Laplacian} Recall that the Gauss map of an oriented surface in ${\R}^3$ assigns to each point of the surface the unit normal vector to the surface at that point, viewed as an element of the unit sphere $S^2$ in ${\R}^3$. We fix an orientation on $S^2$ by the inward pointing normal vector. \begin{theorem}[Christoffel] Given a surface ${\Sigma}\hookrightarrow{\R}^3$, the Gauss map $g$ is conformal or anti-conformal if and only if $\Sigma$ is a minimal surface (or a sphere). \end{theorem} \begin{proof} The proof of this theorem follows from the definitions of mean curvature and Gauss map.
A complete proof can be found in \cite[p. 385, vol. 4]{Sp}, and the original proof in \cite{Ch}. \end{proof} Recall that a minimal surface is stable if, for each subdomain $D$, the second derivative of area is positive for each smooth normal variation of $D$ which is the identity on ${\partial D} = \Gamma$, and {\it unstable} if for some smooth normal variation of $D$ which is the identity on ${\partial}D = \Gamma$ the second derivative of area is negative. We now recall another characterization of stability. We start with the second variational formula (see for example \cite{Si}): \begin{theorem}[Second variational formula] \protect\begin{equation} {\frac{d^2A}{dt^2}}{\bigg|}_{t = 0} = {\int_{D}(|\nabla h|^2 + 2Kh^2){\,}dA}, \protect\end{equation}where $h$ is the normal component of the variation vector field, vanishing on ${\partial}D = \Gamma$. \end{theorem} Clearly the stability of $D$ is equivalent to the inequality: $$ \int_{D}|\nabla h|^2dA > -2\int_{D}Kh^2dA $$for any $h$ in $C^\infty(\overline D)$ which vanishes on ${\Gamma}$. Recall now that for a minimal surface, the pullback metric of the metric on $S^2$ under the Gauss map is given by: \protect\begin{equation} d{\hat s}^2 = -Kds^2, \protect\end{equation}from which it follows that $$ d{\hat A} = -KdA $$and $$ |\nabla h|^2 = -K|{\hat {\nabla}}h|^2, $$where ${\hat {\nabla}}$ is the gradient with respect to the new (pullback) metric on $\Sigma$. Hence we can rewrite the previous inequality in the form: $$ {\int}_{D}|{\hat {\nabla}}h|^2{\,}d{\hat A}>2{\int}_{D}h^2{\,}d{\hat A}. $$Now recall that the ratio $$ Q_{D}(h) = \frac{{\int}_{D}|{\nabla}h|^2{\,}dA}{{\int}_{D}h^2{\,}dA} $$is the {\it Rayleigh quotient}, and the quantity $$ {\lambda}_1(D) = \inf Q_{D}(h) $$represents the first (smallest) eigenvalue of the Dirichlet problem \[ \left\{ \begin{array}{lll} {\Delta}h + {\lambda}h &= 0 & \mbox{in $D$} \\ h &= 0 & \mbox{on $\Gamma$.} \end{array} \right.
\]The above infimum can be taken over all continuous piecewise smooth functions on the closure of $D$ which vanish on $\Gamma$, and $\Delta$ represents the Laplace operator with respect to the (given) metric on $D$. If ${\Gamma}$ is sufficiently smooth, then the above boundary value problem has a solution $h_1$ corresponding to the eigenvalue ${\lambda}_1$, and the above infimum is attained by $h = h_1$. Putting these facts together, we see that the above conditions of stability are equivalent to the inequality: $$ {\lambda}_1(D) > 2. $$For a minimal surface, the metric $(2.3)$ is exactly the pullback metric under the Gauss map of the metric on $S^2$. Thus we have the following theorem, due to Schwarz: \begin{theorem}[Schwarz] Let $D$ be a (relatively) compact domain on a minimal surface ${\Sigma}{\hookrightarrow}{{\R}^3}$. Suppose that the Gauss map $g$ takes $D$ injectively onto a domain $\hat{D}$ on the unit sphere $S^2$. If ${\lambda_1}(\hat{D}) < 2$, then $D$ cannot be locally area-minimizing with respect to its boundary. \end{theorem} \begin{proof} Because of the above observations we see that the condition ${\lambda_1}(\hat{D}) < 2$ implies the existence of a $C^{\infty}$ function $u:{\overline{D}}{\to}\R$ such that $u_{|{\partial}D} = 0$ and which makes the second variation of area (2.2) negative. Namely $D$ is the initial point of a 1-parameter family of surfaces with boundary equal to $\partial{D}$, such that each element in this family has area smaller than the area of $D$. Hence $D$ cannot be locally area-minimizing (and therefore cannot be stable). \end{proof} Schwarz's theorem, and the facts that the first eigenvalue of the Laplacian on a hemisphere of $S^2$ is equal to $2$, and that $D'{\subseteq}D{\Rightarrow}{\lambda_1}(D'){\geq}\lambda_1(D)$, imply that if the Gauss map $g$ is injective on a domain $D$ of a minimal surface, and the Gaussian image $g(D)$ contains a hemisphere on $S^2$, then $D$ is unstable.
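For the reader's convenience we recall the standard computation behind the hemisphere fact just quoted. In spherical coordinates the Laplacian of $S^2$ acts on a function $h = h({\theta})$ by $$ {\Delta}h = \frac{1}{\sin{\theta}}\frac{d}{d{\theta}}{\Big(}\sin{\theta}\,\frac{dh}{d{\theta}}{\Big)}, $$and for $h = \cos{\theta}$ one finds $$ {\Delta}h = \frac{1}{\sin{\theta}}\frac{d}{d{\theta}}(-{\sin^2}{\theta}) = -2\cos{\theta}. $$Thus $h = \cos{\theta}$ is positive on the open upper hemisphere, vanishes on the equator ${\theta} = {\pi}/2$, and satisfies ${\Delta}h + 2h = 0$; since a first Dirichlet eigenfunction is characterized by not changing sign, this shows that ${\lambda}_1 = 2$ on the hemisphere.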
Barbosa and Do Carmo \cite{BaDo1,BaDo2} generalized this result, showing that $g$ does not necessarily have to be injective, by proving the following theorem. \begin{theorem}[Barbosa-Do Carmo] If the area of the Gaussian image $g(D){\subseteq}{S^2}$ is less than $2\pi$, then $D$ is stable. \end{theorem} \begin{remark} Note that the converse statement to theorem $2.5$ does not hold. This can be seen by considering two or more disjoint stable minimal annuli, and by gluing them together by thin bridges, to obtain a new connected stable minimal surface (see \cite{MeYa1}, \cite{Sm}, \cite{Wh2}, \cite{Wh3}). This new minimal surface is stable, but in general its image under the Gauss map, counted with multiplicities, has area larger than $2\pi$. \end{remark} We recall yet another characterization \cite{BaDo1,BaDo2} of stable minimal surfaces. First we need the following definition. \begin{definition} A smooth normal vector field $V(p) = p + u(p){\vec N}(p)$, where ${\vec N}(p)$ is the normal vector to $D$ at $p$, is said to be a Jacobi vector field if $u:{\overline D}{\to}{\R}$, $p{\mapsto}u(p)$ satisfies the Jacobi equation $$-{\Delta}u + 2uK = 0,$$ where $\Delta$ is the Laplace operator on $D$ and $K$ is the Gaussian curvature on $D$. \end{definition} Then it can be shown, via the Morse Index Theorem \cite{SSm}, that \begin{theorem}[Classical] A domain $D$ is stable if and only if no subdomain $D'$ inside $D$ admits a Jacobi vector field which is nonzero in $D'$ but zero on $\partial {D'}$. \end{theorem} \section{The maximum principle} Suppose that $X=(X_1,X_2,X_3):S{\to}{{\R}^3}$ is a minimal immersion. Then one can check, using the definition of zero mean curvature (since ${\Delta} X= 2 H{\cdot}N$), that the coordinate functions $X_1$, $X_2$, $X_3$ are harmonic functions. It is well known that there is a maximum principle for harmonic functions. Hence there is also a maximum principle for minimal surfaces.
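As a concrete illustration of the harmonicity of the coordinate functions (a standard example, included here for convenience), consider the catenoid in its conformal parametrization $X(u, v) = (\cosh v\cos u,\ \cosh v\sin u,\ v)$. One checks that $|X_u|^2 = |X_v|^2 = {\cosh^2}v$ and $X_u{\cdot}X_v = 0$, so the parameters are conformal, and $$ \frac{{\partial}^2 X_1}{{\partial}u^2} + \frac{{\partial}^2 X_1}{{\partial}v^2} = -\cosh v\cos u + \cosh v\cos u = 0, $$with the same computation for $X_2$, while $X_3 = v$ is trivially harmonic. Since in conformal coordinates harmonicity with respect to the flat Laplacian is equivalent to harmonicity on the surface, this exhibits ${\Delta}X = 0$ for the catenoid.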
\begin{theorem}[Maximum principle at an interior point] Let $M_1$ and $M_2$ be two minimal surfaces in ${\R}^3$ that intersect at an interior point $p$. If $M_1$ lies on one side of $M_2$ near $p$, then $M_1$ and $M_2$ coincide in a neighborhood of the point $p$. \end{theorem} \begin{theorem}[Maximum principle at a boundary point] Suppose that the minimal surfaces $M_1$ and $M_2$ have boundary curves $C_1$ and $C_2$ respectively. Let $p$ be a point in ${C_1}{\cap}{C_2}$, and suppose that $T_p(M_1) = T_p(M_2)$ and $T_p(C_1) = T_p(C_2)$. Choose orientations of $M_1$ and $M_2$ in such a way that the two surfaces have the same normal vector $\vec n$ at $p$. If, near $p$, $M_1$ lies on one side of $M_2$ and the conormal vectors of $M_1$ and $M_2$ at $p$ coincide, then $M_1$ and $M_2$ coincide in a neighborhood of the point~$p$. \end{theorem} \section{Minimax theorems} It is intuitively evident that a smooth function $f$ from ${\R}^N$ to $\R$ which has two isolated nondegenerate relative minimum points should also have at least one unstable critical point. This was proven by Courant in his book \cite{Co}. Later in this paper we will need the existence of a {\em minimax solution} between two stable minimal surfaces with the same boundaries (see the Introduction and section $4$). \begin{definition} Let $M$ be the space of embedded surfaces contained in the product region of space bounded by ${\Sigma_1}{\cup}\Sigma_2$, and having interior angles less than $\pi$. Let $\overline M$ be a suitably constructed compactification of $M$ (see \cite[p. 234-235]{Jo}), and let $\cal F$ be a cover of $\overline M$ by connected sets. Denote by $F$ a generic element of $\cal F$.
Define now the minimax of the area functional $A$ over $\cal F$ by: $${Minimax}(A, {\cal F})=\inf_{F{\in}{\cal F}}\,\sup{\{}A(M)\ |\ M{\in}F{\}}.$$ \end{definition} Then the deep minimax theorems proven by Pitts and Rubinstein \cite{PiRu}, and generalized by Jost \cite{Jo} to the case of nonempty boundary, guarantee that the following assertion holds true. \begin{theorem}[Minimax] There exists an embedded unstable minimal surface ${\Sigma}^*$, contained in the region of space $\cal R$ bounded by ${\Sigma_1}{\cup}\Sigma_2$, having ${\partial}({\Sigma}^*)={\partial}({\Sigma_1})={\partial}(\Sigma_2)$. Furthermore, the genus of ${\Sigma}^*$ is at most equal to the genus of ${\Sigma}_i$, $i = 1, 2$. \end{theorem} \begin{remark} In our case, it will be very important to make sure that ${\Sigma}^*$ is homeomorphic to each of ${\Sigma}_1$ and ${\Sigma}_2$. More precisely, the minimax theorems mentioned above guarantee that the genus of ${\Sigma}^*$ is at most equal to the common genus $g$ of ${\Sigma}_1$ and ${\Sigma}_2$, but it could, {\em a priori}, be less than $g$. However, since ${\Sigma}^*$ is contained in the product region $\cal R$ bounded by ${\Sigma_1}{\cup}\Sigma_2$, and both ${\Sigma}_1$ and ${\Sigma}_2$ are incompressible in $\cal R$, the genus of ${\Sigma}^*$ is at least $g$. Hence the unstable minimax solution is homeomorphic to the two stable ones. \end{remark} \section{Nitsche's $4\pi$-theorem} Finally, let us state here another result which will be useful in subsequent sections of this paper, namely the ``${4\pi}$-theorem'', due to J.C.C. Nitsche \cite{Ni}. \begin{theorem}[Nitsche] Let $\gamma$ be a real analytic curve in ${\R}^3$ having total curvature less than $4\pi$. Then $\gamma$ bounds a unique minimal surface which is topologically equivalent to a disk, and this disk is stable and has no branch points.
\end{theorem} \vskip .6cm {\em Sketch of Proof.} It follows easily from previous work of Shiffman that if $\gamma$ bounded more than one minimal disk which is a $C^0$-minimum for area, then it would also bound a branched minimal disk that is not a $C^0$-minimum for area. Nitsche proved, for the curve $\gamma$ stated in the theorem, that every minimal disk which it bounds is a $C^0$-minimum for area, and hence by Shiffman's result it must be unique.\hfill q.e.d. \setcounter{chapter}{3} \setcounter{section}{-1} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{claim}{0} \setcounter{corollary}{0} \setcounter{condition}{0} \setcounter{question}{0} \setcounter{example}{0} \setcounter{remark}{0} \section{Existence and description} In this section we show the existence of the minimal surfaces ${\Sigma}(V, t)$, and describe them geometrically, for sufficiently small values of $t$. \section{Existence} Recall that $P_t$ denotes the plane ${\{} (x, y, z){\in}{{\R}^3} | z = t{\}}$. Let us consider two closed smooth Jordan curves, ${\alpha}$ and ${\beta}={\beta}(0)$ in the plane ${P_0}:z=0$ of ${\R}^3$. Suppose that the curves ${\alpha}$ and ${\beta}$ intersect transversely in a finite number of points $p_1,\ p_2, {\ldots},\ p_N$, which we will call {\it crossing points}. For our purposes in this paper, the word {\em varifold} will denote a finite union of connected compact planar regions which are smooth except possibly at a finite number of boundary points, and to the interior of each such component we associate a nonnegative integer, called the {\em multiplicity} of that component, without orientation. We will denote by ${\beta}(t)$ the translated image of $\beta$ by the vector $(0, 0, t)$.
First of all we will prove an existence theorem, which shows that the class of minimal surfaces that are our object of study is nonempty, and that it therefore makes sense to study the asymptotic behavior. \begin{theorem} Let $\alpha$ and ${\beta}$ be two Jordan curves in the plane $P_0$. Let $V$ be a fixed varifold in $P_0$ with ${\Z}_2$-boundary given by $Z = {\alpha}\cup{\beta}$. Then, for every $t$ sufficiently close to $0$, there exists a compact stable embedded minimal surface ${\Sigma}(t)$ whose boundary is ${\alpha}{\cup}{\beta}(t)$, and such that $\Sigma(t)$ is ``close to'' $V$, in the sense that the family of minimal surfaces ${\{}{\Sigma}(t){\}}_t$ converges to $V$ as $t {\to} 0$. \end{theorem} The proof relies on the following theorem of Meeks and Yau \cite{MeYa1}, which we state without proof. \begin{theorem}[Meeks-Yau] Let $M$ be a submanifold of a $3$-dimensional analytic manifold whose boundary is piecewise smooth, has nonnegative mean curvature with respect to the inner pointing normal vector, and has interior angles less than $\pi$. Let $\Sigma$ be a compact subdomain of $\partial M$ such that $\Sigma$ is incompressible in $M$, namely each homotopically nontrivial curve in $\Sigma$ is also homotopically nontrivial in $M$. Then there exists a stable minimal embedding $f:{\Sigma}\to M$ such that $f(\partial{\Sigma}) = \partial{\Sigma}$. Moreover, if $g:{{\Sigma}'}\to M$ is another minimal immersion of a compact surface such that $g(\partial{{\Sigma}'})=\partial{\Sigma}$, then one can assume $f(\Sigma){\subset}M{\backslash}g({\Sigma}')$. \end{theorem} {\em Proof of Theorem 3.1.} Let ${\cal R}(\alpha)$ and ${\cal R}(\beta)$ be the bounded simply connected components of ${P_0}{\backslash}{\alpha}$ and ${P_0}{\backslash}{\beta}$ respectively, and let $\cal R$ be the region in the plane $P_0$ given by $\cal R = {\cal R}(\alpha){\cup}{\cal R}(\beta)$.
We wish to construct the analytic three-manifold $M(V, t)$ satisfying the hypotheses of theorem $3.2$, for $t$ sufficiently small. Let $C_1, \ldots , C_k$ be the bounded components of ${P_0}{\backslash}Z$ with $V$-multiplicity equal to zero. Choose round circles ${S_i}\subseteq{Int}(C_i)$, for each $i{\in}{\{}1, \ldots , k{\}}$, and let $S_i(t)$ be the vertical upward translate of $S_i$ by $t$. Since the number of connected components of $P_0 {\backslash} ({\alpha}{\cup}{\beta})$ is finite, there will be a largest $t'$ such that for $t {\leq} t'$ there exist stable catenoids $A_i(t)$ bounded by ${S_i}{\cup}{S_i}(t)$ whose height is $t$. We do not place catenoids above any component of $V$ which has multiplicity $2$, nor above components of $V$ whose multiplicity is $1$. Next, glue flat components away from the $(0, 1, 0, 1)$ crossing points, and where the prescribed multiplicities are $1$ or $2$. The glueing above the multiplicity $2$ components should be made so that the following condition is satisfied. Let $\sigma$ be the multiplicity two arc bounded by the two $(0, 1, 2, 1)$ crossing points. Then of the two flat pieces glued above the multiplicity $2$ component, one does not contain $\sigma$, and it glues on smoothly (as its projection ``crosses'' $\sigma$) to the adjacent flat piece which also does not contain $\sigma$ and projects on the multiplicity $1$ component ``across'' $\sigma$. Finally, along the $(0, 1, 0, 1)$ crossing points, glue the corresponding multiplicity $1$ pieces via a twisted strip (the glueing taking place along the ``helices''), in such a way that if $\cal S$ is the surface just constructed, then the curvature of the strip with respect to the inner pointing normal vector (with respect to $\cal S$) is nonnegative.
Then the three-manifold $M = M(V, t)$ is given by $$M = Slab(t){\backslash} \left({\bigcup}_{i=1}^{k}Int(A_i(t)){\,}{\cup}{\,}Out({\cal S})\right), $$ where $Slab(t)$ is the region of space between the planes $P_0$ and $P_t$, $Int(A_i(t))$ is the bounded simply connected region in $Slab(t)$ containing $A_i(t)$ in its boundary, and $Out({\cal S})$ is the unbounded connected component of $Slab(t){\backslash}{\cal S}$. Note that $M$ is analytic because the boundaries of the catenoids are circles. Also, by construction, the interior angles are less than $\pi$, and the curvature with respect to the inner pointing normal is nonnegative. The surface $\Sigma$ satisfying the hypotheses of theorem $3.2$ is going to be $\cal S$. By construction, the surface $\Sigma$ has $${\pi}_{1}(\Sigma) = {\cal F}(k),$$ where ${\cal F}(k)$ denotes the free group on $k$ generators. Moreover $\Sigma$ admits a submersion onto a planar region in the plane $P_0$ which is homotopically equivalent to $V{\subseteq} \cal R$. Now, $V$ is easily seen to be homotopically equivalent to $M$ (which is, homotopically, the cross product of the unit interval with a plane from which $k$ disks have been removed), and so one has $${\pi}_{1}(M) = {\cal F}(k).$$ This allows us to conclude that $\Sigma$ is incompressible in $M$ because of the commutativity of the following diagram: \begin{picture}(150,100)(0,0) \put(45, 80){${\pi_1}(\Sigma)$} \put(150,80){${\pi_1}(M)$} \put(97,10){${\pi_1}(V)$} \put(55,45){1-1} \put(59,72){\vector(1,-1){48}} \put(161,72){\vector(-1,-1){48}} \put(161,72){\vector(1,1){1}} \put(152,45){$isomorphism$} \put(76,84){\vector(1,0){70}} \put(110,60){\oval(10,10)[tr]} \put(110,60){\oval(10,10)[tl]} \put(110,60){\oval(10,10)[br]} \put(110,55){\vector(-1,0){1}} \end{picture} Hence the above construction satisfies all the hypotheses of Meeks and Yau's theorem, providing us with a {\it good barrier for the solution of Plateau's problem}.
We are thus assured of the existence of a compact stable embedded minimal surface ${\Sigma}(t)$, having boundary given by ${\alpha}{\cup}{\beta}(t)$, for all $t {\leq}t'$, and incompressible in $M$. By construction, we have that ${lim}_{t \to 0}({\Sigma}(t)) = V$. \hfill q.e.d. \begin{remark} As the number of annuli used as barriers in the previous proof increases, the genus of the minimal surfaces ${\Sigma}(V, t)$ correspondingly obtained increases, and the area of ${\Sigma}(V, t)$ decreases, since the area of $V$ decreases. Hence the least area surfaces bounded by ${\alpha}{\cup}{\beta}(t)$ are the ones having largest genus, and they correspond to the unique varifold which has no components with multiplicity $2$. The surfaces of largest area bounded by ${\alpha}{\cup}{\beta}(t)$ correspond to the disjoint union of the regions ${\cal R}(\alpha)$ and ${\cal R}({\beta}(t))$. \end{remark} \section{Description} In this section we will describe geometrically the minimal surfaces whose existence has been shown in the previous section, when the distance between the two planes containing the boundary of such surfaces is sufficiently small. Let us first give a definition which will be useful in the proof of the next theorem. \begin{definition} A sequence of surfaces $\{{S_i}\}_{i=1}^{\infty}$ with boundary is said to converge to a proper surface $S$ in ${\R}^3$ if, for each compact region $B$ of ${\R}^3$, there exists a positive integer $N_B$ such that, for $i>{N_B}$, ${S_i}{\cap}B$ is a normal graph over $S$, and $\{{S_i}\cap{B}\}_{i=1}^{\infty}$ converges to $S{\cap}B$ in the $C^1$ norm (${\|}f{\|}_{C^1}={\|}f{\|}_0+{\|}Df{\|}_0$).
Moreover, if $\{S_i\}_{i=1}^{\infty}$ is a sequence of surfaces with boundary, and $\{{\phi_i}\}_{i=1}^{\infty}$ is a sequence of homotheties of ${\R}^3$ with center the origin, and $\{{\phi_i}(S_i)\}_{i=1}^{\infty}$ converges to a connected piece of a helicoid bounded by two straight lines, we will say that $S_i$ is {\it approximately helicoidal}\ \ near the origin for $i$ sufficiently large. Similarly one can define the convergence of a continuous family $\{S_t\}$, as $t$ approaches $t_0$. \end{definition} The minimal surfaces considered in the Main Theorem were partially described by Wayne Rossman \cite{Ro}, under the hypotheses that these minimal surfaces exist and are actually area-minimizing. By theorem $3.1$ we have proven the existence of stable embedded minimal surfaces which are not necessarily area-minimizing (see the above remark). In this section we prove theorem 3.7, which will provide a complete description of the {\it stable} minimal surfaces under consideration in this paper. Before stating the theorem, let us make a few observations about the crossing points. More precisely, consider a varifold $V$, arising as the limit of a family ${\Sigma}(t_i)$ of minimal surfaces, in a small neighborhood of the crossing points. This is pictured schematically in Figure $3.1$, where $a, b, c, d$ represent the multiplicity of $V$ in each of the four regions. Then the following claim holds. \begin{figure} \begin{picture}(250,100)(0,0) \qbezier(20,80)(225,60)(200,10) \qbezier(30,10)(100,80)(245,100) \put(95,62){$a$} \put(114,71){$b$} \put(133,66){$c$} \put(112,54){$d$} \end{picture} \caption{The varifold $V$, with multiplicity, around a crossing point.} \end{figure} \begin{claim} Let $V$ be a varifold determined by any of the minimal surfaces obtained in theorem $3.1$. 
Then, with notation as introduced above, the following three properties hold for $V$: \begin{description} \item{-} The difference between the multiplicity of any region and an adjacent one in Figure $3.1$ is exactly one. \item{-} At least one of $a, b, c, d$ must be equal to zero. \item{-} The only possibilities allowed for the multiplicity at the crossing points are $(0, 1, 0, 1)$ and $(0, 1, 2, 1)$. \end{description} \end{claim} \begin{proof} The proof of the three assertions in the claim follows easily from the fact that the surfaces being considered are embedded and stable, and that $Z$ is the homology boundary of these surfaces. \end{proof} Hence we will, from now on, speak of points with multiplicity $(0, 1, 0, 1)$ and $(0, 1, 2, 1)$. The two cases are illustrated in Figure $3.2$. \begin{figure} \begin{picture}(250,100) \put(30,90){\line(6,-5){100}} \put(185,90){\line(6,-5){100}} \put(30,5){\line(6,5){100}} \put(185,5){\line(6,5){100}} \put(66,45){0} \put(80,58){1} \put(91,46){0} \put(80,32){1} \put(215,45){0} \put(230,58){1} \put(246,46){2} \put(230,32){1} \end{picture} \caption{Crossing points with multiplicity $(0, 1, 0, 1)$ and $(0, 1, 2, 1)$.} \end{figure} \begin{remark} The previous claim implies that, for $t$ sufficiently small, the varifold associated with each embedded compact stable minimal surface \newline bounded by ${\alpha}{\cup}{\beta}(t)$ is actually in the set ${\cal V}({\alpha}, {\beta})$, whose meaning is the same as in the introduction.\end{remark} \begin{theorem} Let ${\Sigma}(t)$ be a compact stable embedded minimal surface with boundary given by the union of the curves ${\alpha}$ and ${\beta}(t)$. Then, for $t$ sufficiently close to $0$, the surface ${\Sigma}(t)$ has the following properties: \begin{description} \item{1.} In the complement of the union of small cylindrical neighborhoods $N(i)$ of the crossing points, the connected components of ${\Sigma}(t){\backslash}{\cup}N(i)$ are graphs over their projection to the plane $P_0$. 
\item{2.} In $N(i)$, ${\Sigma}(t)$ is either approximately helicoidal, or it is the union of two graphs over the plane $P_0$. \end{description} \end{theorem} \begin{proof} First notice that the theorem certainly holds if ${\Sigma}(t)$ consists of the union of the two planar regions bounded by ${\alpha}$ and ${\beta}(t)$. Hence we need to prove the theorem for connected ${\Sigma}(t)$. The proof will be divided into five steps. \underline{Step 1}: {\it For $t$ sufficiently close to $0$, the components of ${\Sigma}(t)$ away from the crossing points are graphs over $P_0$.} For each $1{\leq}i{\leq}n$, let $D_{i}(\epsilon)$ be a disk in $P_0$ with center $p_i$ and radius $\epsilon$, chosen so that the following two conditions are satisfied: \begin{description} \item(a) the disks $D_i$ are mutually disjoint; \item(b) for each $i$, ${D_i}{\cap}({\alpha}{\cup}{\beta})$ is topologically a ${\times}$. \end{description} Let $N={P_0}{\backslash}{\bigcup}_{i=1}^{n}{D_i(\epsilon)}$, and let $U=N{\times}{\R}$ be the vertical cylinder over $N$. Moreover, let ${\hat {\Sigma}}(t)={{\Sigma}(t)}{\cap}U$. Let us notice that along the curves ${\alpha}{\cap}{\hat {\Sigma}(t)}$ and ${\beta}(t){\cap}{\hat {\Sigma}(t)}$, the normal vector to ${\Sigma}(t)$ must become arbitrarily vertical as $t$ approaches $0$. To see this, for example along ${\beta}(t){\cap}{\hat {\Sigma}(t)}$, suppose $\Sigma(t)$ is vertical at a boundary point $p$ of ${\beta}(t){\cap}{\hat {\Sigma}(t)}$. Consider, above and away from ${\Sigma}(t)$, a piece of a strictly unstable vertical half catenoid whose boundary curves are a larger circle in the plane $P_0$ and a smaller circle in the plane $P_{t+{\epsilon}'}$ (this catenoid is a graph above the plane $P_0$, except along the smaller boundary circle).
Translate this catenoid piece vertically down until it makes first contact with ${\Sigma}(t)$ at some point $p$; the maximum principle implies first that $p$ is a boundary point both of the catenoid piece and of ${\Sigma}(t)$, and then that the two minimal surfaces ${\Sigma}(t)$ and the catenoid piece coincide in a small neighborhood of $p$. This contradiction shows that $\Sigma(t)$ cannot become vertical along its boundary, away from the crossing points. Moreover, locally the surface $\hat {\Sigma}(t)$ lies on one side of the vertical cylinder $(({\alpha}{\cup}{\beta}){\cap}{\hat {\Sigma}(t)}){\times}{\R}$, since otherwise one could translate a catenoid in such a way that the first point of contact would be an interior point, which contradicts the maximum principle. With the same notation as above, let us notice that the greatest lower bound of the angle $\theta(p,t)$ formed by ${\Sigma}(t)$ with an arbitrary catenoid piece intersecting ${\Sigma}(t)$ only in $p$ exists; in fact this infimum is given by the angle between the tangent plane to ${\Sigma}(t)$ at $p$ and the horizontal plane $P_0$. Moreover, for an arbitrary $p$ in $({\alpha}{\cup}{\beta}(t)){\cap}{\hat {\Sigma}(t)}$, the greatest lower bound of $\theta(p,t)$ approaches zero as $t$ gets closer to $0$. Let $$\theta_0(t)={\max}{\{}\theta(p,t) |{\,} p{\in}({\alpha}{\cup}{\beta}(t)){\cap}{\hat {\Sigma}(t)}{\}}.$$ The compactness of $({\alpha}{\cup}{\beta}(t)){\cap}{\hat {\Sigma}(t)}$ guarantees that $\theta_0(t)$ is well defined. Furthermore, what has been said above implies that $${\lim}_{t\to 0}{\theta_0}(t)=0,\ \ \ \ {\mbox{and therefore}}\ \ \ \ {\lim}_{t\to 0}{\tan}{\theta_0}(t)=0.$$ By R. Schoen's estimate \cite{Sc2} there exists a universal constant $c$ such that $|K|{\!}<{\!}\frac{c}{r^2}$, where $K$ is the Gaussian curvature at a point of a stable minimal surface and $r$ is the distance between that point and the boundary of the surface.
Now let us choose $t$ sufficiently close to $0$ so that ${\tan}{\theta_0}(t){\!}<{\!}{\min}{\{}{\frac{\pi}{32{\sqrt{c}}}},{\frac{1}{16}}{\}}$. Let $p$ be a point in $\hat {\Sigma}(t)$, and let $\hat p$ be the orthogonal projection of $p$ on the plane $P_0$. Let $r$ be the horizontal distance between $\hat p$ and ${\alpha}{\cup}{\beta}{\backslash}{{\bigcup}_{i=1}^{n}{D_i}}$. Let $Cyl_1$ be the part of the vertical cylinder $D({\hat p},{\frac{r}{2}}){\times}{\R}$ with height $2r{\tan}{\theta_0}$ and containing ${\hat {\Sigma}(t)}{\cap}{\partial}Cyl_1$ in its interior. Let $Cyl_2$ be the part of the vertical cylinder $D({\hat p},{\frac{r}{4}}){\times}{\R}$ with height $2r{\tan}{\theta_0}$ which is contained in $Cyl_1$. Let us suppose that our assertion in Step 1 is not true. It will then be possible to find a point $q$ in ${\Sigma}(t){\cap}Cyl_2$ whose normal vector ${\vec N}(q)$ is horizontal, namely ${\langle}{\vec N(q)}, {\vec e_3}{\rangle}=0$. It follows that there exists a unit-speed geodesic ${\omega}(s)$ in ${\Sigma}(t)$ such that ${\omega}(0)=q$, ${\omega}'(0)={\vec e_3}$, and, by Schoen's estimate, the curvature $|{\omega}''(s)|$ of ${\omega}(s)$ in $Cyl_1$ is bounded above by $\frac{2{\sqrt{c}}}{r}$. Notice that, since ${\tan}{\theta_0}<\frac{1}{16}$ (and since $|{\omega}'(s)|=1$ implies $|d{\omega}|{\approx}|ds|$), ${\omega}(s)$ lies in $D({\hat p},{\frac{r}{2}}){\times}{\R}$ for $0{\leq}s{\leq}{4}r{\tan}{\theta_0}$. Considering the estimate for the curvature of ${\omega}(s)$, the estimate $\frac{\pi}{32{\sqrt{c}}}$ for ${\tan}{\theta_0}$, and the integral $$\int_{0}^{4r{\tan}{\theta_0}}|{\omega}''(s)|{\,}ds,$$ we can conclude that the length of the curve ${\omega}'(s)$ in the unit sphere $S^2$ is less than $\frac{\pi}{4}$, for $0{\leq}s{\leq}4r{\tan}{\theta_0}$.
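Explicitly, here is a sketch of the computation being invoked: since $\omega$ has unit speed, the length of ${\omega}'$ in $S^2$ equals the total curvature of $\omega$, so that $${\mbox{Length}}_{S^2}({\omega}')=\int_{0}^{4r{\tan}{\theta_0}}|{\omega}''(s)|{\,}ds{\;}{\leq}{\;}\frac{2{\sqrt{c}}}{r}{\cdot}4r{\tan}{\theta_0}=8{\sqrt{c}}{\,}{\tan}{\theta_0}<8{\sqrt{c}}{\cdot}\frac{\pi}{32{\sqrt{c}}}=\frac{\pi}{4}.$$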
Since ${\omega}'(0)={\vec e_3}$, we have that $|{\langle}{\,}{\omega}'(s),{\vec e_1}{\,}{\rangle}|$ and $|{\langle}{\,}{\omega}'(s),{\vec e_2}{\,}{\rangle}|$ are both less than $\frac{\sqrt 2}{2}$, and that ${\langle}{\,}{\omega}'(s),{\vec e_3}{\,}{\rangle}$ is larger than $\frac{\sqrt 2}{2}$, for $0{\leq}s{\leq}4r{\tan}{\theta_0}$. Hence $${\langle}{\omega(4{r}{\tan}{\theta_0})}-{\omega}(0),{\vec{e_3}}{\rangle}= \int_{0}^{4{r}{\tan}{\theta_0}}{\langle}{\omega}'(s),{\vec e_3}{\rangle}ds> {\frac{\sqrt 2}{2}}4{r}{\tan}{\theta_0}>2{r}{\tan}{\theta_0}.$$ Likewise we have: $$|{\langle}{\omega}(4{r}{\tan}{\theta_0})-{\omega(0)},{\vec e_1}{\rangle}|< \frac{r{\sqrt 2}}{8},$$ and $$|{\langle}{\omega}(4{r}{\tan}{\theta_0})-{\omega(0)},{\vec e_2}{\rangle}|< \frac{r{\sqrt 2}}{8},$$ because of the condition ${\tan}{\theta_0}{\!}<{\!}\frac{1}{16}$. But this implies that $\omega(s)$ intersects one of the two horizontal disks of ${\partial}Cyl_1$ for some value of $s$ between $0$ and $4r{\tan}{\theta_0}$, which is impossible, given the way $Cyl_1$ was constructed. Hence $\vec N$ is never horizontal on $\hat {\Sigma}(t)$, which is therefore a union of graphs over $D({\hat p},\frac{r}{4})$. Moreover, notice that at most two sheets can lie over $D({\hat p},\frac{r}{4})$, otherwise ${\Sigma}(t)$ would not be embedded (at least along the boundary). If the surface is area-minimizing, then at most one sheet lies above each point, but if no assumption of area minimality is made, then there could be points above which there are two sheets. Let us remark that for $t\approx 0$ the above graphs are ``almost horizontal'', since the above argument can clearly be strengthened to show that, for any $x {\in} (0, 1)$, it is possible to make $|{\langle}\vec N{(q)}, \vec{e_3}{\rangle}| > x$ for each $q {\in} {\Sigma}(t)$, for $t$ sufficiently close to $0$.
\underline{Step 2}: {\it ${\Sigma}(t)$ is the union of two disjoint graphs in the neighborhood of each $p$ in ${\partial}{V_1}{\cap}{\partial}{V_2}$, where $V_1$ and $V_2$ are components of $V$ with multiplicities $1$ and $2$ respectively.} To see this, consider a small vertical cylinder $\cal C$ whose vertical axis contains $p$ and whose height is $t$. Moreover, let $\hat {\Sigma}(t) = {\Sigma}(t){\cap}{\cal C}$. Now homothetically expand ${\cal C}$, with coefficient of homothety $1/t$ and center of the homothety at $p$. The image of $\hat {\Sigma}(t)$ under the homothety converges, as $t{\to}0$, to a minimal surface having a simply connected component $S$ which is a stable minimal surface contained in a half-space and bounded by a straight line, $\ell$. The image $g(\ell)$ of $\ell$ via the Gauss map is either a single point or a great circle on $S^2$ containing the north and south poles of $S^2$, and near the boundary $\ell$ the image of $S$ via the Gauss map is entirely contained in one of the two hemispheres determined by $g(\ell)$, because of the way the barrier to get ${\Sigma}(t)$ was constructed (theorem $3.1$). Hence the image under the Gauss map of $S$ is entirely contained in that hemisphere, by the hypothesis of stability. By reflection with respect to the line $\ell$ one obtains a complete minimal surface containing a line and having total curvature between $- 4{\pi}$ and $0$. The only two complete minimal surfaces with total curvature $-4{\pi}$ are the catenoid and Enneper's surface \cite[p.40]{BaCo}. Since the catenoid does not contain a line of reflective symmetry, and the Gauss map of Enneper's surface does not satisfy our conditions, $S$ must be a half plane, and its image via the Gauss map must be a point.
Finally, notice that the points contained in the other simply connected component of the homothetic expansion of $\hat {\Sigma}(t)$ correspond to points which are contained in the interior of ${\Sigma}(t)$, and hence Schoen's curvature estimate applies to them. These observations complete the proof. \underline{Step 3}: {\it ${\Sigma}(t)$ is the union of two disjoint graphs in the neighborhood of each crossing point having multiplicity $(0, 1, 2, 1)$.} This follows from the previous two steps, and from the analysis of the possible liftings to ${\Sigma}(t)$ of small circles in $V$ around the point under consideration. Our next aim is to study ${\Sigma}(t)$ around a crossing point whose multiplicity is $(0, 1, 0, 1)$. \underline{Step 4}: {\it ${\Sigma}(t)$ is topologically a disk in the neighborhood of each crossing point with multiplicity $(0, 1, 0, 1)$.} We will show that in a closed cylindrical neighborhood $U$ of each crossing point with multiplicity $(0, 1, 0, 1)$, the compact surface ${\Sigma}(t){\cap}U$ has genus zero for $t$ sufficiently close to $0$. In order to do this, let us homothetically expand the cylindrical neighborhood $U$, with center at a point $p$ of maximum Gaussian curvature inside ${\Sigma}(t){\cap}U$ and expansion coefficient $\frac{1}{t}$. Let us denote by $\tilde U$ the expanded neighborhood, and notice that the expansion transforms the planes $P_0$ and $P_t$ into two new planes which have distance equal to $1$ from each other. Moreover the homothety takes the arcs ${\alpha}{\cap}U$ and ${\beta}(t){\cap}U$ to arcs $\tilde {\alpha}$ and $\tilde{\beta}(t)$ in $\tilde U$ which are nearly straight segments.
Let ${\partial}{\tilde{\alpha}} = {\{} a_1, a_2 {\}}$, and ${\partial}{\tilde{\beta}(t)} = {\{} b_1, b_2 {\}}$, with the convention that the orthogonal projections of $a_1$ and $b_1$ on $P_0$ lie in the boundary of the same multiplicity one component of the interior of $V$, and the orthogonal projections of $a_2$ and $b_2$ lie in a different multiplicity one component. Now let us join $a_1$ to $b_1$ and $a_2$ to $b_2$ by two geodesic arcs contained in the homothetic expansion $\tilde {\Sigma}(t)$ of ${\Sigma}(t)$. Because ${\Sigma}(t)$ is a graph over $P_0$ away from the crossing points with multiplicity $(0, 1, 0, 1)$, and because for each $x{\in}(0,1)$ we have $|{\langle}{\vec N}(q), {\vec e}_3{\rangle}|>x$ away from these crossing points, for $t{\approx}0$ (namely the normal vector to ${\Sigma}(t)$ in $q$ is almost vertical if $q$ is away from the crossing points with multiplicity $(0, 1, 0, 1)$), we may assume that the two geodesic arcs defined above project orthogonally on two different multiplicity one components of the interior of $V$, and hence do not intersect each other. We will now show that the piece $\bar {\Sigma}(t)$ contained in $\tilde {\Sigma}(t)$ and bounded by the loop formed by the union of the four curves $\tilde {\alpha}$, $\tilde {\beta}(t)$, and the two geodesic arcs previously defined, is a disk. Since $\partial({\Sigma}(t))$ is contained inside the boundary of the convex hull of ${\Sigma}(t)$, ${\Sigma}(t)$ separates its convex hull (which is simply connected) into two distinct types of regions, one associated with the ``$+$'' sign and the other with the ``$-$'' sign. Hence ${\Sigma}(t)$ is orientable, and consequently so is $\bar{\Sigma}(t)$. This implies that the Gauss map, from the oriented $\bar{\Sigma}(t)$ to the unit sphere $S^2$, is well defined.
The Gaussian image ${\nu}(t)$ of the boundary curve ${\partial}\bar{\Sigma}(t)$ is a curve that lies in a small neighborhood of the spherical region bounded by the union of two great semicircles joining the north and south poles of $S^2$. Since ${\Sigma}(t)$ is stable, $\bar{\Sigma}(t)$ is also stable. Because the Gaussian image of a stable minimal surface cannot contain a hemisphere, the image of $\bar{\Sigma}(t)$ under the Gauss map can only contain one of the two regions in the complement of this neighborhood in $S^2$, and must be disjoint from the other region. Since the winding number of the Gauss map around ${\partial}\bar{\Sigma}(t)$ is one, the Gauss map can only cover this region once. This implies that for $t$ sufficiently close to $0$ we have $$-2{\pi} {\leq} {\int}_{\bar{\Sigma}(t)}{K}{\,}dA < 0.$$ Now, the geodesic curvature $k_g$ is zero along the two almost straight arcs contained in $\partial \bar{\Sigma}(t)$, and along $\tilde {\alpha}$ and $\tilde{\beta}(t)$ the geodesic curvature is approximately equal to zero. The sum of the exterior angles ${\theta}_i$ where these smooth arcs intersect is between $0$ and $4{\pi}$. Therefore the Gauss-Bonnet theorem, $$2{\pi}{\,}{\chi}(\bar{\Sigma}(t))={\int}_{\bar{\Sigma}(t)}{K}{\,}dA+{\int}_{{\partial}\bar{\Sigma}(t)}{k_g}{\,}ds+{\sum}_{i}{\theta}_i,$$ together with the above bounds implies that the Euler characteristic of $\bar{\Sigma}(t)$ is either zero or one. Since ${\partial}\bar{\Sigma}(t)$ consists exactly of one curve, we can conclude that the Euler characteristic is $1$, and that $\bar{\Sigma}(t)$ is topologically a disk. Hence, in small neighborhoods of the crossing points with multiplicity $(0, 1, 0, 1)$, ${\Sigma}(t)$ is topologically a disk. \underline{Step 5}: {\it ${\Sigma}(t)$ is approximately helicoidal around crossing points having multiplicity $(0, 1, 0, 1)$.} Normalize ${\Sigma}(t){\cap}N$ by a homothety with center at a point $p$ of maximum Gaussian curvature, in such a way that ${\max}_{q{\in}{\Sigma}(t){\cap}N}\{|K(q)|\}=1$ on the normalized surface, which we shall denote by ${\breve {\Sigma}}(t)$.
Modulo a translation, we can suppose that the point $p{\in}\beta$ is the origin. Let us notice that ${\max}_{q\in{{\Sigma}(t)}{\cap}N}\{|K(q)|\}\to{\infty}$ on ${\Sigma}(t){\cap}N$ as $t$ approaches $0$, since the Gauss map $\nu$ is almost vertical on $\partial{{\Sigma}(t)}$, except near the crossing points where $\nu$ changes very quickly, and hence the modulus of $K={Jac}(d{\nu})$ must be large near the crossing points. Therefore the normalization ${\max}_{q\in{{\Sigma}(t)}{\cap}N}\{|K(q)|\}=1$ involves a dilation factor which becomes arbitrarily large as $t$ approaches $0$, and hence the two planar curves in the boundary of ${\breve {\Sigma}}(t)$ become arbitrarily straight, around the crossing points, as $t$ approaches $0$. A result of Anderson \cite{An} states that for each sequence of surfaces ${\{}{\breve{{\Sigma}}}_{t_{\ell}}{\}}_{{\ell}=1}^{\infty}$, $t_{\ell}\to{0}$, it is possible to extract a convergent subsequence (in the $C^2$ norm) $\{S_{ij}\}_{i=1}^{\infty}$ in the compact spherical neighborhood $B(0,j)$ with radius $j$ in ${\R}^3$. This sequence of surfaces can be chosen in such a way that ${\{}S_{ij}{\}}_{i = 1}^{\infty}$ is a subsequence of ${\{}S_{kl}{\}}_{k = 1}^{\infty}$ if $j{\,}>{\,}l$. The sequence ${\{}S_{mm}{\}}_{m = 1}^{\infty}$ is obtained by Cantor diagonalization, and converges in the $C^2$ norm in arbitrary compact regions to a surface $S$ having one or two boundary curves, which must be straight lines. Moreover Anderson's compactness theorem \cite{An} implies that $S$ is embedded. The limit surface $S$ is simply connected in any compact spherical neighborhood, and therefore is simply connected in ${\R}^3$. If the boundary of $S$ consists of two straight lines, then by the Schwarz reflection principle $S$ can be extended to a simply connected minimal surface in ${\R}^3$, properly embedded, without boundary and with infinite symmetry group.
By virtue of a theorem of Meeks and Rosenberg \cite{MeRo}, such an extended minimal surface is a plane or a helicoid. However, this surface contains two straight lines which do not intersect and are not parallel to each other, and hence $S$ is a piece of a helicoid, with one or two boundary lines. If $S$ had only one boundary line, then it could be extended via a rotation of angle $\pi$ around the straight boundary line, producing a properly embedded minimal surface. Because the convergence of the above subsequence to $S$ is with respect to the $C^2$ norm, the normal vectors are converging as well, and the extended surface has finite total curvature. By a result of L{\'{o}}pez and Ros such a surface must be a plane or a catenoid; however, since it is simply connected, it must be a plane, and hence $S$ would be a half-plane. Since the convergence to $S$ is of class $C^2$, we know that the normal vectors along $\partial S$ are not constant as $m\to {\infty}$, which implies that $S$ cannot be planar. Therefore $S$ must be a piece of a helicoid with two boundary lines. Since ${\hat {\Sigma}(t)}$ is a graph over $P_0$ and because we are considering a point with multiplicity $(0, 1, 0, 1)$, ${\hat {\Sigma}(t)}$ is a graph over the components of ${P_0}{\backslash}({\beta}(0){\cup}{\alpha})$ having the ``$+$'' sign. Hence $S$ is totally determined. If there were some subsequence which was not eventually contained in a given ${\epsilon}$-neighborhood of $S$ in a given sphere $B(0,j)$ in ${\R}^3$, then it would be possible to find a subsequence ${\{}{\breve {\Sigma}_{t_{\ell}}}{\}}_{{\ell}=1}^{\infty}$ converging to a point not belonging to $S$, which is a contradiction. This allows us to conclude that any sequence ${\{}{\breve {\Sigma}_{t_{\ell}}}{\}}_{\ell =1}^{\infty}$ such that $t_{\ell}\to 0$ eventually lies in a predetermined arbitrarily small $\epsilon$-neighborhood of $S$ inside each compact region of ${\R}^3$.
For an arbitrarily given $\epsilon$, let us choose $\ell$ big enough so that the surface ${\breve {\Sigma}_{t_{\ell}}}{\cap}{D_{j}(0)}$ is contained in an $\epsilon$-neighborhood of $S{\cap}{D_{j}(0)}$. For each pair of points $p$ and $q$, with $p$ in $\breve {\Sigma}_{t_{\ell}}$ with normal vector $\vec N(p)$ to $\breve {\Sigma}_{t_{\ell}}$ and $q$ in $S$ with normal vector $\vec N(q)$ to $S$ such that ${dist}(p,q)<\epsilon$, the estimate on the function $|K(q)|$ implies that $|{\langle}{\vec N(p)},{\vec N(q)}{\rangle}|$ is bounded away from zero. In fact, by choosing $\epsilon$ sufficiently close to zero and $\ell$ sufficiently large, we can make $|{\langle}{\vec N(p)},{\vec N(q)}{\rangle}|$ arbitrarily close to $1$. It follows that $\breve {\Sigma}_{t_{\ell}}$ is a union of graphs over $S$ for $\ell$ sufficiently large, and hence $\breve {\Sigma}_{t_{\ell}}$ is a one-sheeted graph over $S$ around a crossing point with multiplicity $(0,1,0,1)$, in accordance with the conclusion of Step 1. This allows us to conclude that ${\{}{\breve {\Sigma}_{t_{\ell}}}{\}}_{\ell =1}^{\infty}$ converges to $S$ in the $C^0$-norm as one-sheeted graphs, and that the normal vectors are convergent as well; hence we have in addition that ${\{}{\breve {\Sigma}_{t_{\ell}}}{\}}_{\ell =1}^{\infty}$ converges to $S$ in the $C^1$-norm in any compact sphere, and that ${\Sigma}(t)$ is approximately helicoidal in a neighborhood of each crossing point with multiplicity $(0,1,0,1)$, for $t$ sufficiently close to $0$. \end{proof} \begin{question} Does there exist, for $t$ sufficiently small, an unstable embedded minimal surface bounded by ${\alpha}{\cup}{\beta}(t)$, and having genus larger than that of any compact stable embedded minimal surface bounded by ${\alpha}{\cup}{\beta}(t)$? \end{question} The previous theorem described the surfaces ${\Sigma}(t)$ geometrically. The next theorem describes their topological properties.
Let $v_1$, $v_2$, $e_1$, $e_2$, $f_1$ and $f_2$ be defined as in section $1$. \begin{theorem} Let $V{\in}{\cal V}({\cal A}, {\cal B})$, and for $t$ sufficiently small let ${\Sigma}(V, t)$ be a minimal surface given in the statement of theorem $3.3$. Then the Euler characteristic of ${\Sigma}(V, t)$ is equal to $({v_1} + 2{v_2}) - ({e_1} + 2{e_2}) + ({f_1} + 2{f_2}) $. Since there is exactly one varifold ${V_0}{\in}{\cal V}({\cal A}, {\cal B})$ with $v_2 = e_2 = f_2 = 0$, there is exactly one topological type ${\Sigma}({V_0}, t)$ in ${\cal S}(t)$ with Euler characteristic equal to $v_1 - e_1 + f_1$. \end{theorem} \begin{proof} This follows easily from the description of ${\Sigma}(t)$, and from the observation that a cell decomposition of ${\Sigma}(t)$ can be computed via a cell decomposition of the unique varifold $V$ determined by ${\alpha}{\cup}{\beta}$, and corresponding to ${\Sigma}(t)$. Such a cell decomposition is equivalent to one that has: \begin{description} \item{---} number of vertices equal to $v_1 + 2v_2$. \item{---} number of edges equal to $e_1 + 2e_2$. \item{---} number of faces equal to $f_1 + 2f_2$. \end{description} The unique area-minimizing varifold gives rise to a minimal surface having largest genus, since for this varifold $v_2 = e_2 = f_2 = 0$. Hence the proof is finished. \end{proof} In sections $4$ and $5$ we will prove that the surface ${\Sigma}({V_0}, t)$ is the unique surface of least area in ${\cal S}(t)$, for $t$ sufficiently small. \begin{remark} The results in this paper generalize to the case of the boundary curves being two collections $\cal A$ and ${\cal B}(t)$ of disjoint smooth closed Jordan curves contained in the planes $P_0$ and $P_t$ respectively. Moreover these results also hold if $\cal A$ and ${\cal B}$ are contained in the interior of the half plane ${P'}_0 = {\{} (x, y, z) \in {\R}^3 |\ \ z = 0, x{\geq}0{\}}$.
In this case ${\cal B}(t)$ is the collection of curves in the plane ${P'}_t$, obtained by rotating $P_0$ counterclockwise around the $x$-axis by $t\frac{\pi}{2}$ radians. \end{remark} The above theorem provides a complete description of the stable surfaces ${\Sigma}(t)$, description which we will use to prove the uniqueness of the correspondence between ${\cal V}({\cal A}, {\cal B})$ and ${\cal S}(t)$ in the next two sections of this paper. \setcounter{chapter}{4} \setcounter{section}{-1} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{claim}{0} \setcounter{corollary}{0} \setcounter{condition}{0} \setcounter{question}{0} \setcounter{example}{0} \setcounter{remark}{0} \section{Proof of stability} The main objective of this section is to prove the following Theorem. \begin{theorem} Let ${\Sigma}(t)$ be an embedded minimal surface whose boundary is ${\alpha}{\cup}{\beta}(t)$. Suppose that ${\Sigma}(t)$ is described as in Theorem $3.6$. Then, for $t$ sufficiently close to $0$, ${\Sigma}(t)$ is stable. \end{theorem} \section{Preliminary lemmas} The proof of theorem $4.1$ will follow from some preliminary lemmas. To state these lemmas we will need some additional notation. Let $U$ be a vertical cylindrical neighborhood with height larger than $1$ of a helicoidal crossing point. Let $E(t)$ be the intersection of $U$ with ${\Sigma}(t)$. Suppose that the radius of $U$ is sufficiently small, so that $E(t)$ is stable; note that this can always be accomplished, for an appropriate choice of radius, because although ${\Sigma}(t)$ is approximately helicoidal in $U$, the boundary arcs of the helicoidal piece are not parallel, and the multiplicity of the crossing point contained in $U$ is $(0, 1, 0, 1)$; these conditions guarantee that the area of the Gaussian image of $E(t)$, counted with multiplicity, is less than $2\pi$. 
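The sufficiency of the bound on the area of the Gaussian image invoked above is the classical stability criterion of Barbosa and do Carmo; we recall it here for the reader's convenience (this restatement is ours, not part of the original argument):

```latex
% Stability criterion (Barbosa--do Carmo), recalled as an aside.
% For a minimal surface the Gaussian curvature satisfies K \leq 0, so the
% area of the spherical image of E(t), counted with multiplicity, equals
% the total absolute curvature:
\[
\operatorname{Area}\bigl(\vec N(E(t))\bigr)
   \;=\; \int_{E(t)} |K| \, dA \;<\; 2\pi
   \quad\Longrightarrow\quad
   E(t) \ \text{is stable},
\]
% i.e. the second variation of area is nonnegative for every normal
% variation vanishing on the boundary of E(t).
```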
Let $G(t)$ be one of the connected components of $\hat{\Sigma}(t)$ (defined in Part $1$ of the proof of Theorem $3.6$) adjacent to $E(t)$, and let ${\gamma}(t) = {\partial E(t)}{\cap}{\partial G(t)}$. Also recall that here $t$ approaches $0$. The lemma that we are about to prove guarantees that the behavior of a supposed Jacobi vector field on ${\Sigma}(t)$ is not ``too wild'' away from the crossing points. We remark here, once and for all, that all the Jacobi vector fields we consider are not identically zero, unless explicitly stated otherwise. \begin{lemma} Let ${\{}{\Sigma}(t){\}}_{t{\to}0}$ be a sequence of compact embedded minimal surfaces bounded by ${\alpha}{\cup}{\beta}(t)$, described as in Theorem $3.6$, and suppose that all $\Sigma(t)$ are unstable, with Jacobi vector fields given by ${u_t}:{\Sigma(t)}\to [0, {\infty})$, ${u_t}=0$ on ${\partial}{\Sigma}(t)$, $u_t$ of class $C^{\infty}$. Then $u_t$ is not a ``bump function'' on ${\gamma}(t)$, for $t$ sufficiently close to $0$. More precisely, if $p_t {\in} {\gamma}(t)$ is a local maximum for the function $u_t$, then $p_t$ is contained in an arc intersecting $E(t)$ along which the values attained by $u_t$ are very close to $u_t(p_t)$, differing from it by a quantity of the order of $1 - {\cos}({\vec N}(p_t), {\vec e}_3)$. \end{lemma} \begin{proof} Since it causes no ambiguity, once $t$ has been fixed we will drop the $(t)$ indicating dependence on $t$ in this proof. Let us consider the foliation ${\{}{\Sigma} + r{\vec{e_3}}{\}}_{0{\leq}r{\leq}1}$, and consider the surface ${\Sigma}'$ which is obtained by deforming $\Sigma$ via the Jacobi vector field ${\tilde u} = \frac{u}{{\max}u}$, namely the set of all the points $q+{\tilde u}(q){\vec N}(q)$, as $q$ varies in $\Sigma$. 
Let us consider the ``restrictions'' of the foliation and of the surface ${\Sigma}'$ to the subset $G$ of $\Sigma$, and let us denote by ${\{}G_r{\}}_{0 {\leq} r {\leq} 1}$ and $G'$ the transformed images of $G$ via the foliation and the Jacobi vector field, respectively. Because all the surfaces we are considering are compact, there exists the maximum value $\overline r$ for which the surface $G'$ intersects the foliation ${\{}P_r{\}}_{0 {\leq} r {\leq}1}$ (after such value the translated surfaces ${\{}G_r{\}}_{r {\geq} {\overline r}}$ are situated ``above $G'$''). By the maximum principle at a boundary point, such last point of contact must be a point $\overline p$ contained in the boundary curve ${\gamma} = (\partial G){\cap}(\partial E)$. Let $p$ be the point where ${\tilde u}_{|_{G}}$ attains its maximum value; since $G$ is an almost horizontal graph, there is no loss of generality in the assumption that the maximum value of ${\tilde u}_{|_{G}}$ is attained at $\overline p$ (in fact this can always be achieved by slightly deforming the curve $\gamma$). Let now $S(\eta)$ be a very thin strip which is a ${\eta}$-neighborhood of $\gamma$ with respect to the metric of $\Sigma$, ${\eta}{\approx}0$, and $p'$ a point contained in $(\partial S(\eta){\cap}E){\backslash}{\partial {\Sigma}}$ corresponding to the last intersection point of ${\{}(P{\cup}S)_r{\}}_{0 {\leq} r {\leq} 1}$ with $(P{\cup}S(\eta))'$. Let us notice that: $$ 1 - {\overline r} {\leq} 1 - {\cos}({\vec N}(p), {\vec e}_3), $$ and that certainly, if ${\overline r}'$ denotes the value of $0 {\leq} r {\leq} 1$ corresponding to $p'$, then we have $1 - {\overline r}' {\leq} 1 - {\overline r}$, which implies that $$ |(p+{\tilde u}(p){\vec N}(p))_z - (p'+{\tilde u}(p'){\vec N}(p'))_z| {\leq} 1 - {\cos}({\vec N}(p), {\vec e}_3), $$ where $*_z$ denotes the ordinary $z$-coordinate of a point $*$ in ${\R}^3$. 
Now, since the unit normal vector to $\Sigma$ approaches ${\pm}{\vec e}_3$ in a continuous fashion away from the crossing points, the above says actually that the difference between ${\tilde u}(p)$ and ${\tilde u}(p')$ is at most $1 - {\overline r} + |p_z - {p'}_z|$. Finally, by perturbing $\gamma$ slightly in all directions around $p$ and applying the above argument to these perturbed curves, the proof of the lemma is complete. \end{proof} The next lemma is inspired by the paper \cite{Ka} of N. Kapouleas. It will show that if a Jacobi vector field on a manifold is ``close'' (in some sense, which will be specified in the statement of the lemma) to a Jacobi vector field on another appropriate manifold, then the first eigenvalues of the Jacobi operator on the two manifolds are also ``close'' to each other. \begin{lemma} Let $U$ be a neighborhood of a helicoidal crossing point such that the surface $E(t)$ given by the intersection of ${\Sigma}(t)$ with $U$ is stable, and let $G(t)$ be a connected component of $\hat{\Sigma}(t)$ adjacent to $E(t)$. Let ${M_1} = {\Sigma}(t)$, and let ${M_2}(\mu)$ be ${{\Sigma}(t)}{\backslash}S(\mu)$, where $S(\mu)$ is a $\mu$-neighborhood of $\gamma = {\partial}{E(t)}{\cap}{\partial}{G(t)}$ in the metric of ${\Sigma}(t)$, ${\mu}{\approx} 0$. Let $f$ define a Jacobi vector field on $M_1$. Suppose that it is possible to deform the function $f{\colon}{M_1} \to {\R}$ to obtain a Jacobi vector field $G(f){\colon}{M_2}(\mu) \to {\R}$, which is zero on ${\partial}{M_2}(\mu){\cap}{\partial}{\Sigma}$ and such that it satisfies the three additional conditions: \begin{description} \item(i) ${\|}f{\|}_{\infty}{\leq}2{\|}G(f){\|}_{\infty}$; \item(ii) $|{\langle}f, {f{\rangle}_2} - {\langle}G(f), G(f){{\rangle}_2}| {\leq} {\epsilon}{\|}f{\|}_{\infty}{\|}f{\|}_{\infty}$; \item(iii) ${\|}{\nabla}(G(f)){\|}_2{\leq}(1+{\epsilon}){\|}{\nabla}f{\|}_2 +{\epsilon}{\|}f{\|}_{\infty}$. 
\end{description}Then if $\epsilon$ can be made arbitrarily small, the first eigenvalue of the Jacobi operator on $M_1$ can be made arbitrarily close to the first eigenvalue of the Jacobi operator on ${M_2}(\mu)$. \end{lemma} \includegraphics{picture1.ps} \begin{proof} Let us consider $$\lambda_1(M_1) = {\inf}_{\overline{f}{\in}C_0^{\infty}(M_1)} \frac{{\|}{\nabla}\overline{f}{\|}_2^2}{{\|}\overline{f}{\|}_2^2}.$$ Then for each $\epsilon > 0$, if $f {\in} C_0^{\infty}(M_1)$ is a Jacobi vector field, one has, as observed in paragraph 1.2, $${\|}{\nabla}f{\|}_2^2 < (\lambda_1(M_1) + \epsilon){\|}f{\|}_2^2.$$ Let us choose ${\epsilon}=\sqrt{{Area}(S(\mu))}$. Because of our hypotheses, $f$ induces a $G(f) {\in} C_0^{\infty}({M_2}(\mu))$ such that condition (ii) above is satisfied, namely $$|{\,}{\|}f{\|}_2^2 - {\|}G(f){\|}_2^2{\,}| {\leq} \epsilon{\|}f{\|}_{\infty}^2,$$ which implies $${\|}{\nabla}f{\|}_2^2 < (\lambda_1(M_1) +\epsilon)({\|}G(f){\|}_2+{\epsilon}{\|}f{\|}_{\infty})^2.$$ Moreover condition (iii) above implies $$(\frac{{\|}{\nabla}G(f){\|}_2 -\epsilon{\|}f{\|}_{\infty}}{1+\epsilon})^2{\leq} {\|}{\nabla}f{\|}_2^2,$$ which yields the conclusion: $$(\frac{{\|}{\nabla}G(f){\|}_2-\epsilon{\|}f{\|}_{\infty}}{1+\epsilon})^2< (\lambda_1(M_1) +\epsilon)({\|}G(f){\|}_2+{\epsilon}{\|}f{\|}_{\infty})^2.$$ If $\epsilon$ can be chosen arbitrarily small at the beginning, when $\epsilon \to 0$ one has $${\|}{\nabla}G(f){\|}_2^2 {\leq}\lambda_1(M_1){\|}G(f){\|}_2^2,$$ namely $\lambda_1({M_2}(\mu)){\leq}\lambda_1(M_1)$. Clearly in our case we can exchange the roles of $M_1$ and ${M_2}(\mu)$, since ${M_2}(\mu)$ is contained in $M_1$, and have that as ${\mu}\to 0$, ${\lambda}_1(M_1)\to {\lambda}_1({M_2}(\mu))$, and vice-versa. In particular, we have that if $M_1$ is unstable, so is ${M_2}(\mu)$, and vice-versa. 
\end{proof} In the next lemma we will show that, for the surfaces $M_1$ and ${M_2}(\mu)$ defined above, the construction of $G(f)$ having the properties required in Lemma $4.3$ is possible. \begin{lemma} Let $M_1 = {\Sigma}(t)$, and ${M_2}(\mu) = {{\Sigma}(t)}{\backslash}S(\mu)$, ${\mu} {\approx} 0$, and suppose that $f {\in} C_0^{\infty}(M_1)$ defines a Jacobi vector field on $M_1$. Then it is possible to define a not identically zero function $G(f) {\in} C_0^{\infty}({M_2}(\mu))$, in such a way that the conditions (i), (ii), (iii) of Lemma $4.3$ are satisfied. \end{lemma} \begin{proof} Let $f {\in} C_0^{\infty}(M_1)$ be the Jacobi vector field on $M_1$, and without loss of generality let us suppose that the maximum of $f$ is attained at some point belonging to $\gamma$ (otherwise the proof of the lemma is still valid, as one can easily see: this hypothesis takes care of the ``worst possible case''). Moreover let $\phi$ be a bump function of class $C^{\infty}$ on a thin strip $S$ containing $\gamma$ and having area less than ${\delta}^{2}$ such that $\phi$ is constantly equal to $1$ on $M_1{\backslash}S$, constantly equal to zero on $S'{\subset}S$ ($S'$ is a strip containing $\gamma$ and contained in $S$), and such that $|{\nabla}{\phi}| {\leq} {\frac{2}{{\delta}^{1/4}}}$. Define now $G(f) = {\phi}f$. We get: \vskip .5cm \begin{description} \item{(i)} ${\|}f{\|}_{\infty} < 2{\|}G(f){\|}_{\infty}$, because of the property shown in lemma $4.2$. \item{(ii)} $|{\langle}f, f{\rangle}-{\langle}G(f), G(f){\rangle}| {\leq} {\delta}^2{\|}f{\|}_{\infty}{\|}f{\|}_{\infty}$, namely \begin{eqnarray*} |{\|}f{\|}_2^2-{\|}{\phi}f{\|}_2^2| & = & |{\int}_{E{\cup}P}f^2-{\int}_{E{\cup}P}{\phi}^2{f}^2|\\ & \leq & {\int}_S|f^2-{\phi}^2f^2|\\ & = & {\int}_S|f^2||1-{\phi}^2|\\ & {\leq} & {\|}f^2{\|}_{\infty}{{\delta}^2}{\|}1-{\phi}^2{\|}_{\infty}\\ & {\leq} & {{\delta}^2}{\|}f{\|}_{\infty}^2. 
\end{eqnarray*} \item{(iii)} ${\|}{\nabla}(G(f)){\|}_2{\leq}(1+{\epsilon}){\|}{\nabla}f{\|}_2 +{\epsilon}{\|}f{\|}_{\infty}$, since \begin{eqnarray*} {\|}{\nabla}({\phi}f){\|}_2^2 & = & {\int}_{M_2}|{\nabla}({\phi}f)|^2\\ & \leq & {\int}_{M_2}(|{\nabla}{\phi}| |f|+|\phi| |{\nabla}f|)^2\\ & {\leq} & {\int}_{S}|{\nabla}{\phi}|^2 |f|^2+2{\int}_{S}|{\nabla}{\phi}| |f| |{\nabla}f|+{\int}_{M_2}|{\nabla}f|^2\\ & {\leq} & {\|}({\nabla}{\phi})^2 (f)^2{\|}_1+2{\frac{2}{{\delta}^{\frac{1}{4}}}}{\int}_{S}|f| |{\nabla}f|+{\int}_{M_2}|{\nabla}f|^2\\ & {\leq} & {\|}({\nabla}{\phi})^2{\|}_2{\|}f^2{\|}_2 +{\frac{4}{{\delta}^{\frac{1}{4}}}}{\|}(f)({\nabla}f){\|}_1+{\|}{\nabla}f{\|}_2^2\\ & {\leq} & ({\int}_{S}|{\nabla}{\phi}|^4)^{\frac{1}{2}}({\int}_{S}|f|^4)^{\frac{1}{2}} +{\frac{4}{{\delta}^{\frac{1}{4}}}}{\|}f{\|}_2{\|}{\nabla}f{\|}_2 +{\|}{\nabla}f{\|}_2^2\\ & {\leq} & ({\delta}^2 {\frac{16}{\delta}})^{\frac{1}{2}}({\delta}^2{\|}f{\|}_{\infty}^4)^{\frac{1}{2}} +{\frac{4}{{\delta}^{\frac{1}{4}}}}{\|}f{\|}_2{\|}{\nabla}f{\|}_2 +{\|}{\nabla}f{\|}_2^2\\ & {\leq} & (16{\delta})^{\frac{1}{2}}({\delta}^2{\|}f{\|}_{\infty}^4)^{\frac{1}{2}} +{\frac{4}{{\delta}^{\frac{1}{4}}}}({\delta}^2{\|}f{\|}_{\infty}^2)^{\frac{1}{2}}{\|} {\nabla}f{\|}_2+{\|}{\nabla}f{\|}_2^2\\ & = & 4{\delta}^{\frac{3}{2}}{\|}f{\|}_{\infty}^2 +{\frac{4{\delta}}{{\delta}^{\frac{1}{4}}}}{\|}f{\|}_{\infty}{\|}{\nabla}f{\|}_2 +{\|}{\nabla}f{\|}_2^2\\ & = & (2{\delta}^{\frac{3}{4}}{\|}f{\|}_{\infty}+{\|}{\nabla}f{\|}_2)^2. \end{eqnarray*} \end{description} The assertion hence follows by choosing ${\epsilon} {\leq} {\min}{\{}{{\delta}^2}, 2{\delta}^{\frac{3}{4}}{\}}$. \end{proof} In the proof of theorem $4.1$ we will denote, with the same notation adopted previously, ${\gamma} = {\partial}E_{t_n}{\cap}{\partial}P_{t_n}$. We will prove the theorem here by taking $M_1 = {\Sigma}_{t_n}$, and ${M_2}(\mu) = {\Sigma}_{t_n}{\backslash}S(\mu)$, which we suppose to be stable. 
However, since the number of crossing points is finite, the proof also holds if we take ${M_2}({\mu}_1, \ldots , {\mu}_{n'}) = {\Sigma}_{t_n}{\backslash}{\bigcup}_{j=1}^{n'}S({\mu}_j)$, ${n'}{\leq}n$, which we assume to be stable, and $S(\mu_j) = {\partial}(E_{t_n}(j)){\cap}{\partial}(P_{t_n})$, where $E_{t_n}(j)$ is a sufficiently small neighborhood of the crossing point $p_j$ in ${\Sigma}_{t_n}$. Let us now give a proof of theorem $4.1$, which we restate for easy reference. {\bf Theorem 4.1.} Let ${\Sigma}(t)$ be an embedded minimal surface with boundary ${\alpha}{\cup}{\beta}(t)$. Suppose that ${\Sigma}(t)$ is described as in theorem $3.6$. Then, for $t$ sufficiently close to $0$, ${\Sigma}(t)$ is stable. \begin{proof} Suppose that the assertion stated in the theorem is false. Then there would exist a sequence $t_n\to 0$, corresponding to which there would be a sequence of unstable minimal surfaces ${\Sigma}(t_n)$ with boundary ${\alpha}{\cup}{\beta}(t_n)$, and described according to theorem $3.6$. Hence there would exist a sequence of Jacobi vector fields $f_{t_n}$ defined on ${\Sigma}(t_n)$, all having the property proven in lemma $4.2$. Therefore, by induction and by lemmas $4.2$, $4.3$, $4.4$, it would be possible to fix a positive $\mu$ such that in a ${\mu}$-neighborhood (strip) of $\gamma$ one could define a bump function having bounded gradient which, when multiplied by $f_{t_n}$ would yield a new function $G(f_{t_n})$ defined on the stable part given by ${M_2}(\mu)$ (the fact that this would be possible for each $n$ follows from lemma $4.2$). But the Rayleigh quotient associated to such a function can be made (depending on $\mu$) arbitrarily close to a number which is strictly less than $2$, thus producing a contradiction. For the sake of completeness, let us notice here that lemma $4.2$ is of fundamental importance in this proof, because it ensures that none of the functions $G(f_{t_n})$ is the function identically equal to zero. 
\end{proof} \setcounter{chapter}{5} \setcounter{section}{-1} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{claim}{0} \setcounter{corollary}{0} \setcounter{condition}{0} \setcounter{question}{0} \setcounter{example}{0} \setcounter{remark}{0} \section{Proof of the Main Theorem} In this section we will put together the facts proven so far, to give a proof of the main theorem, and to give the exact number of ${\Sigma}(V, t)$ for $t$ sufficiently small. \section{Uniqueness} Let $Slab (t)$ be the slab determined by the planes $P_0$ and $P_t$. Let ${\cal V}({\alpha}, {\beta})$ be the finite collection of varifolds determined by $\alpha{\cup}{\beta}$. By the results proved in the previous sections, we know that for all values of $t$ sufficiently small, each varifold $V$ in ${\cal V}({\alpha}, {\beta})$ determines at least one compact stable embedded minimal surface ${\Sigma}(V, t)$ in ${\cal S}(t)$ bounded by ${\alpha}{\cup}{\beta}(t)$. We now show: \begin{theorem} The natural correspondence between ${\cal V}({\alpha}, {\beta})$ and ${\cal S}(t)$ is a well defined and one-to-one correspondence, in the sense that to each varifold in ${\cal V}({\alpha}, {\beta})$ corresponds one and only one minimal surface in ${\cal S}(t)$, for $t$ sufficiently small. \end{theorem} \begin{proof} The theorem will be proven by contradiction. Let ${\{}\overline{\Sigma^1}(t) \colon {0{\leq}t{\leq}{t^{\#} {\}}}}$ and ${\{}\overline{\Sigma^2}(t) \colon {0{\leq}t{\leq}{t^{\#} {\}}}}$ be two distinct families (for $t^{\#}$ sufficiently close to $0$) of minimal surfaces with the same boundary ${\alpha}{\cup}{\beta}(t)$, existing for all $0{\leq}t{\leq}{t^{\#}}$, and having the same limit varifold $V$. 
Let $M_1(t)$ be the unbounded connected component of the region $T$ of space given by the collection of points in the slab which are ``outside'' of $\overline{\Sigma^1}(t){\cup}\overline{\Sigma^2}(t)$, $R$ be the union of the bounded connected regions given by the points ``in between'' $\overline{\Sigma^1}(t)$ and $\overline{\Sigma^2}(t)$, and $M_2(t)$ be the closure of $T{\backslash}R$. Notice that $M_1(t)$ contains the truncated cylinder above $P{\backslash}V$, and that $M_2(t)$ is contained in a small neighborhood of the truncated cylinder above $V$. Notice that ${\partial}({M_1}{\cup}{M_2})$ strictly contains $\overline{\Sigma^1}(t){\cup}\overline{\Sigma^2}(t)$. Since $\overline{\Sigma^1}(t)$ and $\overline{\Sigma^2}(t)$ are stable by theorem $3.1$, applying the existence theorem by Meeks and Yau stated in section $3$ to $M_1$, we obtain the existence of a stable minimal surface ${\Sigma_a}(t)$ ``above'' $\overline{\Sigma^1}(t){\cup}\overline{\Sigma^2}(t)$. By the same theorem of Meeks and Yau, applied to the region $M_2$, we obtain the existence of a stable minimal surface ${\Sigma_b}(t)$ ``below'' $\overline{\Sigma^1}(t){\cup}\overline{\Sigma^2}(t)$. By construction, ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ are disjoint from each other. Moreover, for $t$ sufficiently close to $0$, the two stable minimal surfaces ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ obtained in this way can be described as stated in theorem $2.5$, namely as approximately helicoidal around the crossing points with multiplicity $(0, 1, 0, 1)$, and as almost horizontal graphs away from the crossing points with multiplicity $(0, 1, 0, 1)$, by virtue of theorem $4.1$. 
Hence ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ are homeomorphic to the $\overline{\Sigma^i}(t)$, and furthermore ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ are normal graphs above $\overline{\Sigma^i}(t)$, and hence over each other, namely there exists a function $h {\in} C_0^{\infty}({\Sigma_b}(t))$ such that: \protect\begin{equation}{\Sigma_a}(t)={\Sigma_b}(t)+h{\vec N}({\Sigma_b}(t)), \protect\end{equation}that is, for each $q{\in}{\Sigma_a}(t)$ there exists a unique ${q'}{\in}{\Sigma_b}(t)$ such that $$q={q'}+h(q'){\vec N}(q'),$$ where ${\vec N}(q')$ is the normal vector to ${\Sigma_b}(t)$ in $q'$. Hence ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ converge to the same limiting varifold, as $t{\to}0$. It also follows from $(5.1)$ that ${\Sigma_a}(t){\cup}{\Sigma_b}(t)$ bounds a product region, say ${\cal R}(t)$. Since ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ are normal graphs over each other, we know that the angle between the two normal vectors to ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$ at a boundary point is strictly between $0$ and $\pi$. Then by the minimax theorems due to Pitts and Rubinstein \cite{PiRu}, and generalized by Jost \cite{Jo} to the case of nonempty boundary, ${\alpha}{\cup}{\beta}(t)$ is also the boundary of an unstable embedded minimal surface ${\Sigma^*}(t)$, contained in $\cal R$, for all $t$ sufficiently small. We now wish to show that ${\Sigma^*}(t)$ is homeomorphic to ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$, and that ${\Sigma^*}(t)$ actually has the same geometric description as ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$, which will imply, for example, that $\lim_{t{\to}0}{\Sigma^*}(t) =\lim_{t{\to}0}{\Sigma_a}(t)=\lim_{t{\to}0}{\Sigma_b}(t)=V$. First notice that the connected components of the complement in ${\Sigma^*}(t)$ of the union of small neighborhoods of the crossing points, is the union of almost horizontal graphs, just like ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$. 
In fact, the area of each such connected component is almost equal to the area of the corresponding components of ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$. The proof of this fact follows from comparing the area of ${\Sigma^*}(t)$ with the maximum area of the family ${\Sigma_s}={\Sigma_b}(t)+sh{\vec N}({\Sigma_b}(t))$, indexed by $s{\in}[0, 1]$, which joins ${\Sigma_b}(t)$ to ${\Sigma_a}(t)$. The definition of minimax implies that the area of ${\Sigma^*}(t)$ cannot be larger than the maximum area of the family; but since every surface in the family is a graph over $\Sigma_b(t)$, we can estimate the area of ${\Sigma^*}(t)$ by that of ${\Sigma_a}(t)$ and ${\Sigma_b}(t)$, and the proof of the claim follows as in part $1$ of the proof of theorem $3.6$, since the area estimates just proven allow us to apply R. Schoen's curvature estimates. So we know that ${\Sigma^*}(t)$ is given by very flat graphs away from the crossing points. In the neighborhood of a crossing point with multiplicity $(0, 1, 0, 1)$, consider the part of $\Sigma^*(t)$ bounded by four geodesic arcs, constructed as in part $3$ of the proof of theorem $3.6$. Such a quadrilateral region has sum of the external angles strictly less than $4\pi$. By Nitsche's $4{\pi}$-theorem stated in section $2$, applied to an analytic smoothing having total curvature less than $4\pi$ of the quadrilateral defined by the geodesic arcs, we know that this quadrilateral region must bound a {\em stable} minimal surface which is topologically a disk. This means that $\Sigma^*(t)$ can be described as in theorem $3.6$, for all $t$ sufficiently small, and that ${\Sigma}^*(t)$ is a graph over ${\Sigma}_b(t)$. But then, for $t$ sufficiently close to $0$, $\Sigma^*(t)$ must be stable, by theorem $4.1$. This produces a contradiction and finishes the proof of the uniqueness theorem. 
\end{proof} \includegraphics{picture2.ps} \section{A bound on the number of ${\Sigma}(V, t)$'s} In this section we will derive a formula which gives an upper bound for the number of compact stable minimal surfaces bounded by a finite number of Jordan curves in close planes of ${\R}^3$. By theorem $5.1$, we know that once the multiplicity of the limiting varifold $V$ is fixed, there is a unique stable compact minimal surface bounded by ${\alpha}{\cup}{\beta}(t)$, for $t$ sufficiently close to $0$. So the number of stable compact embedded minimal surfaces will be determined once we are able to say exactly how many multiplicities are possible for the limiting varifold $V$. In order to understand this, let us assign to each connected component of ${P_0}{\backslash}({\alpha}\cup{\beta})$ the sign ``$+$'' or ``$-$'' in such a way that the unbounded component $C_{u}$ is given the ``$-$'' sign and adjacent components have opposite signs. Moreover notice that ${P_0}{\backslash}({\alpha}\cup{\beta})$ determines a finite number of varifolds, by assigning to each of its connected components one of the numbers $0$, $1$ or $2$ (note that the connected components having the ``$+$'' sign can only be assigned multiplicity one). So the multiplicity is totally determined for the components of $V$ with the ``$+$'' sign. The only places where different multiplicities are allowed are the components of $V$ with the ``$-$'' sign. Of course we only have two choices for the multiplicity of such components: zero or two. However we are not free to assign multiplicities arbitrarily, as $(1, 2, 1, 2)$ crossing points must be avoided. Let $f^i_{-}$ be the number of components of $V$ having ``$-$'' sign, and contained inside ${\cal R}(\alpha)\cap{\cal R}(\beta)$, and let $f^o_{-}$ be the number of components of $V$ having ``$-$'' sign, and contained outside ${\cal R}(\alpha)\cup{\cal R}(\beta)$. 
Then the above observations translate into the following \begin{corollary} Once the limiting cycle $Z$ is given, the number of stable compact minimal surfaces ${\Sigma}(t)$ such that ${\partial}({\Sigma}(t)) {\to} Z$ as $t{\to}0$, is bounded above by $$2^{f^i_{-}} + 2^{f^o_{-}}.$$ \end{corollary} \begin{remark} It would be interesting to get similar bounds on the number of unstable embedded minimal surfaces bounded by Jordan curves in close planes. \end{remark} \setcounter{chapter}{6} \setcounter{section}{-1} \setcounter{theorem}{0} \setcounter{lemma}{0} \setcounter{definition}{0} \setcounter{claim}{0} \setcounter{corollary}{0} \setcounter{condition}{0} \setcounter{question}{0} \setcounter{example}{0} \setcounter{remark}{0} \section{A nonexistence result} In this section we observe that our main theorem provides some evidence in support of a conjecture made by W. Meeks in \cite{Me1}. The conjecture states that there are no minimal surfaces of positive genus bounded by two convex curves in parallel planes of ${\R}^3$. One consequence of our main theorem is: \begin{corollary} There exist no stable minimal surfaces of positive genus bounded by two convex curves in parallel planes of ${\R}^3$, when the distance between the two planes is sufficiently small (or if the two planes are not parallel, but they intersect at a sufficiently small angle). \end{corollary} Previous results by R. Schoen \cite{Sc1}, W. Meeks and B. White \cite{MeWh1} \cite{MeWh2} \cite{MeWh3}, and earlier M. Shiffman \cite{Sh}, also provided evidence towards Meeks's conjecture. \begin{remark} If one could show that as $t$, the distance between the planes, increases, the number of {\em stable} minimal surfaces bounded by two convex curves does not increase, Meeks's conjecture would follow, at least in the stable case. \end{remark}
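To illustrate how the corollary's bound is applied, consider a toy configuration of our own choosing (not taken from the text above): let $\alpha$ and $\beta$ be two concentric circles, so that $P_0{\backslash}({\alpha}\cup{\beta})$ has three components, the unbounded one (sign $-$), an annulus (sign $+$), and an inner disk (sign $-$).

```latex
% Toy count (our illustrative example): two concentric boundary circles.
% The inner disk is the only bounded "-" component, and it lies inside
% R(alpha) and R(beta), so f^i_- = 1; no bounded "-" component lies
% outside R(alpha) u R(beta), so f^o_- = 0. The corollary then gives
\[
\#\{\text{stable compact } {\Sigma}(t)\}
  \;\leq\; 2^{f^i_{-}} + 2^{f^o_{-}}
  \;=\; 2^{1} + 2^{0} \;=\; 3 .
\]
```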
\section{Introduction} A diode is constructed of one subsystem with extra electrons which are paired with extra holes in another subsystem. By applying an external force, these pairs are broken and an electrical current is produced. Until now, this subject has received relatively little discussion. For example, research in the past few years has shown that graphene can build junctions with $3D$ or $2D$ semi-conductor materials which have rectifying characteristics and act as excellent Schottky diodes \cite{q2}. The main novelty of these systems is the tunable Schottky barrier height, a property which makes the graphene/semiconductor junction a great platform for the study of interface transport mechanisms, as well as for use in photo-detection \cite{q3}, high-speed communications \cite{q4}, solar cells \cite{q5}, chemical and biological sensing \cite{q6}, etc. Also, discovering an optimal Schottky interface of graphene on other materials like $Si$ is challenging, as the electrical transport depends on the graphene quality and the temperature. Such interfaces are of increasing research interest for integration in diverse electronic systems, being thermally and chemically stable in all environments, unlike standard metal/semiconductor interfaces \cite{q7}. \\ Previously, we have considered the process of formation of a holographic diode by joining polygonal manifolds \cite{q8}. In our model, first a big manifold with polygonal molecules is broken, and two child manifolds and one Chern-Simons manifold appear. Then, heptagonal molecules on one of the child manifolds repel electrons and pentagonal molecules on the other child manifold absorb them. Since the angle between atoms in a heptagonal molecule, measured with respect to its center, is smaller than in a pentagonal molecule, parallel electrons come nearer to each other and, by the Pauli exclusion principle, are repelled. Also, parallel electrons in pentagonal molecules become more distant and some holes emerge. 
Consequently, electrons move from the child manifold with heptagonal molecules towards the other child manifold with pentagonal molecules via the Chern-Simons manifold and a diode emerges. Also, we have discussed that this is a real diode that may be built by bringing heptagonal and pentagonal molecules among the hexagonal molecules of graphene. To construct this diode, two graphene sheets are needed which are connected through a tube. Molecules at the junction points on one side of the tube should have heptagonal shapes, and the molecules at the junction points on the other side of the tube should have pentagonal shapes. Heptagonal molecules repel and pentagonal molecules absorb electrons, and a current between the two sheets is produced. This current is very similar to the current which is produced between layers $N$ and $P$ in a solid-state system. This current is produced only from the side with heptagonal molecules towards the side with pentagonal molecules; the current from the sheet with pentagonal molecules towards the sheet with heptagonal molecules is zero. This characteristic can also be seen in a normal diode. \\ In this paper, we extend the consideration of holographic diodes to BIonic systems. A BIon is a system which consists of two polygonal manifolds connected by a Chern-Simons manifold. We will show that when two manifolds with two different types of polygonal molecules come close to each other, some massive photons appear. These photons join to each other and build Chern-Simons fields. These fields lead to the motion of electrons on the Chern-Simons manifold between the two manifolds and a BIonic diode emerges. The mass of these photons depends on the shape of the molecules on the manifolds and the length of the gap. From this point of view, our result is consistent with previous predictions for the mass of photons in \cite{qq8}. 
\\ The outline of the paper is as follows: In section \ref{o1}, we will show that by joining non-similar trigonal manifolds, a hexagonal diode emerges. In this diode, photons join to each other and form the Chern-Simons fields. In section \ref{o2}, we will consider the process of the formation of a BIonic diode from a manifold with heptagonal molecules, a manifold with pentagonal molecules and a Chern-Simons manifold. We will show that the photons exchanged between the manifolds are massive and that their mass depends on the length of the gap between the manifolds. The last section is devoted to summary and conclusion. \section{The hexagonal diode }\label{o1} In this section, we will show that a hexagonal diode can be built by joining two non-similar trigonal manifolds whose exchanged photons form Chern-Simons fields. These fields exert a force on electrons and lead to their motion between the two trigonal manifolds. Also, we will explain that if two similar trigonal manifolds join to each other, the exchanged photons cancel each other's effect and no diode emerges. \\ Previously, in ref \cite{q8}, for explaining graphene systems, we used concepts of string theory. In our model, scalar strings ($X$) are produced by pairing two electrons with up and down spins. Also, $A$ denotes the photon which is exchanged between electrons and $F$ is the photonic field strength. 
Now, we will extend this model to polygonal manifolds and trigonal manifolds and write the following action \cite{q8,D3,Df}: \begin{eqnarray} && S_{3}=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}+2\pi l_{s}^{2}G(F)}\nonumber\\&& G=(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{F_{1}..F_{n}}{\beta^{2}})) \nonumber\\&& F=F_{\mu\nu}F^{\mu\nu}\quad F_{\mu\nu}=\partial_{\mu}A_{\nu}- \partial_{\nu}A_{\mu}\label{f1} \end{eqnarray} where $g_{MN}$ is the background metric, $ X^{M}(\sigma^{a})$'s are scalar fields which are produced by pairing electrons, $\sigma^{a}$'s are the manifold coordinates, $a, b = 0, 1, ..., 3$ are world-volume indices of the manifold and $M,N=0, 1, ..., 10$ are eleven dimensional spacetime indices. Also, $G$ is the nonlinear field \cite{Df} and $A$ is the photon which is exchanged between electrons. Using the above action, the Lagrangian for a trigonal manifold can be written as: \begin{eqnarray} &&\L=-4\pi T_{tri} \int d^{3}\sigma \sqrt{1+(2\pi l_{s}^{2})^{2}G(F)+ \eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}\label{f2} \;, \end{eqnarray} where the prime denotes the derivative with respect to $\sigma$. To derive the Hamiltonian, we have to obtain the canonical momentum density for the photon. Since we are interested in electric solutions, we suppose that $F_{01}\neq 0$ while the other components of $F_{\alpha \beta}$ are zero. 
So, we have \begin{eqnarray} &&\Pi=\frac{\delta \L}{\delta \partial_{t}A^{1}}=-\frac{\sum_{n=1}^{3}\frac{n}{n!}(-\frac{F_{1}..F_{n-1}}{\beta^{2}})F_{01}}{\beta^{2}\sqrt{1+(2\pi l_{s}^{2})^{2}G(F)+ \eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}} \label{f3} \end{eqnarray} So, by replacing $\int d^{3}\sigma =\int d\sigma \sigma^{2}$, the Hamiltonian may be built as \cite{D3,Df}: \begin{eqnarray} &&H=4\pi T_{tri}\int d\sigma \sigma^{2}\Pi\partial_{t}A^{1}-\L=4\pi\int d\sigma [ \sigma^{2}\Pi(F_{01})-\partial_{\sigma}(\sigma^{2}\Pi)A_{0}]-\L \label{f4} \end{eqnarray} In the second step, we use integration by parts and obtain the term proportional to $\partial_{\sigma}A$. Using the constraint ($\partial_{\sigma}(\sigma^{2}\Pi)=0$), we obtain \cite{D3}: \begin{eqnarray} && \Pi=\frac{k}{4\pi \sigma^{2}}, \label{f5} \end{eqnarray} where $k$ is a constant. Substituting equation (\ref{f5}) in equation (\ref{f4}) and $\int d^{3}\sigma =\int d\sigma_{3} d\sigma_{2} d\sigma_{1} $ yields the following Hamiltonian: \begin{eqnarray} &&H_{1}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+(2\pi l_{s}^{2})^{2}\sum_{n}\frac{n}{n!}(-\frac{F_{1}..F_{n-1}}{\beta^{2}})+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O_{1} \nonumber\\&&O_{1}=\sqrt{1+\frac{k^{2}_{1}}{\sigma^{4}_{1}}} \label{f6} \end{eqnarray} To obtain the explicit form of the wormhole-like tunnel which goes out of the trigonal manifold, we need a Hamiltonian in terms of the separation distance between sheets. For this reason, we redefine the Lagrangian as: \begin{eqnarray} &&\L=-4\pi T_{tri} \int d\sigma \sigma^{2}\sqrt{1+(2\pi l_{s}^{2})^{2}\sum_{n}\frac{n}{n!}(-\frac{F_{1}..F_{n-1}}{\beta^{2}})+z'^{2}}O_{1}\label{f7} \end{eqnarray} With this new form of the Lagrangian, we repeat our previous calculations. 
We have \begin{eqnarray} &&\Pi=\frac{\delta \L}{\delta \partial_{t}A^{1}}=-\frac{\sum_{n}\frac{n(n-1)}{n!}(-\frac{F_{1}..F_{n-2}}{\beta^{2}})F_{01}F_{1}}{\beta^{2}\sqrt{1+(2\pi l_{s}^{2})^{2}\sum_{n}\frac{n}{n!} (-\frac{F_{1}..F_{n-1}}{\beta^{2}})+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}} \label{f8} \end{eqnarray} So the new Hamiltonian can be constructed as: \begin{eqnarray} &&H_{2}=4\pi T_{tri}\int d\sigma \sigma^{2}\Pi\partial_{t}A^{1}-\L=4\pi\int d\sigma [ \sigma^{2}\Pi(F_{01})-\partial_{\sigma}(\sigma^{2}\Pi)A_{0}]-\L \label{f9} \end{eqnarray} Again, we use integration by parts to obtain the term proportional to $\partial_{\sigma}A$. Imposing the constraint ($\partial_{\sigma}(\sigma^{2}\Pi)=0$), we obtain: \begin{eqnarray} && \Pi=\frac{k}{4\pi \sigma^{2}} \label{f10} \end{eqnarray} where $k$ is a constant. Substituting equation (\ref{f10}) in equation (\ref{f9}) yields the following Hamiltonian: \begin{eqnarray} &&H_{2}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+(2\pi l_{s}^{2})^{2}\sum_{n}\frac{n(n-1)}{n!}(-\frac{F_{1}..F_{n-2}}{\beta^{2}})+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}} O_{2}\nonumber\\ &&O_{2}=O_{1} \sqrt{1+\frac{k^{2}_{2}} {O_{1}\sigma^{4}_{2}}} \label{f11} \end{eqnarray} Repeating these calculations three times, we obtain \begin{eqnarray} &&H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O_{tot} \nonumber\\ && O_{tot}=\sqrt{1+\frac{k^{2}_{3}}{O_{2}\sigma^{4}_{3}}}\sqrt{1+\frac{k^{2}_{2}}{O_{1}\sigma^{4}_{2}}}\sqrt{1+\frac{k^{2}_{1}}{\sigma^{4}_{1}}} \nonumber\\ &&O_{2}=O_{1}\sqrt{1+\frac{k^{2}_{2}}{O_{1}\sigma^{4}_{2}}} \label{f12} \end{eqnarray} At this stage, we will make use of some approximations and obtain the simplest form of the Hamiltonian of the trigonal manifold: \begin{eqnarray} &&A\sqrt{1+\frac{k^{2}}{O_{1}\sigma^{4}}}\sqrt{1+\frac{k^{2}}{\sigma^{4}}}\simeq A\sqrt{1+\frac{k^{2}}{\sigma^{4}}}+A\frac{k^{2}}{2\sigma^{4}}\simeq 
\nonumber\\ && A+A\frac{k^{2}}{2\sigma^{4}}+A\frac{k^{2}}{2\sigma^{4}}=\frac{A}{2}(1+\frac{k^{2}}{\sigma^{4}})+ \frac{A}{2}(1+\frac{k^{2}}{\sigma^{4}})\simeq \nonumber\\ && 2A'\sqrt{1+\frac{k^{2}}{\sigma^{4}}}\Rightarrow O_{tot}=\frac{3}{2}\sqrt{1+\frac{k^{2}}{\sigma^{4}}}=\frac{3}{2} O_{1}\Rightarrow \nonumber\\ && H_{3}=4\pi T_{tri}\int d\sigma \sigma^{2} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O_{tot}=4\pi 3 T_{tri}\int d\sigma \sigma^{2} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O_{1}=\nonumber\\ &&4\pi 3 T_{tri}\int d\sigma \sigma^{2} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}\sqrt{1+\frac{k^{2}}{\sigma^{4}}}=\frac{3}{2}H_{linear}\nonumber\\ &&\nonumber\\ &&H_{linear}=4\pi 3 T_{tri}\int d\sigma \sigma^{2} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}\sqrt{1+\frac{k^{2}}{\sigma^{4}}}\label{f16} \end{eqnarray} where $A'=\frac{A}{2}$ is a constant that depends on the trigonal manifold action ($T_{tri}$) and other stringy constants. This equation shows that each pair of electrons on the two sides of the trigonal manifold is connected by a wormhole-like tunnel and forms a linear BIon; these BIons then join to each other and construct a nonlinear trigonal BIon. To construct a hexagonal manifold, we should put two trigonal manifolds near each other so that the directions of motion of electrons and photons on the two trigonal manifolds are opposite to each other (Fig.~1). In a symmetrical hexagonal manifold, the two photons cancel each other's effect and the total energy of the system becomes zero. Using expressions given in Eq. 
(\ref{f12}), we can write: \begin{eqnarray} && \sigma_{1}\rightarrow -\bar{\sigma}_{1} \quad \sigma_{2}\rightarrow -\bar{\sigma}_{2} \quad \sigma_{3}\rightarrow -\bar{\sigma}_{3} \nonumber\\&& \int d\sigma_{3} d\sigma_{2} d\sigma_{1}\rightarrow -\int d\bar{\sigma}_{3} d\bar{\sigma}_{2} d\bar{\sigma}_{1}\nonumber\\&& A_{0} \rightarrow \bar{A}_{0} \quad A_{1} \rightarrow \bar{A}_{1}\nonumber\\&&\Rightarrow H_{3}\rightarrow -\bar{H}_{3} \label{EQ1} \end{eqnarray} For a symmetrical hexagonal manifold, the Hamiltonians of the two trigonal manifolds cancel each other's effect and the total Hamiltonian of the system becomes zero. This system is completely stable and cannot interact with other systems. For a non-symmetrical hexagonal manifold, the fields are completely different and the two Hamiltonians cannot cancel each other's effect. Using equations (\ref{f1}) and (\ref{f12}), we have: \begin{eqnarray} &&H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O_{tot} \nonumber\\ && \neq \bar{H}_{3}=4\pi T_{tri}\int d\bar{\sigma}_{3} d\bar{\sigma}_{2} d\bar{\sigma}_{1} \sqrt{1+\eta^{ab}g_{MN} \partial_{a}\bar{X}^{M}\partial_{b}\bar{X}^{N}}\bar{O}_{tot} \nonumber\\ && \Rightarrow S=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}+2\pi l_{s}^{2}G(F)}\nonumber\\ && \neq \bar{S}=-T_{tri} \int d^{3}\bar{\sigma} \sqrt{\eta^{ab} g_{MN}\partial_{a}\bar{X}^{M}\partial_{b}\bar{X}^{N}+2\pi l_{s}^{2}G(\bar{F})} \label{EQ2} \end{eqnarray} Thus, the total Hamiltonian and action of the two trigonal manifolds can be obtained as: \begin{eqnarray} &&H_{6}^{tot}=H_{3}-\bar{H}_{3}\nonumber\\&& S_{6}^{tot}=S_{3}-\bar{S}_{3} \label{EQ3} \end{eqnarray} This equation shows that if two trigonal manifolds join to each other and form the hexagonal manifold, the Hamiltonian and the action of the hexagonal manifold are equal to the differences between the Hamiltonians and actions of the two trigonal manifolds. 
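As a numeric illustration of Eq.\ (\ref{EQ3}), one can check that the hexagon Hamiltonian vanishes for two identical trigonal manifolds and survives for two different ones. The sketch below sets the $X$-field term to zero, takes $T_{tri}=1$ and integrates $\sigma$ over a finite range $[1,5]$; all of these choices are illustrative assumptions, not fixed by the text.

```python
# Minimal numeric sketch of Eq. (EQ3): H_6 = H_3 - H_3bar.
# Assumptions (not from the text): X-field term dropped, T_tri = 1,
# radial range [1, 5], equal charges k on all three legs of each manifold.
import math

def O_tot(sigma, k1, k2, k3):
    # nested factors O_1, O_2, O_tot from Eq. (f12)
    O1 = math.sqrt(1.0 + k1**2 / sigma**4)
    O2 = O1 * math.sqrt(1.0 + k2**2 / (O1 * sigma**4))
    return (math.sqrt(1.0 + k3**2 / (O2 * sigma**4))
            * math.sqrt(1.0 + k2**2 / (O1 * sigma**4))
            * math.sqrt(1.0 + k1**2 / sigma**4))

def H3(k1, k2, k3, T_tri=1.0, n=2000):
    # trapezoidal quadrature of 4*pi*T_tri * sigma^2 * O_tot
    a, b = 1.0, 5.0
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        sigma = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * sigma**2 * O_tot(sigma, k1, k2, k3)
    return 4.0 * math.pi * T_tri * h * s

# identical trigonal manifolds: the hexagon Hamiltonian cancels exactly
H6_symmetric = H3(1.0, 1.0, 1.0) - H3(1.0, 1.0, 1.0)
# different charges k: a net Hamiltonian survives and the diode is active
H6_asymmetric = H3(1.0, 1.0, 1.0) - H3(0.5, 0.5, 0.5)
```

With identical inputs the two terms cancel to zero, while any mismatch of the $k_i$ leaves a finite remainder, mirroring the symmetric/non-symmetric distinction drawn above.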
A non-symmetrical hexagonal manifold has an active potential and can interact with other manifolds (see Fig.~2). \\ At this stage, we can assert that the photons exchanged between the two trigonal manifolds produce Chern-Simons fields. To write our model in terms of concepts of supergravity, we should define G- and C-fields. G-fields with four indices are constructed from two strings, and C-fields with three indices are produced when three indices of a G-field are placed on one manifold and one index is located on another manifold (Figure 3). We can define the G- and Cs-fields as follows: \begin{eqnarray} && G_{IJKL}\approx F_{[IJ}F_{KL]} \nonumber\\&& Cs_{IJK}= \epsilon^{IJK} F_{IJ}A_{K} \label{EQ4} \end{eqnarray} To obtain the G- and Cs-fields, we will assume that two spinors with up and down spins couple to each other and exchange photons ($X^{M}\rightarrow A^{M}\psi_{\downarrow}\psi_{\uparrow}$, $X^{0}\rightarrow t$). We also assume that the spinors are functions only of the coordinates ($\sigma$, $t$). 
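To fix index conventions, the Cs-field of Eq.\ (\ref{EQ4}) can be written out explicitly in three dimensions. The following sketch is our own illustration (the sample $F$ and $A$ are arbitrary): it sums $\epsilon^{IJK}F_{IJ}A_{K}$ over all index values and checks it against the expanded form $2(F_{23}A_{1}+F_{31}A_{2}+F_{12}A_{3})$, in which each independent component of the antisymmetric $F$ appears twice.

```python
def levi_civita(i, j, k):
    # totally antisymmetric symbol for indices 0, 1, 2; zero on repeats
    if {i, j, k} != {0, 1, 2}:
        return 0
    return (j - i) * (k - i) * (k - j) // 2

def chern_simons_density(F, A):
    # Cs = eps^{IJK} F_IJ A_K of Eq. (EQ4), summed over all index values
    return sum(levi_civita(i, j, k) * F[i][j] * A[k]
               for i in range(3) for j in range(3) for k in range(3))

# sample antisymmetric field strength F_IJ and potential A_K (arbitrary)
F = [[0.0, 1.0, -2.0],
     [-1.0, 0.0, 3.0],
     [2.0, -3.0, 0.0]]
A = [0.5, -1.0, 2.0]

cs = chern_simons_density(F, A)
# expanded form: each independent component of F contributes twice
cs_expanded = 2.0 * (F[1][2] * A[0] + F[2][0] * A[1] + F[0][1] * A[2])
```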
Using equations (\ref{f3}), (\ref{f5}) and (\ref{f12}), we obtain: \begin{eqnarray} && \Pi=\frac{k}{4\pi \sigma^{2}}\nonumber\\&& =\frac{\sum_{n=1}^{3}\frac{n}{n!} (-\frac{\bar{F}_{1}..\bar{F}_{n-1}} {\beta^{2}})\bar{F}_{01}}{\beta^{2}\sqrt{1 + ([\bar{A}^{M}\bar{A}_{M}(\psi_{1,\downarrow} \psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})]')^{2}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))}} \nonumber\\&& H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1 + ([A^{M}A_{M}(\psi_{1,\downarrow} \psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})]')^{2} }O_{tot}\nonumber\\&& H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1 + ([A^{M}A_{M}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})]')^{2}}\times\nonumber\\ && \sqrt{1+\frac{1}{O_{2}}(\frac{\sum_{n=1}^{3}\frac{n}{n!} (-\frac{\bar{F}_{1}..\bar{F}_{n-1}} {\beta^{2}})\bar{F}^{3}_{01}}{\beta^{2}\sqrt{1 + ([\bar{A}^{M}\bar{A}_{M}(\psi_{1,\downarrow} \psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})]')^{2}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))}})^{2}}\times\nonumber\\&& \sqrt{1+\frac{1}{O_{1}}(\frac{\sum_{n=1}^{3}\frac{n}{n!} (-\frac{\bar{F}_{1}..\bar{F}_{n-1}} {\beta^{2}})\bar{F}_{01}^{2}}{\beta^{2}\sqrt{1 + ([\bar{A}^{M}\bar{A}_{M}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow} \psi_{2,\uparrow})]')^{2}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))}})^{2}}\times\nonumber\\&& \sqrt{1+(\frac{\sum_{n=1}^{3}\frac{n}{n!} (-\frac{\bar{F}_{1}..\bar{F}_{n-1}} {\beta^{2}})\bar{F}_{01}^{1}}{\beta^{2}\sqrt{1 + ([\bar{A}^{M}\bar{A}_{M}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow} \psi_{2,\uparrow})]')^{2}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))}})^{2}}\label{EQ5} \end{eqnarray} where we have denoted the photons exchanged on the trigonal manifold by ($A,F$) and the exchanged 
photons in the gap between the two trigonal manifolds by ($\bar{A},\bar{F}$). By using the Taylor expansion method and substituting the results of (\ref{EQ4}) in equation (\ref{EQ5}), we obtain: \begin{eqnarray} && H_{tot}^{6} = H_{3} - \bar{H}_{3} \approx \nonumber\\&& (4\pi T_{tri})[1 + (\frac{2\pi l_{s}^{2}}{\beta^{2}})\bar{F}_{[IJ}\bar{F}_{KL]}\bar{F}^{[IJ}F^{KL]} +(\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})' \epsilon^{IJK} \bar{F}_{IJ}A_{K}\bar{F}_{[IJ}\bar{F}_{KL]}\bar{F}^{[IJ}\bar{F}^{KL]} \nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} \bar{F}_{[IJ}\bar{F}_{KL}\bar{F}_{MN]}\bar{F}^{[IJ}F^{KL}\bar{F}^{MN]} - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})' \epsilon^{IJK} \bar{F}_{IJ}A_{K} \bar{F}_{[IJ}\bar{F}_{KL}\bar{F}_{MN]}\bar{F}^{[IJ}F^{KL}\bar{F}^{MN]} +....]\nonumber\\&&- (4\pi T_{tri})[1 + (\frac{2\pi l_{s}^{2}}{\beta^{2}})\bar{F}_{[IJ}\bar{F}_{KL]}\bar{F}^{[IJ}F^{KL]} +(\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})' \epsilon^{IJK} \bar{F}_{IJ}A_{K}\bar{F}_{[IJ}\bar{F}_{KL]}\bar{F}^{[IJ}\bar{F}^{KL]} \nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} F_{[IJ}F_{KL}F_{MN]}F^{[IJ}F^{KL}F^{MN]} - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})' \epsilon^{IJK} F_{IJ}\bar{A}_{K} F_{[IJ}F_{KL}F_{MN]}F^{[IJ}F^{KL}F^{MN]} +....]\nonumber\\&& =(4\pi T_{tri}) [1+(\frac{2\pi l_{s}^{2}}{\beta^{2}}) [\bar{G}_{IJKL}\bar{G}^{IJKL}-G_{IJKL}G^{IJKL}]\nonumber\\&& + (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKL} \bar{G}^{IJKL}-\bar{Cs} G_{IJKL} G^{IJKL} ]\nonumber\\&& \nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} [\bar{G}_{IJKLMN}\bar{G}^{IJKLMN}-G_{IJKLMN}G^{IJKLMN}]\nonumber\\&& - (\frac{(2\pi 
l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKLMN} \bar{G}^{IJKLMN}-\bar{Cs} G_{IJKLMN} G^{IJKLMN} ]+.........] \label{EQ6} \end{eqnarray} This equation shows that the exchanged photons join to each other and build Chern-Simons fields. These fields make a bridge between the two trigonal manifolds and produce the BIonic diode (Figure 4). For two similar trigonal manifolds, the total Hamiltonian of the BIon is zero, while for two different trigonal manifolds a BIon emerges. This BIon is a bridge for transferring the energy of one manifold to the other. At this stage, we can obtain the mass of the photons exchanged between the two trigonal manifolds. The length of a photon is related to the separation distance between electrons, i.e.\ the length of the Chern-Simons manifold, and the mass of the photon depends on the coupling between electrons ($m^{2}=(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})$). The equation of motion for $[A^{M}A_{M}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})]'$ which is extracted from the Hamiltonian in (\ref{EQ5}) is \begin{eqnarray} && A^{M}A_{M}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})\rightarrow m_{photon}^{2}l_{Chern-Simons} \nonumber\\&&[m_{photon}^{2}l_{Chern-Simons}]'= (\frac{[O_{tot}(\sigma)^{2}-\bar{O}_{tot}(\sigma)^{2}]}{[O_{tot} (\sigma_{0})^{2}-\bar{O}_{tot}(\sigma_{0})^{2}]}-1)^{-1/2} \label{EQ7} \end{eqnarray} Solving this equation, we obtain: \begin{eqnarray} &&[m_{photon}^{2}]=\frac{1}{l_{Chern-Simons}} \int_{\sigma}^{\infty} d\sigma'(\frac{[O_{tot}(\sigma')^{2}-\bar{O}_{tot}(\sigma')^{2}]}{[O_{tot}(\sigma_{0})^{2}-\bar{O}_{tot}(\sigma_{0})^{2}]}-1)^{-1/2} \label{EQ8} \end{eqnarray} Eq. (\ref{EQ8}) shows that the photonic mass depends on the length of the Chern-Simons manifold and also on the length of the trigonal manifolds. 
This result is in agreement with previous predictions in \cite{qq8} that the photonic mass depends on the parameters of the gap between two systems. \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig1.jpg} \end{tabular} \end{center} \caption{A symmetric hexagonal manifold is formed by joining two similar trigonal manifolds. The directions of photons on the two trigonal manifolds are opposite to each other and they cancel each other's effect. } \end{figure*} \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig2.jpg} \end{tabular} \end{center} \caption{A non-symmetric hexagonal manifold is formed by joining two different trigonal manifolds. The directions of photons on the two trigonal manifolds are opposite to each other; however, they cannot cancel each other's effect. } \end{figure*} \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig3.jpg} \end{tabular} \end{center} \caption{G-fields and Cs-fields are formed by joining exchanged photons. } \end{figure*} \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig4.jpg} \end{tabular} \end{center} \caption{A hexagonal diode consisting of two trigonal manifolds connected by a Chern-Simons manifold.} \end{figure*} \section{The BIonic diode }\label{o2} In this section, we will construct the BIonic diode by connecting a pentagonal and a heptagonal manifold through a Chern-Simons manifold. The energy and Hamiltonian of the pentagonal manifold have the opposite sign with respect to the energy and Hamiltonian of the heptagonal manifold. Consequently, the pentagonal manifold absorbs electrons and the heptagonal manifold repels them. \\ A pentagonal manifold can be built from two trigonal manifolds with a common vertex (see Figure 5). Consequently, both trigonal manifolds share a common photonic field. To avoid counting this photon twice, we remove it from one of the trigonal manifolds. 
We have: \begin{eqnarray} && S_{5}^{tot}=S_{3}-\bar{S}_{2}\nonumber\\&& H_{5}^{tot}=H_{3}-\bar{H}_{2} \label{EQ9} \end{eqnarray} Following the mechanism of the previous section, we obtain the following actions: \begin{eqnarray} && S_{3}=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{F_{1}..F_{n}}{\beta^{2}}))} \nonumber\\&& \bar{S}_{2}=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}\bar{X}^{M}\partial_{b}\bar{X}^{N}+2\pi l_{s}^{2}(\sum_{n=1}^{2}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))} \label{EQ10} \end{eqnarray} and the following Hamiltonians: \begin{eqnarray} &&H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O^{3}_{tot} \nonumber\\ && \neq \bar{H}_{2}=4\pi T_{tri}\int d\bar{\sigma}_{3} d\bar{\sigma}_{2} d\bar{\sigma}_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}\bar{X}^{M} \partial_{b}\bar{X}^{N}}\bar{O}^{2}_{tot} \nonumber\\ && O^{3}_{tot}=\sqrt{1+\frac{k^{2}_{3}}{O_{2}\sigma^{4}_{3}}} \sqrt{1+\frac{k^{2}_{2}}{O_{1}\sigma^{4}_{2}}}\sqrt{1+\frac{k^{2}_{1}}{\sigma^{4}_{1}}}\nonumber\\ &&\bar{O}^{2}_{tot}=\bar{O}_{1}\sqrt{1+\frac{k^{2}_{2}}{\bar{O}_{1}\bar{\sigma}^{4}_{2}}} \label{EQ11} \end{eqnarray} After doing some algebra on the above Hamiltonians and using the mechanism in (\ref{EQ6}), we obtain: \begin{eqnarray} && H_{tot}^{5} = H_{3} - \bar{H}_{2} \approx \nonumber\\&& -(4\pi T_{tri}) [1+(\frac{2\pi l_{s}^{2}}{\beta^{2}}) [\bar{G}_{IJKL}\bar{G}^{IJKL}-G_{IJKL}G^{IJKL}]\nonumber\\&& + (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKL} \bar{G}^{IJKL}-\bar{Cs} G_{IJKL} G^{IJKL} ]\nonumber\\&& \nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} [\bar{G}_{IJKLMN}\bar{G}^{IJKLMN}]\nonumber\\&& - (\frac{(2\pi 
l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKLMN} \bar{G}^{IJKLMN} ]+.........] \label{EQ12} \end{eqnarray} This equation shows that, similar to the hexagonal manifold, the photons exchanged between the trigonal manifolds in the pentagonal manifold form Chern-Simons fields; however, the pentagonal manifold has fewer Chern-Simons and GG-fields. This is because the pentagonal manifold has fewer sides than the hexagonal manifold and consequently exchanges fewer photons. \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig5.jpg} \end{tabular} \end{center} \caption{A pentagonal manifold is formed by joining two trigonal manifolds with a common vertex. } \end{figure*} Also, using the Hamiltonians in equation (\ref{EQ11}) and assuming all coordinates are the same ($\sigma_{1} = \sigma_{2} =\sigma_{3}$), we obtain: \begin{eqnarray} && E_{tot}^{5} = H_{3} - \bar{H}_{2} \approx 4k\pi T_{tri} [\frac{1}{\sigma^{5}}- \frac{1}{\sigma^{3}}] \nonumber\\&& F=- \frac{\partial E}{\partial \sigma}=4k\pi T_{tri} [\frac{1}{\sigma^{6}}- \frac{1}{\sigma^{4}}] < 0 \label{EQ13} \end{eqnarray} This equation shows that the force which is applied by a pentagonal manifold to an electron is attractive (for $\sigma>1$). Thus this manifold attracts electrons. In fact, a pentagonal manifold should be connected to another manifold to obtain the needed electrons. In the next step, we consider the behaviour of heptagonal manifolds. \\ A heptagonal manifold is formed by joining three trigonal manifolds, two of which have two common vertexes. These trigonal manifolds build a system with four vertexes and four fields (see Figures 6 and 7). 
Thus, we can write: \begin{eqnarray} && S_{7}^{tot}=S_{3}-\bar{S}_{3-3}\nonumber\\&& H_{7}^{tot}=H_{3}-\bar{H}_{3-3} \label{EQ14} \end{eqnarray} Using the method of the previous section, we obtain the following actions \begin{eqnarray} && S_{3}=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}+2\pi l_{s}^{2}(\sum_{n=1}^{3}\frac{1}{n!}(-\frac{F_{1}..F_{n}}{\beta^{2}}))} \nonumber\\&& \bar{S}_{3-3}=-T_{tri} \int d^{3}\sigma \sqrt{\eta^{ab} g_{MN}\partial_{a}\bar{X}^{M}\partial_{b}\bar{X}^{N}+2\pi l_{s}^{2}(\sum_{n=1}^{4}\frac{1}{n!}(-\frac{\bar{F}_{1}..\bar{F}_{n}}{\beta^{2}}))} \label{EQ15} \end{eqnarray} and the following Hamiltonians: \begin{eqnarray} &&H_{3}=4\pi T_{tri}\int d\sigma_{3} d\sigma_{2} d\sigma_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}X^{M}\partial_{b}X^{N}}O^{3}_{tot} \nonumber\\ && \neq \bar{H}_{3-3}=4\pi T_{tri}\int d\bar{\sigma}_{3} d\bar{\sigma}_{2} d\bar{\sigma}_{1} \sqrt{1+\eta^{ab}g_{MN}\partial_{a}\bar{X}^{M} \partial_{b}\bar{X}^{N}}\bar{O}^{3-3}_{tot} \nonumber\\ && O^{3}_{tot}=\sqrt{1+\frac{k^{2}_{3}}{O_{2}\sigma^{4}_{3}}}\sqrt{1+\frac{k^{2}_{2}}{O_{1}\sigma^{4}_{2}}}\sqrt{1+\frac{k^{2}_{1}} {\sigma^{4}_{1}}}\nonumber\\ &&\bar{O}^{3-3}_{tot}=\sqrt{1+\frac{k^{2}_{4}}{\bar{O}_{3}\bar{\sigma}^{4}_{4}}}\sqrt{1+\frac{k^{2}_{3}}{\bar{O}_{2}\bar{\sigma}^{4}_{3}}} \sqrt{1+\frac{k^{2}_{2}}{\bar{O}_{1}\bar{\sigma}^{4}_{2}}}\sqrt{1+\frac{k^{2}_{1}}{\bar{\sigma}^{4}_{1}}} \label{EQ16} \end{eqnarray} Using the Taylor series in the above Hamiltonians and applying the method in (\ref{EQ6}) yields: \begin{eqnarray} && H_{tot}^{7} = H_{3} - \bar{H}_{3-3} \approx \nonumber\\&& -(4\pi T_{tri}) [1+(\frac{2\pi l_{s}^{2}}{\beta^{2}}) [\bar{G}_{IJKL}\bar{G}^{IJKL}-G_{IJKL}G^{IJKL}]\nonumber\\&& + (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKL} \bar{G}^{IJKL}-\bar{Cs} G_{IJKL} G^{IJKL} ]\nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} 
[\bar{G}_{IJKLMN}\bar{G}^{IJKLMN}-G_{IJKLMN}G^{IJKLMN}]\nonumber\\&& - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKLMN} \bar{G}^{IJKLMN}-\bar{Cs} G_{IJKLMN} G^{IJKLMN} ] \nonumber\\&& -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{4} [G_{IJKLMNYZ}G^{IJKLMNYZ}]\nonumber\\&& - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{4}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ \bar{Cs} G_{IJKLMNYZ} G^{IJKLMNYZ} ]+.........] \label{EQ17} \end{eqnarray} This equation indicates that, as for the hexagonal and pentagonal manifolds, the photons exchanged between the trigonal manifolds in the heptagonal manifold form Chern-Simons fields; however, the heptagonal manifold has more Chern-Simons and GG-fields. This is because the heptagonal manifold has more sides than the hexagonal manifold and consequently exchanges more photons. Similarly to the pentagonal manifold, using the Hamiltonians in equation (\ref{EQ16}) and assuming all coordinates are the same ($\sigma_{1} = \sigma_{2} =\sigma_{3}$), we obtain: \begin{eqnarray} && E_{tot}^{7} = H_{3} - \bar{H}_{3-3} \approx 4k\pi T_{tri} [\frac{1}{\sigma^{5}}- \frac{1}{\sigma^{9}}] \nonumber\\&& F=- \frac{\partial E}{\partial \sigma}=4k\pi T_{tri} [\frac{1}{\sigma^{6}}- \frac{1}{\sigma^{10}}] > 0 \label{EQ18} \end{eqnarray} This equation indicates that the force which is applied by a heptagonal manifold to an electron is repulsive (for $\sigma>1$). Thus this manifold repels electrons. In fact, a heptagonal manifold should be connected to a pentagonal manifold to give its extra electrons to it. \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig7.jpg} \end{tabular} \end{center} \caption{ Two trigonal manifolds with two common vertexes. 
} \end{figure*} \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig6.jpg} \end{tabular} \end{center} \caption{ A heptagonal manifold is formed by joining three trigonal manifolds, two of which have two common vertexes. } \end{figure*} A BIonic diode can be constructed from a pentagonal manifold which is connected to a heptagonal manifold via Chern-Simons fields (see Figure 8). Using the Hamiltonians in (\ref{EQ12}) and (\ref{EQ17}), we obtain: \begin{eqnarray} && H_{tot}^{Diode} = H_{tot}^{5} + H_{tot}^{7} \approx \nonumber\\&& -(4\pi T_{tri}) [ -(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{3} [\bar{G}_{IJKLMN}\bar{G}^{IJKLMN}]\nonumber\\&& - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{3}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ Cs \bar{G}_{IJKLMN} \bar{G}^{IJKLMN} ] \nonumber\\&&-(\frac{2\pi l_{s}^{2}}{\beta^{2}})^{4} [G_{IJKLMNYZ}G^{IJKLMNYZ}]\nonumber\\&& - (\frac{(2\pi l_{s}^{2})^{2}}{\beta^{4}})^{4}(\psi_{1,\downarrow}\psi_{1,\uparrow}\psi_{2,\downarrow}\psi_{2,\uparrow})'[ \bar{Cs} G_{IJKLMNYZ} G^{IJKLMNYZ} ]+.........] \label{EQ19} \end{eqnarray} This equation shows that the Hamiltonian of the BIonic diode includes terms with 6 and 8 indices. This means that the rank of the Cs-GG terms in a pentagonal-heptagonal diode is higher than the rank of the Cs-GG terms in a hexagonal diode. In fact, in the pentagonal-heptagonal diode more photonic fields are exchanged and the stability of the system is greater than that of the hexagonal diode. 
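The opposite signs of the forces in Eqs.\ (\ref{EQ13}) and (\ref{EQ18}) can be verified numerically. The sketch below uses the displayed power laws with illustrative constants $k = T_{tri} = 1$ (not fixed by the text) and a finite-difference derivative; it only checks the signs for separations $\sigma > 1$.

```python
# Numeric sign check of the forces in Eqs. (EQ13) and (EQ18).
# k = T_tri = 1 are illustrative values; sigma = 2 is a sample separation.
import math

def energy(sigma, powers, k=1.0, T_tri=1.0):
    # E ~ 4*pi*k*T_tri * (sigma^-p1 - sigma^-p2)
    p1, p2 = powers
    return 4.0 * math.pi * k * T_tri * (sigma**-p1 - sigma**-p2)

def force(sigma, powers, eps=1e-6):
    # F = -dE/dsigma via a central finite difference
    return -(energy(sigma + eps, powers) - energy(sigma - eps, powers)) / (2.0 * eps)

PENTAGON = (5, 3)  # E ~ 1/sigma^5 - 1/sigma^3, Eq. (EQ13)
HEPTAGON = (5, 9)  # E ~ 1/sigma^5 - 1/sigma^9, Eq. (EQ18)

force_pentagon = force(2.0, PENTAGON)  # negative: attractive
force_heptagon = force(2.0, HEPTAGON)  # positive: repulsive
```

The pentagonal profile yields a negative (attractive) force and the heptagonal profile a positive (repulsive) one, which is the sign pattern that makes the diode transfer electrons from the heptagonal to the pentagonal side.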
\\ Using equations (\ref{EQ8}), (\ref{EQ11}) and (\ref{EQ16}), we can obtain the photonic mass in a BIonic diode: \begin{eqnarray} &&[m_{photon}^{2}]=\frac{1}{l_{Chern-Simons}} \int_{\sigma}^{\infty} d\sigma'(\frac{[O_{Diode}(\sigma')^{2}- \bar{O}_{Diode}(\sigma')^{2}]}{[O_{Diode}(\sigma_{0})^{2}-\bar{O}_{Diode}(\sigma_{0})^{2}]}-1)^{-1/2} \label{EQ20} \end{eqnarray} where \begin{eqnarray} && O_{Diode}=2O^{3}_{tot} -O^{3-3}_{tot}-O^{2}_{tot} \label{EQ21} \end{eqnarray} In a pentagonal-heptagonal BIonic diode, the photonic mass depends not only on the separation distance between the manifolds but also on the shape and topology of the trigonal manifolds which constitute them. It is clear that for a small gap between the manifolds, the coupling of photons to electrons on the Chern-Simons manifold increases and the photons become massive. \begin{figure*}[thbp] \begin{center} \begin{tabular}{rl} \includegraphics[width=5cm]{fig8.jpg} \end{tabular} \end{center} \caption{ A BIonic diode can be constructed from a pentagonal manifold which is connected to a heptagonal manifold via a Chern-Simons manifold. } \end{figure*} \section{Summary} \label{sum} In this paper, we have considered the formation and evolution of BIonic diodes on polygonal manifolds. For example, we have shown that a hexagonal BIonic diode can be constructed from two non-similar trigonal manifolds. Photons which are exchanged between the trigonal manifolds form Chern-Simons fields which live on a Chern-Simons manifold. The hexagonal BIons interact with each other via connecting two Chern-Simons manifolds. For a hexagonal manifold with similar trigonal manifolds, the exchanged photons cancel each other's effect and the energy and the length of the Chern-Simons manifold become zero. These manifolds are stable and cannot interact with each other. If the symmetry of the hexagonal manifold is broken, other polygonal manifolds, like heptagonal and pentagonal manifolds, are formed. 
Photons that are exchanged between these manifolds form two Chern-Simons fields which live on two Chern-Simons manifolds. These manifolds connect to each other and construct a BIonic diode. Photons that move via this manifold lead to the motion of electrons from the heptagonal side to the pentagonal side. These photons are massive and their mass depends on the angles between atoms and on the length of the gap between the two manifolds. \section*{Acknowledgements} \noindent The work of Alireza Sepehri has been supported financially by the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Iran under the Research Project NO.1/5237-77. A. Pradhan also acknowledges IUCAA, Pune, India for providing facility during a visit under a Visiting Associateship where a part of this paper was done.
\subsection*{Appendix A} \label{sec:AppendixA} In Section \ref{sec:RD} we provide a simple estimate of the relic density assuming that the annihilation cross section is dominated by direct annihilations of DM via the new Yukawa couplings $\Gamma_i$. The relic density is inversely proportional to the effective annihilation cross section given in Eq.\@ \eqref{eqn:sigmaveff}. In this appendix, we illustrate that even in the scenario where only the direct annihilation of DM is relevant for the effective annihilation cross section, efficient conversions of DM, encoded in the ratio of equilibrium densities in Eq.\@ \eqref{eqn:sigmaveff}, might alter the predictions of the relic density significantly in the case of small mass splittings. More precisely, we address the interplay of the Yukawa coupling $\Gamma_i$ required to reproduce the observed relic density for a given mass ratio $\kappa$. \\ If we only consider the thermally averaged cross section of the direct DM annihilation in Eq.\@ \eqref{eqn:CrossSectionAnnihi}, we find that the cross section decreases for an increasing $\kappa$ for both Majorana and Dirac DM. This corresponds to a decreasing cross section for an increasing mediator mass. \\ If we additionally consider the contribution of the ratio of equilibrium densities at the time of thermal freeze-out, $x_f \approx 25$, the $\kappa$ dependence of the effective thermally averaged annihilation cross section results in \begin{align} \left \langle \sigma_\text{eff} v \right \rangle &\sim \frac{1}{\left( 1+\kappa^2 \right)^2} \left[ 1+\frac{g_\text{non-DM}}{g_\text{DM}} \kappa^\frac{3}{2} \exp \left( -\kappa x_f + x_f \right) \right]^{-2} \times \nonumber \\ &\times \left\{\begin{array}{ll} \frac{1}{8}, & \text{for Dirac DM} \\ \left( 1 + \kappa^4 \right) \left( 1+ \kappa^2 \right)^{-2}, & \text{for Majorana DM}\end{array}\right. \, . \tag{A.1} \end{align} Evidently, the effective annihilation cross section develops a maximum at e.g. 
$\kappa \sim 1.15$ for Dirac DM in the bIIA model. Consequently, we expect to match the observed relic density with the smallest $\Gamma$ for $\kappa=1.15$ in this scenario. This effect can also be seen for the models bIIA, bVA and aIA in Figures \ref{fig:SummarybVADirac}, \ref{fig:SummarybIIADirac} and \ref{fig:SummaryaIADirac}, where the line for the correct relic density with $\kappa=1.1$ lies below the line for $\kappa=1.01$. Note that such an effect cannot be seen for the quarkphilic scenarios (bIIB and bVIB), as the annihilation cross section is dominated by $\Gamma_s$ and $\Gamma_b$ and the results are presented in the $M_\text{DM}-\Gamma_\mu$ plane. \\ In the case of Majorana DM, the situation is more complicated. As the direct DM annihilation is p-wave suppressed due to the Majorana nature of DM, s-wave coannihilations of the other dark sector particles can become dominant for smaller mass splittings. In the case of leptophilic DM (bIIA and bVA), the annihilation is dominated by annihilations of the scalar doublet into SM leptons for $\kappa=1.01$, which are likewise mediated by $\Gamma_\mu$. For $\kappa=1.1$, however, direct DM annihilations dominate the effective annihilation cross section. \\ To identify the $\kappa$ dependence of the effective cross section in this case, we need to add up the contributions from scalar and Majorana fermion annihilations according to Eq.\@ \eqref{eqn:sigmaveff}. The only difference between the two models is the increased number of colored fermions in the dark sector for the bVA model (their number increases from $6$ to $18$). This change is sufficient to alter the $\kappa$ dependence significantly via the efficient conversions in the dark sector. While the effective annihilation cross section for the bIIA Majorana scenario is decreasing monotonically, the effective annihilation cross section of the bVA scenario develops a maximum between $\kappa=1.01$ and $\kappa=1.1$. 
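The Dirac-DM maximum quoted above ($\kappa \approx 1.15$ for bIIA) can be reproduced with a few lines of numerics. The sketch below scans the Dirac-DM expression in Eq.\@ (A.1) with $x_f = 25$; the degrees-of-freedom ratio $g_\text{non-DM}/g_\text{DM} = 1.5$ is an illustrative choice of ours (not taken from the text) that places the maximum near the quoted value.

```python
# Scan of the kappa dependence of <sigma_eff v> for Dirac DM, Eq. (A.1).
# X_F = 25 follows the text; R_DOF = 1.5 is an assumed illustrative ratio.
import math

X_F = 25.0
R_DOF = 1.5

def sigma_eff_dirac(kappa, r=R_DOF, x_f=X_F):
    # conversion factor from the ratio of equilibrium densities
    conversion = 1.0 + r * kappa**1.5 * math.exp(-kappa * x_f + x_f)
    # direct-annihilation factor times the Dirac prefactor 1/8
    return (1.0 / (1.0 + kappa**2)**2) * conversion**-2 / 8.0

# locate the maximum on a fine grid kappa in [1.0, 1.5]
kappas = [1.0 + 0.001 * i for i in range(501)]
kappa_star = max(kappas, key=sigma_eff_dirac)
```

The interior maximum arises from the competition between the falling direct-annihilation factor and the rising conversion factor, which is the mechanism described above.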
This explains the different ordering of the relic density lines of $\kappa=1.01$ and $\kappa=1.1$ in these two scenarios. \\ \FloatBarrier \subsection*{Appendix B} \label{sec:AppendixB} In this section, we present the constraints on the Yukawa couplings $\Gamma_s \Gamma_b^*$ and $\Gamma_\mu$ from $B$-$\bar{B}$ mixing, $R_K$ and $(g-2)$ of the muon in Tables \ref{tbl:couplingConstraintsYuk} and \ref{tbl:g-2}. \begin{sidewaystable}[!p] \centering \begin{tabular}{c|c|c} & $\mathcal{B}_{bs}^\text{model}(\kappa)$ & $\mathcal{B}_\mu^\text{model}(\kappa)$ \\ \hline \hline aIA & $2.45\cdot 10^{-6} \left( \sqrt{\frac{-0.000792 + 0.000792 \kappa^4 - 0.001583 \kappa^2 \log{(\kappa^2)}}{(\kappa^2 -1)^3}} \right)^{-1}$ & $0.00958 \left(\sqrt[4]{\frac{-0.000792 + 0.000792 \kappa^4 - 0.001583 \kappa^2 \log{(\kappa^2)}}{(\kappa^2 -1)^3}}\right)^{-1}$ \\ \hline aI$\text{A}_{\text{M}}$ & $2.45\cdot 10^{-6} \left(\sqrt{\frac{-0.003958 + 0.003166 \kappa^2 + 0.0007916 \kappa^4 + (-0.001583 - 0.003166 \kappa^2) \log{(\kappa^2)}}{(\kappa^2 -1)^3}} \right)^{-1}$ & $0.00958 \left( \sqrt[4]{\frac{-0.003958 + 0.003166 \kappa^2 + 0.0007916 \kappa^4 + (-0.001583 - 0.003166 \kappa^2) \log{(\kappa^2)}}{(\kappa^2 -1)^3}} \right)^{-1}$ \\ \hline bIIA & $0.000151 \kappa$ & $0.538516 \left(\sqrt{ \frac{\kappa (-230.909 + 307.878 \kappa^2 - 76.970 \kappa^4 - 153.939 \log{(\kappa^2)})}{(\kappa^2 -1 )^3} } \right)^{-1}$ \\ \hline bVA & $0.000270 \kappa$ & $0.538516 \left( \sqrt{\frac{\kappa (-103.266 + 137.687 \kappa^2 - 34.4218 \kappa^4 - 68.8437 \log{(\kappa^2)})}{(\kappa^2 -1)^3}} \right)^{-1}$ \\ \hline bIIB & $0.000087 \left( \sqrt{ \frac{-1 + \kappa^4 - 2 \kappa^2 \log{(\kappa^2)}}{(\kappa^2 -1)^3}} \right)^{-1}$ & $0.080783 \sqrt[4]{(\kappa^2 -1)^3} \left| \sqrt{\frac{-3 + 4 \kappa^2 - \kappa^4 - 4 \log{(\kappa)}}{ \sqrt{ -1 + \kappa^4 - 4 \kappa^2 \log{(\kappa)}}}} \right|^{-1}$ \\ \hline bVIB & $2.45\cdot 10^{-6} \left( \sqrt{\frac{-0.000791572 + 0.000791572 \kappa^4 - 0.00158314 \kappa^2 
\log{(\kappa^2)}}{( \kappa^2 -1)^3}} \right)^{-1}$ & $0.538516 \left( \sqrt{ \frac{-33.3288 + 44.4384 \kappa^2 - 11.1096 \kappa^4 - 22.2192 \log{(\kappa^2)}}{( \kappa^2 -1)^3 \sqrt{\frac{-1 + \kappa^4 - 2 \kappa^2 \log{(\kappa^2)}}{ (\kappa^2 -1)^3}}}} \right)^{-1}$ \end{tabular} \caption{Presented here are the constraints on new Yukawa coupling $\Gamma_\mu$ and the product $\Gamma_s \Gamma_b^*$. The entries of this table represent the upper bound on $\Gamma_s \Gamma_b^*$ from $B$-$\bar{B}$-mixing and thus a lower bound on $\Gamma_\mu$ from $R_K$.} \label{tbl:couplingConstraintsYuk} \end{sidewaystable} \begin{sidewaystable} \centering \begin{tabular}{c|c} & $\Gamma_\mu \in $ \\ \hline \hline bIIA & $\frac{[0.0000438, 0.0000557] (1-\kappa^2)^2 \nicefrac{M_{\psi_L}}{\text{GeV}} }{\sqrt{\left| -0.0000235648 - 0.0000353472 \kappa^2 + 0.0000706944 \kappa^4 - 0.0000117824 \kappa^6 - 0.0000706944 \kappa^2 \log{(\kappa^2)} \right|}} $ \\ \hline bVA & $\frac{[0.0000438, 0.0000557] (1-\kappa^2)^2 \nicefrac{M_{\psi_L}}{\text{GeV}} }{\sqrt{\left| -0.0000235648 - 0.0000353472 \kappa^2 + 0.0000706944 \kappa^4 - 0.0000117824 \kappa^6 - 0.0000706944 \kappa^2 \log{(\kappa^2)} \right|}} $ \\ \hline bIIB & $[0.01805,0.02294] \kappa \nicefrac{M_{\psi_Q}}{\text{GeV}}$ \\ \hline bVIB & $[0.00932,0.01185] \kappa \nicefrac{M_{\psi_Q}}{\text{GeV}}$ \\ \hline aIA & $\frac{[0.0000438, 0.0000557] (1-\kappa^2)^2 \nicefrac{M_{\psi}}{\text{GeV}}}{\sqrt{\left| -0.0000235648 - 0.0000353472 \kappa^2 + 0.0000706944 \kappa^4 - 0.0000117824 \kappa^6 - 0.0000706944 \kappa^2 \log{(\kappa^2)} \right|}} $ \end{tabular} \caption{Presented here are the constraints on new Yukawa coupling $\Gamma_\mu$ for a solution of $(g-2)_\mu$. 
These constraints contain upper and lower bounds on $\Gamma_\mu$, formulated as an interval.} \label{tbl:g-2} \end{sidewaystable} \subsection{Collider constraints} \label{sec:collider} In this section, we review existing searches for setups similar to the scenarios discussed in the previous sections. While the results cannot be explicitly applied to the setups studied in this work, they can provide an indication of the excluded parameter regions. Note that we do not perform a detailed collider study in this article. For the leptophilic DM scenarios, {\it i.e.} for models bIIA and bVA, $\psi_Q$ can be pair-produced at tree-level, subsequently decaying through $\psi_Q \rightarrow Q \phi^{\dagger}$~($\phi^{\dagger} \rightarrow \psi_L \bar{L}$). Such channels of $\psi_L$ production can be constrained by dilepton+jets+$\slashed{E}_T$ searches at the LHC~\cite{ATLAS:2016ljb,Sirunyan:2020tyy}: for example, dilepton searches can rule out $M_{\psi_L} \lesssim 600$~GeV for $M_{\psi_Q} \lesssim 800$~GeV~\cite{Cline:2017qqu}. For the quarkphilic DM scenarios, such as bIIB and bVIB, $\psi_Q$ can be produced in $t$-channel interactions mediated by the colored scalar $\phi$~\cite{Racco:2015dxa,Aaboud:2017phn}. ATLAS constrained such scenarios through monojet+$\slashed{E}_T$ searches with an integrated luminosity of 36~fb$^{-1}$ at $\sqrt{s} = 13$~TeV~\cite{Aaboud:2017phn}. In our context, for $\Gamma_{Q} = 1$, this rules out $M_{\psi_Q} \lesssim 600$~GeV for $M_{\phi} \sim 700$~GeV. For lower values of $M_{\psi_Q}$, it can rule out even higher values of $M_{\phi}$, up to $\sim 1.6$~TeV (see Fig.~8 of Ref.~\cite{Aaboud:2017phn}). In the model aIA, the DM candidate $\psi$ has tree-level couplings to the SM quarks, leading to the same production channels at the LHC as in the bIIB and bVIB models. Thus, one can expect similar constraints on model aIA as well. 
In the b-type models, for small enough values of $\Gamma_Q$ or $\Gamma_L$, colored fermions can become sufficiently long-lived at collider scales and may decay outside the detector. Displaced vertex+$\slashed{E}_T$ searches at the LHC can be recast into constraints on the mass and lifetime of such a long-lived particle~\cite{Belanger:2018sti}. This can rule out new physics Yukawa couplings in the range $\Gamma \sim 10^{-2} - 10^{-5}$ for fermion masses $\lesssim 1.8$~TeV. \FloatBarrier \subsection{Constraints on new Yukawa Couplings} \label{sec:YukConstraints} Following \cite{Arnan:2016cpy}, we obtain constraints on the couplings $\Gamma_\mu$ from bounds on the Wilson coefficients $\mathcal{C}_9$ and $\mathcal{C}_{B\bar{B}}$, which are obtained from global fits of LFUV observables and from $B$-$\bar{B}$-mixing, respectively. The latter is generated by the effective operator \begin{align} \mathcal{O}_{B\bar{B}}= (\bar{s}_\alpha \gamma^\mu P_L b_\alpha)(\bar{s}_\beta \gamma_\mu P_L b_\beta) \, . 
\end{align} The Wilson coefficients read \begin{align} \mathcal{C}_9^\text{box, a}=-\mathcal{C}_{10}^\text{box, a}&= \frac{\sqrt{2}}{4G_F V_{tb}V_{ts}^*} \frac{\Gamma_s\Gamma_b^* |\Gamma_\mu|^2}{32 \pi \alpha_\text{em} M_\psi^2}(\chi \eta F(x_Q, x_L)+2\chi^M \eta^M G(x_Q, x_L)) \, , \\ \mathcal{C}_9^\text{box, b}=-\mathcal{C}_{10}^\text{box, b}&= -\frac{\sqrt{2}}{4G_F V_{tb}V_{ts}^*} \frac{\Gamma_s\Gamma_b^* |\Gamma_\mu|^2}{32 \pi \alpha_\text{em} M_\phi^2}(\chi \eta-\chi^M \eta^M) F(y_L, y_L) \, , \\ \mathcal{C}_{B\bar{B}}^\text{a}&= \frac{(\Gamma_s \Gamma_b^*)^2}{128 \pi^2 M_\psi^2} \left( \chi_{B\bar{B}} \eta_{B\bar{B}} F(x_Q,x_Q) + 2 \chi_{B\bar{B}}^M \eta_{B\bar{B}}^M G(x_Q,x_Q) \right) \, , \\ \mathcal{C}_{B\bar{B}}^\text{b}&= \frac{(\Gamma_s \Gamma_b^*)^2}{128 \pi^2 M_\phi^2} \left( \chi_{B\bar{B}} \eta_{B\bar{B}} -\chi_{B\bar{B}}^M \eta_{B\bar{B}}^M \right) F(y_Q, y_Q) \, , \end{align} where $x_{\nicefrac{Q}{L}}=\nicefrac{M^2_{\phi_{\nicefrac{Q}{L}}}}{M^2_\psi}$ and $y_{\nicefrac{Q}{L}}=\nicefrac{M^2_{\psi_{\nicefrac{Q}{L}}}}{M^2_\phi}$. $F$ and $G$ are the dimensionless loop functions \begin{align} F(x,y)&=\frac{1}{(1-x)(1-y)}+\frac{x^2 \ln{x}}{(1-x)^2(1-y)}+\frac{y^2 \ln{y}}{(1-x)(1-y)^2} \, , \\ G(x,y)&=\frac{1}{(1-x)(1-y)}+\frac{x \ln{x}}{(1-x)^2(1-y)}+\frac{y \ln{y}}{(1-x)(1-y)^2} \, . \end{align} The $SU(2)$ and $SU(3)$ factors $\eta^{(M)}_{(B\bar{B})}, \chi^{(M)}_{(B\bar{B})}$ can be extracted from Table \ref{tbl:SUfactors}. 
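The loop functions $F$ and $G$ are straightforward to evaluate numerically; below is a minimal Python sketch (the function names are our own, and the symmetry $F(x,y)=F(y,x)$ follows directly from the closed form, which serves as a sanity check):

```python
import math

def F(x, y):
    """Loop function F(x, y) entering the box Wilson coefficients;
    valid for x, y > 0 with x, y != 1 (the x -> 1 limit must be taken separately)."""
    return (1.0 / ((1 - x) * (1 - y))
            + x**2 * math.log(x) / ((1 - x)**2 * (1 - y))
            + y**2 * math.log(y) / ((1 - x) * (1 - y)**2))

def G(x, y):
    """Loop function G(x, y) multiplying the Majorana contributions."""
    return (1.0 / ((1 - x) * (1 - y))
            + x * math.log(x) / ((1 - x)**2 * (1 - y))
            + y * math.log(y) / ((1 - x) * (1 - y)**2))
```

Both functions are symmetric in their arguments, so the ordering of $x_Q$ and $x_L$ in the Wilson coefficients is immaterial.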
\begin{table}[H] \centering \begin{tabular}{c|c|c|c|c|c|c} $SU(2)_L$ & $\eta$ & $\eta^M$ & $\eta_{B\bar{B}}$ & $\eta^M_{B\bar{B}}$ & $\eta_{a_\mu}$ & $\tilde{\eta}_{a_\mu}$ \\ [1pt] \hline I & $1$ & $1$ & $1$ & $1$ & $-1 \mp X$ & $ \pm X$ \\ [1pt] II & $1$ & $0$ & $1$ & $0$ & $-\frac{1}{2} \mp X $ & $-\frac{1}{2} \pm X $ \\ [1pt] III & $\frac{5}{16}$ & $0$ & $\frac{5}{16}$ & $0$ & $-\frac{7}{8} \mp \frac{3}{4}X $ & $ \frac{1}{8} \pm \frac{3}{4}X$ \\ [1pt] IV & $\frac{5}{16}$ & $\frac{1}{16}$ & $\frac{5}{16}$ & $\frac{1}{16}$ & $-\frac{1}{4} \mp \frac{3}{4}X$ & $-\frac{1}{2} \pm \frac{3}{4}X$ \\ [1pt] V & $\frac{1}{4}$ & $0$ & $\frac{5}{16}$ & $0$ & $-\frac{1}{2} \mp X$ & $-\frac{1}{2}\pm X$ \\ [1pt] VI & $\frac{1}{4}$ & $0$ & $1$ & $0$ & $-\frac{7}{8} \mp \frac{3}{4}X$ & $\frac{1}{8} \pm \frac{3}{4}X$ \\ [1pt] \hline \hline $SU(3)_C$ & $\chi$ & $\chi^M$ & $\chi_{B\bar{B}}$ & $\chi^M_{B\bar{B}}$ & $\chi_{a_\mu}$ & \\ [1pt] \hline A & $1$ & $1$ & $1$ & $1$ & $1$ & \\ [1pt] B & $1$ & $0$ & $1$ & $0$ & $3$ & \\ [1pt] \end{tabular} \caption{$SU(2)$ and $SU(3)$ factors entering the Wilson coefficients $\mathcal{C}_9^\text{box}$ and $\mathcal{C}_{B\bar{B}}$.} \label{tbl:SUfactors} \end{table} As in \cite{Arnan:2016cpy}, we neglect the influence of photon penguin diagram contributions to $\mathcal{C}_9$ and therefore assume $\mathcal{C}_9 \approx \mathcal{C}_9^\text{box}$. The $2\sigma$ bounds on $\mathcal{C}_9=-\mathcal{C}_{10}$ and $\mathcal{C}_{B\bar{B}}$ are \cite{Alguero:2021anc,FermilabLattice:2016ipl} \begin{align} \mathcal{C}_9=-\mathcal{C}_{10} &\in [-0.46,-0.29] \, , \\ \mathcal{C}_{B\bar{B}} &\in [-2.1,0.6] \cdot 10^{-5} \,\text{TeV}^{-2} \,. \end{align} Starting from these premises, we can construct an upper bound on $\Gamma_s\Gamma_b^*$ from $B$-$\bar{B}$ mixing and use this to construct a lower bound on $\Gamma_\mu$ by taking into account the bounds on $\mathcal{C}_9$. 
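The first step of this chain can be sketched by inverting $\mathcal{C}_{B\bar{B}}^\text{b}$ under the quoted $2\sigma$ limit. The Python sketch below is illustrative only: it assumes the Majorana piece $\chi^M_{B\bar{B}} \eta^M_{B\bar{B}}$ vanishes (as in the B-type $SU(3)$ assignment) and takes $\chi_{B\bar{B}} \eta_{B\bar{B}} = 1$; the function names are our own.

```python
import math

def loop_F(x, y):
    """Dimensionless loop function F(x, y) from the text."""
    return (1.0 / ((1 - x) * (1 - y))
            + x**2 * math.log(x) / ((1 - x)**2 * (1 - y))
            + y**2 * math.log(y) / ((1 - x) * (1 - y)**2))

# magnitude of the 2-sigma window on C_BB quoted in the text, in TeV^-2
C_BB_MAX = 2.1e-5

def gamma_sb_max(mass_tev, x_q, chi_eta=1.0):
    """Illustrative upper bound on |Gamma_s Gamma_b*|, obtained by inverting
    C_BB = (Gs Gb*)^2 / (128 pi^2 M^2) * chi_eta * F(x_Q, x_Q)
    and imposing |C_BB| <= C_BB_MAX.  Assumes the Majorana term vanishes."""
    f = loop_F(x_q, x_q)
    return math.sqrt(C_BB_MAX * 128 * math.pi**2 * mass_tev**2 / abs(chi_eta * f))
```

Since $\mathcal{C}_{B\bar{B}} \propto (\Gamma_s\Gamma_b^*)^2/M^2$, the resulting bound grows linearly with the mediator mass, which is the origin of the $M_\text{DM}/\text{GeV}$ scaling of Eq.\@ \eqref{eq:GammaBSBound}.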
\\ We assume mass degeneracy between the new non-dark matter particles $\phi_Q$ and $\phi_L$ in a-type models and $\psi_Q/\psi_L$ and $\phi$ in b-type models, respectively.\footnote{We comment on the effects of lifting this assumption for the DM phenomenology at the beginning of Section \ref{sec:Results}.} For convenience, we further introduce the dimensionless parameter $\kappa$, which we define as \begin{align} \kappa = \left\{\begin{array}{ll} \nicefrac{M_{\phi_Q}}{M_{\psi}} = \nicefrac{M_{\phi_L}}{M_{\psi}}, & \text{in a-type models} \\ \nicefrac{M_{\psi_Q}}{M_{\psi_L}} = \nicefrac{M_{\phi}}{M_{\psi_L}}, & \text{in $\psi_L$-DM b-type models} \\ \nicefrac{M_{\psi_L}}{M_{\psi_Q}} = \nicefrac{M_{\phi}}{M_{\psi_Q}}, & \text{in $\psi_Q$-DM b-type models}\, , \end{array}\right. \label{kappadefs} \end{align} quantifying the mass gap between DM and the non-DM exotic particles in the model. \\ Additionally, we restrict our analysis to single-component DM scenarios. This translates into a lower bound on the decay rate of the heavier dark sector particles to ensure that their decay proceeds sufficiently fast. For the mass splitting, this in turn requires $(\kappa -1) m_{\text{DM}} > m_{\pi}$ \cite{Cirelli:2005uq}. Within these models, it is possible to construct a solution to the anomalous magnetic moment of the muon, commonly dubbed $(g-2)_\mu$. 
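The single-component requirement above fixes a minimal admissible $\kappa$ for each DM mass. A small Python sketch (the helper name is our own; we use the charged pion mass as the reference scale):

```python
# charged pion mass in GeV
M_PION_GEV = 0.13957

def kappa_min(m_dm_gev):
    """Smallest mass ratio kappa compatible with the single-component
    DM requirement (kappa - 1) * m_DM > m_pi, i.e. a mass splitting large
    enough for the heavier dark sector states to decay sufficiently fast."""
    return 1.0 + M_PION_GEV / m_dm_gev
```

For weak-scale DM this is a very mild condition, e.g. $\kappa_\text{min} \approx 1.0014$ at $M_\text{DM} = 100$~GeV, and it weakens further with increasing DM mass.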
In these models, we have a contribution to $a_\mu = \nicefrac{(g-2)_\mu}{2}$ of \begin{align} \Delta a_\mu^{a} = \frac{m^2_\mu \left|\Gamma_\mu \right|^2}{8 \pi^2 M_\psi^2} \chi_{a_\mu} \left[\eta_{a_\mu} F_7(x_L) - \tilde{\eta}_{a_\mu} \tilde{F}_7(x_L) \right]\, , \\ \Delta a_\mu^{b} = \frac{m^2_\mu \left|\Gamma_\mu \right|^2}{8 \pi^2 M_\phi^2} \chi_{a_\mu} \left[\tilde{\eta}_{a_\mu} \tilde{F}_7(y_L) - \eta_{a_\mu} F_7(y_L) \right]\, , \end{align} where the group factors $\tilde{\eta}_{a_\mu}/\eta_{a_\mu}$ and $\chi_{a_\mu}$ can be extracted from Table \ref{tbl:SUfactors} and the functions $F_7(x)$ and $\tilde{F}_7(x)$ are given by \begin{align} F_7(x)= \frac{x^3 - 6x^2 + 6x \log{(x)} +3x +2}{12(x-1)^4}, \quad \tilde{F}_7(x)=\frac{F_7(\frac{1}{x})}{x} \, . \end{align} Recently, the Muon $g-2$ collaboration updated the earlier results of \cite{Bennett:2006fi} and reported a $4.2\,\sigma$ deviation from the SM, where the difference $\Delta a_\mu \equiv \Delta a_\mu^{\text{Exp}} - \Delta a_\mu^{\text{SM}}$ amounts to \cite{Muong-2:2021ojo} \begin{align} \Delta a_\mu = (251\pm 59)\cdot 10^{-11} \, . \end{align} This translates to upper and lower bounds on the leptonic Yukawa coupling $\Gamma_\mu$, summarized in Table \ref{tbl:g-2}.\\ The constraints on the new Yukawa couplings $\Gamma_\mu$, $\Gamma_b$ and $\Gamma_s$ are summarized in Table \ref{tbl:couplingConstraintsYuk}. In general, Dirac and Majorana versions of the models can lead to different constraints on the Yukawa couplings, since additional contributions to the $B$-$\bar{B}$-mixing and $b \to s \, l^+ l^-$ transitions can arise. These contributions, however, only occur in the case of model aIA\footnote{Note that this is generally a feature of models, where $SU(2)\in[I,IV], SU(3)=[A,C]$ and $X=0$. The model aIA happens to be the only model of those with a fermionic singlet DM candidate.} because of the constellations of $SU(2)$ representations present in this model. 
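The $(g-2)_\mu$ loop functions above are easy to cross-check numerically; a minimal Python sketch (function names are our own; the degenerate limit $F_7(x \to 1) = \nicefrac{1}{24}$ follows from a Taylor expansion of the closed form and serves as a consistency check):

```python
import math

def F7(x):
    """Loop function F_7 entering Delta a_mu; valid for x > 0, x != 1."""
    return (x**3 - 6 * x**2 + 6 * x * math.log(x) + 3 * x + 2) / (12 * (x - 1)**4)

def F7_tilde(x):
    """Companion function F~_7(x) = F_7(1/x) / x."""
    return F7(1.0 / x) / x
```

Note that evaluating $F_7$ extremely close to $x=1$ suffers from numerical cancellation in the numerator, so degenerate spectra should be handled with the analytic limit rather than the raw formula.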
This feature is thus ultimately incorporated in the Wilson coefficients $\mathcal{C}_{B\bar{B}}$ and $\mathcal{C}^{\text{box}}_{9}$, together with Table \ref{tbl:SUfactors}, where the product $\chi^M_{(B\bar{B})} \eta^M_{(B\bar{B})}$ is non-zero only in the case aIA. In general, the constraints on the new Yukawa couplings read \begin{align} \Gamma_s \Gamma_b^* &\leq \mathcal{B}^\text{model}_{bs} \left( \kappa \right) \frac{M_\text{DM}}{\text{GeV}} \, , \label{eq:GammaBSBound} \\ \Gamma_\mu &\geq \mathcal{B}^\text{model}_\mu \left( \kappa \right) \sqrt{\frac{M_\text{DM}}{\text{GeV}}} \, . \label{eq:GammaMuBound} \end{align} The coefficient functions $\mathcal{B}^\text{model}_i \left( \kappa \right)$ are model dependent and summarized in Table \ref{tbl:couplingConstraintsYuk}. \subsection{Constraints on Parameters from the Scalar Potential}\label{sec:ScalarPotConstraints} The extended scalar sector of the models leads to additional terms in the scalar potential, involving new mass parameters $\mu_i$ as well as quartic couplings $\lambda$. In this work, we assume that no scalar other than the SM $SU(2)$-doublet scalar field $H$ acquires a vacuum expectation value, since a non-zero vacuum expectation value in the dark sector would break the stabilizing $\mathtt{Z}_2$ symmetry. The resulting condition on the model parameters is given specifically below. \\ Further, we demand perturbativity of all allowed quartic couplings. This yields \begin{align} |\lambda| \leq (4 \pi)^2 \, . \end{align} We also restrict ourselves to scenarios where the vacuum is stable at tree-level. Since the particle content in the scalar sector differs between model realizations a and b, the implications for the scalar quartic couplings differ as well. \subsubsection{a-type Models} In a-type models, there exist two other scalar doublets $\phi_Q$ and $\phi_L$ in addition to the SM $SU(2)$-doublet $H$. 
The most general scalar potential invariant under the DM stabilizing symmetry in this scenario is consequently \begin{align} \begin{split} V_\text{scalar}=& \mu_H^2 H^\dagger H + \mu_{\phi_{Q}}^2 \phi_Q^\dagger \phi_Q + \mu_{\phi_{L}}^2 \phi_L^\dagger \phi_L \\ &+ \lambda_H \left(H^\dagger H \right)^2 + \lambda_{\phi_{Q}} \left( \phi_Q^\dagger \phi_Q \right)^2 + \lambda_{\phi_L} \left( \phi_L^\dagger \phi_L \right)^2 \\ &+ \lambda_{\phi_{Q},H,1} \left( \phi_Q^\dagger \phi_Q \right)\left( H^\dagger H \right)+ \lambda_{\phi_{L},H,1} \left( \phi_L^\dagger \phi_L \right)\left( H^\dagger H \right) \\ &+ \lambda_{\phi_{Q},H,2} \left( \phi_Q^\dagger H \right)\left( H^\dagger \phi_Q \right) + \lambda_{\phi_{L},H,2} \left( \phi_L^\dagger H \right)\left( H^\dagger \phi_L \right) \\ &+ \lambda_{\phi_L,\phi_Q,1} \left( \phi_L^\dagger \phi_L \right)\left( \phi_Q^\dagger \phi_Q \right) + \lambda_{\phi_L,\phi_Q,2} \left( \phi_L^\dagger \phi_Q \right)\left( \phi_Q^\dagger \phi_L \right) \\ &+ \left[ \lambda_{\phi_{Q},H,3} \left( \phi_Q^\dagger H \right)^2 + \text{h.c.}\right] + \left[ \lambda_{\phi_{L},H,3} \left( \phi_L^\dagger H \right)^2 + \text{h.c.}\right] \\ &+\left[ \lambda_{\phi_{L},\phi_Q,3} \left( \phi_Q^\dagger \phi_L \right)^2 +\text{h.c.} \right] \, . 
\end{split} \end{align} Adopting the limits from \cite{Keus:2014isa}, the vacuum stability bounds at tree-level for this kind of potential read \begin{align} \lambda_H, \lambda_{\phi_Q}, \lambda_{\phi_L} &> 0 \, , \\ \lambda_{\phi_L, \phi_Q, 1} + \lambda_{\phi_L, \phi_Q, 2} &> -2 \sqrt{\lambda_{\phi_Q} \lambda_{\phi_L}} \, , \nonumber \\ \lambda_{\phi_L, H, 1} + \lambda_{\phi_L, H, 2} &> -2 \sqrt{\lambda_{H} \lambda_{\phi_L}} \, , \nonumber \\ \lambda_{\phi_Q, H, 1} + \lambda_{\phi_Q, H, 2} &> -2 \sqrt{\lambda_{\phi_Q} \lambda_{H}} \, , \nonumber \\ \left|\lambda_{\phi_{L},\phi_Q,3} \right| &< \left|\lambda_{\alpha} \right|, \left|\lambda_{\alpha, \beta, i} \right| \, , \end{align} where $\alpha \in [\phi_Q, \phi_L, H]$ and $i \in[1,2,3]$. Since we assume mass degeneracy within the non-DM dark sector, the Higgs portal couplings $\lambda_{\phi_L, H, 2}$ and $\lambda_{\phi_Q, H, 2}$ vanish. Another important feature of the model is the DM stabilizing symmetry $\mathtt{Z}_2$, under which all dark sector particles are odd. For this symmetry to remain intact, we require the other Higgs portal couplings to satisfy \begin{align} \lambda_{\phi_L, H, 1}, \lambda_{\phi_Q, H, 1} < \frac{2}{v^2} \kappa^2 M^2_{\text{DM}} \, . \end{align} \subsubsection{b-type Models} In addition to $H$, there exists another scalar doublet $\phi$ in b-type models. The most general form (again respecting the DM stabilizing symmetry) of the scalar potential in this scenario is therefore \begin{align} \begin{split} V_\text{scalar}=& \mu_H^2 H^\dagger H + \mu_\phi^2 \phi^\dagger \phi + \lambda_H \left(H^\dagger H \right)^2 + \lambda_\phi \left( \phi^\dagger \phi \right)^2 \\ &+ \lambda_{\phi,H,1} \left( \phi^\dagger \phi \right)\left( H^\dagger H \right) + \lambda_{\phi,H,2} \left( \phi^\dagger H \right)\left( H^\dagger \phi \right) + \left[ \lambda_{\phi,H,3} \left( \phi^\dagger H \right)^2 + \text{h.c.}\right] \, . 
\end{split} \end{align} For this kind of model, we adopt the limits of \cite{Branco:2011iw,Lindner:2016kqk}, leading to \begin{align} \lambda_{\phi}, \lambda_{H} &> 0, \nonumber \\ \lambda_{\phi,H,1} &> -2\sqrt{\lambda_\phi \lambda_H}, \nonumber \\ \lambda_{\phi,H,1} + \lambda_{\phi,H,2} - |2\lambda_{\phi,H,3}| &> -2\sqrt{\lambda_\phi \lambda_H} \, . \end{align} As in a-type models, we can infer constraints on the Higgs portal couplings from the mass degeneracy of the non-DM exotic particles and the requirement that the new scalar does not acquire a non-zero vev, yielding \begin{align} \lambda_{\phi, H, 1} &< \frac{2}{v^2} \kappa^2 M^2_{DM} \, , \\ \lambda_{\phi, H, 2} &= 0 \, . \end{align} \subsection{Relic Density}\label{sec:RD} The relic density can be estimated from the effective annihilation cross section $\sigma_\text{eff}$ of DM. Depending on the parameter $\kappa$, which parametrizes the mass ratio of the various particles in the dark sector according to Eq.~\eqref{kappadefs}, $\sigma_\text{eff}$ is dominated solely by the direct annihilation of DM anti-DM pairs or can receive sizable contributions from annihilations of heavier dark sector particles, commonly referred to as coannihilations. The latter case can only arise if the masses in the dark sector are comparable, which is the case for $\kappa \lesssim 1.2$. For larger values of $\kappa$, coannihilations are typically absent. Assuming that conversions among the different dark sector particles are efficient during freeze-out and that the heavier dark sector particles decay sufficiently fast, the coupled system of Boltzmann equations describing the time evolution of all dark sector particles can be reduced to a single effective Boltzmann equation for the DM relic density\footnote{Note that \textsc{micrOMEGAs 5.0} does not validate that conversions between the dark sector particles are efficient but assumes them to be efficient enough to allow for the treatment described above. 
We numerically verified the efficiency of $2 \leftrightarrow 2$ conversion processes and $1 \leftrightarrow 2$ (inverse) decays between dark sector particles for coannihilating scenarios $\kappa \lesssim 1.2$ during freeze-out, i.e. $\Gamma_\text{conversion} > H$ for $T \gtrsim M_\text{DM}/30$. Reference \cite{Garny:2017rxs} discusses the efficiency of conversions in a slightly simpler but similar setup and finds that effects of out-of-equilibrium conversions only arise for couplings $\Gamma_i \lesssim 10^{-6}$, which is far below the couplings considered in this work.} \cite{Griest:1990kh,Edsjo:1997bg} \begin{align} \frac{\mathrm{d} \, \tilde{Y}}{\mathrm{d} \, x} = -\sqrt{\frac{\pi}{45}} \frac{M_\text{Pl} M_\text{DM}g_{*,\text{eff}}^\frac{1}{2}}{x^2} \left\langle \sigma_\text{eff} v \right\rangle \left( \tilde{Y}^2 - \tilde{Y}_\text{eq}^2 \right) \, . \label{eq:effectiveBoltzmann} \end{align} Here, we use $x=M_\text{DM} / T$, and $M_\text{Pl} = 1.2 \cdot 10^{19} \, \mathrm{GeV}$ is the Planck mass. The yield $\tilde{Y}$ and its equilibrium value $\tilde{Y}^\text{eq}$ are defined as the sum over all coannihilating particles, \begin{align} \tilde{Y}^\text{eq} = \left\{\begin{array}{ll} \sum \limits_{i=\phi_L,\phi_Q,\psi} Y_i^\text{eq}, & \text{for a-type models} \\ \sum \limits_{i=\psi_L,\psi_Q,\phi} Y_i^\text{eq}, & \text{for b-type models}\end{array}\right. , \end{align} while the yield $Y_i$ of the particle species $i$ is related to its number density $n_i$ via $Y_i = n_i / s$, with the entropy density \begin{align} s = \frac{2 \pi^2}{45} g_{*s}T^3 \, . \end{align} The quantity $g_{*,\text{eff}}$ combines the entropy degrees of freedom $g_{*s}$ and the energy degrees of freedom $g_*$ via \begin{align} g_{*,\text{eff}}^\frac{1}{2} = \frac{g_{*s}}{\sqrt{g_*}} \left( 1 + \frac{T}{3 g_{*s}} \frac{\mathrm{d} \, g_{*s}}{\mathrm{d}\, T} \right) \, . 
\end{align} Since we assume mass-degenerate unstable dark sector particles with a mass of $\kappa M_\text{DM}$, the equilibrium yields can be expressed as \begin{align} \label{eqn:equilibrium} Y_\text{DM}^\text{eq} &= \frac{90}{\left( 2 \pi \right)^\frac{7}{2}} \frac{g_\text{DM}}{g_{*s}} x^\frac{3}{2} \exp \left( -x \right) \, , \\ Y_\text{non-DM}^\text{eq} &= \frac{90}{\left( 2 \pi \right)^\frac{7}{2}} \frac{g_\text{non-DM}}{g_{*s}} \left( \kappa x \right)^\frac{3}{2} \exp \left( - \kappa x \right) \, . \end{align} Here, $g_\text{DM}$ and $g_\text{non-DM}$ refer to the degrees of freedom of the DM and the non-DM dark sector particles, respectively. For instance, in a b-type model with $\psi_L$ DM we have $g_\text{DM} = g_{\psi_L}$ and $g_\text{non-DM}=g_{\psi_Q} + g_\phi$. The effective annihilation cross section is given by a weighted sum of the thermally averaged cross sections of the various annihilation channels, \begin{align} \left\langle \sigma_\text{eff} v \right\rangle = \sum_{i,j} \left\langle \sigma_{ij} v_{ij} \right\rangle \frac{Y_i^\text{eq}}{\tilde{Y}^\text{eq}} \frac{Y_j^\text{eq}}{\tilde{Y}^\text{eq}} \, , \label{eqn:sigmaveff} \end{align} where $i$ and $j$ run over all dark sector particles and $\left\langle \sigma_{ij} v_{ij} \right\rangle$ is the thermally averaged cross section for the annihilation of the dark sector particles $i$ and $j$. \\ Due to the singlet nature of DM, the direct DM annihilation proceeds via the Yukawa couplings $\Gamma_{i}$ connecting DM to the SM leptons and/or quarks\footnote{For fermionic DM in an a-type model, DM directly couples to both quarks and leptons, while $\psi_{L}$ ($\psi_Q$) DM couples to leptons (quarks) exclusively.}, while the heavier dark sector particles may be charged under the SM gauge groups and therefore can additionally annihilate via gauge interactions. Various annihilation channels are depicted in Figure \ref{fig:annihilation}. 
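The Boltzmann-suppressed weights entering Eq.\@ \eqref{eqn:sigmaveff} can be sketched directly from the equilibrium yields; the following simplified two-species Python sketch holds $g_{*s}$ fixed (an approximation, with the default value chosen by us for illustration):

```python
import math

def y_eq(g, x, g_star_s=106.75):
    """Equilibrium yield of a nonrelativistic species with g internal
    degrees of freedom at x = m/T; g_star_s held fixed (approximation)."""
    return 90.0 / (2 * math.pi)**3.5 * g / g_star_s * x**1.5 * math.exp(-x)

def sigma_eff_weights(g_dm, g_non_dm, kappa, x):
    """Fractional weights Y_i^eq / Y~^eq entering <sigma_eff v>; the heavier,
    mass-degenerate states carry mass kappa * M_DM, hence argument kappa * x."""
    y_dm = y_eq(g_dm, x)
    y_heavy = y_eq(g_non_dm, kappa * x)  # (kappa x)^{3/2} exp(-kappa x) scaling
    total = y_dm + y_heavy
    return y_dm / total, y_heavy / total
```

Evaluating the weights near freeze-out ($x \sim 25$) shows explicitly how coannihilations switch off as $\kappa$ grows: the heavy-state weight is exponentially suppressed by $e^{-(\kappa-1)x}$.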
\begin{figure} \centering \includegraphics[width=8cm]{Plots/Annihilations.pdf} \caption{Examples of dark sector annihilations via Yukawa couplings (upper row), Higgs portal couplings and gauge couplings in leptophilic DM models.} \label{fig:annihilation} \end{figure} The relic density of a particle species\footnote{In the case of Dirac DM, the total relic density is given as the sum of the particle and antiparticle contributions.} can be estimated from the leading order velocity contribution of the annihilation cross section, $\left\langle \sigma_\text{eff} v \right\rangle \sim \sigma_\text{eff,0} T^{n}$ \cite{Kolb:1990vq}, \begin{align} \Omega_\text{DM} h^2 = \frac{\sqrt{g_*}}{g_{*s}} \frac{3.79 \left( n+1 \right) x_f^{n+1}}{M_\text{Pl} m_B \sigma_\text{eff,0}} \frac{\Omega_B}{Y_B} \, , \label{eq:DMEstimate} \end{align} where $x_f \sim 25$ parametrizes the freeze-out temperature, $m_B$ is the typical baryon mass, and $\Omega_B$ and $Y_B$ describe the energy density and the yield of baryons today, respectively. Assuming that the annihilation cross section is dominated by the annihilations of dark matter into SM leptons and quarks of negligible mass\footnote{This scenario arises frequently in models with $\psi_L$ or $\psi$ DM, or for $\psi_Q$ DM significantly heavier than the top.} via the new Yukawa couplings, the leading order contribution of the thermally averaged cross section results in \begin{align} \label{eqn:CrossSectionAnnihi} \sigma_\text{eff,0} &= \frac{1}{4 \pi M_\text{DM}^2}\frac{1}{\left( 1+ \kappa^2 \right)^2} \left[ C_l^m \Gamma_\mu^4 + 3 C_q^m \left( \Gamma_b^2 +\Gamma_s^2 \right)^2 \right] \times \nonumber \\ &\times \left\{\begin{array}{ll} \frac{1}{4}, & \text{for Dirac DM} \\ \left( 1 + \kappa^4 \right) \left( 1+ \kappa^2 \right)^{-2}, & \text{for Majorana DM}\end{array}\right. \, . \end{align} Further, we find $n=0$ ($n=1$), corresponding to s-wave (p-wave) annihilation, for Dirac (Majorana) DM. 
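Eq.\@ \eqref{eqn:CrossSectionAnnihi} is straightforward to code; a minimal Python sketch (the model coefficients $C_l^m$, $C_q^m$ are passed in by hand, and the defaults shown are our own illustrative choice for a leptophilic model):

```python
import math

def sigma_eff0(m_dm_gev, kappa, gamma_mu, gamma_s, gamma_b,
               c_l=1.0, c_q=0.0, majorana=False):
    """Leading-order coefficient sigma_eff,0 of the effective annihilation
    cross section, in GeV^-2.  c_l, c_q are the model coefficients C_l^m,
    C_q^m; defaults (1, 0) correspond to a purely leptophilic choice."""
    yukawa = c_l * gamma_mu**4 + 3 * c_q * (gamma_b**2 + gamma_s**2)**2
    common = yukawa / (4 * math.pi * m_dm_gev**2 * (1 + kappa**2)**2)
    if majorana:
        # Majorana suppression factor (1 + kappa^4) / (1 + kappa^2)^2
        return common * (1 + kappa**4) / (1 + kappa**2)**2
    return common * 0.25  # Dirac factor 1/4
```

For degenerate spectra ($\kappa = 1$) the Majorana factor is $2/4$ against the Dirac $1/4$, so the two cases differ by a simple factor of two at this leading order before the different velocity dependence ($n=0$ versus $n=1$) is accounted for.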
The coefficients $C_{l/q}^m$ are model dependent: $C_l^\text{bIIB}=C_l^\text{bVIB}=C_q^\text{bIIA}=C_q^\text{bVA}=0$, while the remaining six coefficients are equal to $1$. \\ Given the $\sigma_{\text{eff,}0}^{-1}$ scaling of the relic density in Eq.\@ \eqref{eq:DMEstimate}, and neglecting the logarithmic mass dependence of $x_f$, we expect to generate the observed relic density in such a scenario for \begin{align} \left[ C_l^m \Gamma_\mu^4 + 3 C_q^m \left( \Gamma_b^2 +\Gamma_s^2 \right)^2 \right] &\approx 1.65 \cdot 10^{-7} \left( \frac{M_\text{DM}}{\text{GeV}} \right)^2 \nonumber \\ &\times \left\{\begin{array}{ll} \left( 1+ \kappa^2 \right)^{2}, & \text{for Dirac DM} \\ 5.6 \left( 1 + \kappa^4 \right)^{-1} \left( 1+ \kappa^2 \right)^{4}, & \text{for Majorana DM}\end{array}\right. \, . \label{eqn:correctRD} \end{align} In the following, we derive some implications of limiting behaviors of this condition for the various models. \begin{enumerate} \item Leptophilic DM (bIIA,bVA) \\ In this scenario, Eq.\@ \eqref{eqn:correctRD} results in a scaling behavior $\Gamma_\mu = \mathcal{B}_{RD}^m \left( \kappa \right) \sqrt{\nicefrac{M_\text{DM}}{\text{GeV}}}$, with $\mathcal{B}_{RD}^m \left( \kappa \right)$ a model-dependent coefficient function. Comparing this result with the lower bound on $\Gamma_\mu$ for a successful explanation of the $R_K$ anomaly, given in Eq.\@ \eqref{eq:GammaMuBound}, we find a lower bound $\kappa>\kappa_0^m$ for a possible simultaneous explanation of $R_K$ and DM of \begin{align} \label{eqn:RDleptophilic} \kappa_0^\text{bIIA} \approx 11.8 \, , \quad \kappa_0^\text{bIIA,Maj} \approx 4.7 \, , \quad \kappa_0^\text{bVA} \approx 26.5 \, , \quad \kappa_0^\text{bVA,Maj} \approx 10.9 \, . \end{align} \item Quarkphilic DM (bIIB,bVIB) \\ In this scenario, the observed relic density is reproduced for $\left( \Gamma_s^2 + \Gamma_b^2 \right) \sim \mathcal{B}_{RD}^\text{m} \left( \kappa \right) \nicefrac{M_\text{DM}}{\text{GeV}}$. 
The product of $\Gamma_s$ and $\Gamma_b$ is constrained from above by $B$-$\bar{B}$-mixing, as given in Eq.\@ \eqref{eq:GammaBSBound}. The annihilation cross section, however, is governed by the sum $\Gamma_s^2 + \Gamma_b^2$, which cannot be constrained from $B$-$\bar{B}$-mixing, since either the second or third generation coupling could be arbitrarily small. However, we obtain an upper bound on the annihilation cross section by setting $\Gamma_s \sim 0$ and $\Gamma_b = 4 \pi$ at its perturbative limit\footnote{Note that the second generation coupling $\Gamma_s$ cannot be close to its perturbative limit, as it is constrained from $D$-$\bar{D}$ mixing.}. We find $\nicefrac{M_\text{DM}}{\text{GeV}} \leq \left[ \mathcal{B}_\text{RD}^\text{m} \left( \kappa \right) \right]^{-1} 16 \pi^2$. For the four different configurations we find \begin{align} M_\text{DM,max}^\text{bIIB}=M_\text{DM,max}^\text{bVIB} \approx \frac{674 \, \mathrm{TeV}}{1+ \kappa^2} \, , \quad M_\text{DM,max}^\text{bIIB,Maj} = M_\text{DM,max}^\text{bVIB,Maj} \approx \frac{254 \, \mathrm{TeV} \sqrt{1+\kappa^4}}{\left( 1+ \kappa^2 \right)^2} \, . \end{align} \item Amphiphilic DM (aIA) \\ Since couplings to both leptons and quarks are present in this scenario, we can apply the limits derived in the two scenarios above if $\Gamma_\mu \gg \Gamma_q$ or $\Gamma_q \gg \Gamma_\mu$. \end{enumerate} \FloatBarrier \subsection{Direct Detection} \label{sec:DD} In this section, we discuss the direct detection limits arising for the exemplary case of a fermionic singlet DM particle $\chi$ interacting with the SM quarks via a vector~($V_{\mu}$) or scalar~($A$) mediator. Dark matter direct detection (DD) is a means of experimentally measuring DM properties via the attempted observation of nuclear recoils from scattering with DM particles. This scattering is assumed to occur at low momentum transfer, as the typical relative velocity of the Earth and DM lies at $v \sim 10^{-3}$. 
Experiments such as XENON1T, LUX, PICO and IceCube put bounds on the DM-nucleon cross section; we therefore briefly discuss the estimates of the spin-independent (SI) as well as spin-dependent (SD) parts of this quantity for the different diagram structures present in this work. For this discussion, we mainly use the results of \cite{Berlin:2014tja}, \cite{Agrawal:2010fh} and \cite{Mohan:2019zrk}. We put the main focus on the dependence of the cross sections on the DM mass and the couplings. An important note is that the bounds presented by the aforementioned experiments are given under the assumption that the total DM relic density is constituted by the particle in question. In the case where a model underproduces the relic density of a DM candidate, the DD bounds have to be rescaled such that only the produced fraction of the observed relic density $\Omega_{\text{DM}}h^2$ is accounted for. Thus, the relation \begin{align} \label{eqn:BoundRescaling} \sigma_{\text{DD}} \leq \frac{\Omega_{\text{DM}}}{\Omega_\chi} \text{Bound}(m_\chi) \end{align} must hold for a model to be in accordance with DD bounds. Bounds for underproduced DM are relaxed, as the allowed DM-nucleon cross section rises by a factor $\nicefrac{\Omega_{\text{DM}}}{\Omega_\chi}>1$. If the RD is in turn overproduced, the bounds are tightened in our setup. Note, however, that this part of the parameter space is already excluded by the RD and therefore DD results in these regions are of no special interest. We estimate the results for $t$-channel DD interactions\footnote{Note that $t$-channel DD typically implies $s$-channel annihilation into quarks and vice versa.} as shown in Figure \ref{fig:tChannelDD}, where an EFT reduction is made, which is eventually matched to the operator structure in Eq.\@ \eqref{eq:Effectivetchannel}. 
If the DM-quark operator structure differs from this arrangement, as for instance in an $s$-channel diagram, this operator structure must be translated to the $t$-channel structure via Fierz transformations. \begin{figure} \centering \includegraphics[width=\textwidth]{Plots/tChannelDD.pdf} \caption{Several $t$-channel diagrams contributing to the DM-nucleon cross section relevant for DD.} \label{fig:tChannelDD} \end{figure} This is done because for the $t$-channel operators, the DM and SM particle content strictly factorizes. \\ In this work, we evaluate the interaction within the blobs in Fig.\@ \ref{fig:tChannelDD} up to one-loop level with \textsc{FeynArts}. The effective $\bar{\chi} \chi H/Z/\gamma/g$ vertices are then implemented in the \textsc{CalcHEP} files, which are used by \textsc{micrOMEGAs} to calculate the DM-nucleon cross section. Typical $t$-channel operators with bosonic mediators contributing to DD are structured as \begin{align} \mathcal{L}_{\text{DD}} &\supset \mathcal{O}_{\text{DM}}\cdot \Pi_{\text{med}} \cdot \mathcal{O}_{q} \label{eq:Effectivetchannel} \\ &\stackrel{q^2 \ll m_{\text{med}}^2}{\to} \left(\frac{1}{m_{\text{med}}^2} \mathcal{O}_{\text{DM}}\right) \mathcal{O}_{N} \, , \end{align} where $\Pi_{\text{med}}$ is the propagator of the mediator and $\mathcal{O}_{N}$ denotes the nucleon-level operator. As we are interested in the nucleon rather than the partons, a summation over the quark content is necessary. This procedure, even up to the level of the whole nucleus, is explained in detail in the appendix of \cite{Berlin:2014tja} and references therein. 
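The rescaling of Eq.\@ \eqref{eqn:BoundRescaling} amounts to a one-line helper; a minimal Python sketch, where the observed $\Omega_\text{DM} h^2 = 0.12$ is an assumed input value:

```python
def rescaled_dd_bound(bound_cm2, omega_chi_h2, omega_dm_h2=0.12):
    """Rescale a published DD cross-section bound (in cm^2) for a candidate
    chi that makes up only a fraction Omega_chi / Omega_DM of the observed
    relic density: the allowed cross section grows by Omega_DM / Omega_chi.
    Omega_DM h^2 = 0.12 is an assumed reference value."""
    return bound_cm2 * omega_dm_h2 / omega_chi_h2
```

An underproduced candidate with a tenth of the observed abundance thus sees its DD bound relaxed by a factor of ten, while $\Omega_\chi h^2 = \Omega_\text{DM} h^2$ leaves the published bound unchanged.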
\subsubsection{Fermionic DM $\chi$ and Scalar Mediator $A$} \label{sec:DDscalar} In the fermionic DM and scalar mediator case, the Lagrangian reads \begin{align} \mathcal{L} \supset \left[ \left(\frac{1}{2}\right) \bar{\chi} (\lambda_{\chi s} + \lambda_{\chi p} \mathrm{i}\gamma^5) \chi + \bar{q} (\lambda_{q s} + \lambda_{q p} \mathrm{i}\gamma^5) q \right] A \, , \end{align} where the factor of $\frac{1}{2}$ enters in the Majorana case. In this type of interaction, the combinations scalar-scalar ($s,s$), scalar-pseudoscalar ($s,p$), pseudoscalar-scalar ($p,s$) and pseudoscalar-pseudoscalar ($p,p$) between the DM and the nucleon operators $\mathcal{O}_{\text{DM}}$ and $\mathcal{O}_{N}$ arise. The DM-nucleon cross sections for each combination scale like \begin{align} \label{eqn:sigmaScaMed} \begin{split} \sigma^{\text{SI}}_{s,s} &\sim \frac{\mu_{\chi N'}^2 \lambda_{\chi s}^2}{m^4_A} \\ \sigma^{\text{SI}}_{p,s} &\sim \frac{\mu_{\chi N'}^2 v^2}{m^2_\chi} \frac{\mu_{\chi N'}^2 \lambda_{\chi p}^2}{m^4_A} \\ \sigma^{\text{SD},p/n}_{s,p} &\sim \frac{\mu_{\chi N'}^2 v^2}{m^2_{N'}} \frac{\mu_{\chi N'}^2 \lambda_{\chi s}^2}{m^4_A} \\ \sigma^{\text{SD}, p/n}_{p,p} &\sim \left( \frac{\mu_{\chi N'}^2 v^2}{m_\chi m_{N'}} \right)^2 \frac{\mu_{\chi N'}^2 \lambda_{\chi p}^2}{m^4_A} \, , \end{split} \end{align} where $\mu_{\chi N'}= \nicefrac{m_\chi m_{N'}}{m_\chi + m_{N'}}$ denotes the reduced mass of the DM particle $\chi$ and the nucleus $N'$. Interactions that are scalar on the nucleon operator side contribute to the SI cross section, whereas SD contributions arise from interactions that are pseudoscalar on the nucleon operator side. Note in particular that the ($s,s$) interaction is the only one unsuppressed by the relative velocity $v$. The mixed terms ($s,p$) and ($p,s$) are suppressed by two powers of $v$, while the ($p,p$) interaction features a suppression of $v^4$. In this work, these interactions are mainly SM-Higgs-exchange induced. 
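The velocity hierarchy of Eq.\@ \eqref{eqn:sigmaScaMed} can be made explicit by normalising each channel to the unsuppressed $(s,s)$ one; a small Python sketch (masses in GeV, with $v \sim 10^{-3}$ and the channel labels chosen by us):

```python
def reduced_mass(m_chi, m_nucleus):
    """Reduced mass mu_{chi N'} entering all DD cross-section scalings."""
    return m_chi * m_nucleus / (m_chi + m_nucleus)

def scalar_mediator_suppression(m_chi, m_nucleus, v=1e-3):
    """Velocity suppression of the four scalar-mediator channels,
    normalised to the unsuppressed (s, s) channel (couplings stripped off)."""
    mu = reduced_mass(m_chi, m_nucleus)
    return {
        "ss": 1.0,                                      # unsuppressed
        "ps": (mu * v / m_chi)**2,                      # v^2 suppressed
        "sp": (mu * v / m_nucleus)**2,                  # v^2 suppressed
        "pp": (mu**2 * v**2 / (m_chi * m_nucleus))**2,  # v^4 suppressed
    }
```

Numerically, the mixed channels sit roughly seven orders of magnitude below the $(s,s)$ channel for weak-scale masses, and the $(p,p)$ channel is the product of the two mixed suppressions, reflecting its $v^4$ scaling.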
\subsubsection{Fermionic DM $\chi$ and Vector Mediator $V_\mu$} \label{sec:DDvector} In the fermionic DM and vector-boson mediator case, the Lagrangian reads \begin{align} \mathcal{L} \supset \left[ \left(\frac{1}{2}\right) \bar{\chi} \gamma^\mu(g_{\chi v} + g_{\chi a} \gamma^5) \chi + \bar{q} \gamma^\mu (g_{q v} + g_{q a} \gamma^5) q \right] V_\mu \, , \end{align} where the factor $\frac{1}{2}$ enters only in the Majorana case. A special caveat here is that $g_{\chi v}$ \textit{vanishes} in the Majorana case, since terms of the form $\bar{\chi} \gamma^\mu \chi$ are forbidden. For the possible interactions, vector-vector ($v,v$), vector-axial vector ($v,a$), axial vector-vector ($a,v$) and axial vector-axial vector ($a,a$), we obtain \begin{align} \label{eqn:sigmaVecMed} \begin{split} \sigma^{\text{SI}}_{v,v} &\sim \frac{\mu_{\chi N'}^2 g_{\chi v}^2}{m^4_V} \\ \sigma^{\text{SI}}_{a,v} &\sim \frac{\mu^2_{\chi N'} v^2}{m_\chi^2} \frac{\mu_{\chi N'}^2}{\mu_{\chi N}^2} \frac{\mu_{\chi N'}^2 g_{\chi a}^2}{m^4_V} \\ \sigma^{\text{SD},p/n}_{v,a} &\sim \frac{\mu^2_{\chi N'}}{\mu^2_{\chi N}} \frac{g^2_{\chi v} v^2}{m_V^4} \\ \sigma^{\text{SD},p/n}_{a,a} &\sim \frac{\mu_{\chi N'}^2 g^2_{\chi a}}{ m_V^4} \, . \end{split} \end{align} Analogously to the scalar mediator case, the nucleon current containing $\gamma^5$, the axial vector current, leads to SD contributions, while the SI contribution stems from the vector current. The velocity suppression in these configurations is, however, not quite analogous. The ($v,v$) \textit{and} ($a,a$) interactions are unsuppressed, while the mixed terms ($a,v$) and ($v,a$) are suppressed by two powers of $v$. Therefore, there exists a contribution to the SD cross section unsuppressed by velocity.
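For bookkeeping, the vector-mediator pattern of Eq. \eqref{eqn:sigmaVecMed} can be summarized in the same manner; a small sketch recording the suppression power and the cross-section type of each combination (the powers of $v$ are read off the scalings above):

```python
# Velocity-suppression powers of the vector-mediator combinations in
# Eq. (sigmaVecMed), together with the type of cross section they feed.
# (v,v) and (a,a) are unsuppressed; the mixed terms carry two powers of v.

combos = {
    ("v", "v"): {"type": "SI", "v_power": 0},
    ("a", "v"): {"type": "SI", "v_power": 2},
    ("v", "a"): {"type": "SD", "v_power": 2},
    ("a", "a"): {"type": "SD", "v_power": 0},
}

# Unlike in the scalar-mediator case, an unsuppressed SD contribution exists:
unsuppressed_sd = [c for c, p in combos.items()
                   if p["type"] == "SD" and p["v_power"] == 0]
print(unsuppressed_sd)   # [('a', 'a')]
```

This is precisely the axial-axial channel that drives the SD limits for the Majorana models discussed later.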
\subsubsection{$s$-channel Lagrangians and Fierz Identities} \label{sec:Fierz} In the case of $s$-channel-type direct detection, it is possible to relate the Lagrangian to the $t$-channel interactions whose cross sections we presented in Sections \ref{sec:DDscalar} and \ref{sec:DDvector} via Fierz identities. The $s$-channel type DD interactions that are typically present in quarkphilic models are mediated by a scalar. Starting from the structure of the Lagrangian \begin{align} \mathcal{L} \supset - \bar{\chi} (\lambda_{s}- \lambda_{p} \gamma^5) q \, A - \bar{q} (\lambda^*_{s}+ \lambda^*_{p}\gamma^5) \chi A^\dagger \, , \end{align} we obtain the effective Lagrangian \cite{Agrawal:2010fh} \begin{align} \mathcal{L}_{\text{eff}} \supsetsim \frac{1}{m_A^2} \left[ |\lambda_{s}|^2 (\bar{\chi} q)(\bar{q}\chi ) -|\lambda_p|^2 (\bar{\chi}\gamma^5 q)(\bar{q}\gamma^5\chi) + \lambda_s \lambda_p^* (\bar{\chi}q)(\bar{q}\gamma^5 \chi) - \lambda_s^* \lambda_p (\bar{\chi}\gamma^5 q)(\bar{q}\chi) \right] \end{align} by integrating out the mediator. Further manipulation with Fierz transformations yields \begin{align} \label{eqn:LeffFierz} \begin{split} \mathcal{L}_{\text{eff}} &\supsetsim \frac{1}{4m_A^2} \left[ (|\lambda_s|^2 - |\lambda_p|^2) (\bar{q}q\bar{\chi}\chi + \frac{1}{2}\bar{q}\sigma^{\mu \nu}q \bar{\chi}\sigma_{\mu \nu}\chi) \right. \\ &\left. + (|\lambda_s|^2 + |\lambda_p|^2) (\bar{q}\gamma^\mu q \bar{\chi}\gamma_\mu \chi - \bar{q}\gamma^\mu \gamma^5 q \bar{\chi}\gamma_\mu \gamma^5 \chi ) \vphantom{\frac{1}{2}} \right] \, , \end{split} \end{align} where velocity suppressed combinations are neglected. Eq. \eqref{eqn:LeffFierz} suggests that in the limit $|\lambda_s|=|\lambda_p|$, the scalar-scalar and tensor-tensor operators vanish completely. The remaining quadrilinears can be related to the scaling of the cross sections presented in Sections \ref{sec:DDvector} and \ref{sec:DDscalar}. Note that both vector-vector and tensor-tensor contributions vanish in the Majorana case.
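In the limit $|\lambda_s|=|\lambda_p|\equiv|\lambda|$ mentioned above, Eq. \eqref{eqn:LeffFierz} therefore collapses by direct substitution to \begin{align} \mathcal{L}_{\text{eff}} \supsetsim \frac{|\lambda|^2}{2 m_A^2} \left( \bar{q}\gamma^\mu q \, \bar{\chi}\gamma_\mu \chi - \bar{q}\gamma^\mu \gamma^5 q \, \bar{\chi}\gamma_\mu \gamma^5 \chi \right) \, , \end{align} so that for Majorana DM, where the vector-vector quadrilinear vanishes as well, only the axial-axial operator survives, matching the unsuppressed SD scaling of Section \ref{sec:DDvector}.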
\subsubsection{Twist-$2$ Contributions to the SI DM-Nucleon Cross Section} \label{sec:twist2} In some Majorana versions of models presented in this work, there is no unsuppressed contribution to the SI cross section from quark operators at leading order. In this case, twist-2 operators become important. While the twist-2 quark operators come from a higher order propagator expansion of the tree-level quark diagrams, gluonic twist-2 operators generated by the box-diagrams shown in Fig. \ref{fig:DMGluonScattering} also have to be considered. These diagrams are taken into account by \textsc{micrOMEGAs} automatically. In this case, the SI cross sections receive additional contributions from these diagrams, which scale like \cite{Mohan:2019zrk} \begin{align} \label{eqn:sigmaTwist2} \sigma^{\text{SI}} &\sim \left( \frac{m_\chi m_N}{m_{\text{med}} + m_N} \right)^2 \left| -\frac{8\pi}{9\alpha_s} 0.8 f_G + \frac{3}{4} 0.416 \left( g_G^{(1)} + g_G^{(2)} \right) \right|^2 \, , \nonumber \\ f_G &\approx \frac{\alpha_s \Gamma_{s/b}^2 m_\chi}{192\pi} \frac{m_\chi^2 - 2 m_{\text{med}}^2 }{m_{\text{med}}^2 \left(m_{\text{med}}^2 -m_\chi^2 \right)^2} \, , \nonumber \\ g_G^{(1)} &\approx \frac{\alpha_s \Gamma_{s/b}^2 }{96\pi m_\chi^3 (m_{\text{med}}^2 -m_\chi^2)} \left[-2m_\chi^4 \log{\left( \frac{m_q^2}{m_{\text{med}}^2} \right)} - m_\chi^2 (m_{\text{med}}^2 + 3m_\chi^2 ) \right. \\ & \left. + (m_{\text{med}}^2 - 3m_\chi^2)(m_{\text{med}}^2 + m_\chi^2) \log{\left( \frac{m_{\text{med}}^2}{m_{\text{med}}^2 - m_\chi^2} \right)} \right] \, , \nonumber \\ g_G^{(2)} &\approx \alpha_s \Gamma_{s/b}^2 \frac{-2 m_{\text{med}}^2 m_\chi^2 + 2(m_{\text{med}}^2 - m_\chi^2)^2 \log{\left(\frac{m_{\text{med}}^2}{m_{\text{med}}^2 - m_\chi^2} \right)} + 3m_\chi^4 }{48\pi m_\chi^3 (m_{\text{med}}^2 - m_\chi^2)^2} \, . 
\nonumber \end{align} We will refer to these analytic expressions in Section \ref{sec:Results}, where we present the numerical results, as they play an important role in the SI direct detection procedure in quarkphilic DM Majorana models. Note that in the expressions above, $m_\text{med}$ corresponds to the mass of the colored particle directly coupling to DM. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Plots/DMGluonScattering.pdf} \caption{Leading order diagrams for DM-gluon scattering for quarkphilic DM.} \label{fig:DMGluonScattering} \end{figure} \subsection{Indirect Detection} \label{sec:IndirectDetection} In this section we briefly discuss indirect detection limits for the model classes presented in this work. Majorana type models are typically unconstrained by standard indirect searches due to the kinematic suppression of the pair annihilation cross section $\sigma(\bar{\chi} \chi \to \bar{f}f) \sim v^2$. However, this suppression is lifted for dark matter annihilations into a three-body final state containing a fermion pair and a photon $\bar{\chi} \chi \to \bar{f}f \gamma$ \cite{Bringmann:2007nk}. Such processes, commonly called Virtual Internal Bremsstrahlung (VIB), lead to a special spectral feature that is distinct from astrophysical backgrounds. Analyses searching for these features by FermiLAT \cite{Bringmann:2012vr} and HESS \cite{Abramowski:2013ax} can be used to constrain models that allow a DM coupling to light fermions \cite{Garny:2013ama,Belanger:2015nma}, such as the model classes discussed in this article. We calculate the VIB contribution to the annihilation cross section using \textsc{micrOMEGAs} 5.0 and compare the results to the 95\% C.L. upper limits on the annihilation cross section $\langle \sigma v \rangle_{\bar{f}f\gamma} + 2\langle \sigma v \rangle_{\gamma \gamma}$ for $\kappa=1.1,1.01$ obtained by \cite{Garny:2013ama}.
As in the case of DD, we rescale the experimental constraints with the fraction of the observed relic density so that \begin{align} \label{eqn:BoundRescaling2} \sigma_{\text{ID}} \leq \left(\frac{\Omega_{\text{DM}}}{\Omega_\chi}\right)^2 \text{Bound}(m_\chi) \, . \end{align} Note that for $\kappa < 2$, the contribution of the two-photon final state is negligible. Moreover, for large $\kappa$ VIB is suppressed by $\kappa^{-4}$ and thus we do not present limits for these cases. Our analysis shows that even for coannihilating scenarios the parameter space of the models analyzed in this work \textit{cannot} be constrained by VIB. In principle, for large couplings to leptons (in the case of leptophilic/amphiphilic DM) or quarks (quarkphilic/amphiphilic) the DM annihilation cross section can be relatively large. Nevertheless, the corresponding points in the parameter space typically feature a small relic density, which results in relaxed ID constraints because of the quadratic rescaling given in Eq. \eqref{eqn:BoundRescaling2}. Since our numerical results indicate that the relevant parameter space for Dirac models is excluded by DD already, we do not apply the standard ID searches constraining the DM annihilation into a pair of SM fermions for these models. \subsection{Setup and Classification} The model classes proposed by \cite{Arnan:2016cpy} as a one-loop solution to the $b \to s \mu^- \mu^+$ anomalies are distinct in their particle content. In either of the two model realizations, three additional particles are added to the SM. In realization a, two heavy scalars, $\phi_Q$ and $\phi_L$, and a vector-like fermion $\psi$ are present, whereas in realization b there exist two vector-like fermions, $\psi_Q$ and $\psi_L$, and a heavy scalar $\phi$. The indices $Q$ and $L$ denote their coupling to quarks/leptons, respectively.
The corresponding Lagrangians for each realization are \begin{align} \label{eq: ModelLagrangiana} \mathcal{L}_{int}^{a} &= \Gamma_{Q_i} \bar{Q}_i P_R \psi \phi_Q + \Gamma_{L_i} \bar{L}_i P_R \psi \phi_L + \text{h.c.} \\ \label{eq: ModelLagrangianb} \mathcal{L}_{int}^{b} &= \Gamma_{Q_i} \bar{Q}_i P_R \psi_Q \phi + \Gamma_{L_i} \bar{L}_i P_R \psi_L \phi + \text{h.c.} \, , \end{align} where $Q_i$ and $L_i$ denote the left-handed quark/lepton doublets, while $i$ is a flavor index. Eqs.~\eqref{eq: ModelLagrangiana} and \eqref{eq: ModelLagrangianb} reveal that the additional heavy particles couple only to left-handed SM fermions. This is a phenomenologically driven feature, implemented to ensure that $\mathcal{C}_9=-\mathcal{C}_{10}$ holds, which is the preferred scenario to fit the \textit{B} anomalies, where $\mathcal{C}_9$ and $\mathcal{C}_{10}$ are Wilson coefficients of \begin{align} \mathcal{O}_9 &= (\bar{s}\gamma^{\nu}P_{L} b)(\bar{\mu}\gamma_{\nu}\mu), \,\,\,\,\, \mathcal{O}_{10} = (\bar{s}\gamma^{\nu}P_{L} b)(\bar{\mu}\gamma_{\nu}\gamma^5\mu) \, . \end{align} The model classification up to the adjoint representation can be extracted from Table \ref{tbl:SUclassification}, where the representations of the new particles under the SM gauge group are presented. The Roman numerals I-VI classify the representation of the new fields under $SU(2)_L$, while the capital Latin letters A-D specify the $SU(3)_C$ charges.
\begin{table}[h] \centering \begin{tabular}{c|ccc} $SU(2)_L$ & $\phi_Q,\psi_Q$ & $\phi_L,\psi_L$ & $\psi,\phi$ \\ [1pt] \hline I & \textbf{2} & \textbf{2} & \textbf{1} \\ [1pt] II & \textbf{1} & \textbf{1} & \textbf{2} \\ [1pt] III & \textbf{3} & \textbf{3} & \textbf{2} \\ [1pt] IV & \textbf{2} & \textbf{2} & \textbf{3} \\ [1pt] V & \textbf{3} & \textbf{1} & \textbf{2} \\ [1pt] VI & \textbf{1} & \textbf{3} & \textbf{2} \\ [1pt] \hline \hline $SU(3)_C$ & & & \\ [1pt] \hline A & \textbf{3} & \textbf{1} & \textbf{1} \\[1pt] B & \textbf{1} & $\overline{\textbf{3}}$ & \textbf{3} \\[1pt] C & \textbf{3} & \textbf{8} & \textbf{8} \\[1pt] D & \textbf{8} & $\overline{\textbf{3}}$ & \textbf{3} \\ [1pt] \hline \hline $Y$ & & & \\ [1pt] \hline & $\nicefrac{1}{6} \mp X$ & $-\nicefrac{1}{2} \mp X$ & $\pm X$ \end{tabular} \caption{All possible choices for the combinations of representation of the new particles such that they allow for a one-loop contribution to $b \rightarrow s \mu^+ \mu^-$. The upper sign of $\pm$ belongs to a-type models, the lower to b-type models.} \label{tbl:SUclassification} \end{table} In this context, $X$ is defined as the hypercharge of $\psi$ in model class a, while it is defined as the negative hypercharge of $\phi$ in model realization b. The parameter $X$ can be freely chosen in units of $\nicefrac{1}{6}$ in the interval $X\in(-1,1)$. In this work, we want to analyze whether the model classes presented above contain a viable DM candidate, while still being able to explain the \textit{B} anomalies. A dark matter candidate is constrained to be a colorless, electrically neutral, massive particle\footnote{In fact, DM could be colored and exist in the form of colorless bound states. We, however, only consider single-particle DM. Note also that in principle DM could possess a small electric charge, as in scenarios of millicharged DM \cite{Berlin:2018sjs,Munoz:2018pzp}.}.
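The hypercharge assignments of Table \ref{tbl:SUclassification} can be cross-checked directly: each Yukawa term in Eqs.~\eqref{eq: ModelLagrangiana} and \eqref{eq: ModelLagrangianb} must carry zero net hypercharge. A minimal sketch of this check (with $Y_Q=\nicefrac{1}{6}$, $Y_L=-\nicefrac{1}{2}$ for the SM doublets):

```python
# Hypercharge conservation at the new Yukawa vertices: a term
# (SM doublet)bar P_R (new fermion)(new scalar) conserves Y iff the charges
# sum to zero.  Upper table signs correspond to a-type, lower to b-type.
from fractions import Fraction as F

Y_Q, Y_L = F(1, 6), F(-1, 2)          # SM quark/lepton doublet hypercharges

def dark_hypercharges(X, model):
    """Hypercharges of the three new fields, following the table's Y row."""
    s = -1 if model == "a" else +1    # sign convention of the table
    return {
        "col1": F(1, 6) + s * X,      # phi_Q (a-type) or psi_Q (b-type)
        "col2": F(-1, 2) + s * X,     # phi_L (a-type) or psi_L (b-type)
        "col3": -s * X,               # psi (a-type) or phi (b-type)
    }

def vertices_conserve_Y(X, model):
    c = dark_hypercharges(X, model)
    # The quark vertex pairs the column-1 and column-3 fields,
    # the lepton vertex the column-2 and column-3 fields.
    return (-Y_Q + c["col1"] + c["col3"] == 0 and
            -Y_L + c["col2"] + c["col3"] == 0)

print(all(vertices_conserve_Y(F(k, 6), m)
          for k in range(-5, 6) for m in "ab"))   # True
```

As expected, conservation holds identically in $X$, which is what makes $X$ a free parameter of the classification.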
This statement alone eliminates one half of the 48 possible model configurations, since in the categories $C$ and $D$ there is not a single colorless particle. The remaining 24 models can be classified in terms of the properties of their DM candidates. In this article, we limit ourselves to models with a fermionic singlet dark matter candidate, which amounts to five models where the DM can be either a Majorana or a Dirac fermion. All singlet DM models are categorized in Table \ref{tbl:DMclassification}. Note that in order to stabilize the DM candidate against a decay, we assume all BSM particles to carry an odd charge under a $\mathtt{Z}_2$ symmetry, while all SM particles are even under it. \begin{table}[H] \centering \begin{tabular}{c|c} fermionic singlet & scalar singlet \\ [1pt] \hline \textcolor{darkred}{aIA} & aIIA \\ [1pt] \textcolor{darkred}{bIIA} & aIIB \\ [1pt] \textcolor{darkred}{bIIB} & aVA \\ [1pt] \textcolor{darkred}{bVA} & aVIB \\ [1pt] \textcolor{darkred}{bVIB} & bIA \end{tabular} \caption{Models containing a singlet DM candidate. Only the models containing a fermionic singlet DM candidate, highlighted in red, are considered in this work. Note that the value for $X$ is fixed within each model by the condition that there is a singlet DM candidate.} \label{tbl:DMclassification} \end{table} \subsection{bIIA} \FloatBarrier Following Table \ref{tbl:SUclassification}, the representations of the dark sector particles in bIIA are \begin{align} \psi_L = (\textbf{1},\textbf{1})_0, \, \psi_Q= (\textbf{3},\textbf{1})_{\nicefrac{2}{3}}, \, \phi=(\textbf{1},\textbf{2})_{-\nicefrac{1}{2}} \nonumber \\ \stackrel{\text{EWSB}}{\Rightarrow} \psi_L \to \psi_L^0 , \, \, \psi_Q \to \psi_Q^{+\nicefrac{2}{3}} , \, \, \phi \to \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\phi^0 + \phi^{0'} \right) \\ \phi^{-} \end{pmatrix} \,. \end{align} As $\psi_L$ qualifies as the DM candidate, bIIA belongs to the class of \textit{leptophilic DM} models.
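The remark that $X$ is fixed by the singlet-DM condition can be made explicit for bIIA: demanding the b-type singlet $\psi_L$, with hypercharge $-\nicefrac{1}{2}+X$ according to Table \ref{tbl:SUclassification}, to be electrically neutral fixes $X=\nicefrac{1}{2}$ and thereby reproduces the charges displayed above. A quick check:

```python
# bIIA is b-type, so Y(psi_L) = -1/2 + X.  The singlet-DM condition
# Y(psi_L) = 0 fixes X = 1/2 and with it all other dark-sector
# hypercharges, reproducing the bIIA representations.
from fractions import Fraction as F

X = F(1, 2)                   # fixed by demanding a neutral singlet psi_L
Y_psiL = F(-1, 2) + X         # DM candidate
Y_psiQ = F(1, 6) + X          # colored vector-like fermion
Y_phi = -X                    # scalar doublet
print(Y_psiL, Y_psiQ, Y_phi)  # 0 2/3 -1/2
```

The same one-line computation fixes $X$ in each of the other fermionic-singlet models of Table \ref{tbl:DMclassification}.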
\subsubsection{Dirac DM} \label{sec:bIIADirac} \FloatBarrier \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Plots/PlotLegends.pdf} \\ \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bIIASummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bIIASummaryHighGbUpdated.pdf}} \caption{Summary plot for Dirac DM in bIIA. The plot provides the implications on the parameter space of the relic density (RD), the $R_K$ observable ($R_K$), direct detection (DD) and the anomalous magnetic moment of the muon ($\Delta a_\mu$) for $\kappa=1.01$ (dashed line), $\kappa=1.1$ (dotted line), $\kappa=5$ (dot-dashed line) and $\kappa=15$ (solid line). The orange lines give the parameters that generate the observed relic density. Parameter points to the left of those lines underproduce DM, while DM is overproduced to the right. The gray region displays the region where the $R_K$ anomaly can be addressed at $2 \sigma$. The blue region indicates the parameter space \emph{allowed} by direct detection experiments including both SI and SD constraints. Lastly, the green regions show areas where $\Delta a_\mu$ can be reproduced at $1 \sigma$. The area above the green bands leads to overly large contributions to $\Delta a_\mu$ and is therefore excluded.} \label{fig:SummarybIIADirac} \end{figure} \begin{comment} \begin{itemize} \item DM and $R_K$ only for $\kappa \gtrsim 12$. \item large Z vector coupling leads to stron SI DD constraints. \item No overlap between gray and blue -> DD excludes model as explanation for $R_K$ \item Lower mass bound on successful DM in coannihilation scenario results from gauge coupling/Quark Yukawas. \item The lines for coannihilating scenarios cross for large $\Gamma_\mu$ due to effecient conversions. \item $g-2$ excluded by DD \item Two effects in strength of DD constraints: Loop induced coupling becomes smaller with increasing mass (therefore with larger $\kappa$).
Conversely, larger $\kappa$ leads to a smaller $<\sigma v>$ and thus larger $\Omega_DM$. This leads to tighter constraints from XENON. \item Flavor Structure influences the strength of DD limits in strongly coannihilating scenarios. This comes from efficient DM annihilation via $\Gamma_B$ in return leading to softer DD limits. Annihilation into quarks goes like $\Gamma_b^4 + \Gamma_b^2 \Gamma_s^2 + \Gamma_s^4$ which is maximized for large $\Gamma_b$ or $\Gamma_s$ maximal. \end{itemize} \end{comment} The numerical results obtained for bIIA Dirac are summarized in Figure \ref{fig:SummarybIIADirac}. Figure \ref{fig:SummarybIIADirac} indicates that of the depicted mass configurations only $\kappa=15$ in principle allows for a solution of $R_K$, since the corresponding RD line lies within the allowed $R_K$ region. Typically, the correct relic density is not just a line, but a small allowed band. However, as the relative experimental error of $\Omega_\text{DM} h^2$ measured by Planck is $\approx 0.8\%$, the allowed band is extremely narrow. Since it is a purely non-coannihilating scenario, the DM freeze-out is dominated by $\bar{\psi}_L \psi_L \to \bar{L} L$ annihilations and we can estimate the slope of the RD graph using Eq. \eqref{eqn:correctRD}, thus ultimately indicating a $\Gamma_\mu \sim \sqrt{\nicefrac{M_\text{DM}}{\text{GeV}}}$ dependence. The '$\Gamma_\mu$-intercept' or the 'height' of the RD line is thus determined by the individual $\mathcal{B}_{RD}^m(\kappa)$ of the model, $\mathcal{B}_{RD}^\text{bIIA}(\kappa)$ in this case. As shown in Eq. \eqref{eqn:RDleptophilic}, a common solution to the correct relic density and the $R_K$ anomaly can only exist if $\kappa \gtrsim 11.8$, while configurations with $\kappa \lesssim 11.8$ are effectively excluded by DM overproduction.
Coannihilating scenarios are not always dominated by the aforementioned annihilations and therefore do not behave as simply as their non-coannihilating counterparts in this model. As can be seen in Fig. \ref{fig:SummarybIIADirac}, there is a lower mass threshold on successful RD reconstruction stemming from annihilations of $\psi_Q$ via the strong gauge coupling or via the new quark Yukawas, and of $\phi$ via the Higgs portal (see Figure \ref{fig:annihilation}). The threshold persists even for vanishing Higgs portal coupling and new quark Yukawa couplings $\Gamma_s$ and $\Gamma_b$, since the contribution from the strong gauge coupling $g_3$ is always present. Typically, Higgs portal interactions are dominated by the other two types of interaction and therefore we choose to fix the corresponding coupling $\lambda_{\phi, H, 1}$ during the entire scan. The visible $\kappa$-dependence of the threshold is explained by the typical coannihilation suppression factor of $\sim \exp{(-2x_f \kappa)}$ (see Eqs. \eqref{eqn:equilibrium}-\eqref{eqn:sigmaveff}): As $\kappa$ increases, coannihilation channels become more and more suppressed and thus the thermally averaged total cross section shrinks, leading to an increased DM relic density. This way, the threshold shifts towards lower masses for increasing $\kappa$. Furthermore, the annihilation cross section into quarks scales like $\sim \Gamma_b^4 + \Gamma_b^2 \Gamma_s^2 +\Gamma_s^4$ at any given $\kappa$, which explains the shift of the threshold to higher masses in the hierarchical scenario, since this factor is maximized for large values of $\Gamma_b$ realized in the hierarchical coupling structure. As the masses grow larger, coannihilating scenarios with the correct relic density become dominated by direct annihilations into leptons, as a larger $\Gamma_\mu$ is needed to achieve the correct thermally averaged cross section, leading to the same scaling behavior as in non-coannihilating scenarios.
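The flavor-structure argument can be made quantitative: writing $x=\Gamma_b^2$, $y=\Gamma_s^2$, the quark-annihilation factor equals $x^2+xy+y^2=(x+y)^2-xy$, which at fixed $x+y$ is largest when $xy$ vanishes, i.e. at the hierarchical endpoints. A minimal numerical sketch with illustrative coupling values:

```python
# At fixed Gamma_b^2 + Gamma_s^2, the quark-annihilation factor
# Gamma_b^4 + Gamma_b^2 Gamma_s^2 + Gamma_s^4 = (x+y)^2 - x*y  (x=Gb^2, y=Gs^2)
# is maximized at the hierarchical endpoints, where x*y = 0.

def quark_factor(gb, gs):
    return gb**4 + gb**2 * gs**2 + gs**4

total = 2.0                                   # fixed Gb^2 + Gs^2 (illustrative)
democratic = quark_factor(1.0, 1.0)           # Gb^2 = Gs^2 = 1
hierarchical = quark_factor(total**0.5, 0.0)  # all coupling strength in Gb
print(democratic, hierarchical)               # 3.0 vs ~4.0
```

The hierarchical configuration thus depletes the relic density more efficiently at equal total coupling strength, consistent with the threshold shift seen in the plots.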
In the democratic setup, another effect can be observed in the coannihilating scenarios: The intersection of the RD lines of $\kappa = 1.1$ and $\kappa = 1.01$. This effect cannot be understood from the annihilations described in Eq. \eqref{eqn:CrossSectionAnnihi} alone, but only by taking into account effective conversions of the dark sector particles. We provide a general estimate of the thermally averaged cross section in \hyperref[sec:AppendixA]{Appendix A}. There, we also briefly discuss the $\kappa$ interval in which such intersections occur. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Plots/DDDiagramsPsiLDM.pdf} \caption{Leading order DM-quark diagrams contributing to the DD cross section of $\psi_L$ DM.} \label{fig:DiagramsDDPsiL} \end{figure} Direct detection is mediated at leading order at the one-loop level in this model, as there are no tree-level vertices with quarks for leptophilic DM. The corresponding leading order diagrams are depicted in Figure \ref{fig:DiagramsDDPsiL}. It is important to note at this point that the box diagram exhibits one additional heavy dark sector particle propagator. In contrast to the penguin diagrams, moreover, the final-state quarks are of second or third generation, as $\Gamma_d=0$, which suppresses the diagram even further due to the smaller parton distribution functions (PDFs) of higher-generation quarks. \\ \indent Taking the DD results into account, we can state that a solution of the $R_K$ anomaly is impossible in this setup, since the allowed regions for DD and $R_K$ are completely disjoint in both the hierarchical and the democratic implementation. It is important to stress that DD even excludes configurations where DM is underproduced. For this model not to be excluded, coannihilations are essential, since only those configurations feature non-overproduced RD within the allowed DD regions.
The tightness of the constraints present in this model is due to the strong effective vector coupling of $\psi_L$ to the $Z$-boson, which in turn leads to a strong constraint from the SI DM-nucleon cross section. This constraint is weaker in coannihilating scenarios by $1-2$ orders of magnitude, which stresses the tension between a solution to $R_K$ and DD even more. In general, two competing effects are at work in this model: On the one hand, the effective $\bar{\psi}_L \psi_L Z$-coupling becomes smaller with increasing $\kappa$, since the masses of the dark sector particles within the loop increase accordingly, thus alleviating the bounds. An increasing $\kappa$ on the other hand generally increases $\Omega_\text{DM} h^2$ by lowering the thermally averaged cross section $\langle \sigma v \rangle$. As the bound posed by XENON depends on the fraction of $\psi_L$-DM relative to the observed DM density according to Eq. \eqref{eqn:BoundRescaling}, the bounds are tightened by this rescaling. This interplay explains the intuitively unexpected flipping behavior of the DD bounds of $\kappa=15$ and $\kappa=5$: since the two competing effects scale differently, a crossover of the hierarchies is to be expected. \\ \indent A comparison between Fig. \ref{fig:SummarybIIADirac} (a) and (b) leads to the conclusion that the flavor structure of the new quark Yukawas only influences the DD bounds of coannihilating scenarios. This is due to the fact that in leptophilic DM models only the suppressed box diagram exhibits a $\Gamma_{s/b}$ dependence, while the relic density in turn is indeed affected by these couplings in coannihilating scenarios, as mentioned above. As a hierarchical flavor structure tends to deplete DM more severely, the fraction of $\psi_L$-DM shrinks and the DD bounds are softened as a consequence.
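The rescaling logic can be sketched numerically; a minimal example with the exponent kept as a parameter ($n=1$ is assumed here for the DD rescaling of Eq. \eqref{eqn:BoundRescaling}, while the ID analogue, Eq. \eqref{eqn:BoundRescaling2}, uses $n=2$) and illustrative relic densities:

```python
# Rescaling of an experimental cross-section bound when chi makes up only a
# fraction of the observed relic density:
#   bound_eff = (Omega_DM / Omega_chi)^n * bound.
# n = 1 is assumed for the DD rescaling; Eq. (BoundRescaling2) uses n = 2.

def rescaled_bound(bound, omega_chi, omega_dm=0.12, n=1):
    """Effective (weakened) bound for an underproduced relic fraction."""
    return (omega_dm / omega_chi) ** n * bound

# A 50% underproduced relic weakens a DD bound by a factor 2 (n = 1)
# and an ID bound by a factor 4 (n = 2):
dd = rescaled_bound(1.0, omega_chi=0.06, n=1)
id_ = rescaled_bound(1.0, omega_chi=0.06, n=2)
print(dd, id_)   # 2.0 4.0
```

This is exactly the mechanism by which strongly coannihilating (and hence underproducing) scenarios escape the DD limits more easily.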
\\ \indent As a last point, an explanation of the $g-2$ of the muon cannot be constructed in any version of the bIIA Dirac model, as the allowed region and the DD allowed region are also disjoint. \subsubsection{Majorana DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bIIAMajoranaSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bIIAMajoranaSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Majorana DM in bIIA. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybIIAMajorana} \end{figure} \begin{comment} \begin{itemize} \item DD constraints much weaker caused by the missing vector current contribution. \item Actually, SD constraints stronger than SI though 6 orders weaker. \item Allowed region for DD and RK. \textbf{Collider !} \item Lower mass coannihilation bounds move to the right for high $\Gamma_b$. \item \textbf{To do: Why do coannihilation lines not cross? Possible answer: Coannihilation channels are efficient much longer due to Majorana DM annihilation suppression with $v^2$. This effect is correlated with non DM degrees of freedom.} \end{itemize} \end{comment} The Majorana version's results are presented in Figure \ref{fig:SummarybIIAMajorana}. In comparison to the Dirac version, the most striking differences of the Majorana version are the DD bounds, which are weakened by about one order of magnitude in $\Gamma_\mu$. This is mainly due to the vanishing of the vector current contribution to the $\bar{\psi}_L \psi_L Z$ vertex, which is forbidden in the Majorana case. In the absence of this contribution, the most stringent bounds do not come from the SI DD cross section, but from the SD DM-nucleon cross section generated by the axial vector current contribution to the $\bar{\psi}_L \psi_L Z$ vertex, even though the SD limits are $\sim 6$ orders of magnitude weaker.
The bounds on the SD DM-neutron cross section also come from XENON \cite{Aprile:2019dbj}, while IceCube puts the more stringent bounds on the DM-proton cross section \cite{Aartsen:2016zhm} in this mass range. The DM-neutron cross section tends to constrain the lower mass region up to $\sim 300\,$GeV, while higher masses are typically constrained by the DM-proton cross section bounds from IceCube. We always apply the strongest bound at any given mass\footnote{In the discussion of the upcoming models, the mass regions where the aforementioned experiments are most constraining vary but are not explicitly stated.}. As the blue regions allowed by DD now overlap with the gray ($R_K$) and green ($(g-2)_\mu$) regions, valid solutions to $R_K$ or $(g-2)_\mu$ (though not simultaneous ones) exist. We stress, however, that these solutions do not solve the DM problem, as DM is strongly underproduced in this part of the parameter space. Such constellations thus require a multi-component DM solution\footnote{Note that the other DM component(s) must, in this case, stem from a dark sector separate from the one presented in this work.} that goes beyond this model. In the democratic setup, the overlap with the $R_K$ region exists in the mass region $\lesssim 180\,$GeV, while a slightly larger window arises in the hierarchical setup with DM masses ranging up to $\sim 420\,$GeV for $\kappa=1.1$ and $\sim 1000\,$GeV for $\kappa=1.01$. For a valid $(g-2)_\mu$ solution a flavor hierarchy is beneficial, as the allowed mass range in the democratic case extends only to just below $100\,$GeV, whereas in the hierarchical case we observe viable masses up to $\sim 170\,$GeV for $\kappa=1.1$ and even $\sim 290\,$GeV for $\kappa=1.01$. The mass regions mentioned above appear to be accessible at the LHC, since they feature a colored particle of a mass less than a TeV. We briefly review results of searches of similar setups in Section \ref{sec:collider}, which suggest that such low-mass regions are excluded.
A dedicated collider study of this model beyond the contents of Section \ref{sec:collider} could be interesting. Another feature of the Majorana variant is that $R_K$ and RD can be reconciled with lower $\kappa$ values, as Majorana DM annihilations are p-wave at leading order and thus suffer from velocity suppression. This in turn requires a larger $\Gamma_\mu$ to achieve the observed relic density, ultimately pushing the relic density lines of lower $\kappa$ into the gray area. Quantitatively, this effect leads to a new lower bound on $\kappa$ for a reconciliation of $R_K$ and RD of $\kappa \gtrsim 4.7$, as indicated in Eq.~\eqref{eqn:RDleptophilic}. Furthermore, we note at this point that the relic density lines of the coannihilating scenarios do not intersect in this version of the model. The reason for this effect is an alteration of the degrees of freedom of DM because of the Majorana nature (see \hyperref[sec:AppendixA]{Appendix A} for more details). \subsection{bVA} \FloatBarrier Following Table \ref{tbl:SUclassification}, the representations of the dark sector particles in bVA are \begin{align} \psi_L = (\textbf{1},\textbf{1})_0, \, \psi_Q= (\textbf{3},\textbf{3})_{\nicefrac{2}{3}}, \, \phi=(\textbf{1},\textbf{2})_{-\nicefrac{1}{2}} \nonumber \\ \stackrel{\text{EWSB}}{\Rightarrow} \psi_L \to \psi_L^0 , \, \, \psi_Q \to \begin{pmatrix} \psi_Q^{+\nicefrac{5}{3}} \\ \psi_Q^{+\nicefrac{2}{3}} \\ \psi_Q^{-\nicefrac{1}{3}} \end{pmatrix} , \, \, \phi \to \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\phi^0 + \phi^{0'} \right) \\ \phi^{-} \end{pmatrix} \, . \end{align} This model can be called the 'sibling' of bIIA, since the fields are almost in the same representations. The only difference is that $\psi_Q$ is an $SU(2)_L$-triplet rather than a singlet.
\subsubsection{Dirac DM}\label{sec:bVADirac} \FloatBarrier \begin{figure}[H] \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bVASummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bVASummaryHighGbUpdated.pdf}}% \caption{Summary plot for Dirac DM in bVA. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybVADirac} \end{figure} The numerical results of the Dirac version of bVA are summarized in Figure \ref{fig:SummarybVADirac}. As the singlet $\to$ triplet shift of $\psi_Q$ is the only distinction, no difference in the relic densities of non-coannihilating scenarios occurs, since they are dominated by direct annihilation of DM into leptons. This fact is well confirmed by a comparison of Figs. \ref{fig:SummarybIIADirac} and \ref{fig:SummarybVADirac}. Compared to bIIA, bVA has a greater number of coannihilation channels and therefore features an increased $\langle \sigma v \rangle$ in coannihilation scenarios, leading to a higher mass threshold. As Eq. \eqref{eqn:RDleptophilic} suggests, the model dependent coefficient $\mathcal{B}^\text{bVA}_{\mu}(\kappa)$ is larger than the one belonging to bIIA. This stems from the fact that there are now also more contributions to $B$-$\bar{B}$ mixing and $b \to s l^+ l^-$ transitions. This results in a comparably higher $\kappa_0^\text{bVA}$. Concerning the direct detection results, the main statement from bIIA also applies to bVA. There are no viable solutions to either the $R_K$ or $(g-2)_\mu$ anomalies in this model. The DD results are virtually the same as in bIIA. This is the net result of two competing effects: the RD is slightly more depleted for coannihilating scenarios, which weakens the bounds by a factor of $\nicefrac{\Omega_{\text{DM}}}{\Omega_\chi}$, while the triplet $\psi_Q$ also adds one-loop contributions to DD; overall, the DD bounds end up weaker than in the bIIA model by $\mathcal{O}(10\%)$. \begin{comment} \begin{itemize} \item very similar to bIIA.
Models are "siblings". Only $SU\left( 2 \right)$ triplet instead of singlet $\psi_Q$. \item More coannihilation partners thus more channels. But also more contributions to DD at 1-loop. \end{itemize} \end{comment} \subsubsection{Majorana DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bVAMajoranaSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bVAMajoranaSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Majorana DM in bVA. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybVAMajorana} \end{figure} Figure \ref{fig:SummarybVAMajorana} presents the results for the bVA Majorana version. As indicated in Section \ref{sec:bVADirac}, there is a strong resemblance between bIIA and bVA. The most interesting difference is the parameter space viable for an $R_K$ solution, which shrinks substantially due to the more severe $R_K$ bounds. The range is diminished to an upper bound on the DM mass of $\sim 260\,$GeV for $\kappa=1.1$ and $\sim 410\,$GeV for $\kappa=1.01$ in the hierarchical version of this model. Another minor difference is the intersection of the coannihilating RD lines compared to the non-intersecting behavior in bIIA Majorana. The reason for this is the change in degrees of freedom of the color-charged fermion and the subsequent change in number density of the non-DM dark sector particles (see \hyperref[sec:AppendixA]{Appendix A}).
\FloatBarrier \subsection{bIIB} \label{sec:bIIB} \FloatBarrier Following Table \ref{tbl:SUclassification}, the representations of the dark sector particles in bIIB are \begin{align} \psi_L = (\bar{\textbf{3}},\textbf{1})_{-\nicefrac{2}{3}}, \, \psi_Q= (\textbf{1},\textbf{1})_0 , \, \phi=(\textbf{3},\textbf{2})_{\nicefrac{1}{6}} \nonumber \\ \stackrel{\text{EWSB}}{\Rightarrow} \psi_Q \to \psi_Q^0 , \, \, \psi_L \to \psi_L^{-\nicefrac{2}{3}} , \, \, \phi \to \begin{pmatrix} \phi^{+\nicefrac{2}{3}} \\ \phi^{-\nicefrac{1}{3}} \end{pmatrix} \, . \end{align} This model's DM candidate is $\psi_Q$, which qualifies bIIB as a \textit{quarkphilic} DM model. A notable feature of quarkphilic DM is a tree-level contribution to the direct detection cross section via the new Yukawa couplings $\Gamma_{s/b}$. However, since the first-generation down-quark Yukawa $\Gamma_d$ is set to zero, these contributions proceed only through sea quarks and are thus PDF-suppressed. \subsubsection{Dirac DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bIIBSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bIIBSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Dirac DM in bIIB. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybIIBDirac} \end{figure} The results of the numerical scans for bIIB Dirac are summarized in Figure \ref{fig:SummarybIIBDirac}. As $\psi_L$ is the coannihilation partner in quarkphilic DM models like bIIB, $\Gamma_\mu$, the Yukawa coupling directly contributing to $R_K$ and $\Delta a_\mu$, is unimportant in scenarios where $\kappa \gtrsim 1.2$. As can be seen in Fig. \ref{fig:SummarybIIBDirac}, in both the democratic and hierarchical versions, $\kappa=5$ and $\kappa=15$ cannot reproduce the observed DM relic density for DM masses $> 100\,$GeV. This is due to the fact that $\Gamma_{s/b}$ are not strong enough to deplete the RD by themselves. Further, a $\Gamma_\mu$ within the perturbative bounds is not sufficient to overcome the mass suppression by increasing $\langle \sigma v \rangle$ in a way that DM is not overproduced. However, successful relic density reproduction is possible within coannihilation scenarios, where this mass suppression is lowered and thus contributions from $\psi_L$ annihilation are sizable enough to offer some viable parameter space. Furthermore, a hierarchical flavor structure between second and third generation quark couplings lowers the influence of $\Gamma_\mu$ on the RD, because the thermal freeze-out is dominated by direct annihilation into third generation quarks. This behavior is well illustrated by a comparison between Fig. \ref{fig:SummarybIIBDirac} (a) and (b), as no change in the $\Gamma_\mu$-direction is apparent. We also observe the shift of the RD mass threshold to higher masses already described in Sec. \ref{sec:bIIADirac}. The most striking feature of this model is the complete absence of parameter space allowed by direct detection. The reasons for this include the aforementioned tree-level diagrams ($s$ and $t$ channel) inducing a sizable contribution to $\sigma^\text{SI}_\text{DD}$ and a vector current contribution to the $\bar{\psi}_Q \psi_Q Z$-vertex.
All contributions to DD up to one loop are presented in Figure \ref{fig:DiagramsDDPsiQ}. Note that the contribution of the effective $\bar{\psi}_Q \psi_Q H$ vertex is small compared to the ones from the $\bar{\psi}_Q \psi_Q Z$-vertex. This holds even though the top-quark Yukawa $y_t$ enters these processes, so that, in contrast to leptophilic DM models, the Higgs exchange is not suppressed by a small Yukawa coupling. The Higgs portal diagram, however, is suppressed by an additional heavy scalar propagator and is thus less important. \begin{figure} \centering \subfigure[tree-level]{\centering \includegraphics[width=0.25\textwidth]{Plots/treelvlDDPsiQDM.pdf}} \subfigure[one-loop]{\centering \includegraphics[width=0.7\textwidth]{Plots/DDDiagramsPsiQDM.pdf}} \caption{Tree-level (a) and one-loop (b) DM-quark diagrams contributing to the DD cross section of $\psi_Q$ DM.} \label{fig:DiagramsDDPsiQ} \end{figure} \newpage \subsubsection{Majorana DM} \FloatBarrier Figure \ref{fig:SummarybIIBMajorana} presents the numerical results of the Majorana DM version of bIIB. The behavior of the relic density is basically the same for Majorana as for Dirac, with a slight difference in the mass threshold, which is due to the annihilation being $p$-wave rather than $s$-wave. \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bIIBMajoranaSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bIIBMajoranaSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Majorana DM in bIIB. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybIIBMajorana} \end{figure} With regard to direct detection, the Majorana version exhibits parameter space in agreement with DD limits.
As mentioned in earlier sections, the vector current contribution to the effective $\bar{\psi}_Q \psi_Q Z$ vertex vanishes in Majorana DM models, so that the main contribution to the SI DM-nucleon cross section at the one-loop level is absent. The tree-level diagrams (see Fig. \ref{fig:DiagramsDDPsiQ}) are of the scalar-mediated $s$-channel type\footnote{Note that the tree-level $t$-channel diagram has an operator structure of $(\bar{\psi}_Q Q) (\bar{Q} \psi_Q)$, which resembles the structure of an $s$-channel diagram.} and can be related to $t$-channel type DD via Fierz transformations. According to the discussion in Section \ref{sec:Fierz}, this leads to $(v,v)$, $(v,a)$, $(a,v)$ and $(a,a)$ contributions. Since the $(v,v)$ structure does not exist in Majorana models, there is no unsuppressed contribution to the SI DM-nucleon cross section in this case, and the twist-$2$ operators discussed in Section \ref{sec:twist2}, which depend on two powers of $\Gamma_{s/b}$ (see Fig. \ref{fig:DMGluonScattering}), become important. The mass scaling of $\sigma^{\text{SI}}$ can be derived from Eq. \eqref{eqn:sigmaTwist2}. As $\sigma^{\text{SI}}$ effectively diminishes with increasing DM mass and the bound posed by XENON1T softens as $\sim m_\text{DM}$, twist-$2$ contributions generally constrain lower masses more strongly than higher masses. An analogous behavior can be observed for $\kappa$: the cross section $\sigma^{\text{SI}}$ is lowered with increasing $\kappa$, effectively pushing the DD threshold to lower masses. In the democratic scenario, where $\Gamma_{s/b}$ take moderate values, a considerable amount of parameter space is left open in coannihilation scenarios. We observe a complementary behavior of SD and SI DD, as SI limits rule out small masses, whereas SD limits rule out larger masses.
In the region of small $M_{\psi_Q}$ and large $\Gamma_\mu$, the RD is significantly depleted by coannihilation channels involving $\psi_L$ and the bound is thus softened, while the contributions to the SI cross section do not depend on $\Gamma_\mu$, so that large couplings to muons ultimately become viable. \\ \indent The shape of the SD exclusion can be explained by the rescaling of the DD bound due to the relic density: areas in the parameter space where the RD is strongly overproduced feature tighter bounds compared to the underproduced part of the parameter space\footnote{Note that this parameter space is ruled out by RD anyways.}. \\ \indent Non-coannihilating scenarios are ruled out by SD DD, as the complete parameter space overproduces DM, although they are completely unconstrained by SI DD. In this model, there is viable parameter space for a solution to $R_K$ and DM in the $\kappa=1.1$ scenario for DM masses $M_{\psi_Q} \gtrsim 1.5\,$TeV and couplings $\Gamma_\mu \gtrsim 3$. In this window, the correct RD can be reproduced and $R_K$ solved while the parameter sets are still allowed by DD. This window, however, is narrow and requires a finely tuned mass gap $\kappa$, as both larger and smaller values of $\kappa$ studied in this work either exclude the parameter space via direct detection ($\kappa=5$) or underproduce DM ($\kappa=1.01$). This model can in principle also explain $R_K$ and $(g-2)_\mu$ simultaneously within the $\kappa=1.1$ scenario, with the caveat of DM being heavily underproduced. This solution, however, is also unstable in $\kappa$, as both lower and higher values of $\kappa$ lead to exclusion via DD. The hierarchical scenario is completely excluded by DD due to the large new Yukawa coupling $\Gamma_b$. The magnitude of this coupling drives the SI threshold to larger masses and causes the SI and SD exclusion areas to overlap completely.
Additionally, in this scenario the RD rescaling at large $\Gamma_\mu$ does not soften the bound sufficiently to open up the parameter space, as opposed to the democratic one. \FloatBarrier \subsection{bVIB} \label{sec:bVIB} \FloatBarrier The representations of the dark sector particles in bVIB are \begin{align} \psi_L = (\bar{\textbf{3}},\textbf{3})_{-\nicefrac{2}{3}}, \, \psi_Q= (\textbf{1},\textbf{1})_0 , \, \phi=(\textbf{3},\textbf{2})_{\nicefrac{1}{6}} \nonumber \\ \stackrel{\text{EWSB}}{\Rightarrow} \psi_Q \to \psi_Q^0 , \, \, \psi_L \to \begin{pmatrix} \psi_L^{+\nicefrac{1}{3}} \\ \psi_L^{-\nicefrac{2}{3}} \\ \psi_L^{-\nicefrac{5}{3}} \end{pmatrix} , \, \, \phi \to \begin{pmatrix} \phi^{+\nicefrac{2}{3}} \\ \phi^{-\nicefrac{1}{3}} \end{pmatrix} \, . \end{align} For the same reason that bVA is considered the `sibling' model of bIIA, namely the singlet-to-triplet shift, bVIB can be considered the `sibling' of bIIB. \FloatBarrier \subsubsection{Dirac DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bVIBSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bVIBSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Dirac DM in bVIB. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybVIBDirac} \end{figure} The results for Dirac DM in bVIB are summarized in Figure \ref{fig:SummarybVIBDirac}. A comparison between Figs. \ref{fig:SummarybIIBDirac} and \ref{fig:SummarybVIBDirac} shows that the two sibling models bIIB and bVIB expectedly lead to similar results. In bVIB we can observe a more stringent bound on $\Gamma_\mu$ for an $R_K$ solution than in bIIB. This feature can also be observed in the bIIA vs. bVA comparison, where the triplet model bVA exhibits more stringent $R_K$ bounds. The reason for this is again the increased number of diagrams contributing to the $b \to s l^+ l^-$ transitions in a triplet model.
The relic densities of the non-coannihilating scenarios $\kappa=5,15$ are completely unaltered in comparison to bIIB, since only the $\psi_L$ gauge representation differs. Regarding direct detection, it remains to state that the whole parameter space studied in this work is excluded for this model, as is the case in bIIB. The two competing effects, namely the rescaling of the DD bounds from the lower RD in coannihilation scenarios and the additional contributions to the DD cross section from diagrams involving the triplet particles, can have an $\mathcal{O}(10\%)$-effect on the DD bounds (see Section \ref{sec:bVADirac}). This, however, is not enough to render any of the parameter space viable in the quarkphilic Dirac DM model. \subsubsection{Majorana DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/bVIBMajoranaSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/bVIBMajoranaSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Majorana DM in bVIB. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummarybVIBMajorana} \end{figure} The results of the Majorana version of bVIB are summarized in Figure \ref{fig:SummarybVIBMajorana}. This model's results mostly resemble those obtained for bIIB Majorana, due to the minor differences between the models. The hierarchical scenario is still completely ruled out by direct detection and therefore does not show any significant difference to the hierarchical bIIB Majorana model. The democratic scenario, on the other hand, exhibits some notable differences. The most important one is that there is no parameter space for a simultaneous solution of $R_K$ and DM, as the $R_K$ region shrinks significantly due to the shift in contributions to both $b\to s l^+ l^-$ transitions and $B$-$\bar{B}$ mixing.
Therefore, $\psi_Q$-DM is a possibility in this model only if the original motivation of the model is abandoned. Another minor difference is that the parameter space allowed by DD in the $\kappa=5$ scenario reaches mass values $>100\,$GeV, as the mass thresholds of SI DD are generally shifted to higher masses. This is due to a stronger underproduction of DM and the subsequent relaxation of the DD bounds compared to bIIB Majorana. Furthermore, the conclusions regarding a solution of $(g-2)_\mu$ remain almost the same as in bIIB. There is a possibility for a correct $\Delta a_\mu$, with the downside of underproduced $\psi_Q$-DM. The main difference between the solutions of bIIB and bVIB is the mass range where such a realization is possible. This model also shows that in principle there is a possibility of a solution to $(g-2)_\mu$ involving a relatively light $\psi_Q$, as demonstrated in the case of $\kappa=5$. In this case, however, we expect collider searches to become more and more restrictive, as we outline in Section \ref{sec:collider}. \FloatBarrier \subsection{aIA} \label{sec:aIA} \FloatBarrier \begin{align} \psi= (\textbf{1},\textbf{1})_0 , \, \phi_L = (\textbf{1},\textbf{2})_{-\nicefrac{1}{2}} , \, \phi_Q=(\textbf{3},\textbf{2})_{\nicefrac{1}{6}} \nonumber \\ \stackrel{\text{EWSB}}{\Rightarrow} \psi \to \psi^0 , \, \, \phi_L \to \begin{pmatrix} \frac{1}{\sqrt{2}} \left( \eta^0 + \eta^{0'} \right) \\ \eta^- \end{pmatrix} , \, \, \phi_Q \to \begin{pmatrix} \sigma^{+\nicefrac{2}{3}} \\ \sigma^{-\nicefrac{1}{3}} \end{pmatrix} \, . \end{align} As aIA is the only a-type model with a fermionic singlet DM candidate, it makes up its own category of \textit{amphiphilic} DM, meaning that DM couples to both quarks and leptons. Thus, aIA possesses properties of both quark- and leptophilic models.
Moreover, this model is the only one that features different $R_K$ bounds in the Dirac and Majorana versions, as the $SU(2)_L$ structure allows for additional diagrams contributing to $B$-$\bar{B}$ mixing and $b \to s l^+l^-$ transitions (see \cite{Arnan:2016cpy} for a more detailed discussion). \subsubsection{Dirac DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/aIASummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/aIASummaryHighGbUpdated.pdf}}% \caption{Summary plot for Dirac DM in aIA. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummaryaIADirac} \end{figure} We present the results for aIA in Figure \ref{fig:SummaryaIADirac}. As the parameter space principally allowing for an $R_K$ solution is significantly enlarged in comparison to all other models in this study ($\sim$ 1-2 orders of magnitude), all $\kappa$-configurations have the potential to provide a simultaneous solution for the DM relic density and $R_K$ in the democratic scenario. We can observe the leptophilic DM characteristics especially in the scaling of the non-coannihilating configurations. The hierarchical scenario, on the other hand, shows very high mass thresholds.
This is caused by direct annihilation into quarks, as the annihilation cross section is completely dominated by annihilation into third generation quarks. This feature can also be observed in purely quarkphilic DM models (see Figs. \ref{fig:SummarybIIBDirac} and \ref{fig:SummarybVIBDirac}). Direct detection rules out all of the interesting parameter space in this model. For the diagrams contributing to DD, we refer to Fig. \ref{fig:DiagramsDDPsi}. First of all, we obtain the tree-level contributions to the SI DM-nucleon cross section typical for quarkphilic DM. Furthermore, aIA Dirac exhibits $t$-channel $Z$-diagrams depending on both the new leptonic and quark Yukawa couplings. This leads to even more enhanced vector contributions to the $\bar{\psi} \psi Z$-vertex compared to purely leptophilic or quarkphilic DM models. \begin{figure} \centering \subfigure[tree-level]{\centering \includegraphics[width=0.25\textwidth]{Plots/treelvlDDPsiDM.pdf}} \subfigure[one-loop]{\centering \includegraphics[width=0.7\textwidth]{Plots/DDDiagramsPsiDM.pdf}} \caption{Tree-level (a) and one-loop (b) DM-quark diagrams contributing to the DD cross section of $\psi$ DM.} \label{fig:DiagramsDDPsi} \end{figure} \newpage \subsubsection{Majorana DM} \FloatBarrier \begin{figure} \centering \subfigure[democratic]{ \includegraphics[width=0.5\textwidth]{Plots/aIAMajoranaSummaryDemoUpdated.pdf}}% \subfigure[hierarchical]{ \includegraphics[width=0.5\textwidth]{Plots/aIAMajoranaSummaryHighGbUpdated.pdf}}% \caption{Summary plot for Majorana DM in aIA. The legend and an explanation of the color scheme are given in Fig. \ref{fig:SummarybIIADirac}.} \label{fig:SummaryaIAMajorana} \end{figure} Figure \ref{fig:SummaryaIAMajorana} summarizes the results for the Majorana version of aIA. Compared to the Dirac version, the Majorana version of aIA features shifted relic density lines, which is due to the $p$-wave annihilation.
The coannihilating scenarios exhibit higher mass thresholds, a feature common to all models. As indicated at the beginning of Section \ref{sec:aIA}, the $R_K$ regions follow an altered hierarchy with respect to the coannihilation parameter $\kappa$ in the Majorana version compared to the Dirac version. This is a unique feature of aIA among all models studied in this work, as it is the only model where additional diagrams enter in the Majorana case. The $R_K$ bound of $\kappa=1.01$ is more restrictive than the $\kappa=15$ bound, while $\kappa=5$ and $\kappa=1.1$ differ only at the $\mathcal{O}(1\%)$-level\footnote{The $R_K$ bound as a function of $\kappa$ has a global minimum at $\kappa \approx 1.78$ and diverges at $\kappa=1$ and $\kappa \to \infty$.}. Generally speaking, the direct detection results are the most interesting in this model. Since aIA features traits of both quarkphilic and leptophilic models, the areas allowed by DD stem from a dynamic interplay between these model characteristics. \\ \indent As is characteristic for leptophilic models, the DM-nucleon cross section is constrained for large $\Gamma_\mu$ because of the axial vector part of the one-loop diagrams involving leptons. The shape of the exclusion line in this model, however, differs from that of leptophilic models. There are two major reasons for this: On the one hand, the relic density rescaling is influenced not only by direct annihilation into leptons but also by direct annihilation into quarks in this model. This is the reason for the much more parallel alignment of the DD exclusion curves compared to leptophilic models. On the other hand, the SD loop contributions differ because of additional quark loops, which come with an opposite sign compared to the leptonic contributions because of their hypercharge structure\footnote{Note that the additional quark contributions do not necessarily enhance the SD DM-nucleon cross section because of potential cancellations with the leptonic contributions.}.
\\ \indent The exclusion from below is a typical characteristic of one-loop contributions involving quarks in the loop, as they depend on $\Gamma_{s/b}$ rather than $\Gamma_\mu$ but are still sensitive to the masses of the particles running in the loop (and therefore also to $\kappa$). \\ \indent The allowed regions visible in Fig. \ref{fig:SummaryaIAMajorana} therefore arise from the interplay of the leptonic and hadronic loop contributions and the relic density rescaling of the DD bound. The above-mentioned contributions are all SD, but as discussed in Sections \ref{sec:bIIB} and \ref{sec:bVIB}, SI DD induced by DM-gluon scattering can also occur in quarkphilic models. This effect is visible in coannihilation scenarios, where the allowed regions feature a cut-off at a certain DM mass, such that smaller DM masses are excluded. In the hierarchical scenario, the allowed area for $\kappa=1.01$ disappears completely. In coannihilation scenarios, we find a `sweet spot' for a valid DM production in the region $M_\psi \in[1.22,\, 1.32]\,$TeV. This, however, does not offer a solution to either the $R_K$ or the $(g-2)_\mu$ anomalies. In the non-coannihilation scenario $\kappa=5$, the mass region $M_\psi \lesssim 200\,$GeV offers a solution to both $R_K$ and DM in this model. Simultaneous solutions for $R_K$, DM and $(g-2)_\mu$ do not exist in this model. Moreover, even individual solutions to $(g-2)_\mu$ are excluded by DD in this setup.
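The strong sensitivity of the coannihilation scenarios to the mass-gap parameter $\kappa$ observed throughout this section can be made plausible with the standard Boltzmann suppression of the partner's abundance at freeze-out. The sketch below follows the usual Griest-Seckel number-density weighting; the internal degrees of freedom and the freeze-out value $x_f \approx 25$ are generic assumptions of ours, not the values used in the scans.

```python
import math

def partner_weight(kappa, g_partner=2, g_dm=2, x_f=25.0):
    """Relative Boltzmann weight of a coannihilation partner with mass
    m_partner = kappa * m_DM at freeze-out (x_f = m_DM / T_f).

    Uses the standard equilibrium number-density weighting:
    n_i ~ g_i * (1 + delta)^{3/2} * exp(-x_f * delta), delta = kappa - 1.
    Returns the partner's fraction of the effective degrees of freedom.
    """
    delta = kappa - 1.0
    w = g_partner * (1.0 + delta) ** 1.5 * math.exp(-x_f * delta)
    return w / (g_dm + w)

# The partner's contribution dies off quickly with the mass gap:
for kappa in (1.01, 1.1, 1.2, 5.0):
    print(kappa, partner_weight(kappa))
```

For $\kappa \gtrsim 1.2$ the exponential factor already suppresses the partner's weight to below the percent level, which is consistent with coannihilation effects becoming irrelevant for the larger mass gaps studied here.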
\section{Introduction} \input{Chapters/Introduction.tex} \section{Model Classification and Coupling Constraints} \label{sec:ModelandAnalysis} \input{Chapters/ModelandAnalysis.tex} \input{Chapters/couplingConstraints.tex} \section{Dark Matter Phenomenology} \label{sec:DMPheno} \input{Chapters/DMPheno.tex} \section{Results of the Numerical Analysis} \label{sec:Results} \input{Chapters/Numerical.tex} \input{Chapters/Collider.tex} \section{Summary and Conclusions}\label{sec:conclusion} \input{Chapters/Conclusion.tex} \section*{Acknowledgements} \input{Chapters/Acknowledgements.tex}
\section{Introduction} Partitioning algorithms play a key role in machine learning, signal processing and communications. Given a set $\mathbb{M}$ consisting of $M$ $N$-dimensional elements and a loss function over the subsets of $\mathbb{M}$, a $K$-optimal partition algorithm splits $\mathbb{M}$ into $K$ subsets such that the total loss over all $K$ subsets is minimized. The loss function is also termed an impurity function, as it measures the ``impurity'' of a set. Popular impurity functions include the entropy function and the Gini index \cite{quinlan2014c4}. For example, when the empirical entropy of a set is large, this indicates a high level of non-homogeneity, i.e., ``impurity'', of the elements in the set. Thus, a $K$-optimal partition algorithm divides the original set into $K$ subsets such that the weighted sum of the entropies of the subsets is minimal. In general, the partitioning problem is NP-hard. For small $M$, $N$, and $K$, the optimal partition can be found using an exhaustive search with time complexity $O(K^M)$. In some special cases, such as when $N = 2$ and a particular form of impurity function is used, it is possible to determine the optimal partition in $O(M\log{M})$ time, independent of $K$ \cite{breiman2017classification}. On the other hand, for large $M$, $N$, and $K$, exhaustive search is infeasible, and it is necessary to use approximate algorithms. To that end, several heuristic algorithms \cite{nadas1991iterative}, \cite{chou1991optimal}, \cite{coppersmith1999partitioning}, \cite{burshtein1992minimum} are commonly used to find the optimal partition. These algorithms exploit properties of the impurity function to reduce the time complexity. Specifically, in \cite{coppersmith1999partitioning}, \cite{burshtein1992minimum}, a class of impurity functions called ``frequency-weighted concave impurities'' is investigated. Both the Gini index and the entropy function belong to this class.
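As a concrete illustration of these frequency-weighted impurities, the following sketch (ours, not taken from the cited works) computes the weighted entropy and Gini impurities of a candidate partition; a pure split drives the total impurity to zero.

```python
import math
from collections import Counter

def impurity(labels, kind="entropy"):
    """Impurity of one subset: entropy or Gini index of its label frequencies."""
    n = len(labels)
    probs = [c / n for c in Counter(labels).values()]
    if kind == "entropy":
        return -sum(p * math.log2(p) for p in probs)
    return 1.0 - sum(p * p for p in probs)  # Gini index

def partition_loss(subsets, kind="entropy"):
    """Frequency-weighted impurity: each subset weighted by its share of elements."""
    total = sum(len(s) for s in subsets)
    return sum(len(s) / total * impurity(s, kind) for s in subsets)

mixed = ["a", "a", "b", "b"]
pure_split = [["a", "a"], ["b", "b"]]
print(partition_loss([mixed]))     # impurity of the unsplit set (1.0 for entropy)
print(partition_loss(pure_split))  # a pure split has zero impurity
```

The same `partition_loss` scoring applies to any concave per-subset loss, which is exactly the structure exploited by the cited algorithms.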
Furthermore, assuming the concavity of the impurity function, Burshtein et al. \cite{burshtein1992minimum} and Coppersmith et al. \cite{coppersmith1999partitioning} showed that the subsets of an optimal partition can be separated by hyper-planes in the probability space. Consequently, they proposed approximate algorithms to find the optimal partition. Recently, in \cite{laber2018binary}, an approximate algorithm was proposed for binary partitions ($K=2$) that guarantees the true impurity is within a constant factor of the approximation. From a communication/coding theory perspective, the problem of finding an optimal quantizer that maximizes the mutual information between the input and the quantized output is an important instance of the partition problem. In particular, algorithms for constructing polar codes \cite{tal2013construct} and for decoding LDPC codes \cite{romero2015decoding} make use of such quantizers. Consequently, there have been recent works on designing quantizers that maximize mutual information \cite{kurkoski2014quantization}, \cite{nguyen2018capacities}. In this paper, we extend the minimum-impurity partition problem to include constraints on the output variable. Since many real-world problems involve optimization under constraints, this extension is both interesting and widely applicable. For example, our setting generalizes the recently proposed deterministic information bottleneck \cite{strouse2017deterministic}, which finds the optimal partition that maximizes the mutual information between the input and the quantized output while keeping the output entropy as small as possible. It is worth noting that Strouse et al. used a technique similar to the information bottleneck method \cite{tishby2000information}, which is hard to extend to other impurity and constraint functions.
On the other hand, our proposed method is developed based on a novel optimality condition, which allows us to find a locally optimal solution efficiently for arbitrary frequency-weighted concave impurity functions under arbitrary concave constraints. Moreover, we show that the optimal partition is a hard partition that is equivalent to cuts by hyper-planes in the space of posterior probabilities, which finally yields a polynomial-time algorithm for finding the globally optimal partition. \section{Problem Formulation} \begin{figure} \centering \includegraphics[width=1.6 in]{fig_1_setup.eps}\\ \caption{$Q(Y) \rightarrow Z$ for a given joint distribution $p_{(X,Y)}$.}\label{fig: 1} \end{figure} Consider an original discrete source $X$ taking values in $\{X_1,X_2,\dots,X_N\}$ with a given distribution $p_X=[p_1,p_2,\dots,p_N]$. Due to the effect of noise, one can only observe a noisy version of $X$ named $Y$, taking values in $\{Y_1,\dots,Y_M\}$, where the joint probability distribution $p_{(X_i,Y_j)}$ is given for all $i =1,2,\dots,N$ and $j=1,2,\dots,M$. It is easy to compute the distribution of $Y$, i.e., $p_Y=[q_1,q_2,\dots,q_M]$. Therefore, each sample $Y_i$ is specified by a joint probability distribution vector $p_{(X,Y_i)}=[p_{(X_1,Y_i)},p_{(X_2,Y_i)}, \dots, p_{(X_N,Y_i)}]$, which involves two parameters: (i) the probability weight $q_i$ and (ii) a conditional probability vector $p_{X|Y_i}=[p_{X_1|Y_i},p_{X_2|Y_i},\dots,p_{X_N|Y_i}]$. From the discrete data $Y$, the partitioned output $Z=\{Z_1,\dots,Z_K\}$ with distribution $p_Z=[v_1,v_2,\dots,v_K ]$ is obtained by applying a (possibly stochastic) quantizer $Q$ which assigns $Y_j \in Y$ to a partitioned output subset $Z_i \in Z$ with probability $p_{Z_i|Y_j}$, where $0 \leq p_{Z_i|Y_j} \leq 1$: \begin{equation} Q( Y) \rightarrow Z. \end{equation} Fig. \ref{fig: 1} illustrates our setting.
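Concretely, a deterministic quantizer in this setting simply aggregates columns of the joint distribution. The following sketch (our illustration; the numbers are made up) computes $p_{(X,Z)}$ and $p_Z$ for a toy joint distribution:

```python
# Hypothetical 2x3 joint distribution p(X_i, Y_j); rows index X, columns index Y.
p_XY = [[0.30, 0.10, 0.05],
        [0.05, 0.20, 0.30]]

def quantize(p_XY, assign, K):
    """Deterministic quantizer: assign[j] is the output index Z for Y_j.
    Returns the joint distribution p(X, Z) and the output marginal p_Z."""
    N, M = len(p_XY), len(p_XY[0])
    p_XZ = [[0.0] * K for _ in range(N)]
    for j in range(M):
        for i in range(N):
            p_XZ[i][assign[j]] += p_XY[i][j]
    p_Z = [sum(p_XZ[i][k] for i in range(N)) for k in range(K)]
    return p_XZ, p_Z

# Merge Y_2 and Y_3 into one output cell: Z_1 = {Y_1}, Z_2 = {Y_2, Y_3}.
p_XZ, p_Z = quantize(p_XY, assign=[0, 1, 1], K=2)
print(p_Z)  # output marginal, approximately [0.35, 0.65]
```

Any impurity or constraint in this paper is then evaluated directly on the columns of `p_XZ` and on `p_Z`.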
Our goal is to find an optimal quantizer (partition) $Q^*$ such that the impurity $F(X,Z)$ between the original data $X$ and the partitioned output $Z$ is minimized while the partitioned output probability distribution $p_Z=[v_1,v_2,\dots,v_K]$ satisfies a constraint $C(p_Z) \leq D$. \subsection{Impurity measurement} The impurity $F(X,Z)$ between $X$ and $Z$ is defined by adding up the impurities of the output subsets $Z_i \in Z$, i.e., $F(X,Z) = \sum_{i=1}^{K} F(X,Z_i)$, where \begin{eqnarray} F(X,Z_i) &\!=\!& p_{Z_i} f[p_{X|Z_i}] \nonumber\\ &\!=\!& v_i f[p_{X_1|Z_i},\! p_{X_2|Z_i},\! \dots,\! p_{X_N|Z_i}] \label{eq: definition of impurity} \end{eqnarray} is the impurity in $Z_i$ and $p_{X|Z_i}=[p_{X_1|Z_i},\! p_{X_2|Z_i},\! \dots,\! p_{X_N|Z_i}]$ denotes the conditional distribution of $X$ given $Z_i$. The loss function $f(\cdot)$ is a concave function defined as follows. \begin{definition} \label{def: 1} A concave loss function $f(\cdot)$ is a real function on $\mathbf{R}^N$ such that: (i) For all probability vectors $a=[a_1,a_2,\dots,a_N]$ and $b=[b_1,b_2,\dots,b_N]$, \begin{equation} \label{eq: concave function} f(\lambda a + (1-\lambda)b) \geq \lambda f(a) + (1-\lambda)f(b), \forall \lambda \in (0,1), \end{equation} with equality if and only if $a=b$. (ii) $f(a)=0$ if $a_i=1$ for some $i$. \end{definition} We note that the above definition of the impurity function was proposed in \cite{chou1991optimal}, \cite{coppersmith1999partitioning}, \cite{burshtein1992minimum}. Many impurity functions of interest, such as the entropy and the Gini index \cite{chou1991optimal}, \cite{coppersmith1999partitioning}, \cite{burshtein1992minimum}, satisfy Definition \ref{def: 1}. \\ \textbf{Reformulation of the impurity function:} We will show that the impurity function $F(X,Z_i)$ can be rewritten as a function of only the joint distribution variables $p_{(X,Z_i)}=[p_{(X_1,Z_i)}, p_{(X_2,Z_i)}, \dots, p_{(X_N,Z_i)}]$.
Therefore, one can denote $F(X,Z_i)$ by $F(p_{(X,Z_i)})$. Indeed, define $$p_{(X_j,Z_i)}=\sum_{Y_k \in Y}^{}p_{(X_j,Y_k)} p_{Z_i|Y_k}.$$ Now, the impurity function can be rewritten as: \begin{eqnarray} F(X,Z_i) &\!=\!& \sum_{j\!=\!1}^{N}p_{(X_j\!,\!Z_i)}f[\dfrac{p_{(X_1\!,\!Z_i)}}{\sum_{j\!=\!1}^{N}p_{(X_j\!,\!Z_i)}}, \dots, \dfrac{p_{(X_N\!,\!Z_i)}}{\sum_{j\!=\!1}^{N}p_{(X_j\!,\!Z_i)}}] \nonumber\\ \label{eq: new formulation} \end{eqnarray} where $\sum_{j=1}^{N}p_{(X_j,Z_i)}=v_i$ is the weight of $Z_i$ and $\dfrac{p_{(X_k,Z_i)}}{\sum_{j=1}^{N}p_{(X_j,Z_i)}}$ is the conditional probability $p_{(X_k|Z_i)}$. \textit{The impurity function $F(X,Z_i)$ is therefore a function of the variables $p_{(X_j,Z_i)}$. In the rest of this paper, we will denote $F(X,Z_i)$ by $F(p_{(X,Z_i)})$ and $F(X,Z)$ by $F(p_{(X,Z)})$.} \subsection{Partitioned output constraint} We now require that, while the impurity is minimized, the partitioned output distribution $p_Z=[v_1,v_2,\dots,v_K]$ satisfies a constraint \begin{equation*} C(p_Z)=g(v_1) + g(v_2) + \dots + g(v_K) \leq D, \end{equation*} where $g(\cdot)$ is a concave function. For example: \begin{itemize} \item{} Entropy function: \begin{equation*} H[p_Z]=H[v_1,v_2,\dots,v_K]=-\sum_{i=1}^{K}v_i \log(v_i). \end{equation*} For example, if we want to compress the data $Y$ to $Z$ and then transmit $Z$ as an intermediate representation of $Y$ over a low-bandwidth channel to the next destination, the entropy of $p_Z$, which controls the maximum compression rate, is important: the lower $H(p_Z)$ is, the smaller the required channel capacity \cite{strouse2017deterministic}. \item{} Linear function: Similar to the previous example, to transmit $Z$ over a channel, the values in the subsets $Z_1,Z_2,\dots,Z_K$ are coded into pulses, e.g., $Z_1 \rightarrow 0$, $Z_2 \rightarrow 1$, $Z_3 \rightarrow 2$, which have different transmission costs, e.g., power consumption or time delay.
The cost of transmission is then \begin{equation*} T[p_Z]=T[v_1,v_2,\dots,v_K]=\sum_{i=1}^{K}t_iv_i, \end{equation*} where $t_i$ is a constant related to the power consumption or time delay. An example of such a transmission cost can be found in \cite{verdu1990channel}. \end{itemize} \subsection{Problem Formulation} Our problem can now be formulated as finding an optimal quantizer $Q^*$ such that the impurity function $F(X,Z)$ is minimized while the partitioned output probability distribution $p_Z$ satisfies a constraint $C(p_Z) \leq D$. Since both $F(X,Z)$ and $C(p_Z)$ depend on the quantizer design, we are interested in solving the following optimization problem \begin{equation} \label{eq: main problem} Q^*=\arg\min_{Q}[\beta F(X,Z) + C(p_Z) ], \end{equation} where $\beta$ is a pre-specified parameter that controls the trade-off between minimizing $F(X,Z)$ and minimizing $C(p_Z)$. \textbf{Relation to the Deterministic Information Bottleneck (DIB) method:} We also note that our optimization problem in (\ref{eq: main problem}) covers the Deterministic Information Bottleneck problem \cite{strouse2017deterministic}, which solves \begin{equation} \label{eq:determinisitc information bottleneck problem} Q^*=\arg\min_{Q}[H(Z)- \beta I(X;Z) ], \end{equation} where $H(Z)$ is the entropy of the output $Z$ and $I(X;Z)$ is the mutual information between the original data $X$ and the quantized output $Z$. Minimizing $H(Z)$ is equivalent to minimizing $C(p_Z)$. Moreover, \begin{equation*} I(X;Z)=H(X)-H(X|Z). \end{equation*} Thus, minimizing $-I(X;Z)$ is equivalent to minimizing $H(X|Z)$ since $p_X$ is given. Hence, the Deterministic Information Bottleneck \cite{strouse2017deterministic} is a special case of our problem in which both $f(.)$ and $g(.)$ are entropy functions. \section{Solution approach} \subsection{Optimality condition} We begin with some properties of the impurity function.
For convenience, we recall that $F(p_{(X,Z_i)})$ denotes the impurity function in output subset $Z_i$ and $p_{X|Z_i}=[\dfrac{p_{(X_1\!,\!Z_i)}}{\sum_{j=1}^{N}p_{(X_j,Z_i)}}, \dots, \dfrac{p_{(X_N,Z_i)}}{\sum_{j=1}^{N}p_{(X_j,Z_i)}}]$. \begin{proposition} \label{prop: 2} The impurity function $F(p_{(X,Z_i)})$ in partitioned output $Z_i$ has the following properties: (i) \textbf{proportional scaling with its weight:} if $p_{(X,Z_i)}=\lambda p_{(X,Z_j)}$, then \begin{equation} \dfrac{F(p_{(X,Z_i)})}{F(p_{(X,Z_j)})}=\lambda. \end{equation} (ii) \textbf{the impurity gain after a split is always non-negative:} if $p_{(X,Z_i)}=p_{(X,Z_j)}+p_{(X,Z_k)}$, then \begin{equation} \label{eq: concave of partition} F(p_{(X,Z_i)}) \geq F(p_{(X,Z_j)}) + F(p_{(X,Z_k)}). \end{equation} \end{proposition} \begin{proof} (i) From $p_{(X,Z_i)}=\lambda p_{(X,Z_j)}$, we have $p_{X|Z_i}=p_{X|Z_j}$ and $p_{Z_i}=\lambda p_{Z_j}$. Thus, using the definition of $F(p_{(X,Z_i)})$ in (\ref{eq: definition of impurity}), the first property follows immediately. (ii) Dividing both sides of $p_{(X,Z_i)}=p_{(X,Z_j)}+p_{(X,Z_k)}$ by $p_{Z_i}$, we have \begin{equation} \label{eq: 9} p_{X|Z_i}= \dfrac{p_{Z_j}}{p_{Z_i}}p_{X|Z_j}+ \dfrac{p_{Z_k}}{p_{Z_i}}p_{X|Z_k}. \end{equation} Now, using the definition of $F(X,Z_i)$ in (\ref{eq: definition of impurity}), \begin{eqnarray} F(\!p_{(X,Z_i)}\!) &\!=\!& p_{Z_i}f(p_{X|Z_i}) \nonumber\\ &\!=\!& p_{Z_i} f [\dfrac{p_{Z_j}}{p_{Z_i}}p_{X|Z_j} \!+\! \dfrac{p_{Z_k}}{p_{Z_i}}p_{X|Z_k}] \label{eq: 10}\\ &\!\geq \!& p_{\!Z_i\!} [\dfrac{p_{Z_j}}{p_{Z_i}} f(\!p_{X|Z_j}\!) \!+\! \dfrac{p_{Z_k}}{p_{Z_i}}f(\!p_{X|Z_k}\!)] \label{eq: 11}\\ &\!=\!&p_{Z_j}f(p_{X|Z_j}) \!+\!
p_{Z_k}f(p_{X|Z_k}) \nonumber\\ &\!=\!& F(p_{(X,Z_j)}) + F(p_{(X,Z_k)}) \nonumber \end{eqnarray} where (\ref{eq: 10}) follows from (\ref{eq: 9}) and (\ref{eq: 11}) from the concavity of $f(.)$ defined in (\ref{eq: concave function}), with $\lambda= \dfrac{p_{Z_j}}{p_{Z_i}}$, $1-\lambda=\dfrac{p_{Z_k}}{p_{Z_i}}$. \end{proof} We are now ready to prove the main result, which characterizes the condition for an optimal partition $Q^*$. \begin{theorem} \label{theorem: 1} Suppose that an optimal partition $Q^*$ yields the optimal partitioned output $Z=\{Z_1,Z_2,\dots,Z_K \}$. For each optimal subset $Z_l$, $l \in \{1,2,\dots,K\}$, we define the vector $c_l=[c_l^1,c_l^2,\dots,c_l^N]$ where \begin{equation} \label{eq: 16} c_l^i= \frac{\partial F(p_{(X,Z_l)})}{\partial p_{(X_i,Z_l)}}, \forall i \in \{1,2,\dots,N\}. \end{equation} We also define \begin{equation} \label{eq: 17} d_l=\frac{\partial g(v_l)}{\partial v_l}. \end{equation} Define the ``distance'' from $Y_i \in Y$ to $Z_l$ as \begin{eqnarray} D(Y_i,Z_l) &=&\beta \sum_{q=1}^{N}[p_{(X_q,Y_i)} c_l^q] + d_l q_i \nonumber\\ &=& q_i (\beta \sum_{q=1}^{N}[p_{X_q|Y_i} c_l^q] + d_l) \label{eq: optimality condition}. \end{eqnarray} Then, data $Y_i$ with probability $q_i$ is quantized to $Z_l$ if and only if $D(Y_i,Z_l) \leq D(Y_i,Z_s)$ for all $s \in \{1,2,\dots,K\}$, $ s \neq l$. \end{theorem} \begin{proof} \begin{figure} \centering \includegraphics[width=2.7 in]{soft_partition.eps}\\ \caption{``Soft'' partition of $Y_i$ between $Z_l$ and $Z_s$ obtained by moving an amount $tbp_{(X,Y_i)}$.}\label{fig: 2} \end{figure} Consider two arbitrary partitioned outputs $Z_l$ and $Z_s$ and a trial data point $Y_i$. For a given optimal quantizer $Q^*$, suppose that $Y_i$ is allocated to $Z_l$ with probability $p_{Z_l|Y_i}=b$, $0 < b \leq 1$. Recall that $p_{(X,Y_i)}=[p_{(X_1,Y_i)},p_{(X_2,Y_i)}, \dots, p_{(X_N,Y_i)}]$ denotes the joint distribution of sample $Y_i$.
We consider the change of the impurity function $F(p_{(X,Z)})$ and the constraint $C(p_Z)$ as functions of $t$ when an amount $tbp_{(X,Y_i)}$ is moved from $p_{(X,Z_l)}$ to $p_{(X,Z_s)}$, where $t$ is a scalar with $0 \leq t \leq 1$. \begin{small} \begin{eqnarray} F(p_{(X\!,\!Z)})(t) &\!=\!& \sum_{q=1, q \neq l,s}^{K}F(p_{(X,Z_q)}) \nonumber \\ &\!+\!& F(p_{(X,Z_s)} \!+\! tbp_{(X,Y_i)}) \!+\! F(p_{(X,Z_l)} \!-\! tbp_{(X,Y_i)}),\nonumber \\ \label{eq: 18} \end{eqnarray} \begin{eqnarray} C(p_Z)(t) &=& \sum_{q=1,q \neq l,s}^{K}g(p_{Z_q}) + g(p_{Z_l}-tbq_i) + g(p_{Z_s} + tbq_i), \nonumber \\ \label{eq: 19} \end{eqnarray} \end{small} where $p_{(X,Z_s)} + tbp_{(X,Y_i)}$ and $p_{(X,Z_l)} - tbp_{(X,Y_i)}$ denote the new joint distributions in $Z_s$ and $Z_l$ after moving the amount $tbp_{(X,Y_i)}$ from $Z_l$ to $Z_s$. Fig. \ref{fig: 2} illustrates this setting. From (\ref{eq: 18}) and (\ref{eq: 19}), the part of $\beta F(p_{(X,Z)}) + C(p_Z)$ that varies with $t$ is \begin{eqnarray} \label{eq: I(t)} I(t) &=& \beta [F(p_{(X,Z_s)} \!+\! tbp_{(X,Y_i)}) \!+\! F(p_{(X,Z_l)} \!-\! tbp_{(X,Y_i)})] \nonumber\\ &+& g(p_{Z_s} + tbq_i) + g(p_{Z_l}-tbq_i). \end{eqnarray} Now, \begin{small} \begin{equation} \label{eq: derivative 1} \frac{\partial F(p_{(X,Z_l)} \!-\! tbp_{(X,Y_i)})}{\partial t}|_{t\!=\!0} \!=\! \!-\! b\sum_{q=1}^{N}[c_l^q p_{(X_q,Y_i)}] \!=\!-\! q_i b \sum_{q=1}^{N}[c_l^q p_{X_q|Y_i}]. \end{equation} \begin{equation} \label{eq: derivative 2} \frac{\partial F(p_{(X,Z_s)} \!+\! tbp_{(X,Y_i)})}{\partial t}|_{t\!=\!0}=b\sum_{q=1}^{N}[c_s^q p_{(X_q,Y_i)}]=q_i b \sum_{q=1}^{N}[c_s^q p_{X_q|Y_i}]. \end{equation} \end{small} Similarly, \begin{equation} \label{eq: derivative 3} \frac{\partial g(p_{Z_s} + tbq_i) + g(p_{Z_l}-tbq_i) }{\partial t}|_{t=0}=b(d_sq_i-d_lq_i).
\end{equation} From (\ref{eq: 16}), (\ref{eq: 17}), (\ref{eq: derivative 1}), (\ref{eq: derivative 2}) and (\ref{eq: derivative 3}), we have \begin{eqnarray} \frac{\partial I(t)}{\partial t}|_{t=0} &=& b \beta \sum_{q=1}^{N}[c_s^q p_{(X_q,Y_i)}] + b d_sq_i \nonumber \\ &-& b \beta \sum_{q=1}^{N}[c_l^q p_{(X_q,Y_i)}] - b d_lq_i \nonumber \\ &=& b q_i [\beta \sum_{q=1}^{N}c_s^q p_{X_q|Y_i}+d_s] \nonumber \\ &-& b q_i [\beta \sum_{q=1}^{N}c_l^q p_{X_q|Y_i} +d_l] \nonumber\\ &=&b (D(Y_i,Z_s)-D(Y_i,Z_l)). \nonumber \end{eqnarray} We argue by contradiction: suppose that $D(Y_i,Z_l) > D(Y_i,Z_s)$. Then, \begin{equation} \label{eq: 20} \frac{\partial I(t)}{\partial t} |_{t=0} < 0. \end{equation} \begin{proposition} \label{prop: 1} Consider $I(t)$ defined in (\ref{eq: I(t)}). For $0 < t < a < 1$, we have: \begin{equation} \label{eq: gradient of I(t)} \dfrac{I(t)-I(0)}{t} \geq \dfrac{I(a)-I(0)}{a}. \end{equation} \end{proposition} \begin{proof} From Proposition \ref{prop: 2}, \begin{footnotesize} \begin{eqnarray} F(p_{(X,Z_s)} \!+\! tbp_{(X,Y_i)}) &\!\geq\!& F((1\!-\!\dfrac{t}{a}) p_{(X,Z_s)}) \!+\! F(\dfrac{t}{a}(p_{(X,Z_s)} \!+\! abp_{(X,Y_i)})) \nonumber\\ &\!=\!& (1\!-\!\dfrac{t}{a}) F(p_{(X,Z_s)}) \!+\! \dfrac{t}{a}F(p_{(X,Z_s)} \!+\! abp_{(X,Y_i)}).\nonumber \\\label{eq: 21} \end{eqnarray} \begin{eqnarray} F(p_{(X,Z_l)} \!-\! tbp_{(X,Y_i)}) &\!\geq\!& F((1-\dfrac{t}{a})p_{(X,Z_l)})+F(\dfrac{t}{a}(p_{(X,Z_l)} \!-\! abp_{(X,Y_i)})) \nonumber\\ &\!=\!& (1\!-\!\dfrac{t}{a}) F(p_{(X,Z_l)}) \!+\! \dfrac{t}{a}F(p_{(X,Z_l)} \!-\! abp_{(X,Y_i)}),\nonumber \\ \label{eq: 22} \end{eqnarray} \end{footnotesize} where the inequalities follow from (ii) and the equalities from (i) of Proposition \ref{prop: 2}, respectively. Similarly, since $g(.)$ is a concave function, \begin{eqnarray} g(p_{Z_s}\!+\!tbq_i) &\!=\!& g((1-\dfrac{t}{a})p_{Z_s} \!+\! \dfrac{t}{a} (p_{Z_s}+abq_i) ) \nonumber\\ &\!\geq\! & (1-\dfrac{t}{a})g(p_{Z_s}) \!+\!
\dfrac{t}{a} g(p_{Z_s}+abq_i), \nonumber \\ \label{eq: 24} \end{eqnarray} \begin{eqnarray} g(p_{Z_l}\!-\!tbq_i) &\!=\!& g((1-\dfrac{t}{a})p_{Z_l} \!+\! \dfrac{t}{a} (p_{Z_l}-abq_i) ) \nonumber\\ &\!\geq\!& (1-\dfrac{t}{a})g(p_{Z_l}) \!+\! \dfrac{t}{a} g(p_{Z_l}-abq_i). \nonumber \\ \label{eq: 25} \end{eqnarray} Thus, adding (\ref{eq: 21}), (\ref{eq: 22}), (\ref{eq: 24}), and (\ref{eq: 25}) and rearranging, one can show that \begin{equation} I(t) \geq (1-\dfrac{t}{a})I(0) + \dfrac{t}{a}I(a), \end{equation} which is equivalent to (\ref{eq: gradient of I(t)}). \end{proof} We now return to the proof of Theorem \ref{theorem: 1}. From Proposition \ref{prop: 1} and the assumption in (\ref{eq: 20}), we have: \begin{equation*} 0 > \frac{\partial I(t)}{\partial t} |_{t=0}=\lim_{t \to 0^{+}} \dfrac{I(t)-I(0)}{t} \geq \dfrac{I(1)-I(0)}{1}. \end{equation*} Thus, $I(0)>I(1)$: by completely moving $bp_{(X,Y_i)}$ from $Z_l$ to $Z_s$, the objective is strictly reduced. This contradicts the assumption that the quantizer $Q^*$ is optimal, which completes the proof. \end{proof} \begin{lemma} \label{lemma: 1} The optimal solution to the problem (\ref{eq: main problem}) is a deterministic quantizer (hard clustering), i.e., $p_{Z_i|Y_j} \in \{0,1\}$ for all $i,j$. \end{lemma} \begin{proof} Lemma \ref{lemma: 1} follows directly from the proof of Theorem \ref{theorem: 1}. Since the distance $D(Y_i,Z_l)$ does not depend on the soft assignment $p_{Z_l|Y_i}=b$, one should allocate $Y_i$ entirely to the subset $Z_l$ that minimizes $D(Y_i,Z_l)$. \end{proof} \subsection{Practical Algorithm} Theorem \ref{theorem: 1} gives an optimality condition: the ``distance'' from a data point $Y_i$ to its optimal subset $Z_l$ must be the shortest. Therefore, a simple algorithm, similar to the k-means algorithm, can be applied to find a locally optimal solution. Our algorithm is given in Algorithm \ref{alg: 1}.
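When both $f(.)$ and $g(.)$ are entropy functions (with natural logarithm), the derivatives in (\ref{eq: 16}) and (\ref{eq: 17}) admit simple closed forms, $c_l^i=-\log p_{X_i|Z_l}$ and $d_l=-\log v_l - 1$. The following minimal Python sketch (illustrative only; the joint distribution below is a hypothetical example, not data from this paper) checks the first identity against a finite-difference derivative:

```python
import math

# Hypothetical joint distribution p_(X, Z_l) over N = 3 input symbols.
p = [0.10, 0.25, 0.05]

def impurity(p):
    """Entropy impurity: F(p) = -sum_j p_j * log(p_j / v), where v = sum_j p_j."""
    v = sum(p)
    return -sum(pj * math.log(pj / v) for pj in p)

v = sum(p)
analytic = [-math.log(pj / v) for pj in p]   # claimed c_l^i = -log p_{X_i|Z_l}

# Forward finite differences of F with respect to each p_(X_i, Z_l).
eps = 1e-7
numeric = []
for i in range(len(p)):
    bumped = list(p)
    bumped[i] += eps
    numeric.append((impurity(bumped) - impurity(p)) / eps)

gap = max(abs(a - n) for a, n in zip(analytic, numeric))
print("max |analytic - numeric| =", gap)
```

With this closed form, the impurity part of the distance (\ref{eq: optimality condition}) is, up to the factor $\beta$, the cross-entropy between the posteriors $p_{X|Y_i}$ and $p_{X|Z_l}$.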
We also note that the distance from $Y_i$ to $Z_l$ is given by \begin{eqnarray} D(Y_i,Z_l) &=&\beta \sum_{q=1}^{N}[p_{(X_q,Y_i)} c_l^q] + d_l q_i \nonumber\\ &=& q_i [\beta \sum_{q=1}^{N}[p_{X_q|Y_i} c_l^q] + d_l]. \end{eqnarray} Therefore, one can ignore the constant factor $q_i$ when comparing the distances $D(Y_i,Z_l)$ and $D(Y_i,Z_s)$. \begin{footnotesize} \begin{algorithm} \caption{Finding the optimal partition under a partitioned output constraint} \label{alg: 1} \begin{algorithmic}[1] \State{\textbf{Input}: $p_X$, $p_Y$, $p_{(X,Y)}$, $f(.)$, $g(.)$ and $\beta$.} \State{\textbf{Output}: $Z=\{Z_1,Z_2,\dots,Z_K\}$ } \State{\textbf{Initialization}: Randomly hard-cluster $Y$ into $K$ clusters. } \State{\textbf{Step 1}: Update $p_{(X,Z_l)}$, $c_l$, $v_l$, and $d_l$ for each output subset $Z_l$, $l \in \{1,2,\dots,K\}$:} \begin{equation*} p_{(X,Z_l)}= \sum_{Y_q \in Z_l}^{}p_{(X,Y_q)}, \end{equation*} \begin{equation*} c_l^i= \frac{\partial F(p_{(X,Z_l)})}{\partial p_{(X_i,Z_l)}}, \forall i \in \{1,2,\dots,N\}, \end{equation*} \begin{equation*} v_l =\sum_{Y_q \in Z_l}p_{Y_q}, \end{equation*} \begin{equation*} d_l= \frac{\partial g(v_l)}{\partial v_l}. \end{equation*} \State{\textbf{Step 2}: Update the membership by measuring the distance from each $Y_i \in Y$ to each subset $Z_l \in Z$:} \begin{equation} \label{eq: nearest clustering} Z_l=\{Y_i \mid D(Y_i,Z_l) \leq D(Y_i,Z_s), \forall s \neq l\}. \end{equation} \State{\textbf{Step 3}: Go to Step 1 until the partitioned output $\{Z_1,Z_2,\dots,Z_K\}$ stops changing or the maximum number of iterations has been reached.} \end{algorithmic} \end{algorithm} \end{footnotesize} Algorithm \ref{alg: 1} works similarly to the k-means algorithm: in each iteration, the distance from each point in $Y$ to each subset in $Z$ is recomputed. The complexity of this algorithm is therefore $O(TNKM)$, where $T$ is the number of iterations and $N$, $K$, $M$ are the data dimension, the number of output subsets, and the data size, respectively.
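A compact Python sketch of Algorithm \ref{alg: 1} for the entropy case (both $f(.)$ and $g(.)$ entropy functions with natural logarithm) follows. The function and variable names are ours, and the small floor constant that protects against $\log 0$ in empty subsets is an implementation assumption, not part of the algorithm statement:

```python
import math
import random

def impurity_partition(p_xy, K, beta, iters=50, seed=0):
    """Sketch of Algorithm 1 with entropy f and g.

    p_xy[j][i] is the joint probability p(X_i, Y_j); returns a hard
    assignment of each Y_j to one of K output subsets."""
    random.seed(seed)
    M, N = len(p_xy), len(p_xy[0])
    assign = [random.randrange(K) for _ in range(M)]      # random hard init
    for _ in range(iters):
        # Step 1: per-subset joints, weights, and closed-form derivatives.
        joint = [[1e-12] * N for _ in range(K)]           # floor avoids log(0)
        for j, l in enumerate(assign):
            for i in range(N):
                joint[l][i] += p_xy[j][i]
        v = [sum(row) for row in joint]
        c = [[-math.log(joint[l][i] / v[l]) for i in range(N)] for l in range(K)]
        d = [-math.log(v[l]) - 1.0 for l in range(K)]     # g(v) = -v log v
        # Step 2: reassign each Y_j to its nearest subset (q_j factored out).
        new = []
        for j in range(M):
            q = sum(p_xy[j])                              # q_j = p(Y_j)
            dist = [beta * sum(p_xy[j][i] / q * c[l][i] for i in range(N)) + d[l]
                    for l in range(K)]
            new.append(min(range(K), key=dist.__getitem__))
        if new == assign:                                 # Step 3: convergence
            break
        assign = new
    return assign
```

Since ties are broken identically, samples with identical rows of $p_{(X,Y)}$ always receive the same label; as with k-means, restarts from different random initializations are needed to escape poor local optima.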
\subsection{Hyperplane separation} \label{subsection: 3-C} \begin{figure} \centering \includegraphics[width=2 in]{fig_3_hyperplane.eps}\\ \caption{For $N=3$, $M=5$ and $K=3$, the optimal quantizer is equivalent to hyper-plane cuts in the two-dimensional probability space.}\label{fig: 4} \end{figure} Similar to the work in \cite{burshtein1992minimum}, we show that the optimal partitions correspond to regions separated by hyper-plane cuts in the probability space of the posterior distribution. Consider an optimal quantizer $Q^*$ that produces the partitioned output sets $Z=\{Z_1,Z_2,\dots,Z_K\}$ and the conditional probabilities $p_{X|Z_l}=\{ p_{X_1|Z_l}, \dots, p_{X_N|Z_l}\}$ for all $l=1,2,\dots,K$. From the optimality condition in Theorem \ref{theorem: 1}, we know that for all $Y_i \in Z_l$, \begin{equation*} \beta \sum_{q=1}^{N}[p_{X_q|Y_i} c_l^q] + d_l \leq \beta \sum_{q=1}^{N}[p_{X_q|Y_i} c_s^q] + d_s. \end{equation*} Thus, $ 0 \geq \beta \sum_{q=1}^{N}p_{X_q|Y_i}[c_l^q-c_s^q] + d_l-d_s.$ Using $p_{X_N|Y_i}=1-\sum_{q=1}^{N-1} p_{X_q|Y_i}$, we have \begin{small} \begin{eqnarray} d_s\!-\!d_l\!+\!\beta(c_s^N\!-\!c_l^N) & \! \geq \! & \sum_{q=1}^{N-1} \beta p_{X_q|Y_i}[c_l^q \!-\! c_s^q \!-\! c_l^N+c_s^N].\nonumber \\ \label{eq: hyperplane} \end{eqnarray} \end{small} For a given optimal quantizer $Q^*$, $c_l^q$, $c_s^q$, $d_l$, $d_s$ are scalars, $0 \leq p_{X_q|Y_i} \leq 1$, and $\sum_{q=1}^{N}p_{X_q|Y_i}=1$. From (\ref{eq: hyperplane}), each $Y_i \in Z_l$ belongs to a region separated by a hyper-plane cut in the probability space of the posterior distribution $p_{X|Y_i}$. Similar to the result in \cite{burshtein1992minimum}, there exists a polynomial-time algorithm, with time complexity $O(M^{N})$, that can determine the globally optimal solution of the problem in (\ref{eq: main problem}). Fig. \ref{fig: 4} illustrates the hyper-plane cuts in the two-dimensional probability space for $N=3$, $M=5$ and $K=3$.
\subsection{Application} \label{sec: application} As discussed above, the Deterministic Information Bottleneck \cite{strouse2017deterministic} is a special case of our problem in which both the impurity function and the output constraint are entropy functions. We refer the reader to \cite{strouse2017deterministic} for more details on applications. In this paper, we provide a simple example that uses the results in Sec. \ref{subsection: 3-C} to find the globally optimal quantizer for a binary-input communication channel. Fig. \ref{fig: 5} illustrates this application. The output $Z$ is quantized from the data $Y$. Next, $Z$ is mapped to $W$ by a mapping function $W=f(Z)$. Then, $W$ is the input of a limited-rate channel $C$. Our goal is to design a quantizer that retains as much of the mutual information $I(X;Z)$ as possible while the rate of the output $Z$ stays below the channel rate limit. We also note that a similar constraint, e.g., transmission cost or time delay, can be substituted to formulate other interesting problems. \begin{figure} \centering \includegraphics[width=3.5 in]{limited_channel.eps}\\ \caption{Partition with output constraints: an example with a relay channel having a limited capacity.}\label{fig: 5} \end{figure} \textbf{Example 1:} To illustrate how Algorithm \ref{alg: 1} works, we provide the following example. Consider a communication system which transmits an input $X=\{X_1=-1,X_2=1\}$ with $p_{X_1}=0.2$, $p_{X_2}=0.8$ over an additive noise channel with i.i.d.\ noise distribution $N(\mu=0,\sigma=1)$. The output $Y$ is a continuous signal obtained by adding the noise $N$ to the input $X$: \vspace{-0.05 in} $$Y = X + N.$$ Due to the additive structure, the conditional distribution of the output $Y$ given input $X_1$ is $p_{Y|X_1=-1}=N(-1,1)$, while the conditional distribution of $Y$ given input $X_2$ is $p_{Y|X_2=1}=N(1,1)$. Since the additive noise is continuous, $Y$ takes values in a continuous domain.
The continuous output $Y$ is then quantized to a binary output $Z=\{Z_1=-1,Z_2=1\}$ using a quantizer $Q$. The quantized output $Z$ is transmitted over a limited-rate channel $C$ with maximum rate $R=0.5$. We have to find an optimal quantizer $Q^*$ such that the mutual information $I(X;Z)$ is maximized while $H(Z) \leq R$. We first discretize $Y$ into $M=200$ intervals over $[-10,10]$ with equal width $\epsilon=0.1$. Thus, $Y=\{Y_1,Y_2,\dots,Y_{200}\}$, and the joint distribution $p_{(X_i,Y_j)}$, $i=1,2$, $j=1,2,\dots,200$, can be determined from the two given conditional distributions $p_{Y|X_1=-1}=N(-1,1)$ and $p_{Y|X_2=1}=N(1,1)$. Next, to find the optimal quantizer $Q^*$, we scan over the possible values of $\beta \geq 0$. For each value of $\beta$, we run Algorithm \ref{alg: 1} many times to find the globally optimal quantizer. Finally, the largest mutual information $I(X;Z)^*$ is $0.18623$, which corresponds to $H(Z)=0.48873$ at $\beta^*=6$. \textbf{Using hyper-plane separation to find the globally optimal solution:} By the result in Sec. \ref{subsection: 3-C}, any optimal quantizer (local or global) is equivalent to a hyper-plane cut in probability space. Since $|X|=N=2$, the hyper-plane reduces to a scalar threshold on the posterior distribution $p_{X_2|Y}$. Noting that $p_{X_2|Y}$ is a strictly increasing function of $Y$ over $[-10,10]$, an exhaustive search over thresholds $y \in [-10,10]$ can be applied to find the optimal quantizer. Fig. \ref{fig: 6} plots $I(X;Z)$ and $H(Z)$ as functions of the threshold $y \in [-10,10]$ with resolution $\epsilon=0.1$. For $\beta=6$, the optimal mutual information $I(X;Z)^*=0.18623$ with $H(Z)=0.48873$ is achieved at $y=-1.1$. This confirms that the solution found by Algorithm \ref{alg: 1} in Example 1 is globally optimal.
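The exhaustive threshold search can be sketched in a few lines of Python (a simplified reconstruction of Example 1 with the stated parameters; logarithms are base 2, and the discretization of the Gaussian densities is an approximation, so the optimum found may differ slightly in the last digits from the values reported above):

```python
import math

def normal_pdf(y, mu):
    """Unit-variance Gaussian density, as in the two channel likelihoods."""
    return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2.0 * math.pi)

px = [0.2, 0.8]                            # p(X1 = -1), p(X2 = +1)
mus = [-1.0, 1.0]
eps = 0.1
M = 200
ys = [-10.0 + eps * j for j in range(M)]   # discretized Y over [-10, 10]
# joint[j][i] approximates p(X_i, Y_j) by density times bin width
joint = [[px[i] * normal_pdf(y, mus[i]) * eps for i in range(2)] for y in ys]

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

best = None
for k in range(1, M):                      # threshold between bins k-1 and k
    pz1 = [sum(joint[j][i] for j in range(k)) for i in range(2)]     # p(X, Z1)
    pz2 = [sum(joint[j][i] for j in range(k, M)) for i in range(2)]  # p(X, Z2)
    v = [sum(pz1), sum(pz2)]
    hz = entropy(v)                        # H(Z)
    hx_given_z = sum(vl * entropy([p / vl for p in pz])
                     for vl, pz in zip(v, (pz1, pz2)))
    mi = entropy(px) - hx_given_z          # I(X;Z) = H(X) - H(X|Z)
    if hz <= 0.5 and (best is None or mi > best[0]):
        best = (mi, hz, ys[k - 1])
print(best)                                # (best I(X;Z), its H(Z), threshold y)
```

Because the prior is skewed toward $X_2=1$, the constrained optimum places the threshold on the negative side, consistent with the reported $y=-1.1$.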
\begin{figure} \centering \includegraphics[width=2.8 in]{mutual_information_constraint.eps}\\ \caption{Finding the maximum of $I(X;Z)$ under the constraint $H(Z) \leq 0.5$.}\label{fig: 6} \end{figure} \section{Conclusion} In this paper, we presented a new framework for minimizing the partition impurity while the probability distribution of the partitioned output satisfies a concave constraint. Based on the optimality condition, we showed that the optimal partition is a hard partition. A low-complexity algorithm was provided to find a locally optimal solution. Moreover, we showed that the optimal partitions (local or global) correspond to regions separated by hyper-plane cuts in the probability space of the posterior distribution; therefore, there exists a polynomial-time algorithm that can find the globally optimal solution. \bibliographystyle{unsrt}
\let\endalign=\endtrivlist \@namedef{align*}{\@verbatim\@salignverbatim You are using the "align*" environment in a style in which it is not defined.} \expandafter\let\csname endalign*\endcsname =\endtrivlist \def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim You are using the "alignat" environment in a style in which it is not defined.} \let\endalignat=\endtrivlist \@namedef{alignat*}{\@verbatim\@salignatverbatim You are using the "alignat*" environment in a style in which it is not defined.} \expandafter\let\csname endalignat*\endcsname =\endtrivlist \def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim You are using the "xalignat" environment in a style in which it is not defined.} \let\endxalignat=\endtrivlist \@namedef{xalignat*}{\@verbatim\@sxalignatverbatim You are using the "xalignat*" environment in a style in which it is not defined.} \expandafter\let\csname endxalignat*\endcsname =\endtrivlist \def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim You are using the "gather" environment in a style in which it is not defined.} \let\endgather=\endtrivlist \@namedef{gather*}{\@verbatim\@sgatherverbatim You are using the "gather*" environment in a style in which it is not defined.} \expandafter\let\csname endgather*\endcsname =\endtrivlist \def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim You are using the "multiline" environment in a style in which it is not defined.} \let\endmultiline=\endtrivlist \@namedef{multiline*}{\@verbatim\@smultilineverbatim You are using the "multiline*" environment in a style in which it is not defined.} \expandafter\let\csname endmultiline*\endcsname =\endtrivlist \def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim You are using a type of "array" construct that is only allowed in AmS-LaTeX.} \let\endarrax=\endtrivlist \def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim You are using a type of "tabular" construct that is only 
allowed in AmS-LaTeX.} \let\endtabulax=\endtrivlist \@namedef{arrax*}{\@verbatim\@sarraxverbatim You are using a type of "array*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endarrax*\endcsname =\endtrivlist \@namedef{tabulax*}{\@verbatim\@stabulaxverbatim You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endtabulax*\endcsname =\endtrivlist \def\endequation{% \ifmmode\ifinner \iftag@ \addtocounter{equation}{-1} $\hfil \displaywidth\linewidth\@taggnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \else $\hfil \displaywidth\linewidth\@eqnnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \fi \else \iftag@ \addtocounter{equation}{-1} \eqno \hbox{\@taggnum} \global\@ifnextchar*{\@tagstar}{\@tag}@false% $$\global\@ignoretrue \else \eqno \hbox{\@eqnnum $$\global\@ignoretrue \fi \fi\fi } \newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false \def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}} \def\@TCItag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}% \global\def\@currentlabel{#1}} \def\@TCItagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}% \global\def\@currentlabel{#1}} \@ifundefined{tag}{ \def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}} \def\@tag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@tagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}} }{} \def\tfrac#1#2{{\textstyle {#1 \over #2}}}% \def\dfrac#1#2{{\displaystyle {#1 \over #2}}}% \def\binom#1#2{{#1 \choose #2}}% \def\tbinom#1#2{{\textstyle {#1 \choose #2}}}% \def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}% \makeatother \endinput \section{Density} Difficulties presented by the numerical integration of $f_{12}(x,y)$ are evident in Figure 2. 
The surface appears to touch the $xy$-plane only when $y=0$; its prominent
ridge occurs along the line $y=(1-x)/2$ because $(1-x-y)/y=1$ corresponds to a
unique point of nondifferentiability for $\xi\mapsto\rho(\xi)$; its remaining
boundary hovers over the broken line $y=\min\{x,1-x\}$, everywhere finite
except in the vicinity of $x=0$.

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=7.3838in,width=6.5311in]{yx.eps}
\caption{Probability density of $(\Lambda_{1},\Lambda_{2})$, over $0\leq
y\leq1/2$ and $y\leq x\leq1-y$.}
\end{center}
\end{figure}

Complications are compounded for the three other densities (which are, in
themselves, approximations). Figure 3 contains a plot of
\[
f_{13}(x,z)=\int\limits_{z}^{x}f_{123}(x,y,z)\,dy.
\]
The surface appears to touch the $xz$-plane when $z=0$ and $0<x<1/2$
simultaneously, as well as everywhere along the broken line
$z=\min\{x,(1-x)/2\}$.

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=7.958in,width=5.0678in]{zx.eps}
\caption{Probability density of $(\Lambda_{1},\Lambda_{3})$, over $0\leq
z\leq1/3$ and $z\leq x\leq1-2z$.}
\end{center}
\end{figure}

Figure 4 contains a plot of
\[
f_{14}(x,w)=\int\limits_{w}^{\min\{x,1/3\}}\int\limits_{z}^{x}
f_{1234}(x,y,z,w)\,dy\,dz.
\]
The (precipitously rising) surface appears to touch the $xw$-plane only when
$w=0$ and $0<x<1/2$ simultaneously; its remaining boundary hovers over the
broken line $w=\min\{x,(1-x)/3\}$, everywhere finite except in the vicinity of
$x=0$. The vertical scale is more expansive here than for the other plots.

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=5.7242in,width=6.8277in]{wx.eps}
\caption{Probability density of $(\Lambda_{1},\Lambda_{4})$, over $0\leq
w\leq1/4$ and $w\leq x\leq1-3w$.}
\end{center}
\end{figure}

Figure 5 contains a plot of
\[
f_{23}(y,z)=\int\limits_{y}^{1}f_{123}(x,y,z)\,dx.
\]
The (fairly undulating) surface appears to touch the $yz$-plane only when
$z=1-2y$. Unlike the other densities, a singularity here occurs at
$(y,z)=(0,0)$.

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=5.8487in,width=6.7403in]{zy.eps}
\caption{Probability density of $(\Lambda_{2},\Lambda_{3})$, over $0\leq
z\leq1/3$ and $z\leq y\leq(1-z)/2$.}
\end{center}
\end{figure}

\section{Correlation}

Let
\[
\begin{array}[c]{ccc}
E(x)=\int\limits_{x}^{\infty}\dfrac{e^{-t}}{t}\,dt=-\operatorname{Ei}(-x), &
& x>0
\end{array}
\]
be the exponential integral. Upon normalization, the $h^{\text{th}}$ moment
of the $r^{\text{th}}$ longest cycle length is \cite{SL-tcs9, ABT2-tcs9,
Pin-tcs9}
\[
\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(\Lambda_{r}^{h}\right)}{n^{h}}
=\frac{1}{h!\,(r-1)!}\int\limits_{0}^{\infty}x^{h-1}E(x)^{r-1}\exp\left[
-E(x)-x\right]dx
\]
(in this paper, rank $r=1,2,3$ or $4$; height $h=1$ or $2$). The
cross-correlation between the $r^{\text{th}}$ longest and $s^{\text{th}}$
longest cycle lengths is
\begin{align*}
\kappa_{r,s} & =\frac{\mathbb{E}\left(\Lambda_{r}\Lambda_{s}\right)
-\mathbb{E}\left(\Lambda_{r}\right)\mathbb{E}\left(\Lambda_{s}\right)}
{\sqrt{\mathbb{E}\left(\Lambda_{r}^{2}\right)-\mathbb{E}\left(\Lambda
_{r}\right)^{2}}\sqrt{\mathbb{E}\left(\Lambda_{s}^{2}\right)-\mathbb{E}
\left(\Lambda_{s}\right)^{2}}}\\
& \rightarrow\left\{
\begin{array}[c]{lll}
-0.75803584... & & \text{if }r=1\text{ and }s=2,\\
-0.78421290... & & \text{if }r=1\text{ and }s=3,\\
-0.68442819...
& & \text{if }r=1\text{ and }s=4,\\
+0.35549741... & & \text{if }r=2\text{ and }s=3
\end{array}
\right.
\end{align*}
with cross-moments given by \cite{Grf-tcs9, Shi-tcs9}
\[
\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(\Lambda_{1}\Lambda_{2}\right)
}{n^{2}}=\frac{1}{2}\int\limits_{0}^{\infty}\int\limits_{0}^{x}\exp\left[
-E(y)-x-y\right]dy\,dx,
\]
\[
\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(\Lambda_{1}\Lambda_{3}\right)
}{n^{2}}=\frac{1}{2}\int\limits_{0}^{\infty}\int\limits_{0}^{x}\int
\limits_{0}^{y}\frac{1}{y}\exp\left[-E(z)-x-y-z\right]dz\,dy\,dx,
\]
\[
\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(\Lambda_{1}\Lambda_{4}\right)
}{n^{2}}=\frac{1}{2}\int\limits_{0}^{\infty}\int\limits_{0}^{x}\int
\limits_{0}^{y}\int\limits_{0}^{z}\frac{1}{y\,z}\exp\left[-E(w)-x-y-z-w
\right]dw\,dz\,dy\,dx,
\]
\[
\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(\Lambda_{2}\Lambda_{3}\right)
}{n^{2}}=\frac{1}{2}\int\limits_{0}^{\infty}\int\limits_{0}^{x}\int
\limits_{0}^{y}\frac{1}{x}\exp\left[-E(z)-x-y-z\right]dz\,dy\,dx.
\]
The fact that $\Lambda_{1}$ is negatively correlated with the other
$\Lambda_{r}$, yet $\Lambda_{2}$ is positively correlated with the other
$\Lambda_{s}$, is due to longest cycles typically occupying a giant-size
portion of permutations, but second-longest cycles less so.

\section{Distribution}

Bach \&\ Peralta \cite{BP-tcs9} discussed a remarkable heuristic model, based
on random bisection, that simplifies the computation of joint probabilities
involving $\Lambda_{1}$ and $\Lambda_{2}$.
In the same paper, they rigorously proved that asymptotic predictions
emanating from the model are valid. Subsequent researchers extended the work
to $\Lambda_{1}$ and $\Lambda_{3}$, to $\Lambda_{1}$ and $\Lambda_{4}$, and
to $\Lambda_{2}$ and $\Lambda_{3}$. We shall not enter into details of the
model nor its absolute confirmation, preferring instead to dwell on numerical
results and certain relative verifications.

\subsection{First and Second}

For $0<a\leq b\leq1$, Bach \&\ Peralta \cite{BP-tcs9} demonstrated that
\[
\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{2}}{n}\leq a\text{
\& }\frac{\Lambda_{1}}{n}\leq b\right\} =\underset{=I_{0}(a)}{\underbrace
{\rho\left(\frac{1}{a}\right)}}+\underset{=I_{1}(a,b)}{\underbrace
{\int\limits_{a}^{b}\rho\left(\frac{1-x}{a}\right)\frac{dx}{x}}}.
\]
Note the slight change from earlier -- writing $\Lambda_{2}$ before
$\Lambda_{1}$ -- a convention we adopt so as to be consistent with the
literature. Let $J_{1}(a,b)=I_{0}(a)+I_{1}(a,b)$. Return now to the example
from the introduction. Evaluating
\[
J_{1}\left(\frac{1}{3},\frac{1}{2}\right)=\rho(3)+\int\limits_{1/3}^{1/2}
\rho\left(\frac{1-x}{1/3}\right)\frac{dx}{x}
\]
is less numerically problematic than evaluating
\[
\underset{=\rho(3)}{\underbrace{\int\limits_{0}^{1/3}\int\limits_{0}^{x}
f_{12}(x,y)\,dy\,dx}}+\int\limits_{1/3}^{1/2}\int\limits_{0}^{1/3}
f_{12}(x,y)\,dy\,dx
\]
for two reasons:

\begin{itemize}
\item a double integral has been miraculously reduced to a single integral,

\item the argument of $\rho$ within the integral is $(1-x)/a$ rather than
$(1-x-y)/y$, which is unstable as $y\rightarrow0$.
\end{itemize}

\noindent The advantages of using the Bach \&\ Peralta formulation will
become more apparent as we move forward (incidentally, their $G$ is the same
as our $J_{1}$).
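The tabulated values $\rho(2)=0.30685282$ and $\rho(3)=0.04860839$ can be
reproduced from the relation $\rho^{\prime}(\xi)=-\rho(\xi-1)/\xi$ invoked in
the verification of $J_{1}$ below, together with $\rho\equiv1$ on $[0,1]$. A
minimal numerical sketch, assuming nothing beyond that relation (the grid
step and the trapezoidal stepping are our own choices):

```python
import math

def dickman_table(xi_max, h=1e-3):
    """Tabulate the Dickman function on a grid of step h, using rho = 1
    on [0,1] and rho'(xi) = -rho(xi-1)/xi for xi > 1, stepped with the
    trapezoidal rule."""
    n = int(round(xi_max / h))
    k = int(round(1.0 / h))              # grid points per unit interval
    rho = [1.0] * (n + 1)                # rho = 1 on [0, 1]
    for i in range(k + 1, n + 1):
        f_prev = rho[i - 1 - k] / ((i - 1) * h)   # rho(t-1)/t at left node
        f_curr = rho[i - k] / (i * h)             # ... and at right node
        rho[i] = rho[i - 1] - 0.5 * h * (f_prev + f_curr)
    return rho, h

table, h = dickman_table(3.0)
rho_at = lambda x: table[int(round(x / h))]

# rho(2) = 1 - ln 2 = 0.30685282... and rho(3) = 0.04860839...,
# matching the I_0 column of Table 1.
print(rho_at(2.0), rho_at(3.0))
```

Since $\rho(2)=1-\ln 2$ exactly, the first value doubles as an accuracy check
on the scheme.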
\begin{center}
\begin{tabular}[c]{|c|c||c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $1$ & $2$ & $3$ & $4$ & $5$\\\hline
$2$ & 0.30685282 & 0.69314718 & & & & \\\hline
$3$ & 0.04860839 & 0.80417093 & 0.17604345 & & & \\\hline
$4$ & 0.00491093 & 0.61877013 & 0.09148808 & 0.01974468 & & \\\hline
$5$ & 0.00035472 & 0.46286746 & 0.03043740 & 0.00578984 & 0.00149456 & \\\hline
$6$ & 0.00001965 & 0.36519810 & 0.00849154 & 0.00107262 & 0.00029307 &
0.00008552\\\hline
\end{tabular}

Table 1: $I_{0}(1/u)$ and $I_{1}(1/u,1/v)$ for $2\leq u\leq6$, $1\leq v<u$

\bigskip

\begin{tabular}[c]{|c|c|c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\\hline
$2$ & 1.00000000 & 0.30685282 & & & & \\\hline
$3$ & 0.85277932 & 0.22465184 & 0.04860839 & & & \\\hline
$4$ & 0.62368106 & 0.09639901 & 0.02465561 & 0.00491093 & & \\\hline
$5$ & 0.46322219 & 0.03079212 & 0.00614457 & 0.00184928 & 0.00035472 & \\\hline
$6$ & 0.36521775 & 0.00851119 & 0.00109227 & 0.00031272 & 0.00010517 &
0.00001965\\\hline
\end{tabular}

Table 2: $J_{1}(1/u,1/v)$ for $2\leq u\leq6$, $1\leq v\leq u$
\end{center}

A verification of $J_{1}(a,b)$ is as follows:
\[
\frac{\partial J_{1}}{\partial b}=\rho\left(\frac{1-b}{a}\right)\frac{1}{b}
\]
by the Second Fundamental Theorem of Calculus, hence
\[
\frac{\partial^{2}J_{1}}{\partial a\,\partial b}=-\rho^{\prime}\left(
\frac{1-b}{a}\right)\frac{1-b}{a^{2}}\frac{1}{b}=\frac{\rho\left(\dfrac
{1-b}{a}-1\right)}{\dfrac{1-b}{a}}\frac{1-b}{a^{2}b}=\frac{\rho\left(
\dfrac{1-a-b}{a}\right)}{a\,b}=f_{12}(b,a)
\]
as anticipated by Billingsley \cite{Bill-tcs9}. An interpretation of
$I_{1}(a,b)$ is helpful:
\[
I_{1}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{2}}{n}
\leq a\text{ \& }a<\frac{\Lambda_{1}}{n}\leq b\right\}
\]
i.e., the probability that exactly one cycle has length in the interval
$(a\,n,\,b\,n]$ and all others have length $\leq a\,n$.
We have, for instance,
\[
\begin{array}[c]{ccc}
\left.\dfrac{\partial I_{1}}{\partial a}\right\vert_{b=1}=0, & &
I_{1}(a,1)\approx0.8285
\end{array}
\]
when $a\approx0.3775\approx1/(2.649)$, the value maximizing $\mathbb{P}
\left\{\Lambda_{2}\leq a\,n<\Lambda_{1}\right\}$ as $n\rightarrow\infty$.

\subsection{First and Third}

For $0<a\leq1/2$ and $a\leq b\leq1$, Lambert \cite{Lmb-tcs9} demonstrated
that
\[
J_{2}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{3}}{n}
\leq a\text{ \& }\frac{\Lambda_{1}}{n}\leq b\right\}=J_{1}(a,b)+\underset
{=I_{2}(a,b)}{\underbrace{\int\limits_{a}^{b}\int\limits_{y}^{b}\rho\left(
\frac{1-x-y}{a}\right)\frac{dx}{x}\frac{dy}{y}}}.
\]
(Incidentally, his $G_{2}$ is the same as our $J_{2}-J_{1}=I_{2}$.)

\begin{center}
\begin{tabular}[c]{|c|c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $2$ & $3$ & $4$ & $5$\\\hline
$3$ & 0.14722068 & 0.08220098 & & & \\\hline
$4$ & 0.36143259 & 0.19556747 & 0.01998464 & & \\\hline
$5$ & 0.46463747 & 0.20709082 & 0.02278925 & 0.00201596 & \\\hline
$6$ & 0.48588944 & 0.16644726 & 0.01263312 & 0.00136571 & 0.00013356\\\hline
\end{tabular}

Table 3: $I_{2}(1/u,1/v)$ for $3\leq u\leq6$, $1\leq v<u$

\bigskip

\begin{tabular}[c]{|c|c|c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\\hline
$3$ & 1.00000000 & 0.30685282 & 0.04860839 & & & \\\hline
$4$ & 0.98511365 & 0.29196647 & 0.04464025 & 0.00491093 & & \\\hline
$5$ & 0.92785965 & 0.23788294 & 0.02893382 & 0.00386524 & 0.00035472 & \\\hline
$6$ & 0.85110720 & 0.17495845 & 0.01372538 & 0.00167843 & 0.00023872 &
0.00001965\\\hline
\end{tabular}

Table 4: $J_{2}(1/u,1/v)$ for $3\leq u\leq6$, $1\leq v\leq u$
\end{center}

A verification of $J_{2}(a,b)$ is as follows:
\begin{align*}
\frac{\partial I_{2}}{\partial b} & =\frac{1}{2}\frac{\partial}{\partial b}
\int\limits_{a}^{b}\int\limits_{a}^{b}\rho\left(\frac{1-x-y}{a}\right)
\frac{dx}{x}\frac{dy}{y}\\
& =\frac{1}{2}\int\limits_{a}^{b}\rho\left(\frac{1-b-y}{a}\right)\frac{1}
{b}\frac{dy}{y}+\frac{1}{2}\int\limits_{a}^{b}\rho\left(\frac{1-x-b}{a}
\right)\frac{1}{b}\frac{dx}{x}\\
& =\int\limits_{a}^{b}\rho\left(\frac{1-x-b}{a}\right)\frac{1}{b}\frac{dx}
{x}
\end{align*}
by symmetry; thus by Leibniz's Rule,
\begin{align*}
\frac{\partial^{2}I_{2}}{\partial a\,\partial b} & =-\int\limits_{a}^{b}
\rho^{\prime}\left(\frac{1-x-b}{a}\right)\frac{1-x-b}{a^{2}}\frac{1}{b}
\frac{dx}{x}-\rho\left(\frac{1-a-b}{a}\right)\frac{1}{a\,b}\\
& =\int\limits_{a}^{b}\frac{\rho\left(\dfrac{1-a-x-b}{a}\right)}{\dfrac
{1-x-b}{a}}\frac{1-x-b}{a^{2}x\,b}\,dx-\frac{\partial^{2}J_{1}}{\partial
a\,\partial b}
\end{align*}
hence
\[
\frac{\partial^{2}J_{2}}{\partial a\,\partial b}=\int\limits_{a}^{b}
\frac{\rho\left(\dfrac{1-a-x-b}{a}\right)}{a\,x\,b}\,dx=\int\limits_{a}^{b}
f_{123}(b,x,a)\,dx=f_{13}(b,a),
\]
as was to be shown. An interpretation of $I_{2}(a,b)$ is helpful:
\[
I_{2}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{3}}{n}
\leq a\text{ \& }a<\frac{\Lambda_{2}}{n}\leq\frac{\Lambda_{1}}{n}\leq
b\right\}
\]
i.e., the probability that exactly two cycles have length in the interval
$(a\,n,\,b\,n]$ and all others have length $\leq a\,n$.

\subsection{First and Fourth}

For $0<a\leq1/3$ and $a\leq b\leq1$, Cavallar \cite{Cvlr-tcs9} and Zhang
\cite{Zhng-tcs9} independently demonstrated that
\[
J_{3}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{4}}{n}
\leq a\text{ \& }\frac{\Lambda_{1}}{n}\leq b\right\}=J_{2}(a,b)+\underset
{=I_{3}(a,b)}{\underbrace{\int\limits_{a}^{b}\int\limits_{z}^{b}\int
\limits_{y}^{b}\rho\left(\frac{1-x-y-z}{a}\right)\frac{dx}{x}\frac{dy}{y}
\frac{dz}{z}}}.
\]
(Incidentally, Cavallar's $G_{3}$ is the same as our $J_{3}-J_{2}=I_{3}$,
while Zhang's $G_{3}$ is the same as our $J_{3}$.)

\begin{center}
\begin{tabular}[c]{|c|c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $2$ & $3$ & $4$ & $5$\\\hline
$4$ & 0.01488635 & 0.01488635 & 0.00396814 & & \\\hline
$5$ & 0.07126587 & 0.06809540 & 0.01884107 & 0.00094238 & \\\hline
$6$ & 0.14082221 & 0.12382378 & 0.02870816 & 0.00222512 & 0.00009015\\\hline
\end{tabular}

Table 5: $I_{3}(1/u,1/v)$ for $4\leq u\leq6$, $1\leq v<u$

\bigskip

\begin{tabular}[c]{|c|c|c|c|c|c|c|}\hline
$u\backslash v$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\\hline
$4$ & 1.00000000 & 0.30685282 & 0.04860839 & 0.00491093 & & \\\hline
$5$ & 0.99912552 & 0.30597834 & 0.04777489 & 0.00480762 & 0.00035472 & \\\hline
$6$ & 0.99192941 & 0.29878222 & 0.04243355 & 0.00390355 & 0.00032887 &
0.00001965\\\hline
\end{tabular}

Table 6: $J_{3}(1/u,1/v)$ for $4\leq u\leq6$, $1\leq v\leq u$
\end{center}

We omit details of the verification of $J_{3}(a,b)$, except to mention the
start point
\[
\frac{\partial I_{3}}{\partial b}=\frac{1}{6}\frac{\partial}{\partial b}
\int\limits_{a}^{b}\int\limits_{a}^{b}\int\limits_{a}^{b}\rho\left(
\frac{1-x-y-z}{a}\right)\frac{dx}{x}\frac{dy}{y}\frac{dz}{z}
\]
and the end point $\partial^{2}J_{3}/\partial a\,\partial b=f_{14}(b,a)$. An
interpretation of $I_{3}(a,b)$ is helpful:
\[
I_{3}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{4}}{n}
\leq a\text{ \& }a<\frac{\Lambda_{3}}{n}\leq\frac{\Lambda_{1}}{n}\leq
b\right\}
\]
i.e., the probability that exactly three cycles have length in the interval
$(a\,n,\,b\,n]$ and all others have length $\leq a\,n$.
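Limits such as $J_{1}(1/3,1/2)=0.22465184$ admit the kind of relative
verification by simulation mentioned earlier. The cycle lengths of a uniform
random $n$-permutation can be sampled without constructing the permutation:
the cycle containing a fixed element has length uniform on $\{1,\dots,n\}$,
and one recurses on the remaining elements. A minimal Monte Carlo sketch (the
value of $n$, the trial count and the seed are our own choices):

```python
import random

def cycle_lengths(n, rng):
    """Cycle lengths of a uniform random n-permutation, in size-biased
    order: the cycle through a fixed element is uniform on {1,...,n};
    recurse on the elements that remain."""
    out = []
    while n > 0:
        length = rng.randint(1, n)
        out.append(length)
        n -= length
    return sorted(out, reverse=True)

def estimate_J1(a, b, n=10_000, trials=20_000, seed=1):
    """Monte Carlo estimate of P{Lambda_2/n <= a and Lambda_1/n <= b}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lam = cycle_lengths(n, rng)
        lam2 = lam[1] if len(lam) > 1 else 0
        if lam[0] <= b * n and lam2 <= a * n:
            hits += 1
    return hits / trials

# Compare with J_1(1/3, 1/2) = 0.22465184 from Table 2.
print(estimate_J1(1/3, 1/2))
```

With 20000 trials the standard error is about $0.003$, so agreement to two
digits with the table entry is the most one should expect, exactly as in the
simulated columns of Tables 7 and 8 below.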
\subsection{Second and Third}

For $0<a<1/3$, $a\leq b<1/2$ and $b\leq c\leq1$, Ekkelkamp \cite{Ekk1-tcs9,
Ekk2-tcs9} demonstrated that
\[
\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{3}}{n}\leq a\text{,
}a<\frac{\Lambda_{2}}{n}\leq b\text{ \& }\frac{\Lambda_{1}}{n}\leq c\right\}
=\int\limits_{a}^{b}\int\limits_{y}^{c}\rho\left(\frac{1-x-y}{a}\right)
\frac{dx}{x}\frac{dy}{y}
\]
under the additional condition $a+b+c\leq1$. If we were to suppose that this
condition is unnecessary and set $c=1$, then by definition of $\rho_{2}$, we
would have
\[
L_{1}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{3}}{n}
\leq a\text{ \& }\frac{\Lambda_{2}}{n}\leq b\right\}=\underset{=K_{0}(a)}
{\underbrace{\rho_{2}\left(\frac{1}{a}\right)}}+\underset{=K_{1}(a,b)}
{\underbrace{\int\limits_{a}^{b}\int\limits_{y}^{1}\rho_{1}\left(
\frac{1-x-y}{a}\right)\frac{dx}{x}\frac{dy}{y}}}
\]
where $K_{1}$ is similar (but not identical) to $I_{2}$:
\[
K_{1}(a,b)=\lim_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{3}}{n}
\leq a\text{ \& }a<\frac{\Lambda_{2}}{n}\leq b\right\}.
\]
On the one hand, our supposition is evidently false.
In the following, we compare provisional theoretical values (eight digits of
precision) against simulated values (just two digits):

\begin{center}
\begin{tabular}[c]{|l|l||l|l|l|}\hline
$u\backslash v$ & $3$ & $3$ & $4$ & $5$\\\hline
$4$ & 0.62368106 & 0.27362816 $>$ 0.21 & & \\\hline
$5$ & 0.46322219 & 0.40043992 $>$ 0.32 & 0.17285583 $>$ 0.14 & \\\hline
$6$ & 0.36521775 & 0.43489680 $>$ 0.35 & 0.24479052 $>$ 0.20 & 0.10650591
$>$ 0.09\\\hline
\end{tabular}

Table 7: $K_{0}(1/u)$ and $K_{1}(1/u,1/v)$ for $4\leq u\leq6$, $3\leq v<u$

\bigskip

\begin{tabular}[c]{|l|l|l|l|l|l|}\hline
$u\backslash v$ & $2$ & $3$ & $4$ & $5$ & $6$\\\hline
$3$ & 1.00000000 & 0.85277932 & & & \\\hline
$4$ & 0.98511365 & 0.89730922 $>$ 0.84 & 0.62368106 & & \\\hline
$5$ & 0.92785965 & 0.86366210 $>$ 0.79 & 0.63607802 $>$ 0.60 & 0.46322219 &
\\\hline
$6$ & 0.85110720 & 0.80011455 $>$ 0.72 & 0.61000827 $>$ 0.56 & 0.47172366
$>$ 0.45 & 0.36521775\\\hline
\end{tabular}

Table 8: $L_{1}(1/u,1/v)$ for $3\leq u\leq6$, $2\leq v\leq u$
\end{center}

\noindent where the special cases
\[
L_{1}(a,b)=\left\{
\begin{array}[c]{lll}
\rho_{2}(1/b) & & \text{if }a=b\leq1/3,\\
\rho_{3}(1/a) & & \text{if }a\leq1/3\text{ and }b=1/2
\end{array}
\right.
\]
are surely true.
On the other hand, a verification of $L_{1}(a,b)$ is as follows:
\[
\frac{\partial L_{1}}{\partial b}=\frac{\partial}{\partial b}\int
\limits_{a}^{b}\int\limits_{y}^{1}\rho\left(\frac{1-x-y}{a}\right)
\frac{dx}{x}\frac{dy}{y}=\int\limits_{b}^{1}\rho\left(\frac{1-x-b}{a}
\right)\frac{1}{b}\frac{dx}{x}
\]
hence by Leibniz's Rule,
\begin{align*}
\frac{\partial^{2}L_{1}}{\partial a\,\partial b} & =-\int\limits_{b}^{1}
\rho^{\prime}\left(\frac{1-x-b}{a}\right)\frac{1-x-b}{a^{2}}\frac{1}{b}
\frac{dx}{x}=\int\limits_{b}^{1}\frac{\rho\left(\dfrac{1-a-b-x}{a}\right)}
{\dfrac{1-b-x}{a}}\frac{1-b-x}{a^{2}b\,x}\,dx\\
& =\int\limits_{b}^{1}\frac{\rho\left(\dfrac{1-a-b-x}{a}\right)}{a\,b\,x}
\,dx=\int\limits_{b}^{1}f_{123}(x,b,a)\,dx=f_{23}(b,a),
\end{align*}
as was to be shown. If a correction term of the form $\varphi(a)+\psi(b)$
could be incorporated into $K_{1}(a,b)$, rendering it suitably smaller, then
the above argument would still go through. Determining such expressions
$\varphi(a)$, $\psi(b)$ is an open problem.

For $0<\alpha<1/4$, $\alpha\leq\beta<1/3$, $\beta\leq\gamma<1/2$ and
$\gamma\leq\delta\leq1$, Ekkelkamp \cite{Ekk1-tcs9, Ekk2-tcs9} further
demonstrated that
\begin{align*}
& \lim_{n\rightarrow\infty}\mathbb{P}\left\{\dfrac{\Lambda_{4}}{n}\leq
\alpha\text{, }\alpha<\dfrac{\Lambda_{3}}{n}\leq\beta\text{, }\beta<\dfrac
{\Lambda_{2}}{n}\leq\gamma\text{ \& }\dfrac{\Lambda_{1}}{n}\leq\delta
\right\}\\
& =\int\limits_{\alpha}^{\beta}\int\limits_{z}^{\gamma}\int\limits_{y}
^{\delta}\rho\left(\frac{1-x-y-z}{\alpha}\right)\frac{dx}{x}\frac{dy}{y}
\frac{dz}{z}
\end{align*}
under the additional condition $\alpha+\beta+\gamma+\delta\leq1$.
Such a formula might eventually assist in calculating
\[
\begin{array}[c]{ccc}
\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\dfrac{\Lambda_{4}}{n}\leq
\alpha\text{ \& }\dfrac{\Lambda_{2}}{n}\leq\gamma\right\}, & &
\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\dfrac{\Lambda_{4}}{n}\leq
\alpha\text{ \& }\dfrac{\Lambda_{3}}{n}\leq\beta\right\}.
\end{array}
\]
We leave this task for others. Accuracy can be improved by including a
subordinate term -- we have studied only the main terms of asymptotic
expansions -- a fact mentioned in \cite{BS-tcs9}, citing \cite{Ekk1-tcs9};
for proofs one must refer to \cite{Ekk2-tcs9}. It is striking that so much of
this material remains unpublished (seemingly abandoned but thankfully
preserved in doctoral dissertations; see \cite{Clff-tcs9, Trmr-tcs9} for
more).\pagebreak

An odd confession is necessary at this point, and it is almost surely
overdue. The multivariate probabilities discussed here were originally
conceived not in the context of $n$-permutations as $n\rightarrow\infty$, but
instead in the difficult realm of integers $\leq N$ (prime factorizations,
with cryptographic applications) as $N\rightarrow\infty$. Knuth \&\ Trabb
Pardo \cite{KTP-tcs9, Grn1-tcs9, Grn2-tcs9} were the first to tenuously
observe this analogy. Lloyd \cite{Llyd-tcs9, Kng2-tcs9} reflected,
\textquotedblleft They do not explain the coincidence... No isomorphism of
the problems is established\textquotedblright. Early in his article, Tao
\cite{Tao-tcs9} wrote that a certain calculation does not offer understanding
of \textquotedblleft\textit{why} there is such a link\textquotedblright, but
later gave what he called a \textquotedblleft satisfying conceptual (as
opposed to computational) explanation\textquotedblright. After decades of
waiting, the fog has apparently lifted.
\section{Addendum:\ Mappings}

A counterpart of Billingsley's $f_{1234}$,
\[
g_{1234}(x,y,z,w)=\dfrac{1}{16\,x\,y\,z\,w}\,\sigma\left(\dfrac{1-x-y-z-w}
{w}\right)\frac{1}{\sqrt{w}},
\]
\[
\begin{array}[c]{ccc}
1>x>y>z>w>0, & & x+y+z+w<1;
\end{array}
\]
\[
\begin{array}[c]{ccc}
\xi\,\sigma^{\prime}(\xi)+\frac{1}{2}\sigma(\xi)+\frac{1}{2}\sigma(\xi-1)=0
\text{ for }\xi>1, & & \sigma(\xi)=1/\sqrt{\xi}\text{ for }0<\xi\leq1,
\end{array}
\]
is applicable to the study of connected components in random mappings
\cite{Watt-tcs9, ABT1-tcs9}. Let $\Lambda_{1}$ and $\Lambda_{2}$ denote the
largest and second-largest such components. We use similar notation, but
different techniques (because not as much is known about $\sigma$ as about
$\rho$). For example,
\begin{align*}
\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{1}}{n}>
\frac{1}{2}\right\} & =\int\limits_{1/2}^{1}g_{1}(x)\,dx=\int\limits_{1/2}
^{1}\frac{1}{2x}\sigma\left(\frac{1-x}{x}\right)\frac{dx}{\sqrt{x}}\\
& =\frac{1}{2}\int\limits_{1/2}^{1}\frac{1}{x\sqrt{1-x}}\,dx=\ln\left(
1+\sqrt{2}\right).
\end{align*}
Call this probability $Q$.
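The evaluation $Q=\ln(1+\sqrt{2})=0.88137358...$ can be confirmed
numerically; the substitution $u=\sqrt{1-x}$ removes the endpoint singularity
at $x=1$ and leaves $Q=\int_{0}^{1/\sqrt{2}}du/(1-u^{2})$, a smooth
integrand. A minimal sketch (composite Simpson's rule, with a mesh of our own
choosing):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

# Q = (1/2) * int_{1/2}^1 dx/(x sqrt(1-x)); with u = sqrt(1-x) this
# becomes int_0^{1/sqrt 2} du/(1-u^2).
Q = simpson(lambda u: 1.0 / (1.0 - u * u), 0.0, 1.0 / math.sqrt(2.0))
print(Q, math.log(1.0 + math.sqrt(2.0)))
```

Without the substitution, the raw integrand $1/(x\sqrt{1-x})$ blows up at
$x=1$ and a naive quadrature converges poorly.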
The analog here of what we called $A$ in the introduction is
\begin{align*}
& 1-\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{1}}{n}>
\frac{1}{2}\right\}-\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{
\frac{\Lambda_{1}}{n}\leq\frac{1}{2}\text{ \& }\frac{1}{3}<\frac{\Lambda
_{2}}{n}\leq\frac{1}{2}\right\}\\
& =1-Q-\int\limits_{1/3}^{1/2}\int\limits_{1/3}^{x}g_{12}(x,y)\,dy\,dx
=1-Q-\int\limits_{1/3}^{1/2}\int\limits_{1/3}^{x}\frac{1}{4\,x\,y}\sigma
\left(\frac{1-x-y}{y}\right)\frac{dy\,dx}{\sqrt{y}}\\
& =1-Q-\frac{1}{4}\int\limits_{1/3}^{1/2}\int\limits_{1/3}^{x}\frac{dy\,dx}
{x\,y\sqrt{1-x-y}}=0.065484671719...
\end{align*}
and the analog of what we called $1-A-B$ is
\begin{align*}
& \lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{1}}{n}>
\frac{1}{2}\right\}-\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{
\frac{\Lambda_{1}}{n}>\frac{1}{2}\text{ \& }\frac{1}{3}<\frac{\Lambda_{2}}
{n}\leq\frac{1}{2}\right\}\\
& =Q-\int\limits_{1/2}^{2/3}\int\limits_{1/3}^{1-x}g_{12}(x,y)\,dy\,dx
=Q-\int\limits_{1/2}^{2/3}\int\limits_{1/3}^{1-x}\frac{1}{4\,x\,y}\sigma
\left(\frac{1-x-y}{y}\right)\frac{dy\,dx}{\sqrt{y}}\\
& =Q-\frac{1}{4}\int\limits_{1/2}^{2/3}\int\limits_{1/3}^{1-x}\frac{dy\,dx}
{x\,y\sqrt{1-x-y}}=0.780087954710....
\end{align*}
Thus the analog of $B$ (associated with the orange$\,\cup\,$brown triangle
in Figure 1) is
\[
\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{\frac{\Lambda_{2}}{n}>
\frac{1}{3}\right\}=1-A-(1-A-B)=0.154427373569...
\]
and should lead in due course to a formula for $\sigma_{2}$, generalizing $\sigma_{1}=\sigma$.

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=5.31in, width=5.6498in]{fg.eps}
\caption{$f_{1}(x)=\dfrac{1}{x}\,\rho\left( \dfrac{1-x}{x}\right)$ and $g_{1}(x)=\dfrac{1}{2x^{3/2}}\sigma\left( \dfrac{1-x}{x}\right)$ comparison;\protect\linebreak the differential expression $g_{1}(x)=\dfrac{d}{dx}\left( \dfrac{1}{x^{1/2}}\sigma\left( \dfrac{1}{x}\right) \right)$ is akin to $f_{1}(x)=\dfrac{d}{dx}\,\rho\left( \dfrac{1}{x}\right)$.}
\end{center}
\end{figure}

\begin{figure}[ptb]
\begin{center}
\includegraphics[height=6.5847in, width=6.5639in]{yxmap.eps}
\caption{$g_{12}(x,y)=\dfrac{1}{4\,x\,y^{3/2}}\sigma\left( \dfrac{1-x-y}{y}\right)$ over $0\leq y\leq1/2$ and $y\leq x\leq1-y$;\medskip\ \protect\linebreak this contrasts sharply with the plot of $f_{12}(x,y)$ in Figure 2 along the diagonal segment $x=y$.}
\end{center}
\end{figure}

\section{Addendum:\ Short Cycles}

Given a random $n$-permutation, let $S_{r}$ denote the length of the $r^{\text{th}}$ shortest cycle ($0$ if the permutation has no $r^{\text{th}}$ cycle) and $C_{\ell}$ denote the number of cycles of length $\ell$. \ Since, as $n\rightarrow\infty$, the distribution of $C_{\ell}$ approaches Poisson($1/\ell$) and $C_{1}$, $C_{2}$, $C_{3}$, \ldots\ become asymptotically independent \cite{AT-tcs9}, we can calculate corresponding probabilities for $S_{r}$.
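This limiting Poisson behavior is easy to corroborate by simulation; the sketch below (illustrative code, not from the paper; the sample sizes and tolerances are arbitrary choices) compares the empirical frequencies of fixed-point-free and $2$-cycle-free permutations with $e^{-1}$ and $e^{-1/2}$:

```python
import math, random

def cycle_counts(n, rng):
    """Cycle-length histogram of a uniform random permutation of {0,...,n-1}."""
    perm = list(range(n))
    rng.shuffle(perm)
    seen = [False] * n
    counts = {}
    for start in range(n):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

rng = random.Random(0)
trials, n = 5_000, 200
no_fixed_pt = no_2cycle = 0
for _ in range(trials):
    c = cycle_counts(n, rng)
    no_fixed_pt += (c.get(1, 0) == 0)
    no_2cycle += (c.get(2, 0) == 0)

# In the limit C_1 ~ Poisson(1) and C_2 ~ Poisson(1/2), so:
print(no_fixed_pt / trials, math.exp(-1.0))    # both ~ 0.368
print(no_2cycle / trials, math.exp(-0.5))      # both ~ 0.607
```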
\ For example,
\[
\mathbb{P}\left\{ S_{1}=1\right\} =\mathbb{P}\left\{ C_{1}\geq1\right\} =1-\mathbb{P}\left\{ C_{1}=0\right\} =1-e^{-1},
\]
\begin{align*}
\mathbb{P}\left\{ S_{1}=2\right\} & =\mathbb{P}\left\{ C_{1}=0\text{ \&\ }C_{2}\geq1\right\} =\mathbb{P}\left\{ C_{1}=0\right\} -\mathbb{P}\left\{ C_{1}=0\text{ \&\ }C_{2}=0\right\} \\
& =\mathbb{P}\left\{ C_{1}=0\right\} \left( 1-\mathbb{P}\left\{ C_{2}=0\right\} \right) =e^{-1}\left( 1-e^{-1/2}\right) =e^{-1}-e^{-3/2}
\end{align*}
and, more generally,
\[
\begin{array}[c]{ccc}
\mathbb{P}\left\{ S_{1}=i\right\} =e^{-H_{i-1}}-e^{-H_{i}}, & & H_{m}=\sum\limits_{k=1}^{m}\dfrac{1}{k}.
\end{array}
\]
It is understood that these are limiting quantities as $n\rightarrow\infty$. \ As another example,
\[
\mathbb{P}\left\{ S_{2}=1\right\} =\mathbb{P}\left\{ C_{1}\geq2\right\} =1-\mathbb{P}\left\{ C_{1}\leq1\right\} =1-2e^{-1},
\]
\begin{align*}
\mathbb{P}\left\{ S_{2}=2\right\} & =\mathbb{P}\left\{ C_{1}=1\text{ \&\ }C_{2}\geq1\right\} +\mathbb{P}\left\{ C_{1}=0\text{ \&\ }C_{2}\geq2\right\} \\
& =\mathbb{P}\left\{ C_{1}=1\right\} -\mathbb{P}\left\{ C_{1}=1\text{ \&\ }C_{2}=0\right\} +\mathbb{P}\left\{ C_{1}=0\right\} -\mathbb{P}\left\{ C_{1}=0\text{ \&\ }C_{2}\leq1\right\} \\
& =e^{-1}\left( 1-e^{-1/2}\right) +e^{-1}\left( 1-\tfrac{3}{2}e^{-1/2}\right) =2e^{-1}-\tfrac{5}{2}e^{-3/2}
\end{align*}
and
\[
\mathbb{P}\left\{ S_{2}=j\right\} =\left( H_{j-1}+1\right) e^{-H_{j-1}}-\left( H_{j}+1\right) e^{-H_{j}}.
\]
Similar reasoning leads to
\[
\mathbb{P}\left\{ S_{1}=i\text{ \&\ }S_{2}=j\right\} =\left\{
\begin{array}[c]{lll}
e^{-H_{i-1}}-\left( 1+\dfrac{1}{i}\right) e^{-H_{i}} & \bigskip & \text{if }i=j,\\
\dfrac{1}{i}\left( e^{-H_{j-1}}-e^{-H_{j}}\right) & \bigskip & \text{if }i<j,\\
0 & & \text{otherwise}
\end{array}
\right.
\]
enabling a conjecture: $\mathbb{E}(S_{1}S_{2})=O(\ln(n)^{3})$. \ A proof still remains out of reach.

\section{Acknowledgements}

I am grateful to Michael Rogers, Josef Meixner, Nicholas Pippenger, Eran Tromer, John Kingman, Andrew Barbour, Ross Maller and Joseph Blitzstein for helpful discussions. \ The creators of Mathematica, as well as the administrators of the MIT Engaging Cluster, earn my gratitude every day. \ Interest in this subject has, for me, spanned many years \cite{Fi4-tcs9, Fi5-tcs9}. \ A sequel to this paper will be released soon \cite{Fi6-tcs9}.
\section{Introduction} The Ricci flow $\tfrac{\partial}{\partial t}\mathrm g(t)=-2\operatorname{Ric}_{\mathrm g(t)}$ of Riemannian metrics on a smooth manifold is an evolution equation that continues to drive a wide range of breakthroughs in Geometric Analysis, see e.g.~\cite{bamler-survey} for a survey. One of the keys to using Ricci flow is to control how the curvature of $\mathrm g(t)$ evolves; in particular, which curvature conditions of the original metric $\mathrm g(0)$ are preserved. Our main result establishes that, in dimension $n=4$, positive sectional curvature ($\sec>0$) is \emph{not} among them: \begin{mainthm}\label{mainthmA} There exist smooth Riemannian metrics with $\sec>0$ on $S^4$ and $\mathds{C} P^2$ that evolve under the Ricci flow to metrics with sectional curvatures of mixed sign. \end{mainthm} In contrast, $\sec>0$ is preserved on closed manifolds of dimension $n\leq 3$, by the seminal work of Hamilton~\cite{hamilton-original}. Moreover, it was previously known~\cite{maximo2} that $\operatorname{Ric}>0$ is not preserved in dimension $n=4$, even among K\"ahler metrics, but these examples do not have $\sec>0$. Although \Cref{mainthmA} does not readily extend to all $n>4$, there are examples of homogeneous metrics on flag manifolds of dimensions $6$, $12$, and $24$ with $\sec>0$ that lose that property when evolved via Ricci flow, see \cite{bw-gafa-2007,cheung-wallach,abiev-nikonorov}. A state-of-the-art discussion of Ricci flow invariant curvature conditions can be found in \cite{bcrw}, see also \Cref{rem:max-princ}. \Cref{mainthmA} builds on our earlier result~\cite{bettiol-krishnan1} that certain metrics with $\sec\geq0$, introduced by Grove and Ziller~\cite{grove-ziller-annals} in a much broader context (see \Cref{subsec:GZ-metrics}), immediately acquire negatively curved planes on $S^4$ and $\mathds{C} P^2$, when evolved under Ricci flow. 
In light of the appropriate continuous dependence of Ricci flow on its initial data \cite{BGI20}, the metrics in \Cref{mainthmA} are obtained by means~of: \begin{mainthm}\label{mainthmB} Every Grove--Ziller metric on $S^4$ or $\mathds{C} P^2$ is the limit (in $C^\infty$-topology) of cohomogeneity one metrics with $\sec>0$. \end{mainthm} In full generality, the problem of perturbing $\sec\geq0$ to $\sec>0$ is notoriously difficult, see e.g.~\cite[Prob.~2]{wilking-survey}. Aside from clearly being unobstructed on $S^4$ and $\mathds{C} P^2$, the deformation problem is facilitated here by the presence of natural directions for perturbation, given by the round metric and the Fubini--Study metric, respectively. Indeed, we deform $\sec\geq0$ into $\sec>0$ in \Cref{mainthmB} by linearly interpolating lengths of Killing vector fields for the $\mathsf{SO}(3)$-action which is isometric for both the Grove--Ziller metric $\mathrm g_0$ and the standard metric $\mathrm g_1$ on these spaces. The resulting $\mathsf{SO}(3)$-invariant metrics $\mathrm g_s$, $s\in [0,1]$, are smooth and have $\sec>0$ for all sufficiently small $s>0$. For a lower-dimensional illustration, consider the $\mathsf{T}^2$-action on $S^3\subset \mathds{C}^2$ via $(e^{i\theta_1},e^{i\theta_2})\cdot(z,w)=\big(e^{i\theta_1}z,e^{i\theta_2}w\big)$, and invariant metrics \begin{equation*} \mathrm g=\mathrm{d} r^2+ \varphi(r)^2\,\mathrm{d} \theta_1^2+\xi(r)^2\,\mathrm{d} \theta_2^2, \quad 0<r<\tfrac\pi2, \end{equation*} written along the geodesic segment $\gamma(r)=(\sin r,\cos r)$. The functions $\varphi$ and $\xi$ encode the $\mathrm g$-lengths of the Killing fields $\frac{\partial}{\partial \theta_1}$ and $\frac{\partial}{\partial \theta_2}$ respectively, and must satisfy certain smoothness conditions at the endpoints $r=0$ and $r=\frac\pi2$. 
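Using the standard fact (recalled just below) that the curvature operator of this doubly warped metric is diagonal with eigenvalues $-\varphi''/\varphi$, $-\xi''/\xi$, and $-\varphi'\xi'/(\varphi\xi)$, a quick finite-difference computation confirms that the choice $\varphi=\sin r$, $\xi=\cos r$ recovers the unit round sphere, $\sec\equiv1$. This is an illustrative numerical sketch, not part of the paper:

```python
import math

def d1(f, r, h=1e-4):    # central first difference
    return (f(r + h) - f(r - h)) / (2 * h)

def d2(f, r, h=1e-4):    # central second difference
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)

phi, xi = math.sin, math.cos   # warping functions of the unit round metric on S^3

for r in (0.3, 0.7, 1.1, 1.4):
    sec1 = -d2(phi, r) / phi(r)                        # -phi''/phi
    sec2 = -d2(xi, r) / xi(r)                          # -xi''/xi
    sec3 = -d1(phi, r) * d1(xi, r) / (phi(r) * xi(r))  # -phi' xi' / (phi xi)
    assert max(abs(sec1 - 1), abs(sec2 - 1), abs(sec3 - 1)) < 1e-5
```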
The unit round metric $\mathrm g_1$ is given by setting $\varphi$ and $\xi$ to be $\varphi_1(r)=\sin r$ and $\xi_1(r)=\cos r$, while a Grove--Ziller metric $\mathrm g_0$ corresponds to concave monotone functions $\varphi_0$ and $\xi_0$ that plateau at a constant value $b>0$ for at least half of $\left[0,\frac\pi2\right]$. The curvature operator of $\mathrm g$ is easily seen to be diagonal, with eigenvalues $-\varphi''/\varphi$, $-\xi''/\xi$, and $-\varphi'\xi'/\varphi\xi$, see e.g.~\cite[Sec.~4.2.4]{petersen-book-3}, so it has $\sec\geq0$ if and only if $\varphi$ and $\xi$ are concave and monotone, and $\sec>0$ if and only if they are \emph{strictly} concave and monotone. Thus, \begin{equation*} \varphi_s=(1-s)\,\varphi_0+s\,\varphi_1 \quad \text{ and }\quad \xi_s=(1-s)\,\xi_0+s\,\xi_1 \end{equation*} give rise to metrics $\mathrm g_s$ deforming $\mathrm g_0$ to have $\sec>0$ for $s>0$. It turns out that a similar approach works for proving \Cref{mainthmB}, with the addition of a third (nowhere vanishing) function $\psi$, to deal with $\mathsf{SO}(3)$-invariant metrics on $4$-manifolds. The biggest challenge is verifying that these metrics have $\sec>0$, since that is no longer equivalent to positive-definiteness of the curvature operator if $n\geq4$. To overcome this difficulty, we use a much simpler algebraic characterization of $\sec>0$ in dimension $n=4$, given by the Finsler--Thorpe trick (\Cref{prop:FTtrick}). Motivated by the above, it is natural to ask whether the set of cohomogeneity one metrics with $\sec\geq0$ on a given closed manifold coincides with the closure (say, in $C^2$-topology) of the set of such metrics with $\sec>0$, if the latter is nonempty. In contrast to \Cref{mainthmB}, there is some evidence to suggest that Grove--Ziller metrics on certain $7$-manifolds cannot be perturbed to have $\sec>0$, see~\cite[Sec.~4]{ziller-coh1survey}. This paper is organized as follows. 
Background material on cohomogeneity one manifolds and the Finsler--Thorpe trick in dimension $4$ is presented in \Cref{sec:prelim}. The smoothness conditions and curvature operator of $\mathsf{SO}(3)$-invariant metrics on $S^4$ and $\mathds{C} P^2$ are discussed in \Cref{sec:s4andcp2}. \Cref{sec:sec>0gs} contains the proof of \Cref{mainthmB}, focusing mainly on the case of $S^4$, since the proof for $\mathds{C} P^2$ is mostly analogous. Finally, \Cref{mainthmA} is proved in \Cref{sec:pos-neg}. \section{Preliminaries}\label{sec:prelim} \subsection{Cohomogeneity one} We briefly discuss the geometry of cohomogeneity one manifolds to fix notations, see \cite{mybook,bettiol-krishnan1,grove-ziller-annals,gz-inventiones,VZ18,ziller-coh1survey} for details. A cohomogeneity one manifold is a Riemannian manifold $(M,\mathrm g)$ endowed with an isometric action by a Lie group $\mathsf{G}$, such that the orbit space $M/\mathsf{G}$ is one-dimensional. Let $\pi\colon M\to M/\mathsf{G}$ be the projection map. Throughout, we assume $M/\mathsf{G}=[0,L]$ is a closed interval, and the nonprincipal orbits $B_-=\pi^{-1}(0)$ and $B_+=\pi^{-1}(L)$ are \emph{singular orbits}. In other words, $B_\pm$ are smooth submanifolds of dimension strictly smaller than the principal orbits $\pi^{-1}(r)$, $r\in (0,L)$, which are smooth hypersurfaces of $M$. Fix $x_{-}\in B_{-}$, and consider a minimal geodesic $\gamma(r)$ in $M$ joining $x_{-}$ to $B_{+}$, meeting it at $x_{+}=\gamma(L)$; that is, $\gamma$ is a horizontal lift of $[0,L]$ to $M$. Denote by $\mathsf{K}_{\pm}$ the isotropy group at $x_{\pm}$, and by $\H$ the isotropy at $\gamma(r)$, for $r\in (0,L)$. By the Slice Theorem, given $r_{\mathrm{max}}^\pm>0$ so that $r_{\mathrm{max}}^+ +r_{\mathrm{max}}^-=L$, the tubular neighborhoods $D(B_{-}) = \pi^{-1}\left(\left[0, r_{\mathrm{max}}^-\right]\right)$ and $D(B_{+}) = \pi^{-1}\left(\left[L-r_{\mathrm{max}}^+ ,L\right]\right)$ of the singular orbits are disk bundles over $B_-$ and $B_+$. 
Let $D^{l_{\pm}+1}$ be the normal disks to $B_{\pm}$ at $x_{\pm}$. Then $\mathsf{K}_{\pm}$ acts transitively on the boundary $\partial D^{l_{\pm}+1}$, with isotropy $\H$, so $\partial D^{l_{\pm}+1} = S^{l_{\pm}} = \mathsf{K}_{\pm}/\H$, and the $\mathsf{K}_{\pm}$-action on $\partial D^{l_{\pm}+1}$ extends to a $\mathsf{K}_\pm$-action on all of $D^{l_{\pm}+1}$. Moreover, there are equivariant diffeomorphisms $D(B_{\pm}) \cong\mathsf{G}\times_{\mathsf{K}_{\pm}}D^{l_{\pm}+1}$, and $M\cong D(B_-)\cup D(B_+)$, where the latter is given by gluing these disk bundles along their common boundary $\partial D(B_{\pm}) \cong\mathsf{G}\times_{\mathsf{K}_{\pm}} S^{l_{\pm}}\cong \mathsf{G}/\H$. In this situation, one associates to $M$ the \emph{group diagram} \begin{equation*} \H\subset\{\mathsf{K}_-,\mathsf{K}_+\}\subset \mathsf{G}. \end{equation*} Conversely, given a group diagram as above, where $\mathsf{K}_{\pm}/\H$ are spheres, there exists a cohomogeneity one manifold $M$ given as the union of the above disk bundles. Fix a bi-invariant metric $Q$ on the Lie algebra $\mathfrak{g}$ of $\mathsf{G}$, and set $\mathfrak{n} = \mathfrak{h}^\perp$, where $\mathfrak{h}\subset\mathfrak{g}$ is the Lie algebra of $\H$. Identifying $\mathfrak{n}\cong T_{\gamma(r)}(\mathsf{G}/\H)$ for each $0<r<L$ via action fields $X\mapsto X^*_{\gamma(r)}$, any $\mathsf{G}$-invariant metric on $M$ can be written~as \begin{equation}\label{eqn:coh1metric} \mathrm g =\mathrm{d} r^2 + \mathrm g_r, \quad 0<r<L, \end{equation} along the geodesic $\gamma(r)$, where $\mathrm g_r$ is a $1$-parameter family of left-invariant metrics on $\mathsf{G}/\H$, i.e., of $\operatorname{Ad}(\H)$-invariant metrics on $\mathfrak{n}$. As $r\searrow0$ and $r\nearrow L$, the metrics $\mathrm g_r$ degenerate, according to how $\mathsf{G}(\gamma(r))\cong\mathsf{G}/\H$ collapse to $B_\pm=\mathsf{G}/\mathsf{K}_\pm$. 
Namely, they satisfy \emph{smoothness conditions} that characterize when a tensor defined by means of \eqref{eqn:coh1metric} on $M\setminus (B_-\cup B_+)\cong (0,L)\times \mathsf{G}/\H$ extends smoothly to all of $M$, see \cite{VZ18}. \subsubsection{Grove--Ziller metrics}\label{subsec:GZ-metrics} If both singular orbits $B_\pm$ of a cohomogeneity one manifold $M$ have codimension two, then $M$ can be endowed with a new $\mathsf{G}$-invariant metric $\mathrm g_{\mathrm{GZ}}$ with $\sec\geq0$, as shown in the celebrated work of Grove and Ziller~\cite[Thm.~2.6]{grove-ziller-annals}. We now describe this construction, building metrics with $\sec\geq0$ on each disk bundle $D(B_\pm)$ that restrict to a fixed product metric $\mathrm{d} r^2+b^2 Q|_{\mathfrak n}$ near $\partial D(B_\pm)\cong \mathsf{G}/\H$, so that these two pieces can be isometrically glued together. Consider one such disk bundle $D(B)$ at a time, say over a singular orbit $B=\mathsf{G}/\mathsf{K}$, and let $\k$ be the Lie algebra of $\mathsf{K}$. Set $\mathfrak{m} = \k^\perp$ and $\mathfrak{p} = \mathfrak{h}^\perp \cap \k$, so that $\mathfrak{g}=\mathfrak{m}\oplus\mathfrak{p}\oplus\mathfrak{h}$ is a $Q$-orthogonal direct sum. Since $\mathfrak{p}$ is $1$-dimensional, the metric $Q_{a,b}$ on $\mathsf{G}$, given~by \begin{align*} Q_{a,b}|_\mathfrak{m} := b^2\, Q|_\mathfrak{m}, \qquad Q_{a,b}|_\mathfrak{p} := ab^2\, Q|_\mathfrak{p}, \qquad Q_{a,b}|_\mathfrak{h} :=b^2\, Q|_\mathfrak{h}, \end{align*} has $\sec \geq 0$ whenever $0<a \leq \frac{4}{3}$ and $b>0$, see \cite[Prop.~2.4]{grove-ziller-annals} or \cite[Lemma~3.2]{bm-mathann}. Fix $1<a\leq \frac43$, and let $r_{\mathrm{max}}>0$ be such that \begin{equation}\label{eq:MVT-obstruction} y:=\tfrac{ \rho\sqrt{a}}{\sqrt{a-1}} \;\;\text{ satisfies }\;\; y< \, r_{\mathrm{max}}, \end{equation} where $\rho=\rho(b)$ is the radius of the circle(s) $\mathsf{K}/\H$ endowed with the metric $b^2\,Q|_{\mathfrak{p}}$. 
Then, we can find a smooth nondecreasing function $f\colon \left[0,r_{\mathrm{max}}\right]\to\mathds{R}$ and some $0<r_0<r_{\mathrm{max}}$, with $f(0)=0$, $f'(0)=1$, $f^{(2n)}(0)=0$ for all $n\in\mathds N$, $f''(r)\leq 0$ for all $r\in \left[0,r_{\mathrm{max}}\right]$, $f^{(3)}(r) > 0$ for all $r\in [0, r_0)$, and $f(r) \equiv y$ for all $r\in \left[r_0, r_{\mathrm{max}}\right]$. The rotationally symmetric metric $\mathrm g_{D^2} = \mathrm{d} r^2 + f(r)^2 \mathrm{d}\theta^2$, $0<r\leq r_{\mathrm{max}}$, on the punctured disk $D^2\setminus\{0\}$ extends to a smooth metric $\mathrm g_{D^2}$ on $D^2$ with $\sec\geq0$ that, near $\partial D^2=\{r=r_{\mathrm{max}}\}$, is isometric to a round cylinder $\left[r_0, r_{\mathrm{max}}\right]\times S^1(y)$ of radius $y$. Thus, the product manifold $(\mathsf{G}\times D^2, Q_{a,b} + \mathrm g_{D^2})$ has $\sec \geq 0$, and so does the orbit space $D(B)\cong \mathsf{G}\times_\mathsf{K} D^2$ of the $\mathsf{K}$-action on $\mathsf{G}\times D^2$, when endowed with the metric $\mathrm g_{\mathrm{GZ}}$ that makes the projection map $\Pi\colon (\mathsf{G}\times D^2, Q_{a,b} + \mathrm g_{D^2})\to (\mathsf{G}\times_\mathsf{K} D^2,\mathrm g_{\mathrm{GZ}})$ a Riemannian submersion. Writing this metric $\mathrm g_{\mathrm{GZ}}$ in the form \eqref{eqn:coh1metric}, we have \begin{equation}\label{eq:gGZcoh1form} \mathrm g_{\mathrm{GZ}}=\mathrm{d} r^2+b^2\, Q|_{\mathfrak{m}} +\tfrac{f(r)^2a}{f(r)^2 + a \rho^2}b^2\,Q|_{\mathfrak{p}}, \quad 0<r\leq r_{\mathrm{max}}, \end{equation} see e.g.~\cite[Lemma~2.1, Rem.~2.7]{grove-ziller-annals} or \cite[Lemma 3.1~(ii)]{bm-mathann}. In particular, $\mathrm g_{\mathrm{GZ}}=\mathrm{d} r^2 +b^2\, Q|_{\mathfrak{n}}$ for all $r\in \left[r_0, r_{\mathrm{max}}\right]$, since $\tfrac{f(r)^2a}{f(r)^2 + a \rho^2}\equiv 1$ for all such $r$; hence $(D(B),\mathrm g_{\mathrm{GZ}})$ is isometric to the prescribed product metric near $\partial D(B)\cong \mathsf{G}/\H$. 
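The plateau claim can be verified directly: plugging $f\equiv y=\sqrt{a}\,\rho/\sqrt{a-1}$ into the coefficient in \eqref{eq:gGZcoh1form} gives $y^2+a\rho^2=\tfrac{a\rho^2}{a-1}+a\rho^2=\tfrac{a^2\rho^2}{a-1}=y^2 a$, so the coefficient equals $1$. A short numerical sketch (illustrative code; the sample values of $a$ and $\rho$ are arbitrary):

```python
import math

def gz_coeff(f, a, rho):
    """Coefficient of b^2 Q|_p in the Grove--Ziller metric: f^2 a / (f^2 + a rho^2)."""
    return f * f * a / (f * f + a * rho * rho)

a, rho = 4.0 / 3.0, 0.7      # sample values with 1 < a <= 4/3 and rho > 0
y = math.sqrt(a) * rho / math.sqrt(a - 1.0)

assert abs(gz_coeff(y, a, rho) - 1.0) < 1e-12   # product metric b^2 Q on the plateau
assert gz_coeff(0.5 * y, a, rho) < 1.0          # strictly below 1 before the plateau
```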
This construction can be performed on each disk bundle $D(B_\pm)$ with the same $b>0$, provided $r_{\mathrm{max}}^\pm>0$ are chosen sufficiently large so that \eqref{eq:MVT-obstruction} holds for the corresponding radii $\rho_\pm(b)$ of the circles $\mathsf{K}_\pm/\H$ endowed with the metric $b^2\,Q|_{\mathfrak p_\pm}$. Gluing these two disk bundles together, we obtain the desired $\mathsf{G}$-invariant metric $\mathrm g_{\mathrm{GZ}}$ with $\sec\geq0$ on $M\cong D(B_-)\cup D(B_+)$ and $M/\mathsf{G}=[0,L]$, where $L = r_{\mathrm{max}}^+ + r_{\mathrm{max}}^-$. Although it is natural to pick the same (largest) value for $r_{\mathrm{max}}^\pm$, so that the gluing occurs at $r=\frac{L}{2}$, it is convenient to not impose this restriction. Note that \begin{equation}\label{eq:lowerboundL} L = r_{\mathrm{max}}^+ + r_{\mathrm{max}}^- > \tfrac{\sqrt{a}}{\sqrt{a-1}}\,\big(\rho_+(b) +\rho_-(b)\big), \end{equation} if the gluing interface $\partial D(B_\pm)$ is isometric to $(\mathsf{G}/\H,b^2 Q|_\mathfrak{n})$. Conversely, given $1<a\leq\frac43$, $b>0$, and $L$ satisfying \eqref{eq:lowerboundL}, there exists a Grove--Ziller metric on $M$ with gluing interface $(\mathsf{G}/\H,b^2 Q|_\mathfrak{n})$, induced by $Q_{a,b}+\mathrm g_{D^2}$, and with $M/\mathsf{G}=[0,L]$. \begin{remark}\label{rem:psec0r1]} Although this is not a requirement in the original Grove--Ziller construction, we assume that $f^{(3)}(r) > 0$ on $[0, r_0)$, hence the curvature of $(D^2, \mathrm g_{D^2})$ is monotonically decreasing for $r\in [0,r_0)$. As a consequence, for each $0<r_*<r_0$, there is a constant $c>0$, depending on $r_*$, so that $\sec_{\mathrm g_{D^2}} \geq c$ for all $r\in [0,r_*]$. 
\end{remark} \subsection{Finsler--Thorpe trick} In order to verify $\sec>0$ on Riemannian $4$-manifolds, we shall use a result that became known in the Geometric Analysis community as \emph{Thorpe's trick}, attributed to Thorpe~\cite{Thorpe72}, but that actually follows from much earlier work of Finsler~\cite{finsler}, and is often referred to as \emph{Finsler's Lemma} in Convex Algebraic Geometry. This rather multifaceted result is also known as the \emph{$S$-lemma}, or \emph{$S$-procedure}, in the mathematical optimization and control literature, see e.g.~\cite{slemma-survey}. Details and other geometric perspectives can be found in \cite{bkm-siaga}. Let $\operatorname{Sym}^2_{\mathrm b}(\wedge^2\mathds{R}^n)\subset \operatorname{Sym}^2(\wedge^2\mathds{R}^n)$ be the subspace of symmetric endomorphisms $R\colon \wedge^2\mathds{R}^n\to\wedge^2\mathds{R}^n$ that satisfy the first Bianchi identity. These objects are called \emph{algebraic curvature operators}, and serve as pointwise models for the curvature operators of Riemannian $n$-manifolds. For instance, $R\in\operatorname{Sym}^2_{\mathrm b}(\wedge^2\mathds{R}^n)$ is said to have $\sec\geq0$, respectively $\sec>0$, if the restriction of the quadratic form $\langle R(\sigma),\sigma\rangle$ to the oriented Grassmannian $\operatorname{Gr}_2^+(\mathds{R}^n)\subset\wedge^2\mathds{R}^n$ of $2$-planes is nonnegative, respectively positive. A Riemannian manifold $(M^n,\mathrm g)$ has $\sec\geq0$, or $\sec>0$, if and only if its curvature operator $R_p\in\operatorname{Sym}^2_{\mathrm b}(\wedge^2 T_pM)$ has $\sec\geq0$, or $\sec>0$, for all $p\in M$. The orthogonal complement to $\operatorname{Sym}^2_{\mathrm b}(\wedge^2\mathds{R}^n)$ is identified with $\wedge^4\mathds{R}^n$; so, if $n=4$, it is $1$-dimensional, and spanned by the Hodge star operator $*$. 
Since $\sigma\in\wedge^2\mathds{R}^4$ satisfies $\sigma\wedge\sigma=0$ if and only if $\langle *\sigma,\sigma\rangle=0$, the quadric defined by $*$ in $\wedge^2\mathds{R}^4$ is precisely the Pl\"ucker embedding $\operatorname{Gr}_2^+(\mathds{R}^4)\subset\wedge^2\mathds{R}^4$. As shown by Finsler~\cite{finsler}, a quadratic form $\langle R(\sigma),\sigma\rangle$ is nonnegative when restricted to the quadric $\langle *\sigma,\sigma\rangle=0$ if and only if some linear combination of $R$ and $*$ is positive-semidefinite, yielding: \begin{proposition}[Finsler--Thorpe trick]\label{prop:FTtrick} Let $R\in \operatorname{Sym}^2_{\mathrm b}(\wedge^2 \mathds{R}^4)$ be an algebraic curvature operator. Then $R$ has $\sec\geq0$, respectively $\sec>0$, if and only if there exists $\tau\in\mathds{R}$ such that $R+\tau\, *\succeq0$, respectively $R+\tau\, *\succ0$. \end{proposition} \begin{remark}\label{rem:set-of-taus} For a given $R\in \operatorname{Sym}^2_{\mathrm b}(\wedge^2 \mathds{R}^4)$ with $\sec\geq0$, the set of $\tau\in\mathds{R}$ such that $R+\tau\,*\succeq0$ is a closed interval $[\tau_{\mathrm{min}},\tau_{\mathrm{max}}]$, which degenerates to a single point, i.e., $\tau_{\mathrm{min}}=\tau_{\mathrm{max}}$, if and only if $R$ does not have $\sec>0$, see \cite[Prop.~3.1]{bkm-siaga} \end{remark} The equivalences given by Finsler--Thorpe's trick offer substantial computational advantages to test for $\sec\geq0$ or $\sec>0$, see the discussion in~\cite[Sec.~5.4]{bkm-siaga}. \section{\texorpdfstring{Cohomogeneity one structure of $S^4$ and $\mathds{C} P^2$}{Cohomogeneity one structure of the sphere and complex projective plane}}\label{sec:s4andcp2} Both $S^4$ and $\mathds{C} P^2$ admit a cohomogeneity one action by $\mathsf{G}=\mathsf{SO}(3)$ as we now recall, see~\cite[Sec.~3]{bettiol-krishnan1} and \cite[Sec.~2]{ziller-coh1survey} for details. 
The $\mathsf{G}$-action on $S^4$ is the restriction to the unit sphere of the $\mathsf{SO}(3)$-action by conjugation on the space of symmetric traceless $3\times 3$ real matrices, while the $\mathsf{G}$-action on $\mathds{C} P^2$ is a subaction of the transitive $\mathsf{SU}(3)$-action. The corresponding orbit spaces are $S^4/\mathsf{G}=\left[0,\frac\pi3\right]$ and $\mathds{C} P^2/\mathsf{G}=\left[0,\frac\pi4\right]$, endowing $S^4$ with the round metric with $\sec\equiv1$, and $\mathds{C} P^2$ with the Fubini--Study metric with $1\leq\sec\leq4$. Their group diagrams are as follows: \begin{align*} S^4&: &\mathds{Z}_2\oplus\mathds{Z}_2\cong \S(\mathsf{O}(1)\mathsf{O}(1)\mathsf{O}(1)) &\subset \{ \S(\mathsf{O}(1)\mathsf{O}(2)),\S(\mathsf{O}(2)\mathsf{O}(1))\} \subset\mathsf{SO}(3),\\ \mathds{C} P^2&: &\mathds{Z}_2\cong \langle\operatorname{diag}(-1,-1,1) \rangle &\subset \{ \S(\mathsf{O}(1)\mathsf{O}(2)), \mathsf{SO}(2)_{1,2}\} \subset\mathsf{SO}(3), \end{align*} according to an appropriate choice of minimal geodesic $\gamma(r)$, $r\in [0,L]$, see~\cite[Sec.~3]{bettiol-krishnan1}. In both cases, since $\H$ is discrete, $\mathfrak{n} \cong\mathfrak{g}= \mathfrak{so}(3)$. We henceforth fix $Q$ to be the bi-invariant metric such that $\{E_{23}, E_{31}, E_{12}\}$ is a $Q$-orthonormal basis of $\mathfrak{so}(3)$, where $E_{ij}$ is the skew-symmetric $3\times 3$ matrix with a $+1$ in the $(i,j)$ entry, a $-1$ in the $(j,i)$ entry, and zeros in the remaining entries. The $1$-dimensional subspaces $\mathfrak{n}_k=\operatorname{span}(E_{ij})$, where $(i,j,k)$ is a cyclic permutation of $(1,2,3)$, are pairwise inequivalent for the adjoint action of $\H$ in the case of $S^4$, while $\mathfrak{n}_1$ and $\mathfrak{n}_2$ are equivalent in the case of $\mathds{C} P^2$, but neither is equivalent to $\mathfrak{n}_3$. 
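The normalization of $Q$ can be made concrete: for $\mathfrak{so}(3)$, the bi-invariant inner product making $\{E_{23},E_{31},E_{12}\}$ orthonormal is, presumably, the standard choice $Q(A,B)=-\tfrac{1}{2}\operatorname{tr}(AB)$ (our assumption, not spelled out in the text). The sketch below checks this with plain list-based matrices:

```python
def E(i, j):
    """Skew-symmetric matrix with +1 in entry (i,j) and -1 in entry (j,i), 1-indexed."""
    M = [[0.0] * 3 for _ in range(3)]
    M[i - 1][j - 1], M[j - 1][i - 1] = 1.0, -1.0
    return M

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(A):
    return sum(A[i][i] for i in range(3))

basis = [E(2, 3), E(3, 1), E(1, 2)]
for p, A in enumerate(basis):
    for q, B in enumerate(basis):
        Q_AB = -0.5 * tr(mul(A, B))           # candidate bi-invariant metric
        assert abs(Q_AB - (1.0 if p == q else 0.0)) < 1e-12
```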
Collectively denoting $S^4$ and $\mathds{C} P^2$ with the above cohomogeneity one structures by $M^4$, we consider \emph{diagonal} $\mathsf{G}$-invariant metrics $\mathrm g$ on $M^4$, i.e., metrics of the form \begin{equation}\label{eq:g-phi,psi,xi} \mathrm g =\mathrm{d} r^2+ \varphi(r)^2 \, Q|_{\mathfrak{n}_1} + \psi(r)^2 \, Q|_{\mathfrak{n}_2} + \xi(r)^2 \, Q|_{\mathfrak{n}_3}, \quad 0<r<L \end{equation} where $L=\frac\pi3$ or $L=\frac\pi4$ according to whether $M^4=S^4$ or $M^4=\mathds{C} P^2$, cf.~\eqref{eqn:coh1metric}. Note that every $\mathsf{G}$-invariant metric on $S^4$ is of the above form, i.e., $\mathfrak{n}_k$ are pairwise orthogonal, but $\mathfrak{n}_1$ and $\mathfrak{n}_2$ need not be orthogonal for all $\mathsf{G}$-invariant metrics on $\mathds{C} P^2$, i.e., the off-diagonal term $\mathrm g(E_{23},E_{31})$ need not vanish identically. The standard metric on $M^4$, with curvatures normalized as above, is obtained setting $\varphi,\psi,\xi$ to \begin{equation}\label{eq:can-phi,psi,xi} \begin{aligned} S^4&: &\varphi_1(r)&=2\sin r, &\psi_1(r)&= \sqrt{3}\cos r + \sin r, &\xi_1(r)&= \sqrt{3}\cos r - \sin r, \\ \mathds{C} P^2&: &\varphi_1(r)&=\sin r, &\psi_1(r)&= \cos r, &\xi_1(r)&= \cos 2r, \end{aligned} \end{equation} see \Cref{fig:canmetrics} below for their graphs. 
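As a quick consistency check on \eqref{eq:can-phi,psi,xi}, anticipating the smoothness conditions of the next subsection, one can confirm numerically that $\varphi_1$ vanishes at $r=0$ and $\xi_1$ vanishes at $r=L$ with the expected slopes ($\varphi_1'(0)=2$, $\xi_1'(L)=-2$ for $S^4$; $\varphi_1'(0)=1$, $\xi_1'(L)=-2$ for $\mathds{C} P^2$). An illustrative sketch, not part of the paper:

```python
import math

def d1(f, r, h=1e-6):   # central first difference
    return (f(r + h) - f(r - h)) / (2 * h)

# S^4, L = pi/3:  phi_1 = 2 sin r,  xi_1 = sqrt(3) cos r - sin r
phi = lambda r: 2.0 * math.sin(r)
xi = lambda r: math.sqrt(3.0) * math.cos(r) - math.sin(r)
L = math.pi / 3.0
assert abs(phi(0.0)) < 1e-12 and abs(d1(phi, 0.0) - 2.0) < 1e-8
assert abs(xi(L)) < 1e-12 and abs(d1(xi, L) + 2.0) < 1e-8

# CP^2, L = pi/4:  phi_1 = sin r,  xi_1 = cos 2r
phi = math.sin
xi = lambda r: math.cos(2.0 * r)
L = math.pi / 4.0
assert abs(phi(0.0)) < 1e-12 and abs(d1(phi, 0.0) - 1.0) < 1e-8
assert abs(xi(L)) < 1e-11 and abs(d1(xi, L) + 2.0) < 1e-8
```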
\subsection{Smoothness} The conditions required of $\varphi, \psi, \xi$ for the metric $\mathrm g$ in \eqref{eq:g-phi,psi,xi}, which is defined on the open dense set $M^4\setminus (B_-\cup B_+)\cong (0,L)\times \mathsf{G}/\H$, to extend smoothly to all of $M^4$ can be extracted from~\cite[Sec.~3.1, 3.2]{VZ18} as follows: \begin{proposition}\label{prop:smoothness} The $\mathsf{G}$-invariant metric \eqref{eq:g-phi,psi,xi} on $M^4\setminus (B_-\cup B_+)$ extends to a smooth metric on $M^4$ if and only if $\varphi,\psi,\xi$ extend smoothly to $r=0$ and $r=L$ satisfying the following, where $\phi_k$ are smooth, $z=L-r$, and $\varepsilon>0$ is small: \smallskip \begin{center} \begin{tabular}{|c|l|} \hline $M^4$ & \rule[-1.2ex]{0pt}{0pt} \rule{0pt}{2.2ex} Smoothness conditions on $\varphi,\psi,\xi$ \\ \hline \noalign{\medskip} \hline $\begin{array}{c} S^4 \\[5pt] L =\frac\pi3 \end{array}$ & $\begin{array}{l} {\rm (i)} \; \varphi(0) = 0,\, \rule{0pt}{2.5ex} \varphi'(0) = 2, \,\varphi^{(2n)}(0) = 0, \text{ for all } n \geq 1, \\[1pt] {\rm (ii)} \;\psi(r)^2 + \xi(r)^2 = \phi_1(r^2), \text{ for all } r\in [0,\varepsilon), \\[1pt] {\rm (iii)} \;\psi(r)^2 - \xi(r)^2 = r\,\phi_2(r^2), \text{ for all } r\in [0,\varepsilon), \\[3pt] {\rm (iv)} \;\xi(L) = 0, \, \xi'(L) = -2, \, \xi^{(2n)}(L) = 0, \text{ for all } n \geq 1, \\[1pt] {\rm (v)} \;\psi(z)^2 + \varphi(z)^2 = \phi_3(z^2), \text{ for all } z\in [0,\varepsilon), \\[1pt] {\rm (vi)} \;\psi(z)^2 - \varphi(z)^2 = z\,\phi_4(z^2), \text{ for all } z\in [0,\varepsilon). 
\end{array}$ \\ \hline \noalign{\smallskip} \hline $\begin{array}{c} \mathds{C} P^2 \\[5pt] L =\frac\pi4 \end{array}$ & $\begin{array}{l} {\rm (i)} \; \varphi(0) = 0,\, \rule{0pt}{2.5ex} \varphi'(0) = 1, \,\varphi^{(2n)}(0) = 0, \text{ for all } n \geq 1, \\[1pt] {\rm (ii)} \;\psi(r)^2 + \xi(r)^2 = \phi_5(r^2), \text{ for all } r\in [0,\varepsilon), \\[1pt] {\rm (iii)} \; \psi(r)^2 - \xi(r)^2 = r^2\,\phi_6(r^2), \text{ for all } r\in [0,\varepsilon), \\[3pt] {\rm (iv)} \; \xi(L) = 0, \, \xi'(L) = -2, \, \xi^{(2n)}(L) = 0, \text{ for all } n \geq 1, \\[1pt] {\rm (v)} \; \psi(z)^2 + \varphi(z)^2 = \phi_7(z^2), \text{ for all } z\in [0,\varepsilon), \\[1pt] {\rm (vi)} \; \psi(z)^2 - \varphi(z)^2 = z\,\phi_8(z^2), \text{ for all } z\in [0,\varepsilon). \end{array}$ \\ \hline \end{tabular} \end{center} \end{proposition} \begin{remark}\label{rem:extra-symm} Since the isotropy groups $\mathsf{K}_\pm$ for the $\mathsf{G}$-action on $S^4$ are conjugate, the smoothness conditions at the endpoints $r=0$ and $r=L$ can be obtained from one another by interchanging the roles of $\varphi$ and $\xi$. Furthermore, just as the round metric \eqref{eq:can-phi,psi,xi}, all metrics we consider on $S^4$ have the following additional symmetries: \begin{equation}\label{eqn:gS4sym} \varphi(r)=\xi\left(L-r\right), \quad \text{and}\quad \psi(r)=\psi\left(L-r\right), \quad \text{ for all } 0\leq r\leq L. \end{equation} However, metrics on $\mathds{C} P^2$ do not have any of these features or extra symmetries, as $\mathsf{K}_\pm$ are not conjugate, and, in general $\varphi(r) \neq \xi\left(L-r\right)$ and $\psi(r)\neq\psi\left(L-r\right)$. 
\end{remark} \begin{figure}[!ht] \begin{tikzpicture}[scale=0.75] \begin{axis}[ axis x line=middle, axis y line=middle, axis line style = thick, tick style = thick, ymax=2.1, xmax={pi/3+.1}, xtick={pi/6,pi/3}, xticklabels={$\frac\pi6$, $\frac\pi3$}, ytick={sqrt(3)}, yticklabels={$\sqrt3$}, ] \addplot[domain=0:pi/3, red, ultra thick] {2*sin(deg(x))}; \addplot[domain=0:pi/3, black!40!green, ultra thick] {sqrt(3)*cos(deg(x))+sin(deg(x))}; \addplot[domain=0:pi/3, blue, ultra thick] {sqrt(3)*cos(deg(x))-sin(deg(x))}; \draw (pi/4,1.55) node {$\color{red}\varphi_1$}; \draw (pi/4,2.05) node {$\color{black!40!green}\psi_1$}; \draw (pi/4,0.69) node {$\color{blue}\xi_1$}; \end{axis} \end{tikzpicture} \begin{tikzpicture}[scale=0.75] \begin{axis}[ axis x line=middle, axis y line=middle, axis line style = thick, tick style = thick, ymax=2.1, xmax={pi/4+.1}, xtick={pi/8,pi/6,pi/4}, xticklabels={$\frac\pi8$, $\frac{\pi}{6}$, $\frac\pi4$}, ytick={sqrt(2)/2, 1}, yticklabels={$\frac{\sqrt2}{2}$, $1$}, ] \addplot[domain=0:pi/4, red, ultra thick] {sin(deg(x))}; \addplot[domain=0:pi/4, black!40!green, ultra thick] {cos(deg(x))}; \addplot[domain=0:pi/4, blue, ultra thick] {cos(2*deg(x))}; \draw (pi/5,0.67) node {$\color{red}\varphi_1$}; \draw (pi/5,0.9) node {$\color{black!40!green}\psi_1$}; \draw (pi/5,0.42) node {$\color{blue}\xi_1$}; \end{axis} \end{tikzpicture} \vspace{-.2cm} \caption{Graphs of $\varphi_1,\psi_1,\xi_1$, for $S^4$ (left) and $\mathds{C} P^2$ (right).}\label{fig:canmetrics} \vspace{-.2cm} \end{figure} \subsection{Curvature} Computing the curvature operator of the $\mathsf{G}$-invariant metric \eqref{eq:g-phi,psi,xi} on $M^4$, with the formulae in \cite[Prop.~1.12]{gz-inventiones}, one obtains the following: \begin{proposition}\label{propn:curv_op} Let $\{ e_i \}_{i=0}^3$ be the $\mathrm g$-orthonormal frame along the geodesic $\gamma(r)$, $0<r<L$, given by $e_0=\gamma'(r)$, $e_1 = \frac{1}{\varphi(r)} E_{23}^*$, $e_2 = \frac{1}{\psi(r)} E_{31}^*$, $e_3 = 
\frac{1}{\xi(r)} E_{12}^*$, i.e., $e_0$ is the unit horizontal direction and $\{e_1,e_2,e_3\}$ are unit Killing vector fields. In the basis $\mathcal{B}:=\{ e_2\wedge e_3,\, e_0\wedge e_1,\, e_3\wedge e_1,\, e_0\wedge e_2,\, e_1\wedge e_2,\, e_0\wedge e_3 \},$ the curvature operator $R\colon\wedge^2 T_{\gamma(r)}M^4 \to\wedge^2 T_{\gamma(r)}M^4$, $0<r<L$, is block diagonal, that is, $R = \operatorname{diag}(R_1, R_2, R_3)$, with $2\times 2$ blocks given as follows: \begin{align*} R_1 &= \begin{bmatrix} \frac{\psi^4+\xi^4 -\varphi^4 + 2(\xi^2-\varphi^2)(\varphi^2-\psi^2)}{4\varphi^2 \psi^2\xi^2 } - \frac{\psi'\xi'}{\psi\xi } & % \; \frac{\psi'(\psi^2+\varphi^2-\xi^2)}{2 \varphi\psi^2\xi} + \frac{\xi'(\xi^2+\varphi^2-\psi^2)}{2\varphi\psi\xi^2} -\frac{\varphi'}{\psi\xi} \\[5pt] % \; \frac{\psi'(\psi^2+\varphi^2-\xi^2)}{2 \varphi\psi^2\xi} + \frac{\xi'(\xi^2+\varphi^2-\psi^2)}{2\varphi\psi\xi^2} -\frac{\varphi'}{\psi\xi} & % -\frac{\varphi''}{\varphi} \end{bmatrix},\\[3pt] % R_2 &= \begin{bmatrix} \frac{\varphi^4 + \xi^4-\psi^4 + 2(\varphi^2-\psi^2)(\psi^2-\xi^2)}{4\varphi^2 \psi^2\xi^2} - \frac{\varphi'\xi'}{\varphi \xi} & % \; \frac{\varphi'(\varphi^2+\psi^2-\xi^2)}{2\varphi^2\psi \xi } + \frac{\xi'(\xi^2+\psi^2-\varphi^2)}{2\varphi \psi\xi^2} -\frac{\psi'}{\varphi \xi} \\[5pt] % \; \frac{\varphi'(\varphi^2+\psi^2-\xi^2)}{2\varphi^2\psi \xi } + \frac{\xi'(\xi^2+\psi^2-\varphi^2)}{2\varphi \psi\xi^2} -\frac{\psi'}{\varphi \xi} & % -\frac{\psi''}{\psi} \end{bmatrix},\\[3pt] % R_3 &= \begin{bmatrix} \frac{ \varphi^4+\psi^4-\xi^4 + 2(\psi^2-\xi^2)(\xi^2-\varphi^2)}{4\varphi^2 \psi^2\xi^2 } - \frac{\varphi'\psi'}{\varphi\psi} & % \frac{\varphi'(\varphi^2+\xi^2-\psi^2)}{2\varphi^2\psi\xi}+ \frac{\psi'(\psi^2+\xi^2-\varphi^2)}{2\varphi\psi^2 \xi} -\frac{\xi'}{\varphi\psi} \\[5pt] % \frac{\varphi'(\varphi^2+\xi^2-\psi^2)}{2\varphi^2\psi\xi}+ \frac{\psi'(\psi^2+\xi^2-\varphi^2)}{2\varphi\psi^2 \xi} -\frac{\xi'}{\varphi\psi} & % -\frac{\xi''}{\xi} \end{bmatrix}. 
\end{align*} \end{proposition} The Hodge star operator $*$ is also clearly block diagonal in the basis $\mathcal{B}$, namely, \begin{equation}\label{eq:defH} * = \operatorname{diag}(H, H, H), \quad\text{where}\quad H= \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}. \end{equation} Thus, by the Finsler--Thorpe trick (\Cref{prop:FTtrick}), such $R=\operatorname{diag}(R_1,R_2,R_3)$ as in \Cref{propn:curv_op} has $\sec\geq0$, respectively $\sec>0$, if and only if there exists $\tau(r)$ such that $R_i+\tau\,H\succeq0$ for $i=1,2,3$, respectively $R_i+\tau\,H\succ0$ for $i=1,2,3$. \begin{remark} Diagonal entries in $R_i$ are sectional curvatures $\sec(e_i\wedge e_j)=R_{ijij}$ of coordinate planes, while off-diagonal entries are $R_{ijkl}$, with $i,j,k,l$ all distinct, so the Finsler--Thorpe trick states that $\sec\geq0$ and $\sec>0$ are respectively equivalent to the existence of $\tau$ such that all $R_{ijij}\,R_{klkl}- (R_{ijkl}+\tau)^2$ are $\geq0$ and $>0$. \end{remark} To illustrate the above, note that setting $\varphi,\psi,\xi$ to be the functions in \eqref{eq:can-phi,psi,xi} that correspond to the standard metrics in $S^4$ and $\mathds{C} P^2$, the blocks $R_i$ become constant: \begin{equation}\label{eq:can-R-blocks} \begin{aligned} S^4&: \qquad R_1=R_2=R_3=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \\[1pt] \mathds{C} P^2&: \qquad R_1 = R_2= \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}, \quad R_3 = \begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix}. \end{aligned} \end{equation} In particular, $\tau$ can be chosen constant, and $R+\tau\,*\succeq0$ if and only if $\tau\in[-1,1]$ for $S^4$, and $\tau\in[0,2]$ for $\mathds{C} P^2$, and $R+\tau\,*\succ0$ if and only if $\tau$ is in the open intervals. 
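As a quick consistency check (not needed for anything that follows), the $(2,2)$ entries in \eqref{eq:can-R-blocks} can be read off directly from \Cref{propn:curv_op} using the functions $\varphi_1,\psi_1,\xi_1$ graphed in \Cref{fig:canmetrics}:

```latex
% On $S^4$, each of $\varphi_1=2\sin r$, $\psi_1=\sqrt{3}\cos r+\sin r$,
% $\xi_1=\sqrt{3}\cos r-\sin r$ satisfies $u''=-u$, so
\begin{equation*}
-\frac{\varphi_1''}{\varphi_1}=-\frac{\psi_1''}{\psi_1}=-\frac{\xi_1''}{\xi_1}=1,
\end{equation*}
% while on $\mathds{C} P^2$, where $\varphi_1=\sin r$, $\psi_1=\cos r$, $\xi_1=\cos 2r$,
\begin{equation*}
-\frac{\varphi_1''}{\varphi_1}=-\frac{\psi_1''}{\psi_1}=1,
\qquad
-\frac{\xi_1''}{\xi_1}=\frac{4\cos 2r}{\cos 2r}=4,
\end{equation*}
% in agreement with the bottom-right entries of $R_1,R_2,R_3$ in \eqref{eq:can-R-blocks}.
```

The remaining entries can be verified in the same way, the off-diagonal ones reducing to elementary trigonometric identities.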
Similarly, the curvature of a Grove--Ziller metric with gluing interface $\partial D(B_\pm)$ isometric to $(\mathsf{G}/\H,b^2Q|_\mathfrak{n})$ and $L=r_{\mathrm{max}}^+ +r_{\mathrm{max}}^-$ can be computed by setting $\varphi,\psi,\xi$ instead to be the functions that make \eqref{eq:g-phi,psi,xi} match with \eqref{eq:gGZcoh1form}, namely (see \Cref{fig:GZmetrics}) \begin{align} \varphi(r) &= \begin{cases} \frac{f(r)\,b \,\sqrt{a}}{\sqrt{f(r)^2 + a \rho^2}}, & \text{ if } r\in \left(0,r_{\mathrm{max}}^- \right], \text{ where } \rho=\rho_-(b), \; f=f_-, \\ b, & \text{ if } r\in \left[r_{\mathrm{max}}^- , L\right) \end{cases}\nonumber \\[3pt] \psi(r)&\equiv b,\label{eqn:f1intermsoff} \\[3pt] \xi(r) &= \begin{cases} b, & \text{ if } r\in \left(0,r_{\mathrm{max}}^- \right],\\ \frac{f(L-r)\,b \,\sqrt{a}}{\sqrt{f(L-r)^2 + a \rho^2}},& \text{ if }r \in \left[r_{\mathrm{max}}^- , L\right), \text{ where } \rho=\rho_+(b), \; f=f_+, \end{cases}\nonumber \end{align} as $\mathfrak{m}=\mathfrak{n}_2\oplus \mathfrak{n}_3$ and $\mathfrak{p}=\mathfrak{n}_1$ for the disk bundle $D(B_-)$, but $\varphi$ and $\xi$ switch roles on the disk bundle $D(B_+)$, in which $\mathfrak{m}=\mathfrak{n}_1\oplus \mathfrak{n}_2$ and $\mathfrak{p}= \mathfrak{n}_3$. Recall that $f(r)\equiv \tfrac{\sqrt{a}\,\rho}{\sqrt{a-1}}$ for $r_0\leq r\leq r_{\mathrm{max}}$ on each of $D(B_\pm)$, so, in a neighborhood of the gluing interface $r=r_{\mathrm{max}}^-=L-r_{\mathrm{max}}^+$, the functions $\varphi=\psi=\xi$ are all constant and equal to $b$. In what follows, to simplify the exposition, we shall work with $\varphi,\psi,\xi$ only on the interval $\left(0,r_{\mathrm{max}}^-\right]$, which, at least on $S^4$, determines their values for all $0<r<L$ by setting $r_{\mathrm{max}}^+=r_{\mathrm{max}}^-$ and imposing the additional symmetries \eqref{eqn:gS4sym}, see \Cref{rem:extra-symm}.
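Indeed, on the plateau where $f\equiv \tfrac{\sqrt{a}\,\rho}{\sqrt{a-1}}$, the first branch of \eqref{eqn:f1intermsoff} collapses to the constant $b$, since $\tfrac{a\rho^2}{a-1}+a\rho^2=\tfrac{a^2\rho^2}{a-1}$:

```latex
\begin{equation*}
\varphi
= \frac{f\,b\,\sqrt{a}}{\sqrt{f^2 + a\rho^2}}
= \frac{\tfrac{\sqrt{a}\,\rho}{\sqrt{a-1}}\; b\,\sqrt{a}}{\sqrt{\tfrac{a\rho^2}{a-1} + a\rho^2}}
= \frac{\tfrac{a\,\rho\,b}{\sqrt{a-1}}}{\tfrac{a\,\rho}{\sqrt{a-1}}}
= b,
\end{equation*}
```

and the same computation applies to the second branch of $\xi$, with $f=f_+$ and $\rho=\rho_+(b)$.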
Straightforward computations using \Cref{propn:curv_op} imply the following: \begin{proposition} \label{propn:GZRR} The curvature operator of the Grove--Ziller metric \eqref{eq:gGZcoh1form}, i.e., the metric \eqref{eq:g-phi,psi,xi} with $\varphi,\psi,\xi$ as in \eqref{eqn:f1intermsoff}, for $r\in \left(0,r_{\mathrm{max}}^-\right]$, is $R=\operatorname{diag}(R_1,R_2,R_3)$,~with: \begin{equation*} R_1 = \begin{bmatrix} \frac{4b^2 - 3\varphi^2}{4b^4} & -\frac{\varphi'}{b^2} \\ -\frac{\varphi'}{b^2} & -\frac{\varphi''}{\varphi} \end{bmatrix}, \quad R_2 = R_3 = \begin{bmatrix} \frac{\varphi^2}{4b^4} & \frac{\varphi'}{2b^2} \\ \frac{\varphi'}{2b^2} & 0 \end{bmatrix}. \end{equation*} In particular, $R+\tau\,*\succeq0$ if and only if $\tau=-\frac{\varphi'}{2b^2}$. \end{proposition} Indeed, it is easy to verify that $\tau=-\frac{\varphi'}{2b^2}$ is the \emph{only} function $\tau(r)$, $r\in \left(0,r_{\mathrm{max}}^-\right]$, such that $R+\tau\,*\succeq0$. Namely, for such $r$, we have that $[R_i + \tau H]_{22}\equiv 0$ for both $i=2,3$, and hence $\det(R_2 + \tau H)=-\big(\frac{\varphi'}{2b^2}+\tau\big)^2$, which is $\geq0$ only when $\tau=-\frac{\varphi'}{2b^2}$. This pointwise uniqueness of $\tau$ corresponds to the presence of flat planes for the Grove--Ziller metric at every point $\gamma(r)$; e.g., $\sec(e_0\wedge e_2)\equiv 0$ for all $r$. It is interesting to observe how this (forced) choice of $\tau$ stemming from $R_i+\tau H\succeq0$, $i=2,3$, also satisfies $R_1+\tau H\succeq0$, i.e., how the expression for $\varphi$ in \eqref{eqn:f1intermsoff} ensures $\det(R_1 + \tau H) = \big( \frac{4b^2 - 3\varphi^2}{4b^4}\big)\big(-\frac{\varphi''}{\varphi}\big) - \big(\frac{3\varphi'}{2b^2}\big)^2\geq0$. \begin{lemma}\label{propn:eqnf1_sec>0} The function $\varphi(r)$ in the Grove--Ziller metric \eqref{eq:gGZcoh1form}, given by \eqref{eqn:f1intermsoff} for $r\in \left(0,r_{\mathrm{max}}^-\right]$, satisfies $(4b^2 - 3\varphi^2)(-\varphi'') - 9\varphi\varphi'^2 \geq 0$ for all $r\in \left(0,r_{\mathrm{max}}^-\right]$.
\end{lemma} \begin{proof} Solving for $f(r)$ in \eqref{eqn:f1intermsoff}, we find $f(r)= \frac{\varphi(r)\rho\sqrt{a}}{ \sqrt{ab^2 - \varphi(r)^2}}$; in particular, we have that $\varphi(r)<\sqrt{a}\,b$. Differentiating twice, it follows that: \begin{equation}\label{eq:f''intermsofphi} f'' = \frac{a^{3/2}b^2\rho}{(ab^2 - \varphi^2)^{5/2}} \big( \varphi''(a b^2 - \varphi^2) + 3\varphi\varphi'^2\big). \end{equation} Since $f''\leq 0$, we have $\varphi''(a b^2 - \varphi^2) + 3\varphi\varphi'^2\leq0$, so $(3a b^2 - 3\varphi^2)(-\varphi'') - 9\varphi\varphi'^2 \geq 0$, which implies the desired differential inequality since $a \leq \frac{4}{3}$. \end{proof} \section{Positively curved metrics near Grove--Ziller metrics}\label{sec:sec>0gs} In this section, we prove \Cref{mainthmB} in the Introduction, perturbing arbitrary Grove--Ziller metrics with $\sec\geq0$ on $S^4$ and $\mathds{C} P^2$ into cohomogeneity one metrics that we show have $\sec>0$ via the Finsler--Thorpe trick (\Cref{prop:FTtrick}). \subsection{Metric perturbation} Let $M^4$ be either $S^4$ or $\mathds{C} P^2$, with the cohomogeneity one action of $\mathsf{G}=\mathsf{SO}(3)$ from the previous section. Given a Grove--Ziller metric $\mathrm g_{\mathrm{GZ}}$ on $M^4$ with gluing interface isometric to $(\mathsf{G}/\H,b^2 Q|_\mathfrak{n})$, we have that the length of the circle(s) $\mathsf{K}_\pm/\H$ endowed with the metric $b^2\,Q|_{\mathfrak{p}_\pm}$ is $\rho_\pm(b) = b/|(\mathsf{K}_\pm)_0\cap \H|$, where $\mathsf{K}_0$ is the identity component of $\mathsf{K}$. From the group diagrams, we compute $|(\mathsf{K}_\pm)_0\cap \H|$ and obtain $\rho_\pm(b) = b/2$ if $M^4=S^4$, while $\rho_-(b) = b$ and $\rho_+(b) = b/2$ if $M^4=\mathds{C} P^2$. Thus, by \eqref{eq:lowerboundL}, the length $L$ of the orbit space $M/\mathsf{G}=[0,L]$ satisfies $L>\frac{\sqrt{a}}{\sqrt{a-1}} \,b$ if $M^4=S^4$, and $L> \frac{3\sqrt{a}}{2\sqrt{a-1}}\, b$ if $M^4=\mathds{C} P^2$. 
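To make the last step explicit: reading \eqref{eq:lowerboundL} as the per-disk estimate $r_{\mathrm{max}}^{\pm} > \tfrac{\sqrt{a}}{\sqrt{a-1}}\,\rho_\pm(b)$ (this per-disk form is our reading of that bound; only the resulting totals are used below), the stated lower bounds on $L$ follow by adding the contributions of the two disk bundles:

```latex
\begin{align*}
S^4 &: & L = r_{\mathrm{max}}^+ + r_{\mathrm{max}}^-
&> \frac{\sqrt{a}}{\sqrt{a-1}}\Big(\frac{b}{2}+\frac{b}{2}\Big)
= \frac{\sqrt{a}}{\sqrt{a-1}}\,b,\\
\mathds{C} P^2 &: & L = r_{\mathrm{max}}^+ + r_{\mathrm{max}}^-
&> \frac{\sqrt{a}}{\sqrt{a-1}}\Big(b+\frac{b}{2}\Big)
= \frac{3\sqrt{a}}{2\sqrt{a-1}}\,b.
\end{align*}
```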
Rescaling $(M^4,\mathrm g_{\mathrm{GZ}})$ so that $L=\frac\pi3$ if $M^4=S^4$, and $L = \frac\pi4$ if $M^4=\mathds{C} P^2$, we obtain a Grove--Ziller metric $\mathrm g_0$ homothetic to $\mathrm g_{\mathrm{GZ}}$, with standardized $L$, and whose parameters $a$ and $b$ satisfy \begin{equation}\label{eq:max-beta} \textstyle b< \frac\pi3\frac{\sqrt{a-1}}{\sqrt{a}} \;\text{ if }\; M^4=S^4, \quad\text{and}\quad b< \frac\pi6\frac{\sqrt{a-1}}{\sqrt{a}} \;\text{ if }\; M^4=\mathds{C} P^2. \end{equation} Using \eqref{eq:MVT-obstruction}, it follows that $r_{\mathrm{max}}^\pm = \frac\pi6$ for $M^4=S^4$, while $r_{\mathrm{max}}^- = \frac{\pi}{6}$ and $r_{\mathrm{max}}^+ = \frac{\pi}{12}$ for $M^4=\mathds{C} P^2$. Note that $\varphi_1(r)=\xi_1(r)$ precisely at these values of $r=r^-_{\mathrm{max}}$. Writing $\mathrm g_0$ in the form \eqref{eq:g-phi,psi,xi} we obtain the functions $\varphi,\psi,\xi$ in \eqref{eqn:f1intermsoff}, which we decorate with the subindex $_0$, i.e., $\varphi_0,\psi_0,\xi_0$. Similarly, let $\mathrm g_1$ be the standard metric on $M^4$, and use a subindex $_1$ to decorate the $\varphi,\psi,\xi$ given in \eqref{eq:can-phi,psi,xi}. Now, define: \begin{equation}\label{eq:def-phis-psis-xis} \begin{aligned} \varphi_s(r) &:= (1-s)\varphi_0(r) + s\,\varphi_1(r),\\ \psi_s(r) &:= (1-s)\psi_0(r) + s\,\psi_1(r), \qquad r\in \left[ 0, L \right],\\ \xi_s(r) &:= (1-s)\,\xi_0(r) + s\,\xi_1(r), \end{aligned} \end{equation} i.e., linearly interpolate from $\varphi_0,\psi_0,\xi_0$ to $\varphi_1,\psi_1,\xi_1$, and set $\mathrm g_s$, $s\in[0,1]$, to be \begin{equation}\label{eq:gs} \mathrm g_s := \mathrm{d} r^2 + \varphi_s(r)^2 \, Q|_{\mathfrak{n}_1} + \psi_s(r)^2 \, Q|_{\mathfrak{n}_2} + \xi_s(r)^2 \, Q|_{\mathfrak{n}_3}, \quad 0<r<L. \end{equation} The functions \eqref{eq:def-phis-psis-xis} can be visualized as affine homotopies between \Cref{fig:canmetrics,fig:GZmetrics}. 
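Incidentally, the coincidence $\varphi_1(r)=\xi_1(r)$ at $r=r_{\mathrm{max}}^-=\frac\pi6$ noted above is elementary to verify:

```latex
\begin{align*}
S^4 &: & 2\sin r = \sqrt{3}\cos r - \sin r
&\iff \tan r = \tfrac{1}{\sqrt{3}}
\iff r = \tfrac{\pi}{6},\\
\mathds{C} P^2 &: & \sin r = \cos 2r = 1-2\sin^2 r
&\iff (2\sin r - 1)(\sin r + 1) = 0
\iff r = \tfrac{\pi}{6},
\end{align*}
```

using in the second case that $\sin r>0$ on $\left(0,\frac\pi4\right)$.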
\begin{figure}[!ht] \begin{tikzpicture}[scale=0.75] \begin{axis}[ axis x line=middle, axis y line=middle, axis line style = thick, tick style = thick, ymax=2.2, xmax={pi+.1}, xtick={pi/2,pi}, xticklabels={$r_{\mathrm{max}}^-=\frac\pi6$, $L=\frac\pi3$}, ytick={0.8,1}, yticklabels={$b$,$\frac{\pi}{6}$}, ] \addplot[domain=(pi/2)/.85:pi, blue, ultra thick] {0.8*sin(deg((pi-x)/.85))}; \addplot[domain=0:pi, black!40!green, ultra thick] {0.8}; \addplot[domain=0:pi/2*.85, red, ultra thick] {0.8*sin(deg(x/.85))}; \addplot[domain=pi/2*.85:pi, red, ultra thick] {0.8}; \draw (3*pi/4,0.9) node {$\color{red}\varphi_0$}; \draw (pi/4,0.9) node {$\color{black!40!green}\psi_0$}; \draw (9*pi/10,0.45) node {$\color{blue}\xi_0$}; \end{axis} \end{tikzpicture} \begin{tikzpicture}[scale=0.75] \begin{axis}[ axis x line=middle, axis y line=middle, axis line style = thick, tick style = thick, ymax=2.2, xmax={3*pi/4+.1}, xtick={pi/2,3*pi/4}, xticklabels={$r_{\mathrm{max}}^-=\frac\pi6$, $L=\frac\pi4$}, ytick={0.3,0.5}, yticklabels={$b$,$\frac{\pi}{12}$}, ] \addplot[domain=(3*pi/4-pi/2)/.5:3*pi/4, blue, ultra thick] {0.3*sin(deg((3*pi/4-x)/.5))}; \addplot[domain=0:3*pi/4, black!40!green, ultra thick] {0.3}; \addplot[domain=0:pi/2*.85, red, ultra thick] {0.3*sin(deg(x/.85))}; \addplot[domain=pi/2*.85:3*pi/4, red, ultra thick] {.3}; \draw (3*pi/4-.2,0.4) node {$\color{red}\varphi_0$}; \draw (pi/5+.1,0.4) node {$\color{black!40!green}\psi_0$}; \draw (3*pi/4-.1,0.17) node {$\color{blue}\xi_0$}; \end{axis} \end{tikzpicture} \vspace{-.2cm} \caption{Graphs of $\varphi_0,\psi_0,\xi_0$, for $S^4$ (left) and $\mathds{C} P^2$ (right), cf.~\eqref{eqn:f1intermsoff}. The upper bound on $b$ and $r_{\mathrm{max}}^-=\frac\pi6$ follow from \eqref{eq:max-beta}. 
}\label{fig:GZmetrics} \vspace{-.1cm} \end{figure} It is a straightforward consequence of \Cref{prop:smoothness} that $\mathrm g_s$ are smooth metrics: \begin{lemma}\label{propn:gssmooth} The $\mathsf{G}$-invariant metrics $\mathrm g_s$, $s\in [0,1]$, defined on $M^4\setminus (B_-\cup B_+)$ by \eqref{eq:gs}, extend to smooth metrics on $M^4$, which we also denote by $\mathrm g_s$, $s\in [0,1]$. \end{lemma} \begin{proof} For simplicity, we focus on the case $M^4=S^4$, and the case $M^4=\mathds{C} P^2$ is left to the reader. The metrics $\mathrm g_s$ are clearly smooth away from the singular orbits, which correspond to $r=0$ and $r=L$. In light of \Cref{rem:extra-symm}, it suffices to check the smoothness conditions (i)--(iii) in \Cref{prop:smoothness}, i.e., those regarding $r=0$. First, since $\varphi_s^{(k)}(r)=(1-s)\varphi_0^{(k)}(r)+s\,\varphi_1^{(k)}(r)$ for all $k\geq0$, it is clear that $\varphi_s$ satisfies (i), as both $\varphi_0$ and $\varphi_1$ do. Second, if $r\in \left[0,r_{\mathrm{max}}^-\right]$, then $\psi_0(r)=\xi_0(r)=b$, cf.~\eqref{eqn:f1intermsoff}, so $\psi_s(r) = (1-s) b + s\,\psi_1(r)$ and $\xi_s(r) = (1-s)b + s\,\xi_1(r)$, and thus: \begin{align*} \psi_s(r)^2 + \xi_s(r)^2 &= 2(1-s)^2b^2 + 2s(1-s)b (\psi_1(r) + \xi_1(r)) + s^2 \left( \psi_1(r)^2 + \xi_1(r)^2 \right)\\ &= 2(1-s)^2b^2 + 2s(1-s)b\,\big(2\sqrt{3}\cos r\big) + s^2 \phi_1(r^2) = \widetilde{\phi_1}(r^2),\\ \psi_s(r)^2 - \xi_s(r)^2 &= 2s(1-s)b \, (\psi_1(r) - \xi_1(r)) + s^2 \left( \psi_1(r)^2 - \xi_1(r)^2 \right) \\ &= 2s(1-s)b\,(2\sin r) + s^2 \,r\,\phi_2(r^2) = r\,\widetilde{\phi_2}(r^2), \end{align*} where $\widetilde{\phi_k}$, $k=1,2$, are smooth functions, hence (ii) and (iii) are also satisfied.
\end{proof} Let us introduce functions $\Delta_\varphi,\Delta_\psi,\Delta_\xi$ of $r$ so that \eqref{eq:def-phis-psis-xis} can be written as \begin{equation}\label{eqn:Deltai} \varphi_s=\varphi_0+s\,\Delta_\varphi, \quad \psi_s=\psi_0+s\,\Delta_\psi, \quad \xi_s=\xi_0+s\,\Delta_\xi, \end{equation} i.e., $\Delta_\varphi(r) := \varphi_1(r)-\varphi_0(r)$, and similarly for $\Delta_\psi$ and $\Delta_\xi$. Note that each of these functions is smooth up to $r=0$ and $r=L$; in particular, bounded on $[0,L]$. In the sequel, we take the point of view \eqref{eqn:Deltai} that $\varphi_s,\psi_s,\xi_s$ are perturbations of $\varphi_0,\psi_0,\xi_0$. \subsection{Regularity of perturbation} By \eqref{eq:gs}, \Cref{propn:gssmooth}, and \Cref{propn:curv_op}, each entry of the curvature operator matrix $R_s$ of $\mathrm g_s$ along $\gamma(r)$ is a smooth function \begin{equation}\label{eq:generic-entry} \frac{ P(\varphi_s,\, \psi_s,\,\xi_s,\, \varphi'_s,\, \psi'_s,\, \xi'_s,\, \varphi''_s,\, \psi''_s, \,\xi''_s)}{\varphi_s^2\,\psi_s^2\,\xi_s^2}, \end{equation} where $P$ is a polynomial. Note that the $\mathrm g_s$-orthonormal basis in which the matrix $R_s$ is written varies smoothly with $s$. The singularities in \eqref{eq:generic-entry} at $r=0$ and $r=L$, due to $\varphi_s(0)=0$ and $\xi_s(L)=0$, are removable as a consequence of \Cref{propn:gssmooth}. This corresponds to the fact that $P$ also vanishes to the appropriate order because $\varphi_s,\psi_s,\xi_s$ satisfy the required smoothness conditions. Moreover, these smoothness conditions imply that \eqref{eq:generic-entry} equals \begin{equation}\label{eq:generic-entry2} \frac{ P(\varphi_s,\, \psi_s,\,\xi_s,\, \varphi'_s,\, \psi'_s,\, \xi'_s,\, \varphi''_s,\, \psi''_s, \,\xi''_s)}{\varphi_0^2\,\psi_0^2\,\xi_0^2}+Q(s,r)\,s, \end{equation} where $Q$ is continuous.
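To illustrate how the term $Q(s,r)\,s$ arises, consider the simplest entry, $[(R_s)_1]_{22}=-\varphi_s''/\varphi_s$; writing $\varphi_s=\varphi_0+s\,\Delta_\varphi$ as above, one has

```latex
\begin{equation*}
-\frac{\varphi_s''}{\varphi_s}
= -\frac{\varphi_0''}{\varphi_0}
+ \underbrace{\frac{\varphi_0''\,\Delta_\varphi - \varphi_0\,\Delta_\varphi''}{\varphi_0\,\varphi_s}}_{Q(s,r)}\,s,
\end{equation*}
```

where continuity of $Q$ up to $r=0$ uses that $\varphi_0$ and $\varphi_1$ have matching first-order behavior at $r=0$, as imposed by the smoothness conditions; letting $s\to0$ recovers the coefficient $\frac{\varphi_0''\varphi_1-\varphi_0\varphi_1''}{\varphi_0^2}$ appearing in the expansion of $\nu_1$ in \Cref{propn:Rs1-new}.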
Furthermore, by \eqref{eqn:Deltai}, the numerator above can be written as a polynomial $\widetilde P$ in the parameter $s$, the functions $\varphi_0,\psi_0,\xi_0$ and their first and second derivatives, and the functions $\Delta_\varphi,\Delta_\psi,\Delta_\xi$ and their first and second derivatives (indicated as $\dots$ below). Thus, \eqref{eq:generic-entry2} and hence \eqref{eq:generic-entry} are equal~to \begin{equation}\label{eq:generic-entry3} \frac{ \widetilde P(s,\, \varphi_0,\, \psi_0,\,\xi_0,\dots, \Delta_\varphi, \, \Delta_\psi,\, \Delta_\xi, \dots)}{\varphi_0^2\,\psi_0^2\,\xi_0^2}+Q(s,r)\,s. \end{equation} In particular, the dependence of the above on $s$ is polynomial in the first term, and smooth in the second. Expanding in $s$, we~have \begin{equation*} \widetilde P(s,\, \varphi_0, \psi_0,\xi_0,\dots, \Delta_\varphi, \Delta_\psi, \Delta_\xi, \dots)=\sum_{n=0}^d \widetilde{P}_n(\varphi_0, \psi_0,\xi_0,\dots, \Delta_\varphi, \Delta_\psi,\Delta_\xi, \dots)\,s^n, \end{equation*} where $\widetilde{P}_n$ are polynomials. Each coefficient in this sum is a smooth function of $r$ that vanishes at $r=0$ and $r=L$ in such a way that the limits of \eqref{eq:generic-entry3} as $r\searrow0$ and $r\nearrow L$ are both finite, so the corresponding coefficients in \eqref{eq:generic-entry3} extend to smooth (hence bounded) functions on $[0,L]$. Thus, $ \widetilde P(s,\, \varphi_0,\, \psi_0,\,\xi_0,\dots, \Delta_\varphi, \, \Delta_\psi,\, \Delta_\xi, \dots)/\varphi_0^2\,\psi_0^2\,\xi_0^2$ can be regarded as a polynomial in the variable $s$ whose coefficients are \emph{continuous} functions of $r$. We will implicitly (and repeatedly) use this fact in what follows. \begin{notation*} We use $O(s^n)$, respectively $O(r^m)$, to denote any functions of the form $s^n\, F(s,r)$, respectively $r^m\, F(s,r)$, where $F\colon [0,1]\times[0, L]\to\mathds{R}$ is \emph{bounded}.
\end{notation*} \subsection{\texorpdfstring{Positive curvature on $S^4$}{Positive curvature on the four-sphere}} To simplify the exposition, we shall focus primarily on the case $M^4=S^4$, in which $r_{\mathrm{max}}^\pm=\frac{L}{2}=\frac\pi6$ and it suffices to verify $\sec>0$ along the geodesic segment $\gamma(r)$ with $r\in \left[0, r_{\mathrm{max}}^- \right]$ due to the additional symmetries \eqref{eqn:gS4sym}, cf.~\Cref{rem:extra-symm}. Let $R_s=\operatorname{diag}\!\big( (R_s)_1,(R_s)_2,(R_s)_3 \big)$ be the curvature operator of $(S^4,\mathrm g_s)$ along $\gamma(r)$, given by \Cref{propn:curv_op}, where $\varphi,\psi,\xi$ are set to be $\varphi_s,\psi_s,\xi_s$ defined in \eqref{eq:def-phis-psis-xis}. As discussed above, $R_s$, $s\in [0,1]$, extends smoothly to $r=0$, and this extension (as well as its entries) will be denoted by the same symbol(s). Clearly, $R_0$ is the curvature operator of the Grove--Ziller metric $\mathrm g_0$, so $R_0+\tau_0\,*\succeq0$ for all $r\in \left[0,r_{\mathrm{max}}^-\right]$, where $\tau_0 := -\frac{\varphi_0'}{2b^2}$, see~\Cref{propn:GZRR}. The proof of \Cref{mainthmB} hinges on the following: \begin{claim}\label{claim-S4} If $s>0$ is sufficiently small, then $R_s+\tau_s\,*\succ0$ for all $r\in \left[0,r_{\mathrm{max}}^-\right]$,~with \begin{equation}\label{eqn:thorpe_s} \tau_s(r) := \tau_0(r)+ \frac{2(\sqrt{3}-b)}{b^3}\,s= -\frac{\varphi_0'(r)}{2b^2} + \frac{2(\sqrt{3}-b)}{b^3}\,s. \end{equation} \end{claim} We begin the journey towards \Cref{claim-S4} by observing that certain diagonal entries of $R_s$, which are sectional curvatures with respect to $\mathrm g_s$, are positive for all $s\in(0,1]$. \begin{proposition} \label{propn:sec>0_1234} For all $s\in(0,1]$ and $r\in \left[0, r_{\mathrm{max}}^-\right]$, the following hold: \begin{enumerate}[\rm (i)] \item $[(R_s)_i]_{22} = \sec_{\mathrm g_s}(e_0\wedge e_i) > 0$ for $1 \leq i \leq 3$; \item $[(R_s)_1]_{11} =\sec_{\mathrm g_s}(e_2 \wedge e_3)>0$.
\end{enumerate} \end{proposition} \begin{proof} As the round metric $\mathrm g_1$ has $\sec \equiv1$, we have $\varphi_1''(r) <0$, $\psi_1''(r) <0$, $\xi_1''(r) <0$ by \Cref{propn:curv_op}, cf.~\eqref{eq:can-phi,psi,xi} and \eqref{eq:can-R-blocks}. Thus $\varphi_s''(r) <0$, $\psi_s''(r) <0$, $\xi_s''(r) <0$ for all $s\in(0,1]$ and $r\in\left[0,r_{\mathrm{max}}^-\right]$, which implies, by \Cref{propn:curv_op}, that $\sec_{\mathrm g_s}(e_0\wedge e_i) > 0$, for $i=2,3$. In the case of $\sec_{\mathrm g_s}(e_0\wedge e_1)$, a further argument is required at $r=0$. Namely, using the smoothness conditions, we see that if $s\in (0,1]$, then \begin{equation*} \lim_{r\searrow 0} \sec_{\mathrm g_s}(e_0\wedge e_1)(r) = (1-s)\sec_{\mathrm g_0}(e_0\wedge e_1)(0) + s\sec_{\mathrm g_1}(e_0\wedge e_1)(0) >0, \end{equation*} where $(e_0\wedge e_1)(r)$ denotes the $2$-plane in $T_{\gamma(r)}S^4$ spanned by $e_0$ and $e_1$, which concludes the proof of (i). Regarding (ii), if $s\in (0,1]$ and $r \in \left(0, r_{\mathrm{max}}^-\right]$, then \begin{equation*} \varphi_s \leq \xi_s < \psi_s, \quad \xi_s' < 0, \quad \psi_s' \geq 0, \end{equation*} which implies that \begin{align*} \sec_{\mathrm g_s}(e_2\wedge e_3) &= \frac{\psi_s^4+\xi_s^4-\varphi_s^4 + 2(\xi_s^2-\varphi_s^2)(\varphi_s^2-\psi_s^2)}{4\,\varphi_s^2\, \psi_s^2\,\xi_s^2 } - \frac{\psi_s'\xi_s'}{\psi_s\xi_s} \\ &= \frac{(\xi_s^2 - \psi_s^2)^2}{4\,\varphi_s^2\,\psi_s^2 \,\xi_s^2} +\frac{2\psi_s^2 - \varphi_s^2}{4\,\psi_s^2 \,\xi_s^2} +\frac{\xi_s^2-\varphi_s^2}{2\,\psi_s^2 \,\xi_s^2} - \frac{\psi_s'\xi_s'}{\psi_s\xi_s} \geq\frac{b^2}{4\psi_s^2\,\xi_s^2}, \end{align*} since $2\psi_s^2-\varphi_s^2\geq \psi_s^2$ and $\psi_s\geq \psi_0\equiv b$ is uniformly bounded from below. 
\end{proof} Let us introduce functions $\eta_i,\mu_i,\nu_i$, $i=1,2,3$, such that the blocks of the curvature operator $R_s=\operatorname{diag}\!\big((R_s)_1,(R_s)_2,(R_s)_3\big)$ of $\mathrm g_s$ can be written as a perturbation \begin{equation}\label{eq:def-etai-mui-nui} \phantom{, \qquad i = 1,2,3,} (R_s)_i = (R_0)_i + \begin{bmatrix} \eta_i(s,r) & \mu_i(s,r)\\ \mu_i(s,r) & \nu_i(s,r) \end{bmatrix}, \qquad i = 1,2,3, \end{equation} of the blocks of the curvature operator $R_0=\operatorname{diag}\!\big((R_0)_1,(R_0)_2,(R_0)_3\big)$ of the Grove--Ziller metric $\mathrm g_0$. Recall that, for $r\in\left(0,r_{\mathrm{max}}^-\right]$, these blocks $(R_0)_i$ are computed in \Cref{propn:GZRR}, setting $\varphi=\varphi_0$, i.e., $\varphi$ is given by \eqref{eqn:f1intermsoff}. Clearly, each of $\eta_i,\mu_i,\nu_i$ is $O(s^n)$ for some $n\geq1$. \subsubsection{First block} We first analyze the block $i=1$ of the matrices $R_s$ and $R_s+\tau_s\,*$. \begin{proposition}\label{propn:Rs1-new} For all $r\in \left[0, r_{\mathrm{max}}^-\right]$, the entries of $(R_s)_1$ satisfy: \begin{align*} \eta_1(s, r) &= \left( \frac{3\varphi_0}{2b^5} (\varphi_0 (\Delta_\psi + \Delta_\xi)-b\Delta_\varphi ) - \frac{\Delta_\psi + \Delta_\xi}{b^3} \right)s + O(s^2),\\ \mu_1(s,r) &= \left( \frac{\varphi_0(\psi_1' + \xi_1')}{2b^3} - \frac{\Delta_\varphi'}{b^2} + \frac{\varphi_0'}{b^3}(\Delta_\psi + \Delta_\xi) \right)s + O(s^2),\\ \nu_1(s,r) &= \left( \frac{-\varphi_1''\varphi_0 + \varphi_0''\varphi_1}{\varphi_0^2} \right)s + O(s^2). \end{align*} \end{proposition} \begin{proof} First, let us consider $\eta_1$. 
From \Cref{propn:curv_op}, \begin{align*} [(R_s)_1]_{11} &= \frac{\psi_s^4 +\xi_s^4 - \varphi_s^4 + 2(\xi_s^2 - \varphi_s^2)(\varphi_s^2 - \psi_s^2)}{4\varphi_s^2\,\psi_s^2\,\xi_s^2} - \frac{\psi_s'\xi_s'}{\psi_s\xi_s}\\ &= \frac{(\xi_s^2 - \psi_s^2)^2}{4\varphi_s^2\,\psi_s^2\,\xi_s^2} - \frac{3\varphi_s^2}{4\psi_s^2\,\xi_s^2} + \frac{\xi_s^2 + \psi_s^2}{2\psi_s^2\,\xi_s^2} - \frac{\psi_s'\xi_s'}{\psi_s\xi_s}. \end{align*} We analyze these four terms separately using \eqref{eqn:Deltai}, as follows: \begin{align*} - \frac{3\varphi_s^2}{4\psi_s^2\,\xi_s^2} &= -\frac{3\varphi_0^2}{4b^4} - \frac{3\varphi_0}{2b^5} (b\Delta_\varphi - \varphi_0(\Delta_\psi +\Delta_\xi) )s + O(s^2),\\ \frac{\xi_s^2 + \psi_s^2}{2\psi_s^2\,\xi_s^2} &= \frac{1}{b^2} - \frac{\Delta_\psi + \Delta_\xi}{b^3}\,s + O(s^2),\quad \frac{(\xi_s^2 - \psi_s^2)^2}{4\varphi_s^2\,\psi_s^2\,\xi_s^2} = O(s^2), \quad - \frac{\psi_s'\xi_s'}{\psi_s\xi_s} = O(s^2). \end{align*} Therefore, adding the above together, we find: \begin{equation*} [(R_s)_1]_{11} = \frac{4b^2 - 3\varphi_0^2}{4b^4} +\left( \frac{3\varphi_0}{2b^5} (\varphi_0 (\Delta_\psi + \Delta_\xi)-b \Delta_\varphi ) - \frac{\Delta_\psi + \Delta_\xi}{b^3} \right)s + O(s^2), \end{equation*} which establishes the claimed expansion of $\eta_1(s,r)= [(R_s)_1]_{11} - \frac{4b^2 - 3\varphi_0^2}{4b^4}$, cf.~\eqref{eq:def-etai-mui-nui}. Next, consider $\mu_1$. From \Cref{propn:curv_op}, \begin{align*} [(R_s)_1]_{12} &= \frac{\xi_s'(\xi_s^2+\varphi_s^2-\psi_s^2)}{2\varphi_s\,\psi_s\,\xi_s^2} + \frac{\psi_s'(\varphi_s^2+\psi_s^2-\xi_s^2)}{2\varphi_s\,\psi_s^2\, \xi_s} - \frac{\varphi_s'}{\psi_s\,\xi_s}\\ &= \frac{(\xi_s^2 - \psi_s^2)(\xi_s'\psi_s - \psi_s'\xi_s)}{2\varphi_s\,\psi_s^2\,\xi_s^2} + \frac{\varphi_s(\xi_s'\psi_s + \psi_s'\xi_s)}{2\psi_s^2\,\xi_s^2} - \frac{\varphi_s'}{\psi_s\,\xi_s}.
\end{align*} We analyze these three terms separately, using \eqref{eqn:Deltai}, as before: \begin{multline*} \frac{(\xi_s^2 - \psi_s^2)(\xi_s'\psi_s - \psi_s'\xi_s)}{2\varphi_s\,\psi_s^2\,\xi_s^2} = O(s^2), \quad \frac{\varphi_s(\xi_s'\psi_s + \psi_s'\xi_s)}{2\psi_s^2\,\xi_s^2} = \frac{\varphi_0(\psi_1' + \xi_1')}{2b^3}\,s + O(s^2),\\ -\frac{\varphi_s'}{\psi_s\,\xi_s} = -\frac{\varphi_0'}{b^2} + \left( \frac{\varphi_0'(\Delta_\psi + \Delta_\xi)}{b^3}-\frac{\Delta_\varphi'}{b^2} \right)s + O(s^2). \end{multline*} Thus, adding the above, we have: \begin{equation*} [(R_s)_1]_{12} = -\frac{\varphi_0'}{b^2} + \left( \frac{\varphi_0(\psi_1' + \xi_1')}{2b^3} - \frac{\Delta_\varphi'}{b^2} + \frac{\varphi_0'(\Delta_\psi + \Delta_\xi)}{b^3} \right)s + O(s^2), \end{equation*} which establishes the claimed expansion of $\mu_1(s,r)=[(R_s)_1]_{12} +\frac{\varphi_0'}{b^2}$, cf.~\eqref{eq:def-etai-mui-nui}. Finally, let us consider $\nu_1$. From \Cref{propn:curv_op}, we have: \begin{equation*} [(R_s)_1]_{22} = -\frac{\varphi_s''}{\varphi_s} = -\frac{\varphi_0''}{\varphi_0} + \left( \frac{-\varphi_1''\varphi_0 + \varphi_0''\varphi_1}{\varphi_0^2} \right)s + O(s^2), \end{equation*} which establishes the claimed expansion of $\nu_1(s,r)= [(R_s)_1]_{22}+\frac{\varphi_0''}{\varphi_0}$, cf.~\eqref{eq:def-etai-mui-nui}. \end{proof} \begin{proposition}\label{prop:Rs1-positive} If $s>0$ is sufficiently small, then the matrix \begin{equation*} (R_s)_1 + \tau_s H = \begin{bmatrix} \frac{4b^2 - 3\varphi_0^2}{4b^4} +\eta_1(s,r) & -\frac{3\varphi_0'}{2b^2}+\mu_1(s,r)+\frac{2(\sqrt{3}-b)}{b^3}s \\ -\frac{3\varphi_0'}{2b^2}+\mu_1(s,r)+\frac{2(\sqrt{3}-b)}{b^3}s & -\frac{\varphi_0''}{\varphi_0}+\nu_1(s,r) \end{bmatrix} \end{equation*} is positive-definite for all $r\in\left[0,r_{\mathrm{max}}^-\right]$. \end{proposition} \begin{proof} The expression above for $(R_s)_1 + \tau_s H$ follows from \Cref{propn:GZRR}, as well as \eqref{eq:defH}, \eqref{eqn:thorpe_s}, and \eqref{eq:def-etai-mui-nui}.
From \Cref{propn:sec>0_1234} (ii), we know that $[(R_s)_1]_{11} > 0$ for all $s\in(0,1]$ and $r\in\left[0,r_{\mathrm{max}}^-\right]$. So, by Sylvester's criterion, it suffices to show that if $s>0$ is sufficiently small, then the following is positive: \begin{align*} \det\!\big((R_s)_1 + \tau_s H\big) &= \left(\frac{4b^2 - 3\varphi_0^2}{4b^4}\right) \left( -\frac{\varphi_0''}{\varphi_0} \right) - \left( \frac{3\varphi_0'}{2b^2} \right)^2 -\frac{\varphi_0''}{\varphi_0} \,\eta_1(s,r)\\ & \quad + \frac{4b^2 - 3\varphi_0^2}{4b^4} \, \nu_1(s,r) + \frac{3\varphi_0'}{b^2} \left(\mu_1(s,r)+\frac{2(\sqrt{3}-b)}{b^3}s\right)\\ & \quad + \eta_1(s,r)\, \nu_1(s,r) - \left(\mu_1(s,r)+\frac{2(\sqrt{3}-b)}{b^3}s\right)^2. \end{align*} By \Cref{propn:Rs1-new}, we have $\det\!\big((R_s)_1 + \tau_s H\big)= A(r) + B(r) \, s+ O(s^2)$, where \begin{align*} A(r) &:= \left(\frac{4b^2 - 3\varphi_0^2}{4b^4}\right) \left( -\frac{\varphi_0''}{\varphi_0} \right) - \left( \frac{3\varphi_0'}{2b^2} \right)^2,\\ B(r) &:= \left( -\frac{\varphi_0''}{\varphi_0} \right) \left( \frac{3\varphi_0}{2b^5} ( \varphi_0(\Delta_\psi + \Delta_\xi) -b \Delta_\varphi) - \frac{\Delta_\psi + \Delta_\xi}{b^3} \right)\\ & \quad + \left( \frac{4b^2 - 3\varphi_0^2}{4b^4} \right) \left( \frac{-\varphi_1''\varphi_0 + \varphi_0''\varphi_1}{\varphi_0^2} \right) \\ & \quad + \frac{3\varphi_0'}{b^2} \, \left( \frac{\varphi_0( \psi_1'+\xi_1')}{2b^3} - \frac{\Delta_\varphi'}{b^2} + \frac{\varphi_0'}{b^3}(\Delta_\psi + \Delta_\xi) + \frac{2(\sqrt{3}-b)}{b^3}\right). \end{align*} Note that $A(r)\geq0$ if $r\in\left[0,r_{\mathrm{max}}^-\right]$ by \Cref{propn:eqnf1_sec>0}, but $A(r) \equiv 0$ near $r = r_{\mathrm{max}}^-$. 
We claim that there exist $0<r_*<r_{\mathrm{max}}^-$ and constants $\alpha>0$ and $\beta>0$ such that \begin{equation}\label{eq:claim-a-b} \begin{aligned} &A(r)\geq \alpha>0 \mbox{ for all } 0\leq r\leq r_*, \\ &B(r)\geq \beta>0 \mbox{ for all } r_*\leq r\leq r_{\mathrm{max}}^-, \end{aligned} \end{equation} from which it clearly follows that $\det\!\big((R_s)_1 + \tau_s H\big) >0$ for all $r\in\left[0,r_{\mathrm{max}}^-\right]$ and sufficiently small $s>0$, as desired. Recall that there exists $0< r_0 < r_{\mathrm{max}}^-$ so that: \begin{itemize} \item for all $r\in (0,r_0)$, we have $\varphi_0'(r) > 0$ and $\varphi_0''(r)<0$, \item for all $r\in \left[r_0,r_{\mathrm{max}}^-\right]$, we have $\varphi_0(r) = b$, and hence $\varphi_0'(r) = \varphi_0''(r) =0$, \end{itemize} cf.~\eqref{eqn:f1intermsoff} and the Grove--Ziller construction (\Cref{subsec:GZ-metrics}). Moreover, for all $\varepsilon >0$, there exists $0<r_*<r_0$, such that for $r \in \left[r_*, r_{\mathrm{max}}^-\right]$, we have: \begin{equation} \label{eqn:f1epsilon} 0 \leq \varphi_0'(r) <\varepsilon, \quad 0 \leq -\varphi_0''(r) < \varepsilon, \;\;\mbox{and}\;\; b - \varepsilon < \varphi_0(r) \leq b, \end{equation} and these inequalities are strict on $[r_*, r_0)$. Thus, choosing $\varepsilon >0$ sufficiently small, we have that for all $r\in \left[r_*, r_{\mathrm{max}}^-\right]$, \begin{equation*} \frac{-\varphi_1''\varphi_0 + \varphi_0''\varphi_1}{\varphi_0^2} % = \frac{(2\sin r)(\varphi_0 + \varphi_0'')}{\varphi_0^2} \geq \frac{(2\sin r)(b - 2\varepsilon)}{b^2} > \frac{1}{4b}. 
\end{equation*} Furthermore, by continuity, the following are uniformly bounded on $r\in \left[r_*,r_{\mathrm{max}}^-\right]$, \begin{align*} \left| -\frac{1}{\varphi_0} \left( \frac{3\varphi_0 }{2b^5}(\varphi_0(\Delta_\psi +\Delta_\xi)-b \Delta_\varphi ) - \frac{\Delta_\psi + \Delta_\xi}{b^3} \right) \right| < C_1,\\ \left| \frac{3}{b^2} \left( \frac{\varphi_0(\psi_1'+\xi_1')}{2b^3} -\frac{\Delta_\varphi'}{b^2} + \frac{\varphi_0'(\Delta_\psi + \Delta_\xi)}{b^3} + \frac{2(\sqrt{3}-b)}{b^3} \right) \right| < C_2, \end{align*} where $C_1$ and $C_2$ are constants independent of $r_*$; and $\left( \frac{4b^2 - 3\varphi_0^2}{4b^4} \right) \geq \frac{1}{4b^2}$ by \eqref{eqn:f1epsilon}. Putting the above together, and making $\varepsilon>0$ even smaller if needed, we conclude \begin{equation*} B(r) > -\varepsilon \, C_1 + \tfrac{1}{16b^3} - \varepsilon \, C_2 = \tfrac{1}{16b^3} - \varepsilon\,(C_1 + C_2) > \beta > 0 \end{equation*} for all $r \in \left[r_*, r_{\mathrm{max}}^- \right]$, where, e.g., $\beta=\tfrac{1}{32b^3}$. Finally, in order to prove the inequality regarding $A(r)$ in \eqref{eq:claim-a-b}, recall there exists $c >0$ such that $\sec_{\mathrm g_{D^2}} \geq c>0$ for all $r\in [0, r_*]$, by \Cref{rem:psec0r1]}. From \eqref{eq:f''intermsofphi}, in the proof of \Cref{propn:eqnf1_sec>0}, we have that \begin{equation*} \sec_{\mathrm g_{D^2}} = -\frac{f''}{f} = \frac{ab^2}{(ab^2 - \varphi_0^2)^2}\, \frac{(-\varphi_0'')(ab^2 - \varphi_0^2) - 3\varphi_0\varphi_0'^2}{\varphi_0}, \end{equation*} from which it follows that \begin{equation*} \frac{3(ab^2 - \varphi_0^2)^2}{ab^2}\,\sec_{\mathrm g_{D^2}} = 3\left( -\frac{\varphi_0''}{\varphi_0} \right) (ab^2 - \varphi_0^2) - 9\varphi_0'^2 \leq \left( -\frac{\varphi_0''}{\varphi_0} \right)(4b^2 - 3\varphi_0^2) - 9\varphi_0'^2, \end{equation*} because $1<a\leq\frac43$. 
Therefore, as $\varphi_0(r)<\sqrt{a}\,b$ for all $r$, there exists $\alpha>0$ so that \begin{equation*} A(r) \geq \frac{3}{4} \, \frac{(ab^2 - \varphi_0^2)^2}{ab^2}\, \sec_{\mathrm g_{D^2}} \geq \frac{3}{4} \, \frac{(a b^2 - \varphi_0^2)^2}{ab^2}\, c > \alpha > 0, \;\; \mbox{ for all } r\in [0, r_*].\qedhere \end{equation*} \end{proof} \subsubsection{Remaining blocks} We now handle the remaining blocks $i=2,3$. \begin{proposition}\label{propn:Rs23-new} For all $r\in \left[0, r_{\mathrm{max}}^-\right]$, the entries of $(R_s)_i$, for $i=2,3$, satisfy: \begin{align*} \eta_i(s, r) &= \left( \tfrac{\sqrt{3}}{b} + O(r) \right)s + O(s^2), &\mu_i(s,r) &= \left(- \tfrac{2(\sqrt{3}-b)}{b^3} + O(r) \right)s + O(s^2),\\ & &\nu_i(s,r)&= \left( \tfrac{\sqrt{3}}{b} + O(r) \right)s + O(s^2). \end{align*} \end{proposition} \begin{proof} First, let us consider $\eta_2$. From \Cref{propn:curv_op}, \begin{equation*} [{(R_s)_2}]_{11} = \frac{\varphi_s^2}{4 \psi_s^2\,\xi_s^2} + \frac{\psi_s^2 - \xi_s^2}{2 \psi_s^2\,\xi_s^2} + \frac{\xi_s^4 + 2\xi_s^2 \psi_s^2 - 3\psi_s^4 - 4\varphi_s\psi_s^2\xi_s\varphi_s'\xi_s'}{4\varphi_s^2 \,\psi_s^2\,\xi_s^2}. \end{equation*} We analyze these three terms separately using \eqref{eqn:Deltai}. The first two satisfy \begin{equation*} \frac{\varphi_s^2}{4 \psi_s^2\,\xi_s^2 } = \frac{\varphi_0^2}{4b^4} + s\, O(r^2) + O(s^2), \quad \text{ and }\quad \frac{\psi_s^2 - \xi_s^2}{2 \psi_s^2\,\xi_s^2 }= s\, O(r) + O(s^2), \end{equation*} while the third satisfies \begin{align*} \frac{\xi_s^4 + 2 \psi_s^2\xi_s^2 - 3\psi_s^4 - 4\varphi_s\psi_s^2\xi_s\varphi_s'\xi_s'}{4\varphi_s^2\, \psi_s^2\,\xi_s^2} &=\frac{2(\Delta_\xi - \Delta_\psi) - \varphi_0\varphi_0'\Delta_\xi'}{b \varphi_0^2} \, s + O(s^2)\\ &= \left(\tfrac{\sqrt{3}}{b} + O(r)\right)\,s + O(s^2), \end{align*} since $\displaystyle\lim_{r\searrow 0} \tfrac{2(\Delta_\xi - \Delta_\psi) - \varphi_0\varphi_0'\Delta_\xi'}{\varphi_0^2} = \sqrt{3}$, by L'H\^{o}pital's rule and \Cref{prop:smoothness} (i). 
Altogether, the above yields $ [(R_s)_2]_{11} = \frac{\varphi_0^2}{4b^4} + \left( \frac{\sqrt{3}}{b} + O(r) \right)s + O(s^2)$, and hence establishes the claimed expansion of $\eta_2(s,r) = [(R_s)_2]_{11} -\tfrac{\varphi_0^2}{4b^4}$, cf.~\eqref{eq:def-etai-mui-nui}. Second, the proof that $\eta_3$ has the same expansion as $\eta_2$ is similar. Namely, \begin{align*} [(R_s)_3]_{11} &= \frac{\varphi_s^2}{4\psi_s^2\,\xi_s^2 } + \frac{(\psi_s^2 - \xi_s^2)(\psi_s^2 + 3\xi_s^2 - 2\varphi_s^2) - 4\varphi_s\psi_s\xi_s^2\varphi_s'\psi_s'}{4\varphi_s^2\, \psi_s^2\, \xi_s^2}, \end{align*} where the first term was already considered above, and the second term satisfies \begin{equation*} \frac{(\psi_s^2 - \xi_s^2)(\psi_s^2 + 3\xi_s^2 - 2\varphi_s^2) - 4\varphi_s\psi_s\xi_s^2\varphi_s'\psi_s'}{4\varphi_s^2\,\psi_s^2 \,\xi_s^2 }\\ =\left( \frac{\sqrt{3}}{b} + O(r) \right)s + O(s^2), \end{equation*} by similar considerations involving L'H\^{o}pital's rule and \Cref{prop:smoothness} (i). Thus, $\eta_3(s,r) = [(R_s)_3]_{11} - \tfrac{\varphi_0^2}{4b^4} = \left( \frac{\sqrt{3}}{b} + O(r) \right)s + O(s^2)$, cf.~\eqref{eq:def-etai-mui-nui}. Next, consider $\mu_2$. From \Cref{propn:curv_op}, \begin{equation*} [(R_s)_2]_{12} = \frac{\varphi_s'}{2\psi_s\,\xi_s} + \frac{\varphi_s'\xi_s(\psi_s^2 - \xi_s^2) + \varphi_s\xi_s'(\xi_s^2+\psi_s^2-\varphi_s^2) - 2\varphi_s\psi_s\xi_s\psi_s'}{2\varphi_s^2\,\psi_s\,\xi_s^2} \end{equation*} The first term above satisfies \begin{equation*} \frac{\varphi_s'}{2\xi_s\psi_s} = \frac{\varphi_0'}{2b^2} +\left( O(r^2) - \frac{2(\sqrt{3}-b)}{b^3} \right)s + O(s^2), \end{equation*} while the second satisfies \begin{equation*} \frac{\varphi_s'\xi_s(\psi_s^2 - \xi_s^2) + \varphi_s\xi_s'(\xi_s^2+\psi_s^2-\varphi_s^2) - 2\varphi_s\psi_s\xi_s\psi_s'}{2\varphi_s^2\,\psi_s\,\xi_s^2} = s\, O(r) + O(s^2). 
\end{equation*} So, $\mu_2(s,r) = [(R_s)_2]_{12} -\tfrac{\varphi_0'}{2b^2}=\left(- \frac{2(\sqrt{3}-b)}{b^3} + O(r) \right)s + O(s^2)$, cf.~\eqref{eq:def-etai-mui-nui}. The proof that $\mu_3$ has the same expansion as $\mu_2$ is analogous, and left to the reader. Finally, let us consider $\nu_2$ and $\nu_3$. From \Cref{propn:curv_op} and \eqref{eq:def-etai-mui-nui}, we have \begin{equation*} \nu_2(s,r) =[(R_s)_2]_{22} =-\tfrac{\psi_s''}{\psi_s}\quad\text{ and } \quad \nu_3(s,r) =[(R_s)_3]_{22} = -\tfrac{\xi_s''}{\xi_s}. \end{equation*} By \eqref{eqn:Deltai}, we have $\psi_s'' = \Delta_\psi'' \, s= \psi_1'' \,s$ and $\xi_s'' = \Delta_\xi'' \,s = \xi_1'' \,s$, so \begin{equation*} \nu_2(s,r) = \left( \tfrac{\sqrt{3}}{b} + O(r) \right)s + O(s^2), \; \text{ and }\; \nu_3(s,r) = \left( \tfrac{\sqrt{3}}{b} + O(r) \right)s + O(s^2).\qedhere \end{equation*} \end{proof} \begin{proposition}\label{prop:Rs23-positive} If $s>0$ is sufficiently small, then the matrices \begin{equation}\label{eq:modified-blocks-23} (R_s)_i + \tau_s H = \begin{bmatrix} \frac{\varphi_0^2}{4b^4} + \eta_i(s,r) & \mu_i(s,r)+\frac{2(\sqrt3-b)}{b^3}s\\ \mu_i(s,r)+\frac{2(\sqrt3-b)}{b^3}s & \nu_i(s,r) \end{bmatrix}, \quad i=2,3, \end{equation} are positive-definite for all $r\in\left[0, r_{\mathrm{max}}^-\right]$. \end{proposition} \begin{proof} The expression \eqref{eq:modified-blocks-23} for $(R_s)_i + \tau_s H$, $i=2,3$, follows from \Cref{propn:GZRR}, as well as \eqref{eq:defH}, \eqref{eqn:thorpe_s}, and \eqref{eq:def-etai-mui-nui}. First, consider the $(1,1)$-entry of these matrices: \begin{equation*} \phantom{\quad \text{ for }i=2,3.}[(R_s)_i]_{11} = \tfrac{\varphi_0^2}{4b^4} + \left( \tfrac{\sqrt{3}}{b} + O(r) \right)s + O(s^2), \quad \text{ for }i=2,3, \end{equation*} cf.~\Cref{propn:Rs23-new}. 
Since $\varphi_0(r)>0$ away from $r=0$, and the $O(s)$ part of the above is uniformly positive near $r=0$, it follows that $[(R_s)_i]_{11}>0$ for all $r\in\left[0, r_{\mathrm{max}}^-\right]$ and $i=2,3$, provided $s>0$ is sufficiently small. Second, let us analyze the determinant of \eqref{eq:modified-blocks-23}. By \Cref{propn:Rs23-new}, \begin{align*} \eta_i(s,r) \nu_i(s,r) &= \left( \tfrac{3}{b^2} + O(r) \right)s^2 + O(s^3),\\ \mu_i(s,r)+\tfrac{2(\sqrt3-b)}{b^3}s &= s\, O(r) + O(s^2). \end{align*} Thus, using that $\nu_i(s,r)=[(R_s)_i]_{22}$, for $i=2,3$, we have: \begin{align*} \det\!\big( (R_s)_i + \tau_s H\big) &= \nu_i(s,r)\, \tfrac{\varphi_0^2}{4b^4} + \left( \tfrac{3}{b^2} + O(r) \right)s^2 + O(s^3)\\ &= [(R_s)_i]_{22} \, \tfrac{\varphi_0^2}{4b^4} + \left( \tfrac{3}{b^2} + O(r) \right)s^2 + O(s^3). \end{align*} By \Cref{propn:sec>0_1234} (i), the $O(s)$ part of the above is positive for $r\in\left(0,r_{\mathrm{max}}^- \right]$, but vanishes at $r= 0$, as $\varphi_0(0)=0$. Since the $O(s^2)$ part has a positive limit as $r\searrow 0$, we have that $\det\!\big( (R_s)_i+ \tau_s H\big) >0$ for all $r\in \left[0, r_{\mathrm{max}}^-\right]$ and $i=2,3$, if $s>0$ is sufficiently small. Positive-definiteness now follows from Sylvester's criterion. \end{proof} The above \Cref{prop:Rs1-positive,prop:Rs23-positive} imply \Cref{claim-S4}, since $R_s+\tau_s\,*$ is block diagonal with blocks $(R_s)_i+\tau_s H$, $i=1,2,3$, see \Cref{propn:curv_op} and \eqref{eq:defH}. In turn, \Cref{claim-S4} and the Finsler--Thorpe trick (\Cref{prop:FTtrick}) imply that $\sec_{\mathrm g_s}>0$ for sufficiently small $s>0$. This proves \Cref{mainthmB} for $M^4=S^4$; since, if the original Grove--Ziller metric $\mathrm g_{\mathrm{GZ}}$ was rescaled as $\mathrm g_0=\lambda^2\,\mathrm g_{\mathrm{GZ}}$ to standardize $L=\frac\pi3$, then $\lambda^{-2}\,\mathrm g_s$ has $\sec>0$ and is arbitrarily $C^\infty$-close to $\mathrm g_{\mathrm{GZ}}$ for $s>0$ sufficiently small. 
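For the record, the rescaling step at the end of the proof uses only the standard scaling behavior of sectional curvature: for any constant $\lambda>0$ and any $2$-plane $\sigma$,
\begin{equation*}
\sec_{\lambda^{-2}\mathrm g_s}(\sigma)=\lambda^{2}\,\sec_{\mathrm g_s}(\sigma),
\end{equation*}
so positivity of $\sec$ is unaffected by the rescaling, while $\lambda^{-2}\mathrm g_s-\mathrm g_{\mathrm{GZ}}=\lambda^{-2}(\mathrm g_s-\mathrm g_0)\to0$ in every $C^k$-norm as $s\searrow0$.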
\subsection{\texorpdfstring{Positive curvature on $\mathds{C} P^2$}{Positive curvature on the complex projective plane}} We now briefly discuss the proof of \Cref{mainthmB} for $M^4=\mathds{C} P^2$. Recall that, in this case, $L=\tfrac{\pi}{4}$, with $r_{\mathrm{max}}^-=\frac\pi6$ and $r_{\mathrm{max}}^+=\frac{\pi}{12}$. In contrast with the case of $S^4$, for $M^4=\mathds{C} P^2$ the situation on the intervals $[0,r_{\mathrm{max}}^-]=\left[0, \tfrac{\pi}{6}\right]$ and $[r_{\mathrm{max}}^-,L]=\left[\tfrac{\pi}{6},\tfrac{\pi}{4}\right]$ must be analyzed separately, cf.~\Cref{rem:extra-symm}. Denoting by $R_0$ the curvature operator of the Grove--Ziller metric $\mathrm g_0$ on $\mathds{C} P^2$, the function $\tau_0\colon [0,L]\to\mathds{R}$ so that $R_0+\tau_0\,*\succeq0$ for all $r\in [0,L]$ is given by \begin{equation*} \tau_0(r) = \begin{cases} -\frac{\varphi_0'(r)}{2b^2}, & \text{ if } r \in \left[0,r_{\mathrm{max}}^-\right], \\[5pt] -\frac{\xi_0'(r)}{2b^2}, & \text{ if } r \in \left[r_{\mathrm{max}}^-,L\right], \end{cases} \end{equation*} cf.~\Cref{propn:GZRR}. Note that $\varphi_0'=\xi_0'=0$ near $r=r_{\mathrm{max}}^-$. The proof of \Cref{mainthmB} follows in the same way as in the case $M^4=S^4$ above, replacing \Cref{claim-S4} with: \begin{claim}\label{claim-CP2} If $s>0$ is sufficiently small, then $R_s+\tau_s\,*\succ0$ for all $r\in \left[0,L\right]$,~where \begin{equation*} \tau_s(r) := \begin{cases} -\frac{\varphi_0'(r)}{2b^2} + \left(\frac{3}{2b} +\frac{1-b}{b^3}\right)s, & \text{ if } r \in \left[0,r_{\mathrm{max}}^-\right], \\[5pt] -\frac{\xi_0'(r)}{2b^2} + \tfrac{\sqrt{2}\, -\, 2b}{b^3} \, s, & \text{ if } r \in \left(r_{\mathrm{max}}^-,L\right]. \end{cases} \end{equation*} \end{claim} \begin{remark} Similarly to \eqref{eqn:thorpe_s} in \Cref{claim-S4}, the above function $\tau_s$ is obtained from $\tau_0$ by adding a locally constant multiple of $s$.
Unlike in the case of $M^4=S^4$, this $O(s)$ perturbation is not constant and, as a result, $\tau_s(r)$ is \emph{discontinuous} at $r=r_{\mathrm{max}}^-$ for all $s>0$. Nevertheless, the application of the Finsler--Thorpe trick (\Cref{prop:FTtrick}) is pointwise and no regularity is needed. A posteriori, a continuous function $\widetilde\tau_s(r)$ such that $R_s+\widetilde{\tau_s}\,*\succ0$ for all sufficiently small $s>0$ can be chosen, e.g., as the midpoint $\widetilde\tau_s(r)=\frac{1}{2}(\tau_{\mathrm{min}} +\tau_{\mathrm{max}})$ of $[\tau_{\mathrm{min}},\tau_{\mathrm{max}}]$ for each $r\in [0,L]$, see \Cref{rem:set-of-taus}. \end{remark} The proof of \Cref{claim-CP2} follows the same template as \Cref{claim-S4}, relying on expansions in $s$ of the functions $\eta_i,\mu_i,\nu_i$, cf.~\eqref{eq:def-etai-mui-nui}. The statement of \Cref{propn:Rs1-new}, regarding $i=1$ and $r\in [0, r_{\mathrm{max}}^-]$, holds \emph{tout court} for $\mathds{C} P^2$, since the smoothness conditions of $\varphi,\psi,\xi$ at $r=0$ are not used in the proof. The case of $i=3$ and $r\in [r_{\mathrm{max}}^-,L]$ is analogous. The replacement for \Cref{propn:Rs23-new} is the following: \begin{proposition}\label{propn:RsCP2} For $r\in \left[0, r_{\mathrm{max}}^-\right]$, the entries of $(R_s)_i$, $i=2,3$, satisfy: \begin{align*} \eta_2(s,r) &= \left(\tfrac{1}{b} + O(r)\right)s + O(s^2), & \mu_2(s,r) &= \left(-\tfrac{3}{2b} -\tfrac{1-b}{b^3} + O(r)\right)s + O(s^2),\\ & &\nu_2(s,r) &= \left(\tfrac{1}{b} + O(r)\right)s + O(s^2),\\ \eta_3(s,r) &= \left(\tfrac{4}{b} + O(r)\right)s + O(s^2), & \mu_3(s,r) &= \left( \tfrac{3}{2b} -\tfrac{1-b}{b^3}+ O(r)\right)s + O(s^2),\\ & &\nu_3(s,r) &= \left(\tfrac{4}{b} + O(r)\right)s + O(s^2).
\end{align*} For $r\in \left[r_{\mathrm{max}}^-,L\right]$, setting $z=L-r$, the entries of $(R_s)_i$, $i=1,2$, satisfy: \begin{align*} \eta_i(s,z) &= \left(\tfrac{1}{b\sqrt{2}} + O(z)\right)s + O(s^2), & \mu_i(s,z) &= \left( -\tfrac{\sqrt{2} - 2b}{b^3} + O(z)\right)s + O(s^2), \\ & &\nu_i(s,z) &= \left(\tfrac{1}{b\sqrt{2}} + O(z)\right)s + O(s^2). \end{align*} \end{proposition} The proof of \Cref{propn:RsCP2} is totally analogous to that of \Cref{propn:Rs23-new}; noting that, in terms of $z = L - r\in\left[0,r_{\mathrm{max}}^+\right]$, the functions $\varphi_1,\psi_1,\xi_1$ are: \begin{equation*} \textstyle \varphi_1(z) = \frac{1}{\sqrt{2}}\left(\cos z - \sin z\right), \quad \psi_1(z) = \frac{1}{\sqrt{2}}\left(\cos z + \sin z\right), \quad \xi_1(z) = \sin 2z. \end{equation*} Finally, similarly to \Cref{prop:Rs1-positive,prop:Rs23-positive}, it can be shown that $(R_s)_i+\tau_sH$, $i=1,2,3$, are positive-definite for all $r\in [0,L]$ and $s>0$ sufficiently small, which proves \Cref{claim-CP2} (and hence \Cref{mainthmB}) for $\mathds{C} P^2$. Details are left to the~reader. \section{Positive turns negative} \label{sec:pos-neg} In this section, we prove \Cref{mainthmA}, using the fact that Grove--Ziller metrics on $S^4$ and $\mathds{C} P^2$ immediately acquire negatively curved planes under Ricci flow~\cite{bettiol-krishnan1}, together with \Cref{mainthmB}, and continuous dependence on initial data~\cite{BGI20}. \begin{proof}[Proof of \Cref{mainthmA}] Let $M^4$ be either $S^4$ or $\mathds{C} P^2$, and consider the $1$-parameter family of metrics $\mathrm g_s$ on $M^4$, defined in \eqref{eq:gs}, such that $\mathrm g_0$ is a Grove--Ziller metric and $\mathrm g_1$ is either the round metric or the Fubini--Study metric, accordingly. 
From \Cref{propn:gssmooth}, the metrics $\mathrm g_s$ are smooth, and it is evident from \eqref{eq:def-phis-psis-xis} and \eqref{eq:gs} that, for all $k\geq0$ and $0<\alpha<1$, there exists a constant $\lambda_{k,\alpha}>0$ such that \begin{equation}\label{eq:gs-g0} \phantom{\quad \text{ for all } 0\leq s\leq 1.}\|\mathrm g_s-\mathrm g_0\|_{C^{k,\alpha}}\leq \lambda_{k,\alpha}\, s, \quad \text{ for all } 0\leq s\leq 1, \end{equation} where $\|\cdot\|_{C^{k,\alpha}}$ denotes the H\"older norm on sections of the bundle $E=\operatorname{Sym}^2 TM^4$ with respect to a fixed background metric. For $0\leq s\leq 1$, let $\mathrm g_s(t)$, $0\leq t < T(\mathrm g_s)$, be the maximal solution to Ricci flow starting at $\mathrm g_s(0)=\mathrm g_s$, where $0<T(\mathrm g_s)\leq +\infty$ denotes the maximal (smooth) existence time of the flow. For all $0\leq s\leq 1$ and $0\leq t<T(\mathrm g_s)$, we have that $\mathrm g_s(t)\in C^\infty(E)$, so $\mathrm g_s(t)$ is in the proper closed subspace $h^{k,\alpha}(E) \subset C^{k,\alpha}(E)$ for all $k\geq0$ and $0<\alpha<1$, in the notation of \cite{BGI20}. From the main theorem in \cite{bettiol-krishnan1}, there exist a $2$-plane $\sigma$ tangent to $M^4$ and $t_0>0$ such that $\sec_{\mathrm g_0}(\sigma)=0$ and $\sec_{\mathrm g_0(t)}(\sigma)<0$ for all $0<t<t_0$. Fix $0<t_*<t_0$, and let $\delta>0$ be such that $\sec_\mathrm g(\sigma)<0$ for all metrics $\mathrm g$ with $\|\mathrm g-\mathrm g_0(t_*)\|_{C^{2,\alpha}}<\delta$. By the continuous dependence of Ricci flow on initial data~\cite[Thm A]{BGI20}, there exist constants $\mathrm r>0$ and $C>0$, depending only on $t_*$ and $\mathrm g_0$, such that, if $\|\mathrm g_s-\mathrm g_0\|_{C^{4,\alpha}}\leq \mathrm r$, then $T(\mathrm g_s)\geq t_0$ and $\|\mathrm g_s(t) - \mathrm g_0(t)\|_{C^{2,\alpha}} \leq C \|\mathrm g_s - \mathrm g_0\|_{C^{4,\alpha}}$ for all $t\in [0, t_0]$. 
Together with \eqref{eq:gs-g0}, this yields that if $0\leq s\leq \mathrm r/\lambda_{4,\alpha}$, then \begin{equation*} \|\mathrm g_s(t) - \mathrm g_0(t)\|_{C^{2,\alpha}} \leq C\,\|\mathrm g_s - \mathrm g_0\|_{C^{4,\alpha}} \leq C\, \lambda_{4,\alpha}\, s, \quad \text{ for all } 0\leq t\leq t_0. \end{equation*} Thus, $\|\mathrm g_s(t_*)-\mathrm g_0(t_*)\|_{C^{2,\alpha}}<\delta$ and so $\sec_{\mathrm g_s(t_*)}(\sigma)<0$, for all $0\leq s<\delta/(C \,\lambda_{4,\alpha})$, while $\mathrm g_s=\mathrm g_s(0)$ has $\sec>0$ if $s>0$ is sufficiently small, by \Cref{mainthmB}. \end{proof} \begin{remark}\label{rem:max-princ} The curvature operators $R(t)\colon \wedge^2 TM\to\wedge^2 TM$ of metrics $\mathrm g(t)$ on $M^n$ evolving under Ricci flow satisfy the PDE $\frac{\partial}{\partial t}R=\Delta R+2Q(R)$, where $Q(R)$ depends quadratically on $R$. By Hamilton's Maximum Principle, if an $\mathsf O(n)$-invariant cone $\mathcal C\subset \operatorname{Sym}^2_{\mathrm b}(\wedge^2TM)$ is preserved by the ODE $\frac{\mathrm{d}}{\mathrm{d} t}R=2Q(R)$, then it is also preserved by the above PDE. It was previously known that the cone $\mathcal C_{\sec>0}$ of curvature operators with $\sec>0$ is \emph{not} preserved under the above ODE on $R$ in dimensions $n\geq4$, since it is easy to find $R_0\in\partial \mathcal C_{\sec>0}$ with $Q(R_0)$ pointing outside of $\mathcal C_{\sec>0}$. Nevertheless, this observation alone does not imply the existence of metrics realizing such a family of curvature operators on some closed $n$-manifold, thus evolving under Ricci flow and losing $\sec>0$, as the above metrics $\mathrm g_s(t)$ do. \end{remark}
\section{INTRODUCTION} Intermediate-mass black holes (IMBH; $M_{BH} \sim 10^{2}-10^{4}M_{\odot}$) accreting gas from their surroundings have been postulated to explain the engines behind ultraluminous X-ray sources recently discovered in nearby galaxies (see Colbert \& Miller 2005 for a review). IMBHs would fill the existing gap between stellar-mass black holes and the supermassive black holes found in active galactic nuclei. According to the tight relation between $M_{BH}$ and the central velocity dispersion $\sigma$ (Gebhardt et al.~2000; Ferrarese \& Merritt 2000; Tremaine et al.~2002), or between $M_{BH}$ and the mass of the bulge (Magorrian et al.~1998), natural places to look for these IMBHs are astrophysical systems less massive than normal galaxies, such as dense star clusters, globular clusters, and dwarf galaxies. If $M_{BH}$ is correlated with the total gravitational mass of the host galaxy (Ferrarese 2002; Baes et al.~2003), central BHs should be an essential element in dark matter-dominated objects, such as dwarf spheroidal (dSph) galaxies. Estimates of BH masses in these systems would be of great importance in completing the $M_{BH}-\sigma$ relation. Direct and indirect searches for IMBHs at the centers of globular clusters and small galaxies have been attempted (e.g., Gerssen et al.~2002, 2003; Valluri et al.~2005; Maccarone et al.~2005; Ghosh et al.~2006; Maccarone et al.~2007; Noyola et al.~2008). For instance, Noyola et al.~(2008) have reported a central density cusp and higher velocity dispersions in their central field of the globular cluster $\omega$ Centauri, which could be due to a central BH ($M_{BH}\sim 4\times 10^{4}M_{\odot}$). However, a new analysis by Anderson \& van der Marel (2009) and van der Marel \& Anderson (2009) does not confirm the arguments given by Noyola et al.~(2008), and provides an upper limit for the BH mass of $1.2\times 10^{4} M_{\odot}$. There is sparse evidence that BHs could be present in at least some dSphs.
The Ursa Minor (UMi) dSph galaxy has also been suspected to contain a BH of $\sim 10^{6}$ $M_{\odot}$ (Strobel \& Lake 1994; Demers et al.~1995). Maccarone et al.~(2005) discuss the possibility that the radio source found near the core of UMi is, in fact, a BH with a mass $\sim 10^{4}M_{\odot}$. In this Letter, we examine the dynamical effects of putative IMBHs in the cores of dSphs on the very integrity of cold, long-lived substructure such as that observed on the northeast side of the major axis of UMi. Although UMi has long been suspected of experiencing ongoing tidal disruption, regions with enhanced volume density and cold kinematics cannot be the result of tidal interactions (Kleyna et al.~2003, hereafter K03; Read et al.~2006; S\'anchez-Salcedo \& Lora 2007). This suggests that the secondary peak in UMi is a long-lived structure, surviving in phase-space because the underlying gravitational potential is close to harmonic (K03). S\'anchez-Salcedo \& Lora (2007) derived an upper limit on the mass and abundance of massive dark objects in the halo of UMi to avoid a quick destruction of the clump by the continuous gravitational encounters with these objects. In this work, we will assume that the dark halo consists of a smooth distribution of elementary particles and then study the disintegration of the clump, placing constraints on the mass of a possible central IMBH in UMi. \section{Initial conditions and $N-$body simulations} \subsection{Ursa Minor and its clump} \label{sec:genUMi} Ursa Minor, located at a galactocentric distance of $R_{gc}=76\pm 4$~kpc (Carrera et al.~2002; Bellazzini et al.~2002), is one of the most dark matter-dominated dSphs in the Local Group, with a central mass-to-light ratio $M/L\gtrsim 100 $ $M_{\odot}$/$L_\odot$ (e.g., Wilkinson et al.~2004). The measured central velocity dispersion is $17\pm 4$ km s$^{-1}$ (Mu\~noz et al.~2005) and the core radius of the stellar component along the semimajor axis is $\sim 300$~pc (Palma et al.~2003).
UMi reveals several morphological peculiarities: (1) the inner isodensity contours of the surface density of stars have a large ellipticity of $0.54$; (2) the highest density of stars is not found at the center of symmetry of the outer isodensity contours but instead is offset southwest of center; (3) the secondary density peak on the northeast of the major axis is kinematically cold. \begin{figure*} \plotone{FIG1new.eps} \caption{Snapshots at $t=2,$ $8$ and $13$ Gyr for $R_{1/2}=50$~pc and $R_{\rm core}=510$~pc without self-gravity (top panel) and with self-gravity (middle panel). For comparison, the snapshots for $R_{\rm core}=300$ pc and with self-gravity are also shown (bottom panel).} \label{fig:b510} \end{figure*} The secondary density peak has a $1\sigma$ radius of $\simeq 1.6'$ ($\sim 35$ pc at a distance of $76$ kpc) when fitted with a Gaussian profile. The stellar distribution that forms this density excess is elongated not along the major axis of the isodensity contours of the elliptized King model, but along a line at an intermediate angle between the major and minor axes of these contours (Palma et al.~2003). The bend in the isodensity contours indicates that such a clump is gravitationally unbound. Interestingly, K03 found that the velocities of stars within a $6'$ ($130$ pc) radius aperture on the clump are best fitted by a two-Gaussian population, one representing the underlying $8.8$ km s$^{-1}$ population, and the other with a line-of-sight velocity dispersion of $0.5$~km~s$^{-1}$. Although the dispersion of the cold population is ill determined, we can be certain that it is $<2.5$~km~s$^{-1}$ at a 95\% confidence level. The mean velocity of the cold population is equal to the systemic velocity of UMi, implying that either the orbit is radial and the clump is now at apocenter or the orbit lies in the plane of the sky (K03).
\subsection{Initial conditions} We consider the evolution of a clump inside a rigid halo of dark matter with a density law: \begin{equation} \rho(r)=\frac{\rho_{0}}{\left(1+\left[r/R_{\rm core}\right]^{2}\right)^{1/2}}, \label{eq:densitydm} \end{equation} where $\rho_{0}$ is the central density and $R_{\rm core}$ is the dark halo core radius. This profile was chosen in order to facilitate comparison with K03. We explore different values for $R_{\rm core}$. Following K03, once $R_{\rm core}$ is fixed, we rescale the central density to have a dark matter mass of $5\times 10^{7}$ $M_{\odot}$ inside $600$ pc, which is approximately the maximum extent of the stellar distribution observed in UMi. Therefore, in all our models, we have a total mass-to-light ratio of $\left( M_{tot}/L_{V}\right) \approx 90 M_{\odot}/L_{\odot}^{V}$ inside $600$~pc, for a visual luminosity $L_{V}=5.4\times 10^{5}$ $L_{\odot}$ (Palma et al.~2003). For a normal stellar population with $M/L_{V}=2 M_{\odot}/L_{\odot}^{V}$, the stellar mass within the core radius of the dwarf is $\sim 4.5\times 10^{5} M_{\odot}$, whereas the dark matter mass is $> 6.2\times 10^{6} M_{\odot}$. Therefore, the contribution to the potential of the baryonic mass was ignored. The initial density profile of the clump follows a Plummer mass distribution, \begin{equation} \rho(r)=\frac{3}{4 \pi} \frac{M_{c} R_{p}^{2}}{(r^{2}+R_{p}^{2})^{5/2}}, \end{equation} where $M_{c}=2\times 10^{4}$ $M_{\odot}$ is the total mass of the clump and $R_{p}$ is the Plummer radius. One should note that there is a simple relation for a Plummer model between $R_{p}$ and the half-mass radius: $R_{1/2}=1.3R_{p}$. We use $R_{1/2}$ values between $25$ and $50$ pc to initialize our simulations. The clump's self-gravity is included to have a realistic and complete description of its internal dynamics because, as we will see later, the tidal radius of the clump may be larger than $R_{1/2}$ when large values of $R_{\rm core}$ are used. 
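As a quick sanity check (a minimal numerical sketch, not the authors' code), one can verify the halo normalization and the Plummer half-mass relation quoted above; the closed-form enclosed masses below follow by integrating the two density profiles:

```python
import math

def halo_mass(r, rho0, r_core):
    """Mass enclosed within r for rho(r) = rho0 / sqrt(1 + (r/r_core)^2),
    using the closed form of 4*pi * integral of rho * r'^2 dr'."""
    u = r / r_core
    return 4.0 * math.pi * rho0 * r_core**3 \
        * 0.5 * (u * math.sqrt(1.0 + u * u) - math.asinh(u))

# Rescale the central density so that M(<600 pc) = 5e7 Msun, as in the text.
r_core = 510.0                                  # pc
rho0 = 5.0e7 / halo_mass(600.0, 1.0, r_core)    # Msun/pc^3 (mass is linear in rho0)

def plummer_mass_frac(r, r_p):
    """Fraction of a Plummer cluster's mass enclosed within radius r."""
    u = r / r_p
    return u**3 / (1.0 + u * u) ** 1.5

# Bisect for the half-mass radius of a Plummer sphere with R_p = 1;
# analytically R_1/2 = R_p / sqrt(2^(2/3) - 1) ~ 1.305 R_p.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if plummer_mass_frac(mid, 1.0) < 0.5:
        lo = mid
    else:
        hi = mid
r_half = 0.5 * (lo + hi)
```

For $R_{\rm core}=510$ pc this gives $\rho_0\approx 0.07$ $M_{\odot}$ pc$^{-3}$, and the bisection recovers the quoted relation $R_{1/2}\simeq 1.3R_{p}$.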
The clump is dropped at the apogalactocentric distance of $200$ pc from the UMi center with a certain tangential velocity $v_{T}$, which defines the eccentricity $e$ of the orbit. The orbit of the clump lies in the $x$-$y$ plane, which is also the plane of the sky. For simulations with a central BH, it is clear that the evolution of the clump depends on its orbital eccentricity. For instance, we found that for a radial orbit, the BH dissolves the group of stars in its first passage through the galactic center. In order to provide an upper limit on the mass of the BH, and since there is more phase space available for nearly circular orbits than for radial ones, we will take, in most of our simulations, a rather circular orbit with $e=0.5$, which corresponds to $v_{T}=5.7$ km s$^{-1}$ for $R_{\rm core}=510$ pc. However, to our surprise, the disintegration time was found to be insensitive to the initial eccentricity of the clump's orbit as long as $e<0.87$. Altogether, there are essentially three model parameters that we have explored in our simulations: $R_{\rm core}$, $R_{1/2}$ and $M_{BH}$. \subsection{$N$-body simulations} We developed an $N$-body code that assigns an individual timestep to each particle in the simulation. At each step, the equations of motion are integrated (with a second-order predictor--corrector method) only for the particle with the minimum associated time. This ``multi-timestep'' method reduces the typical CPU time of direct, particle-particle integrations ($\propto N^{2}$, with $N$ the number of particles), and allows integrations of systems of $\sim$1000 particles to be carried out with relatively short CPU times. The cluster density is so low that the internal relaxation timescale is very long ($\sim 11$ Gyr for an initial cluster with $R_{1/2}=50$ pc).
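The orbital setup described above can be illustrated with a simple test-particle integration (a sketch, not the authors' multi-timestep code); we assume $G=4.3009\times10^{-3}$ pc (km s$^{-1}$)$^{2}\,M_{\odot}^{-1}$, the halo normalization of the previous subsection, and a kick-drift-kick leapfrog:

```python
import math

G = 4.3009e-3                       # pc (km/s)^2 / Msun
R_CORE, M600 = 510.0, 5.0e7         # pc, Msun (normalization used in the text)

def g(u):
    # dimensionless enclosed-mass integral of rho = rho0 / sqrt(1 + u^2)
    return 0.5 * (u * math.sqrt(1.0 + u * u) - math.asinh(u))

NORM = M600 / g(600.0 / R_CORE)     # = 4*pi*rho0*R_core^3

def accel(x, y):
    """Acceleration of a test particle in the spherical cored halo."""
    r = math.hypot(x, y)
    a_over_r = -G * NORM * g(r / R_CORE) / r**3
    return a_over_r * x, a_over_r * y

# Clump dropped at apocenter (200, 0) pc with v_T = 5.7 km/s, as in the text.
x, y, vx, vy = 200.0, 0.0, 0.0, 5.7
dt = 0.01                           # time unit pc/(km/s), i.e. ~0.98 Myr
radii = []
for _ in range(60000):              # a few orbital periods
    ax, ay = accel(x, y)            # kick
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy      # drift
    ax, ay = accel(x, y)            # kick
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    radii.append(math.hypot(x, y))
```

Since $v_{T}$ is below the local circular speed ($\simeq 7$ km s$^{-1}$ at $200$ pc for this normalization), the particle oscillates between the $200$ pc apocenter and a smaller pericenter; the scheme conserves the angular momentum $xv_{y}-yv_{x}$ to machine precision.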
Therefore, although our code is suitable to include two- and three-body encounters, the cluster behaves as collisionless, and internal processes such as evaporation do not contribute to the dissolution of the cluster. For the same reason, all the particles have the same mass, and the presence of binaries was ignored. All the simulations presented in this paper used $600$ particles, each one having a mass of $33 M_{\odot}$. We chose a smoothing length of $0.7$ pc, which is approximately 1/10th the typical separation among cluster particles within $R_{1/2}$ at the beginning of the simulation. The convergence of the results was tested by comparing runs with different softening lengths and $N$. The effect of adopting a different smoothing radius, between $0.1$ and $1$ times the typical distance, was found to be insignificant. We ran the same simulation with different $N$ ($N=200, 400, 600$ and $1800$ particles), and convergence was found for $N\geq 400$. In order to validate our simulations, we checked that, when the cluster evolves in isolation, the Plummer configuration is stationary and the energy is conserved over $12$ Gyr. To be certain that the interaction with the BH is well resolved, we compared the change in kinetic energy of the cluster, when colliding with a particle of mass $10^{5} M_{\odot}$ moving at $200$ km s$^{-1}$, with the predictions of the impulse approximation (Binney \& Tremaine 2008), for different impact parameters, and found good agreement (differences less than $10\%$) between them. We were also able to reproduce the K03 simulations of the evolution of an unbound clump (ignoring self-gravity) in the gravitational potential created by the mass distribution given by Eq.~(\ref{eq:densitydm}).
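For reference, the benchmark used in this test is presumably the standard tidal (distant-encounter) impulse approximation (Spitzer 1958; Binney \& Tremaine 2008): a perturber of mass $M$ passing with relative speed $v$ at an impact parameter $b$ large compared to the cluster size increases the internal energy of a cluster of mass $M_{c}$ by
\begin{equation*}
\Delta E \simeq \frac{4\,G^{2}M^{2}M_{c}\,\langle r^{2}\rangle}{3\,v^{2}\,b^{4}},
\end{equation*}
where $\langle r^{2}\rangle$ is the mean-square radius of the cluster stars (evaluated for the truncated $N$-body realization, since $\langle r^{2}\rangle$ formally diverges for an ideal Plummer sphere).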
We confirm the K03 claim that even if the clump is initially very compact, with a $1\sigma$ radius of $12$ pc, cusped halos cannot explain the survival of the substructure for more than $1$ Gyr, while the substructure can persist for $\sim 12$ Gyr in halos with core radii $2$--$3$ times the clump's orbital radius. \begin{figure*} \plotone{FIG2.eps} \caption{Snapshots at $2,$ $8$ and $13$ Gyr for a simulation with $R_{1/2}=50$~pc and $R_{\rm core}=510$~pc (top panel), and the same simulation but with a BH (triangle) with mass $M_{BH}=3\times10^{4}$~$M_{\odot}$ (bottom panel).} \label{fig:std1} \end{figure*} \section{RESULTS} \subsection{Simulations without BH: the role of self-gravity} We have first examined the evolution of a clump with $R_{1/2}=50$ pc embedded in a galaxy with a large core of $510$ pc (see Fig.~\ref{fig:b510}). Note that when self-gravity is included, an initial $R_{1/2}$ value of $\sim 15$ pc, such as that used in K03, is no longer realistic because the clump remains too compact over its lifetime to explain its present appearance. With self-gravity, strong substructure continues to persist for a Hubble time, while in the non-self-gravitating case the substructure is completely erased in $\sim 10$ Gyr. The evolution of the same self-gravitating clump but in a galaxy with $R_{\rm core}=300$ pc is also shown in Fig. \ref{fig:b510}. As expected, when the size of the galaxy core is decreased, the clump does not survive as long, but a galaxy core of $300$ pc still does not cause the clump to disintegrate in $13$ Gyr. We confirm the K03 claim that cored halos allow the structures to remain uncorrupted for a Hubble time. Moreover, we find that, when including self-gravity, a galaxy core of $1.5$ times the clump's orbital radius is enough to preserve the integrity of the clump.
\subsection{Simulations with BH} If UMi hosts an IMBH, the density substructure is erased not only by the orbital phase mixing and the tidal field of the galaxy but also by the gravitational interaction with the hypothetical BH. Since the destructive effects depend on the mass of the BH, we can establish an upper limit on the mass of the BH in UMi by imposing that the BH must preserve the longevity of the clump for more than $10$~Gyr. We proceeded to add a massive particle of $3\times10^{4}~M_{\odot}$ at the center of the galaxy potential, emulating a BH. Figure~\ref{fig:std1} shows the evolution of a self-gravitating clump with an initial size of $R_{1/2}=50$ pc in a galaxy with $R_{\rm core}=510$~pc. Under the influence of the gravitational pull exerted by the clump, the BH, initially at rest, is displaced from the center of UMi. The azimuthal orbital lag angle of the BH in the $(x,y)$ plane is $\approx \pi/2$. As a consequence, the clump feels a gravitational drag and loses angular momentum which is transferred to the BH. Due to the angular momentum loss, the clump starts to spiral into the center. At $2$ Gyr, we can see that, in fact, the clump has smaller orbital radius than it would have in the absence of the BH (see Fig.~\ref{fig:std1}). The BH, on the other hand, spirals outwards to larger radii until it reaches its maximum galactocentric radius. At $t=4$ Gyr, the clump reaches the galactic center and this ``exchange'' of orbits starts all over again. Therefore, when looking at the BH of UMi, one should bear in mind that the BH does not necessarily settle into the center of the galaxy. It would be interesting to perform N-body simulations including the stellar background to elucidate if the two observed off-centered regions with the highest stellar density (e.g., Palma et al.~2003) are a consequence of the dynamical response to the BH. The minimum distance between the BH and the center of mass of the clump is $\sim 100$ pc in this model. 
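The off-center displacement of the BH can be reproduced qualitatively with a toy model (our sketch, not the paper's $N$-body setup): the clump and the BH are treated as two softened point masses that attract each other while orbiting in the rigid halo of Eq.~(\ref{eq:densitydm}); the softening length is an arbitrary choice:

```python
import math

G = 4.3009e-3                        # pc (km/s)^2 / Msun
R_CORE, M600 = 510.0, 5.0e7          # halo normalization used in the text
M_CLUMP, M_BH = 2.0e4, 3.0e4         # Msun
SOFT = 5.0                           # pc; softening of the mutual force (our choice)

def g(u):
    # dimensionless enclosed-mass integral of rho = rho0 / sqrt(1 + u^2)
    return 0.5 * (u * math.sqrt(1.0 + u * u) - math.asinh(u))

NORM = M600 / g(600.0 / R_CORE)      # = 4*pi*rho0*R_core^3

def halo_accel(p):
    r = math.hypot(p[0], p[1])
    if r == 0.0:
        return [0.0, 0.0]
    a = -G * NORM * g(r / R_CORE) / r**3
    return [a * p[0], a * p[1]]

def mutual_accel(p, q, m_q):
    """Softened acceleration of the particle at p due to mass m_q at q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d3 = (dx * dx + dy * dy + SOFT * SOFT) ** 1.5
    return [G * m_q * dx / d3, G * m_q * dy / d3]

def accels(pos_c, pos_b):
    ac = [h + m for h, m in zip(halo_accel(pos_c), mutual_accel(pos_c, pos_b, M_BH))]
    ab = [h + m for h, m in zip(halo_accel(pos_b), mutual_accel(pos_b, pos_c, M_CLUMP))]
    return ac, ab

# Clump at apocenter; BH initially at rest at the halo center.
pos_c, vel_c = [200.0, 0.0], [0.0, 5.7]
pos_b, vel_b = [0.0, 0.0], [0.0, 0.0]
dt, bh_rmax = 0.02, 0.0
for _ in range(50000):               # ~1 Gyr of evolution (kick-drift-kick)
    ac, ab = accels(pos_c, pos_b)
    for i in (0, 1):
        vel_c[i] += 0.5 * dt * ac[i]; vel_b[i] += 0.5 * dt * ab[i]
        pos_c[i] += dt * vel_c[i];    pos_b[i] += dt * vel_b[i]
    ac, ab = accels(pos_c, pos_b)
    for i in (0, 1):
        vel_c[i] += 0.5 * dt * ac[i]; vel_b[i] += 0.5 * dt * ab[i]
    bh_rmax = max(bh_rmax, math.hypot(pos_b[0], pos_b[1]))
```

Because the halo force on each body is central and the mutual force is internal, the total angular momentum about the halo center is conserved, while the BH is pulled off center by at least a few pc within the first Gyr, consistent with the behavior described above.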
Since the orbits of the clump and the BH never cross, the tidal disruption of the group of stars can be described as a secular process of stellar diffusion in phase-space. This slow relaxation process forms a debris stream of stripped stars that move on a galactic orbit very similar to that of the clump itself. In Fig.~\ref{fig:std1} we see that, under the influence of the BH, the clump is completely dissolved at $8$ Gyr. In order to quantify the clump's evolution, we calculated a map of the surface density of particles in the $(x,y)$ plane at any given time $t$. We sampled this two-dimensional map, searching for the parcel (of $20\times 20$ pc size) that contains the highest number of stars ${\mathcal{N}}_{\rm max}(t)$. This region is centered at the remnant of the clump. The number of stars ${\mathcal{N}}_{\rm max}$ as a function of time is shown in Fig.~\ref{fig:nmax}. As expected, ${\mathcal{N}}_{\rm max}$ decreases as the simulation evolves due to the spatial dilution of the clump caused by tidal heating. Once ${\mathcal{N}}_{\rm max}$ drops by a factor of $2$, which occurs in $\sim 6$ Gyr, the tidal disruption process is accelerated and ${\mathcal{N}}_{\rm max}$ dramatically declines at $\simeq 8$ Gyr (around orbit $55$), which can be taken as the disruption time, denoted by $t_{d}$. The disruption of the clump was confirmed by visual inspection of the simulations. Further simulations show that even if the eccentricity of the orbit is similar to that of the isodensity contours of the surface density of UMi background stars, the disruption time is essentially the same. We carried out simulations with $R_{1/2}=50$ pc, $R_{\rm core}=510$ pc and $e =0.5$, but with different $M_{BH}$. We found that the substructure disintegrates in $11$ Gyr for a $10^{4}$ $M_{\odot}$ BH, and in $2$ Gyr for $M_{BH}=10^{5}$ $M_{\odot}$. We also explored different combinations of $R_{\rm core}$ and $R_{1/2}$.
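The parcel statistic ${\mathcal{N}}_{\rm max}$ described above can be sketched as follows (we use a fixed grid of $20$ pc bins for simplicity; the authors' search may instead slide the parcel across the map):

```python
import numpy as np

def n_max(x, y, parcel=20.0, extent=600.0):
    """Highest number of stars found in any parcel (parcel x parcel pc^2)
    of a gridded surface-density map of the orbital (x, y) plane."""
    nbins = int(round(2.0 * extent / parcel))
    hist, _, _ = np.histogram2d(x, y, bins=nbins,
                                range=[[-extent, extent], [-extent, extent]])
    return int(hist.max())
```

Tracking this number through the snapshots yields curves like those in Fig.~\ref{fig:nmax}: a slow decline from tidal heating followed by a sharp drop at the disruption time $t_{d}$.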
A reduction of the core radius of the galaxy leads to a more stringent upper limit on the mass of the BH. However, one can increase the longevity of the structure if a smaller value for $R_{1/2}$ is adopted. For instance, we also find that $t_{d}\simeq 8$ Gyr for $R_{\rm core}=200$ pc, $R_{1/2}=25$ pc and $M_{BH}=3\times 10^{4}$ $M_{\odot}$. In the absence of any knowledge about the initial dynamical state of the BH, the most natural assumption is that the center of the galaxy was its birth site\footnote{The BH candidate in UMi reported by Maccarone et al.~(2005) is located at about $7$ pc from the center.}. For completeness, we have also carried out simulations for an off-centered BH. A compilation of the models is given in Table \ref{tab:parameters}. When the BH is on a radial orbit, $t_{d}\simeq 4$--$9$ Gyr, for $M_{BH}=3\times 10^{4} M_{\odot}$. Only under very special circumstances (when the BH is not on a radial orbit, the orbits are coplanar, and the azimuthal lag angle is $\pi$) is the disruption of the cluster less efficient, because the separation between the clump and the BH is larger on average over the simulation. We conclude that, if UMi's clumpiness is a primordial artifact and the BH originally lurked at the center, then the survival of the secondary peak imposes an upper limit on the mass of the putative BH of $M_{BH}=(2-3)\times10^{4}$ $M_{\odot}$. The extrapolation of the $M_{BH}-\sigma$ relation for elliptical galaxies (G\"ultekin et al.~2009) predicts a $1.0^{+5.0}_{-0.9}\times 10^{4}$ $M_{\odot}$ BH for UMi. Therefore, our constraint is still consistent with both the extrapolated value and that inferred by Maccarone et al.~(2005).
\begin{figure} \plotone{FIG3.eps} \caption{Smoothed curves of ${\mathcal{N}}_{max}$ as a function of time for runs \# 0 (black line), \# 3 (grey line), \# 5 (red line) and \# 6 (blue line).} \label{fig:nmax} \end{figure} \begin{table*} \centering \caption{Relevant parameters and destruction times.} \medskip \begin{tabular}{@{}ccccccccc@{}} \hline Run & $R_{\rm core}$ & $R_{1/2}$ & BH position$^{\ast}$ & BH velocity & $M_{BH}$ & $t_{d}$\\ \#& pc & pc & at $t=0$ [pc] & at $t=0$ [km s$^{-1}$] & $M_{\odot}$ & Gyr &\\ \hline 0& 510 & 50 & $--$ & $--$ & 0 & $>14$ &\\ 1& 510 & 50 &$0$ & $0$ & $5\times10^{3}$ & $>14$ &\\ 2& 510& 50 &$0$ & $0$ & $1\times10^{4}$ & 11 &\\ 3& 510 & 50 & $0$ & $0$ & $3\times10^{4}$ & 8.3 &\\ 4& 510 & 50 & $0$ & $0$ & $1\times10^{5}$ & 1.7 &\\ 5& 200 & 25 & $0$ &$0$ & $3\times10^{4}$ & 7.8 &\\ 6& 510 & 50 & (0,-50,0) & 0 & $3\times 10^{4}$ & 5.5 & \\ 7& 510 & 50 & (0,0,-50) & 0 & $3\times 10^{4}$ & 8.5 & \\ 8& 510 & 50 & (0,-50,0) & 0 & $5\times 10^{4}$ & 3.5 & \\ 9& 510 & 50 & (0,0,-50) & 0 & $5\times 10^{4}$ & 5.0 & \\ 10& 510 & 50 & (0,0,-50) & 0 & $1\times 10^{5}$ & 1.8 & \\ \hline \end{tabular} \vskip 0.3cm $^{\ast}$ The clump is initially at the position $(200,0,0)$. \label{tab:parameters} \end{table*} \section{Discussion and conclusions} \label{conclusions} \noindent Our self-gravitating simulations confirm the claim of K03 that the dark halo of UMi must have a core radius of $\sim 300$ pc, comparable to the core radius of the underlying stellar population, in order to preserve the integrity of this clump. While the firm dynamical detection of an IMBH in any dSph galaxy is challenging because it requires observations of the velocity dispersion of stars deep into the core, we have demonstrated that the very integrity of kinematically cold substructure in dSph galaxies may impose useful limits not only on the core of the galaxy but also on the mass of putative BHs. 
In the case of UMi, the maximum mass of a BH initially seated at the centre of the potential, or initially on a radial orbit, is $(2-3)\times 10^{4}$ $M_{\odot}$. When searching for direct detection of the possible IMBH in UMi (e.g., Maccarone et al.~2005), one should keep in mind that the BH may be offset from the galactic center because of the gravitational pull exerted by the clump. As pointed out by the referee, the contour map of the surface brightness of the nucleus of M31 also has a bright off-center source (Fig.~2 of Lauer et al.~1993), which was interpreted not as a separate stellar system but as the apoapsis region of an eccentric stellar disk orbiting a central massive BH (Tremaine 1995; Lauer et al.~1996). The timing problem of the clump may be circumvented if one assumes that the secondary peak of UMi is part of a ring close to apoapsis. Since the stars forming the secondary peak in UMi have a mean velocity equal to the systemic velocity of UMi, the ring should lie close to the plane of the sky and should be very eccentric. A serious difficulty with this scenario is that the apsides of the orbits need to be extremely well aligned. Our simulations have shown that this alignment is not possible if the ring is the tidal debris of a stellar cluster. We do not know a satisfactory mechanism to explain the formation of a very eccentric stellar ring at the scale of a galaxy with the required apsidal alignment. \acknowledgments The thoughtful comments and suggestions by an anonymous referee have greatly improved the quality of the paper. V.L., A.C.R.~\& A.E. acknowledge financial support from grant 61547 from CONACyT. F.J.S.S.~acknowledges financial support from CONACyT CB2006-60526 and PAPIIT IN114107 projects. V.L.~wishes to thank Stu group.
\section{Introduction} Magnetic fields appear to play a crucial role in the evolution of gas in disc galaxies, from the diffuse interstellar medium \citep[ISM; see e.g. the reviews by][]{Crutcher12,Beck13} to dense molecular clouds (MCs) and star forming cores \citep[see e.g. the review by][]{Li14}. They can be observed e.g. by polarized radiation emitted from dust grains. In particular recent observations with the BlastPol experiment \citep{Matthews14,Fissel16,Fissel19,Gandilo16,Santos17,Soler17b,Ashton18} and the Planck satellite \citep{PlanckXX,PlanckXXXII,PlanckXXXV} provide more and more dust polarization observations of MCs \citep[see also e.g.][]{Houde04,Dotson10,Li13,Pillai15}. This dense, molecular part of the ISM is observed to be highly filamentary \citep[see e.g. the review by][]{Andre14}. The impact of magnetic fields on the formation of these filaments and thus finally on the star formation process itself is subject to active investigations \citep[e.g.][]{Goodman92,Goldsmith08,Chapman11,Sugitani11,Li13,Palmeirim13,Malinen16,Panopoulou16, PlanckXXXII,PlanckXXXV,Soler16,Soler17b,Soler19,Jow18,Monsch18,Fissel19}. It has been proposed that the orientation of magnetic field lines with respect to the gas flow and the dense structures/filaments gives insight into whether (i) magnetic fields channel the gas flow along their direction as it would be the case for strong magnetic fields, or (ii) whether the field is dragged along with the flow as it would be the case for weak fields \citep[e.g.][]{Li14}. A general outcome of the aforementioned observations is that there appears to be a progressive change in the relative orientation of the magnetic field from being preferentially parallel to the density structures at low column densities to preferentially perpendicular at high column densities. We emphasise, however, that some recent observations challenge these findings \citep{PlanckXXXV,Soler17b,Soler19,Jow18,Fissel19}, a fact we will investigate in this work. 
Results from numerical simulations show that for strong magnetic fields dense structures are mostly perpendicular to the field direction \citep[e.g.][but see also \citealt{Hennebelle19} for a recent review]{Heitsch01,Ostriker01,Li04,Nakamura08,Collins11,Hennebelle13,Soler13,Chen15,Chen20,Li15,Seifried15,Chen16,Zamora17,Mocz18}. This can be attributed to the fact that in ideal magneto-hydrodynamics (MHD) the gas can move freely only along the magnetic field lines, whereas the flow perpendicular to them is hampered. The latter is the case when the (turbulent) motions of the gas are sub-Alfv\'enic. Consequently, both gravitating structures like star forming filaments and supersonic shock fronts will be mostly perpendicular to the magnetic field. \citet{Soler17a} developed a theory to describe the evolution of the angle between the magnetic field and the gas structures. They show that a perpendicular arrangement of magnetic fields and gas structures is a consequence of gravitational collapse or converging flows. \citet{Chen16} argue that the transition from a parallel to a perpendicular orientation happens once the flow becomes super-Alfv\'enic due to gravitational collapse. \citet{Soler17a}, however, find that a super-Alfv\'enic flow is necessary but not sufficient for a perpendicular orientation to occur. Investigating the orientation of magnetic fields in numerical simulations and comparing the results to actual observations is challenging for various technical reasons. First, self-consistent (MHD) simulations have to be performed which capture a wide dynamical range from the larger-scale galactic environment of the clouds down to sub-pc scales. Secondly, the simulations have to include an appropriate treatment of the thermal evolution of both gas and dust; the latter is required for the accurate modelling of dust alignment efficiencies \citep{Lazarian07,Andersson15}.
One of the major obstacles here is the lack of a coherent dust grain alignment theory combining the different alignment processes \citep[see e.g.][for an overview]{Reissl16}. Thirdly, full radiative transfer calculations are required to produce synthetic dust polarization maps. Most of the works presented to date on this topic lack at least one of the aforementioned requirements, and thus do not produce \textit{fully} self-consistent dust polarization maps \citep[e.g.][]{Heitsch01b,Ostriker01,Padoan01,Pelkonen07,Pelkonen09,Kataoka12,Soler13,PlanckXX,Chen16,King18,Vaisala18}. In this work we try to overcome these difficulties in the following way: \begin{itemize} \item We use two sets of MC simulations, namely colliding flow simulations \citep{Joshi19} and the SILCC-Zoom simulations \citep{Seifried17}, in order to study the relation between polarization observations and the physical (3D) cloud conditions. \item In order to create the polarization maps, we use the freely available dust polarization radiative transfer code POLARIS \citep{Reissl16,Reissl19}, which is able to calculate grain alignment efficiencies and the subsequent radiative transfer in a fully self-consistent manner. The code has already been successfully applied in a number of synthetic dust polarization studies from cloud to protostellar disc scales \citep{Reissl17,Seifried19,Valdivia19}, as well as for the calculation of synthetic synchrotron maps and Zeeman splitting \citep{Reissl18,Reissl19}. \item The results of the dust polarization radiative transfer simulations are analysed using the Projected Rayleigh Statistic \citep{Jow18}, compared to existing observations, and interpreted using the analytical description of \citet{Soler17a}. \end{itemize} The structure of the paper is as follows: First, we present the initial conditions and the various methods used for the MHD simulations and the subsequent radiative transfer with POLARIS (Section~\ref{sec:numericsoverview}).
We present our results concerning the colliding flow simulations in Section~\ref{sec:CF} and the SILCC-Zoom simulations in Section~\ref{sec:zoom} and discuss their agreement with the analytical theory of \citet{Soler17a}. In Section~\ref{sec:discussion} we discuss our results in a broader context, before we conclude in Section~\ref{sec:conclusion}. \section{Numerics, initial conditions and applied methods} \label{sec:numericsoverview} In the following we describe the initial conditions and methods used for the colliding flow (CF) simulations and the SILCC-Zoom simulations, as well as for the subsequent radiative transfer. As they have been described in detail in previous papers, we only briefly summarise the main points here. For more details on the CF simulations we refer to \citet{Joshi19}, and for the SILCC-Zoom simulations to \citet{Seifried17,Seifried19}. For the dust polarization radiative transfer we refer to \citet{Reissl16} and \citet{Seifried19}. \subsection{Numerics} \label{sec:numerics} Both the CF and SILCC-Zoom simulations are performed with the adaptive mesh refinement code FLASH 4.3 \citep{Fryxell00,Dubey08}. The CF simulations use a magneto-hydrodynamics solver which guarantees positive entropy and density \citep{Bouchut07,Waagan09}, while the SILCC-Zoom simulations use an entropy-stable magneto-hydrodynamics solver which guarantees that the smallest possible amount of dissipation is included \citep{Derigs16,Derigs18}. For both types of simulations, we model the chemical evolution of the ISM using a chemical network for H$^+$, H, H$_2$, C$^+$, CO, e$^-$, and O \citep[][but see also \citealt{Walch15} for the implementation in the simulations]{Nelson97,Glover07b,Glover10}. The simulations follow the thermal evolution of the gas, including the most relevant heating and cooling processes.
The shielding of the interstellar radiation field \citep[$G_0$ = 1.7 in units of the radiation field of \citet{Habing68} corresponding to the strength determined by][]{Draine78} is calculated according to the surrounding column densities of total gas, H$_2$, and CO via the {\sc TreeRay}/{\sc OpticalDepth} module \citep{Clark12b,Walch15,Wunsch18}. The cosmic ray ionisation rate for atomic hydrogen\footnote{Note that in \citet{Seifried17} we erroneously wrote \mbox{1.3$\times$10$^{-17}$ s$^{-1}$.}} is \mbox{3$\times$10$^{-17}$ s$^{-1}$}. We solve the Poisson equation for self-gravity with a tree based method \citep{Wunsch18}. In addition, for the SILCC-Zoom simulations, we include a background potential from the pre-existing stellar component in the galactic disc, modelled as an isothermal sheet with \mbox{$\Sigma_\mathrm{star}$ = 30 M$_{\sun}$ pc$^{-2}$} and a scale height of \mbox{100 pc} \citep{Walch15,Girichidis16}. \subsection{Colliding flow simulations} \label{sec:initial-CF} The CF simulation domain represents a 128~pc $\times$ 32~pc $\times$ 32~pc rectangular cuboid with inflow boundary conditions in the $x$-direction and periodic boundaries in the $y$- and $z$-direction. The whole domain is initially filled with a warm, uniform density medium with a density of \mbox{$\rho_0$ = 1.67 $\times$ 10$^{-24}$ g cm$^{-3}$} consisting of atomic hydrogen and C$^+$ and an equilibrium temperature of 5540~K. The gas on either side of the $x$ = 0 plane is moving towards the plane with a velocity of $\pm$13.6 km s$^{-1}$ such that the collision occurs immediately upon the start of the simulation. 
In order to allow turbulent motions to develop, the initial collision plane is not exactly the $x$ = 0 plane but rather represents an irregular interface with the collision taking place at \begin{equation} x = A \left[\rmn{cos}(2 - \tilde y \tilde z) \rmn{cos}(k_y \tilde y) + \rmn{cos}(0.5 - \tilde y \tilde z) \rmn{sin}(k_z \tilde z) \right] \, , \end{equation} with A = 1.6 pc, $k_y$ = 2, $k_z$ = 1, $\tilde y$ = $\pi \cdot y$/(32 pc) and $\tilde z$ = $\pi \cdot z$/(32~pc) \citep[see interface I5 in Fig.~3 of][]{Joshi19}. The magnetic field is initially homogeneous and parallel to the $x$-axis. In order to test the dependence of our results on the field strength, we perform 5 simulations with magnetic field strengths of $B_{x,0}$ = 1.25, 2.5, 5.0, 7.5, and 10~$\mu$G. Using the collision velocity \mbox{$v$ = 13.6 km s$^{-1}$}, this results in Alfv\'enic Mach numbers \begin{equation} M_\rmn{A} = \frac{v}{B/\sqrt{4 \pi \rho_0}} \label{eq:Ma} \end{equation} in the moderately sub- to moderately super-Alfv\'enic range (see Table~\ref{tab:overview}). The initial resolution of the simulations is 0.25~pc. During the course of the simulation, we allow for a higher resolution of up to 0.008 pc using a refinement criterion based on the local Jeans length, which must be resolved with at least 8 grid cells in one dimension. \begin{table} \caption{Overview of the simulations giving the run name, the initial magnetic field strength and Alfv\'enic Mach number, and the highest resolution reached. Furthermore, we list the reference time $t_0$ to which the times used throughout the paper refer. For the SILCC-Zoom simulations this corresponds to the time at which we start to zoom in. The second-last column gives the mass-to-flux ratio $\mu$ at \mbox{$t_0$ + 3 Myr} \mbox{(i.e.
$t_\rmn{evol}$ = 3 Myr)} and the last column the center of the zoom-in region.} \begin{tabular}{lcccccc} \hline run & B$_{x,0}$ & $M_\rmn{A}$ & d$x_\rmn{min}$ & $t_0$ & $\mu$ & center \\ & ($\mu$G) & & (pc) & (Myr) & & (pc) \\ \hline CF-B1.25 & 1.25 & 4.0 & 0.008 & 16.0 & 4.3 & --- \\ CF-B2.5 & 2.5 & 2.5 & 0.008 & 16.0 & 2.2 & --- \\ CF-B5 & 5.0 & 1.2 & 0.008 & 16.0 & 1.1 & --- \\ CF-B7.5 & 7.5 & 0.8 & 0.008 & 16.0 & 0.72 & --- \\ CF-B10 & 10 & 0.6 & 0.008 & 16.0 & 0.54 & --- \\ \hline SILCC-MC1 & 3.0 & 1.8 & 0.12 & 16.0 & 2.2 & (-84, 100, 0) \\ SILCC-MC2 & 3.0 & 1.8 & 0.12 & 16.0 & 2.8 & (126, -117, 0) \\ SILCC-MC3 & 3.0 & 1.8 & 0.12 & 16.0 & 2.1 & (-125, -104, 0) \\ SILCC-MC4 & 3.0 & 1.8 & 0.12 & 11.6 & 2.1 & (-97, 130, 0) \\ SILCC-MC5 & 3.0 & 1.8 & 0.12 & 11.6 & 2.3 & (3, 16, 0) \\ SILCC-MC6 & 3.0 & 1.8 & 0.12 & 16.0 & 1.4 & (62, 175, 0) \\ \hline \end{tabular} \label{tab:overview} \end{table} \subsection{SILCC-Zoom simulations} \label{sec:initial-zoom} In the following we briefly describe the SILCC-Zoom setup, for more details we refer to \citet{Seifried17}. The simulation domain represents a 500~pc $\times$ 500~pc $\times$ $\pm$5~kpc section of a galactic disc with an initial resolution of 3.9 pc. The initial gas distribution follows a Gaussian profile \begin{equation} \rho(z) = \rho_0 \times \textrm{exp}\left[ - \frac{1}{2} \left( \frac{z}{h_z} \right)^2 \right] \, , \label{eq:rhosilcc} \end{equation} with $h_z$ = 30 pc and $\rho_0 = 9 \times 10^{-24}$ g cm$^{-3}$. This results in a gas surface density of \mbox{$\Sigma_\rmn{gas}$ = 10 M$_{\sun}$ pc$^{-2}$}, similar to the solar neighbourhood. The initial magnetic field is given by \begin{equation} B_{x} = B_{x,0} \sqrt{\rho(z)/\rho_0} \; , B_y = 0 \; , B_z = 0 \, , \label{eq:bsilcc} \end{equation} with the magnetic field in the midplane being set to \mbox{$B_{x,0}$ = 3 $\mu$G} following recent observations \citep[e.g.][]{Beck13}. 
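The Alfv\'enic Mach numbers quoted in Table~\ref{tab:overview} follow from Eq.~(\ref{eq:Ma}). A minimal sketch in cgs units (the function name is our own; it approximately reproduces the tabulated CF values):

```python
import numpy as np

def mach_alfven(b_gauss, v_kms=13.6, rho=1.67e-24):
    """Alfvenic Mach number M_A = v / v_A with v_A = B / sqrt(4 pi rho);
    B in Gauss, v in km/s, rho in g/cm^3 (cgs)."""
    v_alfven = b_gauss / np.sqrt(4.0 * np.pi * rho)  # Alfven speed, cm/s
    return v_kms * 1.0e5 / v_alfven

# CF runs: 2.5, 5.0, 7.5 and 10 microgauss give M_A of roughly
# 2.5, 1.2, 0.8 and 0.6, in line with Table 1.
for b in (2.5e-6, 5.0e-6, 7.5e-6, 1.0e-5):
    print(b, mach_alfven(b))
```

For the SILCC-Zoom runs the same formula, evaluated with the midplane density and a velocity dispersion of \mbox{5 km s$^{-1}$}, yields the mildly super-Alfv\'enic values discussed below.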
Assuming a typical turbulent velocity dispersion of \mbox{5 km s$^{-1}$} \citep[figure~5 in][]{Seifried17}, we obtain slightly super-Alfv\'enic turbulent motions (Table~\ref{tab:overview}). From the start we inject supernovae (SNe) up to a certain time $t_0$ with a constant rate of 15 SNe Myr$^{-1}$. The time $t_0$ was chosen such that the SNe can generate sufficient turbulent motions in the simulation domain \citep[see Section~2 in][]{Seifried18}. We use mixed SN driving, which allows us to obtain a realistic distribution of the multiphase ISM as initial conditions for the subsequent zoom-in procedure \citep{Walch15,Girichidis16}. If the Sedov-Taylor radius is resolved with at least 4 cells, we inject 10$^{51}$~erg per SN in form of thermal energy. Otherwise, the gas inside the injection region is heated to $10^4$~K and momentum corresponding to the end of the pressure-driven snowplough phase is injected \citep[see][for details]{Gatto17}. At $t_0$ we stop the injection of further SNe and choose six different cuboid-like regions centered in the midplane of the disc. In order to follow the evolution of the clouds forming in these six regions -- henceforth denoted as MC1 to MC6 -- we then progressively increase the spatial resolution inside these regions from 3.9~pc to 0.12~pc \citep[see Table 2 in][]{Seifried17}, refining based on the Jeans length and variations in the gas density. Afterwards we keep the highest resolution of 0.12 pc in the zoom-in regions, and the lower resolution of 3.9 pc outside. For all six clouds the corresponding $t_0$ and the centers of the zoom-in regions are listed in Table~\ref{tab:overview}. \subsection{POLARIS and radiative transfer} \label{sec:polaris} The radiative transfer (RT) calculations are performed with the freely available RT code POLARIS\footnote{http://www1.astrophysik.uni-kiel.de/$\sim$polaris} \citep{Reissl16,Reissl19}. 
POLARIS is a 3D line and dust continuum Monte-Carlo code which allows one to solve the RT problem including dust polarization, and which we have already successfully applied before \citep{Reissl17,Seifried19}. We apply the radiative torque (RAT) alignment theory \citep{Dolginov76,Draine96,Draine97,Bethell07,Lazarian07,Hoang08,Andersson15}. In short, using RAT, POLARIS determines the Stokes parameters by calculating the size-dependent alignment of dust grains with the magnetic field. The dust temperature is provided by the MHD simulations. Dust grains smaller than the threshold size $a_\rmn{alig}$ are not aligned with the magnetic field, as the spinning-up due to the incident radiation is smaller than the randomizing effect of collisions with gas particles. The upper threshold, up to which grains are still aligned, is given by the Larmor limit, $a_l$, which is typically of the order of, or larger than, the maximum grain size assumed here (see below). We use the ISRF of \citet{Mathis77} scaled up by a factor of 1.47 such that its strength corresponds to that determined by \citet{Draine78}. As we focus on MCs in a Galactic environment, we apply a dust model consisting of 37.5\% graphite and 62.5\% amorphous silicate grains \citep{Mathis77}. The dust density is obtained from the gas density assuming a spatially constant dust-to-gas mass ratio of 1\%. We assume a grain size distribution of $n(a) \propto a^{-3.5}$ with the canonical values of the lower and upper cut-off radius of $a_{\rm min}$ = 5 nm and \mbox{$a_{\rm max}$ = 2 $\mu$m}, respectively, the latter accounting for moderate grain growth in the dense ISM. The shape of a single dust grain is fractal in nature. However, we apply an oblate shape with an aspect ratio of $s = 0.5$, a valid approximation for an averaged ensemble of dust grains \citep{Hildebrand95,Draine17}.
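Since only grains with $a \geq a_\rmn{alig}$ contribute to the polarized signal, a useful quantity is the dust mass fraction carried by aligned grains; for the $n(a) \propto a^{-3.5}$ distribution above this is analytic, as the mass integral $\int a^3 n(a)\,\rmn{d}a \propto \sqrt{a}$. A short sketch (the value of $a_\rmn{alig}$ is an illustrative assumption, not taken from our simulations):

```python
import numpy as np

def aligned_mass_fraction(a_alig, a_min=5.0e-3, a_max=2.0):
    """Dust mass fraction in grains with a >= a_alig for an MRN-type
    distribution n(a) ~ a^-3.5 (sizes in micron). The mass integral
    int a^3 n(a) da ~ sqrt(a) is analytic."""
    mass = lambda lo, hi: np.sqrt(hi) - np.sqrt(lo)
    return mass(a_alig, a_max) / mass(a_min, a_max)

# With an illustrative threshold a_alig = 0.05 micron, almost 90 per
# cent of the dust mass still resides in aligned grains.
f = aligned_mass_fraction(0.05)
```

This illustrates why, for the size range adopted here, the polarized emission traces the bulk of the dust mass unless $a_\rmn{alig}$ grows close to $a_{\rm max}$.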
We pre-calculate individual cross sections for 160 size bins and 104 wavelength bins \citep[see][for details]{Reissl17} with the scattering code DDSCAT \citep{Draine13}. Optical properties of the different materials are taken from tabulated data of \cite{Lee85} and \citet{Laor93}. Here, we focus on RT calculations for a wavelength of \mbox{$\lambda$ = 1.3 mm}, which is close to the wavelength of the CO(2-1) transition. We emphasise that choosing e.g. $\lambda$ around 850 $\mu$m as in the Planck observations would give qualitatively and quantitatively very similar results, as the polarization maps show only very little difference of the order of 1$^\circ$ for the various wavelengths \citep{Seifried19}. The spatial resolution is identical to the highest resolution of the corresponding MHD simulation, i.e. 0.008~pc for the CF runs and 0.12~pc for the SILCC-Zoom runs. In order to mimic the observation of MCs forming in isolation and to avoid confusion of the polarization signal along the line-of-sight, we perform the RT calculations for a cubic sub-region of each simulation domain. For the CF runs, we pick a \mbox{32~pc $\times$ 32~pc $\times$ 32~pc} region centered in the middle of the simulation domain covering the entire collision interface. For the SILCC-Zoom runs we take a \mbox{125~pc $\times$ 125~pc $\times$ 125~pc} region centered on the midpoint of the corresponding zoom-in region (see Table~\ref{tab:overview}). From the RT calculations we obtain the Stokes parameters $I$, $Q$, and $U$, where $I$ is the total intensity, and $Q$ and $U$ quantify the linear polarization of the observed radiation \citep[see][for details]{Reissl16,Reissl19}. The polarization angle, $\phi_\rmn{Pol}$, is calculated as \begin{equation} \phi_\rmn{Pol} = \frac{1}{2}\rmn{arctan}(U,Q) \, \label{eq:phiPol} \end{equation} and the polarization degree is given by \begin{equation} p = \frac{\sqrt{Q^2 + U^2}}{I} \, . 
\label{eq:p} \end{equation} \subsection{The Projected Rayleigh Statistic} \label{sec:prs} The tools of Rayleigh statistics were first applied to this astrophysical problem by \citet{Jow18}. As discussed there, the Rayleigh statistic tests whether a set of $n$ independent angles $\theta_i$ in the range $[0,2\pi]$ is uniformly distributed in 2D by calculating \begin{equation} Z = \frac{ (\Sigma_i^{n} \rmn{cos}\theta_i)^2 + (\Sigma_i^{n} \rmn{sin}\theta_i)^2 }{n} \, . \end{equation} This is equivalent to a random walk in 2D, with $Z$ describing the displacement from the origin if steps of unit length are taken in the direction of $\theta_i$. In order to test the relative orientation of the magnetic field direction and density structures, we take $\theta$ = 2 $\phi$, where $\phi$ is the relative orientation angle between the plane-of-sky projected magnetic field $\mathbf{B_\rmn{POS}}$, inferred from the polarization direction by rotating it by 90$^\circ$, and the tangent to the column density ($N$) isocontour \citep{Soler17b}. This is equivalent to the angle between the observed polarization direction $\mathbf{E}$ and the gradient of the column density, $\nabla N$, which allows us to calculate the angle as \begin{equation} \phi = \rmn{arctan}\left( |\nabla N \times \mathbf{E}|, \nabla N \cdot \mathbf{E} \right) \, . \label{eq:phi} \end{equation} We correct for a possible oversampling of our data by checking against 100 realizations of a randomly distributed $\nabla N$ map, as discussed in detail in \citet[][see their Eqs.~11 and~12]{Fissel19}. This takes into account that in our synthetic dust polarization maps neighbouring pixels (in particular in the less resolved, lower-density regimes) are not statistically independent. It effectively reduces the total number of pixels from $n$ to $n_\rmn{ind}$, where $n_\rmn{ind}$ ($<$ $n$) is the number of independent data samples.
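Eq.~(\ref{eq:phi}) can be evaluated directly on pixelated maps. A minimal sketch (the gradient convention and the demo maps are our own, hypothetical choices, not the actual analysis pipeline):

```python
import numpy as np

def relative_orientation(N, Q, U):
    """Relative orientation angle phi between the column-density
    gradient and the polarization (pseudo-)vector E on 2D maps;
    phi = 0 means B_POS is parallel to the iso-N contour."""
    gy, gx = np.gradient(N)                  # d/d(row), d/d(column)
    psi = 0.5 * np.arctan2(U, Q)             # polarization angle
    ex, ey = np.cos(psi), np.sin(psi)        # unit vector along E
    cross = gx * ey - gy * ex                # z-component of grad N x E
    dot = gx * ex + gy * ey
    return np.arctan2(np.abs(cross), dot)

# Hypothetical check: N varying only along the column direction with E
# parallel to grad N gives phi = 0 everywhere (B_POS along the contour).
N = np.tile(np.arange(10.0), (10, 1))
phi = relative_orientation(N, Q=np.ones((10, 10)), U=np.zeros((10, 10)))
```

The angles $\theta = 2\phi$ obtained this way are the input of the PRS discussed next.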
As discussed by \citet{Jow18}, the Projected Rayleigh Statistic (PRS) denoted with the symbol $Z_x$ can be used to test whether a preferred parallel or perpendicular orientation is present: \begin{equation} Z_x = \frac{ \Sigma_i^{n_\rmn{ind}} \rmn{cos}\theta_i }{\sqrt{n_\rmn{ind}/2}} \, . \end{equation} If the observed magnetic field $\mathbf{B_\rmn{POS}}$ is parallel to the iso-$N$ contour, then cos$\theta_i$ = 1; if the two directions are perpendicular, then cos$\theta_i$ = -1. Hence, measurements of $Z_x$ $\gg$ 0 indicate preferentially parallel orientation, whereas $Z_x$ $\ll$ 0 indicates magnetic fields perpendicular to the $N$ isocontours. For $Z_x$ $\simeq$ 0, no preferred direction is present. Finally, in order to test the statistical significance of the orientation, we compare $Z_x$ against its variance \citep{Jow18} \begin{equation} \sigma^2_{Z_x} = \frac{2 \Sigma_i^{n_\rmn{ind}} (\rmn{cos} \theta_i)^2 - (Z_x)^2 }{n_\rmn{ind}} \, . \end{equation} \section{Results of the CF simulations} \label{sec:CF} \begin{figure*} \includegraphics[width=\textwidth]{fig01.pdf} \caption{Time evolution (from left to right) of the column density of the CF runs with increasing magnetic field strength (from top to bottom). The higher the magnetic field strength, the more confined is the cloud to the collision interface around $x$ = 0. In addition, stronger magnetic fields suppress structure formation perpendicular to the original field direction. Note that the panels only show the central part of the simulation domain.} \label{fig:CD_CF} \end{figure*} \begin{figure*} \includegraphics[height=0.44\textwidth]{fig02a.png} \includegraphics[height=0.44\textwidth]{fig02b.png} \caption{Synthetic polarization maps obtained with POLARIS of run CF-B5 (left) and SILCC-MC1 (right) at $t_\rmn{evol}$ = 3 Myr. 
The figure shows the polarization degree from 3 orthogonal directions (colour coded) and the polarization direction rotated by 90$^\circ$ (black bars), showing the inferred magnetic field direction.} \label{fig:pol_maps} \end{figure} For the CF simulations we focus on the results from \mbox{$t_0$ = 16 Myr} onwards. Inspection of the simulations shows that this is the time when sufficient mass ($\sim$ 10$^{4}$ M$_{\sun}$) has accumulated at the collision interface and gravitational collapse, accompanied by the formation of dense molecular gas, sets in. We note that throughout the paper we refer to the time elapsed since $t_0$ as $t_\rmn{evol}$ = $t$ - $t_0$. We first investigate the relative orientation between the observed magnetic field and the column density (Section~\ref{sec:prs_CF}), and then link these results to the underlying 3D structure (Section~\ref{sec:a23_CF}). In Fig.~\ref{fig:CD_CF} we show the time evolution of the column density of the five CF runs from $t_\rmn{evol}$ = 0 -- 3 Myr. For all runs the accumulation of mass in the central region can be observed. For the runs with the highest magnetic field strengths, CF-B7.5 and CF-B10, the dense regions appear to contract along the $x$-direction and are confined to $\sim$ $\pm$5 pc around $x$ = 0. For the remaining runs this contraction is less clear. In particular for the low magnetic field runs the collision region remains rather widespread with a typical extent of 15 -- 20~pc. This indicates that for CF-B7.5 and CF-B10 the magnetic field is strong enough to guide the gas streams, to promote a collapse along its original direction, and to prevent structure formation by turbulent motions perpendicular to it \citep[see also][]{Heitsch09,Zamora18,Iwasaki19}. For the remaining runs the gas is able to form significant structures also perpendicular to the original field direction. For a more detailed discussion on the dynamical and chemical evolution of the clouds we refer to Weis et al. (in prep.).
In the left panel of Fig.~\ref{fig:pol_maps} we show the map of the polarization degree and direction for run CF-B5 at $t_\rmn{evol}$ = 3 Myr. In the collision region (parallel to the $y$-$z$-plane) the polarization degree is typically of the order of 1 to 10\%. Only in the inflowing low-column density medium, where the field is still well ordered and resembles the initial setup, high polarization degrees ($\geq$ 25\%) are obtained. We note, however, that these would probably not be accessible in actual observations due to a low signal-to-noise ratio \citep{Seifried19}. Qualitatively similar results are also found for the other runs and times, although the drop in polarization degree in the collision region for the runs CF-B7.5 and CF-B10 is less pronounced. \subsection{2D: The relative orientation between the observed magnetic field and $\nabla N$} \label{sec:prs_CF} \begin{figure*} \includegraphics[width=\textwidth]{fig03.pdf} \caption{Time evolution (from left to right) of the PRS, $Z_x$, of the CF runs with increasing magnetic field strength (from top to bottom) for three different LOS. A preferentially perpendicular orientation ($Z_x <$ 0) at high column densities is only present for runs with field strength $\geq$ 5 $\mu$G. In addition, for these runs the perpendicular orientation becomes more pronounced at later times. For the weak field runs, the magnetic field remains mostly parallel to the density structures or shows no preferred direction. The uncertainty $\sigma_{Z_x}$ of each bin (shown by horizontal bars) is rather minor.} \label{fig:PRS_CF} \end{figure*} In Fig.~\ref{fig:PRS_CF} we show the PRS, $Z_x$, inferred from the polarization maps (Fig.~\ref{fig:pol_maps}) of the five CF runs for three lines-of-sight (LOS) as a function of time, i.e. for the same snapshots as shown in Fig.~\ref{fig:CD_CF}. For this purpose, we use the freely available tool magnetar\footnote{https://github.com/solerjuan/magnetar} \citep{Soler13}.
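The PRS evaluation itself can be sketched as follows (a minimal implementation of the $Z_x$ and $\sigma^2_{Z_x}$ formulas of Section~\ref{sec:prs}, assuming for simplicity that all input angles are already statistically independent, i.e. ignoring the oversampling correction):

```python
import numpy as np

def prs(phi):
    """Projected Rayleigh Statistic Z_x and its variance for a set of
    independent relative-orientation angles phi, with theta = 2 phi."""
    theta = 2.0 * np.asarray(phi)
    n = theta.size
    zx = np.cos(theta).sum() / np.sqrt(n / 2.0)
    var = (2.0 * (np.cos(theta) ** 2).sum() - zx ** 2) / n
    return zx, var

# Perfectly parallel (phi = 0) or perpendicular (phi = pi/2)
# orientations give Z_x = +/- sqrt(2 n).
zx_par, _ = prs(np.zeros(800))
zx_perp, _ = prs(np.full(800, np.pi / 2.0))
```

Random angles yield $Z_x$ of order unity, which is why values of $|Z_x|$ of a few tens, as found below, are highly significant.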
Before evaluating the PRS, we smooth the polarization and column density maps with a Gaussian kernel with a size of 5 pixels, in order to average over regions in which the resolution of the MHD simulation was lower than the pixel size of 0.008~pc used in the polarization maps. We have chosen the bins in $N$ such that each contains the same number of pixels \citep{Soler13}. Furthermore, as stated in Section~\ref{sec:prs}, we have corrected for a potential oversampling of the data. Overall, the typical uncertainty $\sigma_{Z_x}$ is close to 1 \citep{Fissel19} and thus, it is rather small compared to $Z_x$. The most striking feature is that a preferentially perpendicular orientation of magnetic fields and column density structures ($Z_x <$ 0) is only obtained for the runs with fields strengths \mbox{$B_{x,0}$ $\geq$ 5 $\mu$G}, although for run CF-B5 the relative orientation shows partly no preferred direction within the uncertainty. For the runs CF-B7.5 and CF-B10 the configuration flips from a parallel to a perpendicular configuration around column densities of \mbox{$N_\rmn{trans}$ $\simeq$ 10$^{21 - 21.5}$ cm$^{-2}$}. This value is at the lower end of the distribution seen in recent observations~\citep{PlanckXXXV,Jow18,Soler17b,Soler19}. Moreover, the value of $N_\rmn{trans}$ agrees very well with the value of \citet{Crutcher12}, at which the transition from a sub- to a supercritical magnetic field in the ISM occurs. Interestingly, for the runs CF-B7.5 and CF-B10 we find a clear time evolution, which is not the case for the other runs. As time progresses the relative orientation becomes increasingly more perpendicular for high column densities. Also the values of $N_\rmn{trans}$ decrease over time. In addition, by comparing the runs CF-B7.5 and CF-B10, we find that the transition from parallel to perpendicular orientation appears to occur at lower column densities for higher magnetic field strengths. 
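The equal-count binning in $N$ used for the PRS curves corresponds to simple quantile binning; a minimal sketch with a hypothetical log-normal column-density map:

```python
import numpy as np

def equal_count_bins(values, n_bins=15):
    """Bin edges chosen such that each bin contains (almost) the same
    number of map pixels (quantile binning)."""
    return np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))

# Hypothetical map of log10(N) values (illustrative only):
rng = np.random.default_rng(0)
logN = rng.normal(21.0, 0.5, 10000)
edges = equal_count_bins(logN)
counts, _ = np.histogram(logN, bins=edges)
```

This choice ensures that the statistical weight, and hence the uncertainty $\sigma_{Z_x}$, is comparable across all column density bins.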
Some of the curves are even located completely below $Z_x$ = 0 \citep[see e.g. Fig. A.2 in][for an observational counterpart]{Soler17b}. For the runs CF-B1.25 and CF-B2.5, the PRS shows a preferentially parallel orientation for the $y$- and $z$-direction, whereas for the projection along the $x$-direction, i.e. along the original field direction, no preferred direction of the magnetic field is recognisable. For this latter projection, we partly see an increase of $Z_x$ with increasing column density for run CF-B1.25. We attribute this to the fact that for the lower column densities (corresponding to the inflowing material) the orientation is random, whereas for the highest column densities, i.e. the collision interface, a parallel orientation, similar to the other directions, begins to establish itself. Overall, however, we do not see a clear time evolution for the low-magnetic field runs. \subsubsection{Link to the column density evolution} The general shape of the PRS can be directly linked to the evolution of the column density (Fig.~\ref{fig:CD_CF}): For the low magnetic field runs, the column density structure remains rather unchanged over time and shows elongated structures along the $x$-direction. Consequently, as the magnetic field is mainly along the $x$-direction, for both the $y$- and $z$-projection (red and blue curves in Fig.~\ref{fig:PRS_CF}) the PRS indicates a rather parallel field-density configuration. For the higher magnetic field runs, however, the column density structure is more confined to the collision interface around $x$ = 0 and thus shows an overall elongated structure along the $y$- and $z$-direction. Together with a magnetic field along the $x$-direction, this results in a perpendicular configuration. In addition, the observed contraction of the structures along the $x$-direction and the increase in the maximum column density over time is also visible in the PRS: the more contracted, the clearer the perpendicular orientation.
\subsection{3D: The relative orientation of the magnetic field and $n$} \label{sec:a23_CF} \begin{figure} \includegraphics[width=\linewidth]{fig04.pdf} \caption{Dependence of $\zeta$ on the number density for the CF runs at \mbox{$t_\rmn{evol}$ = 2 Myr}. Only for the highly magnetised runs CF-B7.5 and CF-B10, negative values, indicating a perpendicular orientation of the magnetic field and the densest structures, are reached. This matches well the results of the 2D analysis (Fig.~\ref{fig:PRS_CF}).} \label{fig:zeta_CF} \end{figure} \begin{figure} \includegraphics[width=0.9\linewidth]{fig05.pdf} \caption{Density dependence of $A_1$, $A_{23}$, $A_1$ + $A_{23}$ and $C$ (from top to bottom) for the five CF runs at $t_\rmn{evol}$ = 2 Myr. The thick lines with dots show the mean value for a given density, the thin lines the 1$\sigma$ interval. Overall, the observed relative orientation in Fig.~\ref{fig:PRS_CF} can be explained by the sum of $A_1$ and $A_{23}$: a positive sum results in perpendicular orientation ($\zeta < 0$ and $Z_x < 0$), a negative sum in parallel orientation ($\zeta > 0$ and $Z_x > 0$). Note, however, that also the large spread of e.g. $C$ can contribute to a perpendicular orientation (Appendix~\ref{sec:appendixA}).} \label{fig:a23_CF} \end{figure} In order to relate the results found for the 2D polarization maps to the actual conditions in the clouds, we next investigate the relative orientation of the magnetic field and the number density $n$ in 3D. For this purpose we consider the relative orientation angle between the (3D) magnetic field and the \textit{gradient} of the density, \mbox{$\varphi$ = $\measuredangle (\mathbf{B}, \nabla n$)} (in contrast to $\phi$ used for the PRS in 2D, which gives the angle between the projected magnetic field and the \textit{isocontour} of the column density), which we calculate via \begin{equation} \rmn{cos} \,\varphi = \frac{\nabla n \cdot\mathbf{B}}{|\nabla n|\,|\mathbf{B}|} \, . 
\label{eq:varphi} \end{equation} This implies that \mbox{cos $\varphi$ = $\pm$ 1} for $\bf{B}$ perpendicular to the iso-density contours, which are by definition normal to $\nabla n$, and \mbox{cos $\varphi$ = 0} for $\bf{B}$ parallel to the iso-density contours. The distribution of the angles between randomly oriented pairs of vectors in 3D is only flat in terms of their cosine, as a consequence of the coverage of the solid angle. Hence, we investigate the distribution of cos~$\varphi$, which we evaluate using the relative orientation parameter defined in \citet{Soler13}\footnote{We checked that using $\varphi$ instead of cos~$\varphi$ would not significantly change the qualitative behaviour of our findings.}: \begin{equation} \zeta = \frac{A_c - A_e}{A_c + A_e} \, . \end{equation} Here, $A_c$ is the volume, i.e. all cells, in a given density interval for which \mbox{|cos $\varphi$| $<$ 0.25} and $A_e$ the volume for which \mbox{|cos $\varphi$| $>$ 0.75}. In 3D \mbox{$\zeta$ $>$ 0} describes a parallel orientation of the magnetic field with respect to the density structures and \mbox{$\zeta$ $<$ 0} a perpendicular orientation. We note that we here revert to the usage of $\zeta$ as there is no equivalent of the PRS in 3D, where the topology of the space of orientations is different to 2D. This does not imply any loss of generality in our analysis and its outcome is directly comparable to that of other works in the literature \citep[e.g.][]{Soler13,Soler17a}. In Fig.~\ref{fig:zeta_CF} we show the dependence of $\zeta$ on the number density for \mbox{$t_\rmn{evol}$ = 2 Myr}. Overall, the results of the 3D analysis match those of the 2D analysis (Fig.~\ref{fig:PRS_CF}). Only for the highly magnetised runs CF-B7.5 and CF-B10, $\zeta$ reaches negative values at high densities (\mbox{$n_\rmn{trans}$ $\simeq$ 10$^3$ cm$^{-3}$}), indicating a perpendicular orientation of the magnetic field and the densest structures. 
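As an illustration of the two definitions above, the following NumPy sketch evaluates cos~$\varphi$ (Eq.~\ref{eq:varphi}) and $\zeta$ on a uniformly gridded density cube; np.gradient is a stand-in for the simulation's own stencils, and the function names are ours.

```python
import numpy as np

def cos_varphi(n, B, dx=1.0):
    """cos(varphi) = (grad n . B) / (|grad n| |B|)  (Eq. eq:varphi).
    n: density cube (nx, ny, nz); B: field cube (3, nx, ny, nz)."""
    g = np.array(np.gradient(n, dx))          # grad n, shape (3, nx, ny, nz)
    num = np.sum(g * B, axis=0)
    den = np.linalg.norm(g, axis=0) * np.linalg.norm(B, axis=0)
    return num / den

def zeta(cphi):
    """zeta = (A_c - A_e)/(A_c + A_e): A_c counts cells with |cos| < 0.25
    (B parallel to the iso-density contours), A_e cells with |cos| > 0.75."""
    a_c = np.sum(np.abs(cphi) < 0.25)
    a_e = np.sum(np.abs(cphi) > 0.75)
    return (a_c - a_e) / (a_c + a_e)

# A field along x threading a density gradient along x: B is parallel to
# grad n, i.e. perpendicular to the iso-density surfaces -> zeta = -1.
x = np.arange(16, dtype=float)
n = np.broadcast_to(np.exp(0.1 * x)[:, None, None], (16, 16, 16)).copy()
B = np.zeros((3, 16, 16, 16)); B[0] = 1.0
assert zeta(cos_varphi(n, B)) == -1.0
```

In practice, cells with vanishing $|\nabla n|$ or $|\mathbf{B}|$ would need to be masked before the division.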
For the other runs, $\zeta$ stays above zero in agreement with the parallel and random orientation found in the 2D polarization maps. We note that \citet{Chen16}, who study the ISM in a very small box of 1 pc$^3$, suggest that $\zeta$ drops below zero once the turbulence becomes super-Alfv\'enic. Our analysis, however, reveals that $\zeta$ becomes negative only for $M_A$ $\gg$ 1 (not shown) in agreement with the findings of \citet[][see their figure~6]{Soler17a}. Furthermore, the initial values of $M_A$ (Table~\ref{tab:overview}) show super-Alfv\'enic motions for the runs with a parallel relative orientation and vice versa. \subsubsection{Comparison with \citet{Soler17a}} In order to investigate the origin of the relative orientation of magnetic fields and density structures in 3D, we apply the theory developed by \citet{Soler17a} to our data, which starts from the continuity equation \begin{equation} \frac{\rmn{d} \, \textrm{log} \rho}{\rmn{d} t} = -\partial_i v_i \, . \end{equation} Here, d/d$t$ denotes the Lagrangian time derivative, $v_i$ is the $i$-th component of the gas velocity, $\partial_i$ = $\partial/\partial x_i$ and we use Einstein's sum convention. The authors derive an evolution equation for cos~$\varphi$ (Eq.~\ref{eq:varphi}) which reads: \begin{equation} \frac{\rmn{d(cos}\, \varphi)}{\rmn{d}t} = C +\left[ A_1 + A_{23} \right] \rmn{cos}\, \varphi \label{EQ:COSPHI} \end{equation} using the definitions \begin{equation} C \equiv -\frac{\partial_i ( \partial_j v_j )}{(R_k R_k)^{1/2}} b_i \, , \label{eq:c} \end{equation} \begin{equation} A_1 \equiv \frac{\partial_i ( \partial_j v_j )}{(R_k R_k)^{1/2}} r_i \, , \label{eq:a1} \end{equation} and \begin{equation} A_{23} \equiv \partial_i v_j \left[r_i r_j - b_i b_j \right] \, . \label{eq:a23} \end{equation} Here, $b_i$ and $r_i$ are the components of the unit vectors pointing in the direction of the magnetic field and of \mbox{$R_i \equiv \partial_i$ log $\rho$}, i.e.
\begin{equation} b_{i} \equiv \frac{B_{i}}{(B_{k}B_{k})^{1/2}} = \frac{B_{i}}{|\mathbf{B}|} \; \rmn{and} \; r_{i} \equiv \frac{R_{i}}{(R_{k}R_{k})^{1/2}} = \frac{R_{i}}{|\mathbf{R}|} \, . \label{eq:bi} \end{equation} There are three aspects of Eq.~\ref{EQ:COSPHI} worth mentioning. First, at \mbox{cos~$\varphi$ = $\pm$1}, the right-hand side of Eq.~\ref{EQ:COSPHI} becomes zero as \mbox{$r_i$ = $\pm b_i$} (Eq.~\ref{eq:varphi}). Hence, \mbox{cos~$\varphi$ = $\pm$1} is an equilibrium point, at which the magnetic field is perpendicular to the density structures. Second, assuming that $C$ is negligible compared to \mbox{$A_1 + A_{23}$}, also the point \mbox{cos~$\varphi$ = 0} is an equilibrium point. Here, however, the magnetic field is parallel to the density structures. Third, with $C$ being very small, over time cos~$\varphi$ tends towards $\pm$1, when \mbox{$A_1 + A_{23} >$ 0}, whereas it tends towards 0, when \mbox{$A_1 + A_{23} <$ 0.} We now calculate $A_1$, $A_{23}$, their sum and $C$ for the CF runs at $t_\rmn{evol}$ = 2~Myr from the 3D simulation data and show the results in Fig.~\ref{fig:a23_CF}. We find that the mean values (thick lines with dots) of $C$ are around 0.1 Myr$^{-1}$, which is about a factor of 10 smaller than the typical mean values of $A_1$ and $A_{23}$, which are of the order of a few Myr$^{-1}$. For this reason, for the moment we can neglect $C$ in our consideration and focus on the interpretation of the mean values only (but see also Section~\ref{sec:a23_zoom} and Appendix~\ref{sec:appendixA}). The variables $A_1$ and $A_{23}$ show both negative and positive mean values with absolute values up to \mbox{$\sim$ 10 Myr$^{-1}$}, which overall tend to increase with increasing magnetic field strength.
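Of the three coefficients, $A_{23}$ (Eq.~\ref{eq:a23}) requires only first derivatives and can be sketched directly on a uniform grid; $A_1$ and $C$ additionally involve the gradient of the velocity divergence and can be built analogously. A toy sketch, with names and the test flow chosen by us:

```python
import numpy as np

def a23(v, rho, B, dx=1.0):
    """A_23 = d_i v_j (r_i r_j - b_i b_j), Eq. (eq:a23), with r the unit
    vector of grad(log rho) and b the unit vector of B.
    v, B: shape (3, nx, ny, nz); rho: shape (nx, ny, nz)."""
    R = np.array(np.gradient(np.log(rho), dx))   # R_i = d_i log rho
    r = R / np.linalg.norm(R, axis=0)
    b = B / np.linalg.norm(B, axis=0)
    out = np.zeros(rho.shape)
    for i in range(3):
        dv_i = np.gradient(v, dx, axis=1 + i)    # d_i v_j for all j
        for j in range(3):
            out += dv_i[j] * (r[i] * r[j] - b[i] * b[j])
    return out

# Plane-parallel compression v = (-x, 0, 0) with B along x and a density
# gradient along y: only d_x v_x = -1 survives, so A_23 = -1 * (0 - 1) = 1.
# A positive A_1 + A_23 pushes cos(varphi) towards +-1, i.e. towards a
# perpendicular field-density orientation.
shape = (16, 16, 16)
x = np.arange(16, dtype=float)
v = np.zeros((3,) + shape); v[0] = -x[:, None, None]
rho = np.broadcast_to(np.exp(0.1 * x)[None, :, None], shape).copy()
B = np.zeros((3,) + shape); B[0] = 1.0
assert np.allclose(a23(v, rho, B), 1.0)
```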
For the runs CF-B1.25 and CF-B2.5 (black and blue curves), $\left\langle A_1 \right\rangle$ is close to zero with typical values around a few \mbox{$\pm$ 0.1 Myr$^{-1}$}, whereas $\left\langle A_{23} \right\rangle$ remains negative with values around a few times \mbox{-1 Myr$^{-1}$}. This is in excellent agreement with the preferentially parallel orientation of the magnetic field and the density structures shown in Figs.~\ref{fig:PRS_CF} and~\ref{fig:zeta_CF} ($Z_x > 0$ and $\zeta > 0$). For run CF-B5 (red curves), $\left\langle A_1 \right\rangle$ is mostly positive with values around a few Myr$^{-1}$, which -- at lower densities -- is balanced by negative values of $\left\langle A_{23} \right\rangle$. However, in particular towards higher densities, $\left\langle A_1+ A_{23}\right\rangle$ reaches positive values, indicating a perpendicular orientation, as is indeed the case for $t_\rmn{evol}$ $\geq$ 2 Myr (Fig.~\ref{fig:PRS_CF}). For the runs CF-B7.5 and CF-B10 (green and cyan curves), $\left\langle A_1 \right\rangle$ and $\left\langle A_{23} \right\rangle$ reach larger values than for the other runs. At low densities $\left\langle A_1 + A_{23} \right\rangle$ is dominated by the negative values of $\left\langle A_{23} \right\rangle$ of $\sim$ -10~Myr$^{-1}$. Towards higher densities, however, $\left\langle A_{23} \right\rangle$ increases and $\left\langle A_1 + A_{23} \right\rangle$ becomes positive, in agreement with the clear perpendicular relative orientation of the magnetic field and the density shown in the two bottom rows of Figs.~\ref{fig:PRS_CF} and~\ref{fig:zeta_CF} (\mbox{$Z_x < 0$} and \mbox{$\zeta < 0$}). In summary, the observed relative orientation in 2D and 3D can be well explained by $\left\langle A_1 + A_{23} \right\rangle$, where both $A_1$ and $A_{23}$ appear to contribute in a comparable manner. The mean value of $C$ is smaller by a factor of $\sim$ 10 -- 100 and thus less likely to contribute.
The 1$\sigma$ interval of $C$ (thin lines in Fig.~\ref{fig:a23_CF}), however, is comparable to that of $A_{1}$ and $A_{23}$. We emphasise that such a wide distribution of $C$ can also contribute to a preferentially perpendicular orientation (see Section~\ref{sec:a23_zoom} and Appendix~\ref{sec:appendixA}), even if $A_1$ and $A_{23}$ are on average slightly negative. Furthermore, the importance of $A_{23}$ is in agreement with the findings of \citet{Soler17a}. As explained by the authors, $A_{23}$ describes the complex interplay of compressive velocity modes and the magnetic field (see their sections~3.1.1 and~3.1.3). However, contrary to \citet{Soler17a}, where $\left\langle A_1 \right\rangle$ and $\left\langle C \right\rangle$ are about 10$^4$ times smaller than $\left\langle A_{23} \right\rangle$ and thus negligible, here we find that $\left\langle A_1 \right\rangle$ also contributes significantly and that $\left\langle C \right\rangle$ is smaller by a factor of $\sim$ 10 -- 100 only. We speculate that these differences might be due to the different physical setup used by the authors, which is why we do not follow this up further here.
\section{Results of the SILCC-Zoom simulations} \label{sec:zoom} \begin{figure*} \flushleft \includegraphics[height=0.3\textwidth]{fig06a.png} \includegraphics[height=0.3\textwidth]{fig06b.png} \includegraphics[height=0.3\textwidth]{fig06c.png}\\ \includegraphics[height=0.3\textwidth]{fig06d.png} \includegraphics[height=0.3\textwidth]{fig06e.png} \includegraphics[height=0.3\textwidth]{fig06f.png}\\ \caption{Overview of the six different SILCC-Zoom simulations SILCC-MC1 to SILCC-MC6 (from top left to bottom right) at $t_\rmn{evol}$ = 3 Myr showing the column density projected from all three sides around the center of the corresponding zoom-in region.} \label{fig:CD_Zoom} \end{figure*} Similar to the CF simulations, the times given in the following refer to the time elapsed since the start of the zoom-in procedure at $t_0$ (see Table~\ref{tab:overview}), $t_\rmn{evol}$ = $t$ - $t_0$. As stated in Section~\ref{sec:initial-zoom}, the initial magnetic field strength of the SILCC-Zoom simulations is \mbox{$B_{x,0}$ = 3 $\mu$G}, which is slightly below the threshold value of $\sim$ 5 $\mu$G for which a change in the relative orientation occurred in the CF simulations (see Figs.~\ref{fig:PRS_CF} and~\ref{fig:zeta_CF}). As for the CF simulations, we first consider the results in 2D. In Fig.~\ref{fig:CD_Zoom} we show the column density for the six SILCC-Zoom simulations. The simulations show only small changes of the PRS over time, which is why we here consider only the situation at \mbox{$t_\rmn{evol}$ = 3 Myr}. However, as the results strongly depend on the chosen LOS, we show the column density projected from all three sides. The clouds show a pronounced filamentary structure which is partly shaped by the SNe going off prior to $t_0$.
The mass of the gas with $n$ $\geq$ 100 cm$^{-3}$ of the clouds at this time ranges from about $6 \times 10^3$ M$_{\sun}$ (SILCC-MC3) to $56 \times 10^3$ M$_{\sun}$ (SILCC-MC4) \citep[see Fig.~1 in][]{Seifried19}, thus covering the typical range for Galactic MCs \citep[e.g.][]{Larson81,Solomon87,Elmegreen96,Heyer01,Roman10,Miville17}. In the right panel of Fig.~\ref{fig:pol_maps} we show the polarization degree and direction of run SILCC-MC1. The polarization degree reaches values up to a few 10\% and the polarization pattern indicates a moderately complex magnetic field structure as expected for turbulent environments \citep[see also][for more details on the accuracy of the observed polarization degree and structure]{Seifried19}. The other runs (not shown here) show qualitatively similar results in particular with respect to the pattern of the polarization degree and the structure of the inferred magnetic field. \begin{table} \caption{The strength of the magnetic field, $|\left\langle \bf{B} \right\rangle|$, $\left\langle \bf{B} ^2\right\rangle^{1/2}$ and $ \left\langle \bf{B}_\rmn{rand} \right\rangle$ for the SILCC-Zoom simulations at \mbox{$t_\rmn{evol}$ = 3 Myr}.} \centering \begin{tabular}{lccc} \hline run & $|\left\langle \bf{B} \right\rangle|$ ($\mu$G) & $ \left\langle \bf{B}^2 \right\rangle^{1/2}$ ($\mu$G) & $ \left\langle \bf{B}_\rmn{rand} \right\rangle$ ($\mu$G) \\ \hline SILCC-MC1 & 2.6 & 4.4 & 3.6 \\ SILCC-MC2 & 2.1 & 3.6 & 2.9 \\ SILCC-MC3 & 1.0 & 2.1 & 1.9 \\ SILCC-MC4 & 2.9 & 4.7 & 3.7 \\ SILCC-MC5 & 2.7 & 4.6 & 3.7 \\ SILCC-MC6 & 2.6 & 3.7 & 2.7 \\ \hline \end{tabular} \label{tab:bsilcc} \end{table} Despite some variations in the magnetic field structure, the two projections perpendicular to the $x$-axis in the right panel of Fig.~\ref{fig:pol_maps} show that the mean field direction is still along the $x$-axis.
In order to investigate this more quantitatively, we consider the evolution of the mean magnetic field strength, $|\left\langle \bf{B} \right\rangle|$, the total field strength, $\left\langle \bf{B} ^2\right\rangle^{1/2}$, and its random component, \mbox{$\left\langle \bf{B}_\rmn{rand} \right\rangle$ = $\sqrt{ \left\langle \bf{B} ^2 \right\rangle - |\left\langle \bf{B} \right\rangle|^2}$}. This is done for the cubes shown in Fig.~\ref{fig:CD_Zoom}, i.e. using a side-length of 125 pc. As their centers are exactly in the midplane of the galactic disc, i.e. at \mbox{$z$ = 0 pc} (see Table~\ref{tab:overview}), the initial field strength for all six regions is identical at the start of the simulation. Using Eqs.~\ref{eq:rhosilcc} and~\ref{eq:bsilcc}, one can derive \mbox{$|\left\langle \bf{B} \right\rangle|$ = $\left\langle \bf{B} ^2\right\rangle^{1/2}$ = 2.2~$\mu$G.} Note that this average is smaller than \mbox{$B_{x,0}$ = 3 $\mu$G} due to the exponential decrease along the $z$-direction. In Table~\ref{tab:bsilcc} we list $|\left\langle \bf{B} \right\rangle|$, $\left\langle \bf{B} ^2\right\rangle^{1/2}$ and $\left\langle \bf{B}_\rmn{rand} \right\rangle$ at \mbox{$t_\rmn{evol}$ = 3 Myr}. The first is always smaller than the second, as $|\left\langle \bf{B} \right\rangle|$ describes the ordered field, whereas $\left\langle \bf{B} ^2\right\rangle^{1/2}$ also takes into account the random component of the magnetic field, such as field reversals. The random component, $\left\langle \bf{B}_\rmn{rand} \right\rangle$, alone is comparable to or partly even slightly larger than the ordered field. Overall, however, their difference is rather moderate. Furthermore, for all runs even at \mbox{$t_\rmn{evol}$ = 3 Myr} the largest component of $\left\langle \bf{B} \right\rangle$ is still along the $x$-direction, i.e. the direction of the initial magnetic field (Eq.~\ref{eq:bsilcc}).
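The three measures listed in Table~\ref{tab:bsilcc} follow directly from the field cube; a minimal sketch (the function name is ours):

```python
import numpy as np

def field_components(B):
    """Split a field cube B (shape (3, ...)) into the ordered strength
    |<B>|, the total strength <B^2>^(1/2), and the random part
    <B_rand> = sqrt(<B^2> - |<B>|^2)."""
    axes = tuple(range(1, B.ndim))
    b_mean = np.linalg.norm(np.mean(B, axis=axes))    # |<B>|
    b_rms = np.sqrt(np.mean(np.sum(B**2, axis=0)))    # <B^2>^(1/2)
    b_rand = np.sqrt(b_rms**2 - b_mean**2)
    return b_mean, b_rms, b_rand

# A uniform field has no random component ...
B_uni = np.zeros((3, 8, 8, 8)); B_uni[0] = 3.0
assert np.allclose(field_components(B_uni), (3.0, 3.0, 0.0))
# ... whereas a field reversal (B_x = +3 and -1 in the two halves) lowers
# the ordered field below the rms: |<B>| = 1, rms = sqrt(5), rand = 2.
B_rev = np.zeros((3, 8, 8, 8)); B_rev[0, :4] = 3.0; B_rev[0, 4:] = -1.0
assert np.allclose(field_components(B_rev), (1.0, np.sqrt(5.0), 2.0))
```

The reversal example makes explicit why $|\left\langle \bf{B} \right\rangle|$ probes only the ordered field while $\left\langle \bf{B} ^2\right\rangle^{1/2}$ retains the full magnetic energy content.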
These results thus agree with our findings that also at later times there exists a preferred direction of $\bf{B}$ along the $x$-axis (Fig.~\ref{fig:pol_maps}). Furthermore, for most of the runs $|\left\langle \bf{B} \right\rangle|$ has increased slightly over time due to gravitational accretion of the ambient gas -- and thus also magnetic flux -- onto the forming clouds. Only for run SILCC-MC3 is there a clear decrease, which can be explained by the accompanying mass loss in the considered cube (compare top right panel in Fig.~\ref{fig:CD_Zoom}). The value of $\left\langle \bf{B} ^2\right\rangle^{1/2}$ has increased in all runs (except run SILCC-MC3) due to the random magnetic field component generated by the passing SN shocks and the ongoing gravitational collapse. \subsection{2D: The relative orientation between the observed magnetic field and $\nabla N$} \label{sec:prs_zoom} \begin{figure*} \includegraphics[width=\textwidth]{fig07.pdf} \caption{PRS, $Z_x$, of the six SILCC-Zoom simulations at $t_\rmn{evol}$ = 3 Myr for three different LOS. Overall, there is a large variety of shapes in agreement with the moderate field strength of 3 $\mu$G. On average, there appears to be a slight trend of decreasing $Z_x$ with increasing $N$. Note the different ordinate scalings.} \label{fig:PRS_Zoom} \end{figure*} In Fig.~\ref{fig:PRS_Zoom} we show the PRS, $Z_x$, of the six SILCC-Zoom runs for the three LOS at $t_\rmn{evol}$ = 3~Myr, i.e. for the same snapshots as shown in Fig.~\ref{fig:CD_Zoom}. The polarization and column density maps were smoothed with a Gaussian kernel with a size of 3 pixels before the calculation. The first thing to notice is the large variety in the shapes of the PRS. The curves show significant qualitative differences \textit{both} between different clouds and between different LOS for individual clouds. Overall, it appears that there is a weak trend of decreasing $Z_x$ with increasing $N$.
However, in a number of cases, $Z_x$ does not reach negative values or is -- within the given uncertainty $\sigma_{Z_x}$ -- in agreement with a random orientation, i.e. $Z_x = 0$. In addition, some of the curves show $Z_x \leq 0$ in a narrow range of column densities, before $Z_x$ increases again and then drops towards $\leq 0$ at the highest column densities (e.g. the $x$-direction of SILCC-MC2 and the $y$-direction of SILCC-MC3 and SILCC-MC6). For some cases (e.g. the $y$-direction of SILCC-MC2, the $x$- and $y$-direction of SILCC-MC4 and the $y$-direction of SILCC-MC6) the PRS reaches negative values at \mbox{$N_\rmn{trans}$ $\simeq$ 10$^{21 - 21.5}$ cm$^{-2}$} and decreases towards higher $N$, but finally increases again towards zero for the highest column densities. Interestingly, this value of $N_\rmn{trans}$ is comparable to that of the CF simulations (see Fig.~\ref{fig:PRS_CF}) and that of actual observations \citep{PlanckXXXV,Jow18,Soler17b,Soler19}. Moreover, it also agrees with that for the transition from sub- to supercritical magnetic fields in the ISM \citep{Crutcher12}. The trend of random orientation towards the highest column densities is also seen in parts for the CF runs (Fig.~\ref{fig:PRS_CF}). There are three possible explanations for this change from a preferentially perpendicular to a random orientation at very high $N$: first, on the grid scale, the magnetic field structure is not resolved accurately any more due to numerical dissipation/reconnection, thus slightly decoupling from the density and possibly leading to a rather random configuration. This is supported by the fact that the final increase of $Z_x$ appears to happen at lower $N$ for the lower-resolved SILCC-Zoom simulations (0.12 pc) than for the higher resolved CF simulations (0.008 pc). 
Second, there are also actual observations which partly show an increase of $Z_x$ towards the highest column densities \citep{Soler17b,Soler19,Pillai20}, indicating that the observed increase of $Z_x$ is not necessarily due to a limited resolution. Third, projection effects occurring on very small scales, i.e. at high densities, might also contribute to an apparent random orientation (see below). \subsection{3D: The relative orientation of the magnetic field and $n$} \label{sec:a23_zoom} \begin{figure} \includegraphics[width=\linewidth]{fig08.pdf} \caption{Dependence of $\zeta$ on the number density for the SILCC-Zoom simulations at $t_\rmn{evol}$ = 3 Myr. The clear decrease of $\zeta$ with $n$ is in apparent contrast to the 2D analysis (Fig.~\ref{fig:PRS_Zoom}), indicating projection effects which might complicate the analysis in 2D.} \label{fig:zeta_Zoom} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{fig09.pdf} \caption{Density dependence of $A_1$ + $A_{23}$ and $C$ for the six SILCC-Zoom runs at $t_\rmn{evol}$ = 3 Myr. The thick lines with dots show the mean value for a given density, the thin lines the 1$\sigma$ interval. Overall, the width of the distribution is somewhat less than for the CF runs and the values of $A_1$ + $A_{23}$ do not show a clear trend.} \label{fig:a23_Zoom} \end{figure} The question arises why, for a fixed initial magnetic field strength, the different SILCC-Zoom simulations show such varying PRS. In order to investigate this, we analyse the orientation of the magnetic field and density structures in 3D (Fig.~\ref{fig:zeta_Zoom}). Interestingly, except for run SILCC-MC2, we find a clear trend of decreasing $\zeta$ with increasing density, reaching negative values around \mbox{$n_\rmn{trans}$ $\sim$ 10$^{2 \pm 0.5}$ cm$^{-3}$}. Hence, in 3D the magnetic field shows a perpendicular orientation with respect to the dense structures, which is not clearly visible in the 2D polarization maps (Fig.~\ref{fig:PRS_Zoom}).
This strongly indicates that projection effects can significantly influence -- and thus complicate -- the analysis of the relative orientation in 2D: Observing parallel or random orientations of magnetic fields and column density structures ($Z_x > 0$) does not exclude the possibility that in 3D the magnetic field is oriented perpendicular to the densest structures ($\zeta < 0$). Such projection effects were also reported recently by Girichidis et al. (submitted). \subsubsection{Comparison with \citet{Soler17a}} Finally, in Fig.~\ref{fig:a23_Zoom} we also analyse the values of $C$ and $A_1$ + $A_{23}$ in the different zoom-in regions. Except for the run SILCC-MC4, the mean of $A_1$ + $A_{23}$ is always slightly negative ($\sim$ a few times $-1$ Myr$^{-1}$) with a typical spread of $\sim$ 5 Myr$^{-1}$. The individual values of $A_1$ and $A_{23}$ are comparable in size (not shown here). The values of $C$ are on average close to zero except at very high densities and their standard deviation is comparable to that of $A_1$ + $A_{23}$. At first sight, the analysis therefore does not provide a clear explanation for the observed trend of $\zeta$ (Fig.~\ref{fig:zeta_Zoom}). However, as we show in detail in a semi-analytical analysis of Eq.~\ref{EQ:COSPHI} in Appendix~\ref{sec:appendixA}, a wide distribution of $A_1$ + $A_{23}$ and $C$ around slightly negative mean values can still result in a preferentially perpendicular orientation. Hence, the results of Fig.~\ref{fig:a23_Zoom} can explain the actual orientation of magnetic fields and density structures shown in Fig.~\ref{fig:zeta_Zoom}. We note that decreasing the width of the distribution or further lowering the mean value increases the probability to find a parallel orientation.
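The qualitative behaviour of Eq.~\ref{EQ:COSPHI} underlying this argument can be reproduced with a toy forward-Euler integration. This is a deliberate simplification of the semi-analytical treatment in Appendix~\ref{sec:appendixA}: the real coefficients vary along each Lagrangian trajectory, and the clipping below only mimics the built-in fixed points at $\pm 1$.

```python
import numpy as np

def evolve_cos_phi(c0, A, C, dt=0.01, t_end=5.0):
    """Forward-Euler integration of d(cos phi)/dt = C + A*cos(phi)
    (Eq. EQ:COSPHI) with constant coefficients A = A_1 + A_23 and C;
    cos(phi) is clipped to [-1, 1]."""
    c = c0
    for _ in range(int(t_end / dt)):
        c = np.clip(c + dt * (C + A * c), -1.0, 1.0)
    return c

# With C = 0, the sign of A = A_1 + A_23 selects the attractor:
# A > 0 amplifies any initial misalignment until cos(phi) = +-1
# (perpendicular orientation), A < 0 damps it towards 0 (parallel).
assert evolve_cos_phi(0.05, A=2.0, C=0.0) == 1.0
assert abs(evolve_cos_phi(0.8, A=-2.0, C=0.0)) < 1e-3
```

Drawing $A$ and $C$ per trajectory from wide distributions (e.g. Gaussians around slightly negative means) and evolving an ensemble of such trajectories illustrates how a broad spread can still populate the perpendicular attractor, in line with Appendix~\ref{sec:appendixA}.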
\section{What affects the PRS?} \label{sec:discussion} \subsection{Projection effects} \label{sec:projection} As discussed in Section~\ref{sec:a23_zoom}, the observed (2D) magnetic field configuration does not necessarily match the actual 3D morphology in the cloud. In fact, the large variety of PRS shapes for the SILCC-Zoom simulations (Fig.~\ref{fig:PRS_Zoom}) agrees well with recent observations of various MCs \citep{PlanckXXXV,Soler17b,Soler19,Jow18,Fissel19}. A compilation of the results of these papers shows that for observed Galactic clouds the full spectrum of PRS shapes found in our simulations is recovered: PRS curves which show (i) local minima, (ii) an increase towards the highest $N$, and (iii) random or perpendicular orientation over the entire column density range. In the context of the results presented here, the clouds in these observations might still have a preferentially perpendicular orientation of the magnetic field and density structures in 3D, though not observed in 2D due to projection effects. This could in particular explain observations of \citet{Soler17b} and \citet{Soler19}, which report significant variations in the PRS when considering different sub-regions of the same molecular cloud complex. Finally, we note that radiative transfer effects and imperfect dust alignment are unlikely to contribute significantly to these projection effects: As we have shown in \citet{Seifried19}, the observed magnetic field traces well the mass-weighted, LOS-integrated magnetic field. To further support this, we also calculated the PRS using this mass-weighted, LOS-integrated magnetic field instead of that inferred from the polarization maps. We find that these PRS do not differ significantly from those shown in Fig.~\ref{fig:PRS_Zoom}.
\subsection{A critical magnetic field strength} \label{sec:Bfield} Overall, our finding of a change from a parallel to a perpendicular orientation of magnetic fields with respect to dense structures with increasing field strength is in good agreement with previous theoretical works \citep[e.g.][]{Heitsch01,Ostriker01,Li04,Nakamura08,Collins11,Hennebelle13,Soler13,Chen15,Chen20,Li15,Chen16,Zamora17,Mocz18}. As pointed out by \citet{Soler17a}, this change of relative orientation is an indicator of compressive motions, i.e. $\nabla \cdot \bf{v} <$ 0, coupled with a dynamically important magnetic field. The compressive motions could be created either by converging flows or gravitational collapse. As indicated in Figs.~\ref{fig:CD_CF} and~\ref{fig:PRS_CF}, a stronger magnetic field results in (i) more guided motions towards the central collision interface, (ii) suppressed turbulent motions perpendicular to it, (iii) a stronger gravitational collapse \citep[see also][]{Heitsch09,Zamora18,Iwasaki19} and (iv) consequently a more pronounced perpendicular relative orientation between magnetic fields and the density structures, in good agreement with other theoretical works \citep[][and Girichidis et al. (submitted)]{Soler13,Soler17a,Chen16}. For the CF runs we find a critical field strength of $\sim$\,5 $\mu$G above which we observe a flip in field orientation. The SILCC-Zoom simulations have an initial magnetic field strength of 3~$\mu$G, which is close to this critical value. As demonstrated in Section~\ref{sec:a23_zoom}, the observed trend of $\zeta$ in the SILCC-Zoom runs supports the idea of a critical field strength of 3 -- 5~$\mu$G, above which a perpendicular orientation develops. Interestingly, this value of the initial magnetic field strength is close to the Galactic field strength of about 6~$\mu$G in the solar neighbourhood \citep[e.g.][]{Troland86,Heiles05,Beck13}.
It is therefore not surprising that -- also due to possible projection effects -- recently observed Galactic MCs \citep{PlanckXXXV,Soler17b,Jow18,Fissel19} show a similar variety of PRS shapes as the SILCC-Zoom clouds, which have (almost) comparable field strengths. \subsection{The mass-to-flux ratio} \label{sec:mu} As compressive motions during the (later) evolution of MCs are created by gravitational collapse, it appears intuitive to relate the shape of the PRS to the mass-to-flux ratio, which combines the magnetic field strength and gravity in a single parameter. The mass-to-flux ratio is defined as \citep{Mouschovias76} \begin{equation} \mu = \frac{M}{\Phi} \cdot \left(\frac{M}{\Phi}\right)^{-1}_\rmn{crit} = \frac{M}{B \cdot A} \cdot \left(\frac{0.13}{\sqrt{G}}\right)^{-1} \, , \end{equation} where $G$ is the gravitational constant, $A$ and $M$ the area and mass of the cloud, and $\Phi$ = $B \cdot A$ the magnetic flux through it. For the CF runs we can estimate $\mu$ analytically from the initial velocity, density and magnetic field strength of the inflowing gas (see Section~\ref{sec:initial-CF}) as \begin{eqnarray} \mu_\rmn{CF} =& \frac{2 \times \left((32\, \rmn{pc})^2 \times 13.6\, \rmn{km\, s}^{-1} \times 1.67 \times 10^{-24} \rmn{g\, cm}^{-3} \right) \times \, t}{(32\, \rmn{pc})^2 B_{x,0}} \cdot \left(\frac{0.13}{\sqrt{G}}\right)^{-1} \nonumber \\ = & 2.85 \times \left( \frac{t}{10\, \rmn{Myr}} \right) \left( \frac{B_{x,0}}{1\, \mu\rmn{G}} \right)^{-1} \, , \end{eqnarray} where the factor of 2 in the numerator accounts for inflow from two sides. In Table~\ref{tab:overview} we list $\mu$ at \mbox{$t$ = 19 Myr} corresponding to \mbox{$t_\rmn{evol}$ = 3 Myr}. At this point, for run CF-B5, i.e. the run with the critical field strength of \mbox{$B_{x,0}$ = 5 $\mu$G}, a value of \mbox{$\mu$ $\simeq$ 1.1} is reached.
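For reference, the scaling relation above can be evaluated directly; the helper name is ours, and only the final scaling of the equation is used.

```python
def mu_cf(t_myr, b_x0_mug):
    """Mass-to-flux ratio of the CF runs in units of the critical value,
    following the scaling relation derived in the text:
    mu_CF = 2.85 * (t / 10 Myr) * (B_x0 / 1 muG)^-1."""
    return 2.85 * (t_myr / 10.0) / b_x0_mug

# Run CF-B5 (B_x0 = 5 muG) at t = 19 Myr (t_evol = 3 Myr):
assert abs(mu_cf(19.0, 5.0) - 1.1) < 0.02   # mu ~ 1.1, marginally critical
# The weak-field run CF-B1.25 is clearly supercritical by then ...
assert mu_cf(19.0, 1.25) > 4
# ... while CF-B10 is still subcritical.
assert mu_cf(19.0, 10.0) < 1
```

Since the inflow continuously adds mass but not flux through the collision interface, $\mu_\rmn{CF}$ grows linearly in time, so every run eventually becomes supercritical, just at different epochs.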
This is very close to the critical mass-to-flux ratio of $\mu_\rmn{crit}$ = 1, below which gravitational collapse is hampered perpendicular to the magnetic field, but continues unhindered along the field. As stated before, this is in agreement with our simulation results where, for the strong-field (low-$\mu$) cases, the gas appears to be guided along the initial field ($x$-direction) resulting in an accelerated collapse (Fig.~\ref{fig:CD_CF} and Girichidis et al. (submitted)). Next, we estimate $\mu$ for the various SILCC-Zoom simulations at \mbox{$t_\rmn{evol}$ = 3 Myr}. Using the mass $M$ in the cube with a side-length of 125 pc around the center of the zoom-in region and the mean magnetic field strength given in Table~\ref{tab:bsilcc}, we can approximate the mass-to-flux ratio as \begin{equation} \mu_\rmn{SILCC} = \left(\frac{M}{|\left\langle \bf{B} \right\rangle| \times (125\,\rmn{pc})^2}\right) \cdot \left(\frac{0.13}{\sqrt{G}}\right)^{-1} \, . \end{equation} We again find values of $\mu_\rmn{SILCC}$ close to the critical value of 1 (see Table~\ref{tab:overview}; note that the initial value is 1.8). Hence, also for the SILCC-Zoom simulations we expect a flip from parallel to perpendicular orientation to occur, matching the results shown in Section~\ref{sec:zoom}. Moreover, as stated before, the column density of \mbox{$N_\rmn{trans}$ $\simeq$ 10$^{21 - 21.5}$ cm$^{-2}$}, where the transition from parallel to perpendicular orientation occurs in the PRS (Figs.~\ref{fig:PRS_CF} and~\ref{fig:PRS_Zoom}), agrees well with the transition point from sub- to supercritical magnetic fields in the ISM \citep{Crutcher12}, which further supports the idea of a connection between both transitions. To summarise, we argue that an observed perpendicular orientation of magnetic fields and column density structures indicates a mass-to-flux ratio of $\mu$ $\lesssim$ 1, i.e. very weak magnetic fields can be excluded.
Conversely, a parallel orientation indicates $\mu$ $\gtrsim$ 1, excluding the presence of a very strong magnetic field. However, in particular around $\mu$ $\simeq$ 1, projection effects might cause the \textit{observed} relative orientation to be parallel despite the actual, 3D orientation being perpendicular (Section~\ref{sec:zoom}). This limits the PRS analysis to an \textit{exclusion} of a certain range of field strengths. However, it does not allow a robust determination of $\mu$ and thus of the strength of $B$. \subsection{The column density distribution} \begin{figure*} \includegraphics[width=\linewidth]{fig10.pdf} \caption{Column density PDF of the different SILCC-Zoom simulations at $t_\rmn{evol}$ = 3 Myr. Overall, the PDFs are relatively similar, which thus cannot explain the large variety of shapes of the PRS (compare to Fig.~\ref{fig:PRS_Zoom}). In order to guide the reader's eye, we show a line with a power-law slope of -2 (green dashed line).} \label{fig:cd_pdf} \end{figure*} Finally, it was suggested by \citet{Soler17b} that the shape of the PRS might be linked to the probability distribution function (PDF) of the column density. In order to check this in the context of our work, we consider the PDFs of log($N$) of the various SILCC-Zoom simulations at $t_\rmn{evol}$ = 3 Myr (Fig.~\ref{fig:cd_pdf}). The log($N$)-PDFs show the expected, gravity-driven power-law behaviour at high $N$ \citep[e.g.][]{Kainulainen09,Kritsuk11,Girichidis14,Schneider15,Auddy18}. However, for a given run there are only marginal differences in the PDFs at \mbox{$N$ $\gtrsim$ 10$^{20}$ cm$^{-2}$} when considering a different LOS, with the only exception being the run SILCC-MC5. This already indicates that the log($N$)-PDF is not related to the shape of the PRS as e.g. for MC1 the PRS along the $z$-direction differs significantly from those of the other two directions, whereas the log($N$)-PDFs do not show clear variations.
Also a comparison of the log($N$)-PDFs of all runs with the corresponding shapes of the PRS does not reveal a coherent picture: The power-law slope of the log($N$)-PDFs for the different runs is around -2~$\pm$~0.5. There is, however, no recognisable correlation of the steepness of the slope with the shape of the PRS, i.e. a steeper (or shallower) slope does not result in a particular shape of the PRS (compare to Fig.~\ref{fig:PRS_Zoom}). \subsection{The impact of resolution and simulation setups} The CF and SILCC-Zoom runs have significantly different resolutions (0.008 pc vs. 0.12 pc), which might affect their comparison. In order to test a potential impact of different resolutions on the comparability, we repeated run CF-B2.5 with a 4 times (0.032~pc) and 16 times (0.125~pc) lower resolution, i.e. the latter run being comparable to the SILCC-Zoom runs in terms of resolution. We chose run CF-B2.5 as its initial magnetic field properties resemble best that of the SILCC-Zoom runs (see Table~\ref{tab:overview}). As shown in Fig.~\ref{fig:resolution} in Appendix~\ref{sec:appendixB}, the qualitative behaviour of the PRS results is mostly retained despite the difference of up to a factor of 16 in resolution. Moreover, also using differently-sized Gaussian kernels for the calculation of the PRS for the high-resolution CF runs (not shown) does not affect the results significantly. We are therefore confident that the results of the different simulations can be compared to each other. This is also supported by the fact that the SILCC-Zoom simulations and their corresponding magnetic field strengths fit in the trend suggested by the CF runs (Section~\ref{sec:Bfield}). Furthermore, the CF simulations presented here have an angle of \mbox{$\alpha$ = 0$^\circ$} between the initial magnetic field ($B_{x, 0}$) and the colliding flow along the $x$-direction. It was shown, however, that varying $\alpha$ can significantly affect the formation of dense structures.
In particular for high values of $\alpha$, the formation of dense regions is hampered \citep{Heitsch09,Inoue09,Inoue16,Kortgen15,Iwasaki19}. However, \citet{Inoue16} show that when including a realistic level of ISM turbulence, for all values of $\alpha$, the forming low-column density structures ($N$~$\lesssim$~10$^{20.5}$~cm$^{-2}$) are oriented preferentially parallel to the magnetic field\footnote{Note that, as the authors do not include self-gravity, they do not make any statement about structures at higher $N$.}. This is in excellent agreement with the results presented here (Fig.~\ref{fig:PRS_CF}), indicating that our results do not strongly depend on the chosen angle between the initial magnetic field and the colliding flow direction. This is also supported by the results of the SILCC-Zoom simulations, where the initial SN shocks responsible for forming the clouds do not have any preferred direction with respect to the initial magnetic field. Furthermore, also turbulent box simulations \citep{Heitsch01,Ostriker01,Li04,Collins11,Hennebelle13,Soler13,Li15,Zamora17,Mocz18} show similar results concerning the relative orientation of the magnetic field and gas structures. We thus speculate that the orientation of the initial magnetic field with respect to the (turbulent) flow direction has only a moderate impact on the relative orientation of (column) density structures and the magnetic field. As discussed before, the (observed) orientation is rather influenced by the strength of the magnetic field as well as projection effects. \section{Conclusions} \label{sec:conclusion} We present synthetic dust polarization maps of two sets of molecular cloud (MC) formation simulations, colliding flow (CF) simulations and simulations of the SILCC-Zoom project, which models MCs forming from the diffuse, supernova-driven ISM on scales of several 100 pc.
The MHD simulations make use of a chemical network and self-consistently calculate the dust temperature by taking into account radiative shielding. The dust polarization maps are calculated with the freely available code POLARIS \citep{Reissl16,Reissl19}, which includes a self-consistent treatment of the alignment efficiencies of dust grains with variable sizes. We use radiative torque alignment and present synthetic polarization observations at a wavelength of 1.3~mm. We investigate the simulations concerning the relative orientation of the magnetic field and the density ($n$) structures in 3D and the column density ($N$) structures in 2D. For the latter we apply the Projected Rayleigh Statistic (PRS) introduced by \citet{Jow18}. In the following we summarise our main results: \begin{itemize} \item We investigate several CF simulations with increasing magnetic field strength. For these, the analyses of the (observed) relative orientation of the magnetic field in 3D and 2D agree with each other: For magnetic field strengths below $\sim$ 5 $\mu$G, the field has a parallel or random orientation with respect to the $n$- and $N$-structures over the entire range of values. \item Only for CF runs with strong magnetic fields \mbox{($\gtrsim$ 5 $\mu$G)} does a flip from parallel orientation at low values of $n$ and $N$ to perpendicular orientation at high values of $n$ and $N$ occur. The flip in 3D occurs at $n_\rmn{trans}$ $\simeq$ 10$^3$ cm$^{-3}$ and in 2D at $N_\rmn{trans}$ = 10$^{21 - 21.5}$ cm$^{-2}$. \item The SILCC-Zoom simulations all have an initial field strength of 3 $\mu$G and show a flip to a preferentially perpendicular orientation of the magnetic field and filamentary sub-structures at densities \mbox{$n_\rmn{trans}$ $\simeq$ 10$^{2 \pm 0.5}$ cm$^{-3}$}. \item Based on our results, we suggest that the flip in magnetic field orientation occurs if the cloud's mass-to-flux ratio, $\mu$, is close to or below the critical value of 1.
For typical MCs this corresponds to a magnetic field strength around \mbox{3 -- 5 $\mu$G}, which roughly agrees with the strength of the magnetic field in our Galaxy. \item However, our results clearly demonstrate that projection effects can strongly influence the results of the PRS analysis (in 2D), thus reducing its power to determine the relative orientation of the magnetic field: the observed PRS of the SILCC-Zoom simulations show significant variations among the different runs and different LOS. If a flip in orientation is present, it typically occurs around $N_\rmn{trans}$ $\simeq$ 10$^{21 - 21.5}$ cm$^{-2}$, but often the column density-based PRS does not show any flip at all. \item These projection effects can also explain the observed variety in the shape of the PRS, i.e. the magnetic field orientation, of recent observations \citep{PlanckXXXV,Soler17b,Soler19,Jow18,Fissel19}: even if in 3D the relative orientation is preferentially perpendicular, in 2D the postulated flip to a perpendicular orientation at high $N$ might not always be observable. They can also explain the different results obtained for different subregions of an individual MC, e.g. of the Vela C molecular cloud region \citep[][but see also \citealt{Soler19}]{Soler17b}. \item The column density of $\sim$10$^{21 - 21.5}$~cm$^{-2}$ at which the flip from parallel to perpendicular orientation occurs, agrees well with the transition point from sub- to supercritical magnetic fields in the ISM \citep{Crutcher12}. This further supports the proposed idea of a connection between both transitions. \item We find that the quantities ($C$, $A_1$ and $A_{23}$), which govern the evolution of the relative orientation in the analytical theory of \citet{Soler17a}, show a wide range of values. We show that their mean values can lead to misleading results in the theory of \citet{Soler17a} and investigate the impact of randomly varying values within the theory.
We demonstrate that due to these variations, even slightly negative mean values of $C$, $A_1$ and $A_{23}$ can result in a preferentially perpendicular orientation. \item Finally, we do not find a correlation between the shape of the PRS and the column density PDF. \end{itemize} \section*{Acknowledgements} The authors would like to thank the anonymous referee for the very constructive report which helped to significantly improve the paper. DS and SW acknowledge the support of the Bonn-Cologne Graduate School, which is funded through the German Excellence Initiative. DS and SW also acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) via the Collaborative Research Center SFB 956 ``Conditions and Impact of Star Formation'' (subprojects C5 and C6). SW acknowledges support via the ERC starting grant No. 679852 "RADFEEDBACK". SR and RSK acknowledge support from the Deutsche Forschungsgemeinschaft via the SFB 881 ``The Milky Way System'' (subprojects B1, B2, and B8) and via the Priority Program SPP 1573 ``Physics of the Interstellar Medium'' (grant numbers KL 1358/18.1, KL 1358/19.2). RSK acknowledges funding from the Heidelberg Cluster of Excellence {\em STRUCTURES} in the framework of Germany's Excellence Strategy (grant EXC-2181/1 - 390900948). JDS is funded by the European Research Council under the Horizon 2020 Framework Program via the ERC Consolidator Grant CSF-648 505. The FLASH code used in this work was partly developed by the Flash Center for Computational Science at the University of Chicago. The authors acknowledge the Leibniz-Rechenzentrum Garching for providing computing time on SuperMUC via the project ``pr94du'' as well as the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu). \section*{Data Availability} The data underlying this article can be shared for selected scientific purposes after request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} The ABJM theory is the three dimensional ${\cal N}=6$ $U(N)\times U(N)$ superconformal Chern-Simons theory with level $(k, -k)$, dual to type IIA string theory on the AdS$_4\times \mathbb{CP}_3$ background \cite{ABJM}. Some tests of this duality have been carried out, largely based on integrability, with indications of additional structures compared to the well known AdS$_5$/CFT$_4$ counterpart \cite{Minahan:2008hf}-\cite{Minahan2}. The type IIA supergravity description is dual to the large $N$ planar limit of the Chern-Simons theory, where one takes $N,\, k \rightarrow \infty$ while holding the 't Hooft coupling $\lambda= N/k$ fixed. Some probes of the Chern-Simons plasma at finite temperature were carried out recently via the consistent, $\mathbb{CP}_3$ invariant dimensional reduction of the type IIA supergravity \cite{BakYun}. Alternatively, a finite temperature plasma can be completely characterized by towers of static length (inverse mass) scales. These arise as the spatial decay lengths of perturbations created by local operators inserted at a point of the plasma. In Ref.~\cite{BakMin}, these scales were scanned for the low-lying bosonic modes, from which the true mass gap $m_g$ (the lowest overall) and the Debye screening mass $m_D$ (the lowest in the $CT$-odd sector of the theory \cite{Arnold:1995bh}) were found for the Chern-Simons plasma. Including Yang-Mills plasmas, these two scales represent well the universal characteristics of a strongly coupled plasma. For instance, the ratios $m_D/m_g$ for the ${\cal N}=4$ SYM theory and for the two-flavor model of QCD match each other in the strong coupling limit, supporting such a picture \cite{Karch}. Leading thermodynamic corrections of the ABJM theory in the small $\lambda$ expansion are explored in Ref.~\cite{Smedback:2010ji}.
In this paper, we note that the ABJM theory possesses $SU(4) \times U(1)$ R-symmetries and consider its particular sector in which one turns on a finite $U(1)$ number density. On the type IIA supergravity side, this sector is described by the $U(1)$ charged Reissner-Nordstrom (RN) AdS black brane solution, whose physics will be our basic concern for the study of the field theory at strong coupling \cite{BakYun}. We begin with the thermodynamic stability of the RN AdS black brane solution and prove that it is thermodynamically stable at all temperatures including the zero temperature limit. This is contrasted with the R-charged black holes in the type IIB supergravity, which are dual to the R-charged sector of the ${\cal N}=4$ super-Yang-Mills (SYM) theory. The R-charged solution becomes thermodynamically unstable below a certain temperature and the validity of the solution itself is then lost completely \cite{Son:2006em}. Thus the RN AdS solution of this note is, up to now, the only known example where one has thermodynamic stability of the gravity solution at all temperatures while the dual field theory description is precisely known at the same time. In other words, we find that the RN AdS black brane solution is the relevant dual gravity description for the finite temperature ABJM theory with a chemical potential in the whole temperature region including zero temperature, which is not the case for the ${\cal N}=4$ SYM theory. We go on to study the phase structure and transitions occurring within this charged sector. These phase transitions turn out to be of second order, with the $SU(4)$ R-symmetry broken spontaneously \cite{Klebanov:1999tb}. For this, we scan the behavior of supergravity modes dual to primary operators of dimension $\Delta$ in the RN black hole background \cite{Nilsson:1984bj}. Among them, one finds bosonic modes with mass squared in the range $-9/4 \le m^2 < -3/2$ which are responsible for the phase transition.
As will be further explained later on, the only possibility for the present case is $m^2=-2$, corresponding to one $\Delta =1$ operator of the $SU(4)$ representation {\bf 15} and two $\Delta=2$ operators of the $SU(4)$ representations {\bf 15} and {\bf 84}, whose detailed operator contents will be specified below. The transition related to the condensate of the $\Delta=1$ operator occurs at a critical temperature higher than that of the $\Delta =2$ operators. Above the transition temperature, the scalar field has to be set to zero to satisfy the required boundary condition, while the other part of the original RN black brane solution remains intact. Below the transition temperature the scalar field begins to develop a profile whose boundary behavior represents a condensation of an operator expectation value without introducing any external source field. Of course the geometry is then back-reacted accordingly. The solution represents the condensation of the operator expectation values below the transition temperature, which takes a particular direction in the space of the $SU(4)$ representation {\bf 15}. Thus the $SU(4)$ symmetry is spontaneously broken down to its little group. We then study the critical exponents of the phase transitions and show both analytically and numerically that the exponents precisely match those of mean field theory. We shall also show that there is a further symmetry-breaking phase transition of the same nature at a lower temperature, which involves condensation of the $\Delta =2$ operators. In Section 2, we discuss some relevant properties of the ABJM theory. Section 3 deals with the gravity description of the RN black brane and the related physics in the field theory. In Section 4, our system described by the RN black brane solution is shown to be thermodynamically stable. In Section 5, we discuss the development of gravitational instabilities of scalar modes below the critical temperature.
This will fix the critical temperature, which depends on the dimension of the corresponding operators. In Section 6, we discuss the phase transition by studying the gravity solution representing the condensation of operator expectation values, including the back reaction on the bulk geometry. The critical exponents are shown to agree precisely with those of mean field theory. The last section is devoted to interpretations and concluding remarks. \section{Thermodynamics of ABJM theory at small $\lambda$} The on-shell degrees of the ABJM theory consist of bosonic and fermionic matter fields $Y^I$ and $\Psi_I\ (I=1,2,3,4)$ together with two gauge fields $A_m$ and $\bar{A}_m$. The complex scalar fields $Y^I$ are in the representation $({\bf N}, {\bf \bar{N}}, {\bf 4})$ of the $U(N) \times U(N)$ gauge as well as the $SU(4)$ R-symmetries. There are also the complexified Majorana fermions $\Psi_I$, which are in the representation $({\bf N}, {\bf \bar{N}}, {\bf \bar{4}})$. The gauge fields $A_m$ and $\bar{A}_m$, in the adjoint representations of the first $U(N)$ and the second $U(N)$ respectively, are coupled to the matter fields $\Phi=(Y^I\!,\,\, \Psi_I)$ by \bea D_m \Phi = \partial_m \Phi +i A_m\,\Phi -i \Phi\, \bar{A}_m\,. \eea For further details of the Lagrangian and the notation used in this note, see for example Ref.~\cite{BakRey1}. The theory possesses the global ${\cal N}=6$ 3d superconformal symmetry $OSp(6|4)$, whose bosonic part is given by the 3d conformal symmetry $SO(3,2)$ multiplied by the $SU(4)$ R-symmetry. This corresponds to the isometry of AdS$_4 \times \mathbb{CP}_3$ on the type IIA supergravity side. But one crucial difference from the ${\cal N}=4$ case is the fact that there is an extra global $U(1)$ symmetry. The relevant charge is associated with the $U(1)$ phase transformation of the complex fields by an overall phase factor.
Denoting overall $U(1)$ parts of gauge fields by $A_m^{U(1)}$ and ${\bar{A}}^{U(1)}_m\!\!\!$,\,\,\, the above charge is only coupled to the relative $U(1)$ gauge field $A_m^{U(1)}-{\bar{A}}^{U(1)}_m\!\!\!$. Now note that the fields $Y^I$ and $\Psi_I$ carry the $U(1)$ charges $(-1,+1)$ or $-1$ in terms of the relative one. Due to the Gauss law constraint of the Chern-Simons theory, these charges are always accompanied by $U(1)$ magnetic fluxes\footnote{The unit flux for the current case with $e=1$ is given by $h/e= 2\pi \hbar = 2\pi$ where we set $\hbar=1$.} $ {2\pi\over k}\, (+1,-1)$. Thus for general $k$ including even the nonabelian contributions, the basic degrees in the field theory in the deconfined phase exhibit an anyonic nature due to the (generically nonabelian) statistical interactions between them. But due to the large $N$ planar limit where we send $N$ and $k$ to infinity at the same time, the statistical interactions drop out since they are of higher order in $1/k$. Therefore, for instance, the total effective number of degrees in the weakly coupled small $\lambda$ limit is simply proportional to $N^2$. Namely the entropy density at temperature $T$ takes a value \cite{ABJM} \bea {\cal S} (\lambda\rightarrow 0) ={21\zeta(3)\over \pi} N^2 T^2 \eea in the $\lambda \rightarrow 0$ limit. (The free energy density is always related to the entropy density by ${\cal F}= - {\cal S}\, T/3$ as dictated by the conformal symmetry.) On the other hand in the strongly coupled large $\lambda$ region, the system is described by the black brane solution in the gravity side whose Bekenstein-Hawking entropy density reads \cite{ABJM} \bea {\cal S}(\lambda)= {16\pi^2\over 27 \sqrt{2}} {N^2 T^2\over \sqrt{\lambda}}\,. \label{entropystrong} \eea The appearance of the ${1\over \sqrt{\lambda}}$ suppression factor is not understood from the direct computation of the field theory. 
This drastic change of the number of degrees might be related to some remnant of the anyonic interactions, but we do not have any supporting evidence for this picture. Our main concern in this paper is the sector of the ABJM theory where one turns on a finite $U(1)$ number density or, equivalently, the corresponding chemical potential $\mu$. First consider the small $\lambda$ region of the planar limit at high enough temperature, $T\gg \mu$. The fermions can be ignored in this limit: in the effective theory compactified on the thermal circle, they can be integrated out in the weak coupling and high temperature limit, as their Matsubara frequencies start at $\pi T$. Of course the $U(1)$ current gets contributions from both bosonic and fermionic degrees of the theory; their currents are conserved not separately but only in sum. Hence at small $\lambda$, instead of building up a Fermi surface, occupation of bosonic states is energetically preferred. The scalar fields in this effective 2d description acquire a mass \cite{Yamada:2006rx}, \bea m^2(T) = -\mu^2 + m_T^2\,, \eea where $m^2_T$ is the thermal mass correction. For $\lambda \ll 1$, the thermal mass has the expression \cite{Smedback:2010ji} \bea m^2_T = {118\over 3}\, \lambda^2 (\log \lambda)^2\, T^2 + O(\lambda^2 \log\lambda)\,. \eea The theory lies in the unbroken phase if $m^2 (T) \ge 0$. On the other hand, the system in the symmetric phase becomes unstable when $m^2 (T) < 0$, i.e. for $ \mu\ \ll\ T\ < \ \sqrt{3\over 118}\,\, {\mu \over \lambda|\log\lambda|}$. As argued in Ref.~\cite{Yamada:2006rx}, some of the operators of the field theory may then acquire nonvanishing expectation values, possibly leading to an $SU(4)$ R-symmetry broken phase. But the precise fate of the system at weak coupling requires further study, which is beyond the scope of the present paper.
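The instability window quoted above can be evaluated with a short script. This is our illustrative sketch (parameter values are not taken from the paper): it checks that $m^2(T)$ changes sign exactly at $T=\sqrt{3/118}\,\mu/(\lambda|\log\lambda|)$.

```python
import math

def m2_thermal(T, mu, lam):
    """Leading weak-coupling scalar mass squared,
    m^2(T) = -mu^2 + (118/3) lam^2 (log lam)^2 T^2."""
    return -mu**2 + (118.0 / 3.0) * lam**2 * math.log(lam)**2 * T**2

def T_upper(mu, lam):
    """Upper edge of the unstable window, sqrt(3/118) mu / (lam |log lam|)."""
    return math.sqrt(3.0 / 118.0) * mu / (lam * abs(math.log(lam)))

lam, mu = 1.0e-2, 1.0          # illustrative values, not from the paper
Tc = T_upper(mu, lam)
print(Tc, m2_thermal(0.99 * Tc, mu, lam), m2_thermal(1.01 * Tc, mu, lam))
```

Just below $T_c$ the mass squared is negative (symmetric phase unstable), and just above it is positive, as stated in the text.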
\section{RN black brane and type IIA supergravity on AdS$_4 \times \mathbb{CP}_3$} In the strongly coupled region of large $\lambda$, the description of the ABJM theory by the type IIA supergravity on AdS$_4\times \mathbb{CP}_3$ is appropriate since the geometry there is weakly curved. The type IIA spectra compactified on the $\mathbb{CP}_3$ space were obtained quite some time ago \cite{Nilsson:1984bj}. Each mode of the resulting 4d supergravity is dual to a gauge invariant primary operator whose scaling dimension $\Delta$ is protected against quantum corrections. The presence of these bulk modes shows how the basic degrees of freedom are organized on the strongly coupled side of the ABJM theory. Thus the study of these bulk modes for a given supergravity background will be our main tool for probing the physics at strong coupling. The bulk modes consist of infinite towers of spectra from spin zero to spin two. We shall briefly describe here some of the relevant low-lying modes for later discussions. Let us begin with the case of spin one: The lowest are two massless bulk gauge fields that are dual to the $\Delta =2$ current operators on the field theory side. One is for the current of the $SU(4)$ singlet representation [\,$(000)$ in the $SU(4)$ Dynkin label notation\,], which is identified with that of the extra $U(1)$ global symmetry. On the gravity side the $U(1)$ is related to the $U(1)$ isometry of the M-theory circle from the 11d perspective and its charge is carried by D0 branes in a rough sense. As was shown explicitly in Ref.~\cite{BakYun}, this gauge field arises as the linear combination \bea A_\mu= A^{D0}_\mu + 3 A^{D4}_{\mu}\,, \eea where $A^{D0}_\mu $ and $A^{D4}_\mu $ couple respectively to the D0 branes and the D4 branes wrapping the $\mathbb{CP}_2$ four-cycle inside $\mathbb{CP}_3$. The other combination $\tilde{A}_\mu= A^{D0}_\mu - A^{D4}_{\mu}$ becomes massive by the Higgs mechanism with $m^2=12$, and couples to the $\Delta =5$ boundary current operator.
It should also be noted that the monopole operator with overall field theory $U(1)$ charges $n(k,-k) \,\, (n\in {\bf Z})$ is one example of a heavy BPS state coupled to this $U(1)$ bulk gauge field \cite{ABJM}. The second massless gauge field is in the adjoint ${\bf 15}$ [$(101)$] of $SU(4)$ and couples to the boundary $SU(4)$ current operator of dimension $\Delta =2$. \begin{table}[ht] { \renewcommand{\arraystretch}{1.2} \begin{tabular*}{153mm}{@{\extracolsep\fill}|l||l|l|l|l|l|} \hline \hline \phantom{aaaaaaaaaa} & spin 0 & spin ${1\over 2}$ & spin 1 & spin ${3\over 2}$ & spin 2 \\ \hline \hline $\Delta=1$ & $(101)^+_{-2}$ & & & & \\ \hline $\Delta={3\over 2}$ & & $(002)_{0}$ $(200)_{0}$ & & & \\ $ $ & & $(010)_{0}$ & & & \\ \hline $\Delta=2$ & $(202)^+_{-2}$ \ $(101)^-_{-2}$ & & $(000)^-_{0}$ \ $(101)^-_{0}$ & &\\ \hline $\Delta={5\over 2}$ & & $(103)_{1}$ $(301)_{1}$ & & $\,(010)_1\,$ & \\ $ $ & & $(111)_{1}$ & & & \\ \hline $\Delta=3$ & $(303)^+_{0}$ \ $(202)^-_{0}$ & & $(101)^-_{2}$ \ $(202)^-_{2}$ & & $(000)^+_{0}$ \\ & $(400)^-_{0}$ $(004)^-_{0}$ & & $(210)^-_{2}$ $(012)^-_{2}$ & & \\ & $(210)^-_{0}$ $(012)^-_{0}$ & & & & \\ & $(020)^-_{0}$ & & & & \\ \hline {\small $\mathbb{CP}_3\,\,$} singlets & $(000)^+_{4}$ $(000)^-_{10}$ & & $(000)^-_{0}$ $(000)^-_{12}$ & & $(000)^+_{0}$ \\ & $(000)^+_{18}$ & & & & \\ \hline \end{tabular*} \caption {\small The low lying spectra up to the operator dimension 3 are presented. The upper and lower indices denote respectively the parity and the mass squared value $m^2$ of the supergravity mode. The whole spectra of the $\mathbb{CP}_3$ singlet sector are presented in addition.} \label{tableone} } \end{table} For the bulk scalar modes, the lowest ones with $m^2=-2$ are relevant for our later discussions, which correspond to $\Delta=1,\,\,2$ operators.
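For reference (our addition, not from the original text), the scalar entries of the table are consistent with the standard mass-dimension relation for a scalar in AdS$_4$, $\Delta(\Delta-3)=m^2$, with the Breitenlohner-Freedman bound $m^2\ge -9/4$; a minimal check:

```python
import math

def scalar_dims(m2, d=3):
    """Roots of Delta (Delta - d) = m^2 for a scalar in AdS_{d+1};
    real only at or above the BF bound m^2 >= -d^2/4."""
    disc = d * d + 4.0 * m2
    if disc < 0.0:
        raise ValueError("below the Breitenlohner-Freedman bound")
    s = math.sqrt(disc)
    return (d - s) / 2.0, (d + s) / 2.0

print(scalar_dims(-2.0))        # the m^2 = -2 modes: Delta = 1 and 2
print(scalar_dims(-9.0 / 4.0))  # BF-saturating case: double root 3/2
print(scalar_dims(4.0))         # e.g. the CP_3 singlet scalar with m^2 = 4
```

The two roots of $m^2=-2$ reproduce exactly the pair of dimensions $\Delta=1$ and $\Delta=2$ discussed in the text.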
The $\Delta =1$ scalar mode couples to the boundary operator of the $SU(4)$ representation {\bf 15} [$(101)$] which takes the form \bea O^{I}_J = {\tr}\,\, Y^I Y^\dagger_J - (\rm trace\ part)\,, \label{op1} \eea and has 15 independent components. The $\Delta=2$ modes involve two field theory operators of the $SU(4)$ representations {\bf 15} [$(101)$] and {\bf 84} [$(202)$] whose operator contents read \bea && \tilde{O}_{I}^J = {\tr}\,\, \Psi_I \Psi^{\dagger J} - (\rm trace\ part)\,, \nn\\ && O^{IJ}_{KL} = {\tr}\,\, Y^{(I} Y^\dagger_{(K} Y^{J)} Y^\dagger_{L)} - (\rm trace\ part) \eea where the trace part denotes any contractions between the upper and the lower indices. The fermionic bulk modes start with $|m|=0$ corresponding to the operator dimension $\Delta={3\over 2}$: These are boundary operators of the $SU(4)$ representations {\bf 10} [$(002)$], {\bf 6} [$(010)$], and $\overline{\bf 10}$ [$(200)$]. The low lying modes up to $\Delta = 3$ are listed in Table \ref{tableone}. A few comments are in order. We note that there are no bulk supergravity modes that are charged under the massless $U(1)$ gauge field. The knowledge about the compactification spectra tells us only about linearized fluctuations of modes above the AdS$_4$ or the AdS$_4$ black brane solution. In Ref.~\cite{BakYun}, a consistent $\mathbb{CP}_3$ compactification keeping all $SU(4)$ invariant modes is carried out explicitly. Any solutions of this 4d system can be consistently embedded into the original 10d supergravity theory. Our starting Lagrangian for the further discussion is \be {\cal L}=\frac{1}{2 \kappa^2} \Big(\mathcal{R} +6 - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \sum_{a=1}^{n}\big(\, (\nabla\phi_a)^2 + m_a^2\,\,\phi_a^2\,\big) \,\Big)\label{Lagrangian} \ee where\footnote{In the bulk, any dimensionful quantity is measured with respect to the AdS radius scale $\ell$ which we set to be unity for the notational simplicity.} \be \kappa^{-2}= {N^2\over 6\pi \sqrt{2\lambda}}\,.
\label{newton} \ee In this action, the Einstein-Maxwell part with the negative cosmological constant is a fully consistent truncation \cite{BakYun}, while the remaining scalar part is only valid up to quadratic order. Below we shall show that, above the critical temperature of the $\Delta=1$ operator, the stable solution of the system is given by the RN black brane solution with vanishing scalar fields. This part of the solution is fully consistent, as just stated. Below the critical temperature, the relevant scalar field begins to develop, and the corresponding set of solutions is valid only if the magnitude of the scalar field is small enough. But the set of solutions in the near critical region carries all the information about the universal nature of the phase transition, including the relevant critical exponents. Similar models have been discussed many times in the bottom-up approach \cite{Horowitz einstein scalar,Horowitz criticality,Hartnoll:2008kx}. However, it is hard to identify the dual field theory and the corresponding operator for condensation in this bottom-up approach. In our work, on the contrary, the identification of the physics of the boundary CFT is straightforward. Below we take the following ansatz, \bea ds^2&=&e^{2A(r)}\bigg(-h(r) dt^2 + dx^2 + dy^2\bigg)+\frac{dr^2}{h(r)}\,\,,\nonumber\\ A_{t}&=&A_{t}(r)\;,\;\;\;\;\phi_a=\phi_a(r)\,, \label{metric1} \eea to describe a finite temperature system with plane plus time ($\mathbb{R}^2\times \mathbb{R}$) translational symmetries.
Plugging the above ansatz into the equations of motion, we are led to the following equations \cite{Gubser:2008pf}: \bea && A''+\frac{1}{2}\sum_{i=1}^{n}\phi_i'^2=0\,\,,\nonumber\\ && h''+3A'h'-e^{-2A}\;F_{tr}^2=0\,\,,\nonumber\\ && (e^{A}\;F_{tr})'=0\;\,\,,\nonumber\\ && h\phi_a''+(3A'h+h')\phi_a' - m_a^2\phi_a=0\,\label{equations of motion} \eea with a constraint, \be {h}\sum_{a=1}^{n}{\phi'}_a^2 -\frac{1}{2}e^{-2A}\;F_{tr}^2 -2A'h' -6h(A')^2 = -{6}+\sum_{a=1}^{n} m_a^2\phi_a^2\,\,.\label{constraint} \ee The third equation in (\ref{equations of motion}) can be solved by \be F_{rt} = 2q e^{-A(r)}\,, \label{fund} \ee leading to the set of equations \bea && A''+\frac{1}{2}\sum_{i=1}^{n}\phi_i'^2=0\,\,,\nonumber\\ && h''+3A'h'-4 q^2e^{-4A}=0\,\,,\nonumber\\ && h\, \phi_a''+(3A'h+h')\phi_a' - m_a^2\,\phi_a=0\,,\label{eqom} \eea which will be the starting point of our subsequent analysis. The black brane solution, \be A(r)=r\,, \ \ h(r)= 1- e^{-3r+3r_H}\,,\ \ F_{rt}=\phi_a=0\,, \ee describes the uncharged sector of the ABJM theory at finite temperature. Due to the conformal symmetry of the black brane background, the finite temperature phase here depends on only one dimensionful parameter which can be taken as the temperature $T$. Thus this uncharged sector possesses only one finite temperature phase corresponding to the high temperature limit. The temperature $T$ is identified with the Hawking temperature of the black brane, \be T= {1\over 4\pi} h'(r_H) e^{A(r_H)}={3\over 4\pi} e^{r_H}\,, \ee where $r=r_H$ is the location of horizon. The Bekenstein-Hawking entropy density is given by the expression ${\cal S}$ in (\ref{entropystrong}). The energy density, the free energy density and the pressure are related to the entropy density by ${\cal E}= {2\over 3} {\cal S}T= 2p = -2 {\cal F}$ as simply dictated by the conformal symmetry of the background. 
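As a simple numerical sanity check (ours, not part of the paper), one can verify by finite differences that the quoted uncharged metric function solves $h'' + 3A'h' = 0$ with $A(r)=r$, $q=0$, and reproduces the Hawking temperature $T=\frac{1}{4\pi}h'(r_H)e^{A(r_H)}=\frac{3}{4\pi}e^{r_H}$; the horizon location is an illustrative choice.

```python
import math

R_H = 0.3                          # illustrative horizon location

def h(r):                          # metric function h(r) = 1 - e^{-3(r - r_H)}
    return 1.0 - math.exp(-3.0 * (r - R_H))

def dh(r, eps=1.0e-6):             # centered first derivative
    return (h(r + eps) - h(r - eps)) / (2.0 * eps)

def residual(r, eps=1.0e-4):       # h'' + 3 A' h' with A(r) = r and q = 0
    d2h = (h(r + eps) - 2.0 * h(r) + h(r - eps)) / eps**2
    return d2h + 3.0 * dh(r)

T_hawking = dh(R_H) * math.exp(R_H) / (4.0 * math.pi)
print(residual(1.0))                              # ~0 on-shell
print(T_hawking, 3.0 * math.exp(R_H) / (4.0 * math.pi))
```

The residual vanishes to finite-difference accuracy, and the numerically evaluated surface-gravity formula agrees with the closed form $T=\frac{3}{4\pi}e^{r_H}$.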
Some probes of the system are investigated in Ref.~\cite{BakYun} by studying the response of the scalar and current operators to external perturbations. The static length scales including the true mass gap as well as the Debye mass are further studied in Ref.~\cite{BakMin}. Our main interest in this note is the RN black brane solution, \bea \label{ABJMcharged} A(r)=r\;,\;\;\;\;h(r)=1-\epsilon e^{-3r}+q^2e^{-4r}\;,\;\;\;\;F_{rt}=2\,{q}\;e^{-r}\;,\;\;\;\;\phi_a=0\,, \eea which is an exact solution of the original 10d equations of motion. The parameters $\epsilon$ and $q$ here are respectively proportional to the mass and the charge densities of the RN black brane. The horizon is located at $r=r_H$ with $h(r_H)=0$, which explicitly reads \be \epsilon\, e^{-3 r_H}= 1+ q^2 e^{-4 r_H}\,. \ee The minimum of $h(r)$ occurs at \be e^{-r_m}={3\epsilon\over 4q^2}\,, \ee with $h'(r_m)=0$. The no-nakedness condition for the mass and charge requires $h(r_m) \ \le \ 0$, leading to the inequality \be \Big({\epsilon\over 4}\Big)^2 \ \ge \ \Big({q\over \sqrt{3}}\Big)^3\,, \ee where the inequality is saturated at zero temperature. The Hawking temperature of the RN black brane becomes \be T={3\epsilon \, e^{-2 r_H}\over 4\pi}\Big( 1-{4q^2\over 3\epsilon}\, e^{-r_H} \Big)\,. \ee The entropy, the energy and the charge densities read \be {\cal S}={2\pi\over \kappa^2}\, e^{2 r_H}\,,\ \ \ {\cal E}={\epsilon\over \kappa^2}\,,\ \ \ \rho={2q\over \kappa^2}\,, \ee with ${\cal F}={\cal E}-T{\cal S}$ and $\mu_G={\partial {\cal F}(T,\rho)\over \partial\rho}$ where $\mu_G$ is proportional to the field theory chemical potential $\mu$. Hence the no-nakedness condition now takes the form \be \Big({\kappa^2 \,{\cal E}\over 4}\Big)^2 \ \ge \ \Big({\kappa^2\,\rho\over 2\sqrt{3}}\Big)^3\,. \ee In Ref.~\cite{BakRey2}, this characteristic is attributed to the build-up of a Fermi surface at a finite fermion number density.
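The horizon relation, the quoted form of the Hawking temperature, and the saturation of the no-nakedness bound at $T=0$ can likewise be checked symbolically. A small SymPy sketch (illustrative only):

```python
import sympy as sp

r, rH, q, eps = sp.symbols('r r_H q epsilon', positive=True)

# RN black brane: A = r, h(r) = 1 - eps e^{-3r} + q^2 e^{-4r}
h = 1 - eps*sp.exp(-3*r) + q**2*sp.exp(-4*r)

# horizon condition h(r_H) = 0 fixes eps = e^{3 r_H}(1 + q^2 e^{-4 r_H})
eps_sol = sp.solve(h.subs(r, rH), eps)[0]
assert sp.simplify(eps_sol - sp.exp(3*rH)*(1 + q**2*sp.exp(-4*rH))) == 0

# T = h'(r_H) e^{A(r_H)}/(4 pi) reproduces the quoted temperature formula
T = (sp.diff(h, r)*sp.exp(r)/(4*sp.pi)).subs(r, rH).subs(eps, eps_sol)
T_quoted = (3*eps_sol*sp.exp(-2*rH)/(4*sp.pi))*(1 - 4*q**2*sp.exp(-rH)/(3*eps_sol))
assert sp.simplify(T - T_quoted) == 0

# extremality: T = 0 precisely at e^{2 r_H} = q/sqrt(3), where the
# horizon area stays finite and S = 2 pi e^{2 r_H}/kappa^2 = (pi/sqrt(3)) rho
rH_ext = sp.Rational(1, 2)*sp.log(q/sp.sqrt(3))
assert sp.simplify(T.subs(rH, rH_ext)) == 0
assert sp.simplify(2*sp.pi*sp.exp(2*rH_ext) - (sp.pi/sp.sqrt(3))*2*q) == 0
```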
The weakly coupled massless fermions at finite temperature in general do satisfy an analogous inequality with precisely the same powers. Further support of the picture comes from the fact that the specific heat at low enough temperature is linear as \be C_V= \gamma_V\,\, T\,, \ee which is another important characteristic of a Fermi surface. However, there are some additional properties which cannot be understood from the fermion picture. The entropy density even at zero temperature remains finite; namely, \be {\cal S}(T=0)= {\pi \over \sqrt{3}}\,\, \rho\,, \ee with a finite horizon size at $e^{2r_H(T=0)}={q\over \sqrt{3}}$. It is not clear where this entropy comes from even at zero temperature. Recall further that the $U(1)$ current is conserved only as the sum of the bosonic and fermionic contributions. Thus at weak coupling the build-up of a Fermi surface would not be possible if the bosons were in an unbroken phase and, hence, putting charges into the bosonic states were energetically favored. However the symmetric unbroken phase without condensation will be problematic, as argued in the previous section for the weakly coupled case. Therefore the system should be in some broken phase with a bosonic condensate, at least in the weakly coupled regime. In later sections, we would like to study the nature of the phases occurring for the RN black branes, on the strongly coupled side, by the condensation of some operator expectation values. In the near boundary region of large $r$, general asymptotically AdS solutions should behave as \bea && A(r)= a_1 r + a_0 + \cdots \,,\nn\\ && h(r)= h(\infty) + h_3 e^{-3 A} + \cdots\,, \nn\\ && F_{rt} (r)= 2q e^{-A}\,, \label{bdinfty} \eea and the boundary data for the scalar fields will be specified below.
The entropy, energy, charge densities and the temperature have the expressions, \bea &&{\cal S}={2\pi\over \kappa^2}\, e^{2 A(r_H)}\,,\ \ \ {\cal E}=-{h_3\over \kappa^2 h(\infty)}\,, \nn\\ && \rho ={2q\over \kappa^2 }\,, \ \ \ T={1\over 4\pi} e^{A(r_H)} {h'(r_H)\over \sqrt{h(\infty)}}\,. \eea In this computation all the thermodynamic quantities are measured with respect to the boundary time $\sqrt{h(\infty)}\, t$, such that one may bring the boundary metric of (\ref{metric1}) into the standard form. \section{Thermodynamic stability of the RN black brane} In this section we shall discuss the thermodynamic stability of the RN black brane solution (\ref{ABJMcharged}). The thermodynamic stability is ensured if the Hessian (second-derivative) matrix of the energy with respect to its thermodynamic variables has no negative eigenvalues. With any negative eigenvalue, the system becomes thermodynamically unstable under small fluctuations that drive it toward some other stable point \cite{Gubser:2000mm,Hubeny:2002xn,Hirayama:2002hn}. In type IIB theory, an $SO(6)$ R-charged black brane solution is available, but it is observed that this solution exhibits thermodynamic instabilities at temperatures lower than a certain critical value \cite{Son:2006em}. The fate of the black brane in this unstable regime remains unknown. Unlike this type IIB counterpart, our RN black brane solution turns out to be thermodynamically stable. To show this, note first that the energy density can be expressed as \be {\cal E}= {\kappa\, {\cal S}^{3\over 2}\over (2\pi)^{3\over 2}} \Big(1+ {\pi^2\rho^2\over {\cal S}^2} \Big)\,.
\ee The components $H_{ij}$ of the Hessian matrix are given by \bea &&H_{11} =\frac{\partial^2{\cal E}}{\partial {\cal S}^2}\ =\ {3\kappa \over 4(2\pi)^{3\over 2} {\cal S}^{1\over 2}} \Big(1+ {\pi^2\rho^2\over {\cal S}^2} \Big)\,,\nn\\ && H_{12}=\frac{\partial^2 {\cal E}}{\partial {\cal S}\partial \rho}=-{\kappa \pi^2\over (2\pi)^{3\over 2}} {\rho\over {\cal S}^{3\over 2}}\,,\nn\\ && H_{22} = \frac{\partial^2{\cal E}}{\partial \rho^2}\ =\ {2\kappa \pi^2\over (2\pi)^{3\over 2} {\cal S}^{1\over2}} \,. \eea The determinant of $H_{ij}$ then becomes \be {\rm det}\,\, H = {\kappa^2\over 16\pi {\cal S}} \Big(3+ {\pi^2\rho^2\over {\cal S}^2} \Big) \ee together with ${\rm tr} \,\, H \ > \ 0$. This proves that both eigenvalues are positive. One may also consider turning on small quantities of the three further charges $\rho_a \,\,(a=1,2,3)$ in the Cartans of the $SU(4)$ R-symmetry. Including these contributions to quadratic order, the energy density has the expression \be {\cal E}= {\kappa\, {\cal S}^{3\over 2}\over (2\pi)^{3\over 2}} \Big(1+ {\pi^2 \over {\cal S}^2} \, (\rho^2 +\rho_a\rho_a)\Big)\,. \ee Using now the $5\times 5$ Hessian matrix, one may easily verify that the system at $\rho_a=0$ is thermodynamically stable even in this enlarged space. Thus we conclude that the RN black brane solution is thermodynamically stable. \section{Geometrical instability of the RN black brane } In this section, we shall investigate possible geometrical instabilities of the RN black brane system with a particular probe mode turned on. Depending on the mass of the supergravity mode, the RN black brane may become geometrically unstable below a certain critical temperature $T_c$, leading to a new black brane solution wearing nontrivial hair. For scalar modes, the appearance of the instability may be understood as follows \cite{Horowitz einstein scalar,Horowitz criticality,Hartnoll:2008kx}.
Note that the usual geometrical stability condition for the AdS$_{d+1}$ spacetime is given by the so-called Breitenlohner-Freedman (BF) bound \cite{Breitenlohner:1982bm}, \be -{d^2\over 4} \ \le\ m^2\,, \ee which is indeed respected by any scalar mode of the present theory. This is the relevant condition for the stability of the near boundary region of the RN black brane solution that is asymptotic to AdS$_4$. On the other hand, the near horizon geometry of the extremal RN black brane in $d+1$ dimensions is given by AdS$_2\times \mathbb{R}^{d-1}$ with a scaled AdS radius of $1/\sqrt{d(d-1)}$. Hence for scalars, the BF bound of this region is violated if \be m^2 \ <\ -{d(d-1)\over 4} \,. \ee Therefore for our case of $d=3$, the instability below a certain temperature may occur if the mass squared of a bulk scalar lies in the range \be -{9\over 4}\ \le\ m^2 \ < \ -{3\over 2}\,. \ee For the higher spin fields with spin $s\ge {1\over 2}$, one may show that there is no potential instability once they are neutral under the $U(1)$ gauge field of the RN black brane. Thus scanning the supergravity modes in Table \ref{tableone}, one finds that the possible gravitational instabilities are limited to the case of scalar fields with $m^2=-2$. As explained in Section 3, these bulk scalars are dual to the field theory operators of dimension $\Delta =1$ and $2$. To exhibit the instability of the black brane, let us study the positivity of the energy functional for the scalar field. The energy of a scalar field reads \be E = \int dr dx dy \sqrt{-g} \bigg[ |g^{tt}| (\dot{\phi})^2 + g^{rr} (\phi')^2 + g^{xx} (\partial_x\phi)^2 + g^{yy} (\partial_y\phi)^2 + m^2\phi^2 \bigg]\,, \ee where dots and primes denote derivatives with respect to $t$ and $r$ respectively.
If there exists any normalizable (probe) mode $\varphi$ of the scalar field $\phi$ which makes this energy functional negative, the geometrical instability of the RN background can be triggered, driving the system to some new stable configuration. In order to find a possible mode of instability, we take $\varphi$ as a function of $r$ only and look for the negative fluctuation mode satisfying \begin{eqnarray} \left( e^{3A}h(r) \varphi' \right)' + 2 e^{3 A} \varphi - \kappa_0^2 \, \varphi = 0, \label{stabilty eom} \end{eqnarray} where $\kappa_0$ is a real constant and we set $m^2=-2$. Of course one may turn on the spatial fluctuation by considering a probe field depending on $x$ and $y$ through $e^{i(k_x x+ k_y y)}$, but this will only increase the energy of the system. Hence the above consideration will be sufficient. The boundary conditions are crucial for the determination of the solution of (\ref{stabilty eom}). At $r=\infty$, we note that the behavior of scalar fields in the near boundary region takes the form, \begin{eqnarray} \varphi \sim s_{\Delta}(x) e^{-(3-\Delta)A(r)} + o_{\Delta}(x) e^{-\Delta A(r)}+ \cdots, \end{eqnarray} where $\cdots$ denote higher order terms. From the standard dictionary of AdS/CFT, the presence of $s_\Delta$ corresponds to turning on an external source term for the dual operator $O_{\Delta}(x)$ while $o_\Delta$ represents the operator expectation value $\langle O_{\Delta}(x)\rangle_{s_\Delta}$ in the presence of the source $s_\Delta(x)$. In our present problem, we would like to consider the system without introducing the source term and, thus, our choice of the boundary condition here and below is $s_\Delta=0$ at $r=\infty$. The other boundary condition is at the horizon of the black brane. Basically we shall require nonsingularity of the configuration there.
For this, we need to expand the equation (\ref{stabilty eom}) into a generalized power series in $r-r_H$, and then we obtain the boundary conditions at the horizon as \begin{eqnarray} \varphi'(r_H)&=& -\frac{2 -\kappa_0^2 e^{-3 A(r_H)}}{h'(r_H)}\varphi(r_H)\,,\nonumber\\ \varphi''(r_H)&=& \frac{ 2-\kappa_0^2 e^{-3 A(r_H)}}{2 \left(h'(r_H)\right)^2} \left( 3 A'(r_H)h'(r_H)+ h''(r_H)+2 -\kappa_0^2 e^{-3A(r_H)} \right)\varphi(r_H)\nn\\ &-&\frac{ 3\kappa_0^2\, A'(r_H)e^{-3A(r_H)}}{2 h'(r_H)}\varphi(r_H), \end{eqnarray} by setting the coefficients of the negative powers of $r-r_H$ to zero. Note that the denominators are given by $h'(r_H)$: if it is too small, we may have trouble in numerical computations. In order to avoid potential numerical instabilities, we use another form of the RN solution given by \begin{eqnarray} &&A(r) = ( 3 - q^2 )r,~~h(r)=\frac{ 1-(1+q^2) e^{-3A(r)} + q^2 e^{-4 A(r)} }{(3-q^2)^2},\nonumber\\&&A_t = - \frac{2q}{3-q^2}\left(e^{-A(r)} -1\right)\,, \ \ \ \phi_a=0\,. \label{rnsol} \end{eqnarray} This solution can be related to (\ref{ABJMcharged}) by the coordinate transformation, \bea && r' = {r-r_H\over 3 - q^2e^{-4 r_H}} ,~~t'=e^{r_H}(3 - q^2e^{-4 r_H})\, t\nonumber\\&& x' = e^{r_H}\,x \,, \ \ \ \ \ \ y' = e^{r_H}\,y\,, \label{ct} \eea with the redefinition of the parameters, \be q'=q\, e^{-2r_H}\,, \ \ \ \ \epsilon'=\epsilon\, e^{-3r_H}\,. \label{pt} \ee After the transformation, we drop primes for notational simplicity. The temperature for this system reads \be T={1\over 4\pi} {({3-q^2})}\,, \ee and the location of the horizon is at $r=0$. Here and below, we shall use this rescaled background for the numerical analysis. Now one can solve the probe equation (\ref{stabilty eom}) numerically to find the temperature below which the negative mode begins to develop. For the numerical analysis, we use the standard shooting method implemented in Mathematica.
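A minimal Python/SciPy sketch of this shooting problem (an illustrative reimplementation, not the authors' Mathematica code) integrates the marginal $\kappa_0=0$ mode of (\ref{stabilty eom}) from the horizon of the rescaled background (\ref{rnsol}) and locates the zero of the extracted source coefficient $s_1$ for the $\Delta=1$ case; the bracketing values of $q$ are guesses chosen around the quoted critical charge:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def s1_coefficient(q, r_eps=1e-3):
    """Source coefficient s_1 of the kappa_0 = 0 probe mode (m^2 = -2, Delta = 1)
    in the rescaled RN background A(r) = (3 - q^2) r of eq. (rnsol)."""
    a = 3.0 - q**2
    h = lambda r: (1 - (1 + q**2)*np.exp(-3*a*r) + q**2*np.exp(-4*a*r))/a**2
    hp = lambda r: (3*(1 + q**2)*np.exp(-3*a*r) - 4*q**2*np.exp(-4*a*r))/a

    def rhs(r, y):  # h phi'' + (3A'h + h') phi' + 2 phi = 0
        phi, dphi = y
        return [dphi, -(3*a + hp(r)/h(r))*dphi - 2.0*phi/h(r)]

    # horizon series with h'(0) = 1, h''(0) = 7q^2 - 9:
    # phi'(0) = -2 phi(0),  phi''(0) = (3a + h''(0) + 2) phi(0)
    phi0, dphi0 = 1.0, -2.0
    ddphi0 = (3*a + (7*q**2 - 9) + 2.0)*phi0
    y0 = [phi0 + r_eps*dphi0 + 0.5*r_eps**2*ddphi0, dphi0 + r_eps*ddphi0]

    r_max = max(10.0, 12.0/a)
    sol = solve_ivp(rhs, [r_eps, r_max], y0, rtol=1e-10, atol=1e-13,
                    dense_output=True)

    # phi ~ s_1 e^{-2 a r} + o_1 e^{-a r}: solve a 2x2 system at two large radii
    r1, r2 = 0.85*r_max, r_max
    M = np.array([[np.exp(-2*a*r1), np.exp(-a*r1)],
                  [np.exp(-2*a*r2), np.exp(-a*r2)]])
    s1, o1 = np.linalg.solve(M, [sol.sol(r1)[0], sol.sol(r2)[0]])
    return s1

# the onset of instability is where s_1(q) changes sign
q_c = brentq(s1_coefficient, 1.4, 1.68)
```

With these conventions the root should land near the quoted $q_c\simeq 1.582$; the $\Delta=2$ onset would instead be located by demanding that the coefficient of $e^{-A}$ vanish.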
The results are as follows: the onset of instability occurs at \be T_c=0.0395(7) \ \ \ \big[\,q_c=1.582(0)\,\big] \label{ins1} \ee for the $\Delta=1$ operator. For the $\Delta=2$ operators, we found \be T_c=0.0003(5) \ \ \ \big[\,q_c=1.7307(7)\,\big]\,. \label{ins2} \ee The differences in the numbers of significant digits of $T_c$ and $q_c$ arise due to the fact that $T_c$ is proportional to $3-q_c^2$. \section{Phase structures and critical exponents} In the previous section, we have established the instability of the RN black brane in the presence of bulk scalars with $m^2=-2$. As the temperature decreases below the critical temperature, the scalar fields begin to develop a nontrivial profile that affects the original RN black brane geometry. As we shall see later on, the boundary CFT undergoes a phase transition by the condensation of the expectation value of the dual field theory operator $O_{\Delta}(x)$. In this section, we shall investigate this phase transition by looking at the supergravity solutions. Since the changes in the thermodynamic quantities like the entropy, energy and so on are encoded in the geometry, the probe analysis of the previous section alone is not sufficient and inclusion of the full back-reaction to the geometry will be essential. Our study of the relevant solutions will be mainly based on numerical analysis. The starting point of our analysis\footnote{We use here our original coordinates without performing the coordinate transformation in (\ref{ct}) and (\ref{pt}). Then, when the scalar field vanishes, we shall directly obtain the exact RN solution (\ref{rnsol}) by fixing some freedoms in our coordinate choice.} is the set of equations in (\ref{fund}) and (\ref{eqom}) where we turn on only one scalar field $\phi$ with $m^2=-2$. To set up the problem completely, one also has to specify the boundary conditions. Let us begin with the horizon side.
At the horizon, $h(r)$ has to be zero at some finite $r=r_H$ which is the coordinate location of the horizon. Using the translational freedom of the $r$ coordinate, we shall choose \be r_H=0\,. \label{bd0} \ee In addition, we note that one has the scaling freedom for $x_\mu=(t,x,y)$ and the coordinate $r$ to generate a new solution. Using this we shall fix \begin{eqnarray} A(0)=0~~~{\rm and}~~~h'(0)=1. \label{bd3} \end{eqnarray} At the horizon we basically require that all the fields in (\ref{eqom}) should be regular. Setting $\phi(0)=u$, the regularity leads to \begin{eqnarray} \phi'(0)&=& - \frac{2\, u}{h'(0)}~,\nn\\ A'(0)&=&\frac{3- q^2 e^{-4 A(0)}+ u^2}{h'(0)}\,\,, \label{bd1} \end{eqnarray} and \begin{eqnarray} h''(0) &=& -9 -3 u^2 + 7 q^2 e^{-4 A(0)}~,\nn\\ A''(0) &=& - \frac{2\,u^2}{\left(h'(0)\right)^2}\,\,,\nn\\ \phi''(0) &=& \frac{2\, u}{\left(h'(0)\right)^2}\left( 1 + 2 q^2 e^{-4 A(0)}\right)\,. \label{bd2} \end{eqnarray} ($\,$Solving (\ref{fund}) and (\ref{eqom}) with the coordinate and boundary conditions in (\ref{bd0})-(\ref{bd2}) for the case of vanishing scalar field leads directly to the exact RN solution in (\ref{rnsol}) without need of the transformation in (\ref{ct}) and (\ref{pt}).) At $r=\infty$, we shall impose the behaviors of fields in (\ref{bdinfty}) that are required for the asymptotically AdS spacetime. Again note that the scalar field in the near boundary region takes the form \begin{eqnarray} \phi \sim s_{\Delta}(x) e^{-(3-\Delta)A(r)} + o_{\Delta}(x) e^{-\Delta A(r)}+ \cdots, \end{eqnarray} where $\cdots$ denote higher order terms. (The interpretation of $s_{\Delta}(x)$ and $o_{\Delta}(x)$ in the boundary CFT is the same as that in the previous section.) We shall set the source term $s_{\Delta}(x)=0$, which corresponds to the boundary system without an external source term. By this last condition, $u$ will be determined as a function of $q$ by $u=u(q)$. 
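The horizon data (\ref{bd1})-(\ref{bd2}) follow from expanding the equations of motion and the constraint to the first two orders in $r$; with $A(0)=0$ and $h'(0)=1$ they reduce to algebraic relations that can be verified symbolically. A SymPy sketch of this check:

```python
import sympy as sp

u, q = sp.symbols('u q', real=True)

# order-by-order horizon data with A(0) = 0, h(0) = 0, h'(0) = 1, phi(0) = u
Ap   = 3 - q**2 + u**2            # A'(0): constraint equation at r = 0
php  = -2*u                       # phi'(0): r^0 term of the scalar equation (m^2 = -2)
hpp  = 4*q**2 - 3*Ap              # h''(0): r^0 term of the h equation
App  = -sp.Rational(1, 2)*php**2  # A''(0) = -phi'(0)^2/2 from the A equation
phpp = -(3*Ap + hpp + 2)*php/2    # phi''(0): r^1 term of the scalar equation

# compare with the expressions quoted in (bd1) and (bd2)
assert sp.expand(hpp - (-9 - 3*u**2 + 7*q**2)) == 0
assert sp.expand(App - (-2*u**2)) == 0
assert sp.expand(phpp - 2*u*(1 + 2*q**2)) == 0
```

All three assertions pass, reproducing $h''(0)=-9-3u^2+7q^2$, $A''(0)=-2u^2$ and $\phi''(0)=2u(1+2q^2)$.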
Now adopting the shooting method based on a Mathematica coding, we perform a numerical analysis for the $\Delta=1$ and $\Delta=2$ cases separately. For each case, we find one set of solutions that is parameterized by the value of $q$. The resulting functions $u(q)$ in the $(u\,,\,q)$ plane are drawn in Fig.~\ref{f4}. For each case, there is a critical value $q_c$ beyond which the corresponding scalar field begins to develop a nontrivial profile. Namely if $q\ \le\ q_c$, the exact RN solution in (\ref{rnsol}), which satisfies all the coordinate and boundary conditions in (\ref{bd0})-(\ref{bd2}), remains intact. This part is depicted in Fig.~\ref{f4} by the vertical blue solid line for $q \le q_c$. If $q \ >\ q_c$, the RN black brane is modified by wearing a nontrivial scalar hair with $u(q)\neq 0$; in Fig.~\ref{f4}, the blue dots represent the data set which we obtained by the numerical analysis. The blue solid curve represents the fitting function that is obtained by the standard curve fitting method in Mathematica. \begin{figure}[ht] \center \includegraphics[width=7.2cm]{gp1a.eps} \includegraphics[width=7.5cm]{gp1b.eps} \caption{\small The functions $u(q)$ in the $(u,\,\, q)$ plane are depicted in the left and the right sides respectively for the $\Delta=1$ and the $\Delta=2$ cases. For each case, the development of the scalar profile is represented by nonvanishing $u(q)$ for $q > q_c$; the dots represent our numerical data set while the solid curve is for the fitting function obtained by the standard curve fitting method in Mathematica. Below $q_c$, $u(q)=0$ is indicated by the blue solid line, which corresponds to the RN black holes in (\ref{rnsol}). In addition, we marked the numerical values of $q_c$ by the red circles, which are given by 1.581(6) and 1.7308(3) respectively for the $\Delta =1$ and the $\Delta =2$ cases.
} \label{f4}~~ \end{figure} Thus the critical temperature may be evaluated by using the RN black brane solution (\ref{rnsol}) with $q=q_c$, leading to \be T_c= {1\over 4\pi}\, ({3-q_c^2})\,. \ee By the standard curve fitting method, we found \be T_c=0.0396(4) \ \ \ \big[\,q_c=1.581(6)\,\big]\,, \label{tceq1} \ee for the $\Delta=1$ scalar field, and \be T_c=0.0003(4) \ \ \ \big[\,q_c=1.7308(3)\,\big]\, \label{tceq2} \ee for the $\Delta=2$ scalar fields. In principle, these critical values should agree precisely with those from the probe analysis of the previous section. Therefore the numerical precision of our analysis can be estimated by comparing the numerical values from the two methods. We see that the $\Delta =2$ critical temperature, whose value is closer to zero, has a poorer numerical accuracy. In order to show further numerics, instead of using the charge density $\rho=2q/\kappa^2$, we shall simply use $q$, which differs from the actual charge density by a constant factor $2/\kappa^2$. ($\,$See (\ref{newton}) for the definition of $\kappa$.) In the above sets of solutions, both $q$ (or the charge density) and the temperature $T=T(q)$ change as one moves along the transition. But what we want is to fix $q$ (or the charge density) while changing the temperature of the system along the transition. In order to generate such sets of solutions, we use the coordinate transformation of the gravity system given by \be x^\mu \ \rightarrow \ {x^\mu / \lambda_s}\,.
\ee Note that, under the scale transformation, the thermodynamic quantities transform as \bea && T\ \rightarrow \ \lambda_s \, T\,, \ \ \ q\ \rightarrow \ \lambda_s^2 \, q\,, \ \ \ {\cal E}\ \rightarrow \ \lambda_s^3{\cal E}\,,\nn\\ && {\cal S}\ \rightarrow \ \lambda_s^2 {\cal S}\,, \ \ \ o_\Delta \ \rightarrow\ \lambda_s^{\Delta} o_\Delta \,, \ \ \ s_\Delta \ \rightarrow\ \lambda_s^{3-\Delta} s_\Delta\,, \label{scalet} \eea and, hence, \be {\partial o_\Delta\over \partial s_{\Delta}} \ \rightarrow\ \lambda_s^{2\Delta -3} \,\, {\partial o_\Delta\over \partial s_{\Delta}}\,. \ee Using this scaling transformation, we shall fix $\tilde{q}=1$ by choosing $\lambda_s=1/\sqrt{q}$. (\,The quantities carrying a tilde are the ones after the scale transformation (\ref{scalet}).\,) The new sets of scaled solutions are now parameterized by the rescaled temperature \be \tilde{T}= {T\over \sqrt{q}} \ee with fixed charge density $\tilde{\rho}=2/\kappa^2$. Using the values in (\ref{tceq1}), we found the rescaled critical temperature $\tilde{T}_c=T_c/\sqrt{q_c}$ as \be \tilde{T}_c=0.0315(6) \label{ttc1} \ee for the $\Delta=1$ scalar field. For the $\Delta=2$ scalar fields, we found \be \tilde{T}_c=0.0002(5) \label{ttc2} \ee using the values in (\ref{tceq2}). Below the critical temperature, one finds the development of the expectation value $o_\Delta$, which signals a phase transition. As we shall see in detail later on, this basically corresponds to the spontaneous symmetry breaking transition of the $SU(4)$ R-symmetry in which the condensate $o_\Delta$ plays the role of the order parameter. In terms of the scaled variable $\tilde{o}_\Delta=o_\Delta/q^{\Delta/2}$, the phase diagrams are depicted in Fig.~\ref{f5} for the $\Delta=1$ and the $\Delta=2$ cases. \begin{figure}[ht] \center \includegraphics[width=7.5cm]{gp2a.eps} \includegraphics[width=7.5cm]{gp2b.eps} \caption{\small The phase diagrams in the left and the right sides are respectively for the $\Delta=1$ and the $\Delta=2$ scalars.
The system undergoes a symmetry breaking transition from a symmetric phase to a broken phase as the temperature is lowered below the critical temperature. For each case, the blue dots represent our numerical data set and the blue solid curve depicts the fitting function that is obtained by the standard curve fitting method in Mathematica. We marked the numerical values of $\tilde{T}_c$ by the red circles, which are given by 0.0315(6) and 0.0002(5) [see (\ref{ttc1}) and (\ref{ttc2})] respectively for the $\Delta =1$ and the $\Delta =2$ cases. } \label{f5} \end{figure} As expected, the transition for $\Delta=1$ occurs at a higher critical temperature. Consequently, the transition for $\Delta=2$ cannot be treated separately. Namely, one has to turn on both of the $\Delta=1,\,\,2$ scalar fields around the region of the $\Delta=2$ transition, in which the $\Delta=1$ scalar field has already developed a finite profile representing the condensation. However, remember that the scalar part of our starting Lagrangian is only valid up to quadratic order. Hence our treatment loses its validity around the region of the $\Delta=2$ transition. Though it is an interesting problem to clarify further, we shall leave it to future investigation. In the remainder we shall focus on the transition involving the $\Delta=1$ condensation. The phase transition is of second order as in the usual cases of symmetry breaking transitions. We note that the nature of a phase transition is in general classified by its critical exponents. Here we compute numerically the exponents $\alpha$, $\beta$ and $\gamma$ respectively defined by \bea &&{\partial {\cal E}\over \partial T}\ \sim \ |T-T_c|^{-\alpha} \nn\\ && o_1\ \sim \ |T-T_c|^{\beta} \nn\\ && \chi_1 = {\partial {o_1}\over \partial s_1}\Big|_{s_1=0}\ \sim \ |T-T_c|^{-\gamma}\,.
\label{chi1} \eea For the numerical estimation of the exponents $\alpha$, $\beta$ and $\gamma$, we use the numerical data sets respectively of the forms ($\log (T-T_c)$, $\log (\partial E/\partial T)$), ($\log (T-T_c)$, $\log o_1$) and ($\log (T-T_c)$, $\log \chi_1$). Using a linear least-squares fit, we estimated the relevant slopes as well as their standard deviations. From the corresponding data set for $o_1$ in Fig.~\ref{f5}\,, one finds \be \beta_n=0.4978\pm 0.0032\,, \ee which is in good agreement with the mean field value $\beta=1/2$. Here and below the subscript $n$ indicates that the relevant exponent is obtained by the numerical analysis. The left side of Fig.~\ref{f6} shows the behavior of the energy as a function of temperature in the vicinity of the transition region. From the corresponding data set, the exponent for the specific heat is identified as \be \alpha_n=0.012\pm 0.018\,, \ee which agrees well with the mean field value $\alpha=0$. The right hand side of Fig.~\ref{f6} shows $\log \chi_1$ as a function of the logarithm of temperature. Again from the corresponding numerical data set, one finds \be \gamma_n=1.011\pm 0.015\,, \ee which agrees well with the mean field value $\gamma=1$. One can check that \be (\alpha+2\beta+\gamma)_n=2.019\pm 0.024 \ee which is consistent with the so-called Rushbrooke scaling law \be \alpha+ 2\beta +\gamma=2\,. \ee \begin{figure}[ht] \center \includegraphics[width=7.5cm]{gp3a.eps} \includegraphics[width=7.5cm]{gp3b.eps} \caption{\small The left figure shows the scaled energy density as a function of temperature. The right hand side shows $\log \Big(\sqrt{q}\chi_1\Big)$ in (\ref{chi1}) with respect to $\log \Big( T/\sqrt{q}-T_c/\sqrt{q_c} \Big)$. For each case, the blue dots represent our numerical data set and the blue solid curve depicts the fitting function that is obtained by the standard curve fitting method in Mathematica.
}\label{f6} \end{figure} Finally we shall reconfirm the above exponents by the analytic treatment. We note first that, due to the boundary condition $\phi(0)=u$, the scalar field behaves as $\phi \propto u$ when $u\ \ll \ 1$. Then inspecting the equations of motion (\ref{eqom}) together with the boundary conditions (\ref{bd1}) and (\ref{bd2}), one finds that $s_1$ and $o_1$ should behave as \bea s_1 &=& u \,\, \big(\, a_0 (q) +a_2(q) u^2 + O(u^3)\, \big) \nn\\ o_1 &=& u\,\, \big(\, b_0(q)+b_2(q) u^2 +O(u^3)\,\big)\,, \label{so} \eea before imposing the last boundary condition of $s_1=0$. For $|u|\ll 1$, the coefficients of the $u^3$ terms in $s_1$ and $o_1$ are basically controlled by the $u^2$ terms of $A'(0)$ and $h''(0)$ in ({\ref{bd1}}) and ({\ref{bd2}}), which one may argue by a careful examination of the equations of motion in (\ref{eqom}). By this consideration, one may show that \bea a_2(q)= a_{20} +O(q-q_c)\,,\ \ \ \ b_2(q)= b_{20} +O(q-q_c) \eea with $a_{20}\neq 0$ and $b_{20}\neq 0$. Then, since $s_1=0$ has a nontrivial solution $u(q)\neq 0$ only for $q > q_c$, $a_0(q)$ should behave as \bea a_0(q)= a_{01}\, (q-q_c) + O[\,(q-q_c)^2] \eea with $a_{01} a_{20} < 0$ for $|q-q_c| \ll 1$. (Here we assume the existence of $q_c$, which is justified by our numerical analysis.) Since we know that the $o_1=0$ boundary condition instead of $s_1=0$ leads to a different value for the critical charge, $b_0(q)$ has the expansion of the form \be b_0(q)= b_{00} +O(q-q_c) \ee with $b_{00}\neq 0$. This argument shows that the $s_1=0$ condition leads to a solution \bea u= \left[ \begin{array}{cr} \sqrt{-a_{01}\over a_{20}}\,\, (q-q_c)^{1\over 2} \, (1+O(q-q_c)) & \ \ \ \ {q\ \ge\ q_c}\phantom{a}\\ 0 & {q \ < \ q_c}\,\,. \end{array} \right. \eea Thus, \be o_1\ \sim \ |T-T_c|^{1\over 2}\,, \ee which implies that $\beta={1\over 2}$.
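The log-log linear least-squares estimate used for $\alpha_n$, $\beta_n$ and $\gamma_n$ above can be sketched as follows; the data here are synthetic mean-field stand-ins (hypothetical, for illustration), not the actual numerical solution set:

```python
import numpy as np

def fit_exponent(T, obs, Tc):
    """Slope of log(obs) against log|T - Tc|, i.e. the critical-exponent estimate."""
    x = np.log(np.abs(T - Tc))
    y = np.log(obs)
    slope, _ = np.polyfit(x, y, 1)
    return slope

Tc = 0.0316                              # stand-in for the Delta = 1 critical point
T = np.linspace(0.90*Tc, 0.999*Tc, 40)   # temperatures just below Tc
o1 = 2.3*np.sqrt(Tc - T)                 # mean-field condensate o_1 ~ (Tc - T)^(1/2)
beta_n = fit_exponent(T, o1, Tc)         # recovers beta = 1/2 on this data
```

On real data one would also propagate the fit's standard deviation, as quoted for the exponents in the text.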
Since the scalar field contribution to the energy density is of order $u^2$, the energy density has to be of the form \be {\cal E} = e_0 + e_1 (q-q_c) + e_2 u^2 + \cdots \ee where the first two terms are from the original RN black brane with $u=0$. We then conclude that $\alpha=0$ since the specific heat approaches a constant at the transition. By changing $u=u(q)$ to $u=u(q)+\delta u$, $s_1$ and $o_1$ in (\ref{so}) vary by \be \delta s_1 =2 a_{20}(u(q))^2 \delta u\,, \ \ \ \ \delta o_1= b_{00}\delta u \ee to the leading orders, where we used $a_0(q)=-a_{20}\,(u(q))^2$ on the solution branch. Then the susceptibility behaves as \be {\delta o_1\over \delta s_1} = {b_{00}\over 2 a_{20}} {1\over (u(q))^2} \ \sim \ |T-T_c|^{-1}\,, \ee which implies that $\gamma=1$. This proves that our exponents are those of the mean field theory. \section{Interpretations and discussions} In the previous section, we have established the phase transition whose exponents belong to the universality class of the mean field theory. For the resulting expectation value of the $\Delta=1$ operator, let us introduce the notation \be {\cal M}_{IJ}(x) = \langle O^I_J (x)\rangle \ee where $O^I_J$ is given in (\ref{op1}). Since ${\cal M}_{IJ}= {\cal M}^*_{JI}$ and ${\cal M}_{I\,I}=0$, the $4\times 4$ matrix ${\cal M}$ is Hermitian and traceless. We have seen that the boundary CFT undergoes a phase transition. Above the critical temperature $T_c$, the condensate vanishes, i.e.\ ${\cal M}=0$, while at lower temperatures ${\cal M}\neq0$. This is the spontaneous symmetry breaking phase transition where the $SU(4)$ R-symmetry of the ABJM theory is broken by the presence of the condensate ${\cal M}$.
Using ${\cal M}$ as an order parameter, the phase transition may be effectively described by the Landau free energy $F$ \be F/T=\int d^2\vec{x} \Big( \,\,\, {\tr}\, \nabla {\cal M} \cdot \nabla {\cal M} -{1\over T}{\tr }\, {\cal M} {\cal H} +2 r_0 (T-T_c) {\tr }\, {\cal M}^2 + g_0 ({\tr } {\cal M}^2)^2 \Big)\,, \ee where $\vec{x}=(x,y)$ and the $4\times 4$ traceless Hermitian matrix ${\cal H}$ is for the external source term in the adjoint representation of $SU(4)$. The form of the quartic term is determined by the symmetry. Due to the symmetry of the gravity solution, the free energy should be invariant under the transformation of ${\cal M} $ by the $SU(4)$ R-symmetry. The Landau free energy is minimized by \bea {\cal M}= \left[ \begin{array}{lr} 0 & {T \ > \ T_c}\\ \sqrt{r_0\over g_0}\, (T_c-T)^{1\over 2}\,\hat{n} & \ \ \ \ \ \ \ \ \ \ {T\ \le\ T_c} \end{array} \right. \eea where $\hat{n}$ is a $4\times 4$ traceless Hermitian matrix with $\tr\, {\hat{n}}^2=1$. This implies that $\beta={1\over 2}$. Noting $C_{V}=-T{\partial^2 F\over\partial T^2}$ with \be F \sim (T-T_c)^2\,, \ee one has $\alpha=0$. Similarly one can check that $\gamma =1$ from the definition of the susceptibility. One can further compute the correlation length scale $\xi$, \be \xi = (\, 2 r_0(T_c-T)\,)^{-{1\over 2}}\ \sim \ |T-T_c|^{-\nu}\ \ \ \ \ \ (\,{\rm for} \ \ {T\ \le \ T_c}\,)\,\,, \ee from which one finds $\nu=1/2$. It will be interesting to test this mean field prediction by a direct analysis of the bulk gravity system. But then the scalar fluctuation equation has to be solved numerically on top of the new background obtained from the numerical analysis, which requires some improvement of the current method to reach the necessary numerical precision. As we described in Section 3, our $U(1)$ charged black brane describes a system where the $U(1)$ number is carried by both bosonic and fermionic degrees.
The build-up of the Fermi surface on the weak coupling side would not be possible if the bosonic degrees were in an unbroken phase. Indeed we have argued that there will be a condensation of the elementary degrees at weak coupling at low enough temperature. For the strong coupling side, we know that the basic degrees that are weakly coupled among themselves are now organized by the supergravity modes. We found that the symmetric phase of the RN black brane again becomes unstable when $T\ < \ T_c$. The effective mass squared of the Landau free energy description becomes negative in this region, driving the symmetry breaking phase transition by the condensation of the operator expectation value. This partially explains the fate of the bosonic $U(1)$ charged state of the ABJM theory at strong coupling. But as observed before, the extremal RN black brane has a finite entropy at zero temperature. However, due to the phase transition, the ordering of degrees will in general lead to a reduction of entropy. Hence there is a chance of having a zero-entropy system at zero temperature. One interesting point of study is then the modification of the RN black brane by the scalar condensation in the near zero temperature region. Since the scalar part of our starting Lagrangian is only valid up to quadratic order, we cannot answer this question at the moment. One first has to improve the understanding of the higher-order scalar contributions to the Lagrangian. Another interesting aspect is the build-up of the Fermi surface as mentioned before. The picture presented in Ref.~\cite{BakRey2}, which mostly concerns the near zero temperature region, is seemingly plausible but works only for a pure RN black brane system. Now, however, close to zero temperature our black brane solution is significantly modified by the condensation of a nontrivial scalar profile.
Further studies of the low temperature region appear interesting, in particular with focus on the question whether the Fermi picture is still valid. Finally we comment on the Coleman--Mermin--Wagner--Hohenberg theorem \cite{Coleman:1973ci}. The theorem states that a continuous global symmetry cannot be broken spontaneously in 1+1 dimensions at zero temperature or in 2+1 dimensions at finite temperature. Our example appears to contradict this theorem. However, we are working in the strict large $N$ limit, which makes the dual gravity system completely classical. This violates some of the assumptions of the theorem, as pointed out in Ref.~\cite{Anninos:2010sq}. Our example here is reminiscent of the clash with unitarity in the explicit time-dependent black hole solution which describes the thermalization of a boundary field theory \cite{Bak:2007qw}. There again the trouble stems from the large $N$ limit of the boundary field theory. \section*{Acknowledgement} K.K. Kim would like to thank Gungwon Kang for helpful discussions. DB was supported in part by NRF SRC-CQUeST-2005-0049409 and NRF Mid-career Researcher Program 2011-0013228. KK and SY were supported in part by WCU Grant No. R32-2008-000-10130-0.
\section{Introduction} The present article is concerned with mirror symmetry for generalized K3 surfaces, with particular emphasis on complex and K\"ahler rigid structures. A few important aspects of mirror symmetry for K3 surfaces are unified in the framework of generalized K3 surfaces. Mirror symmetry for a K3 surface $S$ is a very subtle problem as the complex and K\"ahler structures are somewhat mixed, both living in $H^2(S,\mathbb{C})$. In the foundational article \cite{Dol}, Dolgachev formulated mirror symmetry for lattice polarized K3 surfaces as a certain duality between the algebraic and transcendental cycles. Although his formulation works beautifully in many cases, it cannot be a definitive one as there is an artificial assumption which does not hold in general. Geometrically, an obvious exception is a Shioda--Inose K3 surface, that is, a K3 surface with the maximal Picard number $20$. It is also known as a rigid K3 surface in the sense that it admits no deformation of complex structure keeping the maximal Picard number. A satisfactory formulation of mirror symmetry for K3 surfaces, which in particular should solve the puzzle of that for Shioda--Inose K3 surfaces, has been long-anticipated. The celebrated work \cite{AM} of Aspinwall--Morrison is the key article explaining mirror symmetry for K3 surfaces from a physics point of view. They discussed SCFTs on a K3 surface $S$ and showed that the moduli space $\mathfrak{M}_{(2,2)}$ of $N=(2,2)$ SCFTs fibers over the moduli space $\mathfrak{M}_{(4,4)}$ of $N=(4,4)$ SCFTs.
Mathematically such moduli spaces are related with space-like 4-spaces in $H^*(S,\mathbb{R}) \cong \mathbb{R}^{4,20}$ equipped with the Mukai pairing, and $\mathfrak{M}_{(2,2)}$ and $\mathfrak{M}_{(4,4)}$ are identified with $\mathrm{Gr}^{po}_{2,2}(H^*(M,\mathbb{R}))\cong \mathrm{O}(4,20)/(\mathrm{SO}(2)\times \mathrm{SO}(2)\times \mathrm{O}(20))$ and $\mathrm{Gr}^{po}_{4}(H^*(M,\mathbb{R})) \cong \mathrm{O}(4,20)/(\mathrm{SO}(4)\times \mathrm{O}(20))$ respectively. Then mirror symmetry is realized as a certain discrete group action on these moduli spaces. On the other hand, the moduli space $\mathfrak{M}_{\mathrm{HK}}$ of $B$-field shifts of hyperK\"ahler metrics can be identified with an open dense subset of $\mathfrak{M}_{(4,4)}$ by the period map $\mathfrak{per}_{\mathrm{HK}}$. Inspired by this fact, in the beautiful article \cite{Huy}, Huybrechts showed that the counterpart of $\mathfrak{M}_{(2,2)}$ is given by the moduli space $\mathfrak{M}_{\mathrm{gK3}}$ of generalized K3 surfaces, which are the K3 version of the generalized Calabi--Yau (CY) structures introduced by Hitchin \cite{Hit}. The moduli space $\mathfrak{M}_{\mathrm{K3}} \times H^2(M,\mathbb{R})$ of $B$-field shifts of K3 surfaces endowed with a Ricci-flat metric is naturally contained in $\mathfrak{M}_{\mathrm{gK3}}$ as a subspace of real codimension $2$. $$ \xymatrix{ \mathfrak{M}_{\mathrm{K3}} \times H^2(M,\mathbb{R}) \ar@{^{(}->}[r]^-{\iota} & \mathfrak{M}_{\mathrm{gK3}} \ar[d] \ar[r]^-{\mathfrak{per}_{\mathrm{gK3}}} & \mathrm{Gr}^{po}_{2,2}(H^*(M,\mathbb{R}))=\mathfrak{M}_{(2,2)} \ar[d] \\ & \mathfrak{M}_{\mathrm{HK}} \ar[r]^-{\mathfrak{per}_{\mathrm{HK}}} & \mathrm{Gr}^{po}_{4}(H^*(M,\mathbb{R})) =\mathfrak{M}_{(4,4)} } $$ Then, as Huybrechts pointed out, it is natural to expect that points in $\mathfrak{M}_\mathrm{K3}$ might be mirror symmetric to points that are no longer in $\mathfrak{M}_\mathrm{K3}$.
We will show that this is precisely the source of the problem of mirror symmetry for the Shioda--Inose K3 surfaces. There is a good chemistry between Hitchin's generalized CY structures and mirror symmetry as they both embrace A-structures (symplectic) and B-structures (complex) on an equal footing. Inspired by the works of Dolgachev, Aspinwall--Morrison and Huybrechts, we introduce a formulation of mirror symmetry for generalized K3 surfaces (Section \ref{MS for gK3}). The key features of our formulation are twofold, and both are guided by Hitchin's theory. The first is to extend the scope of lattice polarizations to the Mukai lattice. Mixing cycles of different degrees is indispensable for generalized K3 surfaces. The second is to treat the A- and B-structures on a completely equal footing. The N\'eron--Severi lattice and transcendental lattice are defined in the same fashion and moreover the lattice polarizations are imposed on both. Along the way, we investigate complex and K\"ahler rigid structures of generalized K3 surfaces (Section \ref{rigid structures}). The notion of a complex rigid structure is enhanced to the level of generalized K3 surfaces, and we give a classification theorem which generalizes the famous Shioda--Inose theorem. A rigid K\"ahler structure is defined by using a generalized CY structure, which captures a hidden integral structure of the K\"ahler moduli space. Such a structure has been anticipated for a long time also from the viewpoint of mirror symmetry. We give a conjectural explicit mirror correspondence between the complex rigid K3 surfaces and K\"ahler rigid K3 surfaces.
Section \ref{MS for gK3} is devoted to mirror symmetry for generalized K3 surfaces in comparison with the classical formulation for K3 surfaces. Section \ref{MS for SIK3} settles the long-standing problem of mirror symmetry for Shioda--Inose K3 surfaces. \subsection*{Acknowledgement} The author thanks Yu-Wei Fan and Shinobu Hosono for many valuable discussions and helpful comments. Some parts of this work were done while the author was affiliated with Kyoto University with a Leading Initiative for Excellent Young Researchers Grant. This work is partially supported by the JSPS Grant-in-Aid Wakate(B)17K17817. \section{Generalized K3 surfaces} \label{gK3} Hitchin's invention of generalized CY structures is the key to unifying the symplectic and complex structures \cite{Hit}. Such structures have been extensively studied in dimension $2$ by Huybrechts \cite{Huy}. In this section, for the sake of completeness, we provide a brief review of Huybrechts' work \cite{Huy}. \subsection{Generalized CY structures} Let $M$ be a differentiable manifold underlying a K3 surface and $A^{2*}_\mathbb{C}(M)=\oplus_{i=0}^2 A_\mathbb{C}^{2i}(M)$ the space of even differential forms with $\mathbb{C}$-coefficients. Let $\varphi_i$ denote the degree $i$ part of $\varphi \in A^{2*}_\mathbb{C}(M)$. The Mukai pairing for $\varphi, \varphi' \in A_\mathbb{C}^{2*}(M)$ is defined as $$ \langle \varphi, \varphi' \rangle = \varphi_2 \wedge \varphi'_2 - \varphi_0 \wedge \varphi'_4 - \varphi_4 \wedge \varphi'_0 \in A_\mathbb{C}^{4}(M). $$ \begin{Def}[generalized CY structure] A generalized CY structure on $M$ is a closed form $\varphi \in A^{2*}_\mathbb{C}(M)$ such that $$ \langle \varphi, \varphi \rangle =0, \ \ \ \langle \varphi, \overline{\varphi} \rangle >0. $$ \end{Def} The special appeal of generalized CY manifolds resides in the fact that they embrace complex and symplectic structures on an equal footing.
There are two fundamental CY structures: \begin{enumerate} \item A symplectic structure $\omega$ on $M$ induces a generalized CY structure $$ \varphi=e^{\sqrt{-1}\omega}=1 + \sqrt{-1} \omega-\frac{1}{2}\omega^2. $$ \item A complex structure $J$ of $M$ makes $M$ a K3 surface $M_J$. A holomorphic $2$-form $\sigma$ of $M_J$, which is unique up to scaling, defines a generalized CY structure $\varphi=\sigma$. We also write $M_\sigma=M_J$. \end{enumerate} For $B \in A^{2}_\mathbb{C}(M)$, $e^{B}$ acts on $A^{2*}_\mathbb{C}(M)$ by exterior product $$ e^{B}\varphi =(1 + B + \frac{1}{2}B \wedge B) \wedge \varphi. $$ This action is orthogonal with respect to the Mukai pairing $$ \langle e^{B} \varphi, e^{B}\varphi' \rangle = \langle \varphi, \varphi' \rangle. $$ A real closed 2-form is called a $B$-field. For a $B$-field $B$ and a generalized CY structure $\varphi$, the $B$-field transform $ e^{B} \varphi$ is a generalized CY structure. We will later show that a $B$-field is indispensable when we view complex and symplectic structures as special instances of a more general notion. The following shows that a generalized CY structure is a $B$-field transform of either one of the fundamental CY structures. \begin{Prop}[\cite{Hit}] \label{type A and B} Let $\varphi$ be a generalized CY structure. \begin{enumerate} \item If $\varphi_0 \ne0$, then $$ \varphi=\varphi_0 e^{B+\sqrt{-1}\omega} $$ with a symplectic form $\omega$ and a $B$-field $B$. \item If $\varphi_0=0$, then $$ \varphi=e^{B}\sigma= \sigma + B^{0,2} \wedge \sigma, $$ where $\sigma$ is a holomorphic $2$-form with respect to a complex structure $J$ and $B$ is a $B$-field. \end{enumerate} \end{Prop} \begin{Def} Generalized CY structures as in (1) and (2) of Proposition \ref{type A and B} are called type $A$ and type $B$ respectively. \end{Def} We consider the group $\mathrm{Diff}_*(M)$ of all diffeomorphisms $f:M\rightarrow M$ such that the induced action $f^*:H^2(M,\mathbb{Z}) \rightarrow H^2(M,\mathbb{Z})$ is trivial.
Generalized CY structures $\varphi$ and $\varphi'$ are called isomorphic if there exists an exact $B$-field $B$ and a diffeomorphism $f \in \mathrm{Diff}_*(M)$ such that $\varphi=e^{B} f^*\varphi'$. One of the most fascinating aspects of generalized CY structures is the occurrence of the classical CY structure $\sigma$ (type $B$) as well as of the symplectic generalized CY structure $e^{\sqrt{-1}\omega}$ (type $A$) in the same moduli space. This allows us to pass from the symplectic to the complex world in a continuous fashion. \begin{Ex}[\cite{Hit}] For a holomorphic $2$-form $\sigma$, the real and imaginary parts $\mathrm{Re}(\sigma), \mathrm{Im}(\sigma)$ are themselves real symplectic forms. A family of generalized CY structures of type $A$ $$ \varphi_t=t e^{\frac{1}{t}(\mathrm{Re}(\sigma) + \sqrt{-1} \mathrm{Im}(\sigma))} $$ converges, as $t$ goes to $0$, to the generalized CY structure $\sigma$ of type $B$. In this way, the $B$-fields interpolate between generalized Calabi--Yau structures of type $A$ and $B$. \end{Ex} \subsection{(hyper)K\"ahler structures} \begin{Def} Let $\varphi$ be a generalized CY structure. Then $P_\varphi \subset A^*(M)$ and $P_{[\varphi]} \subset H^*(M,\mathbb{R})$ denote the real $2$-planes \begin{align} P_\varphi &= \mathbb{R} \mathrm{Re} \varphi \oplus \mathbb{R} \mathrm{Im} \varphi \subset A^*(M), \notag \\ P_{[\varphi]} &= \mathbb{R} [\mathrm{Re} \varphi] \oplus \mathbb{R} [\mathrm{Im} \varphi] \subset H^*(M,\mathbb{R}) \notag \end{align} spanned by the real and imaginary parts of $\varphi$ and $[\varphi]$ respectively. They are positive planes with respect to the Mukai pairing. \end{Def} We say that two generalized CY structures $\varphi$ and $\varphi'$ are orthogonal if $P_\varphi $ and $P_{\varphi'}$ are pointwise orthogonal. The orthogonality of $2$-planes $P_\varphi $ and $P_{\varphi'}$ is in general a stronger condition than just $\langle \varphi, \varphi' \rangle=0$.
\begin{Def}[K\"ahler] A CY structure $\varphi$ is called K\"ahler if there exists another generalized CY structure $\varphi'$ orthogonal to $\varphi$. In this case, $\varphi'$ is called a K\"ahler structure for $\phi$. \end{Def} Let $\varphi=\sigma$ a holomorphic $2$-form for a complex structure $J$ on $M$. If $\varphi'$ is a K\"ahler structure for $\varphi$, then it is of the form $\varphi'=\varphi'_0e^{B+\sqrt{-1}\omega}$ (type $A$) for some $B$-field $B$ and symplectic form $\omega$. The orthogonality is equivalent to $$ \sigma \wedge B = \sigma \wedge \omega =0. $$ Therefore $B$ is a closed real $(1,1)$-form and $\pm \omega$ is a K\"ahler form with respect to $J$. A hyperK\"ahler structure is then defined as a special instance of a K\"ahler structure. Recall first that a K\"ahler form $\omega$ on a K3 surface is a hyperK\"ahler form if $$ 2\omega^2=C\sigma \wedge \overline{\sigma} $$ for some $C \in \mathbb{R}$. We may assume $C=1$ by rescaling $\sigma$. \begin{Def}[hyperK\"ahler] A generalized CY structure $\varphi$ is hyperK\"ahler if there exists another generalized CY structure $\varphi'$ such that \begin{enumerate} \item $\varphi$ and $\varphi'$ are orthogonal \item $\langle \varphi, \overline{\varphi} \rangle = \langle \varphi', \overline{\varphi'} \rangle$. \end{enumerate} Such $\varphi'$ is called a hyperK\"ahler structure for $\varphi$. \end{Def} \begin{Rem} \label{Rem B-shifts} \begin{enumerate} \item The definition of a (hyper)K\"ahler structure is symmetric for $\varphi$ and $\varphi'$. \item Let $\varphi$ be a generalized CY structure and $\varphi'$ a (hyper)K\"ahler for $\varphi$. Then $e^{B} \varphi'$ is a (hyper)K\"ahler structure for $e^{B} \varphi$. \end{enumerate} \end{Rem} We give a classification of the hyperK\"ahler structures for a generalized CY structure $\varphi$. By remark \ref{Rem B-shifts}, we may assume that $\varphi$ is either of the form $\varphi=\lambda e^{\sqrt{-1}\omega}$ (type $A$) or $\varphi=\sigma$ (type $B$). 
\begin{enumerate} \item If $\varphi=\sigma$, then a hyperK\"ahler structure is a generalized CY structure $\varphi'=\lambda e^{B+\sqrt{-1}\omega}$ of type $A$, where $B$ is a closed $(1,1)$-form and $\pm \omega$ is a hyperK\"ahler form such that $2|\lambda|^2\omega^2=\sigma \wedge \overline{\sigma}$. \item If $\varphi=\lambda e^{\sqrt{-1}\omega}$, then a hyperK\"ahler structure is either \begin{enumerate} \item a generalized CY structure $\varphi'=\sigma$ of type $B$, where $\pm \omega$ is a hyperK\"ahler form, \item a generalized CY structure $\varphi'=\lambda' e^{B'+\sqrt{-1}\omega'}$ of type $A$ such that \begin{enumerate} \item $\omega \wedge \omega'=\omega \wedge B'=\omega' \wedge B' = 0$, $B'^2=\omega^2+\omega'^2$, \item $|\lambda|^2 \omega^2 = |\lambda'|^2 \omega'^2$. \end{enumerate} \end{enumerate} \end{enumerate} \subsection{Generalized K3 surfaces} \begin{Def}[K3 surface] A K3 surface is a pair $(\sigma,\omega)$ of a closed 2-form $\sigma \in A_\mathbb{C}^2(M)$ with $\sigma \wedge \sigma=0$ and a symplectic form $\omega \in A^2(M)$ such that \begin{enumerate} \item $\omega \wedge \sigma=0$, \item $2 \omega^2=\sigma \wedge \overline{\sigma}>0$. \end{enumerate} \end{Def} A K3 surface $(\sigma,\omega)$ is simply a classical K3 surface $M_\sigma$ with a chosen hyperK\"ahler form $\omega$. To avoid confusion, we usually denote by $M_\sigma$ or $S$ a classical K3 surface. \begin{Def}[generalized K3 surface] A generalized K3 surface is a pair $(\varphi_A,\varphi_B)$ of generalized CY structures such that $\varphi_A$ is a hyperK\"ahler structure for $\varphi_B$. \end{Def} A K3 surface $(\sigma,\omega)$ is often identified with a generalized K3 surface $(e^{\sqrt{-1}\omega},\sigma)$. A $B$-field transform $e^{B} (\varphi_A,\varphi_B)=(e^{B}\varphi_A,e^{B}\varphi_B)$ of a generalized K3 surface $(\varphi_A,\varphi_B)$ is again a generalized K3 surface.
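The identification of a K3 surface $(\sigma,\omega)$ with the pair $(e^{\sqrt{-1}\omega},\sigma)$ can be verified directly from the definitions (a short computation we include for the reader's convenience). By the Mukai pairing, $$ \langle e^{\sqrt{-1}\omega}, \overline{e^{\sqrt{-1}\omega}} \rangle = (\sqrt{-1}\omega)\wedge(-\sqrt{-1}\omega) - 1\cdot \left(-\frac{1}{2}\omega^2\right) - \left(-\frac{1}{2}\omega^2\right)\cdot 1 = 2\omega^2 = \sigma \wedge \overline{\sigma} = \langle \sigma, \overline{\sigma} \rangle, $$ which is exactly condition (2) in the definition of a hyperK\"ahler structure. For the orthogonality of the planes $P_{e^{\sqrt{-1}\omega}}$ and $P_{\sigma}$, note that $\mathrm{Re}(\sigma)$ and $\mathrm{Im}(\sigma)$ are pure degree $2$ forms, so $$ \langle 1-\frac{1}{2}\omega^2, \mathrm{Re}(\sigma) \rangle = 0 \ \ \text{(no degrees pair up)}, \ \ \ \langle \omega, \mathrm{Re}(\sigma) \rangle = \omega \wedge \mathrm{Re}(\sigma)=0 $$ by $\omega \wedge \sigma=0$, and similarly for $\mathrm{Im}(\sigma)$; hence $e^{\sqrt{-1}\omega}$ is indeed a hyperK\"ahler structure for $\sigma$.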
Generalized K3 surfaces $(\varphi_A,\varphi_B)$ and $(\psi_A,\psi_B)$ are called isomorphic if there exists a diffeomorphism $f \in \mathrm{Diff}_*(M)$ and an exact real 2-form $B \in A^2(M)$ such that $(\varphi_A,\varphi_B)=e^{B} f^*(\psi_A,\psi_B)=(e^{B} f^*\psi_A,e^{B} f^*\psi_B)$. \begin{Def} We associate to a generalized K3 surface $(\varphi_A,\varphi_B)$ the oriented positive 4-spaces \begin{align} \Pi_{(\varphi_A,\varphi_B)} &= P_{\varphi_A} \oplus P_{\varphi_B} \subset A^{2*}(M), \notag \\ \Pi_{([\varphi_A],[\varphi_B])} &= P_{[\varphi_A]} \oplus P_{[\varphi_B]} \subset H^*(M,\mathbb{R}). \notag \end{align} \end{Def} The set $T_\Pi$ of generalized K3 surfaces $(\varphi_A,\varphi_B)$ with a fixed positive 4-space $\Pi$ is naturally isomorphic to the Grassmannian of oriented planes $$ \mathrm{Gr}^o_2(\Pi)=S^2 \times S^2 = \mathbb{P}^1 \times \mathbb{P}^1, $$ which is identified with a quadric $Q_\Pi \subset \mathbb{P}(\Pi_\mathbb{C})$. For a K3 surface $(e^{\sqrt{-1}\omega},\sigma)$, the 4-space $\Pi_{(e^{\sqrt{-1}\omega},\sigma)}$ is spanned by the oriented basis $$ 1-\frac{1}{2}\omega^2, \ \omega, \ \mathrm{Re}(\sigma), \ \mathrm{Im}(\sigma). $$ We may write $\omega_I=\omega$, $\omega_J=\mathrm{Re}(\sigma)$, $\omega_K= \mathrm{Im}(\sigma)$, where $I,J,K$ are the complex structures induced by the hyperK\"ahler form. Then there is a 2-sphere $$ S^2=\{aI+bJ+cK \ | \ a^2+b^2+c^2=1\} $$ of complex structures with respect to which the metric is K\"ahler. This $S^2 \cong \mathbb{P}^1$ is identified with the hyperplane section $Q_\Pi \cap \mathbb{P}(\mathbb{C}\langle \omega_I,\omega_J,\omega_K\rangle)$. The other generalized K3 surfaces parametrized by $\mathbb{P}^1 \times \mathbb{P}^1 \setminus S^2$ are not realized as $B$-field transforms of points in $S^2$. \begin{Prop} \label{uniqueness} Let $(\varphi,\varphi')$ be a generalized K3 surface.
Then there exists a K3 surface $(\sigma,\omega)$ and a closed $B$-field $B$ such that $$ \Pi_{(\varphi,\varphi')}=e^{B} \Pi_{(e^{\sqrt{-1} \omega},\sigma)}. $$ \end{Prop} \subsection{Moduli spaces and Torelli theorems} Let $\mathfrak{M}_{\mathrm{K3}}$ and $\mathfrak{M}_{\mathrm{gK3}}$ be the moduli spaces of K3 surfaces and generalized K3 structures respectively. We also define the moduli space of the $B$-field shifts of the hyperK\"ahler metrics on $M$ $$ \mathfrak{M}_{\mathrm{HK}}=\left( \mathrm{Met}^{\mathrm{HK}}(M)/\mathrm{Diff}_*(M) \right) \times H^2(M,\mathbb{R}). $$ Proposition \ref{uniqueness} implies that there exists a natural $S^2 \times S^2$-fibration $$ \mathfrak{M}_{\mathrm{gK3}} \longrightarrow \mathfrak{M}_{\mathrm{HK}}, \ (\varphi,\varphi') \mapsto (g,B), $$ where $B$ is given by $\Pi_{(\varphi,\varphi')}=e^{B} \Pi_{(e^{\sqrt{-1}\omega},\sigma)}$ and $g$ is the hyperK\"ahler metric associated with $\sigma$ and $\omega$. We have the canonical inclusion $$ \iota: \mathfrak{M}_{\mathrm{K3}} \times H^2(M,\mathbb{R}) \longrightarrow \mathfrak{M}_{\mathrm{gK3}}, \ (\sigma,\omega,B) \mapsto e^{B}(e^{\sqrt{-1}\omega},\sigma). $$ \begin{Def} The period of a generalized K3 surface $(\varphi,\varphi')$ is the orthogonal pair of positive oriented planes $$ (P_{[\varphi]},P_{[\varphi']}) \in \mathrm{Gr}^{po}_{2,2}(H^*(M,\mathbb{R})). $$ Then the period map is defined by $$ \mathfrak{per}_{\mathrm{gK3}}: \mathfrak{M}_{\mathrm{gK3}} \longrightarrow \mathrm{Gr}^{po}_{2,2}(H^*(M,\mathbb{R})), \ (\varphi,\varphi') \mapsto (P_{[\varphi]},P_{[\varphi']}). $$ \end{Def} \begin{Def} The period of a $B$-field shift of a hyperK\"ahler metric $(g,B)$ is the positive 4-space $$ \Pi_{([\varphi],[\varphi'])} \in \mathrm{Gr}^{po}_{4}(H^*(M,\mathbb{R})), $$ where $g$ is the hyperK\"ahler metric associated with $\sigma$ and $\omega$ and $\Pi_{(\varphi,\varphi')}=e^{B} \Pi_{(e^{\sqrt{-1}\omega},\sigma)}$.
Then the period map is defined by $$ \mathfrak{per}_{\mathrm{HK}}: \mathfrak{M}_{\mathrm{HK}} \longrightarrow \mathrm{Gr}^{po}_{4}(H^*(M,\mathbb{R})), \ (g,B) \mapsto \Pi_{([\varphi],[\varphi'])}. $$ \end{Def} To summarize, we obtain the following commutative diagram: $$ \xymatrix{ \mathfrak{M}_{\mathrm{K3}} \times H^2(M,\mathbb{R}) \ar@{^{(}->}[r]^-{\iota} \ar[rd]_{S^2} & \mathfrak{M}_{\mathrm{gK3}} \ar[d]^{S^2 \times S^2} \ar[rr]^-{\mathfrak{per}_{\mathrm{gK3}}} & & \mathrm{Gr}^{po}_{2,2}(H^*(M,\mathbb{R})) \ar[d]^{S^2 \times S^2} \\ & \mathfrak{M}_{\mathrm{HK}} \ar[rr]^-{\mathfrak{per}_{\mathrm{HK}}} & & \mathrm{Gr}^{po}_{4}(H^*(M,\mathbb{R})) } $$ where $S^2$ and $S^2 \times S^2$ denote the fibers of the corresponding vertical arrows. An important consequence of the diagram is that $\mathfrak{per}_{\mathrm{gK3}}$ is an immersion with dense image. Finally we compare the moduli space $\mathfrak{N}_{\mathrm{gCY}}=\{\mathbb{C} \varphi\}/\cong$ of the generalized CY structures of hyperK\"ahler type with the moduli space $\mathfrak{N}_{\mathrm{K3}}=\{\mathbb{C} \sigma\}/\mathrm{Diff}_*(M)$ of the classical marked K3 surfaces. \begin{Thm}[\cite{Huy}] \label{Torelli theorem} The classical period map $\mathfrak{per}_{\mathrm{K3}}$ naturally extends to the period map $\mathfrak{per}_{\mathrm{gCY}}$ for the generalized CY structures of hyperK\"ahler type.
$$ \xymatrix@=18pt{ \mathfrak{N}_{\mathrm{gCY}} \ar@{}[d]|{\bigcup} \ar[rr]^-{\mathfrak{per}_{\mathrm{gCY}}}_-{\mathbb{C} \varphi \mapsto [\varphi]} \ \ & &\ar@{}[d]|{\bigcup} \widetilde{\mathfrak{D}}=\{ [\varphi] \in \mathbb{P}(H^*(M,\mathbb{C})) \ | \ \langle \varphi, \varphi \rangle=0, \langle \varphi, \overline{\varphi} \rangle>0 \} \\ \mathfrak{N}_{\mathrm{K3}} \ar[rr]^-{\mathfrak{per}_{\mathrm{K3}}}_-{\mathbb{C} \sigma \mapsto [\sigma]} \ \ & & \mathfrak{D} =\{ [\sigma] \in \mathbb{P}(H^2(M,\mathbb{C})) \ | \ \langle \sigma, \sigma \rangle=0, \langle \sigma, \overline{\sigma} \rangle >0 \} } $$ $\mathfrak{per}_{\mathrm{gCY}}$ is \'etale surjective, and bijective over the complement of the hyperplane section $\mathbb{P}(H^2(M,\mathbb{C})\oplus H^4(M,\mathbb{C}))\cap \widetilde{\mathfrak{D}}$. \end{Thm} Therefore the generalized K3 surfaces can be considered as geometric realizations of points in the extended period domain $ \widetilde{\mathfrak{D}}$. This Torelli theorem will be a foundation of our formulation of mirror symmetry in Section \ref{MS for gK3}. \begin{figure}[htbp] \begin{center} \includegraphics[width=70mm]{moduli.eps} \caption{Moduli space $\mathfrak{N}_{\mathrm{gCY}}$} \end{center} \end{figure} \section{Rigid structures} \label{rigid structures} \subsection{N\'eron--Severi lattice and transcendental lattice} \label{NS and T} We begin by introducing the N\'eron--Severi lattice and the transcendental lattice of a generalized K3 surface. \begin{Def} \label{NS and T def} The N\'eron--Severi lattice and the transcendental lattice of $X=(\varphi_A,\varphi_B)$ are defined respectively by \begin{align} \widetilde{NS}(X)=\{ \delta \in H^*(M,\mathbb{Z}) \ | \ \langle \delta, \varphi_B \rangle =0 \}, \notag \\ \widetilde{T}(X)=\{ \delta \in H^*(M,\mathbb{Z}) \ | \ \langle \delta, \varphi_A \rangle =0 \}. \notag \end{align} \end{Def} It is worth mentioning that $\widetilde{NS}(X) \cap \widetilde{T}(X)$ can be non-trivial.
Moreover, neither $\varphi_A \in \widetilde{NS}(X)_\mathbb{C}$ nor $\varphi_B \in \widetilde{T}(X)_\mathbb{C}$ holds in general. Our definition of the transcendental lattice $\widetilde{T}(X)$ differs from Huybrechts' \cite{Huy}, who defines it as the orthogonal complement of $\widetilde{NS}(X)$. We define $\widetilde{NS}(X)$ and $\widetilde{T}(X)$ on a completely equal footing. For later use, we introduce one more notation. For $\psi \in H^*(M,\mathbb{C})$, we denote by $L_\psi$ the smallest sublattice $L \subset H^*(M,\mathbb{Z})$ such that $\psi \in L_\mathbb{C}$. Then we may write $$ \widetilde{NS}(X)=L_{\varphi_B}^\perp, \ \ \ \widetilde{T}(X)=L_{\varphi_A}^\perp. $$ This in particular shows that the signatures of $\widetilde{NS}(X)$ and $\widetilde{T}(X)$ are of the form $(t,*)$ with $0 \le t \le 2$. It is instructive to compute basic examples. Let $\delta_i$ denote the degree $i$ part of $\delta \in H^*(M,\mathbb{Z})$. \begin{enumerate} \item If $\varphi_B=\sigma + B^{0,2} \wedge \sigma$, a twist of $\sigma$ by a $B$-field $B \in H^2(M,\mathbb{R})$, then $$ \widetilde{NS}(X) = \{\delta_0+ \delta_2 \ | \ \int_M \delta_2 \wedge \sigma = \delta_0 \int_M B^{0,2} \wedge \sigma \} \oplus H^4(M,\mathbb{Z}). $$ There is a natural inclusion $H^4(M,\mathbb{Z})\oplus NS(M_\sigma) \subset \widetilde{NS}(X)$ (the inclusion is strict if $B\in H^2(M,\mathbb{Q})$). In particular, for the classical case $\varphi_B=\sigma$, we recover \begin{align} \widetilde{NS}(X) &= H^0(M,\mathbb{Z}) \oplus NS(M_\sigma) \oplus H^4(M,\mathbb{Z}) \notag \\ &\cong NS'(M_\sigma). \notag \end{align} \item If $\varphi_A=e^{B+\sqrt{-1}\omega}$ for $B \in H^2(M,\mathbb{R})$ and a symplectic form $\omega$, the orthogonality condition $\langle \delta,\varphi_A \rangle=0$ is equivalent to \begin{align*} \int_M \delta_2 \wedge B - \frac{\delta_0}{2}\int_M(B^2-\omega^2) + \int_M \delta_4 =0, \\ \int_M \delta_2 \wedge \omega - \delta_0 \int_M B \wedge \omega=0. \end{align*} In particular, if a $B$-field is absent, i.e.
$\varphi_A=e^{\sqrt{-1}\omega}$, then $$ \widetilde{T}(X)=H^2(M,\mathbb{Z})_\omega \oplus \{ \delta_0+ \delta_4 \ | \ \delta_0 \int_M \omega^2 = 2 \int_M \delta_4\}, $$ where $H^2(M,\mathbb{Z})_\omega$ is the space of $\omega$-primitive classes. \end{enumerate} \subsection{Complex rigidity} The Shioda--Inose K3 surfaces are known as complex rigid K3 surfaces in the sense that they do not admit any complex deformation keeping the maximal Picard number $20$. We first give a generalization of complex rigid K3 surfaces. \begin{Def} A generalized K3 surface $X=(\varphi_A,\varphi_B)$ is called complex rigid if \begin{enumerate} \item $\varphi_B$ is of type $B$, \item $\mathrm{rank} (\widetilde{NS}(X))=22$. \end{enumerate} \end{Def} \begin{Thm} \label{rigid K3} A complex rigid generalized K3 surface is of the form $X=(\lambda e^{B+B'+\sqrt{-1}\omega}, \sigma+B'\wedge \sigma)$, where \begin{enumerate} \item $M_\sigma$ is a complex rigid K3 surface, \item $B \in H^{1,1}(M_\sigma,\mathbb{R})$, \item $B' \in H^2(M,\mathbb{Q})$, \item $\pm \omega$ is a hyperK\"ahler form for $\sigma$. \end{enumerate} In other words, it is a rational $B$-field $B'$ shift of a rigid K3 surface equipped with a complexified K\"ahler structure $(\lambda e^{B \pm\sqrt{-1}\omega},\sigma)$. \end{Thm} \begin{proof} Let $X=(\varphi_A, \varphi_B)$ be a complex rigid generalized K3 surface. Since $H^2(M,\mathbb{Z})$ is of signature $(3,19)$, $\varphi_A$ must be of type $A$. Hence we can write $X=e^{B'}(\lambda e^{B+\sqrt{-1}\omega},\sigma)$. Then $Y=(\lambda e^{B+\sqrt{-1}\omega},\sigma)$ is a generalized K3 surface, i.e. $B$ is a closed real $(1,1)$-form and $\pm \omega$ is a hyperK\"ahler form for $\sigma$. On the other hand, for $$ \varphi_B=\sigma+B'\wedge \sigma, $$ $\mathrm{rank}(L_{\varphi_B})=2$ if and only if $\mathrm{rank}(T(M_\sigma))=2$ and $B'$ is rational. \end{proof} The following is a generalization of the celebrated Shioda--Inose theorem concerning the classification of rigid K3 surfaces \cite{SI}.
\begin{Thm} Let $\mathfrak{L}^{ev}_{(2,0)}$ be the set of isomorphism classes of positive definite even lattices of rank $2$. Let $\mathfrak{M}_{\mathrm{Cpx}}^{\mathrm{rigid}}$ be the set of isomorphism classes of complex rigid generalized K3 surfaces. Then $$ \mathfrak{M}_{\mathrm{Cpx}}^{\mathrm{rigid}} \longrightarrow \mathfrak{L}^{ev}_{(2,0)} \times H^2(M,\mathbb{Q}), \ \ \ X \mapsto (T(M_\sigma), B'), $$ where $X=e^{B'}(\lambda e^{B+\sqrt{-1}\omega}, \sigma)$ as in Theorem \ref{rigid K3}, is surjective. The fiber over $(T(M_\sigma), B')$ is given by two copies of the complexified K\"ahler cone $H^{1,1}(M_\sigma,\mathbb{R}) +\sqrt{-1} \mathcal{K}({M_\sigma})$. \end{Thm} \begin{proof} The Shioda--Inose theorem \cite{SI} asserts that there is a bijective correspondence between the isomorphism classes of rigid K3 surfaces and the set $\mathfrak{L}^{ev}_{(2,0)}$ of isomorphism classes of positive definite even lattices of rank $2$. The correspondence is given by $S \to T(S)$. Then the assertion follows from Theorem \ref{rigid K3}. \end{proof} \subsection{K\"ahler rigidity} In light of mirror symmetry, a rigid K\"ahler structure has been anticipated for a long time. A K\"ahler rigidity comparable with the complex rigidity naturally appears in the framework of generalized CY structures. We begin with a simple observation. Let $S$ be a K3 surface with Picard number $1$. We write $NS(S)=\mathbb{Z} H$ with $H^2=2n>0$, and consider $$ v_1=(1,0,-n), \ \ v_2=(0,H,0) \in NS'(S). $$ Then \begin{align} e^{\sqrt{-1}H} &= (1,\sqrt{-1}H,-n) \notag \\ &=v_1+\sqrt{-1}v_2 \in (\mathbb{Z} v_1 + \mathbb{Z} v_2)_\mathbb{C} \subsetneq NS'(S)_\mathbb{C} \notag \end{align} On the other hand, for $\epsilon^2 \notin \mathbb{Q}$, \begin{align} e^{\sqrt{-1}\epsilon H} &=(1,\sqrt{-1} \epsilon H,-\epsilon^2n) \notag \\ &= (1,0,-\epsilon^2n) + \sqrt{-1}\epsilon(0, H, 0) \notag \\ &= (1,0,0) -\epsilon^2 (0,0,n) + \sqrt{-1}\epsilon(0, H, 0) \in NS'(S)_\mathbb{C}.
\notag \end{align} Hence there is no proper sublattice $L \subsetneq NS'(S)$ such that $e^{\sqrt{-1}\epsilon H} \in L_\mathbb{C}$. Therefore the K\"ahler structure $H$ cannot be deformed while keeping $\mathrm{rank}(L_{e^{\sqrt{-1}\epsilon H}})=2$. This calculation illustrates that a generalized CY structure $e^{B+\sqrt{-1}\omega}$ is able to detect a hidden integral structure of the K\"ahler moduli space. \begin{Def} A symplectic manifold $(M,\omega)$ is called K\"ahler rigid if $\omega^2 \in H^4(M,\mathbb{Q})$. \end{Def} As the previous calculation shows, a symplectic manifold $(M,\omega)$ is K\"ahler rigid if and only if $\mathrm{rank}(L_{e^{\sqrt{-1}\omega}})=2$. \begin{Def} A generalized K3 surface $X=(\varphi_A,\varphi_B)$ is called K\"ahler rigid if \begin{enumerate} \item $\varphi_A$ is of type $A$, \item $\mathrm{rank} (\widetilde{T}(X))=22$. \end{enumerate} \end{Def} We provide a characterization of the K\"ahler rigid generalized K3 surfaces $(\varphi_A,\varphi_B)$ as follows. In contrast to the complex rigid case, the partner $\varphi_B$ can be either of type $A$ or $B$. \begin{Thm} A K\"ahler rigid generalized K3 surface is of the form $X=(\lambda e^{B+\sqrt{-1}\omega}, \varphi_B)$, where \begin{enumerate} \item $B \in H^2(M,\mathbb{Q})$, \item $\omega^2 \in H^4(M,\mathbb{Q})$. \end{enumerate} In other words, it is a rational $B$-field $B$ shift of a K\"ahler rigid symplectic manifold equipped with a hyperK\"ahler structure $(\lambda e^{\sqrt{-1}\omega}, \varphi_B)$. \end{Thm} \begin{proof} Let $X=(e^{B+\sqrt{-1}\omega},\varphi_B)$ be a K\"ahler rigid generalized K3 surface. We consider the condition for the existence of a rank $2$ sublattice $L \subset H^*(M,\mathbb{Z})$ such that $$ e^{B+\sqrt{-1}\omega}=1 + B + \frac{1}{2}(B^2-\omega^2)+\sqrt{-1}(\omega+ B \wedge \omega) \in L_\mathbb{C}. $$ First, $B$ needs to be rational, and hence so is $\omega^2$.
Then we may write the symplectic class $\omega=\kappa H$ for $\kappa^2 \in \mathbb{Q}$ and $H \in H^2(M,\mathbb{Z})$ with $H^2>0$. Indeed, in this case, there exist $m,n \in \mathbb{N}$ such that $$ m \mathrm{Re} (e^{B+\sqrt{-1}\kappa H}), \ n \mathrm{Im} (e^{B+\sqrt{-1}\kappa H}) \in H^*(M,\mathbb{Z}). $$ Then the complexification $L_\mathbb{C}$ of the lattice $$ L=\mathbb{Z} m \mathrm{Re} (e^{B+\sqrt{-1}\kappa H}) + \mathbb{Z} n \mathrm{Im} (e^{B+\sqrt{-1}\kappa H}) \subset H^*(M,\mathbb{Z}) $$ contains $e^{B+\sqrt{-1}\kappa H}$. \end{proof} By analogy with the complex rigid case, we conjecture the following mirror assertion of the Shioda--Inose theorem. \begin{Conj} \label{rigid corresp} There is a bijective correspondence between the isomorphism classes of K\"ahler rigid structures on $M$ and the set $\mathfrak{L}^{ev}_{(2,0)}$ of isomorphism classes of positive definite even lattices of rank $2$. The correspondence is given by $(M,\omega) \to L_{e^{\sqrt{-1}\omega}}$. \end{Conj} The conjecture in particular implies a natural bijective correspondence between the complex rigid structures and the K\"ahler rigid symplectic structures on $M$ (and between their generalized versions by the rational $B$-field shifts). This mirror correspondence is essentially given by comparison of the N\'eron--Severi lattices and transcendental lattices. $$ \xymatrix{ \{\text{Complex rigid}\} \ar@{<->}[rr]^{\text{mirror}} \ar[rd]_{\sigma \mapsto L_\sigma} & & \{\text{K\"ahler rigid}\} \ar[ld]^{\omega \mapsto L_{e^{\sqrt{-1}\omega}}} \\ & \mathfrak{L}^{ev}_{(2,0)} & } $$ \begin{Rem} Clearly, there exist parallel theories for $T^4$, the $4$-manifold underlying a $2$-dimensional complex torus.
\end{Rem} \section{Mirror symmetry for generalized K3 surfaces} \label{MS for gK3} Our formulation of mirror symmetry for generalized K3 surfaces builds upon the combination of two novel ideas: (1) lattice polarizations of cycles (Dolgachev \cite{Dol}) and (2) generalized K3 structures (Hitchin \cite{Hit} and Huybrechts \cite{Huy}). As discussed in Section \ref{rigid structures}, lattices are sensitive to the integral structures associated with generalized K3 structures. \subsection{Lattices} The hyperbolic lattice $U$ is the rank $2$ even unimodular lattice defined by the Gram matrix $\begin{bmatrix} 0 & 1\\ 1 & 0\\ \end{bmatrix}$. $E_{8}$ is the rank $8$ even unimodular lattice defined by the corresponding Cartan matrix. Let $\Lambda_{K3}=U^{\oplus 3}\oplus E_8^{\oplus 2}$ be the K3 lattice. \subsection{Mirror symmetry for K3 surfaces \`a la Dolgachev} \label{MS ala Dolgachev} Let $S$ be a K3 surface. The N\'eron-Severi lattice and transcendental lattice are defined respectively by $$ NS(S)=H^{1,1}(S,\mathbb{R}) \cap H^2(S,\mathbb{Z}), \ \ \ T(S)=NS(S)^\perp, $$ where the orthogonal complement is taken in $H^2(S,\mathbb{Z}) \cong \Lambda_{K3}$. The Picard number is $\rho(S)=\mathrm{rank} (NS(S))$. The extended N\'eron-Severi lattice $NS'(S)$ is defined as the sublattice $$ NS'(S)=H^0(S,\mathbb{Z}) \oplus NS(S) \oplus H^4(S,\mathbb{Z}) $$ of the Mukai lattice $H^*(S,\mathbb{Z}) \cong U^{\oplus 4}\oplus E_8^{\oplus 2}$. \begin{Def} Let $L$ be an even non-degenerate lattice of signature $(1, *)$. An $L$-polarized K3 surface is a pair $(S,i)$ of a K3 surface $S$ and a primitive lattice embedding $i:L \hookrightarrow NS(S)$. It is called ample if $i(L)$ contains a K\"ahler class. \end{Def} There may be several families of ample $L$-polarized K3 surfaces, and we usually choose one for the purpose of mirror symmetry. In the foundational article \cite{Dol}, Dolgachev formulated mirror symmetry for lattice polarized K3 surfaces as follows.
\begin{Def}[\cite{Dol}] For a sublattice $L \subset \Lambda_{K3}$ of signature $(1,t)$, assume there exists a lattice $N$ such that $L^\perp=N\oplus U$. Then \begin{enumerate} \item a family of ample $L$-polarized K3 surfaces $\{S\}$ \item a family of ample $N$-polarized K3 surfaces $\{S^\vee\}$ \end{enumerate} are mirror symmetric. \end{Def} In fact, when the assumption holds, for a generic $L$-polarized K3 surface $S$ and a generic $N$-polarized K3 surface $S^\vee$, we have $$ NS'(S) \cong L \oplus U \cong T(S^\vee), \ \ \ T(S) \cong N \oplus U \cong NS'(S^\vee). $$ These isomorphisms are considered as a duality between the algebraic cycles of $S$ and the transcendental cycles of $S^\vee$, and vice versa. Mirror duality can also be investigated at the level of the moduli spaces. We refer the reader to the original article \cite{Dol} for more details. The crucial assumption in Dolgachev's formulation is the existence of a decomposition of the form $L^\perp=N\oplus U$ for $L \subset \Lambda_{K3}$ (or more generally $L^\perp =N \oplus U(k)$ for some $k \in \mathbb{N}$). There are geometric explanations for such a decomposition; the hyperbolic lattice $U$ corresponds to \begin{enumerate} \item the standard cusps of the Baily--Borel compactification of the period domains \cite{Sca}, \item the lattice spanned by the fiber and section classes of the SYZ fibrations on a mirror K3 surface. \end{enumerate} Although this formulation works beautifully in many cases, it cannot be a definitive one, as this artificial assumption does not hold in general. A prototypical example of such K3 surfaces is a Shioda--Inose K3 surface, which is a K3 surface $S$ with the maximal Picard number $20$. Hence, by the Hodge index theorem, $T(S)$ is of signature $(2,0)$ and the assumption is never satisfied. The natural question ``How can we understand mirror symmetry for Shioda--Inose K3 surfaces?'' is a long-standing puzzle in the mirror symmetry community.
Indeed, its K\"ahler moduli space is of dimension 20 and complex moduli space is of dimension 0, and there seems no mirror partner for a Shioda--Inose K3 surface. \begin{table}[htb] \begin{center} \begin{tabular}{c|c|c} & Shioda--Inose K3 surface& \ \ \ \ \ \ \ \ \ mirror ?? \ \ \ \ \ \ \ \ \ \\ \hline \ K\"ahler \ & 20-dim & 0-dim \\ \hline \ complex \ & 0-dim & 20-dim \end{tabular} \end{center} \end{table} We will provide an answer to this puzzling problem in the framework of mirror symmetry for generalized K3 surfaces. \subsection{Lattice polarization via Mukai lattice} For integers $\kappa,\lambda \ge 2$ such that $\kappa+\lambda=24$, let $K$ and $L$ be even lattice of signature $(2,\kappa-2)$ and $(2,\lambda-2)$ respectively. \begin{Def} A $(K, L)$-polarized generalized K3 surface is a pair $(X,i)$ of a generalized K3 surface $X=(\varphi_A,\varphi_B)$ and a primitive embedding $i: K \oplus L \hookrightarrow H^*(M,\mathbb{Z})$ such that \begin{enumerate} \item $K \subset \widetilde{NS}(X)$ and $K_\mathbb{C}$ contains a generalized CY structure of type $A$, \item $L \subset \widetilde{T}(X)$ and $L_\mathbb{C}$ contains a generalized CY structure of type $B$. \end{enumerate} \end{Def} The containment of a generalized CY structure of type $A$ or type $B$ is comparable with the ampleness condition for the conventional lattice polarizations. Without these, swapping $\varphi_A$ and $\varphi_B$ produces a trivial mirror families. \begin{Rem} In the conventional $L$-polarization, there is asymmetry between algebraic and transcendental cycles; the inclusions $L \subset NS(S)$ and $ L^{\perp} \supset T(S)$ are not symmetric. On the other hand, a $(K,L)$-polarization imposes the inclusions $K \subset \widetilde{NS}(X)$ and $L \subset \widetilde{T}(X)$ evenly. 
\end{Rem} \subsection{Mirror symmetry for generalized K3 surfaces} As in the case of K3 surfaces, there may be several families of $(K,L)$-polarized generalized K3 surfaces, and we choose one for the purpose of mirror symmetry. \begin{Def} \label{MS for (K,L)-pol gK3s} For integers $\kappa,\lambda \ge 2$ such that $\kappa+\lambda=24$, let $K, L$ be even lattices of signature $(2,\kappa-2)$ and $(2,\lambda-2)$ respectively. Then \begin{enumerate} \item a family of $(K,L)$-polarized generalized K3 surfaces $\{X\}$ \item a family of $(L, K)$-polarized generalized K3 surfaces $\{X^\vee\}$ \end{enumerate} are mirror symmetric. \end{Def} The fundamental idea is comparable with Dolgachev's formulation. A major difference is the mixture of the degrees of cycles. For example, $\widetilde{NS}(X)$ may not contain a pure zero cycle (a point is no longer algebraic in this sense). Another notable difference is that there is no artificial assumption on the lattices. Let $X=(\varphi_A,\varphi_B)$ be a $(K, L)$-polarized generalized K3 surface. Note that the inclusions $K \subset \widetilde{NS}(X)$ and $L \subset \widetilde{T}(X)$ are equivalent to $\varphi_B \in L_\mathbb{C}$ and $\varphi_A \in K_\mathbb{C}$ respectively. \begin{Def} A deformation of $\mathbb{C} \varphi_A$ keeping $X$ as a $(K, L)$-polarized generalized K3 surface is called an $A$-deformation. The $A$-moduli space $\mathfrak{M}_A^X$ is defined as the space of $A$-deformations. A $B$-deformation and the $B$-moduli space $\mathfrak{M}_B^X$ are defined similarly. \end{Def} By the Torelli theorem (Theorem \ref{Torelli theorem}), $\dim \mathfrak{M}_A^X=\kappa-2$ and $\dim \mathfrak{M}_B^X=\lambda-2$. The construction of the moduli spaces works in largely the same way as that for lattice polarized K3 surfaces (see Dolgachev's work \cite{Dol} for details).
We first define $$ \mathfrak{D}_K =\{ [\varphi] \in \mathbb{P}(K_\mathbb{C}) \ | \ \langle \varphi, \varphi \rangle=0, \langle \varphi, \overline{\varphi} \rangle >0 \}, $$ which can be identified with $\mathrm{Gr}^{po}_{2}(K_\mathbb{R})\cong \mathrm{O}(2,\kappa-2)/(\mathrm{SO}(2)\times \mathrm{O}(\kappa-2))$. It is a complex manifold of dimension $\kappa-2$ with two connected components $\mathfrak{D}_K^+$ and $\mathfrak{D}_K^-$, each of which is a symmetric domain of type IV. Independent of the choice of lattices, the $(K, L)$-polarized generalized K3 surfaces $X$ form a $20$-dimensional family. By the Torelli theorem, the moduli space $\mathfrak{M}_A^X \times \mathfrak{M}_B^X$ is identified with an open subset of $\mathfrak{D}_{(K,L)}= \mathfrak{D}_K^+ \times \mathfrak{D}_L^+$. \subsection{Comparison: classical and new} We will next confirm that our formulation of mirror symmetry naturally includes Dolgachev's. Let $K' \subset \Lambda_{K3}$ be a sublattice of signature $(1,\rho-1)$. Assume that there exists a decomposition $(K')^\perp=L' \oplus U$ for a lattice $L'$ of signature $(1,19-\rho)$. Mirror symmetry for lattice polarized K3 surfaces asserts that the $K'$-polarized K3 surfaces and the $L'$-polarized K3 surfaces are mirror symmetric to each other. Let $M_\sigma$ be a generic $K'$-polarized K3 surface equipped with a $B$-field $B$ shift of a hyperK\"ahler form $\omega$ ($B, \omega \in K'_\mathbb{R}$ are also taken generically). It defines a generalized K3 surface $$ X=(\varphi_A,\varphi_B)=(e^{B+\sqrt{-1}\omega}, \sigma), $$ and we have \begin{align} \widetilde{NS}(X) &= NS'(M_\sigma) \cong K' \oplus U, \notag \\ \widetilde{T}(X) &= T(M_\sigma) \cong L' \oplus U. \notag \end{align} Similarly, let $M_{\sigma^\vee}$ be a generic $L'$-polarized K3 surface equipped with a $B$-field $B^\vee$ shift of a hyperK\"ahler form $\omega^\vee$. It defines a generalized K3 surface $$ X^\vee=(\varphi_A,\varphi_B)=(e^{B^\vee+\sqrt{-1}\omega^\vee}, \sigma^\vee),
$$ and we have \begin{align} \widetilde{NS}(X^\vee) &= NS'(M_{\sigma^\vee}) \cong L' \oplus U, \notag \\ \widetilde{T}(X^\vee) &= T(M_{\sigma^\vee}) \cong K' \oplus U. \notag \end{align} By setting $K=K'\oplus U$ and $L=L'\oplus U$, $X$ is a $(K,L)$-polarized generalized K3 surface and $X^\vee$ is an $(L,K)$-polarized generalized K3 surface. Therefore the classical formulation of mirror symmetry is naturally a special case of ours. \section{Mirror symmetry for Shioda--Inose K3 surfaces} \label{MS for SIK3} \subsection{Mirror symmetry for Shioda--Inose K3 surfaces} Our formulation of mirror symmetry really comes into its own when the classical formulation fails. We will finally discuss mirror symmetry for Shioda--Inose (complex rigid) K3 surfaces in the framework of mirror symmetry for generalized K3 surfaces. For an integer $n>0$, we define $$ K=\langle -2n \rangle^{\oplus 2} \oplus U^{\oplus 2} \oplus E_8^{\oplus 2}, \ \ \ L=\langle 2n \rangle^{\oplus 2}, $$ where $\langle k \rangle$ denotes the lattice of rank $1$ generated by an element $v$ with $v^2=k$. \begin{enumerate} \item A family of the $(K, L)$-polarized generalized K3 surfaces is given by the family of the Shioda--Inose K3 surfaces $\{X=(e^{B +\sqrt{-1}\omega}, \sigma)\}$, where $\sigma$ is the complex structure of $M$ such that $T(M_\sigma)=L$ (such a complex structure is unique up to isomorphism) and $B, \omega \in NS(M_\sigma)_\mathbb{R}$. For a generic $X$, we have $$ \widetilde{NS}(X)=NS'(M_\sigma), \ \ \ \widetilde{T}(X)=T(M_\sigma), $$ which are isomorphic to $K$ and $L$ respectively. \item A family of the $(L, K)$-polarized generalized K3 surfaces contains the family of the classical K3 surfaces of the form $\{X^\vee=(e^{\sqrt{-1}H}, \sigma^\vee)\}$ as a subfamily. Here $\sigma^\vee$ is a complex structure of $M$ such that $NS(M_{\sigma^\vee})=\mathbb{Z} H, H^2=2n>0$.
For a generic $X^\vee$, we have $$ \widetilde{NS}(X^\vee) \subset NS'(M_{\sigma^\vee}), \ \ \ \widetilde{T}(X^\vee) \supset T(M_{\sigma^\vee}) $$ and $\widetilde{NS}(X^\vee)=L$, $ \widetilde{T}(X^\vee)=K$. \end{enumerate} We observe that the dimensions of the A- and B-moduli spaces are interchanged between the mirror families (cf.\ the table in Section \ref{MS ala Dolgachev}). \begin{table}[htb] \begin{center} \begin{tabular}{c|c|c} & \ $(K, L)$-polarization \ & \ $(L, K)$-polarization \ \\ \hline $\mathfrak{M}_A$ & 20-dim & 0-dim \\ \hline $\mathfrak{M}_B$ & 0-dim & 20-dim \end{tabular} \end{center} \end{table} To summarize, the mirror partner of a Shioda--Inose K3 surface is in general given by a generalized K3 surface. The family of the K3 surfaces of the form $\{X^\vee=(e^{\sqrt{-1}H}, \sigma^\vee)\}$ is a 19-dimensional family contained in the genuine 20-dimensional mirror family. In order to understand mirror symmetry for K3 surfaces, we need to incorporate the deformations of generalized K3 surfaces in general. \begin{Rem} The combination of generalized CY geometry and lattice polarization via the Mukai lattice is also the correct framework for mirror symmetry for $4$-tori $T^4$. \end{Rem} \subsection{Attractor mechanisms} The initial motivation of the present article comes from the attractor mechanisms on moduli spaces of CY 3-folds \cite{Moo, FK}. Let $X$ be a projective CY 3-fold. Let $\pi: \widetilde{\mathfrak{M}}_{\mathrm{Cpx}} \rightarrow \mathfrak{M}_{\mathrm{Cpx}}$ be the universal covering of the complex moduli space $\mathfrak{M}_{\mathrm{Cpx}}$ of $X$. The normalized central charge of a $3$-cycle $\gamma \in H_3(X,\mathbb{Z})$ is defined by $$ Z(\Omega_{X_z},\gamma)= e^{\frac{K^B(z)}{2}}\int_\gamma \Omega_{X_z}, $$ where $\Omega_{X_z}$ is a holomorphic volume form of $X_z$ and $K^B(z)$ is the Weil--Petersson potential $$ K^B(z)=- \log\left(\sqrt{-1}\int_{X_z} \Omega_{X_z} \wedge \overline{ \Omega_{X_z}}\right) $$ on $\widetilde{\mathfrak{M}}_{\mathrm{Cpx}}$.
It induces a function $$ |Z(-,\gamma)|:\widetilde{\mathfrak{M}}_{\mathrm{Cpx}} \longrightarrow \mathbb{R}_{\ge0}, \ \ \ z \mapsto |Z(\Omega_{X_z},\gamma)|, $$ called the mass function of $\gamma$. The stationary points of the mass functions are called the complex attractors, and the corresponding CY 3-folds are called the complex attractor varieties. This new class of CY 3-folds is a vast generalization of rigid CY 3-folds and is expected to possess rich structures. In light of mirror symmetry, the K\"ahler attractor mechanisms were developed in our recent article \cite{FK}. The K\"ahler attractor varieties are defined in the same fashion (they correspond to the stationary points of the K\"ahler mass functions). If $X$ and $Y$ are mirror CY 3-folds, then the mirror map should induce a bijective correspondence between the set $\mathrm{Attr}_{\mathrm{Cpx}}^X$ of complex attractors of $X$ and the set $\mathrm{Attr}_{\mathrm{Kah}}^Y$ of K\"ahler attractors of $Y$: $$ \xymatrix@=18pt{ \mathfrak{M}_{\mathrm{Cpx}}^X\ar@{}[d]|{\bigcup} \ar@{}[r]|*{\cong} & \mathfrak{M}_{\mathrm{Kah}}^Y \ar@{}[d]|{\bigcup} \\ \mathrm{Attr}^X_{\mathrm{Cpx}} \ar@{}[r]|*{\cong}& \mathrm{Attr}^Y_{\mathrm{Kah}} } $$ An interesting observation is obtained when $X$ and $Y$ are the product of a K3 surface and an elliptic curve. On the complex side, a complex attractor variety is the product of a Shioda--Inose K3 surface $M_\sigma$ and an elliptic curve. On the K\"ahler side, a K\"ahler attractor variety is the product of a K\"ahler rigid symplectic manifold $(M,\omega)$ and an elliptic curve. This observation led us to the hidden integral structure on the K\"ahler moduli space and the idea of mirror symmetry in the framework of generalized K3 surfaces.
\section{Numerical flux, temporal scheme, and CFL condition employed in the numerical simulations}\label{app:numerics} This appendix aims to present the necessary details to compute the numerical flux, \red{boundary conditions}, the CFL condition, and the temporal discretization for the simulations in section \ref{sec:numtest}. The pressure function in the simulations has the form of $P(\rho)=\rho^m$, with $m\geq1$. When $m=1$ the pressure satisfies the ideal-gas relation $P(\rho)=\rho$, and the density does not present vacuum regions during the temporal evolution. For this case the employed numerical flux is the versatile local Lax-Friedrichs flux, which approximates the flux at the boundary $F_{i+\frac{1}{2}}$ in \eqref{eq:numflux} as \begin{equation} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)=\frac{1}{2}\left(F\left(U_{i+\frac{1}{2}}^-\right)+F\left(U_{i+\frac{1}{2}}^+\right)-\lambda_{i+\frac{1}{2}} \left(U_{i+\frac{1}{2}}^+-U_{i+\frac{1}{2}}^-\right)\right), \end{equation} where $\lambda$ is taken as the maximum of the absolute value of the eigenvalues of the system, \begin{equation} \lambda_{i+\frac{1}{2}}=\max_{\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)}\left\{\left|u+\sqrt{P'(\rho)}\right|,\left|u-\sqrt{P'(\rho)}\right|\right\}. \end{equation} This maximum is taken locally for every node, resulting in different values of $\lambda$ along the lines of nodes. It is also possible to take the maximum globally, leading to the classical Lax-Friedrichs scheme. For the simulations where $P(\rho)=\rho^m$ and $m>1$, vacuum regions with $\rho=0$ are generated. This implies that the hyperbolicity of the system \eqref{eq:generalsys2} is lost in those regions, and the local Lax-Friedrichs scheme fails. As a result, an appropriate numerical flux has to be implemented to handle the vacuum regions. In this case a kinetic solver based on \cite{perthame2001kinetic} is employed.
This solver is constructed from kinetic formalisms applied to macroscopic models, and has already been employed in previous works for shallow-water applications \cite{audusse2005well}. The flux at the boundary $F_{i+\frac{1}{2}}$ in \eqref{eq:numflux} is computed from \begin{equation} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)=A_-\left(U_{i+\frac{1}{2}}^-\right)+A_+\left(U_{i+\frac{1}{2}}^+\right), \end{equation} where \begin{equation} A_-\left(\rho,\rho u\right)=\int_{\xi\geq0}\xi \begin{pmatrix} 1 \\ \xi \end{pmatrix} M(\rho,u-\xi)\,d\xi, \quad A_+\left(\rho,\rho u\right)=\int_{\xi\leq0}\xi \begin{pmatrix} 1 \\ \xi \end{pmatrix} M(\rho,u-\xi)\,d\xi. \end{equation} The function $M(\rho,\xi)$ is chosen according to the kinetic representation of the macroscopic system, and for this case satisfies \begin{equation} M(\rho,\xi)=\rho^{\frac{3-m}{2}} \chi\left(\frac{\xi}{\rho^\frac{m-1}{2}}\right), \end{equation} whose normalization ensures that the first three moments of $M$ recover $\rho$, $\rho u$ and $\rho u^2+P(\rho)$. The function $\chi(\omega)$ can be chosen in different ways. For these simulations we simply take it as a characteristic function, \begin{equation}\label{eq:chikinetic} \chi(\omega)=\frac{1}{\sqrt{12}}\mathbbm{1}_{\left\{\left|\omega\right|\leq\sqrt{3}\right\}}, \end{equation} although \cite{perthame2001kinetic} presents other possible choices for $\chi(\omega)$. Further valid numerical fluxes able to treat vacuum, such as the Rusanov flux or the Suliciu relaxation solver, are reviewed in \cite{bouchut2004nonlinear}. \red{The boundary conditions are taken to be no flux both for the density and the momentum equations. As a result, the evaluation of the numerical fluxes \eqref{eq:numflux} at the boundaries of the domain is taken as \begin{equation} F_{i-\frac{1}{2}}=0\enspace\text{if}\:i=1 \quad \text{and}\quad F_{i+\frac{1}{2}}=0\enspace\text{if}\:i=n. \end{equation}} The time discretization is accomplished by means of the third-order TVD Runge-Kutta method \cite{gottlieb1998total}.
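Both numerical fluxes above can be checked for consistency in a few lines. The following sketch (hypothetical function and array names, and the example exponent $m=2$; not the code used for the simulations) evaluates the local Lax-Friedrichs flux and the kinetic flux by quadrature, with the Maxwellian normalized so that its moments reproduce $(\rho,\rho u,\rho u^{2}+\rho^{m})$:

```python
import numpy as np

M_EXP = 2.0  # pressure exponent m in P(rho) = rho**m (example choice)

def physical_flux(rho, q):
    """Exact flux F(U) = (rho*u, rho*u**2 + P(rho)) for U = (rho, q = rho*u)."""
    u = q / rho
    return np.array([q, q * u + rho**M_EXP])

def llf_flux(UL, UR, m=M_EXP):
    """Local Lax-Friedrichs flux between the left/right states UL, UR."""
    lam = max(abs(U[1] / U[0]) + np.sqrt(m * U[0]**(m - 1)) for U in (UL, UR))
    return 0.5 * (physical_flux(*UL) + physical_flux(*UR)
                  - lam * (np.array(UR) - np.array(UL)))

def _trapz(y, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def kinetic_flux(UL, UR, m=M_EXP, n=20001):
    """Kinetic flux A_-(UL) + A_+(UR) with chi = 1/sqrt(12) on |w| <= sqrt(3).
    A vacuum state rho = 0 simply contributes a zero Maxwellian."""
    def A(U, upwind):
        rho, q = U
        if rho <= 0.0:
            return np.zeros(2)
        u, c = q / rho, rho**((m - 1) / 2)
        xi = np.linspace(u - np.sqrt(3) * c, u + np.sqrt(3) * c, n)
        Mx = rho**((3 - m) / 2) / np.sqrt(12) * np.ones_like(xi)
        mask = (xi < 0.0) if upwind == '-' else (xi > 0.0)
        Mx[mask] = 0.0  # keep only the transported half-space of velocities
        return np.array([_trapz(xi * Mx, xi), _trapz(xi**2 * Mx, xi)])
    return A(UL, '-') + A(UR, '+')
```

For equal left and right states both fluxes reduce to the physical flux $F(U)$, which is the basic consistency requirement; the kinetic flux additionally returns a zero flux at vacuum, where the Lax-Friedrichs eigenvalue bound degenerates.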
From \eqref{eq:compactsys} we can define $L(U)$ as $L(U)=S(x,U)-\partial_xF(U)$, so that $\partial_t U=L(U)$. Then, the third-order TVD Runge-Kutta temporal scheme to advance from $U^n$ to $U^{n+1}$ with a time step $\Delta t$ reads \begin{align*} U^{(1)}&=U^n+\Delta t L\left(U^n\right),\\ U^{(2)}&=\frac{3}{4}U^n+\frac{1}{4}U^{(1)}+\frac{1}{4}\Delta t L\left(U^{(1)}\right),\\ U^{n+1}&=\frac{1}{3}U^n+\frac{2}{3}U^{(2)}+\frac{2}{3}\Delta t L\left(U^{(2)}\right). \end{align*} The time step $\Delta t$ for the case of the Lax-Friedrichs flux is chosen from the CFL condition, \begin{equation} \Delta t = \text{CFL} \frac{\min_i \Delta x_i}{\max_{\forall\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)}\left\{\left|u+\sqrt{P'(\rho)}\right|,\left|u-\sqrt{P'(\rho)}\right|\right\}}, \end{equation} and the $\Delta t$ for the kinetic flux, with a function $\chi(\omega)$ as in \eqref{eq:chikinetic}, is chosen as \begin{equation} \Delta t = \text{CFL} \frac{\min_i \Delta x_i}{\max_{\forall\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)}\left\{|u|+3^{\frac{m-1}{4}}\right\}}. \end{equation} The CFL number is taken as $0.7$ in all the simulations. \section{Introduction}\label{sec:intro} The construction of robust well-balanced numerical methods for conservation laws has attracted a lot of attention since the initial works of LeRoux and collaborators \cite{greenberg1996well, gosse1996well}. The well-balanced property is equivalent to the exact C-property defined beforehand by Berm\'udez and V\'azquez in \cite{bermudez1994upwind}, and both of them refer to the ability of a numerical scheme to preserve the steady states at a discrete level and to accurately compute evolutions of small deviations from them. The historical evolution of well-balanced schemes is reviewed in \cite{gosse2013computing}.
On the other hand, the derivation of numerical schemes preserving structural properties of the evolutions under study, such as dissipations or conservations of relevant physical quantities, is an important line of research in hydrodynamic systems and their overdamped limits, see for instance \cite{de2012discontinuous,carrillo2015finite,sun2018discontinuous,pareschi2018structure}. In the present work, we propose numerical schemes with well-balanced and free energy dissipation properties for a general class of balance laws or hydrodynamic models with attractive-repulsive interaction forces, and linear or nonlinear damping effects, such as the Cucker-Smale alignment term in swarming. The general hydrodynamic system has the form \begin{equation}\label{eq:generalsys} \begin{cases} \partial_{t}\rho+\nabla\cdot\left(\rho \bm{u}\right)=0,\quad \bm{x}\in\mathbb{R}^{d},\quad t>0,\\[2mm] {\displaystyle \partial_{t}(\rho \bm{u})\!+\!\nabla\!\cdot\!(\rho \bm{u}\otimes \bm{u})\!=-\nabla P(\rho)-\rho \nabla H(\bm{x},\rho) - \gamma\rho \bm{u}-\!\rho\!\!\!\int_{\mathbb{R}^{d}}\!\!\psi(\bm{x}-\bm{y})(\bm{u}(\bm{x})-\bm{u}(\bm{y}))\rho(\bm{y})\,d\bm{y} ,} \end{cases} \end{equation} where $\rho = \rho(\bm{x},t)$ and $\bm{u} = \bm{u}(\bm{x},t)$ are the density and the velocity, $P(\rho)$ is the pressure, and $H(\bm{x},\rho)$ contains the attractive-repulsive effects from external potentials $V$ or interaction potentials $W$, assumed to be locally integrable, given by \begin{equation*}\label{eq:pot} H(\bm{x},\rho)=V(\bm{x})+W(\bm{x})\star \rho, \end{equation*} and $\psi(\bm{x})$ is a nonnegative symmetric smooth function, called the communication function in the Cucker-Smale model \cite{cucker2007emergent,cucker2007mathematics}, describing collective behavior of systems due to alignment \cite{carrillo2017review}. The fractional-step methods \cite{leveque2002finite} have been a widely employed tool to simulate the temporal evolution of balance laws such as \eqref{eq:generalsys}.
They are based on a division of the problem in \eqref{eq:generalsys} into two simpler subproblems: the homogeneous hyperbolic system without source terms, and the temporal evolution of density and momentum without the flux terms but including the sources. These subproblems are then solved alternately, employing suitable numerical methods for each. This procedure introduces a splitting error which is acceptable for the temporal evolution, but becomes critical when the objective is to preserve the steady states. This is due to the fact that the steady state is reached when the fluxes are exactly balanced with the source terms at each discrete node of the domain. However, when the two subproblems are solved alternately, this discrete balance can never be achieved, since the fluxes and source terms are not resolved simultaneously. To correct this deficiency, well-balanced schemes are designed to discretely satisfy the balance between fluxes and sources when the steady state is reached \cite{bouchut2004nonlinear}. The strategy to construct well-balanced schemes relies on the fact that, when the steady state is reached, there are some constant relations of the variables that hold in the domain. These relations allow the resolution of the fluxes and sources at the same level, thus avoiding the division that the fractional-step methods introduce. Moreover, if the system enjoys a dissipative property and has a Lyapunov functional, obtaining analogous tools at the discrete level is key for the derivation of well-balanced schemes.
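A scalar caricature makes this failure mechanism concrete. Consider $du/dt=-u+1$, whose steady state $u_*=1$ balances the two right-hand side terms; splitting the step into an exact solve of $du/dt=-u$ followed by an exact solve of $du/dt=1$ moves the scheme off $u_*$ by $O(\Delta t^2)$ per step (a toy illustration only, not one of the schemes discussed in this work):

```python
import math

def lie_split_step(u, dt):
    """Fractional step: solve du/dt = -u exactly, then du/dt = +1 exactly."""
    u = u * math.exp(-dt)  # "flux-like" subproblem
    return u + dt          # "source" subproblem

def unsplit_step(u, dt):
    """Exact propagator of the full equation du/dt = -u + 1."""
    return 1.0 + (u - 1.0) * math.exp(-dt)

u_star, dt = 1.0, 0.1
print(unsplit_step(u_star, dt) - u_star)    # exactly 0: steady state preserved
print(lie_split_step(u_star, dt) - u_star)  # ~ 4.8e-3: drift off the steady state
```

A well-balanced discretization plays the role of `unsplit_step` here: the damping and the forcing are resolved in the same update, so their exact cancellation at the steady state survives discretization.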
In this work the steady-state relations and the dissipative property are obtained by means of the associated free energy, which in the case of the system in \eqref{eq:generalsys} is formulated as \begin{equation}\label{eq:freeenergy} \mathcal{F}[\rho]=\int_{\mathbb{R}^{d}}\Pi(\rho)d\bm{x}+\int_{\mathbb{R}^{d}}V(\bm{x})\rho(\bm{x})d\bm{x}+\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}W(\bm{x}-\bm{y})\rho(\bm{x})\rho(\bm{y}) d\bm{x} d\bm{y}, \end{equation} where \begin{equation}\label{eq:PandPi} \rho \Pi''(\rho)=P'(\rho). \end{equation} The pressure $P(\rho)$ and the potential term $H(\bm{x},\rho)$ appearing in the general system \eqref{eq:generalsys} can be gathered by considering the associated free energy. Taking into account that the variation of the free energy in \eqref{eq:freeenergy} with respect to the density $\rho$ is equal to \begin{equation}\label{eq:varfreeenergy} \frac{\delta \mathcal{F}}{\delta \rho}=\Pi'(\rho)+H(\bm{x},\rho), \end{equation} it follows that the general system \eqref{eq:generalsys} can be written in a compact form as \begin{equation}\label{eq:generalsys2} \begin{cases} \partial_{t}\rho+\nabla\cdot\left(\rho \bm{u}\right)=0,\quad \bm{x}\in\mathbb{R}^{d},\quad t>0,\\[2mm] {\displaystyle \partial_{t}(\rho \bm{u})\!+\!\nabla\!\cdot\!(\rho \bm{u}\otimes \bm{u})\!=-\rho \nabla \frac{\delta \mathcal{F}}{\delta \rho}-\gamma\rho \bm{u}-\!\rho\!\!\int_{\mathbb{R}^{d}}\!\!\psi(\bm{x}-\bm{y})(\bm{u}(\bm{x})-\bm{u}(\bm{y}))\rho(\bm{y})\,d\bm{y} .} \end{cases} \end{equation} The system in \eqref{eq:generalsys2} is rather general, containing a wide variety of physical problems under the so-called density functional theory (DFT) and its dynamic extension (DDFT); see e.g. \cite{goddard2012general,goddard2012unification,goddard2012overdamped,yatsyshin2012spectral,yatsyshin2013geometry,duran2016dynamical} and the references therein.
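The compatibility relation \eqref{eq:PandPi} can be verified symbolically for the two pressure laws appearing later in the paper, $P(\rho)=\rho$ with $\Pi(\rho)=\rho(\ln\rho-1)$ and $P(\rho)=\rho^m$ with $\Pi(\rho)=\rho^m/(m-1)$ (a quick sanity check using SymPy, not part of the schemes themselves):

```python
import sympy as sp

rho, m = sp.symbols('rho m', positive=True)

# Ideal-gas law: P(rho) = rho  <->  Pi(rho) = rho*(log(rho) - 1).
Pi_ig, P_ig = rho * (sp.log(rho) - 1), rho
assert sp.simplify(rho * sp.diff(Pi_ig, rho, 2) - sp.diff(P_ig, rho)) == 0

# Power law: P(rho) = rho**m with m > 1  <->  Pi(rho) = rho**m/(m - 1).
Pi_pl, P_pl = rho**m / (m - 1), rho**m
assert sp.simplify(rho * sp.diff(Pi_pl, rho, 2) - sp.diff(P_pl, rho)) == 0
```

In both cases $\Pi$ is convex for $\rho>0$, so $\Pi'$ is invertible there, which is the property exploited below to parametrize steady states.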
A variety of well-balanced schemes have already been constructed for specific choices of the terms $\Pi(\rho)$, $V(\bm{x})$ and $W(\bm{x})$ in the free energy \eqref{eq:freeenergy}, see \cite{audusse2004fast, bouchut2004nonlinear, filbet2005approximation} for instance. Here the focus is set on the free energy and the natural structure of the system \eqref{eq:generalsys2}. It is naturally advantageous to consider the concept of free energy in the construction procedure of well-balanced schemes, since they rely on relations that hold in the steady states, and moreover, the variation of the free energy with respect to the density is constant when reaching these steady states, more precisely \begin{equation}\label{eq:steadyvarener} \frac{\delta \mathcal{F}}{\delta \rho}=\Pi'(\rho)+H(\bm{x},\rho)= \text{constant on each connected component of}\ \mathrm{supp}(\rho)\ \text{and}\ \bm{u}=\bm{0}, \end{equation} where the constant can vary on different connected components of $\mathrm{supp}(\rho)$. As a result, the constant relations in the steady states, which are needed for well-balanced schemes, are directly provided by the variation of the free energy with respect to the density. The steady state relations in \eqref{eq:steadyvarener} hold due to the dissipation of the linear damping $-\gamma\rho \bm{u}$ or the nonlinear damping in the system \eqref{eq:generalsys}, which eventually eliminates the momentum of the system.
This can be justified by means of the total energy of the system, defined as the sum of the kinetic and free energies, \begin{equation}\label{eq:totalenergy} E(\rho,\bm{u})=\int_{\mathbb{R}^{d}}\frac{1}{2}\rho \left|\bm{u}\right|^2 d\bm{x}+\mathcal{F}(\rho), \end{equation} since it is formally dissipated, see \cite{giesselmann2017relative,carrillo2017weak,carrillo2018longtime}, as \begin{equation}\label{eq:equalenergy} \frac{dE(\rho,\bm{u})}{dt}=-\gamma\int_{\mathbb{R}^{d}}\rho \left|\bm{u}\right|^2 d\bm{x}-\frac{1}{2}\!\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\!\!\psi(\bm{x}-\bm{y})\left|\bm{u}(\bm{y})-\bm{u}(\bm{x})\right|^2 \rho(\bm{x})\,\rho(\bm{y})\,d\bm{x}\,d\bm{y}. \end{equation} This last dissipation equation ensures that the total energy $E(\rho,\bm{u})$ keeps decreasing in time while there is kinetic energy in the system. At the same time, since the definition of the total energy \eqref{eq:totalenergy} also depends on the velocity $\bm{u}$, it follows that the velocity eventually vanishes throughout the domain. When $\bm{u}=\bm{0}$ throughout the domain, the momentum equation in \eqref{eq:generalsys2} reduces to \begin{equation*} \rho \nabla \frac{\delta \mathcal{F}}{\delta \rho}=\bm{0}, \end{equation*} meaning that the steady state relation \eqref{eq:steadyvarener} holds in the support of the density. However, at those points outside the support of the density, where $\rho=0$, the variation of the free energy with respect to the density does not need to keep the constant value when the steady state is reached. A discussion of the resulting steady states depending on $\Pi(\rho)$ and $H(\bm{x},\rho)$ is provided in \cite{carrillo2015finite,carrillo2016nonlinear,hoffmann2017keller}.
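As a concrete instance of \eqref{eq:steadyvarener}, the steady state can be computed directly by inverting $\Pi'$. The following sketch uses the illustrative choice $P(\rho)=\rho^2$ (so $\Pi'(\rho)=2\rho$), $V(x)=x^2/2$ and $W\equiv 0$ in one dimension, fixing the constant by the prescribed total mass via bisection (hypothetical grid and parameters):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 2001)
dx = x[1] - x[0]
V = 0.5 * x**2       # external potential (example choice)
mass = 1.0           # prescribed total mass

def steady_density(C):
    """Invert Pi'(rho) = 2*rho on the support: rho = (C - V)/2 where positive."""
    return np.maximum((C - V) / 2.0, 0.0)

# Bisection on C: the discrete mass is increasing in C.
lo, hi = 0.0, 10.0
for _ in range(60):
    C = 0.5 * (lo + hi)
    if np.sum(steady_density(C)) * dx < mass:
        lo = C
    else:
        hi = C

rho = steady_density(C)
supp = rho > 1e-12
variation = 2.0 * rho[supp] + V[supp]  # delta F / delta rho on the support
```

On the support the discrete variation equals the constant $C$ up to round-off, while the density vanishes outside of it, exactly the dichotomy described above.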
The system \eqref{eq:generalsys} also satisfies an entropy identity \begin{equation}\label{eq:entrineq} \partial_{t} \eta(\rho, \rho \textbf{u})+ \nabla \cdot \bm{G}(\rho, \rho \textbf{u})= -\rho \bm{u}\cdot \nabla H(\bm{x},\rho)-\gamma\rho \left|\bm{u}\right|^2-\rho \int_{\mathbb{R}^{d}}\!\!\psi(\bm{x}-\bm{y})\,\bm{u}(\bm{x}) \cdot (\bm{u}(\bm{x})-\bm{u}(\bm{y}))\rho(\bm{y})\,d\bm{y}, \end{equation} where $\eta(\rho,\rho\bm{u})$ and $\bm{G}(\rho,\rho\bm{u})$ are the entropy and the entropy flux, defined as \begin{equation}\label{eq:entropy} \eta(\rho, \rho \bm{u})=\rho \frac{\left|\bm{u}\right|^2}{2}+\Pi (\rho), \quad \bm{G} (\rho, \rho \bm{u})=\rho \bm{u}\left(\frac{\left|\bm{u}\right|^2}{2}+\Pi' (\rho) \right). \end{equation} From a physical point of view the entropy is always a convex function of the density \cite{lasota2013chaos}. As a result, from \eqref{eq:entropy} it is justified to assume that $\Pi(\rho)$ is convex, meaning that $\Pi'(\rho)$ has an inverse function for positive densities $\rho$. This last fact is a necessary requirement for the construction of the well-balanced schemes of this work, as explained in section \ref{sec:numsch}. Finally, notice that from the entropy identity \eqref{eq:entrineq} one recovers the free energy dissipation \eqref{eq:equalenergy} by integration, using the continuity equation to deal with the force term $H(\bm{x},\rho)$ and the symmetrization of the nonlinear damping term, which is possible since $\psi$ is symmetric. Let us also point out that the evolution of the center of mass of the density can be computed in some particular cases.
In fact, it is not difficult to deduce from \eqref{eq:generalsys2} that \begin{equation}\label{eq:moment} \frac{d}{dt}\int_{\mathbb{R}^{d}}\bm{x} \rho d\bm{x} =\int_{\mathbb{R}^{d}} \rho \bm{u} d\bm{x} \quad \mbox{and} \quad \frac{d}{dt}\int_{\mathbb{R}^{d}} \rho \bm{u} d\bm{x}= - \int_{\mathbb{R}^{d}} \nabla V(\bm{x}) \rho d\bm{x} -\gamma \int_{\mathbb{R}^{d}} \rho \bm{u} d\bm{x}\,, \end{equation} due to the antisymmetry of $\nabla W(\bm{x})$ and the symmetry of $\psi(\bm{x})$. Therefore, in case $V(\bm{x})$ is not present or is quadratic, the equations in \eqref{eq:moment} are explicitly solvable. Moreover, if the potential $V(\bm{x})$ is symmetric, the initial data for the density is symmetric, and the initial data for the velocity is antisymmetric, then the solution to \eqref{eq:generalsys2} keeps these symmetries in time, i.e., the density is symmetric and the velocity is antisymmetric for all times, and the center of mass is conserved: $$ \frac{d}{dt}\int_{\mathbb{R}^{d}}\bm{x} \rho d\bm{x}= 0\,. $$ The steady state relations \eqref{eq:steadyvarener} only hold when the linear damping term is included in system \eqref{eq:generalsys}. When only the nonlinear damping of Cucker-Smale type is present, the system has the so-called moving steady states, see \cite{carrillo2016pressureless,carrillo2017review,carrillo2018longtime}, which satisfy the more general relations \begin{equation}\label{eq:steadyvarener2} \frac{\delta \mathcal{F}}{\delta \rho}=\text{constant on each connected component of}\ \mathrm{supp}(\rho)\ \text{and}\ \bm{u}=\text{constant} . \end{equation} However, the construction of well-balanced schemes satisfying the moving steady state relations has proven to be more difficult than for the still steady states \eqref{eq:steadyvarener} without dissipation. For literature about well-balanced schemes for moving steady states without dissipation, we refer to \cite{noelle2007high,xing2011advantage}.
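For a quadratic confinement $V(\bm{x})=\nu|\bm{x}|^2/2$, so that $\int\nabla V\rho\,d\bm{x}=\nu\int\bm{x}\rho\,d\bm{x}$, the moment system \eqref{eq:moment} above is the damped linear oscillator $\dot m_1=p$, $\dot p=-\nu m_1-\gamma p$. A sketch cross-checking its closed form against a direct integration, for the critically damped example $\nu=1$, $\gamma=2$ (hypothetical parameters):

```python
import math

nu, gamma = 1.0, 2.0   # critically damped: gamma**2 = 4*nu, double root s = -1
m1_0, p_0 = 0.5, 0.0   # initial center of mass and total momentum

def exact(t):
    """Closed form for the double root s = -1: m1 = (A + B*t)*exp(-t), p = m1'."""
    A, B = m1_0, p_0 + m1_0
    return (A + B * t) * math.exp(-t), (B - A - B * t) * math.exp(-t)

def rk4(t_end, n=20000):
    """Plain RK4 integration of dm1/dt = p, dp/dt = -nu*m1 - gamma*p."""
    h, m1, p = t_end / n, m1_0, p_0
    f = lambda m1, p: (p, -nu * m1 - gamma * p)
    for _ in range(n):
        k1 = f(m1, p)
        k2 = f(m1 + 0.5 * h * k1[0], p + 0.5 * h * k1[1])
        k3 = f(m1 + 0.5 * h * k2[0], p + 0.5 * h * k2[1])
        k4 = f(m1 + h * k3[0], p + h * k3[1])
        m1 += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        p += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return m1, p
```

Both moments decay to zero, consistently with the linear damping discussion above; without confinement ($\nu=0$) the momentum decays as $e^{-\gamma t}$ and the center of mass converges to a finite limit.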
The most popular application in the literature for well-balanced schemes deals with the Saint-Venant system for shallow water flows with nonflat bottom \cite{audusse2004fast, bouchut2004nonlinear, canestrelli2009well, liang2009numerical, xing2005high, xing2014survey}, for which $\Pi(\rho)=\frac{g}{2}\rho^2$, with $g$ being the gravity constant, and $H(\bm{x},\rho)$ depends on the bottom. Here it is important to highlight the work of Audusse et al. \cite{audusse2004fast}, who propose a hydrostatic reconstruction that has successfully inspired more sophisticated well-balanced schemes in the area of shallow water equations \cite{marche2007evaluation, noelle2006well}. Another area where well-balanced schemes have been fruitful is chemosensitive movement, with the works of Filbet, Shu and their collaborators \cite{filbet2005derivation,filbet2005approximation,gosse2012asymptotic,xing2006high}. In this case the pressure satisfies $\Pi(\rho)=\rho \left(\ln(\rho)-1\right)$ and $H$ depends on the chemotactic sensitivity and the chemical concentration. The list of applications of the system \eqref{eq:generalsys} continues to grow with more choices of $\Pi(\rho)$ and $H(\bm{x},\rho)$ \cite{xing2006high}: the elastic wave equation, nozzle flow problem, two-phase flow model, etc. The orders of accuracy of the finite volume well-balanced schemes presented above range from first- and second-order \cite{audusse2004fast, leveque1998balancing, xu2002well,kurganov2007second,liang2009numerical} to higher-order versions \cite{xing2006high, vukovic2004weno,noelle2006well,gallardo2007well}. Again, the most popular application has been the shallow water equations, and the survey by Xing and Shu \cite{xing2014survey} provides a summary of all the shallow water methods with different accuracies.
Some of the previous schemes proposed were equipped to satisfy natural properties of the systems under consideration, such as nonnegativity of the density \cite{audusse2005well,kurganov2007second} or the satisfaction of a discrete entropy inequality \cite{audusse2004fast,filbet2005approximation}, enabling also the computation of dry states \cite{gallardo2007well}. Theoretically, the Godunov scheme satisfies all these properties \cite{leroux1999riemann}, but its main drawback is its computationally expensive implementation. The high-order schemes usually rely on the WENO reconstructions originally proposed by Jiang and Shu \cite{jiang1996efficient}. Other well-balanced numerical approaches employed to simulate the system \eqref{eq:generalsys2} are finite differences \cite{xing2006high2,xing2005high}, which are equivalent to the finite volume methods at first and second order, and the discontinuous Galerkin methods \cite{xing2006high}. The overdamped system of \eqref{eq:generalsys2} with $\psi\equiv 0$, obtained in the free inertia limit where the momentum reaches equilibrium on a much faster timescale than the density, has also been numerically resolved for general free energies of the form \eqref{eq:freeenergy}, via finite volume schemes \cite{carrillo2015finite} or discontinuous Galerkin approaches \cite{sun2018discontinuous}. These schemes for the overdamped system also preserve the dissipation of the free energy at the discrete level. \red{ The novelty of this work is twofold. Foremost, all these previous schemes were only applicable to specific choices of $\Pi(\rho)$ and $H(\bm{x},\rho)$, meaning that a general scheme valid for a wide range of applications is lacking. And while some previous schemes \cite{xing2006high} could be employed in more general cases, the focus in the literature has been on the shallow water and chemotaxis equations.
In addition, the function $H(\bm{x},\rho)$, which results from summing $V(\bm{x})$ and $W(\bm{x})\star \rho$ as in \eqref{eq:pot}, has so far been taken as dependent on $\bm{x}$ only, unlike the present work where it depends on $\rho$ by means of the convolution with an interaction potential $W(\bm{x})$.} In this work we present a finite volume scheme for a general choice of $\Pi(\rho)$ and $H(\bm{x},\rho)$ which is first- and second-order accurate and satisfies the nonnegativity of the density, the well-balanced property, the semidiscrete entropy inequality and the semidiscrete free energy dissipation. Furthermore, as shown in example \ref{ex:hardrods} of section \ref{sec:numtest}, it can also be applied to free energies more general than the one in \eqref{eq:freeenergy}, of the form \begin{equation}\label{eq:freeenergygeneral} \mathcal{F}[\rho]=\int_{\mathbb{R}^{d}}\Pi(\rho)d\bm{x}+\int_{\mathbb{R}^{d}}V(\bm{x})\rho(\bm{x})d\bm{x}+\frac{1}{2}\int_{\mathbb{R}^{d}}K\left(W(\bm{x})\star \rho(\bm{x})\right) \rho(\bm{x})d\bm{x}, \end{equation} where $K$ is a function depending on the convolution of $\rho(\bm{x})$ with the kernel $W(\bm{x})$. Its variation with respect to the density satisfies \begin{equation}\label{eq:varfreeenergygeneral} \frac{\delta \mathcal{F}}{\delta \rho}=\Pi'(\rho)+V(\bm{x})+\frac{1}{2} K\left(W(\bm{x})\star \rho \right)+\frac{1}{2} K'\left(W(\bm{x})\star \rho\right) \left(W(\bm{x})\star \rho\right). \end{equation} These free energies arise in applications related to (D)DFT \cite{duran2016dynamical,goddard2012unification}, see \cite{carrillo2017blob} for other related free energies and properties. \red{The other novel technical aspect of this work concerns the numerical treatment of the different source terms in \eqref{eq:generalsys}. In fact, in order to keep the well-balanced property and the decay of the free energy we treat the source terms differently.
While the dissipative terms are harmless and treated by direct approximations, the fundamental question is how to choose the discretization of the potential term given by $H(\bm{x},\rho) =V(\bm{x})+W(\bm{x})\star \rho$. For this purpose we appropriately extend the ideas in \cite{bouchut2004nonlinear,filbet2005approximation} to our case to keep the well-balanced property and the energy decay. The condition for stationary states \eqref{eq:steadyvarener} is crucial in defining an approximation of the term $-\rho \nabla H(\bm{x},\rho)$ by a discretization of $\nabla P(\rho)$ which is consistent when evaluated at new reconstructed values of the density at the interfaces that take into account the potential $H(\bm{x},\rho)$. This general treatment includes as specific cases both the shallow-water equations \cite{audusse2004fast, bouchut2004nonlinear} and the hyperbolic chemotaxis problem \cite{filbet2005approximation}.} Section \ref{sec:numsch} describes the first- and second-order well-balanced scheme reconstructions, and provides the proofs of their main properties. Section \ref{sec:numtest} contains the numerical simulations, with a first subsection \ref{subsec:val} where the validation of the well-balanced property and the orders of accuracy is conducted, and a second subsection \ref{subsec:numexp} with numerical experiments from different applications. A wide range of free energies is employed to highlight the versatility of our well-balanced scheme. A short summary and conclusions are offered in section 4.
\section{Well-balanced finite volume scheme}\label{sec:numsch} The terms appearing in the one-dimensional system \eqref{eq:generalsys2} are usually gathered in the form of \begin{equation}\label{eq:compactsys} \partial_t U + \partial_x F(U) = S(x,U), \end{equation} with \begin{equation*} U=\begin{pmatrix} \rho \\ \rho u \end{pmatrix}, \quad F(U)=\begin{pmatrix} \rho u \\ \rho u^2+P(\rho) \end{pmatrix} \end{equation*} and \begin{equation*} S(x,U)=\begin{pmatrix} 0 \\ -\rho \partial_x H -\gamma\rho u-\rho \displaystyle\int_{\mathbb{R}}\psi(x-y)(u(x)-u(y))\rho(y)\,dy \end{pmatrix}, \end{equation*} where $U$ are the unknown variables, $F(U)$ the fluxes and $S(x,U)$ the sources. The one-dimensional finite volume approximation of \eqref{eq:compactsys} is obtained by breaking the domain into grid cells with interfaces $\left(x_{i-1/2}\right)_{i\in \mathbb{Z}}$ and approximating in each of them the cell average of $U$. Then these cell averages are modified after each time step, depending on the flux through the edges of the grid cells and the cell average of the source term \cite{leveque2002finite}. Finite volume schemes for hyperbolic systems employ an upwinding of the fluxes and in the semidiscrete case they provide a discrete version of \eqref{eq:compactsys} under the form \begin{equation}\label{eq:fvbasic} \dfrac{d U_i}{dt}=-\dfrac{F_{i+\frac{1}{2}}-F_{i-\frac{1}{2}}}{\Delta x_i} + S_i, \end{equation} where the cell average of $U$ in the cell $\left(x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}\right)$ is denoted as \begin{equation*} U_i=\begin{pmatrix} \rho_i \\ \rho_i u_i \end{pmatrix}, \end{equation*} $F_{i+\frac{1}{2}}$ is an approximation of the flux $F(U)$ at the point $x_{i+\frac{1}{2}}$, $S_i$ is an approximation of the source term $S(x,U)$ in the cell $\left(x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}\right)$ and $\Delta x_i$ is the possibly variable mesh size $\Delta x_i=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}$.
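To fix ideas, the semidiscrete form \eqref{eq:fvbasic} can be sketched in a few lines of code. The Lax-Friedrichs flux and the pressure law $P(\rho)=\rho^2/2$ below are illustrative assumptions, not the only choices compatible with the scheme, and all function names are ours:

```python
import numpy as np

def P(rho):
    # illustrative pressure law P(rho) = rho^2/2 (shallow water with g = 1)
    return 0.5 * rho**2

def phys_flux(U):
    # physical flux F(U) = (rho*u, rho*u^2 + P(rho)) with U = (rho, rho*u)
    rho, m = U
    u = m / rho if rho > 0 else 0.0
    return np.array([m, m * u + P(rho)])

def lax_friedrichs(UL, UR, a):
    # Lax-Friedrichs flux; a must bound the wave speeds.
    # Consistent in the sense that lax_friedrichs(U, U, a) = F(U).
    return 0.5 * (phys_flux(UL) + phys_flux(UR)) - 0.5 * a * (UR - UL)

def semidiscrete_rhs(U, dx, a, source):
    """Right-hand side of dU_i/dt = -(F_{i+1/2} - F_{i-1/2})/dx + S_i
    on a uniform mesh with zero-flux outer boundaries.
    U has shape (n, 2): columns hold the cell averages of rho and rho*u."""
    n = len(U)
    # interior interface fluxes F_{i+1/2}, i = 0..n-2
    F = np.array([lax_friedrichs(U[i], U[i + 1], a) for i in range(n - 1)])
    rhs = np.zeros_like(U)
    rhs[0] = -F[0] / dx
    rhs[1:-1] = -(F[1:] - F[:-1]) / dx
    rhs[-1] = F[-1] / dx
    return rhs + source(U)
```

With zero-flux outer boundaries the flux differences telescope, so the total discrete mass is conserved by construction.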
The approximation of the flux $F(U)$ at the point $x_{i+\frac{1}{2}}$, denoted as $F_{i+\frac{1}{2}}$, is achieved by means of a numerical flux $\mathscr{F}$ which depends on two reconstructed values of $U$ at the left and right of the boundary between the cells $i$ and $i+1$. These two values, $U_{i+\frac{1}{2}}^-$ and $U_{i+\frac{1}{2}}^+$, are computed from the cell averages following different construction procedures that seek to satisfy certain properties, such as order of accuracy or nonnegativity. Two widely employed reconstruction procedures are the second-order finite volume monotone upstream-centered scheme for conservation laws, referred to as MUSCL \cite{osher1985convergence}, or the weighted essentially non-oscillatory schemes, widely known as WENO \cite{shu1998essentially}. Once these two reconstructed values are computed, $F_{i+\frac{1}{2}}$ is obtained from \begin{equation}\label{eq:numflux} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right). \end{equation} The numerical flux $\mathscr{F}$ is usually referred to as a Riemann solver, since it provides a stable resolution of the Riemann problem located at the cell interfaces, where the left value of the variables is $U_{i+\frac{1}{2}}^-$ and the right value is $U_{i+\frac{1}{2}}^+$. The literature concerning Riemann solvers is vast and there are different choices for it \cite{toro2013riemann}: Godunov, Lax-Friedrich, kinetic, Roe, etc. Some usual properties of the numerical flux that are assumed \cite{audusse2004fast, bouchut2004nonlinear, filbet2005approximation} are: \begin{enumerate}[label=\arabic*.] \item It is consistent with the physical flux, so that $\mathscr{F}(U,U)=F(U)$. \item It preserves the nonnegativity of the density $\rho_i (t)$ for the homogeneous problem, where the numerical flux is computed as in \eqref{eq:numflux}. \item It satisfies a cell entropy inequality for the entropy pair \eqref{eq:entropy} for the homogeneous problem.
Then, according to \cite{bouchut2004nonlinear}, it is possible to find a numerical entropy flux $\mathscr{G}$ such that \begin{multline}\label{eq:cellentropyineq1} \hspace{0.9cm} G(U_{i+1})+\nabla_U \,\eta(U_{i+1})\left(\mathscr{F}(U_{i},U_{i+1})-F(U_{i+1})\right)\\ \leq \mathscr{G}(U_{i},U_{i+1}) \leq G(U_{i})+\nabla_U \,\eta(U_{i})\left(\mathscr{F}(U_{i},U_{i+1})-F(U_{i})\right), \end{multline} where $\nabla_U\,\eta$ is the derivative of $\eta$ with respect to $U=\begin{pmatrix} \rho \\ \rho u\end{pmatrix}$. \end{enumerate} The first- and second-order well-balanced schemes described in this section propose an alternative reconstruction procedure for $U_{i+\frac{1}{2}}^-$ and $U_{i+\frac{1}{2}}^+$ which ensures that the steady state in \eqref{eq:steadyvarener} is discretely preserved when starting from that steady state. Subsections \ref{subsec:firstorder} and \ref{subsec:secondorder} contain the first- and second-order schemes, respectively, together with their proved properties. \subsection{First-order scheme}\label{subsec:firstorder} The basic first-order schemes approximate the flux $F_{i+\frac{1}{2}}$ by a numerical flux $\mathscr{F}$ which depends on the cell averaged values of $U$ at the two adjacent cells, so that the inputs for the numerical flux in \eqref{eq:numflux} are \begin{equation}\label{eq:numflux1} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_i,U_{i+1}\right). \end{equation} The resolution of the finite volume scheme in \eqref{eq:fvbasic} with a numerical flux of the form in \eqref{eq:numflux1} and a cell-centred evaluation of $-\rho \partial_x H$ for the source term $S_i$ is not generally able to preserve the steady states, as it was shown in the initial works of well-balanced schemes \cite{greenberg1996well, gosse1996well}. These steady states are provided in \eqref{eq:steadyvarener}, and satisfy that the variation of the free energy with respect to the density has to be constant in each connected component of the support of the density. 
The discrete steady state is defined in a similar way, \begin{equation}\label{eq:steadyvarenerdiscrete} \left(\frac{\delta \mathcal{F}}{\delta \rho}\right)_i=\Pi'(\rho_i)+H_i= C_\Gamma \text{ in each}\ \Lambda_\Gamma, \Gamma\in\mathbb{N}\,, \end{equation} where $\Lambda_\Gamma$, $\Gamma\in\mathbb{N}$, denotes the possible infinite sequence indexed by $\Gamma$ of subsets $\Lambda_\Gamma$ of subsequent indices $i\in\mathbb{Z}$ where $\rho_i>0$ and $u_i=0$, and $C_\Gamma$ the corresponding constant in that connected component of the discrete support. As it was emphasized above, the preservation of these steady states for particular choices of $\Pi'(\rho)$ and $H(x,\rho)$, such as shallow water \cite{audusse2004fast} or chemotaxis \cite{filbet2005approximation}, is paramount. A solution to allow this preservation was proposed in the work of Audusse et al. \cite{audusse2004fast}, where instead of evaluating the numerical flux as in \eqref{eq:numflux}, they chose \begin{equation}\label{eq:numfluxnew} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right), \ \text{where}\ U_{i+\frac{1}{2}}^{\pm}=\begin{pmatrix} \rho_{i+\frac{1}{2}}^{\pm} \\ \rho_{i+\frac{1}{2}}^{\pm} u_{i+\frac{1}{2}}^{\pm} \end{pmatrix}. \end{equation} The interface values $U_{i+\frac{1}{2}}^{\pm}$ are reconstructed from $U_i$ and $U_{i+1}$ by taking into account the steady state relation in \eqref{eq:steadyvarenerdiscrete}. Contrary to other works in which the interface values are reconstructed to increase the order of accuracy, now the objective is to satisfy the well-balanced property. 
Bearing this in mind, we apply \eqref{eq:steadyvarenerdiscrete} to the cells centred at the nodes $x_i$ and $x_{i+1}$ to define the interface values such that \begin{equation*} \begin{gathered} \Pi'\left(\rho_{i+\frac{1}{2}}^{-}\right)+H_{i+\frac{1}{2}}=\Pi'\left(\rho_{i}\right)+H_{i},\\ \Pi'\left(\rho_{i+\frac{1}{2}}^{+}\right)+H_{i+\frac{1}{2}}=\Pi'\left(\rho_{i+1}\right)+H_{i+1}, \end{gathered} \end{equation*} where the term $H_{i+\frac{1}{2}}$ is evaluated to preserve consistency and stability, with an upwind or average value obtained as \begin{equation}\label{eq:hinterface} H_{i+\frac{1}{2}}=\max\left(H_{i},H_{i+1}\right)\quad \textrm{or} \quad H_{i+\frac{1}{2}}=\frac{1}{2}\left(H_{i}+H_{i+1}\right). \end{equation} Then, by denoting as $\xi(s)$ the inverse function of $\Pi'(s)$ for $s>0$, we conclude that the interface values $U_{i+\frac{1}{2}}^{\pm}$ are computed as \begin{equation}\label{eq:rhointerface} \begin{gathered} \rho_{i+\frac{1}{2}}^{-}=\xi \left(\Pi'\left(\rho_{i}\right)+H_{i}-H_{i+\frac{1}{2}}\right)_+,\quad u_{i+\frac{1}{2}}^{-}=u_i,\\ \rho_{i+\frac{1}{2}}^{+}=\xi \left(\Pi'\left(\rho_{i+1}\right)+H_{i+1}-H_{i+\frac{1}{2}}\right)_+,\quad u_{i+\frac{1}{2}}^{+}=u_{i+1}. \end{gathered} \end{equation} The function $\xi(s)$ is well-defined for $s>0$ since $\Pi(s)$ is strictly convex, $\Pi''(s)>0$. This is always the case since, as mentioned in the introduction, the physical entropies are always strictly convex from \eqref{eq:entropy}. However, some physical entropies and applications allow for vacuum in the steady states; we therefore need to impose that the values $\rho_{i+\frac{1}{2}}^{\pm}$ be nonnegative, which is the role of the truncation $(\cdot)_+$ in \eqref{eq:rhointerface}. \red{Henceforth, $\xi(s)$ denotes} the extension by zero of the inverse of $\Pi'(s)$ whenever $s>0$.
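For illustration, the reconstruction \eqref{eq:hinterface}--\eqref{eq:rhointerface} takes only a few lines once $\xi$ is available. The sketch below assumes the entropy $\Pi(\rho)=\rho\left(\log\rho-1\right)$, for which $\Pi'(\rho)=\log\rho$ and $\xi(s)=e^{s}$; both the entropy and the function names are illustrative choices:

```python
import numpy as np

def Pi_prime(rho):
    # illustrative entropy: Pi(rho) = rho*(log(rho) - 1), so Pi'(rho) = log(rho)
    return np.log(rho)

def xi(s):
    # inverse of Pi'; for this choice xi(s) = exp(s) > 0, so the (.)_+
    # truncation of the reconstruction never activates here
    return np.exp(s)

def interface_densities(rho_i, rho_ip1, H_i, H_ip1):
    """Well-balanced interface values rho^-_{i+1/2}, rho^+_{i+1/2},
    with the upwind choice H_{i+1/2} = max(H_i, H_{i+1})."""
    H_half = max(H_i, H_ip1)
    rho_minus = xi(Pi_prime(rho_i) + H_i - H_half)
    rho_plus = xi(Pi_prime(rho_ip1) + H_ip1 - H_half)
    return rho_minus, rho_plus
```

At a discrete steady state, where $\Pi'(\rho_i)+H_i$ is constant across cells, the two reconstructed interface densities coincide, which is the mechanism behind the well-balanced property.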
Furthermore, the discretization of the source term is taken as \begin{equation}\label{eq:sourcewellbalanced} S_i=\frac{1}{\Delta x_i}\begin{pmatrix} 0 \\ P\left(\rho_{i+\frac{1}{2}}^{-}\right) - P\left(\rho_{i-\frac{1}{2}}^{+}\right) \end{pmatrix}-\begin{pmatrix} 0 \\ \gamma \rho_i u_i + \rho_i \displaystyle\sum_{j} \Delta x_j (u_i-u_j)\rho_j \psi_{ij} \end{pmatrix}, \end{equation} which is motivated by the fact that in the steady state, with $u=0$ in \eqref{eq:compactsys}, the fluxes are balanced with the sources, \begin{equation*} \rho \partial_x \Pi'(\rho)=-\rho \partial_x H. \end{equation*} Here, $\psi_{ij}$ is an approximation of the average value of $\psi$ on the interval centred at $x_i-x_j$ of length $\Delta x_j$. From here, integrating over the cell volume, it follows that \begin{equation}\label{eq:integratesource} \int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}-\rho \partial_x H \,dx=\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\rho \partial_x \Pi'(\rho)\,dx=\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\partial_x P(\rho)\,dx=P(\rho_{i+\frac{1}{2}}^{-})-P(\rho_{i-\frac{1}{2}}^{+}), \end{equation} where the relation between $\Pi'(\rho)$ and $P(\rho)$ is the one given in \eqref{eq:PandPi}. This idea of distributing the source terms along the interfaces has already been explored in previous works \cite{katsaounis2004upwinding}. The discretization of the source term in \eqref{eq:sourcewellbalanced} entails that the discrete balance between fluxes and sources is accomplished when the momentum component of $F_{i+\frac{1}{2}}$ equals $P(\rho_{i+\frac{1}{2}}^{-})=P(\rho_{i+\frac{1}{2}}^{+})$. The computation of the numerical fluxes expressed in \eqref{eq:numfluxnew}, in which the interface values $U_{i+\frac{1}{2}}^{\pm}$ are considered, enables this balance if in the steady states $U_{i+\frac{1}{2}}^{-}=U_{i+\frac{1}{2}}^{+}=(\rho_{i+\frac{1}{2}}^{-},0)=(\rho_{i+\frac{1}{2}}^{+},0)$.
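A sketch of the corresponding momentum component of $S_i$, under the illustrative isothermal assumption $\Pi'(\rho)=\log\rho$, for which \eqref{eq:PandPi} gives $P(\rho)=\rho$ (the setup, on a uniform mesh, and all names are ours):

```python
import numpy as np

def P(rho):
    # illustrative isothermal pressure: with Pi'(rho) = log(rho), the
    # relation P'(rho) = rho * Pi''(rho) gives P(rho) = rho
    return rho

def momentum_source(rho, u, x, rho_m, rho_p, dx, gamma, psi):
    """Momentum component of the well-balanced source on a uniform mesh.
    rho_m[i] and rho_p[i] are the reconstructed interface densities
    rho^-_{i+1/2} and rho^+_{i-1/2} attached to cell i."""
    n = len(rho)
    S = (P(rho_m) - P(rho_p)) / dx          # well-balanced pressure part
    S = S - gamma * rho * u                 # linear damping
    for i in range(n):                      # Cucker-Smale nonlinear damping
        S[i] -= rho[i] * np.sum(dx * (u[i] - u) * rho * psi(x[i] - x))
    return S
```

When the velocity vanishes and the two interface densities attached to a cell coincide, all three contributions cancel, consistently with the flat steady state.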
Moreover, the discretization of the source term as in \eqref{eq:sourcewellbalanced} may seem counter-intuitive when the system is far away from the steady state, given that the balance expressed in \eqref{eq:integratesource} only holds in those states. In spite of this, the consistency with the original system in \eqref{eq:compactsys} is not lost, as will be proved in subsection \ref{subsec:firstorderprop}. Let us finally discuss the discretization of the potential $H(x,\rho)=V(x)+W\ast\rho (x)$. We will always approximate it as $$ H_i=V_i+\sum_j \Delta x_j W_{ij} \rho_j \,, \mbox{ for all } i\in\mathbb{Z}\,, $$ where $V_i=V(x_i)$ and $W_{ij}=W(x_i-x_j)$ in case the potential is smooth or choosing $W_{ij}$ as an average value of $W$ on the interval centred at $x_i-x_j$ of length $\Delta x_j$ in case of general locally integrable potentials $W$. Let us also point out that this discretization keeps the symmetry of the discretized interaction potential $W_{ij}=W_{ji}$ for all $i,j\in\mathbb{Z}$ whenever $W$ is smooth or solved with equal size meshes $\Delta x_i=\Delta x_j$ for all $i,j\in\mathbb{Z}$.
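On a uniform mesh, the quadrature above is a plain matrix-vector product; a minimal sketch with illustrative quadratic potentials $V$ and $W$ (names are ours):

```python
import numpy as np

def discrete_potential(x, rho, dx, V, W):
    """H_i = V(x_i) + sum_j dx * W(x_i - x_j) * rho_j on a uniform mesh,
    for smooth V and W evaluated pointwise at the nodes."""
    Wmat = W(x[:, None] - x[None, :])   # W_ij = W(x_i - x_j)
    return V(x) + dx * (Wmat @ rho)
```

Since $W$ is even, the matrix of weights is symmetric, which is the discrete counterpart of the symmetry $W_{ij}=W_{ji}$ used throughout.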
\subsection{Properties of the first-order scheme}\label{subsec:firstorderprop} The first-order semidiscrete scheme defined in \eqref{eq:fvbasic}, constructed with \eqref{eq:numfluxnew}-\eqref{eq:sourcewellbalanced} and with a numerical flux $\mathscr{F}\left(U_{i},U_{i+1}\right)=\left(\mathscr{F}^{\rho},\mathscr{F}^{\rho u}\right)\left(U_{i},U_{i+1}\right)$ satisfying the properties stated in the introduction of section \ref{sec:numsch}, enjoys the following properties: \begin{enumerate}[label=(\roman*)] \item preservation of the nonnegativity of $\rho_i(t)$; \item well-balanced property, thus preserving the steady states given by \eqref{eq:steadyvarenerdiscrete}; \item consistency with the system \eqref{eq:generalsys2}; \item cell entropy inequality associated to the entropy pair \eqref{eq:entropy}, \begin{equation}\label{eq:cellentropyineq2} \Delta x_i \frac{d\eta_i}{dt}+\Delta x_i H_i \frac{d\rho_i}{dt}+G_{i+\frac{1}{2}}-G_{i-\frac{1}{2}}\leq-u_i\left(\gamma\Delta x_i \rho_i u_i+\Delta x_i \rho_i \sum_j \Delta x_j \rho_j \left(u_i-u_j\right)\psi_{ij}\right)\,, \end{equation} where $\eta_i=\Pi\left(\rho_i\right)+\frac{1}{2}\rho_i u_i^2$ and $$ G_{i+\frac{1}{2}}=\mathscr{G}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)+\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)H_{i+\frac{1}{2}}. $$ \item the discrete analog of the free energy dissipation property \eqref{eq:equalenergy} given by \begin{equation}\label{eq:disfreeenergydiscrete} \frac{d}{dt} E^\Delta(t)\leq -\gamma\sum_i \Delta x_i \rho_i u_i^2-\frac12 \sum_{i,j} \Delta x_i \Delta x_j \rho_i \rho_j \left(u_i-u_j\right)^2\psi_{ij} \end{equation} with \begin{equation}\label{eq:freeenergydiscrete} E^\Delta= \sum_i \frac{\Delta x_i}{2}\rho_i u_i^2 + \mathcal{F}^\Delta \quad \mbox{ and } \quad \mathcal{F}^\Delta = \sum_i \Delta x_i \left[\Pi\left(\rho_i\right)+ V_i\rho_i \right]+\frac12 \sum_{i,j} \Delta x_i \Delta x_j W_{ij} \rho_i \rho_j .
\end{equation} \item the discrete analog of the evolution for centre of mass in \eqref{eq:moment}, \begin{equation}\label{eq:proofevolcentremass} \frac{d}{dt}\left(\sum_i \Delta x_i \rho_i x_i \right)=\sum_i \Delta x_i \mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right) , \end{equation} which is reduced to \begin{equation}\label{eq:proofevolcentremasssimp} \sum_i \Delta x_i \rho_i x_i =0\quad \end{equation} when the initial density is symmetric and the initial velocity antisymmetric. This implies that the discrete centre of mass is conserved in time and centred at $0$. \end{enumerate} \begin{proof} Some of the following proofs follow the lines considered in \cite{audusse2004fast,filbet2005approximation}. \begin{enumerate}[label=(\roman*)] \item \red{If a first-order numerical flux $\mathscr{F}\left(U_{i},U_{i+1}\right)=\left(\mathscr{F}^{\rho},\mathscr{F}^{\rho u}\right)\left(U_{i},U_{i+1}\right)$ for the homogeneous problem, such as the Lax-Friedrich scheme detailed in Appendix \ref{app:numerics}, satisfies the nonnegativity of the density $\rho_i(t)$, then it necessarily follows that \begin{equation}\label{eq:proofnegat1} \mathscr{F}^{\rho}((\rho_i=0,u_i),(\rho_{i+1},u_{i+1}))-\mathscr{F}^{\rho}((\rho_{i-1},u_{i-1}),(\rho_i=0,u_i))\leq 0\quad \forall (\rho_i,u_i)_i. \end{equation} In our case, the sources do not contribute to the continuity equation in \eqref{eq:compactsys}, and for the numerical flux in \eqref{eq:numfluxnew} we need to check that \begin{equation}\label{eq:proofnegat2} \mathscr{F}^{\rho}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right)-\mathscr{F}^{\rho}\left(U_{i-\frac{1}{2}}^-,U_{i-\frac{1}{2}}^+\right)\leq 0 \end{equation} whenever $\rho_i=0$. 
When $\rho_i=0$, the reconstruction in \eqref{eq:hinterface} and \eqref{eq:rhointerface} yields $\rho_{i+\frac{1}{2}}^-=\rho_{i-\frac{1}{2}}^+=0$ since $\Pi(\rho)$ is assumed to be convex, and \eqref{eq:proofnegat2} results in \begin{equation}\label{eq:proofnegat3} \mathscr{F}^{\rho}((0,u_i),(\rho_{i+\frac{1}{2} }^+,u_{i+1}))-\mathscr{F}^{\rho}((\rho_{i-\frac{1}{2} }^-,u_{i-1}),(\rho_{i-\frac{1}{2}}^+=0,u_i))\leq 0\quad \forall (\rho_{i+\frac{1}{2}}^+,\rho_{i-\frac{1}{2}}^-,u_i)_i. \end{equation} Then, given that the numerical scheme is chosen so that it preserves the nonnegativity of the density for the homogeneous problem and \eqref{eq:proofnegat1} holds, it follows that \eqref{eq:proofnegat3} is satisfied too.} \item To preserve the steady state the discrete fluxes and source need to be balanced, \begin{equation}\label{eq:proof2step1} F_{i+\frac{1}{2}}-F_{i-\frac{1}{2}}=\Delta x_i S_i. \end{equation} When the steady state holds it follows from \eqref{eq:rhointerface} that $\rho_{i+\frac{1}{2}}^-=\rho_{i+\frac{1}{2}}^+$ and $u_{i+\frac{1}{2}}^-=u_{i+\frac{1}{2}}^+=0$, and as a result $U_{i+\frac{1}{2}}^-=U_{i+\frac{1}{2}}^+$. Then, by consistency of the numerical flux $\mathscr{F}$, \begin{equation}\label{eq:proof2step2} F_{i+\frac{1}{2}}=\mathscr{F}\left((\rho_{i+\frac{1}{2}}^-,0),(\rho_{i+\frac{1}{2}}^+,0)\right)=F(U_{i+\frac{1}{2}}^-)=F(U_{i+\frac{1}{2}}^+)=\begin{pmatrix} 0 \\ P(\rho_{i+\frac{1}{2}}^-)\end{pmatrix} =\begin{pmatrix} 0 \\ P(\rho_{i+\frac{1}{2}}^+)\end{pmatrix} . \end{equation} Concerning the source term $S_i$ of \eqref{eq:sourcewellbalanced}, in the steady state it is equal to \begin{equation}\label{eq:proof2step3} \Delta x_i S_i=\begin{pmatrix} 0 \\ P\left(\rho_{i+\frac{1}{2}}^{-}\right) - P\left(\rho_{i-\frac{1}{2}}^{+}\right) \end{pmatrix}. \end{equation} Then the balance in \eqref{eq:proof2step1} is obtained from \eqref{eq:proof2step2} and \eqref{eq:proof2step3}.
\item For the consistency with the original system of \eqref{eq:generalsys2} one has to apply the criterion in \cite{bouchut2004nonlinear}, by which two properties concerning the consistency with the exact flux $F$ and the consistency with the source term need to be checked. Before proceeding, the finite volume discretization in \eqref{eq:fvbasic} needs to be rewritten in a non-conservative form as \begin{equation} \begin{gathered} \dfrac{d U_i}{dt}=-\dfrac{\mathscr{F}_{l}(U_i,U_{i+1},H_i,H_{i+1})-\mathscr{F}_{r}(U_{i-1},U_{i},H_{i-1},H_{i})}{\Delta x_i} \\-\begin{pmatrix} 0 \\ \gamma \rho_i u_i + \rho_i \sum_{j} \Delta x_j (u_i-u_j)\rho_j \psi_{ij} \end{pmatrix} \end{gathered} \end{equation} where \red{\begin{equation*} \begin{gathered} \mathscr{F}_{l}(U_i,U_{i+1},H_i,H_{i+1})=F_{i+\frac{1}{2}}-\Delta x_i S_{i+\frac{1}{2}}^-,\\ \mathscr{F}_{r}(U_{i-1},U_{i},H_{i-1},H_{i})=F_{i-\frac{1}{2}}+\Delta x_i S_{i-\frac{1}{2}}^+. \end{gathered} \end{equation*}} Here the source term $S_i$ is considered as being distributed along the cell interfaces, satisfying \red{\begin{equation*} \begin{gathered} S_i=S_{i+\frac{1}{2}}^-+S_{i-\frac{1}{2}}^+-\begin{pmatrix} 0 \\ \gamma \rho_i u_i + \rho_i \sum_{j} \Delta x_j (u_i-u_j)\rho_j \psi_{ij} \end{pmatrix},\\ S_{i+\frac{1}{2}}^-=\frac{1}{\Delta x_i}\begin{pmatrix} 0\\ P(\rho_{i+\frac{1}{2}}^-) - P(\rho_i) \end{pmatrix}\quad\text{and}\quad S_{i-\frac{1}{2}}^+=\frac{1}{\Delta x_i}\begin{pmatrix} 0\\ P(\rho_i)-P(\rho_{i-\frac{1}{2}}^+) \end{pmatrix}. \end{gathered} \end{equation*}} The consistency with the exact flux means that $ \mathscr{F}_{l}(U,U,H,H)=\mathscr{F}_{r}(U,U,H,H)=F(U)$. This is directly satisfied since $U_{i+\frac{1}{2}}^-=U_i$ and $U_{i+\frac{1}{2}}^+=U_{i+1}$ whenever $H_{i+1}=H_{i}$, due to \eqref{eq:rhointerface}.
For the consistency with the source term the criterion to check is \begin{equation*} \mathscr{F}_{r}(U_i,U_{i+1},H_i,H_{i+1})-\mathscr{F}_{l}(U_i,U_{i+1},H_i,H_{i+1})=\begin{pmatrix}0\\ -\rho (H_{i+1}-H_i)+o(H_{i+1}-H_i)\end{pmatrix} \end{equation*} as $U_i$, $U_{i+1}\rightarrow U$ and $H_i$, $H_{i+1}\rightarrow H$. \red{For this case, \begin{equation} \begin{gathered} \mathscr{F}_{r}(U_i,U_{i+1},H_i,H_{i+1})-\mathscr{F}_{l}(U_i,U_{i+1},H_i,H_{i+1})=\Delta x_{i+1}S_{i+\frac{1}{2}}^++\Delta x_i S_{i+\frac{1}{2}}^- =\\ \begin{pmatrix}0\\ -\left(P\left(\xi(\Pi'(\rho_{i+1})+H_{i+1}-H_{i+\frac{1}{2}})\right)-P(\rho_{i+1})\right)+\left(P\left(\xi(\Pi'(\rho_i)+H_i-H_{i+\frac{1}{2}})\right)-P(\rho_i)\right)\end{pmatrix}, \end{gathered} \end{equation}} where $H_{i+\frac{1}{2}}=\max\left(H_{i},H_{i+1}\right)$. By assuming without loss of generality that $H_{i+\frac{1}{2}}=H_{i}$, the second component of the last matrix results in \red{\begin{equation*} -\left(P(\xi(\Pi'(\rho_{i+1})+H_{i+1}-H_{i}))-P(\rho_{i+1})\right)+\left(P(\xi(\Pi'(\rho_{i})))-P(\rho_{i})\right)=-P(\xi(\Pi'(\rho_{i+1})+H_{i+1}-H_{i}))+P(\rho_{i+1})\,, \end{equation*}} since $\xi(\Pi'(\rho_{i}))=\rho_i$. This term can be further approximated as \red{\begin{equation*} -(P\circ\xi)'(\Pi'(\rho_{i+1}))\,(H_{i+1}-H_{i})+o(H_{i+1}-H_{i})=-\rho_{i+1}(H_{i+1}-H_{i})+o(H_{i+1}-H_{i}) \end{equation*}} since \begin{equation*} (P\circ\xi)'(\Pi'(\rho_{i+1}))=P'(\rho_{i+1})\frac{1}{\Pi''(\rho_{i+1})}=\rho_{i+1} \end{equation*} by taking derivatives in $(\xi \circ \Pi')(\rho)=\rho$ and making use of \eqref{eq:PandPi}. Finally, since $\rho_{i+1}\rightarrow\rho$, the consistency with the source term is satisfied. An analogous procedure can be followed whenever $H_{i+\frac{1}{2}}=H_{i+1}$. \item To prove \eqref{eq:cellentropyineq2} we follow the strategy from \cite{filbet2005approximation}. We first set $G_{i+\frac{1}{2}}$ to be \begin{equation*} G_{i+\frac{1}{2}}=\mathscr{G}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)+\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)H_{i+\frac{1}{2}}.
\end{equation*} Subsequently, and employing the inequalities for $\mathscr{G}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)$ in \eqref{eq:cellentropyineq1}, it follows that \begin{equation*} \begin{split} G_{i+\frac{1}{2}}-G_{i-\frac{1}{2}} \leq &\ G\left(U^-_{i+\frac{1}{2}}\right)+\nabla_U \eta \left(U^-_{i+\frac{1}{2}}\right) \left(\mathscr{F}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)-F\left(U^-_{i+\frac{1}{2}}\right)\right)\\ & -G\left(U^+_{i-\frac{1}{2}}\right)-\nabla_U \eta \left(U^+_{i-\frac{1}{2}}\right) \left(\mathscr{F}\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)-F\left(U^+_{i-\frac{1}{2}}\right)\right)\\ & +\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)H_{i+\frac{1}{2}}-\mathscr{F}^\rho\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)H_{i-\frac{1}{2}}. \end{split} \end{equation*} This last inequality can be rewritten after some long computations as \begin{equation*} \begin{split} G_{i+\frac{1}{2}}-G_{i-\frac{1}{2}} \leq& \left(\Pi'\left(\rho^-_{i+\frac{1}{2}}\right)-\frac{1}{2}u_i^2+H_{i+\frac{1}{2}}\right)\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)\\ & -\left(\Pi'\left(\rho^+_{i-\frac{1}{2}}\right)-\frac{1}{2}u_i^2+H_{i-\frac{1}{2}}\right)\mathscr{F}^\rho\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)\\ & +u_i\left(\mathscr{F}^{\rho u}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)-\mathscr{F}^{\rho u}\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)+P\left(\rho^+_{i-\frac{1}{2}}\right)-P\left(\rho^-_{i+\frac{1}{2}}\right)\right).
\end{split} \end{equation*} From here, by bearing in mind the definition of $\rho^-_{i+\frac{1}{2}}$ and $\rho^+_{i-\frac{1}{2}}$ in \eqref{eq:rhointerface} and the definition of the scheme in \eqref{eq:fvbasic}-\eqref{eq:numfluxnew}-\eqref{eq:sourcewellbalanced}, we get \begin{equation*} \begin{split} G_{i+\frac{1}{2}}-G_{i-\frac{1}{2}} \leq &\ \left(\Pi'\left(\rho_i\right)-\frac{1}{2}u_i^2+H_{i}\right)\left(\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)-\mathscr{F}^\rho\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)\right)\\ & +u_i\left(\mathscr{F}^{\rho u}\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)-\mathscr{F}^{\rho u}\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)+P\left(\rho^+_{i-\frac{1}{2}}\right)-P\left(\rho^-_{i+\frac{1}{2}}\right)\right)\\ = &\ -\left(\Pi'\left(\rho_i\right)-\frac{1}{2}u_i^2+H_{i}\right) \Delta x_i \frac{d\rho_i}{dt}-\Delta x_i u_i \frac{d}{dt}(\rho_i u_i)\\ &\, -u_i\left(\gamma \Delta x_i \rho_i u_i+\Delta x_i \rho_i \sum_j \Delta x_j \rho_j \left(u_i-u_j\right)\psi_{ij}\right).\\ \end{split} \end{equation*} Finally, this last inequality results in the desired cell entropy inequality \eqref{eq:cellentropyineq2} by rearranging according to \eqref{eq:compactsys}, yielding \begin{equation}\label{aux} \Delta x_i \frac{d\eta_i}{dt}+\Delta x_i H_i \frac{d\rho_i}{dt}+G_{i+\frac{1}{2}}-G_{i-\frac{1}{2}}\leq-u_i\left(\gamma \Delta x_i \rho_i u_i+\Delta x_i \rho_i \sum_j \Delta x_j \rho_j \left(u_i-u_j\right)\psi_{ij}\right). \end{equation} \item The last property of the scheme and formulas \eqref{eq:disfreeenergydiscrete}-\eqref{eq:freeenergydiscrete} follow by summing the inequality \eqref{aux} over the index $i$, collecting terms and symmetrizing the dissipation using the symmetry of $\psi$.
\item Starting from the finite volume equation for the density in \eqref{eq:compactsys}, \begin{equation*} \Delta x_i \frac{d\rho_i}{dt} =-\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)+\mathscr{F}^\rho\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right), \end{equation*} one can multiply it by $x_i$ and sum it over the index $i$, resulting in \begin{equation*} \frac{d}{dt}\left(\sum_i \Delta x_i \rho_i x_i \right)=\sum_i x_i \left(-\mathscr{F}^\rho\left(U^-_{i+\frac{1}{2}}, U^+_{i+\frac{1}{2}}\right)+\mathscr{F}^\rho\left(U^-_{i-\frac{1}{2}}, U^+_{i-\frac{1}{2}}\right)\right). \end{equation*} By rearranging and considering, for instance, periodic or no flux boundary conditions, we get~\eqref{eq:proofevolcentremass}. On the other hand, the finite volume equation for the momentum in \eqref{eq:compactsys}, after summing over the index $i$, becomes \begin{equation}\label{eq:proofmomentum} \begin{gathered} \frac{d}{dt}\left(\sum_i \Delta x_i \rho_i u_i \right)=\sum_i \left( P \left(\rho^-_{i+\frac{1}{2}}\right)-P \left(\rho^+_{i-\frac{1}{2}}\right)\right)-\gamma \sum_i \Delta x_i \rho_i u_i\\ -\sum_{i,j} \Delta x_i \Delta x_j \rho_i \rho_j (u_i-u_j) \psi_{ij} , \end{gathered} \end{equation} since the numerical fluxes cancel out due to the sum over the index $i$. In addition, the Cucker-Smale damping term also vanishes due to the symmetry in $\psi(x)$. Finally, if the initial density is symmetric and the initial velocity antisymmetric, the sum of pressures in the RHS of \eqref{eq:proofmomentum} is $0$, due to the symmetry in the density. This implies that the discrete solution for the density and momentum maintains those symmetries, since \eqref{eq:proofmomentum} is simplified as \begin{equation*} \sum_i \Delta x_i \rho_i u_i =0 \end{equation*} and as a result \eqref{eq:proofevolcentremass} reduces to \eqref{eq:proofevolcentremasssimp}. 
This means that the discrete centre of mass is conserved in time and is centred at $0$ for initial symmetric densities and initial antisymmetric velocities. \end{enumerate} \end{proof} \begin{remark} As a consequence of the previous proofs, our scheme conserves all the structural properties of the hydrodynamic system \eqref{eq:generalsys2} at the semidiscrete level, including the dissipation of the discrete free energy \eqref{eq:equalenergy} and the characterization of the steady states. These properties are analogous to those obtained for finite volume schemes in the overdamped limit \cite{carrillo2015finite,sun2018discontinuous}. \end{remark} \begin{remark} All the previous properties, which are applicable for free energies of the form \eqref{eq:freeenergy}, can be extended to the general free energies in \eqref{eq:freeenergygeneral}. Indeed, it can be shown that the discrete analog of the free energy dissipation in \eqref{eq:disfreeenergydiscrete} still holds for a discrete total energy defined as in \eqref{eq:freeenergydiscrete} and a discrete free energy of the form \begin{equation} \mathcal{F}^\Delta = \sum_i \Delta x_i \left[\Pi\left(\rho_i\right)+ V_i\rho_i \right]+\frac12 \sum_{i} \Delta x_i \rho_i K_i, \end{equation} where $K_i$ is a discrete approximation of $K(W(x)\star\rho)$ at the node $x_i$ and is evaluated as \begin{equation} K_i=K\left(\sum_{j}\Delta x_j W_{ij} \rho_j\right). \end{equation} \end{remark} \subsection{Second-order extension}\label{subsec:secondorder} The usual procedure to extend a first-order scheme to second order is to compute the numerical fluxes \eqref{eq:numflux} from limited reconstructed values of the density and momentum at each side of the boundary, contrary to the cell-centred values taken for the first-order scheme \eqref{eq:numflux1}. These values are classically computed in three steps: prediction of the gradients in each cell, linear extrapolation, and a limiting procedure to preserve nonnegativity. 
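These three steps can be sketched in a few lines. The following Python fragment is an illustrative minimal version, assuming a uniform mesh, the classical minmod limiter and a simple edge padding at the boundaries; the function names are ours, not those of the actual implementation:

```python
import numpy as np

def minmod(a, b):
    # classical minmod limiter: the smaller slope when signs agree, zero at extrema
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_reconstruct(w, dx):
    """Left/right boundary values w_{i,l}, w_{i,r} of each cell on a uniform mesh.

    Steps: (1) predict limited gradients in each cell, (2) extrapolate
    linearly to the two cell faces; the limiter keeps reconstructed values
    between neighbouring cell averages, which preserves nonnegativity.
    """
    wpad = np.pad(w, 1, mode="edge")   # crude boundary treatment for the sketch
    slope = minmod(wpad[1:-1] - wpad[:-2], wpad[2:] - wpad[1:-1]) / dx
    w_l = w - 0.5 * dx * slope         # value at the left face of cell i
    w_r = w + 0.5 * dx * slope         # value at the right face of cell i
    return w_l, w_r
```

For smooth data the reconstruction is exact on linear profiles in the interior cells, while near extrema the limiter falls back to the first-order (piecewise constant) reconstruction, which is what preserves nonnegativity.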
For instance, MUSCL \cite{osher1985convergence} is a usual reconstruction procedure following these steps. From here the values $\rho_{i,l}$, $\rho_{i,r}$, $u_{i,l}$ and $u_{i,r}$ are obtained for all $i$, where the subscript $l$ refers to the left of the boundary and $r$ to the right. Then the inputs for the numerical flux in \eqref{eq:numflux}, for a usual second-order scheme, are \begin{equation*} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i,r},U_{i+1,l}\right). \end{equation*} This procedure has already been adapted to satisfy the well-balanced property and maintain the second order for specific applications, such as shallow water \cite{audusse2004fast} or chemotaxis \cite{filbet2005approximation}. In this subsection the objective is to extend the procedure to general free energies of the form \eqref{eq:freeenergy}. As in the well-balanced first-order scheme, the boundary values introduced in the numerical flux, which in this case are $U_{i,r}$ and $U_{i+1,l}$, need to be adapted to satisfy the well-balanced property. For the well-balanced scheme the first step is to reconstruct the boundary values $\rho_{i,l}$, $\rho_{i,r}$, $u_{i,l}$ and $u_{i,r}$ following the three steps mentioned above. In addition, the reconstructed values of the potential $H(x,\rho)$ at the boundaries, $H_{i,l}$ and $H_{i,r}$ for all $i$, also have to be computed. This is done as suggested in \cite{audusse2004fast}. Instead of directly reconstructing $H_{i,l}$ and $H_{i,r}$ following the three mentioned steps, for certain applications one first has to reconstruct $\left(\Pi'(\rho)+H(x,\rho)\right)_i$ to obtain $\left(\Pi'(\rho)+H(x,\rho)\right)_{i,l}$ and $\left(\Pi'(\rho)+H(x,\rho)\right)_{i,r}$, and subsequently compute $H_{i,l}$ and $H_{i,r}$ as \red{\begin{equation*} \begin{gathered} H_{i,l}=\left(\Pi'(\rho)+H(x,\rho)\right)_{i,l}-\Pi'\left(\rho_{i,l}\right),\\ H_{i,r}=\left(\Pi'(\rho)+H(x,\rho)\right)_{i,r}-\Pi'\left(\rho_{i,r}\right). 
\end{gathered} \end{equation*}} This is shown in \cite{audusse2004fast} to be necessary in order to maintain nonnegativity and the steady state in applications where there is an interface between dry and wet cells. For instance, these interfaces appear when considering pressures of the form $P=\rho^m$ with $m>1$, as shown in examples \ref{ex:squarepot} and \ref{ex:squarepotdoublewell} of section \ref{sec:numtest}. For other applications where vacuum regions do not occur, the values $H_{i,l}$ and $H_{i,r}$ can be directly reconstructed following the three mentioned steps. After this first step, the inputs for the numerical flux are updated from \eqref{eq:numflux} to satisfy the well-balanced property as \begin{equation*} F_{i+\frac{1}{2}}=\mathscr{F}\left(U_{i+\frac{1}{2}}^-,U_{i+\frac{1}{2}}^+\right), \ \text{where}\quad U_{i+\frac{1}{2}}^{-}=\begin{pmatrix} \rho_{i+\frac{1}{2}}^{-} \\ \rho_{i+\frac{1}{2}}^{-} u_{i,r} \end{pmatrix},\quad U_{i+\frac{1}{2}}^{+}=\begin{pmatrix} \rho_{i+\frac{1}{2}}^{+} \\ \rho_{i+\frac{1}{2}}^{+} u_{i+1,l} \end{pmatrix}. \end{equation*} The interface values $\rho_{i+\frac{1}{2}}^{\pm}$ are reconstructed as in the first-order scheme, by taking into account the steady state relation in \eqref{eq:steadyvarenerdiscrete}. The application of \eqref{eq:steadyvarenerdiscrete} to the cells with centred nodes $x_i$ and $x_{i+1}$ leads to \begin{equation*} \begin{gathered} \Pi'\left(\rho_{i+\frac{1}{2}}^{-}\right)+H_{i+\frac{1}{2}}=\Pi'\left(\rho_{i,r}\right)+H_{i,r},\\ \Pi'\left(\rho_{i+\frac{1}{2}}^{+}\right)+H_{i+\frac{1}{2}}=\Pi'\left(\rho_{i+1,l}\right)+H_{i+1,l}, \end{gathered} \end{equation*} where the term $H_{i+\frac{1}{2}}$ is evaluated, in order to preserve consistency and stability, with an upwind or average value obtained as \begin{equation*} H_{i+\frac{1}{2}}=\max\left(H_{i,r},H_{i+1,l}\right)\quad \textrm{or} \quad H_{i+\frac{1}{2}}=\frac{1}{2}\left(H_{i,r}+H_{i+1,l}\right). 
\end{equation*} Then, by denoting as $\xi(x)$ the inverse function of $\Pi'(x)$, the interface values $\rho_{i+\frac{1}{2}}^{\pm}$ are computed as \begin{equation*} \begin{gathered} \rho_{i+\frac{1}{2}}^{-}=\xi \left(\Pi'\left(\rho_{i,r}\right)+H_{i,r}-H_{i+\frac{1}{2}}\right),\\ \rho_{i+\frac{1}{2}}^{+}=\xi \left(\Pi'\left(\rho_{i+1,l}\right)+H_{i+1,l}-H_{i+\frac{1}{2}}\right). \end{gathered} \end{equation*} The source term is again distributed along the interfaces, \begin{equation*} S_i=S_{i+\frac{1}{2}}^{-}+S_{i-\frac{1}{2}}^{+}+S_i^c, \end{equation*} where \begin{equation*} S_{i+\frac{1}{2}}^{-}=\frac{1}{\Delta x_i}\begin{pmatrix} 0 \\ P\left(\rho_{i+\frac{1}{2}}^{-}\right) -P\left(\rho_{i,r}\right) \end{pmatrix}, \quad S_{i-\frac{1}{2}}^{+}=\frac{1}{\Delta x_i}\begin{pmatrix} 0 \\ P\left(\rho_{i,l}\right)- P\left(\rho_{i-\frac{1}{2}}^{+}\right) \end{pmatrix}. \end{equation*} \red{The inclusion of the central source term $S_i^c$ is vital in order to preserve the second-order accuracy and well-balanced property of the scheme. This idea was firstly introduced in \cite{katsaounis2005first}, where second order error estimates are derived under certain conditions for $S_i^c$. Further works customize this central source term $S_i^c$ for particular applications such as shallow water equations \cite{katsaounis2003second,audusse2004fast} or chemotaxis \cite{filbet2005approximation}. There is some flexibility in the choice of this term, as far as it satisfies two criteria for second-order accuracy and well-balancing. 
In the following remark we summarize the two criteria, which are described in more detail in Ref.~\cite{bouchut2004nonlinear} (specifically, (4.187) for second-order accuracy, and (4.204) for well-balancing).} \begin{remark} \red{The central source term $S_i^c$ preserves the second-order accuracy and well-balanced property of the scheme if the following two criteria are satisfied: \begin{enumerate}[label=(\roman*)] \item Second-order accuracy if \begin{equation}\label{eq:criteriaSc1} S_i^c\left(\rho_{i,l},\rho_{i,r},H_{i,l},H_{i,r}\right)=\begin{pmatrix} 0 \\ \left(-\frac{\rho_{i,l}+\rho_{i,r}}{2}+\mathcal{O}\left(\left|\rho_{i,r}-\rho_{i,l}\right|^2+\left|H_{i,r}-H_{i,l}\right|^2\right)\right)\left(H_{i,r}-H_{i,l}\right) \end{pmatrix} \end{equation} as $\rho_{i,r}-\rho_{i,l}\rightarrow 0 $ and $H_{i,r}-H_{i,l}\rightarrow 0 $. \item Well-balanced property if \begin{equation}\label{eq:criteriaSc2} S_i^c\left(\rho_{i,l},\rho_{i,r},H_{i,l},H_{i,r}\right)=F\left(\rho_{i,r},H_{i,r}\right)-F\left(\rho_{i,l},H_{i,l}\right), \end{equation} meaning that the steady states are left invariant. \end{enumerate}} \end{remark} The objective here is to provide a general form of $S_i^c$ which applies to general free energies of the form \eqref{eq:freeenergy}. 
Following the strategy in \cite{bouchut2004nonlinear}, we propose to approximate the generalized centred sources as \begin{equation*} S_i^c=\frac{1}{\Delta x_i}\begin{pmatrix} 0 \\ P(\rho_{i,r})-P(\rho_{i,r}^*)-P(\rho_{i,l})+P(\rho_{i,l}^*) \end{pmatrix}-\begin{pmatrix} 0 \\ \gamma \rho_i u_i + \rho_i \sum_{j} (u_i-u_j)\rho_j \psi(x_i-x_j) \end{pmatrix}, \end{equation*} where the values $\rho_{i,l}^*$ and $\rho_{i,r}^*$ are computed from the steady state relation \eqref{eq:steadyvarenerdiscrete} as \begin{equation*} \begin{gathered} \rho_{i,l}^*=\xi \left(\Pi'\left(\rho_{i,l}\right)+H_{i,l}-H_i^*\right),\\ \rho_{i,r}^*=\xi \left(\Pi'\left(\rho_{i,r}\right)+H_{i,r}-H_i^*\right), \end{gathered} \end{equation*} and $H_i^*$ is a centred approximation of the potentials satisfying \begin{equation*} H_i^*=\frac{1}{2}(H_{i,l}+H_{i,r}). \end{equation*} \red{The proposed structure of $S_i^c$ is suggested in \cite{bouchut2004nonlinear} and satisfies the two criteria for second-order accuracy \eqref{eq:criteriaSc1} and well-balanced property \eqref{eq:criteriaSc2}.} Overall, for a numerical flux $\mathscr{F}$ satisfying the properties stated in the introduction of section \ref{sec:numsch}, the second-order semidiscrete scheme defined in \eqref{eq:fvbasic} and constructed as detailed in this subsection \ref{subsec:secondorder} satisfies: \begin{enumerate}[label=(\roman*)] \item preservation of the nonnegativity of $\rho_i(t)$; \item well-balanced property, thus preserving the steady states given by \eqref{eq:steadyvarenerdiscrete}; \item consistency with the system \eqref{eq:generalsys2}; \item second-order accuracy. \end{enumerate} The proof of these properties is omitted here since it follows the same techniques from \cite{audusse2004fast,filbet2005approximation}, and the general procedure is very similar to the one from the first-order scheme in subsection \ref{subsec:firstorderprop}. 
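To make the mechanism behind the well-balanced interface reconstruction concrete, the following Python sketch computes the interface values $\rho^{\mp}_{i+1/2}$ for the ideal-gas case $P(\rho)=\rho$, where $\Pi'(\rho)=\log\rho$ and hence $\xi=\exp$; the function name and the scalar interface are illustrative, not taken from the actual implementation:

```python
import numpy as np

def interface_densities(rho_r, H_r, rho_lnext, H_lnext):
    """Well-balanced interface values rho^-_{i+1/2}, rho^+_{i+1/2}.

    Ideal-gas case P(rho) = rho: Pi'(rho) = log(rho), so xi = exp.
    The upwind choice H_{i+1/2} = max(H_{i,r}, H_{i+1,l}) never increases
    the reconstructed densities, which keeps them nonnegative.
    """
    H_half = np.maximum(H_r, H_lnext)
    rho_minus = np.exp(np.log(rho_r) + H_r - H_half)
    rho_plus = np.exp(np.log(rho_lnext) + H_lnext - H_half)
    return rho_minus, rho_plus
```

At a discrete steady state $\log\rho+H$ takes the same value on both sides of the interface, so $\rho^-_{i+1/2}=\rho^+_{i+1/2}$ and a consistent numerical flux sees no jump; this is precisely the mechanism by which the steady states of \eqref{eq:steadyvarenerdiscrete} are preserved.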
\section{Numerical tests}\label{sec:numtest} This section details numerical simulations in which the first- and second-order schemes from section \ref{sec:numsch} are employed. Firstly, subsection \ref{subsec:val} contains the validation of the first- and second-order schemes: the well-balanced property and the order of accuracy of the schemes are tested in four different configurations. Secondly, subsection \ref{subsec:numexp} illustrates the application of the numerical schemes to a variety of choices of the free energy, leading to interesting numerical experiments for which analytical results are limited in the literature. Unless otherwise stated, all simulations contain linear damping with $\gamma=1$ and have unit total mass. Only the indicated ones contain the Cucker-Smale damping term, where the communication function satisfies \begin{equation*} \psi(x)=\frac{1}{\left(1+|x|^2\right)^\frac{1}{4}}. \end{equation*} The pressure function in the simulations has the form $P(\rho)=\rho^m$, with $m\geq1$. When $m=1$ the pressure satisfies the ideal-gas relation $P(\rho)=\rho$, and the density does not develop vacuum regions during the temporal evolution. For this case the employed numerical flux is the versatile local Lax-Friedrichs flux. For the simulations where $P(\rho)=\rho^m$ and $m>1$, vacuum regions with $\rho=0$ are generated. This implies that the hyperbolicity of the system \eqref{eq:generalsys2} is lost in those regions, and the local Lax-Friedrichs scheme fails. As a result, an appropriate numerical flux has to be implemented to handle the vacuum regions. In this case a kinetic solver based on \cite{perthame2001kinetic}, and already implemented in previous works \cite{audusse2005well}, is employed. The time discretization is accomplished by means of the third-order \red{TVD} Runge-Kutta method \cite{gottlieb1998total} and the CFL number is taken as $0.7$ in all the simulations. The boundary conditions are chosen to be no-flux. 
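For reference, a minimal version of the local Lax-Friedrichs flux for the homogeneous part of the system with $P(\rho)=\rho^m$ may look as follows. This is a sketch under the usual wave-speed estimate $\lambda=\max\left(|u|+\sqrt{P'(\rho)}\right)$ over the two states; it is not the kinetic solver employed for $m>1$:

```python
import numpy as np

def llf_flux(rho_m, u_m, rho_p, u_p, m=1.0):
    """Local Lax-Friedrichs flux for U = (rho, rho*u) with P(rho) = rho^m."""
    def phys(rho, u):
        # physical flux f(U) = (rho*u, rho*u^2 + P(rho))
        return np.array([rho * u, rho * u**2 + rho**m])

    def speed(rho, u):
        # largest characteristic speed |u| + sqrt(P'(rho))
        return abs(u) + np.sqrt(m * rho ** (m - 1.0))

    lam = max(speed(rho_m, u_m), speed(rho_p, u_p))
    jump = np.array([rho_p - rho_m, rho_p * u_p - rho_m * u_m])
    return 0.5 * (phys(rho_m, u_m) + phys(rho_p, u_p)) - 0.5 * lam * jump
```

Note that the flux is consistent, $\mathscr{F}(U,U)=f(U)$, and that for $m>1$ the sound speed $\sqrt{m\rho^{m-1}}$ degenerates as $\rho\to0$, which is the practical reason a kinetic solver is needed near vacuum.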
For more details about the numerical fluxes, temporal discretization, \red{boundary conditions} and CFL number, we refer the reader to Appendix \ref{app:numerics}. Videos from all the simulations displayed in this work are available at \cite{simulations}. \subsection{Validation of the numerical scheme}\label{subsec:val} The validation of the schemes from section \ref{sec:numsch} includes a test for the well-balanced property and a test for the order of accuracy \blue{in the transient regimes}. These tests are completed in four different examples with steady states satisfying \eqref{eq:steadyvarener}, which differ in the choice of the free energy, the potentials and the inclusion of Cucker-Smale damping terms. An additional fifth example presenting moving steady states of the form \eqref{eq:steadyvarener2} is considered to show that our schemes satisfy the order of accuracy test even for these challenging steady states. The well-balanced property test evaluates whether the steady state solution is preserved in time up to machine precision. As a result, the initial condition of the simulation has to be exactly the steady state. The results of this test for the four examples of this section are presented in table \ref{table:preserve}. All the simulations are run from $t=0$ to $t=5$, \red{and the number of cells is 50}. 
\begin{table}[!htbp] \centering \caption{Preservation of the steady state for the examples \ref{ex:idealpot}, \ref{ex:idealpotCS}, \ref{ex:idealker} and \ref{ex:squarepot} with the first- and second-order schemes and double precision, at $t=5$}\label{table:preserve} \label{table:cproperty} \begin{tabular}{cccc} \hline & Order of the scheme & $L^1$ error & $L^{\infty}$ error \\ \hline \multirow{2}{*}{Example \ref{ex:idealpot}} & 1\textsuperscript{st} & 9.1012E-18 & 1.1102E-16 \\ & 2\textsuperscript{nd} & 2.3191E-17 & 2.2843E-16 \\ \hline \multirow{2}{*}{Example \ref{ex:idealpotCS}} & 1\textsuperscript{st} & 7.8666E-18 & 1.1102E-16 \\ & 2\textsuperscript{nd} & 1.4975E-17 & 1.5057E-16 \\ \hline \multirow{2}{*}{Example \ref{ex:idealker}} & 1\textsuperscript{st} & 5.5020E-17 & 6.6613E-16 \\ & 2\textsuperscript{nd} & 6.4514E-17 & 7.2164E-16 \\ \hline \multirow{2}{*}{Example \ref{ex:squarepot}} & 1\textsuperscript{st} & 1.3728E-17 & 2.2204E-16 \\ & 2\textsuperscript{nd} & 3.4478E-18 & 1.1102E-16 \\ \hline \end{tabular} \end{table} The test for the order of accuracy \blue{in the transient regimes} is based on evaluating the $L^1$ error of a numerical solution for a particular choice of $\Delta x$ with respect to a reference solution, \blue{and for a time when the steady state is not reached yet}. Subsequent $L^1$ errors are obtained after halving the $\Delta x$ of the previous numerical solution, doubling in this way the total number of cells. The order of the scheme is then computed as \begin{equation} \text{Order of the scheme}=\log_2\left(\frac{L^1\,\text{error} (\Delta x)}{L^1\,\text{error} (\Delta x/2)}\right), \end{equation} and the $\Delta x$ is halved four times. The reference solution is frequently taken as an explicit solution of the system that is being tested. In this case, the system in \eqref{eq:generalsys2} does not have an explicit solution in time for the free energies presented here, even though the steady solution can be analytically computed. 
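The computation of the observed order reduces to successive base-2 logarithms of error ratios. As a quick illustration, the short Python check below reproduces the first-order column of table \ref{table:idealpot} from its reported $L^1$ errors:

```python
import math

def observed_orders(l1_errors):
    # order between successive mesh halvings: log2(e_k / e_{k+1})
    return [math.log2(a / b) for a, b in zip(l1_errors, l1_errors[1:])]

# first-order L^1 errors for 50, 100, 200 and 400 cells
errs = [6.8797e-3, 3.4068e-3, 1.6826e-3, 8.3104e-4]
orders = [round(p, 2) for p in observed_orders(errs)]  # [1.01, 1.02, 1.02]
```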
Since we are interested in evaluating the order of accuracy away from equilibrium, the reference solution is computed from the same numerical scheme but with a very small $\Delta x$, so that the numerical solution can be considered as the exact one. In all cases here the reference solution is obtained from a mesh with 25600 cells, while the numerical solutions employ a number of cells between $50$ and $400$. The results from the accuracy tests are shown in the tables \ref{table:idealpot}, \ref{table:idealpotCS}, \ref{table:idealker}, \ref{table:squarepot} and \ref{table:movingwater}. The simulations were run with the configurations specified in each example and from $t=0$ to $t=0.3$, unless otherwise stated. \blue{The final time of $t=0.3$ is taken so that all examples are in the transient regime}. \begin{examplecase}[Ideal-gas pressure and attractive potential]\label{ex:idealpot} In this example the pressure satisfies $P(\rho)=\rho$ and there is an external potential of the form $V(x)=\frac{x^2}{2}$. As a result, the relation holding in the steady state is \begin{equation}\label{eq:constantidealpot} \frac{\delta \mathcal{F}}{\delta \rho}=\Pi'(\rho)+H=\ln(\rho)+\frac{x^2}{2}=\text{constant}\ \text{on}\ \mathrm{supp}(\rho)\ \text{and}\ u=0. \end{equation} The steady state, for an initial mass $M_0$, explicitly satisfies \begin{equation}\label{eq:steadyidealpot} \rho_{\infty}=M_0 \frac{e^{-x^2/2}}{\int_\mathbb{R} e^{-x^2/2} dx}. \end{equation} For the order of accuracy test the initial conditions are \begin{equation}\label{eq:icidealpot} \rho(x,t=0)=M_0\frac{0.2+5\,\cos\left(\frac{\pi x}{10}\right)}{\int_\mathbb{R}\left(0.2+5\,\cos\left(\frac{\pi x}{10}\right)\right)dx},\quad \rho u(x,t=0)=-0.05 \sin\left(\frac{\pi x}{10}\right),\quad x\in [-5,5], \end{equation} with $M_0$ equal to $1$ so that the total mass is unitary. 
The order of accuracy test from this example is shown in table \ref{table:idealpot}, and the evolution of the density, momentum, variation of the free energy with respect to the density, total energy and free energy are depicted in figure \ref{fig:idealpot}. From figure \ref{fig:idealpot} (D) one can see that the discrete total energy always decreases in time, due to the discrete free energy dissipation property \eqref{eq:disfreeenergydiscrete}, and that there is an exchange between free energy and kinetic energy which makes the discrete free energy plot oscillate. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-1.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-1.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-1.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-1.pdf} } \end{center} \protect\protect\caption{\label{fig:idealpot} Temporal evolution of Example \ref{ex:idealpot}.} \end{figure} \begin{table}[!htbp] \centering \caption{Accuracy test for Example \ref{ex:idealpot} with the first and second-order schemes\blue{, at $t=0.3$}} \label{table:idealpot} \begin{tabular}{c c c c c } \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of \\ cells\end{tabular}} & \multicolumn{2}{c}{ First-order} & \multicolumn{2}{c}{Second-order} \\ \cline{2-5} & $L^1$ error & order & $L^1$ error & order \\ \hline 50 & 6.8797E-03 & - & 7.6166E-04 & - \\ \hline 100 & 3.4068E-03 & 1.01 & 2.0206E-04 & 1.91 \\ \hline 200 & 1.6826E-03 & 1.02 & 5.0308E-05 & 2.01 \\ \hline 400 & 8.3104E-04 & 1.02 & 1.2879E-05 & 1.97 \\ \hline \end{tabular} \end{table} \end{examplecase} \begin{examplecase}[Ideal-gas pressure, attractive potential and Cucker-Smale damping terms]\label{ex:idealpotCS} In this example the pressure satisfies 
$P(\rho)=\rho$ and there is an external potential of the form $V(x)=\frac{x^2}{2}$. The difference with example \ref{ex:idealpot} is that the Cucker-Smale damping terms are included, and the linear damping term $-\rho u$ is excluded. The relation holding in the steady state is expressed in \eqref{eq:constantidealpot} and the steady state satisfies \eqref{eq:steadyidealpot}. The initial conditions are also \eqref{eq:icidealpot}. The order of accuracy test from this example is shown in table \ref{table:idealpotCS}, and the evolution of the density, momentum, variation of the free energy with respect to the density, total energy and free energy are depicted in figure \ref{fig:idealpotCS}. The lack of linear damping leads to higher oscillations in the momentum plots in comparison to figure \ref{fig:idealpot}. There is also an exchange of kinetic and free energy during the temporal evolution, which can be noticed from the oscillations of the discrete free energy in figure \ref{fig:idealpotCS} (D). \begin{figure}[ht!] 
\begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-2.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-2.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-2.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-2.pdf} } \end{center} \protect\protect\caption{\label{fig:idealpotCS} Temporal evolution of Example \ref{ex:idealpotCS}.} \end{figure} \begin{table}[!htbp] \centering \caption{Accuracy test for Example \ref{ex:idealpotCS} with the first and second-order schemes\blue{, at $t=0.3$}} \label{table:idealpotCS} \begin{tabular}{c c c c c } \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of \\ cells\end{tabular}} & \multicolumn{2}{c}{ First-order} & \multicolumn{2}{c}{Second-order} \\ \cline{2-5} & $L^1$ error & order & $L^1$ error & order \\ \hline 50 & 6.3195E-03 & - & 7.3045E-04 & - \\ \hline 100 & 3.2658E-03 & 0.95 & 1.9462E-04 & 1.91 \\ \hline 200 & 1.6373E-03 & 1.00 & 4.8629E-05 & 2.00 \\ \hline 400 & 8.7771E-04 & 1.01 & 1.2468E-05 & 1.97 \\ \hline \end{tabular} \end{table} \end{examplecase} \begin{examplecase}[Ideal-gas pressure and attractive kernel]\label{ex:idealker} In this case study the pressure satisfies $P(\rho)=\rho$ and there is an interaction potential with a kernel of the form $W(x)=\frac{x^2}{2}$. The steady state for a general total mass $M_0$ is again equal to the steady states from examples \ref{ex:idealpot} and \ref{ex:idealpotCS} with unit mass. The linear damping coefficient has been reduced to $\gamma=0.01$ in order to compare the evolution with the previous examples. The initial conditions for the order of accuracy test are the ones from example \ref{ex:idealpot} in \eqref{eq:icidealpot}. 
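The fact that the kernel $W(x)=x^2/2$ reproduces the steady state of the external potential $V(x)=x^2/2$ can be checked directly: for a centred density, $W\star\rho = x^2/2 + m_2/2$ since the first moment vanishes, so $\log\rho + W\star\rho$ is constant for the Gaussian \eqref{eq:steadyidealpot}. A short sketch of this verification on a symmetric mesh (illustrative only, not part of the solver):

```python
import numpy as np

N, L = 200, 5.0
dx = 2 * L / N
x = -L + dx * (np.arange(N) + 0.5)        # symmetric cell centres on [-5, 5]

rho = np.exp(-x**2 / 2)
rho /= rho.sum() * dx                     # discrete unit mass

W = 0.5 * (x[:, None] - x[None, :])**2    # quadratic kernel W(x_i - x_j)
H = (W @ rho) * dx                        # discrete convolution (W * rho)_i

variation = np.log(rho) + H               # discrete delta F / delta rho, constant
```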
The order of accuracy test from this example is shown in table \ref{table:idealker}, and the evolution of the density, momentum, variation of the free energy with respect to the density, total energy and free energy are depicted in figure \ref{fig:idealker}. Due to the low value of $\gamma$ in the linear damping, there is a repeated exchange of free energy and kinetic energy during the temporal evolution, which can be noticed from the oscillations of the free energy plot in figure \ref{fig:idealker} (D). In the previous examples the linear damping term dissipates the momentum on a faster timescale and these exchanges only last for a few oscillations. One can also notice that the time to reach the steady state is longer than in the previous examples. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-3.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-3.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-3.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-3.pdf} } \end{center} \protect\protect\caption{\label{fig:idealker} Temporal evolution of Example \ref{ex:idealker}.} \end{figure} \begin{table}[!htbp] \centering \caption{Accuracy test for Example \ref{ex:idealker} with the first and second-order schemes\blue{, at $t=0.3$}} \label{table:idealker} \begin{tabular}{c c c c c } \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of \\ cells\end{tabular}} & \multicolumn{2}{c}{ First-order} & \multicolumn{2}{c}{Second-order} \\ \cline{2-5} & $L^1$ error & order & $L^1$ error & order \\ \hline 50 & 6.6938E-03 & - & 7.6135E-04 & - \\ \hline 100 & 3.4702E-03 & 0.95 & 2.0207E-04 & 1.91 \\ \hline 200 & 1.7410E-03 & 1.00 & 5.0306E-05 & 2.01 \\ \hline 400 & 8.6890E-04 & 1.00 & 1.2879E-05 & 1.97 \\ \hline \end{tabular} 
\end{table} \end{examplecase} \begin{examplecase}[Pressure proportional to square of density and attractive potential]\label{ex:squarepot} For this example the pressure satisfies $P(\rho)=\rho^2$ and there is an external potential of the form $V(x)=\frac{x^2}{2}$. Contrary to the previous examples \ref{ex:idealpot}, \ref{ex:idealpotCS} and \ref{ex:idealker}, the choice of $P(\rho)=\rho^2$ implies that regions of vacuum where $\rho=0$ appear in the evolution and steady solution of the system. As explained in the introduction of this section, the numerical flux employed for this case is a kinetic solver based on \cite{bouchut2004nonlinear}. The steady state for this example with an initial mass of $M_0$ satisfies \[ \rho_\infty (x) = \left\{ \begin{array}{ll} \displaystyle -\frac{1}{4}\left(x+\sqrt[3]{3 M_0}\right)\left(x-\sqrt[3]{3 M_0}\right) \quad & \textrm{for} \quad x\in\left[-\sqrt[3]{3 M_0},\sqrt[3]{3 M_0}\right],\\[3mm] 0 & \textrm{otherwise}. \end{array} \right. \] The initial conditions taken for the order of accuracy test are \begin{equation*} \rho(x,t=0)=M_0\frac{0.1+e^{-x^2}}{\int_\mathbb{R} \left(0.1+e^{-x^2} \right)dx},\quad \rho u(x,t=0)=-0.2 \sin\left(\frac{\pi x}{10}\right),\quad x\in [-5,5], \end{equation*} with $M_0$ the mass of the system, equal to $1$. The order of accuracy test from this example is shown in table \ref{table:squarepot}, and the evolution of the density, momentum, variation of the free energy with respect to the density, total energy and free energy are depicted in figure \ref{fig:squarepot}. The initial kinetic energy represents a large part of the initial total energy, and there is also an exchange between the kinetic energy and the free energy resulting in the oscillations in the plot of the discrete free energy. 
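The explicit parabola above can be checked directly: with $P(\rho)=\rho^2$ one has $\Pi'(\rho)=2\rho$, so on the support $2\rho+x^2/2$ must be constant (equal to $a^2/2$ with half-width $a=\sqrt[3]{3M_0}$), and this half-width makes the total mass equal to $M_0$. A short numerical sanity check (illustrative only):

```python
import numpy as np

M0 = 1.0
a = (3 * M0) ** (1 / 3)                        # half-width of the support

x = np.linspace(-a, a, 2001)
rho = np.maximum(-(x + a) * (x - a) / 4, 0.0)  # explicit steady parabola

mass = rho.sum() * (x[1] - x[0])               # should recover M0
variation = 2 * rho + x**2 / 2                 # Pi'(rho) + V, constant a^2/2
```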
As a remark, in this example the order of accuracy for the schemes with order higher than one is reduced to one \blue{in both the vacuum and interface regions}, as also pointed out in \cite{filbet2005approximation}. The orders shown in table \ref{table:squarepot} \blue{are computed by considering only the cells in the support of the density that are away from the interface region, and the vacuum regions are not taken into consideration}. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-4.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-4.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-4.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-4.pdf} } \end{center} \protect\protect\caption{\label{fig:squarepot} Temporal evolution of Example \ref{ex:squarepot}.} \end{figure} \begin{table}[!htbp] \centering \caption{Accuracy test for Example \ref{ex:squarepot} with the first and second-order schemes\blue{, at $t=0.3$}} \label{table:squarepot} \begin{tabular}{c c c c c } \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of \\ cells\end{tabular}} & \multicolumn{2}{c}{ First-order} & \multicolumn{2}{c}{Second-order} \\ \cline{2-5} & $L^1$ error & order & $L^1$ error & order \\ \hline 50 & 6.8826E-03 & - & 1.0735E-03 & - \\ \hline 100 & 3.5106E-03 & 0.97 & 2.9188E-04 & 1.88 \\ \hline 200 & 1.7596E-03 & 1.00 & 7.6113E-05 & 1.94 \\ \hline 400 & 8.8184E-04 & 1.00 & 1.9103E-05 & 1.99 \\ \hline \end{tabular} \end{table} \end{examplecase} \begin{examplecase}[Moving steady state with ideal-gas pressure, attractive kernel and Cucker-Smale damping term]\label{ex:moving} The purpose of this example is to show that our scheme from section \ref{sec:numsch} preserves the order of accuracy for moving steady states of 
the form \eqref{eq:steadyvarener2}, where the velocity is not dissipated. As mentioned in the introduction, the generalization of well-balanced schemes to preserve moving steady states has proven to be quite complicated \cite{noelle2007high,xing2011advantage}, and it is not the aim of this work to construct such schemes. For this example the pressure satisfies $P(\rho)=\rho$ and there is an interaction potential with a kernel of the form $W(x)=\frac{x^2}{2}$. The linear damping is eliminated and the Cucker-Smale damping term included. Under this configuration, there exists an explicit solution for system \eqref{eq:generalsys2} consisting of a travelling wave of the form \begin{equation}\label{eq:travellingwave} \rho(x,t)=M_0 \frac{e^{-(x-u t)^2/2}}{\int_\mathbb{R} e^{-x^2/2} dx},\quad u(x,t)=0.2, \end{equation} with $M_0$ equal to $1$ so that the total mass is unitary. As a result, the order of accuracy test can be accomplished by computing the error with respect to the exact reference solution, contrary to what was proposed in the previous examples. It should be remarked, however, that the velocity and the variation of the free energy with respect to the density profiles are not kept constant along the domain by our numerical scheme, since the well-balanced property for moving steady states is not satisfied. The initial conditions for our simulation are \eqref{eq:travellingwave} at $t=0$, in a numerical domain with $ x\in [-8,9]$. The simulation is run until $t=3$. The table of errors for different numbers of cells is shown in table \ref{table:movingwater}, and the evolution of the system is illustrated in figure \ref{fig:moving}. The velocity and the variation of the free energy plots are not included since they are not maintained constant with our scheme. 
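Since the exact travelling wave \eqref{eq:travellingwave} simply translates with speed $u=0.2$, its mass is conserved and its centre of mass sits at $ut$. A brief sketch evaluating the exact profile at $t=3$ on the computational domain of this example (illustrative only):

```python
import numpy as np

u, M0, t = 0.2, 1.0, 3.0
x = np.linspace(-8.0, 9.0, 3401)               # numerical domain of the example
dx = x[1] - x[0]

# exact travelling Gaussian; sqrt(2*pi) is the normalising integral
rho = M0 * np.exp(-((x - u * t) ** 2) / 2) / np.sqrt(2 * np.pi)

mass = rho.sum() * dx                          # conserved unit mass
centre = (x * rho).sum() * dx / mass           # centre of mass, should be u*t = 0.6
```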
\begin{table}[!htbp] \centering \caption{Accuracy test for Example \ref{ex:moving} with the first and second-order schemes\blue{, at $t=3$}} \label{table:movingwater} \begin{tabular}{c c c c c } \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Number of \\ cells\end{tabular}} & \multicolumn{2}{c}{ First-order} & \multicolumn{2}{c}{Second-order} \\ \cline{2-5} & $L^1$ error & order & $L^1$ error & order \\ \hline 50 & 9.84245E-03 & - & 2.78988E-03 & - \\ \hline 100 & 4.92029E-03 & 1.00 & 9.09342E-04 & 1.62 \\ \hline 200 & 2.44627E-03 & 1.01 & 2.55340E-04 & 1.83 \\ \hline 400 & 1.21228E-03 & 1.01 & 7.47905E-05 & 1.77 \\ \hline \end{tabular} \end{table} \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-10.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-10.pdf} } \end{center} \protect\protect\caption{\label{fig:moving} Temporal evolution of Example \ref{ex:moving}.} \end{figure} \end{examplecase} \subsection{Numerical experiments}\label{subsec:numexp} This subsection applies the well-balanced scheme in section \ref{sec:numsch} to a variety of free energies from systems which have received considerable attention in the literature. Some of these systems have been mainly studied in their overdamped form, obtained in the limit $\gamma \to \infty$, and our well-balanced scheme can therefore be useful in determining the role that inertia plays in those systems. \begin{examplecase}[Pressure proportional to square of density and double-well potential] \label{ex:squarepotdoublewell} In this example the pressure is taken as in example \ref{ex:squarepot}, with $P(\rho)=\rho^2$, thus leading to vacuum regions. The external potential is chosen to have a double-well shape of the form $V(x)=a\,x^4-b\,x^2$, with $a,\,b>0$. 
This system exhibits a variety of steady states depending on the symmetry of the initial condition, the initial mass and the shape of the external potential $V(x)$. The general expression for the steady states is \begin{equation*} \rho_{\infty}=\left(C(x)-V(x)\right)_{+}=\left(C(x)-a\,x^4+b\,x^2\right)_{+}, \end{equation*} where $C(x)$ is a piecewise constant function, zero outside the support of the density. Notice that $C(x)$ can attain a different value in each connected component of the support of the density. Three different initial data are simulated in order to compare the resulting long-time asymptotics, i.e., we show that different steady states are achieved corresponding to different initial data. The initial conditions are \begin{equation*} \rho(x,t=0)=M_0\frac{0.1+e^{-(x-x_0)^2}}{\int_\mathbb{R} \left(0.1+e^{-(x-x_0)^2}\right)dx},\quad \rho u(x,t=0)=-0.2 \sin\left(\frac{\pi x}{10}\right),\quad x\in [-10,10], \end{equation*} with $M_0$ equal to $1$ so that the total mass is unitary. When $x_0=0$ the initial density is symmetric, and when $x_0\neq0$ the initial density is asymmetric. \begin{enumerate}[label=\alph*.] \item First case: The external potential satisfies $V(x)=\frac{x^4}{4}-\frac{3x^2}{2}$ and the initial density is symmetric with $x_0=0$. For this configuration the steady solution presents two disconnected bumps of density with the same mass in each of them, as shown in figure \ref{fig:doublewell} (A) and (B). The variation of the free energy with respect to the density presents the same constant value in the two disconnected supports of the density. The evolution is symmetric throughout.
\begin{figure}[] \begin{center} \subfloat[Density in first case]{\protect\protect\includegraphics[scale=0.4]{density-51.pdf} } \subfloat[Variation of the free energy in first case]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-51.pdf} }\\ \subfloat[Density in second case]{\protect\protect\includegraphics[scale=0.4]{density-52.pdf} } \subfloat[Evolution of the variation of the free energy in second case]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-52.pdf} }\\ \subfloat[Density in third case]{\protect\protect\includegraphics[scale=0.4]{density-53.pdf} } \subfloat[Variation of the free energy in third case]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-53.pdf} } \end{center} \protect\protect\caption{\label{fig:doublewell} Temporal evolution of the first, second and third cases from example \ref{ex:squarepotdoublewell}.} \end{figure} \item Second case: The external potential satisfies $V(x)=\frac{x^4}{4}-\frac{3x^2}{2}$ and the initial density is asymmetric with $x_0=1$. The final steady density is characterised again by the two disconnected supports, but for this configuration the mass in each of them differs, as shown in figure \ref{fig:doublewell} (C) and (D). Similarly, the variation of the free energy with respect to the density presents different constant values in the two disconnected supports of the density. \item Third case: For this last configuration the external potential is varied and satisfies $V(x)=\frac{x^4}{4}-\frac{x^2}{2}$, and the initial density is asymmetric with $x_0=1$. In this case, even though the initial density is asymmetric, the final steady density is symmetric and compactly supported due to the shape of the potential, as shown in figure \ref{fig:doublewell} (E) and (F). The variation of the free energy with respect to the density presents a constant value on the whole support of the density.
\end{enumerate} This behavior shows that this problem has a complicated bifurcation diagram and corresponding stability properties depending on the parameters, for instance the coefficients of the potential wells controlling the depth and support of the wells used above. \end{examplecase} \begin{examplecase}[Ideal pressure with noise parameter and its phase transition]\label{ex:bifurcation} The model proposed for this example has a pressure satisfying $P(\rho)=\sigma \rho$, where $\sigma$ is a noise parameter, and external and interaction potentials chosen to be $V(x)=\frac{x^4}{4}-\frac{x^2}{2}$ and $W(x)=\frac{x^2}{2}$, respectively. The corresponding model in the overdamped limit has been previously studied in the context of collective behaviour \cite{barbaro2016phase}, mean field limits \cite{gomes2018mean}, and systemic risk \cite{garnier2013large}; see also \cite{tugaut2014phase} for the proof in one dimension. We find that this hydrodynamic system exhibits a supercritical pitchfork bifurcation in the center of mass $\hat{x}$ of the steady state when varying the noise parameter $\sigma$, like its overdamped-limit counterpart discussed above. For values of $\sigma$ higher than a certain threshold, all steady states are symmetric and have their center of mass $\hat{x}$ at $x=0$. However, when $\sigma$ decreases below that threshold, the pitchfork bifurcation takes place. On the one hand, if the center of mass of the initial density is at $x=0$, the final center of mass in the steady state remains at $x=0$. On the other hand, if the center of mass of the initial density is at $x\neq0$, the center of mass of the steady state approaches $x=1$ or $x=-1$ asymptotically as $\sigma \to 0$, depending on the sign of the initial center of mass. Finally, when $\sigma=0$, the steady state turns into a Dirac delta at $x=0$, $x=1$ or $x=-1$, depending on the initial density. \begin{figure}[ht!]
\begin{center} \subfloat[Bifurcation diagram]{\protect\protect\includegraphics[scale=0.4]{bifurcation.pdf} } \subfloat[Steady state profiles for different $\sigma$]{\protect\protect\includegraphics[scale=0.4]{bifurcationsteadystates.pdf} \llap{\shortstack{% \includegraphics[scale=.14,trim={0 0 1.4cm 0cm},clip]{bifurcationsteadystate0001.pdf}\\ \rule{0ex}{0.65in}% } \rule{1.288in}{0ex}} } \end{center} \protect\protect\caption{\label{fig:bifurcation} Bifurcation diagram (A) and steady states for different values of the noise parameter $\sigma$ (B) from Example \ref{ex:bifurcation}.} \end{figure} The pitchfork bifurcation is supercritical since the branch of the bifurcation corresponding to $\hat{x}=0$ is unstable. This means that any deviation from an initial center of mass at $x=0$ leads to a steady center of mass located in one of the two branches of the parabola in the bifurcation diagram. The numerical scheme outlined in section \ref{sec:numsch} captures this bifurcation diagram for the evolution of the hydrodynamic system. The results are shown in figure \ref{fig:bifurcation}. In it, (A) depicts the bifurcation diagram of the final center of mass as the noise parameter $\sigma$ is varied, for an initial center of mass at $x\neq0$. For a symmetric initial density and antisymmetric velocity, the center of mass numerically remains at $x=0$ for an adequate stopping criterion, since property (vi) in subsection \ref{subsec:firstorderprop} holds. However, any slight error in the numerical computation unavoidably leads to a steady state deviating towards one of the two stable branches, due to the strongly unstable nature of the branch with $x=0$. Panel (B) of figure \ref{fig:bifurcation} illustrates the steady states resulting from an initial center of mass located at $x>0$, for different choices of the noise parameter $\sigma$.
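The pitchfork can also be reproduced by a simple self-consistency computation, independently of the finite volume scheme. For the quadratic kernel $W(x)=x^2/2$, the steady-state relation $\sigma\ln\rho+V+W\star\rho=\text{const}$ reduces (absorbing constants into the normalisation, and assuming unit mass) to $\rho\propto\exp\left(-(x^4/4-\hat{x}x)/\sigma\right)$ with the self-consistency condition $\hat{x}=\int x\rho\,dx$, which can be solved by fixed-point iteration. A minimal Python sketch; the function name, mesh, and iteration scheme are our own choices:

```python
import math

def steady_center_of_mass(sigma, xhat0, tol=1e-10, maxit=10000):
    """Self-consistent center of mass for the steady states of this example:
    with V(x) = x^4/4 - x^2/2 and W(x) = x^2/2, the steady density is
    rho(x) proportional to exp(-(x^4/4 - xhat*x)/sigma), with
    xhat = int x rho dx.  Solved by fixed-point iteration on xhat."""
    n, L = 2001, 4.0
    xs = [-L + 2.0 * L * i / (n - 1) for i in range(n)]
    dx = xs[1] - xs[0]
    xhat = xhat0
    for _ in range(maxit):
        expo = [-(x ** 4 / 4.0 - xhat * x) / sigma for x in xs]
        shift = max(expo)                      # stabilise the exponentials
        w = [math.exp(e - shift) for e in expo]
        Z = sum(w) * dx
        new = sum(x * wi for x, wi in zip(xs, w)) * dx / Z
        if abs(new - xhat) < tol:
            return new
        xhat = new
    return xhat
```

Above the critical noise this iteration returns $\hat{x}\approx0$; below it, a positive initialisation settles on the upper branch, which approaches $\hat{x}=1$ as $\sigma\to0$, consistent with figure \ref{fig:bifurcation} (A).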
For $\sigma=0.001$, which is the smallest value of $\sigma$ simulated, the density profile approaches the theoretical Dirac delta expected at $x=1$ when $\sigma \to 0$. When $\sigma=0$ the hyperbolicity of the system in \eqref{eq:generalsys2} is lost since the pressure term vanishes, and as a result the numerical approach in section \ref{sec:numsch} cannot be applied. The numerical strategy followed to recover the bifurcation diagram is based on so-called differential continuation. It simply means that, as $\sigma \to 0$, each subsequent simulation with a new, lower value of $\sigma$ takes as initial condition the steady state from the previous simulation. This makes it possible to complete the bifurcation diagram, since otherwise the simulations with very small $\sigma$ take a long time to converge for general initial conditions. In addition, to maintain sufficient resolution for the steady states close to the Dirac delta, the mesh is adapted for each simulation. This is accomplished by firstly interpolating the previous steady state with a piecewise cubic Hermite polynomial, which preserves the shape and avoids oscillations, and secondly by creating a new and narrower mesh where the interpolating polynomial is employed to construct the new initial condition for the differential continuation. \end{examplecase} \begin{examplecase}[Hydrodynamic generalization of the Keller-Segel system - Generalized Euler-Poisson systems]\label{ex:kellersegel} The original Keller-Segel model has been widely employed in the modelling of chemotaxis, which is usually defined as the directed movement of cells and organisms in response to chemical gradients \cite{keller1970initiation}. These systems also find applications in astrophysics and gravitation \cite{sire2002thermodynamics,deng2002solutions}.
It is a system of two coupled drift-diffusion differential equations for the density $\rho$ and the chemoattractant concentration $S$, \begin{equation*} \begin{cases} {\displaystyle\partial_{t}\rho= \nabla \cdot \left(\nabla P(\rho)-\chi\rho\nabla S\right), }\\[2mm] {\displaystyle \partial_{t}S=D_s\Delta S-\theta S+\beta \rho.} \end{cases} \end{equation*} In this system $P(\rho)$ is the pressure, and the biological/physical meaning of the constants $\chi$, $D_s$, $\theta$ and $\beta$ can be reviewed in the literature \cite{horstmann20031970,bellomo2015toward,hoffmann2017keller}. For this example they are simplified as usual so that $\chi=D_s=\beta=1$ and $\theta=0$. A further assumption usually made in the literature is that $\partial_{t} \rho$ is much larger than $\partial_{t} S$ \cite{hoffmann2017keller}, leading to a simplification of the equation for the chemoattractant concentration $S$, which becomes the Poisson equation $-\Delta S=\rho$. Hydrodynamic extensions of the model, which include inertial effects, have also been proven to be essential for certain applications \cite{chavanis2006jeans,chavanis2007kinetic,gamba2003percolation}, leading to a hyperbolic system of equations with linear damping which in one dimension reads as \begin{equation*} \begin{cases} \partial_{t}\rho+\partial_{x}\left(\rho u\right)=0, \\[2mm] {\displaystyle \partial_{t}(\rho u)\!+\partial_{x}(\rho u^2)\!=-\partial_{x} P(\rho)+\partial_{x}S -\gamma\rho u ,}\\[2mm] {-\partial_{xx} S=\rho .} \end{cases} \end{equation*} By using the fundamental solution of the Laplacian in one dimension, this equation becomes $2S = -|x| \star \rho$. This term, after neglecting the constant, can be plugged into the momentum equation so that the last equation for $S$ can be removed. As a result, the hydrodynamic Keller-Segel model is reduced to the system of equations \eqref{eq:generalsys} considered in this work, with $W(x)=|x|/2$, $V(x)=0$ and $\psi\equiv 0$.
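The reduction above rests on the one-dimensional identity $\partial_{xx}\big(\tfrac{|x|}{2}\star\rho\big)=\rho$, i.e. $\pm|x|/2$ is, up to sign convention, the fundamental solution of the one-dimensional Laplacian. A quick stdlib check with a Gaussian density (mesh and quadrature are our own illustrative choices):

```python
import math

def convolve_abs_half(xs, rho, dx):
    """(|x|/2 * rho)(x_i) by direct O(n^2) quadrature (illustrative)."""
    return [0.5 * dx * sum(abs(x - y) * r for y, r in zip(xs, rho)) for x in xs]

n, L = 801, 10.0
xs = [-L + 2.0 * L * i / (n - 1) for i in range(n)]
dx = xs[1] - xs[0]
rho = [math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi) for x in xs]
S = convolve_abs_half(xs, rho, dx)
# centred second difference of the convolution; since |x| is piecewise
# linear away from its kink, this recovers rho at the interior nodes
recovered = [(S[i - 1] - 2.0 * S[i] + S[i + 1]) / dx ** 2 for i in range(1, n - 1)]
```

On a uniform mesh the identity even holds exactly in the discrete sense, because the second difference of $|x_i-x_j|$ in $i$ vanishes for every $j\neq i$.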
As a final generalization \cite{carrillo2015finite}, the original interaction potential $W(x)=|x|/2$ can be extended to be a homogeneous kernel $W(x)=|x|^\alpha/\alpha$, where $\alpha>-1$. By convention, $W(x)=\ln|x|$ for $\alpha=0$. Further generalizations are Morse-like potentials as in \cite{carrillo2014explicit,carrillo2015finite}, where $W(x)=1-\exp(-|x|^\alpha/\alpha)$ with $\alpha> 0$. The solution of this system can present a rich variety of behaviours due to the competition between the attraction from the kernel $W(x)$ and the repulsion caused by the diffusion of the pressure $P(\rho)$, as reviewed in \cite{calvez2017equilibria,calvez2017geometry}. By appropriately tuning the parameters $\alpha$ in the kernel $W(x)$ and $m$ in the pressure $P(\rho)$, one can find compactly supported steady states, self-similar behaviour, or finite-time blow up. Three different regimes have been studied in the overdamped generalized Keller-Segel model \cite{carrillo2015finite}: the diffusion-dominated regime ($m>1-\alpha$), the balanced regime ($m=1-\alpha$), where a critical mass separates self-similar and blow-up behaviour, and the aggregation-dominated regime ($m<1-\alpha$). These three regimes have so far not been analytically studied for the hydrodynamic system except for a few particular cases \cite{carrillo2016critical,carrillo2016pressureless}, and the presence of inertia suggests that the initial momentum profile plays a role, together with the mass of the system, in separating diffusive from blow-up behaviour. The well-balanced scheme provided in section \ref{sec:numsch} is a useful tool to effectively reach the varied steady states resulting from different values of $\alpha$ and $m$. The objective of this example is to provide some numerical experiments to show the richness of possible behaviours.
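The three overdamped regimes reduce to comparing $m$ with $1-\alpha$; a trivial but convenient helper (the function name is ours):

```python
def overdamped_regime(m, alpha):
    """Regime of the generalized Keller-Segel model in the overdamped limit,
    for pressure P(rho) = rho^m and kernel W(x) = |x|^alpha / alpha."""
    if m > 1 - alpha:
        return "diffusion-dominated"
    if m == 1 - alpha:
        return "balanced"
    return "aggregation-dominated"
```

The two parameter choices simulated below, $(\alpha,m)=(0.5,1.5)$ and $(-0.5,1.3)$, fall in the diffusion-dominated and aggregation-dominated regimes, respectively.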
This scheme can eventually be employed to numerically validate theoretical studies concerning the existence of the different regimes for the hydrodynamic system, for instance, or how the choice of the initial momentum or the total mass can lead to diffusive or blow-up behaviour. This will be explored further elsewhere. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-71.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-71.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-71.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-71.pdf} } \end{center} \protect\protect\caption{\label{fig:kellersegel1} Temporal evolution of Example \ref{ex:kellersegel} with compactly-supported steady state.} \end{figure} We have conducted two simulations with different choices of the parameters $\alpha$ and $m$. In both cases $m>1$, so that a proper numerical flux able to deal with vacuum regions has to be implemented. As emphasised in the introduction of this section, the kinetic scheme developed in \cite{perthame2002kinetic} is employed. Both simulations share the same initial conditions, \begin{equation*} \rho(x,t=0)=M_0\frac{e^{-\frac{4(x+2)^2}{10}}+e^{-\frac{4(x-2)^2}{10}}}{\int_\mathbb{R} \left(e^{-\frac{4(x-2)^2}{10}}+e^{-\frac{4(x+2)^2}{10}}\right)dx},\quad \rho u(x,t=0)=0,\quad x\in [-8,8], \end{equation*} where the total mass $M_0$ of the system is $1$. In the first simulation the choice of parameters is $\alpha=0.5$ and $m=1.5$. According to the regime classification for the overdamped system, this would correspond to the diffusion-dominated regime. In the overdamped limit, solutions exist globally in time, and the steady state is compactly supported.
The results are depicted in figure \ref{fig:kellersegel1} and agree well with this regime. In the steady state the variation of the free energy with respect to the density has a constant value only in the support of the density, as expected. The total energy decreases in time and there is no exchange between the free energy and the kinetic energy, since the free energy in figure \ref{fig:kellersegel1} (D) does not oscillate. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-72.pdf} \llap{\shortstack{% \includegraphics[scale=.14,trim={0 0 1.4cm 0cm},clip]{density-detail-72.pdf}\\ \rule{0ex}{1.28in}% } \rule{1.288in}{0ex}} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-72.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-72.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-72.pdf} } \end{center} \protect\protect\caption{\label{fig:kellersegel2} Temporal evolution of Example \ref{ex:kellersegel} with finite-time blow up.} \end{figure} \end{examplecase} The second simulation has a choice of parameters of $\alpha=-0.5$ and $m=1.3$. In the case of the overdamped system this would correspond to the aggregation-dominated regime, where blow-up and diffusive behaviour coexist and depend on the initial density profile. The results from this simulation of the hydrodynamic system are illustrated in figure \ref{fig:kellersegel2}. For this particular initial condition, finite-time blow up is predicted analytically. Our scheme, due to the conservation of mass of the finite volume scheme, concentrates all the mass in a single cell in finite time; that is, the scheme reaches in finite time the best approximation to a Dirac delta at a point that the chosen mesh allows.
Once this happens, this mesh-dependent artificial numerical steady state is kept for all times. From figure \ref{fig:kellersegel2} (C) it is evident that the variation of the free energy with respect to the density does not reach a constant value, and in figure \ref{fig:kellersegel2} (D) the free energy presents a sharp decay when the concentration in one cell is produced (around $t\approx65$). The slope in the free energy plot theoretically tends to $-\infty$ due to the blow up, but in the simulation the decay is halted due to the conservation of mass and the artificial steady state. This agrees with the fact that the expected Dirac delta profile in the density at the blow-up time is obviously not reached numerically. It was also checked that this phenomenon repeats for all meshes, with more refined meshes leading to more concentrated artificial steady states with more negative free energy values. For other, more spread-out initial conditions our scheme produces diffusive behaviour, as expected from theoretical considerations. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density3D-73.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-73.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-73.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-73.pdf} } \end{center} \protect\protect\caption{\label{fig:kellersegel3} Temporal evolution of Example \ref{ex:kellersegel} with Morse-type potential and three initial density bumps.} \end{figure} A further simulation is carried out to explore the convergence in time towards equilibration with a Morse-type potential of the form $W(x)=-e^{-|x|^2/2}/\sqrt{2\pi}$.
With this potential the attraction between two bumps of density separated by a considerable distance is quite small. However, when enough time has passed and the bumps get closer, they merge at an exponentially fast pace due to the convexity of the Gaussian potential, and a new equilibrium is reached with just one bump. The interesting fact about this system is therefore the existence of two timescales: the time to get the bumps of density close enough, which could be arbitrarily slow, and the time to merge the bumps, which is exponentially fast. We have set up a simulation whose initial state presents three bumps of density, with the initial conditions satisfying \begin{equation*} \rho(x,t=0)=M_0\frac{e^{-\frac{(x+3)^2}{2}}+e^{-\frac{(x-3)^2}{2}}+0.55e^{-\frac{(x-8.5)^2}{2}}}{\int_\mathbb{R} \left(e^{-\frac{(x+3)^2}{2}}+e^{-\frac{(x-3)^2}{2}}+0.55e^{-\frac{(x-8.5)^2}{2}}\right)dx},\quad \rho u(x,t=0)=0,\quad x\in [-8,12], \end{equation*} and the total mass of the system equal to $M_0=1.2$. The parameter $m$ in the pressure satisfies $m=3$, and the effect of the linear damping is reduced by assigning $\gamma=0.05$. The results are depicted in figure \ref{fig:kellersegel3}. In (A) one can observe how the two central bumps of density merge after some time, and how the third bump, with less mass, keeps getting closer in time until it also merges. This is also reflected in the evolution of the free energy in figure \ref{fig:kellersegel3} (D), where there are two sharp, exponential decays corresponding to the merging of the bumps. \begin{examplecase}[DDFT for 1D hard rods]\label{ex:hardrods} Classical (D)DFT is a theoretical framework provided by nonequilibrium statistical mechanics that has increasingly become a widely employed method for the computational scrutiny of the microscopic structure of both uniform and non-uniform fluids \cite{duran2016dynamical,goddard2012unification,yatsyshin2012spectral,yatsyshin2013geometry,lutsko2010recent}.
The DDFT equations have the same form as in \eqref{eq:generalsys2} when the hydrodynamic interactions are neglected. The starting point in (D)DFT is a functional $\mathcal{F}[\rho]$ for the fluid's free energy which encodes all microscopic information such as the ideal-gas part, short-range repulsive effects induced by molecular packing, attractive interactions and external fields. This functional can be exactly derived only for a limited number of applications, for instance the one-dimensional hard rod system of Percus \cite{percus1976equilibrium}. In general, however, it has to be approximated by making appropriate assumptions, as e.g. in the so-called fundamental-measure theory of Rosenfeld \cite{rosenfeld1989free}. These assumptions are usually validated by carrying out appropriate test simulations (e.g. of the underlying stochastic dynamics) to compare the DDFT system with the approximate free-energy functional to the microscopic reference system~\cite{goddard2012general}. The objective of this example is to show that the numerical scheme in section \ref{sec:numsch} can also be applied to the physical free-energy functionals employed in (D)DFT, which satisfy the more complex expression for the free energy described in \eqref{eq:freeenergygeneral}, with a variation satisfying \eqref{eq:varfreeenergygeneral}. For this example the focus is on the hard rod system in one dimension. Its free energy has a part depending on the local density which takes the classical form for an ideal gas, with $P(\rho)=\rho$. It is therefore usually denoted as the ideal part of the free energy, \begin{equation*} \mathcal{F}_{id}[\rho]=\int \Pi(\rho)dx=\int \rho(x) \left( \ln \rho -1 \right) dx. \end{equation*} There is also a part of the general free energy in \eqref{eq:freeenergygeneral} which contains the non-local dependence on the density, and which has different exact or approximate forms depending on the system under consideration.
In (D)DFT it is denoted as the excess free energy, and for the hard rods it satisfies \begin{align*} \mathcal{F}_{ex}[\rho]&=\frac{1}{2}\int K\left(W(\bm{x})\star \rho(\bm{x})\right) \rho(\bm{x})d\bm{x}\\ &=-\frac{1}{2}\int \rho(x+\sigma/2)\ln\left(1-\eta(x)\right)dx -\frac{1}{2}\int \rho(x-\sigma/2)\ln\left(1-\eta(x)\right)dx, \end{align*} \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-81.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-81.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-81.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-81.pdf} } \end{center} \protect\protect\caption{\label{fig:hardrods1} Temporal evolution of Example \ref{ex:hardrods} with $8$ hard rods and a confining potential.} \end{figure} where $\sigma$ is the length of a hard rod and $\eta(x)$ the local packing fraction, representing the probability that a point $x$ is covered by a hard rod, \begin{equation}\label{eq:packing} \eta(x)=\int_{-\frac{\sigma}{2}}^{\frac{\sigma}{2}}\rho(x+y) dy . \end{equation} The function $K(x)$ in this case satisfies $K(x)=\ln(1-x)$ and the kernel $W(x)$ takes the form of a characteristic function which limits the integration interval in the packing fraction \eqref{eq:packing}. To obtain the excess free energy for the hard rods one also has to consider changes of variables in the integrals. The last part of the general free energy in \eqref{eq:freeenergygeneral} corresponds to the effect of the external potential $V(x)$.
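The packing fraction $\eta(x)$ is a moving-window integral of the density and is straightforward to evaluate on a uniform mesh. A minimal Python sketch using trapezoidal quadrature (the function name and the convention that the density vanishes outside the mesh are ours):

```python
def packing_fraction(rho, dx, sigma=1.0):
    """eta(x_i): integral of rho over (x_i - sigma/2, x_i + sigma/2),
    approximated by the trapezoidal rule on a uniform mesh of spacing dx.
    Density outside the mesh is taken to be zero (illustrative convention)."""
    half = int(round(sigma / (2.0 * dx)))   # assumes sigma/2 is a multiple of dx
    n = len(rho)
    eta = []
    for i in range(n):
        lo, hi = i - half, i + half
        total = 0.0
        for j in range(max(0, lo), min(n - 1, hi) + 1):
            w = 0.5 if j in (lo, hi) else 1.0   # trapezoid end weights
            total += w * rho[j]
        eta.append(total * dx)
    return eta
```

For a constant density $\rho\equiv0.5$ and $\sigma=1$ this returns $\eta\approx0.5$ away from the boundaries, as expected.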
On the whole, the variation of the free energy in \eqref{eq:freeenergygeneral} with respect to the density, for the case of hard rods, satisfies \begin{align*} \frac{\delta \mathcal{F}[\rho]}{\delta \rho}=&\frac{\delta \mathcal{F}_{id}[\rho]}{\delta \rho}+\frac{\delta \mathcal{F}_{ex}[\rho]}{\delta \rho}+V(x)\\ =&\ln(\rho)-\frac{1}{2}\ln\left(1-\int_{x-\sigma}^{x}\rho(y)dy\right)-\frac{1}{2}\ln\left(1-\int_{x}^{x+\sigma}\rho(y)dy\right)\\ & +\frac{1}{2}\int_{x-\sigma/2}^{x+\sigma/2}\left(\frac{\rho(y+\sigma/2)+\rho(y-\sigma/2)}{1-\eta(y)}\right)dy+V(x). \end{align*} This system can be straightforwardly simulated with the well-balanced scheme from section \ref{sec:numsch} by gathering the excess part of the free energy and the external potential under the term $H(x,\rho)$, so that \begin{equation*} H(x,\rho)=\frac{\delta \mathcal{F}_{ex}[\rho]}{\delta \rho}+V(x). \end{equation*} The first simulation seeks to capture the steady state reached by $8$ hard rods of unitary mass and length $\sigma=1$ under the presence of an external potential of the form $V(x)=x^2$. The initial conditions of the simulation are \begin{equation*} \rho(x,t=0)=e^{-\frac{x^2}{20.372}},\quad \rho u(x,t=0)=0,\quad x\in [-13,13], \end{equation*} where the density is chosen so that the total mass of the system is $8$. The results are plotted in figure \ref{fig:hardrods1}. The steady state reached for the density reveals layering due to the confining effects of the external potential and the repulsion between the hard rods. These layering effects can be amplified by increasing the coefficient in the external potential. It is also observed that each of the $8$ peaks has unit width, since the length of the hard rods $\sigma$ was taken as $1$. The variation of the free energy with respect to the density also reaches a constant value over the whole domain.
For microscopic simulations of the underlying stochastic dynamics for similar examples we refer the reader to \cite{goddard2012unification}. Starting from this last steady state, the second simulation performed for this example shows how the hard rods diffuse when the confining potential is removed. This simulation has as initial condition the previous steady state from figure \ref{fig:hardrods1}, and the external potential is set to $V(x)=0$. The results are depicted in figure \ref{fig:hardrods2}, and they share the same features as the simulations in \cite{marconi1999dynamic}. The final steady state of the density is a uniform profile resulting from the diffusion of the hard rods, and in this situation the variation of the free energy with respect to the density also reaches a constant value in the steady state, as expected. \begin{figure}[ht!] \begin{center} \subfloat[Evolution of the density]{\protect\protect\includegraphics[scale=0.4]{density-82.pdf} } \subfloat[Evolution of the momentum]{\protect\protect\includegraphics[scale=0.4]{momentum-82.pdf} }\\ \subfloat[Evolution of the variation of the free energy]{\protect\protect\includegraphics[scale=0.4]{varfreeenergy-82.pdf} } \subfloat[Evolution of the total energy and free energy]{\protect\protect\includegraphics[scale=0.4]{energies-82.pdf} } \end{center} \protect\protect\caption{\label{fig:hardrods2} Temporal evolution of Example \ref{ex:hardrods} with $8$ hard rods and no potential.} \end{figure} \end{examplecase} \section{Conclusions} We have introduced first- and second-order accurate finite volume schemes, with positivity-preserving and free-energy-decaying properties, for a large family of hydrodynamic equations with general free energy. These hydrodynamic models with damping naturally arise in dynamic density functional theories, and the accurate computation of their stable steady states is crucial to understand their phase transitions and stability properties.
The models possess a common variational structure based on the physical free energy functional from statistical mechanics. The numerical schemes proposed capture steady states and their equilibration dynamics very well thanks to the crucial free energy decaying property, resulting in well-balanced schemes. The schemes were validated in well-known test cases, and the chosen numerical experiments corroborate these conclusions for intricate phase transitions and complicated free energies. There are also several avenues for possible future research. Indeed, we believe the computational framework and associated methodologies presented here can be useful for the study of bifurcations and phase transitions for systems where the free energy is known from experiments only, either physical or in-silico ones, so that our framework can be adopted in a ``data-driven'' approach. A particularly relevant extension would be to multi-dimensional problems. Two-dimensional problems in particular would be of direct relevance to surface diffusion and therefore to technological processes in materials science and catalysis. We shall examine these and related problems in future studies. \section*{Acknowledgements} We are indebted to P. Yatsyshin and M. A. Dur\'an-Olivencia from the Chemical Engineering Department of Imperial College (IC) for numerous stimulating discussions on statistical mechanics of classical fluids and density functional theory. J.~A.~Carrillo was partially supported by EPSRC via Grant Number EP/P031587/1 and acknowledges support of the IBM Visiting Professorship of Applied Mathematics at Brown University. S.~Kalliadasis was partially supported by EPSRC via Grant Number EP/L020564/1. S.~P.~Perez acknowledges financial support from the IC President's PhD Scholarship and thanks Brown University for hospitality during a visit in April 2018. C.-W. Shu was partially supported by NSF via Grant Number DMS-1719410. \bibliographystyle{siam}
\section{Introduction} Multiple pieces of evidence point to the presence of substantial past volcanic activity on Mars.\ This activity may in fact have been key in maintaining a sufficiently warm, dense, and moist palaeoclimate through greenhouse gas emission, thus allowing for a stable presence of surface liquid water (e.g. \citealp{craddockgreeley2009}). While this past activity is partly reflected in gaseous isotope ratios that are found in the present-day Martian atmosphere \citep{jakoskyphillips2001,craddockgreeley2009}, more obvious evidence is provided by the numerous large volcanic structures and lava flows present over much of the surface of Mars that have been studied since the first spacecraft observations made by the Mariner 9 orbiter in the 1970s \citep{masursky1973}. Evidence of present-day outgassing activity on Mars, however, remains more elusive. While no signs of thermal hotspots were found by the Mars Odyssey/Thermal Emission Imaging System (THEMIS) instrument \citep{christensen2003themis}, evidence from the Mars Express/High Resolution Stereo Camera (HRSC) showed signs of very intermittent activity from volcanoes that erupt between periods of dormancy that last for hundreds of millions of years around the Tharsis and Elysium regions \citep{neukum2004}, with the most recent activity in some cases dating from only a few million years ago. More recently, the InSight Lander found that seismic activity was generally lower than expected on Mars; however, the two largest marsquakes it managed to detect were in the Cerberus Fossae region \citep{giardini2020insight}.\ This was later accompanied by evidence from the Mars Reconnaissance Orbiter/High Resolution Imaging Experiment (HIRISE) of extremely recent volcanic activity in the same region dating back only around 100,000 years \citep{horvath2021}. This gives further credence to the possibility of residual volcanic activity continuing to the present day, perhaps from localised thermal vents.
On Earth, sulphur dioxide (SO\textsubscript{2}) tends to be by far the most abundant compound emitted through passive volcanic degassing after CO\textsubscript{2} and water vapour; it is detected together with smaller amounts of hydrogen sulphide (H\textsubscript{2}S) and occasionally carbonyl sulphide (OCS), although the exact ratios of these compounds usually depend on the volcano and the type of eruption or outgassing in question \citep{Symonds1994}. \citet{gaillard2009} used constraints on the composition of Martian basalts, together with thermochemical constraints on the composition of the Martian mantle, to estimate the sulphur content of potential Martian volcanic emission, especially from volcanoes that formed later in Mars' history, such as those in the Tharsis range, to be 10 - 100 times higher than equivalent sulphur emission from Earth volcanoes. Since these sulphurous gases have stabilities in the Martian atmosphere of the order of only days to a few years \citep{nair1994,krasnopolsky1995,wong2003,wong2005}, their detection in the Martian atmosphere would be a strong indicator of residual present-day volcanic activity on Mars. Alternatively, it could be a sign of a coupling between the atmosphere and sulphur-containing regoliths \citep{farquhar2000} or sulphate deposits that are present in multiple regions of Mars \citep{bibring2005,langevin2005}. This would be an indirect sign of past volcanic activity but one that would nonetheless shed light on a previously unknown sulphur cycle in the Martian atmosphere. Recent detections of other new trace gases in the atmosphere of Mars from remote sensing provide a tantalising glimpse of fascinating new Martian surface and atmospheric chemistry that until now had remained unknown to science.
Examples include the independent detections of hydrogen peroxide by \citet{clancy2004htwootwo} and \citet{encrenaz2004}, a photochemical product that acts as a possible sink for organic compounds in the atmosphere; and the independent detections of hydrogen chloride (HCl) reported by \citet{korablev2020hcl}, \citet{olsen2021hcl}, and \citet{aoki2021}, a possible tracer of the interaction of dust with the atmosphere. HCl in particular was historically assumed to be a tracer of volcanic outgassing and was hypothesised to be responsible for the observed geographical distribution of elemental chlorine \citep{keller2006} and perchlorates \citep{hecht2009,catling2010}. While the seasonality of observed HCl detections in the Martian atmosphere has made a volcanic origin less likely, more recent anomalous detections of HCl in aphelion reported by \citet{olsen2021hcl} have not entirely ruled it out. We should of course also note various disputed detections of methane (e.g. \citealp{formisano2004methane,krasnopolsky2004,mumma2009,webster2015methane}), whose ultimate provenance remains subject to debate and could be a result of a number of different processes, including volcanic outgassing (\citealt{oehleretiope2017}; and references therein), but whose legitimacy has been seriously questioned by recent satellite measurements \citep{korablev2019,montmessin2021}. \begin{comment} Observations of sulphurous gases concurrently with future methane detections could therefore constrain whether the methane has a biological or an abiological origin. In spite of this \end{comment} Despite this, all attempts to constrain the presence of sulphurous gases in the Martian atmosphere, and in particular SO\textsubscript{2}, have failed to confirm any statistically significant positive detections.
Although comprehensive searches for trace gases in the Martian atmosphere have been ongoing since the Mariner 9 era \citep{maguire1977}, most of the earliest attempts were hampered by the lack of spectral sensitivity and low spectral resolution: the resolving power in the mid-infrared where many of the required spectral signatures are located generally needs to be greater than 10,000 in order to resolve them from neighbouring CO\textsubscript{2} and H\textsubscript{2}O absorption lines. More recent literature concerning upper limits relied on ground-based observations that were made at much higher spectral resolutions, with SO\textsubscript{2} and H\textsubscript{2}S spectral data obtained in the sub-millimetre (e.g. \citealp{nakagawa2009,encrenaz2011so2}) and the thermal infrared \citep{krasnopolsky2005,krasnopolsky2012upperlims} wavenumber ranges, and corresponding OCS spectral data around an absorption band in the mid-infrared \citep{khayat2017}. However, ground-based observations still have a number of drawbacks. Firstly, they are hindered by a lack of temporal coverage. Secondly, the presence of telluric absorption in ground-based observations also obscures gas absorption signatures of Martian provenance, removal of which requires correction for Doppler shift (cited by \citealt{zahnle2011} as a possible source of false detections of methane) as well as a complex radiative transfer model that decouples the contribution from Earth and Mars. Finally, they can often lack the spatial resolution necessary to resolve local sources of emission. By contrast, observations from probes in orbit suffer less from these issues and have more localised spatial coverage, which enabled the recent detection of HCl in orbit despite numerous recent failed attempts to detect it from the ground, even with instruments of similar spectral resolution (e.g. \citealp{krasnopolsky1997,hartogh2010,villanueva2013}). 
In this work we present upper detection limits on SO\textsubscript{2}, H\textsubscript{2}S, and OCS based on spectral data from the first two Martian years of observations from the mid-infrared channel of the Atmospheric Chemistry Suite instrument \citep[ACS MIR;][]{korablev2018} on board the ExoMars Trace Gas Orbiter (TGO; \citealp{Vago2015}). In Sect. 2 we introduce the ACS MIR data and summarise the data calibration process, and in Sect. 3 we present the two methods used to establish detection limits on each of the given compounds. In Sect. 4 we present the retrieved upper limits for each of the sulphur trace species in turn. Discussion and conclusions are reserved for Sect. 5. \section{Data and calibration} TGO has been in orbit around Mars since October 2016 and has operated continuously since the start of its nominal science phase in April 2018, providing a wealth of data covering one and a half Martian years as of writing (from L\textsubscript{s}= 163$\text{\textdegree}$ in MY 34 to L\textsubscript{s}= 352$\text{\textdegree}$ in MY 35). On board are two sets of infrared spectrometers designed to perform limb, nadir, and solar occultation observations of the atmosphere: the ACS (\citealp{korablev2018}) and the Nadir and Occultation for Mars Discovery (NOMAD; \citealp{vandaele2018}) instruments. In this work, we focus on the ACS MIR instrument (\citealp{trokhimovskiy2015mir}), which observes Mars purely through solar occultation viewing geometry with the line of sight parallel to the surface, obtaining a set of transmission spectra of the Martian atmosphere at individual tangent heights separated at approximately 2 km intervals from well below the surface of Mars up to the top of the atmosphere during a single measurement sequence. The closer the tangent height of observation to the surface, the greater the molecular number density of the absorbing species and hence the greater the optical depth integrated along the line of sight. 
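The geometric gain of limb viewing over a shorter atmospheric path can be illustrated with a short numerical sketch. All values here are rough, assumed Mars-like numbers (surface density, scale height, and the absorption cross-section are invented for illustration, not instrument or retrieval parameters):

```python
import numpy as np

# Hypothetical, illustrative values for a Mars-like CO2 atmosphere
R_MARS = 3389.5e3      # mean planetary radius [m]
SCALE_H = 11e3         # gas scale height [m]
N_SURF = 2.1e23        # surface number density [m^-3] (~6 mbar at ~210 K)

def slant_optical_depth(z_tan, sigma, n0=N_SURF, H=SCALE_H):
    """Optical depth along a straight limb path with tangent height z_tan [m]
    for an exponentially stratified absorber of cross-section sigma [m^2]."""
    r_tan = R_MARS + z_tan
    s = np.linspace(0.0, 600e3, 4000)    # distance from the tangent point [m]
    r = np.sqrt(r_tan**2 + s**2)         # radial distance along the path
    n = n0 * np.exp(-(r - R_MARS) / H)   # local number density
    ds = s[1] - s[0]
    return 2.0 * n.sum() * ds * sigma    # factor 2: both halves of the path

sigma_line = 1e-28  # invented absorption cross-section [m^2]
for z in (10e3, 30e3, 50e3):
    tau = slant_optical_depth(z, sigma_line)
    print(f"tangent height {z/1e3:.0f} km: tau = {tau:.3f}")
```

The integrated path density, and hence the optical depth, falls rapidly with tangent height, which is why the most sensitive upper limits come from the lowest unobscured tangent heights.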
This allows for higher sensitivities to very small abundances of trace gases, in theory down to single parts per trillion for some species \citep{korablev2018,toon2019}, than can be achieved from ground-based observations where the path length through the atmosphere is much shorter. In practice, however, the sensitivity is limited by instrumental noise and artefacts, and the signal-to-noise ratio (S/N) usually decreases at lower altitudes due to the attenuation of the solar signal by the presence of dust, cloud, and gas absorption. The best upper limits for trace gas species are therefore usually obtained as close to the surface as possible under very clear atmospheric conditions, which occur especially near the winter poles and during aphelion, excluding global dust storm periods. ACS MIR is a cross-dispersion echelle spectrometer that covers a spectral range of 2400-4500~cm\textsuperscript{-1} with a spectral resolving power of up to 30,000 \citep{trokhimovskiy2020sashaslines}. The instrument makes use of a steerable secondary reflecting grating that disperses the incoming radiation and separates it into numbered diffraction orders that encompass smaller wavenumber subdivisions. The group of diffraction orders that is selected depends on the position of the secondary grating. In this work we analyse observations obtained using three grating positions: position 9, which covers the main SO\textsubscript{2} absorption bands; position 11, which covers the strongest OCS absorption bands; and position 5, which covers H\textsubscript{2}S. Respectively, these grating positions cover wavenumber ranges of 2380 - 2560~cm\textsuperscript{-1} (diffraction orders 142 - 152), 2680 - 2950~cm\textsuperscript{-1} (160 - 175), and 3780 - 4000~cm\textsuperscript{-1} (226 - 237). In each case, only one grating position is used per measurement sequence and the spatial distribution of these measurements over the time period covered in this analysis is shown in Fig.
\ref{spatialdistribution}. The end product is a 2D spectral image from each tangent height projected onto a detector, which consists of wavenumbers dispersed in the x axis and diffraction orders separated along the y axis (the reader is directed to \citet{korablev2018,trokhimovskiy2020sashaslines,olsen2020co,montmessin2021} for diagrams and further discussion). This image contains approximately 20 - 40 pixel rows per diffraction order depending on the grating position in question, corresponding to an increment of around 200 - 300 m in tangent height per row. The intensity distribution of incoming radiation is distributed asymmetrically over each diffraction order, with the row of maximum intensity, and hence maximum S/N, usually located somewhere between the geometric centre of the diffraction order and the edge of the slit that is over the solar disc. \begin{figure}[h] \includegraphics[bb=20bp 0bp 578bp 248bp,width=1.1\columnwidth]{observations_distribution2} \caption{Spatial and temporal distribution of all the ACS MIR observations covered in this analysis. Squares represent observations made by ACS using grating position 9 (sensitive to SO\protect\textsubscript{2}), diamonds using grating position 5 (H\protect\textsubscript{2}S), and triangles using grating position 11 (OCS). The colour of each symbol represents the time, in units of solar longitude ($L_{s}$) over Martian years (MY) 34 and 35, at which the observation was obtained. The relative concentration of observations made closer to the poles is due to the TGO orbital geometry. Latitude values are planetocentric. 
The background image of Mars is based on topography data from the Mars Orbiter Laser Altimeter (MOLA) instrument on board Mars Global Surveyor \citep{smith2001mola}.} \label{spatialdistribution} \end{figure} We should also note the presence of `doubling', an artefact of unknown origin that causes two images per tangent height, offset horizontally and vertically by a few pixels, to be projected onto the detector surface, resulting in transmission spectra in which each absorption line appears to be divided into two separate local minima \citep{alday2019,olsen2020co}. This doubling effect tends to be strongest in the centre of the illuminated portion of the slit where the signal strength is highest, and its profile changes across each row of a given diffraction order in ways that are still under investigation as of writing. In practice it also acts to reduce the effective spectral resolution of ACS MIR spectra from the nominal value \citep{olsen2021hcl} and hence proves a major source of error in the determination of upper limit values. Calibration was performed according to the procedure detailed in \citet{trokhimovskiy2020sashaslines} and \citet{olsen2020co}. Observations in the measurement sequence when the field-of-view is fully obscured by Mars were used to estimate dark current and thermal background. At the other end of the measurement sequence, observations in the very high atmosphere were used to estimate the solar spectrum, taking into account the drift of the image on the detector over time induced by the thermal background. Stray light was estimated from the signal level between adjacent diffraction orders. A first guess of the pixel-to-wavenumber registration was made by comparing the measured solar spectrum with reference solar lines \citep{hase2010}, which was then further refined as part of the retrieval process as will be described in Sect. \ref{subsec:RISOTTO}.
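In outline, and leaving aside detector image drift and order curvature, the calibration chain above reduces to a background-subtracted ratio against the solar reference. A toy sketch with invented detector counts:

```python
import numpy as np

def calibrate_transmission(raw, dark, solar_ref, stray):
    """Toy version of the occultation calibration chain: subtract the dark/
    thermal background and stray light from both the measured spectrum and
    the top-of-atmosphere solar reference, then take their ratio to obtain
    transmission. Real ACS processing also tracks detector image drift."""
    return (raw - dark - stray) / (solar_ref - dark - stray)

# Invented counts for a 5-pixel spectrum at 80% transmission
dark = np.full(5, 100.0)
stray = np.full(5, 10.0)
solar = np.full(5, 1110.0)
raw = dark + stray + 0.8 * (solar - dark - stray)
print(calibrate_transmission(raw, dark, solar, stray))  # prints [0.8 0.8 0.8 0.8 0.8]
```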
The tangent height of each observation at the edges of the slit was estimated using the TGO SPICE kernels \citep{trokhimovskiy2020sashaslines}, where the row positions of the top and bottom of each slit were established using reference `sun-crossing' measurement sequences in which the slit passes across the Sun perpendicularly to the incident solar radiation path above the top of the atmosphere. Nonetheless, there is still some uncertainty in the tangent height registration because (a) although we assume a linear distribution, the change in tangent height over each intermediate slit row is not well constrained, and (b) the pixel offset between the two doubled images can make it difficult to identify the exact locations of the slit edges on the detector array. This uncertainty is usually of the order of 0.5 - 1.0~km. \begin{comment} These do not currently exist for grating position 11, and so cruder estimates of the locations of the slit edges must be made based on the shape of the intensity profile across the slit. \end{comment} \section{Analysis\label{sec:Method}} \subsection{RISOTTO forward model\label{subsec:RISOTTO}} Upper limits are obtained using the RISOTTO radiative transfer and retrieval pipeline \citep{braude2021soar}. RISOTTO relies on Bayesian optimal estimation, starting from a prior state vector of parameters relating both to the atmospheric quantities that we wish to retrieve from the data (in this case, vertical gas abundance profiles quantified in units of volume mixing ratio, hereon abbreviated to VMR) and a number of instrumental parameters intended to reduce systematic errors due to noise, uncertainties in instrumental calibration or the presence of aerosol.
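As a generic illustration of how Bayesian optimal estimation combines such a prior with the data, a single linearised update step in the standard Rodgers-type textbook form (this is not the actual RISOTTO implementation, and all numbers below are invented) looks like:

```python
import numpy as np

def oe_update(x_a, S_a, y, K, S_e, forward):
    """One linearised Gauss-Newton step of optimal estimation: combine the
    prior state (x_a, covariance S_a) with the measurement (y, noise
    covariance S_e) through the forward-model Jacobian K, returning the
    updated state and its posterior covariance."""
    S_e_inv = np.linalg.inv(S_e)
    S_post = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_post @ K.T @ S_e_inv @ (y - forward(x_a))
    return x_hat, S_post

# Toy linear problem: two measurements of one unknown state element
K = np.array([[1.0], [1.0]])
x_a = np.array([0.0])            # prior state
S_a = np.array([[4.0]])          # loose prior covariance
S_e = np.eye(2) * 0.25           # measurement noise covariance
y = np.array([1.0, 1.2])
x_hat, S_post = oe_update(x_a, S_a, y, K, S_e, lambda x: K @ x)
print(x_hat, S_post)
```

In the real pipeline the state vector mixes atmospheric quantities (VMR profiles) with the instrumental parameters described next, and the forward model is the nonlinear radiative transfer calculation, so this step is iterated.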
We model three of these instrumental parameters specifically: first, a polynomial law that relates the first guess of the pixel-to-wavenumber registration to a more accurate registration; second, an instrument line shape model akin to that described by \citet{alday2019} that approximates the doubling artefact observed in spectra from ACS MIR; and third, the transmission baseline as a function of wavenumber that takes into account broad variations in the shape of the spectrum while simultaneously avoiding problems of so-called overfitting, where any narrow residuals due to individual noise features or poorly modelled gas absorption lines are compensated for by changing the baseline in an unphysical manner. The algorithm then computes a forward model based on these parameters and then iteratively finds the optimal values of both the scientific quantities and the instrumental parameters simultaneously that best fit the observed transmission spectra, together with their associated uncertainties \begin{comment} , which acts to retrieve a vertical profile of a given gaseous species in the atmosphere given a measurement sequence and a single row per wavenumber window, for which the retrieval of multiple separate windows per single diffraction order are permitted, as well as the simultaneous retrieval of multiple windows over separate diffraction orders. The retrieval also acts to iteratively correct for three major instrumental uncertainties: the variable transmission baseline due to the presence of noise and aerosol absorption, the instrument line shape function associated with the aforementioned doubling which is approximated using two Gaussians of finite resolution similar to the parametrisation in \citet{alday2019}, and small uncertainties in the spectral registration. 
\end{comment} In order to model gas absorption along the line of sight both accurately and in a manner that is computationally efficient, spectral absorption cross-section lookup tables are calculated from the HITRAN 2016 database \citep{GORDON2017} over a number of sample pressure and temperature values that reflect the range of values typically found in the Martian atmosphere, and then a quartic function is derived through regression to compute the cross-sections at any given intermediate pressure and temperature value as explained further in \citet{braude2021soar}. Examples of these computed cross-sections for each of the three molecules studied in this article are shown in Fig. \ref{synthetic}a-c, with the corresponding wavenumber selections for the retrievals set to favour stronger molecular lines while avoiding lines that overlap too greatly with lines of other known molecules present in the region. As an illustration of how these species may appear in real ACS MIR spectra given predicted abundances, Fig. \ref{synthetic}d-i show equivalent approximate simulated transmissions for the major species in the spectral ranges present in the three orders, with conditions equivalent to those near the south pole at aphelion around 10 km of altitude, and convolved with a Gaussian instrument line shape function of R~$\sim$~20,000 to roughly reflect the reduction in spectral resolution due to doubling. For both figures we assume an atmosphere of 95.5\% CO\textsubscript{2}, 100~ppmv H\textsubscript{2}O, 10~ppbv of SO\textsubscript{2} and H\textsubscript{2}S, and 1~ppbv of HCl and OCS, corresponding either to simulated detection limits in optimal dust conditions \citep{korablev2018} or existing measurements of known species (e.g. \citealt{fedorova2020,korablev2020hcl}). We can see that the regions in which the strongest SO\textsubscript{2} and H\textsubscript{2}S lines are located are dominated by much stronger lines of CO\textsubscript{2} and H\textsubscript{2}O.
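The lookup-and-regression idea can be sketched as follows, with a smooth synthetic function standing in for the HITRAN 2016 line-by-line cross-sections (the real pipeline fits in both pressure and temperature; the functional form and grid here are invented):

```python
import numpy as np

# Smooth synthetic stand-in for a line-by-line cross-section [cm^2] at one
# wavenumber; the real tables are computed from HITRAN 2016 over a grid of
# sample pressures and temperatures.
def synthetic_sigma(T):
    return 1e-21 * (200.0 / T) ** 1.5

T_grid = np.linspace(130.0, 280.0, 9)   # sample temperatures [K]
sigma_grid = synthetic_sigma(T_grid)

# Quartic regression through the tabulated points, fitted in a scaled
# temperature variable for numerical stability (one such fit per pressure
# node and wavenumber in the real pipeline)
t_grid = (T_grid - 205.0) / 75.0
coeffs = np.polyfit(t_grid, sigma_grid, deg=4)

# Evaluating the polynomial replaces a full line-by-line recomputation
T_query = 201.3
approx = np.polyval(coeffs, (T_query - 205.0) / 75.0)
exact = synthetic_sigma(T_query)
print(f"relative error of quartic fit: {abs(approx - exact) / exact:.1e}")
```

For a smooth dependence on temperature, the quartic reproduces the tabulated values to well below the other retrieval uncertainties at a tiny fraction of the computational cost.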
By contrast, the spectral region in which OCS is found is usually much clearer, with HCl absorption lines easily resolvable and only found during certain seasons. \begin{figure*} \includegraphics[width=1\textwidth]{simulated_transmissions3} \caption{Computed cross-section values and simulated transmission spectra, for illustrative reference pressure and temperature values of 1 mbar and 200 K, respectively. Displayed wavenumber ranges reflect the coverage of the analysed diffraction orders, with the light yellow highlighted regions showing the spectral intervals over which detection limit retrievals were conducted, maximising absorption from the trace species in question while minimising overlapping absorptions from other known species. \emph{Panels a-c: }Cross-section values for \emph{(a) }SO\protect\textsubscript{2} in diffraction order 148, \emph{(b)} OCS in diffraction orders 173 and 174, and \emph{(c)} H\protect\textsubscript{2}S in diffraction order 228. The grey cross-hatched regions in panels \emph{(b), (f), }and \emph{(g) }show the wavenumber range overlap between diffraction order 173 on the left and diffraction order 174 on the right. \emph{Panels d-i:} Estimated transmission spectra given the aforementioned pressure and temperature conditions, convolved with a Gaussian instrument function of resolution R $\sim$ 20,000. For each diffraction order, we plot the contributions of the trace species to the total transmission separately from the other interfering species, for which the contribution is several orders of magnitude larger.} \label{synthetic} \end{figure*} At the wavenumber ranges studied in this analysis, the CO\textsubscript{2} absorption lines are not usually strong enough to independently constrain gaseous abundances, temperature and pressure all simultaneously. This can be even further exacerbated by uncertainties induced by systematic errors in the instrument line shape due to doubling (e.g. \citealp{alday2021co2}). 
Nonetheless, retrieved gas abundances are heavily degenerate with temperature and pressure and so completely arbitrary values cannot be chosen. \emph{A priori }temperature and pressure data are therefore usually sourced from fitting stronger CO\textsubscript{2} bands in the near-infrared (NIR) channel of ACS data obtained during the same measurement sequence \citep{fedorova2020}. For occasional ACS measurement sequences, however, these observations are not available, in which case we use estimates from the Laboratoire de M\'et\'eorologie Dynamique general circulation model (LMD GCM) solar occultation database \citep{forget1999mcd,millour2018mcd,forget2021lmdgcm}, which computes vertical profiles of a number of atmospheric parameters for the given time and location of an ACS measurement sequence using a GCM. For retrievals where no CO\textsubscript{2} bands are to be fitted, such as with OCS, it is usually adequate to leave the temperature-pressure profile fixed in the retrieval. For retrievals of SO\textsubscript{2} and H\textsubscript{2}S where they are to be fitted, the LMD GCM estimates are not always close enough to the true value to provide a good fit to the CO\textsubscript{2} absorption bands, and so minor adjustments in the temperature-pressure profile do have to be made in the retrieval. To decouple the effects of temperature and pressure, we keep a reference pressure value at a given altitude fixed to that from the LMD solar occultation database, then use the CO\textsubscript{2} bands to retrieve a vertical temperature profile directly from the data and thereby derive a vertical pressure profile by assuming hydrostatic equilibrium (e.g. \citealp{quemerais2006,alday2019,alday2021fractionation,montmessin2021}). 
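The pressure reconstruction step can be sketched numerically: fix a reference pressure and integrate hydrostatic equilibrium over the retrieved temperature profile. The sketch below assumes a pure-CO\textsubscript{2} atmosphere, constant gravity, and an invented isothermal profile:

```python
import numpy as np

K_B = 1.380649e-23                    # Boltzmann constant [J/K]
G_MARS = 3.71                         # surface gravity [m/s^2], held constant
M_CO2 = 44.01e-3 / 6.02214076e23      # CO2 molecular mass [kg]

def pressure_from_temperature(z, T, p_ref, z_ref):
    """Integrate dp/dz = -p g m / (k_B T) upward and downward from a fixed
    reference pressure p_ref at altitude z_ref (hydrostatic equilibrium for a
    CO2-dominated atmosphere). z [m] and T [K] are matching 1D arrays."""
    log_p = np.zeros_like(z, dtype=float)
    i_ref = int(np.argmin(np.abs(z - z_ref)))
    integrand = -G_MARS * M_CO2 / (K_B * T)   # d(ln p)/dz
    for i in range(i_ref + 1, len(z)):        # trapezoidal integral upward
        log_p[i] = log_p[i - 1] + 0.5 * (integrand[i] + integrand[i - 1]) * (z[i] - z[i - 1])
    for i in range(i_ref - 1, -1, -1):        # and downward
        log_p[i] = log_p[i + 1] - 0.5 * (integrand[i] + integrand[i + 1]) * (z[i + 1] - z[i])
    return p_ref * np.exp(log_p)

z = np.linspace(0.0, 60e3, 61)
T = np.full_like(z, 200.0)            # illustrative isothermal profile
p = pressure_from_temperature(z, T, p_ref=610.0, z_ref=0.0)
print(f"scale height ~ {1e-3 * K_B * 200.0 / (M_CO2 * G_MARS):.1f} km")
```

With a retrieved (non-isothermal) temperature profile the same integration yields the pressure profile used in the forward model, with only the single reference pressure taken from the LMD solar occultation database.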
\subsection{Retrieval procedure} \begin{comment} When we attempted to apply this method to retrievals of SO\textsubscript{2} however, we found that the degeneracy with the baseline level due to high continuum absorption of SO\textsubscript{2}, together with the presence of the most prominent SO\textsubscript{2} signatures near the edge of the detector where the SNR and uncertainties in calibration are highest, resulted in the retrieval of unrealistically low upper limits as well as a large number of false detections. This therefore forced us to use a different, more empirical method for SO\textsubscript{2}. We discuss these two methods in turn in the next two subsections. \end{comment} \begin{comment} figure with retrieval of profile given different injections of the trace species abundance \end{comment} \subsubsection{H\protect\textsubscript{2}S and OCS\label{subsec:HS-and-OCS}} A number of approaches can be used to derive the upper limit of detection of a given species, each with their advantages and disadvantages. The most common approach is to define the upper limit as the estimate of the uncertainty on a retrieved VMR given a true VMR equal to 0; if the retrieved VMR was greater than this uncertainty value, it would be deemed a detection as opposed to an upper limit. This uncertainty can be estimated either by quantifying each of the sources of error in turn (e.g. \citealp{aoki2018}) or by deriving a first estimate of the uncertainty from a retrieval code and then adjusting the uncertainties \emph{post hoc }according to statistical criteria (e.g. \citealp{montmessin2021,knutsen2021}). This method is expanded on by \citet{olsen2021ph3} by performing an independent retrieval of the vertical gas profiles using multiple pixel rows on the ACS MIR detector array, and then calculating an upper limit value from the standard error on the weighted average of the vertical profiles from each of these retrievals combined.
This has the added benefit of increasing the statistical significance of a measurement through repeated observation and thereby deriving a lower upper limit \begin{comment} , however we find in our own retrievals that it is also sensitive to intrinsic biases imposed either by the prior state vector or by local artefacts in the spectrum \end{comment}. For retrievals of upper limits of OCS and H\textsubscript{2}S, we use a modified version of the \citet{olsen2021ph3} method, but with some further amendments to minimise uncertainties in the estimated spectral noise level, as well as to reduce sensitivity due to intrinsic biases imposed either by the prior state vector or by local artefacts in the spectrum. The sigma detection value at each altitude is determined by the ratio of the weighted mean abundance value $\mu_{j}^{*}$ to the weighted standard deviation $\varsigma_{j}^{*}$ as found by the procedure detailed in Appendix \ref{sec:Detailed-procedure-of}, that is to say, where $\mu_{j}^{*}/\varsigma_{j}^{*}=1$ indicates a positive 1$\sigma$ detection. Usually, a 1$\sigma$ or 2$\sigma$ detection indicates overfitting of local artefacts or noise features as opposed to a genuine positive detection. We therefore treat anything below a 3$\sigma$ detection as insignificant and define our upper limit at each altitude by adding total sources of both systematic and random error in quadrature, that is to say, the upper limit is equal to $\sqrt{\mu_{j}^{*2}+\varsigma_{j}^{*2}}$ smoothed using a narrow Gaussian filter with respect to altitude, as shown in Fig. \ref{methodology_visual_example}. \begin{figure}[h] \includegraphics[width=1\columnwidth]{ocs_detections_example_review1} \caption{Illustrative example of the upper limit derivation methodology for OCS and H\protect\textsubscript{2}S. Grey lines indicate the individual $2N$ vertical profiles of OCS retrieved according to \emph{(solid) }step 3 and \emph{(dotted)} step 4, for each row on the detector array.
The solid blue line shows the weighted mean abundance profile, $\mu_{j}^{*}$, derived from the individual retrievals, with the standard deviation, $\varsigma_{j}^{*}$, shown in the shaded blue region. The solid orange line shows the profile of $\sqrt{\mu_{j}^{*2}+\varsigma_{j}^{*2}}$ smoothed over altitude. The red shaded region shows the altitude range for which a 1$\sigma$ detection was determined, i.e. where the value of $\varsigma_{j}^{*}$ is lower than the value of $\mu_{j}^{*}$.} \label{methodology_visual_example} \end{figure} \subsubsection{SO\protect\textsubscript{2}\label{subsec:SO}} Retrievals of SO\textsubscript{2} are complicated by its dense line structure, which is difficult to fully resolve at the given spectral resolution, making them difficult to decouple from uncertainties in the baseline level induced by noise, calibration errors, and aerosol extinction. Failure to take into account uncertainties in the baseline level will therefore result in upper limits of SO\textsubscript{2} that are too low. While RISOTTO should ordinarily be able to handle some degeneracy with low-frequency baseline variations \citep{braude2021soar}, which would avoid the more complicated fitting procedure of \citet{belyaev2008,belyaev2012} needed to fully separate out the contribution of aerosol and SO\textsubscript{2} to the continuum absorption, the situation is complicated by the presence of artefacts near the edge of the detector where the degeneracy between low-frequency baseline variations and SO\textsubscript{2} absorption can most easily be broken due to the presence of prominent SO\textsubscript{2} lines. The retrieval code is therefore prone to overfitting artefacts, resulting in false detections of highly negative abundances of SO\textsubscript{2}.
One alternative method is to start with a forward model in which the species is absent and gradually inject incrementally larger quantities of the species into the forward model until the change in the spectral fit is deemed to increase above a given threshold (e.g. \citealp{teanby2009,korablev2020hcl}). In practice this can become slow and unwieldy in the case of solar occultations where there are multiple spectra to be fit simultaneously, each with their own independent noise profile and sensitivity to the given species. In addition, it also raises the question of how the threshold should be defined quantitatively, especially if the S/N is uncertain. \citet{teanby2009} assessed the significance of a detection according to the mean squared difference between the observed spectrum $y_{obs}(\nu)$ as a function of wavenumber, $\nu$, and the modelled spectrum $y_{mod}(\nu,x)$ given an `injected' gas volume mixing ratio, $x$, weighted according to the spectral uncertainty $\sigma(\nu)$: \begin{alignat}{1} \Delta\chi^{2} & =\chi^{2}(x)-\chi^{2}(0)\\ & =\sum_{i=1}^{N_{\nu}}\left(\left(\frac{y_{obs}(\nu_{i})-y_{mod}(\nu_{i},x)}{\sigma(\nu_{i})}\right)^{2}-\left(\frac{y_{obs}(\nu_{i})-y_{mod}(\nu_{i},0)}{\sigma(\nu_{i})}\right)^{2}\right), \end{alignat} \noindent where $N_{\nu}$ is the number of individual spectral points in a given observation. The residual errors on the fit to the spectrum are assumed to follow a double exponential distribution \citep{press1992}, neglecting systematic errors: \begin{equation} P(y_{obs}(\nu_{i})-y_{mod}(\nu_{i},0))\sim\exp\left(-\left|\frac{y_{obs}(\nu_{i})-y_{mod}(\nu_{i},0)}{\sigma(\nu_{i})}\right|\right)\label{eq:proberror} .\end{equation} \citet{teanby2009,teanby2019} then defined upper limits according to the confidence interval around the mean of the probability distribution described by Eq.
\ref{eq:proberror}, so that values of gas abundance $x$ that give a value of $\Delta\chi^{2}=n^{2}$ should represent an \emph{n}-sigma upper limit, while a gas abundance that results in $\Delta\chi^{2}=-n^{2}$ should analogously represent a positive \emph{n}-sigma detection. However, this method was used for single isolated lines where the detection limit is more or less independent of the size of the spectral window. This is not the case for SO\textsubscript{2}, which has a dense line structure that affects the shape of the baseline, and where there is substantial uncertainty on the values of $\sigma(\nu_{i})$ induced by various sources of systematic error, notably from the doubled instrument line shape. We find in our own data that values of $\Delta\chi^{2}=1$ give detection limits that are far too optimistic even when correcting for spectral sampling as in \citet{teanby2019}. Instead, we find that taking the standard deviation of the double exponential distribution provides more realistic detection limits, which would therefore correspond to \emph{n}-sigma upper limits and detections of, respectively, $\Delta\chi^{2}=\pm\sqrt{2N_{\nu}}n^{2}$. Depending on the occultation in question, $\sqrt{2N_{\nu}}$ is usually equal to around 25 for the wavenumber range taken into account in the retrieval. We first performed an initial retrieval where SO\textsubscript{2} is not present in the forward model, only retrieving the temperature profile and the three instrumental parameters as previously described, to give a spectral fit $y_{mod}(\nu,0)$. The values of $\sigma(\nu)$ are then estimated by taking a moving average of the difference between $y_{obs}(\nu)$ and $y_{mod}(\nu,0)$, thereby providing a first guess of spectral uncertainties due to systematics in the spectra and forward modelling error, and hence allowing a preliminary value of $\chi^{2}(0)$ to be calculated. 
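The $\Delta\chi^{2}$ bookkeeping described above can be condensed into a few lines. The sketch below uses synthetic spectra and an invented Gaussian absorption line, and implements the $\sqrt{2N_{\nu}}\,n^{2}$ threshold convention adopted in the text:

```python
import numpy as np

def delta_chi2(y_obs, y_mod_x, y_mod_0, sigma):
    """Chi-squared difference between the model with an injected gas profile
    (y_mod_x) and the gas-free model (y_mod_0), each weighted by the spectral
    uncertainty sigma."""
    chi2_x = np.sum(((y_obs - y_mod_x) / sigma) ** 2)
    chi2_0 = np.sum(((y_obs - y_mod_0) / sigma) ** 2)
    return chi2_x - chi2_0

def n_sigma_threshold(n_points, n=1):
    """sqrt(2 N_nu) * n^2 threshold used here in place of the n^2 criterion."""
    return np.sqrt(2 * n_points) * n ** 2

# Synthetic noise-only spectrum over an invented wavenumber grid
rng = np.random.default_rng(0)
nu = np.linspace(2481.0, 2492.0, 300)
sigma = np.full_like(nu, 2e-3)
y_mod_0 = np.ones_like(nu)
y_obs = y_mod_0 + rng.normal(0.0, 2e-3, nu.size)

# "Inject" a hypothetical absorption line and compare against the threshold
line = 5e-3 * np.exp(-0.5 * ((nu - 2487.0) / 0.2) ** 2)
y_mod_x = y_mod_0 - line
print(delta_chi2(y_obs, y_mod_x, y_mod_0, sigma), n_sigma_threshold(nu.size))
```

Repeating the $\Delta\chi^{2}$ evaluation over a ladder of injected profiles, as in the procedure described next, then locates the abundance at which the threshold is crossed.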
Fixed vertical profiles of SO\textsubscript{2} abundance $x$ were then added to the forward model, and a new retrieval performed where the baseline is allowed to vary but the remaining parameters of the state vector are fixed to those retrieved from $y_{mod}(\nu,0)$. If this results in a value of $\Delta\chi^{2}$ at a given tangent height that is negative, the values of $\sigma(\nu)$ are further refined by taking a moving average of the difference between $y_{obs}(\nu)$ and $y_{mod}(\nu,x)$, so that the contribution of forward modelling error to the spectral uncertainty is minimised. Once enough different vertical profiles of SO\textsubscript{2} were modelled to sufficiently sample the $\Delta\chi^{2}$ parameter space, the approximate 1$\sigma$ level at each altitude was found through progressive quadratic interpolation to a volume mixing ratio profile where $\Delta\chi^{2}$ is as close to $\sqrt{2N_{\nu}}$ as possible for each altitude, or conversely where $\Delta\chi^{2}$ is as low as possible. Due to the presence of systematic errors, we do not attempt to reduce upper limits by adding the contribution of multiple rows of each diffraction order as with H\textsubscript{2}S or OCS, instead only analysing a single row close to the centre of the diffraction order that receives the most input radiance and hence has the highest S/N. \section{Results} \subsection{SO\protect\textsubscript{2}} Given that volcanic emissions of SO\textsubscript{2} are predicted to dwarf those of any other sulphur species and have the longest photochemical lifetime \begin{comment} cite \end{comment}, SO\textsubscript{2} is the tracer that has attracted the most attention in the literature out of all the three gases analysed here. Ground-based observations of SO\textsubscript{2} in the past have focussed on two main spectral regions.
In the sub-millimetre there is a single line present at 346~GHz, from which disc-integrated upper limits of 2~ppbv were derived by \citet{nakagawa2009} and upper limits of just over 1~ppbv by \citet{khayat2015,khayat2017}. A number of rotational-vibrational lines are also present in the thermal infrared between 1350 - 1375~cm\textsuperscript{-1} \citep{encrenaz2004,encrenaz2011so2,krasnopolsky2005,krasnopolsky2012upperlims}, with upper limits of 0.3~ppbv independently confirmed by both \citet{encrenaz2011so2} and \citet{krasnopolsky2012upperlims}. By contrast, the strongest SO\textsubscript{2} absorption band in the ACS MIR wavenumber range, centred around 2500~cm\textsuperscript{-1}, is still weaker than the absorption bands found in these two regions. In addition, the SO\textsubscript{2} absorption band has a very dense line structure, which at ACS MIR resolution results in only a small number of isolated lines that are sufficiently prominent to decouple the contributions of SO\textsubscript{2} absorption to the spectrum from the uncertainty in the transmission baseline. This is further complicated by the fact that several lines overlap strongly with the absorption bands of three separate isotopologues of CO\textsubscript{2}, several of which are also difficult to resolve from one another and contribute to the baseline uncertainty. The SO\textsubscript{2} line that is best resolved from the baseline and CO\textsubscript{2} absorption is present at 2491.5~cm\textsuperscript{-1}, but this is also located close to the edge of the diffraction order where the S/N starts to decrease.
We performed retrievals on a relatively broad spectral range of 2481 - 2492~cm\textsuperscript{-1}, which allowed the strongest lines of \textsuperscript{12}C\textsuperscript{16}O\textsuperscript{18}O to be fit in order to constrain both the vertical temperature profile and the doubling line shape, as well as minimising the probability of local noise features at 2491.5~cm\textsuperscript{-1} being overfit with SO\textsubscript{2} absorption, while avoiding lower wavenumbers where the contribution of \textsuperscript{12}C\textsuperscript{16}O\textsuperscript{16}O and \textsuperscript{12}C\textsuperscript{16}O\textsuperscript{17}O to the uncertainty in the transmission baseline starts to become significant. In addition, we only fit spectra up to a tangent height of 50~km above the Martian geoid (usually referred to as the `areoid') as noise features start to dominate in spectra above this level, which can occasionally be confused with genuine SO\textsubscript{2} absorption by the retrieval code. This is justified as SO\textsubscript{2} emitted from the surface would only be detectable at very high abundances above 50~km, as we show later in this section. We retrieved estimates of upper limits of SO\textsubscript{2} from all measurement sequences obtained using grating position 9 for which the observed altitude of aerosol saturation was below 50~km, according to the procedure previously outlined in Sect. \ref{subsec:SO}. In total, 190 occultations satisfied this criterion, covering a time period of L\textsubscript{s} = 165 - 280$\text{\textdegree}$ of MY 34 and L\textsubscript{s} = 140 - 350$\text{\textdegree}$ of MY 35, in both cases equivalent to the time around perihelion and approaching early aphelion.
Occasionally, vertical correlations between spectra, together with the breakdown in the approximation of a quadratic relationship between volume mixing ratio and $\Delta\chi^{2}$ due to baseline degeneracy, can result in failure to converge to a $\Delta\chi^{2}\approx\sqrt{2N_{\nu}}$ solution for certain tangent heights. The altitude ranges at which these poor $\Delta\chi^{2}$ solutions are found are therefore removed from the upper limit profiles before they are smoothed. In Fig. \ref{so2scatter}, we plot the smoothed vertical upper limit profiles retrieved from all 190 occultations as a function of altitude. We find in most cases that we can derive upper limits down to around 30 - 40~ppbv, usually around 20~km above the areoid, with the sensitivity decreasing exponentially with altitude until only measurements of the order of 1~ppmv of SO\textsubscript{2} are retrievable in the lower mesosphere. Below the 20~km level, the presence of aerosols usually lowers the S/N to the point where the CO\textsubscript{2} lines become heavily distorted, particularly below around 10 - 15~km, which makes accurate fitting of SO\textsubscript{2} very difficult even when taking temperature variations and spectral doubling into account. We find no significant detections of SO\textsubscript{2} to greater than 1$\sigma$ confidence. \begin{figure}[h] \includegraphics[width=1\columnwidth]{SO2_scatter2_review1} \caption{1$\sigma$ upper limit values of SO\protect\textsubscript{2} obtained as a function of altitude for each position 9 measurement sequence. Yellower colours indicate greater densities of upper limit values for a given altitude.} \label{so2scatter} \end{figure} In Fig. \ref{so2dist} we plot the lowest upper limits retrieved per occultation as a function of altitude and latitude. 
For reference, we mark periods of increased dust storm activity during perihelion, which usually consists of a large dust storm event that affects the general latitude range between 60$\text{\textdegree}$ S and 40$\text{\textdegree}$ N and peaks around $L_{s}=220\text{\textdegree}-240\text{\textdegree}$, followed by a dip in dust activity just after solstice and a smaller regional dust storm event around $L_{s}=320\text{\textdegree}$ that mostly affects southern mid-latitudes \citep{wangrichardson2015,montabone2015}. Dust activity was particularly intense in MY 34 compared with MY 35 \citep{montabone2020,olsen2021hcl}, and although the poles remained relatively clear of dust even during the MY 34 global dust storm event we were only able to get upper limits down to around 50~ppbv in northern polar regions. By contrast, outside the perihelion dust storm events, the atmosphere could be probed deep enough to attain sufficient sensitivity to regularly reach 1$\sigma$ upper limits of 20~ppbv at around 10~km above the areoid, close to both the northern and southern polar regions. \begin{figure}[h] \includegraphics[width=1\columnwidth]{so2_distribution_season2_review1} \caption{Change in the distribution of the lowest SO\protect\textsubscript{2} upper limit values derived from each ACS occultation sequence as a function of season, with the colour of each circle representing the approximate latitude value where the measurement was obtained. Brown shaded regions indicate major dust storm events \citep{montabone2020,olsen2021hcl}.} \label{so2dist} \end{figure} \begin{figure*}[t] \includegraphics[width=1\textwidth]{so2_fits2} \caption{Example fit to a position 9 spectrum for which the lowest upper limits were retrieved from the entire position 9 dataset.
In blue is shown the fit to the measured spectrum taking only CO\protect\textsubscript{2} absorption and instrumental parameters into account, with the synthetic spectrum showing the additional contribution of 60 ppbv of SO\protect\textsubscript{2}, corresponding to the nominal 3$\sigma$ level, superimposed in magenta. Estimated spectral uncertainty is shaded in grey. All spectra are normalised to the retrieved transmission baseline at unity for clarity. \emph{Panels a-b: }Spectral fit when the transmission baseline is retrieved only for the initial fitting of the CO\protect\textsubscript{2} lines with zero SO\protect\textsubscript{2} abundance. \emph{Panels c-d: }Spectral fit when the transmission baseline is re-retrieved following the addition of 60~ppbv of SO\protect\textsubscript{2} into the forward model.} \label{so2fits} \end{figure*} The spectrum where we were able to find the best upper limit value of 20~ppbv is shown in Fig. \ref{so2fits}, where we compare the fit using a fixed abundance of SO\textsubscript{2} in the forward model that is equal to 3 times the derived 1$\sigma$ upper limit, to the corresponding fit where no SO\textsubscript{2} is taken into account in the forward model. The change in the aforementioned doubling effect over the breadth of the diffraction order is clearly seen in Fig. \ref{so2fits}, with each CO\textsubscript{2} absorption line exhibiting two local absorption minima where the right minimum progressively dominates over the left minimum as one moves towards the right edge of the detector. The uncertainty induced by this doubling effect means that the transmission baseline is difficult to derive \emph{a priori} without knowledge of SO\textsubscript{2} abundance, and this is shown clearly in Panel a of Fig.
\ref{so2fits} where fixing the baseline ensures that given abundances of SO\textsubscript{2} result in SO\textsubscript{2} absorption lines that penetrate further out of the noise level than if the baseline is allowed to vary as in Panel c, and hence results in deceptively low SO\textsubscript{2} upper limits. A better-constrained transmission baseline would result in upper limits that are approximately halved, which is also reflected in quoted theoretical upper limits in the literature of 7~ppbv assuming low dust conditions \citep{korablev2018}. These are still far higher than the aforementioned upper limits of 0.3 - 2 ppbv retrieved from stronger absorption bands in the thermal infrared and the sub-millimetre. Since the uncertainties in the retrievals of SO\textsubscript{2} are so heavily dominated by sources of systematic error, we cannot analyse multiple rows of the detector array in order to reduce these values further. SO\textsubscript{2} is predicted to be well mixed in the atmosphere below 30 - 50~km altitude after approximately six months, depending on the volume of outgassing \citep{krasnopolsky1993,krasnopolsky1995,wong2003}, well below the predicted photochemical lifetime of 2 years \citep{nair1994,krasnopolsky1995}. We therefore find that we can still probe to deep enough altitudes for any large surface eruption of SO\textsubscript{2} to be monitored from ACS MIR data within months of its occurrence and be able to track its origin. \subsection{H\protect\textsubscript{2}S} Despite lower predicted outgassing rates of H\textsubscript{2}S compared with SO\textsubscript{2}, and hence a lower likelihood of detectability, the concurrent detection of H\textsubscript{2}S is important for three main reasons. 
Firstly, it is a superior tracer of the location of outgassing from the surface due to its shorter photochemical lifetime, with quoted values ranging from the order of a week \citep{wong2003,wong2005} to 3 months \citep{summers2002}, well below the timescales for global mixing in the atmosphere of Mars. Secondly, the H\textsubscript{2}S/SO\textsubscript{2} ratio provides additional information on the temperature and water content of an outgassing event, as it is governed by a redox reaction in which a lower temperature and higher water content favours the production of H\textsubscript{2}S from SO\textsubscript{2} (\citealt{oppenheimer2011}; and references therein). Finally, H\textsubscript{2}S is, in itself, a gas that is produced biotically on Earth through the metabolism of sulphate ions in acidic environments \citep{bertaux2007}, which could hypothetically be produced by micro-organisms living in sulphate deposits on Mars. Even the strongest H\textsubscript{2}S lines in the ACS MIR wavenumber range are relatively weak, with maximum HITRAN 2016 line strengths of $\sim$1.6 x 10\textsuperscript{-21}~cm\textsuperscript{-1}/(molecule~cm\textsuperscript{-2}) at 296~K, compared with equivalent lines found in the thermal infrared at 1293~cm\textsuperscript{-1} ($\sim$3.3 x 10\textsuperscript{-20}~cm\textsuperscript{-1}/(molecule~cm\textsuperscript{-2}); \citealp{maguire1977,encrenaz2004}) and especially with lines in the sub-millimetre ($\sim$4.6 x 10\textsuperscript{-20}~cm\textsuperscript{2}~GHz; \citealt{pickett1998,encrenaz1991,khayat2015}).
In addition, the spectral region in which most H\textsubscript{2}S absorption lines are found is dominated by the absorption of several different isotopologues of CO\textsubscript{2} and H\textsubscript{2}O \citep{alday2019}, with especially strong absorption by H\textsubscript{2}\textsuperscript{16}O. We focus here on one particular spectral region between 3827 - 3833~cm\textsuperscript{-1} around some of the strongest H\textsubscript{2}S bands in the ACS MIR wavenumber range, located in diffraction order 228 of grating position 5, as well as an additional spectral window around a single strong line at 3839.2~cm\textsuperscript{-1} located near the edge of the diffraction order where the S/N is lowest. Temporal coverage for this spectral range is restricted to a relatively small number of observations during the middle (L\textsubscript{s} = 164 - 218$\text{\textdegree}$) and end (L\textsubscript{s} = 315 - 354$\text{\textdegree}$) of MY 34, the former providing reasonable spatial and temporal overlap with concurrent observations of SO\textsubscript{2}. In Fig. \ref{h2sdist} we show the retrieved H\textsubscript{2}S upper limit profiles from all 86 processed position 5 observations. Although the sample size is small, we clearly see that upper limit values down to at least 30~ppbv can be achieved relatively consistently around 20~km altitude. In the best case we were able to achieve an upper limit down to 15~ppbv at 4~km altitude, which we show in Fig. \ref{h2sfits}. From our spectral fits it is clear that such an upper limit is mostly imposed by the 3830.7~cm\textsuperscript{-1} and 3831~cm\textsuperscript{-1} H\textsubscript{2}S absorption lines, the latter of which is the only one fully resolvable from neighbouring CO\textsubscript{2} and H\textsubscript{2}O lines and less affected by the uncertainty on the ACS MIR doubling instrument function. 
While this 15~ppbv value is in line with expected values from \citet{korablev2018}, who quote predicted upper limits of 17~ppbv for ACS MIR in low dust conditions, it is still far higher than the 1.5~ppbv obtained by \citet{khayat2015} in the sub-millimetre. Experimental evidence from the Curiosity rover found that heating samples of regolith to very high temperatures would result in the emission of both SO\textsubscript{2} and H\textsubscript{2}S at an H\textsubscript{2}S/SO\textsubscript{2} ratio of approximately 1 in 200 \citep{leshin2013}. This indicates that, given our constraints on SO\textsubscript{2} abundance, passive outgassing is unlikely to lead to volume mixing ratios of H\textsubscript{2}S in the troposphere above around 0.4~ppbv, far below the detection capabilities of ACS MIR. \begin{figure}[H] \includegraphics[bb=0bp 0bp 461bp 346bp,width=1\columnwidth]{H2S_scatter_review1}\caption{Retrieved 1$\sigma$ upper limits of H\protect\textsubscript{2}S from the position 5 dataset as a function of altitude. Colours indicate the density of retrieved values at each altitude.} \label{h2sdist} \end{figure} \begin{figure*} \includegraphics[width=0.92\textwidth]{h2s_fits2_review1} \caption{Upper limit of H\protect\textsubscript{2}S from position 5 spectra. \emph{ Panel a: }Fit to the spectrum with the lowest retrieved H\protect\textsubscript{2}S upper limit in the position 5 dataset, with a 3$\sigma$ value equivalent to 45 ppbv. In grey are the observed spectra at each row of the detector array normalised to the retrieved unity baseline (black, dashed): the darker the grey colour, the greater the weighting used for the calculation of $\mu^{*}$ and $\varsigma^{*}$. 
The red and blue lines respectively show the contributions of each of the interfering gas species (in this case, CO\protect\textsubscript{2} and H\protect\textsubscript{2}O) to the fit to the row of the detector array with the greatest S/N, while the brown line shows the synthetic spectrum following the addition of 3$\sigma$ of H\protect\textsubscript{2}S. \emph{Panel b: }Residuals of each of the gas contributions compared with the observed spectrum with the greatest S/N, where the shaded grey region represents the estimated noise level.} \label{h2sfits} \end{figure*} \subsection{OCS} \noindent Although predicted outgassing of OCS is much lower than for SO\textsubscript{2} and H\textsubscript{2}S, with the OCS/SO\textsubscript{2} ratio from terrestrial volcanic emission observed to be of the order of 10\textsuperscript{-4} - 10\textsuperscript{-2} (e.g. \citealp{sawyer2008,oppenheimerkyle2008}), it could nonetheless be produced indirectly from volcanic SO\textsubscript{2} in the high atmosphere through reaction with carbon monoxide \citep{hongfegley1997}. Older systematic searches for OCS from Mariner 9 data \citep{maguire1977} and ground-based millimetre observations \citep{encrenaz1991} established upper limits in the Martian atmosphere of 70~ppbv. More recently, \citet{khayat2017} stated upper limits of 1.1~ppbv from ground-based observations of a mid-infrared band centred at 2925~cm\textsuperscript{-1}, which was partially obscured by telluric absorption. This band is also the strongest of two OCS absorption bands that is covered by ACS MIR, specifically by grating position 11 (the other, centred around 3460 - 3500~cm\textsuperscript{-1} and covered by grating position 3, is weaker and contaminated by strong CO\textsubscript{2} absorption). 
It also has by far the best spatial and temporal coverage of the three grating positions shown here, providing almost continuous coverage from the start of the ACS science phase in April 2018 (MY 34, L\textsubscript{s} = 163\textdegree) to January 2021 (MY 35, L\textsubscript{s} = 355\textdegree). Unlike with SO\textsubscript{2} and H\textsubscript{2}S, there is a relative lack of other gases that absorb in this region and would further complicate the retrieval of OCS, with the exception of HCl, which is present only during certain seasons and which is easily isolated from the OCS absorption lines. Conversely, the lack of gases that absorb in this region can also make it difficult to fit the instrument line shape associated with spectral doubling, especially for occultations where there is an absence of HCl. This can also occasionally result in overfitting of OCS to local noise features present in the spectrum, resulting in spurious detections usually of the order of 2$\sigma$. We show this in Fig. \ref{ocsnormaldist}, where the retrieved ratios of $\mu^{*}/\varsigma^{*}$ for all position 11 measurement sequences and altitudes should statistically approximate a Gaussian distribution centred around $\mu^{*}/\varsigma^{*}\approx0$ and with unit standard deviation, but we instead find some positive kurtosis due to the presence of false 2$\sigma$ detections. Nonetheless, the effect of overfitting noise is somewhat mitigated through averaging over adjacent rows on the detector array, and we find no evidence of OCS in our observations above a 3$\sigma$ confidence level. \begin{figure}[h] \includegraphics[width=1\columnwidth]{ocs_normaldist} \caption{Statistical distribution of the retrieved values of $\mu^{*}$ and $\varsigma^{*}$ for OCS from all position 11 spectra in the analysed dataset.
\emph{Panel a:} Scatter plot of $\mu^{*}$ against $\varsigma^{*}$, with the dashed black line showing the 1$\sigma$ detection threshold and the dashed red line showing the 2$\sigma$ detection threshold. \emph{Panel b: }Normalised probability distribution of derived $\sigma$ detection values, with the solid black line showing the Gaussian best fit to the probability distribution. While we find little skew in our upper limit retrievals, there is some kurtosis due to occasional overfitting of fixed pattern noise.} \label{ocsnormaldist} \end{figure} As with SO\textsubscript{2}, the greatest density of retrieved upper limits can be found around 20~km of altitude as shown in Fig. \ref{ocsscatter}, where values of 2-3~ppbv can regularly be attained even in regions of high dust concentration. Unlike with SO\textsubscript{2}, however, observations at lower altitudes are much less limited by the presence of systematic uncertainties or aerosols, which allows for smaller upper limit values to be achieved much closer to the surface. In Fig. \ref{ocsdists} we show that the lowest upper limits can be obtained at the winter poles down to 0.4~ppbv, where the atmosphere can be probed by ACS MIR down to around 2~km above the areoid, and especially the southern hemisphere during winter solstice where global or regional dust activity is minimal, for which the best example is shown in Fig. \ref{ocsfits}. By contrast, the perihelion dust season usually limits sensitivity to above 1~ppbv, in line with previous upper limit estimates from \citet{khayat2017} that were obtained through co-adding of both aphelion and perihelion observations. This is also an improvement on theoretical upper limit estimates previously predicted for ACS MIR by \citet{korablev2018}, where the expected performance of the instrument only predicted values down to 2~ppbv even in clear atmospheric conditions.
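The null-distribution check described above (detection significances $\mu^{*}/\varsigma^{*}$ should be standard-normal draws in the absence of real OCS absorption, and heavy tails indicate overfitting of noise) can be sketched as follows; the simulated significances and the function name are our own stand-ins, not ACS retrieval output.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(z):
    """Sample excess kurtosis: ~0 for a Gaussian, > 0 for the heavy
    tails produced by occasional overfitting of noise features."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    var = ((z - m) ** 2).mean()
    return ((z - m) ** 4).mean() / var**2 - 3.0

# Well-behaved retrievals: significances drawn from a standard normal.
clean = rng.standard_normal(100_000)

# Retrievals with a tail of spurious ~2-sigma fits, mimicked here by a
# heavier-tailed Student's t distribution.
contaminated = rng.standard_t(df=5, size=100_000)
```

Applied to the real $\mu^{*}/\varsigma^{*}$ values, a clearly positive excess kurtosis combined with negligible skew is the signature seen in Fig. \ref{ocsnormaldist}.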
Nonetheless, if we were to attempt to search for real OCS outgassing given an upper limit constraint of 20~ppbv provided by our SO\textsubscript{2} retrievals, together with the constraints on the OCS/SO\textsubscript{2} ratio provided by terrestrial volcanoes, even if we were to assume a very high ratio of OCS/SO\textsubscript{2} = 10\textsuperscript{-2} we would realistically require sensitivity to OCS below at least 0.2~ppbv. This is also complicated by the fact that OCS is difficult to form at low temperatures according to the \citet{hongfegley1997} reaction mechanism and would require very hot sub-surface temperatures to be present on Mars. \citet{oppenheimer2011} quote OCS/SO\textsubscript{2} ratios for the Antarctic volcano Erebus of $7\times10^{-3}$, equivalent to a maximum of 0.15~ppbv of OCS given our SO\textsubscript{2} upper limits on Mars. All these criteria make it very unlikely that ACS MIR would be able to find OCS in the Martian atmosphere in sufficiently high quantities to be detected. \begin{figure}[h] \includegraphics[width=1\columnwidth]{ocs_vertical_scatter_review1}\caption{Retrieved 1$\sigma$ upper limit values of OCS from all position 11 measurement sequences analysed in the dataset. Colours indicate the density of upper limit values as in Figs. \ref{so2scatter} and \ref{h2sdist}. } \label{ocsscatter} \end{figure} \begin{figure}[h] \includegraphics[width=1\columnwidth]{ocs_distribution_season2_review1} \caption{Seasonal distribution of lowest OCS upper limit values from each position 11 measurement sequence, as a function of latitude (\emph{Panel a}) and altitude (\emph{Panel b}). Crosses denote upper limits of below 1$\sigma$ significance, circles denote detections of between 1$\sigma$ and 2$\sigma$, and squares denote detections of between 2$\sigma$ and 3$\sigma$. Brown shaded regions indicate major dust storm events, as in Fig. \ref{so2dist}.
No positive detections of OCS above 3$\sigma$ were found in the data.} \label{ocsdists} \end{figure} \begin{figure*} \includegraphics[width=1\textwidth]{ocs_fits2_review1} \caption{Upper limit of OCS from position 11 spectra. \emph{Panel a: }Fit to the spectrum with the lowest retrieved OCS upper limit in the position 11 dataset, with a 3$\sigma$ value equivalent to 1.2 ppbv. As in Fig. \ref{h2sfits}, the grey lines represent each of the spectra on the detector array used to estimate the OCS detection limit, where the darker the grey colour, the higher the S/N. On the left we show the fit to the spectrum in diffraction order 173, and to the right we show the simultaneous fit to order 174. For reference, we also plot the approximate 3$\sigma$ upper limit of HCl in green, equivalent to 0.2 ppbv, as well as the contribution of CO\protect\textsubscript{2} to the fit. \emph{Panel b: }Residuals of the spectral fit.} \label{ocsfits} \end{figure*} \section{Discussion and conclusion} In this analysis we present the results of a systematic study of multiple solar occultation observations of Mars during Martian years 34 and 35 using ACS MIR, with the aim of either detecting or establishing upper limits on the presence of three major signatures of volcanic outgassing. For SO\textsubscript{2}, we constrain gas abundances to below 20~ppbv. Assuming that SO\textsubscript{2} is well mixed in the atmosphere following an eruption that happened more than six months prior, and assuming a total mass of the atmosphere of approximately $2.5\times10^{16}$~kg, this translates to a total maximum limit of approximately 750~ktons of SO\textsubscript{2} in the atmosphere, averaging an outgassing rate of less than 2~ktons a day if we assume that SO\textsubscript{2} has a lifetime in the Martian atmosphere of approximately 2 years.
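This mass budget can be checked with a short back-of-the-envelope calculation; the mean molar mass of Martian air used below (a CO\textsubscript{2}-dominated value of about 43.3 g/mol) and the use of Earth years for the lifetime are our own assumptions for this sketch.

```python
M_ATM_KG = 2.5e16        # total mass of the Martian atmosphere (kg)
MU_AIR = 43.3            # assumed mean molar mass of Martian air (g/mol)
MU_SO2 = 64.07           # molar mass of SO2 (g/mol)
VMR_SO2 = 20e-9          # 20 ppbv upper limit (volume mixing ratio)
LIFETIME_D = 2 * 365.25  # ~2-year photochemical lifetime, in days

# Convert the volume (molar) mixing ratio to a mass mixing ratio,
# then to a total atmospheric SO2 mass.
mass_so2_kg = VMR_SO2 * (MU_SO2 / MU_AIR) * M_ATM_KG
mass_so2_kt = mass_so2_kg / 1e6               # roughly 740 ktons

# Steady state: an emission rate balancing photochemical loss.
rate_kt_per_day = mass_so2_kt / LIFETIME_D    # about 1 kton/day
```

The result reproduces the $\sim$750~kton total and an average outgassing rate comfortably below 2~ktons per day.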
For comparison, the most active volcanoes on Earth -- Etna in Sicily and Kilauea in Hawaii -- have passive outgassing rates of SO\textsubscript{2} of approximately 5.5~ktons/day and 0.98~ktons/day, respectively (\citealt{oppenheimer2011}; and references therein), while a single eruption of a Volcanic Explosivity Index (VEI) of 3, equivalent to a small eruption that is expected to occur approximately once every few months on Earth, can be expected to emit around 700~ktons of SO\textsubscript{2} into the atmosphere \citep{graf1997volcanic}. This appears to reinforce the prevailing view that residual present-day volcanic activity on Mars can only exist at an extremely low level, if at all. In addition, we derive H\textsubscript{2}S upper limits down to 16~ppbv and OCS upper limits down to 0.4~ppbv in the best cases, the latter value being lower than any values previously published. However, these are unlikely to be of sufficient sensitivity for ACS MIR to detect passive outgassing from the surface of Mars. For all three molecules, no positive detections beyond 3$\sigma$ were found to be present in the ACS MIR data. The lack of sulphur compounds detected in the Martian atmosphere has implications for the origin of two other molecules that have been detected in recent years. Hydrogen halides (HF, HCl, HBr, and HI) are known to be emitted by terrestrial volcanoes, and only one, HCl, has so far been confirmed to exist in the Martian atmosphere. HCl was seen to peak at abundances of the order of a few ppbv in the second half of both MY 34 and MY 35, between the global and regional dust storm periods \citep{korablev2020hcl,olsen2021hcl}. The HCl/SO\textsubscript{2} mass ratio in gases emitted from terrestrial volcanoes is variable but usually lies between around 0.1 - 0.9 \citep{pylemather2009}.
This would be just compatible with the SO\textsubscript{2} upper limits found in this work, although seasonal trends in HCl concentration appear not to favour a primarily volcanic origin. With regards to the origin of methane, the ratio of CH\textsubscript{4}/SO\textsubscript{2} emitted by terrestrial volcanoes is 1.53 at the most, and usually much smaller (\citealt{nakagawa2009}; and references therein). The recent 20~ppbv and 45~ppbv methane spikes claimed by \citet{moores2019} and \citet{mumma2009} would therefore only be partially justifiable as being of volcanic origin, assuming they are confirmed to be genuine. ACS MIR observations of SO\textsubscript{2} and H\textsubscript{2}S have so far been hampered by a lack of spatial and temporal coverage, as well as an instrument line shape function that as yet remains incompletely characterised. Additional measurements in the future could allow us to probe deeper into the atmosphere and drive down upper limits on SO\textsubscript{2} and H\textsubscript{2}S even further to definitively constrain the amount of outgassing from the surface of Mars, especially around regions near the tropics such as Cerberus Fossae, where signs of intermittent volcanism are the most promising. \begin{acknowledgements} The ACS investigation was developed by the Space Research Institute (IKI) in Moscow, and the Laboratoire Atmosph\`eres, Milieux, Observations Spatiales (LATMOS) in Guyancourt, France. The investigation was funded by Roscosmos and the French National Centre for Space Studies (CNES). This work was funded by CNES, the Agence Nationale de la Recherche (ANR, PRCI, CE31 AAPG2019, MCUBE project), the Natural Sciences and Engineering Research Council of Canada (NSERC) (PDF\textendash 516895\textendash 2018), the UK Space Agency and the UK Science and Technology Facilities Council (ST/T002069/1, ST/R001502/1, ST/P001572/1). 
AT, OIK and AAF were funded by the Russian Science Foundation (RSF) 20-42-09035 for part of their contribution described below. All ACS MIR spectral fitting was performed by ASB, while ACS NIR spectral fitting of pressure-temperature profiles was performed by AAF and GCM-derived pressure-temperature profiles were created by FF and EM. The interpretation of all results in this work was done by ASB, AT, FM and KSO. Pre-processing and calibration of ACS spectra was performed at IKI by AT and at LATMOS by LB. Spatio-temporal metadata were produced in LATMOS by GL and in IKI by AP. Input and aid on spectral fitting were given by JA, LB, FM, KSO and AT. The ACS instrument was designed, developed, and operated by OIK, FM, AP, AS and AT. \end{acknowledgements} \bibliographystyle{aa}
\section{Abstract} The ALICE experiment at the Large Hadron Collider at CERN is optimized to study the properties of the hot, dense matter created in high energy nuclear collisions in order to improve our understanding of the properties of nuclear matter under extreme conditions. In 2009 the first proton beams were collided at the Large Hadron Collider and since then data from proton-proton collisions at $\sqrt{s}$\xspace = 0.9, 2.36, 2.76, and 7 TeV have been taken. Results from pp\xspace collisions provide significant constraints on models. In particular, results on strange particles indicate that Monte Carlo generators still have considerable difficulty describing strangeness production. In 2010 the first lead nuclei were collided at $\sqrt{s_{NN}}$\xspace = 2.76 TeV. Results from Pb+Pb\xspace demonstrate suppression of particle production relative to that observed in pp\xspace collisions, consistent with expectations based on data available at lower energies. \section{Introduction} A Large Ion Collider Experiment (ALICE)~\cite{Kuijer:2002xq,Aamodt:2008zz} is a general purpose detector optimized to measure the bulk properties of the matter created in Pb+Pb\xspace collisions. The primary goal of studies of ultra-relativistic heavy-ion physics is the study of nuclear matter at extreme temperatures and energy densities. At sufficiently high energy densities nuclear matter transitions from ordinary nuclear matter to a phase of deconfined quarks and gluons, called the Quark Gluon Plasma (QGP). The creation of a QGP is possible in high energy nuclear collisions~\cite{Back:2004je,Adcox:2004mh,Arsene:2004fa,Adams:2005dq}. However, these collisions present experimental challenges due to the large track densities in central Pb+Pb\xspace collisions. It is therefore necessary for tracking detectors to have fine granularity. In addition, many of the observables which can be used to determine the properties of the QGP are flavor and mass dependent.
It is therefore crucial to have information on particle identification over as wide a kinematic range as possible. ALICE's capabilities for low momentum tracking and particle identification allow it to perform measurements in pp\xspace collisions complementary to those that other experiments at the LHC emphasize. The LHC has provided data from pp\xspace collisions at center of mass energies $\sqrt{s}$\xspace = 0.9, 2.36, 2.76, and 7 TeV and Pb+Pb\xspace collisions at a center of mass energy per nucleon of $\sqrt{s_{NN}}$\xspace = 2.76 TeV. The ALICE detector, shown in \Fref{fig:aliceschematic}, is 16 m in diameter and 26 m long and weighs approximately 10,000 tons. Since ALICE is designed for measurements of events with high track densities, it has precision detectors but with limited acceptance, focusing on midrapidity ($|\eta|<$ 0.9). The central detectors sit inside of the L3 magnet, which produces a nominal magnetic field of 0.5 T. The Inner Tracking System (ITS) surrounds the beam pipe and consists of a Silicon Pixel Detector (SPD), Silicon Drift Detector (SDD), and Silicon Strip Detector (SSD). These silicon detectors provide $<$ 100 $\mu$m resolution of tracks' distance of closest approach to the primary vertex for tracks with pseudorapidity $|\eta|<$ 0.9. The ITS is capable of multiplicity measurements for tracks with transverse momentum $p_{T}$\xspace $>$ 0.050 GeV/$c$\xspace and tracking for particles with $p_{T}$\xspace $>$ 0.100 GeV/$c$\xspace. A large Time Projection Chamber (TPC) surrounds the inner tracking system, extending from a radius of approximately 85 cm from the beam pipe to a radius of approximately 250 cm. It is approximately 5 m along the beam pipe, providing tracking for $|\eta|<$ 0.9 with momentum resolution $\Delta p_{T}/p_{T}<$ 1\% for tracks completely contained in the TPC acceptance. Both the ITS and the TPC are capable of particle identification through energy loss.
The Time-Of-Flight (TOF) detector covers $|\eta|<$ 0.9 and can separate pions and kaons up to a momentum of 2.5 GeV/$c$\xspace and protons and kaons up to a momentum of 4 GeV/$c$\xspace. The High Momentum Particle Identification Detector (HMPID) is a ring-imaging Cherenkov detector optimized to extend $\pi$/K discrimination to 3 GeV/$c$\xspace and K/p discrimination up to 5 GeV/$c$\xspace. The HMPID covers $|\eta|<$ 0.6 in pseudorapidity and 1.2$^\circ<\phi<$ 58.8$^\circ$. ALICE has two electromagnetic calorimeters, the Photon Spectrometer (PHOS) and the Electromagnetic Calorimeter (EMCAL). PHOS is 4.6 m from the vertex, is made of scintillating crystals (PbWO$_4$) with high resolution and granularity, and is optimized to measure photons. The EMCAL is a lead scintillator sampling calorimeter optimized for studies of jets. PHOS covers $|\eta|<$ 0.12 and 100$^\circ$ in azimuth and the EMCAL covers $|\eta|<$ 0.7 and 107$^\circ$ in azimuth. A COsmic Ray DEtector (ACORDE) provides triggers for cosmic ray events for calibration and alignment as well as studies of cosmic ray physics. The Muon Spectrometer sits between 2$^\circ$ and 9$^\circ$ from the beam pipe and triggers on and tracks muons for studies of heavy quark production, mainly through $J/\psi$ and $\Upsilon$. In addition there are several smaller detectors at small angles for triggering and multiplicity measurements: the Zero Degree Calorimeter (ZDC), Photon Multiplicity Detector (PMD), Forward Multiplicity Detector (FMD), T0, and V0. \begin{figure}[htb] \begin{center} \epsfig{file=aliceschematic.eps,width=6in} \caption{The ALICE detector comprises 18 detector systems. Acronyms are defined in the text.} \label{fig:aliceschematic} \end{center} \end{figure} \section{Results} ALICE is able to measure the bulk properties of pp\xspace and Pb+Pb\xspace events accurately because of its capabilities for precision low momentum tracking and particle identification.
These capabilities allow precision measurements of particle multiplicities and transverse energy. Since separation of $\pi^\pm$, K$^\pm$, and p($\bar{p}$)\xspace is possible over a wide kinematic region, the ALICE detector is complementary to those of other LHC experiments. Measurements of strange particles which undergo weak decays (K$^0_S$, $\Lambda$, $\Xi$, $\Omega$), resonances (e.g., $\phi$, K$^*$), and charmed hadrons which decay hadronically (e.g., D$^\pm$, D$^0$, D$^{\pm}_S$) can be improved substantially since the combinatorial background can be reduced by identifying the decay daughters. \subsection{Bulk properties} ALICE's low momentum tracking capabilities allow measurements of track multiplicities down to $p_{T}$\xspace = 50 MeV/$c$\xspace, limiting the extrapolation necessary to measure charged particle multiplicities (d$N_{ch}$/d$\eta$). The energy dependence of multiplicities in pp\xspace collisions at $\sqrt{s}$\xspace = 0.9~\cite{:2009dt,Aamodt:2010ft}, 2.36~\cite{Aamodt:2010ft}, and 7 TeV~\cite{Aamodt:2010pp} is described well by a power law in energy, $s^{0.1}$, and multiplicities are generally above model predictions. Models underpredict the increase in multiplicity as a function of collision energy and most of the discrepancy with models is in the high multiplicity tails of the distribution of events. These measurements have already been used to refine models for particle production in pp\xspace collisions. \begin{figure}[htb] \begin{center} \epsfig{file=MeanRecoTotVsNpartComp.eps,height=3in} \caption{Transverse energy per participant pair ($N_{\mathrm{part}}/2$) from ALICE~\cite{Loizides:2011ys} compared to PHENIX~\cite{Adcox:2001ry} and STAR~\cite{Adams:2004cb}, scaled by a factor of 2.5 to compare the shape of the $N_{\mathrm{part}}$ dependence.
$f_{\mathrm{total}}$ is the correction factor accounting for neutral particles which were not measured.} \label{fig:et} \end{center} \end{figure} Multiplicity measurements in Pb+Pb\xspace collisions substantially constrain models. Predictions for multiplicities in central Pb+Pb\xspace collisions at $\sqrt{s_{NN}}$\xspace = 2.76 TeV made after data from Au+Au\xspace collisions at $\sqrt{s_{NN}}$\xspace = 200 GeV at the Relativistic Heavy Ion Collider (RHIC) were available ranged from 1000 to 1700~\cite{Abreu:2007kv}. ALICE data constrained this to 1584 $\pm$ 4 (stat.) $\pm$ 76 (syst.)~\cite{Aamodt:2010pb}, leading to an energy dependence of d$N_{ch}$/d$\eta$ described well by a power law in energy of $s^{0.15}$ in heavy-ion collisions. Centrality dependent studies showed that the data at the LHC have the same shape as a function of the number of participating nucleons as observed at RHIC~\cite{Aamodt:2010cz}, with the data agreeing within errors up to an overall scaling factor. Transverse energy measurements, shown in \Fref{fig:et}, demonstrate the same trend~\cite{Loizides:2011ys,Adcox:2001ry,Adams:2004cb}. Charged particle multiplicity and transverse energy measurements indicate that the energy densities reached at the LHC are approximately three times larger than those produced at RHIC~\cite{Collaboration:2011rta}. \subsection{Charged particles} In pp\xspace collisions, measurements of charged particle spectra indicate that models which describe the particle multiplicity in pp\xspace collisions well still struggle to describe the shape of the momentum spectrum~\cite{Aamodt:2010my}. Models are particularly poor at describing low momentum ($p_{T}$\xspace$<$ 500 MeV/$c$\xspace) particles. Identified $\pi^\pm$, K$^\pm$, and p($\bar{p}$)\xspace spectra show that models fail to describe the particle composition, systematically underestimating the production of both kaons and protons at high momenta~\cite{Aamodt:2011zj}.
The combination of ALICE's particle identification capabilities with the precision low momentum tracking allows proton identification and rejection of secondary protons. This enables accurate measurements of the $\bar{p}$/p ratio in pp\xspace collisions at both $\sqrt{s}$\xspace = 0.9 (0.957 $\pm$ 0.006 $\pm$ 0.014) and 7 TeV (0.991 $\pm$ 0.005 $\pm$ 0.014). This ratio is well described by models~\cite{Aamodt:2010dx}. Suppression of high momentum particle production is expected in $A$+$A$\xspace collisions due to interactions of hard partons with the hot, dense medium. This suppression is often quantified by reporting the nuclear modification factor: \begin{equation} R_{AA}(p_T) = \frac{ (1/N_{\mathrm{evt}}^{AA}) d^2N^{AA}/d\eta dp_{T} }{ \langle N_{\mathrm{coll}} \rangle (1/N_{\mathrm{evt}}^{pp}) d^2N^{pp}/d\eta dp_{T} }, \end{equation} \noindent the ratio of the particle yield in $A$+$A$\xspace collisions ($(1/N_{\mathrm{evt}}^{AA}) d^2N^{AA}/d\eta dp_{T}$) to that in pp\xspace collisions ($(1/N_{\mathrm{evt}}^{pp}) d^2N^{pp}/d\eta dp_{T}$) at the same energy, scaled by the number of binary collisions in the $A$+$A$\xspace collisions ($\langle N_{\mathrm{coll}} \rangle$). If Pb+Pb\xspace collisions were just a superposition of nucleon-nucleon collisions, the nuclear modification factor would be one at high $p_{T}$\xspace, where particle production is dominated by hard parton scattering. $R_{\mathrm{AA}}$ of unidentified charged particles reaches values as low as $\sim$0.15, lower than the $\sim$0.2 observed at RHIC~\cite{Aamodt:2010jd}. Early comparisons of identified $\pi^\pm$, K$^\pm$, and p($\bar{p}$)\xspace spectra in heavy ion collisions shown in \Fref{fig:pikp} indicate that while models provide a reasonable description of pions and kaons, they fail to describe both the shape and the yield of protons and anti-protons~\cite{Floris:2011ru}.
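As an illustration, $R_{AA}$ can be evaluated bin by bin from measured spectra. The following sketch uses invented toy yields and an assumed $\langle N_{\mathrm{coll}} \rangle$, purely to illustrate the ratio; all numbers and helper names here are hypothetical:

```python
import numpy as np

def nuclear_modification_factor(yield_aa, n_evt_aa, yield_pp, n_evt_pp, n_coll):
    """R_AA(pT): per-event A+A yield divided by the binary-collision-scaled
    per-event pp yield, evaluated bin by bin in pT."""
    yield_aa = np.asarray(yield_aa, dtype=float)
    yield_pp = np.asarray(yield_pp, dtype=float)
    return (yield_aa / n_evt_aa) / (n_coll * yield_pp / n_evt_pp)

# Toy pT spectra (hypothetical counts per pT bin). If A+A collisions were a
# superposition of nucleon-nucleon collisions, R_AA would be one everywhere.
pp_counts = np.array([1000.0, 400.0, 150.0, 50.0])
aa_counts = 400.0 * np.array([1000.0, 350.0, 90.0, 20.0])  # suppressed at high pT
raa = nuclear_modification_factor(aa_counts, 1.0e6, pp_counts, 1.0e6, n_coll=400.0)
```

With these toy inputs the ratio is one in the lowest bin and falls with $p_T$, mimicking the suppression pattern described above.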
\begin{figure}[htb] \begin{center} \epsfig{file=2011-Jun-30-central_pos_hydro_alice.eps,height=3in} \caption{Spectra of identified particles compared to a hydrodynamical model for particle production~\cite{Floris:2011ru}.} \label{fig:pikp} \end{center} \end{figure} \subsection{Strange particles} Strange particles which decay weakly can be identified by the reconstruction of their decay vertices, and identification of the decay daughters improves measurements by helping to reduce the combinatorial background. Models fail to describe either the shape or the yield of strange particles, underestimating kaon spectra by as much as a factor of two above 1 GeV/$c$\xspace and $\Lambda$ and $\bar{\Lambda}$ spectra by as much as a factor of three above 1 GeV/$c$\xspace in pp\xspace collisions at $\sqrt{s}$\xspace = 0.9 TeV~\cite{Aamodt:2011zz}. Similar discrepancies between Monte Carlo event generators and kaon and $\Lambda$ ($\bar{\Lambda}$) spectra are observed at $\sqrt{s}$\xspace = 7 TeV. Production of $\Xi^{\pm}$ and $\Omega^{\pm}$ is underestimated by as much as a factor of four and ten, respectively~\cite{Chinellato:2011yn}. However, models are considerably better at describing production of the $\phi$ resonance~\cite{Pulvirenti:2011xs}. While Monte Carlo generators have substantial difficulties describing the particle yields of strange particles, at RHIC energies statistical models for particle production were able to describe the ratios of particle yields in pp\xspace collisions well. In contrast, a comparison of the ALICE data to the THERMUS model indicates that at the LHC the model is currently unable to describe the data~\cite{Floris:2011ru}.
\begin{figure}[htb] \begin{center} \epsfig{file=2011-Jun-24-RAA_centralChargedvsV0.eps,height=3in} \caption{Nuclear modification factor ($R_{\mathrm{AA}}$) of $\Lambda$ and $K^0_S$ compared to unidentified hadrons as a function of $p_{T}$\xspace~\cite{Collaboration:2011xk}.} \label{fig:strange} \end{center} \end{figure} For strange particles in central Pb+Pb\xspace collisions, the nuclear modification factors of $\Lambda$ and $K^0_S$, compared to unidentified hadrons in \Fref{fig:strange}, approach the same value at high $p_{T}$\xspace, indicating that mass effects are less significant at higher momenta~\cite{Collaboration:2011xk}. Furthermore, the ratio of $\Lambda$/$K^0_S$ exhibits an enhancement over that observed in pp\xspace collisions. This enhancement is comparable to that observed in central Au+Au\xspace collisions at $\sqrt{s_{NN}}$\xspace = 200 GeV, although about 20\% larger~\cite{Collaboration:2011xk}. \subsection{Heavy flavors} \begin{figure}[htb] \begin{center} \epsfig{file=2011-May-19-PWG3-Preliminary-029_FONLL.eps,height=5in} \caption{Spectrum of electrons from semi-leptonic decays of heavy quarks compared to Fixed Order Next-to-Leading-Log perturbative QCD calculations~\cite{Masciocchi}.} \label{fig:charm} \end{center} \end{figure} ALICE allows multiple measurements of heavy flavor production. Hadrons containing heavy quarks have substantial cross sections for semi-leptonic decays and ALICE can measure heavy quark production in various decay channels. Studies of non-photonic electrons in pp\xspace collisions at $\sqrt{s}$\xspace = 7 TeV are consistent with Fixed Order Next-to-Leading-Log perturbative QCD calculations~\cite{Masciocchi}, as shown in \Fref{fig:charm}. Heavy flavor can also be measured through reconstruction of charmed hadrons which decay hadronically (e.g., D$^{\pm}$, D$^0$) and through quarkonia which decay into dileptons (e.g., $J/\psi$).
Studies of the nuclear modification factor for D$^{\pm}$ and D$^0$ mesons indicate that the suppression of heavy flavor in Pb+Pb\xspace collisions at $\sqrt{s_{NN}}$\xspace = 2.76 TeV is comparable to the suppression of light flavors, similar to what was observed at RHIC~\cite{Dainese:2011vb}. The nuclear modification factor of the $J/\psi$ in Pb+Pb\xspace collisions at $\sqrt{s_{NN}}$\xspace = 2.76 TeV is higher than that of the D$^{\pm}$ and D$^0$ mesons~\cite{Dainese:2011vb}; however, the interpretation of this result is complicated by the observation of the suppression of the $J/\psi$ in $d$+Au\xspace collisions~\cite{Adare:2010fn}. \section{Conclusions} Since the first collisions in November 2009, the LHC has delivered pp\xspace collisions at $\sqrt{s}$\xspace = 0.9, 2.36, 2.76, and 7 TeV and Pb+Pb\xspace collisions at $\sqrt{s_{NN}}$\xspace = 2.76 TeV. The ALICE detector is able to make precise measurements of particle multiplicities, transverse energy, identified particle spectra, strange particle spectra, and heavy flavor. Measurements in both pp\xspace and Pb+Pb\xspace collisions have already constrained models considerably, restricting multiplicities and indicating that Monte Carlo generators need substantial refinement to be able to describe strange particle production.
\section{Introduction} One of the main open problems in modern cosmology is represented by the statistical characterization and the physical understanding of large scale galaxy structures. The first question in this context concerns the study of galaxy correlation properties. In particular, two-point properties are useful to determine correlations and their spatial extension. There are different ways of measuring two-point properties and, in general, the most suitable method depends on the type of correlations, strong or weak, characterizing a given point distribution in a certain sample. For example, Hogg et al. (2005) have recently measured the conditional average density in a sample of Luminous Red Galaxies (LRG) from a data release of the Sloan Digital Sky Survey (SDSS). Such a statistic is very useful to determine correlation properties in the regime of strong clustering and the spatial extension of strong fluctuations in a given sample. It was first introduced by Pietronero (1987) and then measured in many samples by Sylos Labini et al. (1998). We refer the reader to Baryshev \& Teerikorpi (2005) for a review of the measurements of the reduced and complete correlation functions by different authors in the various angular and three-dimensional samples. The conditional density gives the average density of points in a spherical volume (or a spherical shell) centered around a galaxy (see Gabrielli et al. 2004 for a discussion about this method). The results obtained by Hogg et al. (2005) can be summarized as follows: (i) A simple power-law scaling corresponding to a correlation exponent $\gamma \approx 1$ gives a very good fit to the data up to at least $20$ Mpc/h, over approximately a decade in scale. We note that these results are in good agreement with those obtained by Sylos Labini et al. (1998) through the analyses of many smaller samples and more recently by Vasilyev, Baryshev and Sylos Labini (2006) in the 2dFGRS.
(ii) The second important result of Hogg et al. (2005) is that at larger scales (i.e. $r >30$ Mpc/h) the conditional density continues to decrease, but less rapidly, until about $\sim 70$ Mpc/h, above which it seems to flatten up to the largest scale probed by the sample ($100$ Mpc/h). The transition between the two regimes is slow, in the sense that the conditional density at $\sim 20$ Mpc/h is about twice the asymptotic mean density. Joyce et al. (2005) have discussed the basic implications of these results noticing, for example, that the possible convergence to a well defined homogeneity in a volume equivalent to that of a sphere of radius 70 Mpc/h places in doubt previous detections of ``luminosity bias'' from measures of the amplitude of the reduced correlation function $\xi(r)$. They emphasized that the way to resolve these issues is to first use, in volume limited (VL) samples corresponding to different ranges of luminosity, the conditional density to establish the features of galaxy space correlations. Note that Sylos Labini et al. (1998) found evidence for a continuation of the small scale power-law to distances of order hundreds of Mpc/h, although with a weaker statistics, which seems not to be confirmed by Hogg et al. (2005). In this paper we extend the analysis of galaxy distributions previously applied to the 2dFGRS data (Vasilyev et al. 2006) to the so-called ``main galaxy sample'' of the SDSS Data Release 4 (DR4), in the spirit of the tests discussed above. In a companion paper we will discuss the properties of the LRG sample of the SDSS DR4, which can be directly compared with the results of Hogg et al. (2005) and Eisenstein et al. (2005). The paper is organized as follows. In Section 2 we describe the data and the way we have constructed the VL samples.
We also discuss the determination of the nearest neighbor (NN) distribution, and of the average distance between nearest galaxies, which allows us to define the lower cut-off for the studies of correlations. In addition we discuss the determination of the radial counts in different VL samples, emphasizing that large variations for this quantity are found in the different samples. Such fluctuations, which seem to be persistent up to the sample boundaries, correspond to the large scale structures observed in these catalogs. The quantitative characterization of the correlation properties of these fluctuations is presented in Section 3, where we discuss the determination of the conditional average density in the different VL samples. In particular we present several tests useful to clarify the effect of systematic fluctuations at scales of order of the sample size. In Section 4 we discuss the differences between the galaxy conditional density measured in these samples and the conditional density of point particles in cosmological N-body simulations. We show that, by using this statistic together with a study of the NN probability distribution, the two-point properties of observed galaxies of different luminosity differ from those of mock galaxy catalogs constructed from particles lying in regions with different local density in cosmological N-body simulations. Finally in Section 5 we draw our main conclusions. \section{The data} The SDSS (http://www.sdss.org) is currently the largest spectroscopic survey of extragalactic objects and one of the most ambitious observational programs ever undertaken in astronomy. It will measure about 1 million redshifts, giving a complete mapping of the local universe up to a depth of several hundred Mpc. In this paper we consider the data from the latest public data release (SDSS DR4) which is accessible at http://www.sdss.org/dr4 (Adelman-McCarthy et al.
2005) containing redshifts for more than 565,000 galaxies and 67,000 quasars. There are two independent parts of the galaxy survey in the SDSS: the main galaxy sample and the LRG sample. Here we discuss the former only. The spectroscopic survey covers an area of 4783 square degrees of the celestial sphere. The apparent magnitude limit for the galaxies is 17.77 in the $r$-filter and photometry for each galaxy is available in five different bands, of which we consider the ones in the $r$ and $g$ filters. \subsection{Definition of the samples} We have used the following criteria to query the SDSS DR4 database. First of all we constrain the flags indicating the type of object so that we select only the objects from the main galaxy sample. We then consider galaxies in the redshift interval $10^{-4} \leq z \leq 0.3$ and with the redshift confidence parameter larger than $0.95$. In addition we apply the filtering condition $r < 17.77$, thus taking into account the target magnitude limit for the main galaxy sample in the SDSS DR4. With respect to these conditions we have selected 321,516 objects in total. The angular coverage of the survey is not uniform but observations have been done in different sky regions. For this reason we have considered three rectangular angular fields (named R1, R2 and R3) in the SDSS internal angular coordinates $(\eta,\lambda)$: in such a way we do not have to consider the irregular boundaries of the survey mask, as we have cut such boundaries to avoid uneven edges of observed regions. In Tab.\ref{tbl_VLSamplesProperties2} we report the parameters of the three angular regions we have considered. In addition we apply corrections neither for the redshift completeness mask nor for the fiber collision effects. Completeness varies mostly near the current survey edges, which are excluded from our samples.
Fiber collisions in general do not represent a problem for measurements of galaxy correlations (see discussion in, e.g., Strauss et al., 2002). \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Region name & $\eta_1$ & $\eta_2$ & $\lambda_1$ & $\lambda_2$ & $\Omega$ \\ \hline R1 & 9.0 & 36.0 & -47.0& 8.0 & 0.41 \\ R2 & -33.5 & -16.5 &-54.0 & -24.0& 0.12 \\ R3 & -36.0 & -26.5 & 2.5 & 43.0& 0.11 \\ \hline \end{tabular} \end{center} \caption{Main properties of the angular regions considered: the limits of the cuts are given in the intrinsic coordinates of the survey $\eta$ and $\lambda$ (in degrees). The last column $\Omega$ gives the solid angle of the three angular regions in steradians.} \label{tbl_VLSamplesProperties2} \end{table} \subsection{Construction of VL samples} To construct VL samples which are unbiased with respect to the selection effect related to the cut in the apparent magnitude, we have applied a standard procedure (see e.g. Zehavi et al., 2004): First of all we compute metric distances as \begin{equation} \label{MetricDistance} r(z) = \frac{c}{H_0} \int_{\frac{1}{1+z}}^{1} {\frac{dy}{y \cdot \left(\Omega_M/y+\Omega_\Lambda \cdot y^2 \right)^{1/2}}} \; , \end{equation} where we have used the standard cosmological parameters $\Omega_M=0.3$ and $\Omega_\Lambda=0.7$ with $H_0=100 h$ km/sec/Mpc. We use Petrosian apparent magnitudes in the $r$ filter $m_r$ which are corrected for galactic absorption. The absolute magnitudes can be computed as \begin{equation} \label{AbsoluteMagnitude} M_r = m_r - 5 \cdot \log_{10}\left[r(z) \cdot (1+z)\right] - K_r(z) - 25 \;, \end{equation} where $K_r(z)$ is the K-correction. As the redshift range considered is small from a cosmological point of view (i.e.
$z \leq 0.3$), to estimate the K-corrections $K_r(z)$ (linearly proportional to $z$ and thus small in this context) we have used the simple interpolating formula \begin{equation} \label{K-CorrDef} K_r(z) = (2.61 \cdot (m_g-m_r)-0.64) \cdot z \;, \end{equation} where $m_g$ is the apparent magnitude in the $g$ filter. This corresponds to the calculated K-corrections in Blanton et al. (2001 --- see their Fig.4). By knowing the intrinsic $g-r$ color and the redshift one may directly estimate the K-correction term. We have considered 4 different VL samples (named VL1, VL2, VL3 and VL4) defined by two chosen limits in absolute magnitude and metric distance, whose parameters are reported in Tab.\ref{tbl_VLSamplesProperties1}. While VL1 and VL2 contain relatively faint galaxies in the local universe, the VL3 sample covers a wide range of distances, and VL4 consists of bright galaxies at distances up to 600 Mpc/h. Considering the three different rectangular areas (described above), in summary we have $4 \times 3 = 12$ VL subsamples whose characteristics are reported in Tab.\ref{tbl_VLSamplesProperties3}. The comparison between VL samples with the same magnitude and distance cuts, in different sky regions, will allow us to test the statistical stationarity of galaxy distributions in these samples and to estimate sample-to-sample fluctuations. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline VL sample & $r_{min}$ & $r_{max}$ & $M_{min}$ & $M_{max}$ & $\langle \Lambda \rangle$\\ \hline VL1 & 50 & 135 & -19.0 & -18.0 & 1.7 \\ VL2 & 50 & 200 & -21.0 & -19.0 & 1.3 \\ VL3 & 100 & 500 & -23.0 & -21.0 & 2.9 \\ VL4 & 150 & 600 & -23.0 & -22.0 & 6 \\ \hline \end{tabular} \end{center} \caption{Main properties of the obtained VL samples: $r_{min}$, $r_{max}$ (in Mpc/h) are the chosen limits for the metric distance; ${M_{min}, \,M_{max}}$ define the interval for the absolute magnitude in each sample.
The quantity $\langle \Lambda \rangle$ (in Mpc/h) is the average distance between nearest-neighbor galaxies.} \label{tbl_VLSamplesProperties1} \end{table} \begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline VL Sample & N & $R_c$ \\ \hline R1VL1 & 3130 &15 \\ R1VL2 & 15181&21 \\ R1VL3 & 27975&54 \\ R1VL4 & 6742 &65 \\ R2VL1 & 790 &10 \\ R2VL2 & 3912 &15 \\ R2VL3 & 8586 &38 \\ R2VL4 & 1923 &42 \\ R3VL1 & 790 &9 \\ R3VL2 & 2895 &12 \\ R3VL3 & 7584 &30 \\ R3VL4 & 1503 &36 \\ \hline \end{tabular} \end{center} \caption{Number of galaxies in each of the VL samples. Names are given according to the discussion in the text. The scale $R_c$ (in Mpc/h) is discussed in Sect.\ref{rc} below.} \label{tbl_VLSamplesProperties3} \end{table} \subsection{Nearest neighbor distribution} The NN distance probability distribution depends on the cut in absolute magnitude of a given VL sample. We expect this function not to be dependent on the angular sky cuts if the distribution is statistically stationary in the different VL samples. As discussed in Vasilyev, Baryshev \& Sylos Labini (2006), space correlations introduce a deviation from the case of a pure Poisson distribution: in particular, the average distance $\langle \Lambda \rangle$ between NN is expected to be smaller than for the Poisson case in the same sample and with the same number of points. The measurements in the data, obtained by simple pair-counting, are shown in Figs.\ref{FIGnn1}-\ref{FIGnn4}. When a VL sample includes fainter galaxies (e.g. VL1,VL2) $\langle \Lambda \rangle$ is smaller (see Tab.\ref{tbl_VLSamplesProperties1}) than when only brighter galaxies are included (e.g. VL3,VL4). This is because brighter galaxies are sparser than fainter ones.
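The sample construction described above (Eqs.~\ref{MetricDistance}--\ref{K-CorrDef}) can be sketched numerically as follows; the quadrature grid and the helper names are our own illustrative choices, not part of the original pipeline:

```python
import numpy as np

C_OVER_H0 = 2997.92  # c/H0 in Mpc/h for H0 = 100 h km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7

def metric_distance(z, n_steps=10000):
    """Metric distance r(z) in Mpc/h: the integral of Sect. 2.2,
    evaluated by the trapezoidal rule on a fine grid in y = 1/(1+z')."""
    y = np.linspace(1.0 / (1.0 + z), 1.0, n_steps)
    integrand = 1.0 / (y * np.sqrt(OMEGA_M / y + OMEGA_L * y**2))
    return C_OVER_H0 * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(y) / 2.0))

def k_correction(m_g, m_r, z):
    """Linear K-correction from the g-r color, K_r(z) = (2.61 (g-r) - 0.64) z."""
    return (2.61 * (m_g - m_r) - 0.64) * z

def absolute_magnitude(m_r, m_g, z):
    """Absolute r-band magnitude M_r = m_r - 5 log10[r(z)(1+z)] - K_r(z) - 25."""
    r = metric_distance(z)
    return m_r - 5.0 * np.log10(r * (1.0 + z)) - k_correction(m_g, m_r, z) - 25.0

def in_vl_sample(m_r, m_g, z, r_min, r_max, mag_min, mag_max):
    """True if a galaxy falls inside a VL sample with the given distance
    (Mpc/h) and absolute magnitude cuts."""
    r = metric_distance(z)
    return r_min <= r <= r_max and mag_min <= absolute_magnitude(m_r, m_g, z) <= mag_max
```

For example, a galaxy with $m_r = 17$, $g-r = 0.8$ at $z = 0.1$ sits near $r \approx 293$ Mpc/h and $M_r \approx -20.7$, so it would fall outside VL1 but inside a hypothetical sample with cuts 100--500 Mpc/h and $-23 \leq M_r \leq -20$.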
This corresponds to the exponential decay of the galaxy luminosity function at the bright end (see discussion in Gabrielli et al., 2004). \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG1.eps} \end{center} \caption{Nearest Neighbor distribution in the VL1 sample: different symbols correspond to different angular regions. The average distance between nearest galaxies is $\langle \Lambda \rangle = 1.7$ Mpc/h. For reference the solid line represents the NN distribution for a Poisson configuration with the {\it same} $\langle \Lambda \rangle$: one may notice that the tails of this function decay more rapidly.} \label{FIGnn1} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG2.eps} \end{center} \caption{As Fig.\ref{FIGnn1} but for the VL2 samples. The average distance between galaxies is $\langle \Lambda \rangle = 1.3$ Mpc/h.} \label{FIGnn2} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG3.eps} \end{center} \caption{As Fig.\ref{FIGnn1} but for the VL3 samples. The average distance between nearest galaxies is $\langle \Lambda \rangle = 2.9$ Mpc/h.} \label{FIGnn3} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG4.eps} \end{center} \caption{As Fig.\ref{FIGnn1} but for the VL4 samples. The average distance between nearest galaxies is $\langle \Lambda \rangle = 6$ Mpc/h.} \label{FIGnn4} \end{figure} Note that Zehavi et al. (2004) have estimated that at scales of order $1 \div 2$ Mpc/h there is a departure from a power law behavior in the reduced correlation function. In light of the discussion above we stress that this change occurs in a range of scales where NN correlations are dominant in all samples considered.
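The quantity $\langle \Lambda \rangle$ discussed above can be estimated by brute-force pair counting; a minimal sketch, where the Poisson expression is the standard mean nearest-neighbor distance for a homogeneous process of the same density (the function names are ours):

```python
import numpy as np
from math import gamma, pi

def mean_nn_distance(points):
    """Average distance <Lambda> between nearest neighbors,
    computed by brute-force pair counting on an (N, 3) array."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return d.min(axis=1).mean()

def poisson_mean_nn(number_density):
    """Expected <Lambda> for a 3D Poisson process of the given density:
    Gamma(4/3) * (4 pi n / 3)^(-1/3), i.e. about 0.554 n^(-1/3)."""
    return gamma(4.0 / 3.0) * (4.0 * pi * number_density / 3.0) ** (-1.0 / 3.0)
```

Comparing the measured $\langle \Lambda \rangle$ of a sample to `poisson_mean_nn` at the sample's mean density quantifies the clustering-induced shortening of nearest-neighbor distances mentioned above.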
For the interpretation of this behavior one may consider the relation between the conditional density, or the reduced correlation function, and the NN probability distribution (see Baertschiger \& Sylos Labini 2004 for a discussion of this point). In this respect, in the comparison of galaxy data with N-body simulations, one has to be careful in that these small-scale properties can be determined by sampling, sparseness and other more subtle finite size effects related to the precision of a given N-body simulation (Baertschiger \& Sylos Labini 2004). We have then studied the effect of the fiber collisions on the NN statistic: about $6\%$ of galaxies that satisfy the selection criteria of the main galaxy sample are not observed because they have a companion closer than the 55 arc-sec minimum separation of spectroscopic fibers (Strauss et al., 2002). However not all 55-arc-sec pairs are affected by fiber collisions, because some regions of the SDSS were observed spectroscopically more than once. We have identified all $\leq 55$ arc-sec pairs for which both galaxies have redshifts, and we have randomly removed one of those redshifts in each case to make a new sample with an even more severe fiber collision problem than the existing sample. Because of the very small number of galaxy pairs with angular separation $\leq 55$ arc-sec (of order of a few percent in all the volume limited samples we have considered) there is no appreciable effect on the results. In fact, for galaxies in the main sample the average redshift $z \sim 0.1$, and hence the angular distance 55 arc-sec corresponds to the linear separation $r \sim 0.1$ Mpc/h which is marginally outside the interval of scales over which we have studied the NN distribution, i.e. $r>0.2$ Mpc/h. Hence we expect that the fiber collision effect does not influence our results, as indeed we find. \subsection{Number counts in VL samples} A simple statistic which can easily be computed in VL samples is represented by the differential number counts.
This gives us a first indication about (i) the slope of the counts and (ii) the nature of fluctuations (see e.g. Gabrielli et al. 2004). In general we may write that the number of points counted from a given point chosen as origin (in this case the Earth) grows as \begin{equation} N(r) \sim r^D \;. \end{equation} This represents the radial counts in a spherical volume of radius $r$ around the observer (or in a portion of a sphere). In the case $D=3$ the distribution is uniform and $D<3$ if it is, for example, fractal or if there is a systematic effect of depletion of points as a function of distance. In this situation we neglect relativistic effects, which are anyway small in the range of redshift considered. However, as noticed by Gabrielli et al. (2004) these corrections may change the slope of the counts but not the intrinsic fluctuations. Given that a VL sample is defined by two cuts in distance we compute \begin{equation} \label{e2} n(r) = \frac{d N(r)}{dr} \sim r^{D-1} \;, \end{equation} i.e. the differential number counts in shells. Simply stated, we expect the exponent $D-1$ in Eq.\ref{e2} to be 2 when the distribution is uniform; in this case we also expect to see small (normalized) fluctuations generally decaying as the volume, or faster in the super-homogeneous case (i.e. for standard cosmological density fields --- see discussion in Gabrielli et al., 2004). Results in the samples considered are shown in Figs.\ref{FIGnc1}-\ref{FIGnc4}, where for each sample we have normalized the counts to the solid angle of the corresponding angular region. One may note that the best fit exponent (reported in the figures) fluctuates, and in several cases it is larger than 2. This means that there are large fluctuations as evidenced by the non-smooth behaviors of $n(r)$ in the different samples. Similar evidence of the effect of large scale structures in these samples on other statistical quantities has recently been pointed out by Nichol et al. (2006).
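The best fit exponent quoted in the figures can be obtained from a least-squares fit in log-log space; a minimal sketch (the function name and binning are ours):

```python
import numpy as np

def radial_count_exponent(r, counts):
    """Fit the differential counts n(r) ~ r^(D-1) by least squares in
    log-log space and return the estimated exponent D."""
    slope, _ = np.polyfit(np.log(r), np.log(counts), 1)
    return slope + 1.0

# Toy check: a uniform (D = 3) distribution gives differential counts n(r) ~ r^2.
r = np.linspace(50.0, 500.0, 40)
D = radial_count_exponent(r, r**2)
```

For a uniform toy input the fit recovers $D = 3$; applied to the measured shell counts of a VL sample, the same fit yields the fluctuating exponents discussed in the text.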
This is a first rough indication that the question of uniformity at scales of order 100 Mpc/h is not simple to sort out in these samples. These large fluctuations in slope and amplitude correspond to the presence of large scale galaxy structures extending up to the boundaries of the various samples considered. We do not present a more quantitative discussion of these behaviors as the statistics is rather weak. \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG5.eps} \end{center} \caption{Differential number counts as a function of distance in the VL1 sample in different angular regions normalized to their own solid angle.} \label{FIGnc1} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG6.eps} \end{center} \caption{The same as Fig.\ref{FIGnc1} but for the VL2 samples.} \label{FIGnc2} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG7.eps} \end{center} \caption{The same as Fig.\ref{FIGnc1} but for the VL3 samples.} \label{FIGnc3} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG8.eps} \end{center} \caption{The same as Fig.\ref{FIGnc1} but for the VL4 samples.} \label{FIGnc4} \end{figure} \section{Correlation properties of galaxy distributions} We now study the behavior of the conditional average density in the various VL samples discussed in the previous section. We use the full-shell estimator, discussed extensively in Gabrielli et al. (2004) and recently in Vasilyev, Baryshev \& Sylos Labini (2006). This estimator has the advantage of making no assumptions in the treatment of boundary conditions and it is the most conservative among estimators of two-point correlations (see discussion in Kerscher, 1999).
Briefly, the conditional density in spheres $\langle n(r)^*\rangle_p$ is defined, for an ensemble of realizations of a given point process, as \begin{equation}\label{Gamma*-r} \langle n(r)^*\rangle_p = \frac{\langle{N(r)}\rangle_{p}}{V(r)}. \end{equation} This quantity measures the average number of points $\langle{N(r)}\rangle_{p}$ contained in a sphere of volume $V(r)=\frac{4}{3}\pi{r}^{3}$ with the condition that the center of the sphere lies on an occupied point of the distribution (and $\langle{...}\rangle_{p}$ denotes the conditional ensemble average). Such a quantity can be estimated\footnote{For simplicity we use the same symbol for the ensemble average and for the estimator of all statistical quantities defined in this section} in a finite sample by a volume average (supposing ergodicity of the point distribution) \begin{equation} \label{Gamma*E-r} \langle n(r)^*\rangle_p = \frac{1}{N_c(r)} \sum_{i=1}^{N_c(r)}{\frac{N_i(r)}{V(r)}}, \end{equation} where $N_c(r)$ is the number of points which, when chosen as centers of a sphere of radius $r$, have that sphere fully contained in the sample volume, so that the average runs over the sample points. (The estimation of the conditional density in shells $\langle n(r)\rangle_p$ proceeds in the same way, except for the fact of considering spherical shells instead of spheres centered on the points --- see e.g. Vasilyev, Baryshev \& Sylos Labini 2006). Therefore this full-shell estimator has an important constraint: it is measured only in spherical volumes fully included in the sample volume. In this situation the number of centers $N_c(r)$ over which the average of Eq.\ref{Gamma*E-r} is performed becomes strongly dependent on the scale $r$ when $r \rightarrow R_s$, where $R_s$ is the sample size. In this context such a length scale can be defined as the radius of the largest sphere fully included in the sample volume: the center of such a sphere lies in the middle of the sample volume.
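A minimal sketch of the full-shell estimator of Eq.\ref{Gamma*E-r} follows (the cubic geometry, sample size and scales are hypothetical, chosen only to check that the estimator recovers the mean density for a uniform sample):

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson toy sample in a cube of side L (all parameters hypothetical):
# for a uniform distribution the conditional density should be flat and
# equal to the mean density N / L^3, i.e. gamma = 0.
L, N = 50.0, 2000
pts = rng.random((N, 3)) * L
mean_density = N / L**3

def conditional_density(r):
    """Full-shell estimate: average N_i(r)/V(r) over only those centers
    whose sphere of radius r lies entirely inside the sample volume."""
    inside = np.all((pts >= r) & (pts <= L - r), axis=1)
    centers = pts[inside]
    n_i = np.array([(np.linalg.norm(pts - c, axis=1) < r).sum() - 1
                    for c in centers])       # -1 excludes the center itself
    V = 4.0 / 3.0 * np.pi * r**3
    return n_i.mean() / V, len(centers)

for r in (2.0, 5.0, 10.0):
    est, n_c = conditional_density(r)
    print(f"r={r:4.1f}  <n(r)*>_p={est:.5f}  mean density={mean_density:.5f}  N_c={n_c}")
```

For a Poisson sample the printed estimates are all close to the mean density $N/L^3$; for a clustered sample $\langle n(r)^*\rangle_p$ would instead decay as $r^{-\gamma}$.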
Thus, when approaching the scale $R_s$, there are two sources of fluctuations which increase the variance of the measurements. On the one hand the number of points over which the average is performed decreases very rapidly; on the other hand the remaining points are concentrated toward the center of the sample. In such a way systematic fluctuations may affect the estimation, given that these are not averaged out by the volume average. An estimation of the scale beyond which systematic effects become strong is thus important. The following subsection is focused on the discussion of the measurements of $\langle n(r)^* \rangle_p$ in the different VL samples, while Sect.\ref{rc} is devoted to the problem of the determination of the maximum scale up to which the volume average is properly performed, and thus beyond which systematic unaveraged fluctuations may affect the behavior of the conditional density. \subsection{Estimation of the conditional density} \label{egamma} The results of the measurements in redshift space of the conditional density by the full-shell estimator, in VL samples with the same cuts in absolute magnitude and distance but in different angular regions, are reported in Figs.\ref{FIGgamma1}-\ref{FIGgamma4}. The formal statistical error, reported in the figures, for the determination of $\langle n(r)^*\rangle_p$ at each scale, can be simply derived from the dispersion of the average \begin{equation} \label{errgamma} \Sigma^2(r) = \frac{1}{N_c(r)} \sum_{i=1}^{N_c(r)} \frac{\left( n(r)_i^* - \langle n(r)^*\rangle_p \right)^2} {N_c(r)-1} \;, \end{equation} where $n(r)_i^*$ represents the determination from the $i^{th}$ point. One may see that such an error is very small, except for the last few points. However, as discussed below, when $r \rightarrow R_s$ systematic fluctuations can be more important than statistical ones.
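Eq.\ref{errgamma} is just the standard error of the mean of the $N_c$ single-center determinations; a small numerical check with purely synthetic values of $n_i^*$ (the numbers below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-center determinations n_i* (arbitrary placeholder values).
n_i = rng.poisson(lam=10.0, size=500) / 4.0
N_c = n_i.size
mean = n_i.mean()

# Sigma^2(r) = (1/N_c) * sum_i (n_i* - <n*>)^2 / (N_c - 1),
# i.e. the sample variance of the n_i* divided by the number of centers.
sigma2 = np.sum((n_i - mean) ** 2) / (N_c * (N_c - 1))
print(f"<n*> = {mean:.3f} +/- {np.sqrt(sigma2):.3f}")
```

As $N_c(r)$ collapses for $r \rightarrow R_s$, $\Sigma(r)$ grows; even so, it does not capture the systematic (unaveraged) fluctuations discussed in the text.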
One may note the following behaviors: \begin{itemize} \item In the three VL1 samples the signal is approximately the same up to 10 Mpc/h, where the conditional density has a power-law behavior \begin{equation} \langle n(r)^*\rangle_p \sim r^{-\gamma} \end{equation} with exponent $\gamma = 1.0 \pm 0.1$. The sample R3VL1 has an $R_s$ of order 10 Mpc/h, while the sample R1VL1 about 25 Mpc/h and R2VL1 about 15 Mpc/h. In these two latter samples the signal is different in the range of scales 10-20 Mpc/h and clearly affected by large systematic fluctuations. \item For the three VL2 samples the situation is similar to the previous one. There is a difference in the amplitude of R1VL2 and R2VL2 of about a factor 2. Nevertheless the power-law index is very similar in all three samples, with $\gamma=1.0\pm0.1$. All samples present a deviation from a power-law at their respective $R_s= 35, \; 20, \; 15$ Mpc/h. These deviations are again a sign of finite size effects, reflecting systematic unaveraged fluctuations, as they occur at different scales in the three samples, but always at scales comparable to the sample size. \item For the case of the VL3 samples the behavior of the conditional density is smoother at small scales: up to 30 Mpc/h all three samples present the same power-law correlation with an index $\gamma = 1.0 \pm 0.1$. Thus the exponent is the same as in VL1 and VL2, but, given that $R_s$ for these samples is larger than for VL1 and VL2, it extends to larger scales. The amplitude of the conditional density is almost the same in the three samples up to $\sim \;$30$\div\;$40 Mpc/h. Beyond such a scale we note that R1VL3 shows a flattening behavior, similar to the case of R2VL3, although in the latter case there is a deviation at large scales (from about 40 Mpc/h). Finally the sample size for R3VL3 is about 30 Mpc/h and thus does not give any information on larger scales.
We may anticipate that in the following section we are going to present several tests to clarify whether the crossover to homogeneity, which seems to be clear in the sample R1VL3, is stable in different samples and whether systematic fluctuations are negligible. \item The sample VL4 is the deepest one and the behavior measured is similar to VL3, although there is a clear difference at large scales and fluctuations are more evident. Up to 30 Mpc/h the exponent is again $ \gamma = 1.0 \pm 0.1$, i.e. like VL1 and VL2 at smaller scales, and VL3 at the same scales. \end{itemize} Note that the difference in amplitude of the conditional density in the different samples VL1, VL2 and VL3 is simply explained by considering the effect of the luminosity function in the selection of the galaxies (see Gabrielli et al., 2004 for a detailed treatment of this point). From this discussion we may draw our main conclusion: the correlation properties are independent of galaxy luminosity and they are characterized by a power-law index in the behavior of the conditional density $\gamma = 1.0 \pm 0.1$ up to 30 Mpc/h. At larger scales, as shown for example in the two samples R1VL4 and R2VL4, the situation is less clear: fluctuations are more important because they are not smoothed out by the volume average. In the next subsection we define the range where the volume average is properly performed. \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG9.eps} \end{center} \caption{Conditional density in spheres in the VL1 sample in the angular regions R1, R2, R3.
Here and in Figs.\ref{FIGgamma2}-\ref{FIGgamma4} we report, for each sample, a vertical line corresponding to the distance scale $R_c$ discussed in Sect.\ref{rc} and shown in Tab.\ref{tbl_VLSamplesProperties4} (solid-line for R1, dotted-line for R2 and dashed-line for R3)} \label{FIGgamma1} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG10.eps} \end{center} \caption{As for Fig.\ref{FIGgamma1} but for the VL2 sample} \label{FIGgamma2} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG11.eps} \end{center} \caption{As for Fig.\ref{FIGgamma1} but for the VL3 sample} \label{FIGgamma3} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG12.eps} \end{center} \caption{As for Fig.\ref{FIGgamma1} but for the VL4 sample} \label{FIGgamma4} \end{figure} \subsection{Finite volume effects} \label{rc} In order to quantify the finite volume effects previously mentioned, we have divided each of the VL samples of the R1 field into two non-overlapping contiguous angular regions, and we have recomputed the conditional density in each of the $2 \times 4$ samples. The properties of these subsamples are listed in Tab.\ref{tbl_VLSamplesProperties4}. In Figs.\ref{FIGgammaR1-1}-\ref{FIGgammaR1-4} we show the results. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Region name & $\eta_1$ & $\eta_2$ & $\lambda_1$ & $\lambda_2$ & $N$ \\ \hline R1$\_$1VL1 & 9.0 & 22.5 & -47.0 & 8.0 & 1585 \\ R1$\_$2VL1 & 22.5 & 36.0 & -47.0 & 8.0 & 1545 \\ R1$\_$1VL2 & 9.0 & 22.5 & -47.0 & 8.0 & 7684 \\ R1$\_$2VL2 & 22.5 & 36.0 & -47.0 & 8.0 & 7497 \\ R1$\_$1VL3 & 9.0 & 22.5 & -47.0 & 8.0 & 13982 \\ R1$\_$2VL3 & 22.5 & 36.0 & -47.0 & 8.0 & 13993 \\ R1$\_$1VL4 & 9.0 & 22.5 & -47.0 & 8.0 & 3343 \\ R1$\_$2VL4 & 22.5 & 36.0 & -47.0 & 8.0 & 3399 \\ \hline \end{tabular} \end{center} \caption{Main properties of the different subsamples considered in the R1 region.
The angular limits of the cuts are given in the intrinsic coordinates of the survey, $\eta$ and $\lambda$ (in degrees). The last column gives the number of points in the sample.} \label{tbl_VLSamplesProperties4} \end{table} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG13.eps} \end{center} \caption{Conditional density in spheres in the R1VL1 sample and in the 2 subsamples defined by the angular cut performed as discussed in the text. The lines labeled with $N_c$ represent the behavior of the number of centers used in the average (Eq.\ref{Gamma*E-r}), arbitrarily normalized.} \label{FIGgammaR1-1} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG14.eps} \end{center} \caption{As Fig.\ref{FIGgammaR1-1} but for the R1VL2 sample} \label{FIGgammaR1-2} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG15.eps} \end{center} \caption{As Fig.\ref{FIGgammaR1-1} but for the R1VL3 sample} \label{FIGgammaR1-3} \end{figure} \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG16.eps} \end{center} \caption{As Fig.\ref{FIGgammaR1-1} but for the R1VL4 sample} \label{FIGgammaR1-4} \end{figure} Let us now discuss the situation in some detail. As already mentioned, the average computed by Eq.\ref{Gamma*E-r} is made by changing, at each scale $r$, the number $N_c(r)$ of points which contribute. This scale dependence follows from the requirement that only those points are included whose sphere of radius $r$ does not overlap or intersect the boundaries of the sample. In this way, in a sample of size $R_s$, when $r\ll R_s$ almost all points will contribute to the average, while when $r\rightarrow R_s$ only those points lying close to the center of the volume will be taken into account in the average. Hence at large scales the average is performed on a number of points which decays rapidly when $r\rightarrow R_s$.
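The scale dependence of $N_c(r)$ described above can be made concrete with a toy cubic sample (geometry and numbers are hypothetical): only points farther than $r$ from every face can serve as centers of a fully enclosed sphere, so the usable fraction falls as $((L-2r)/L)^3$ and collapses when $r$ approaches half the box side:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cubic sample of side L (hypothetical numbers): only points farther
# than r from every face can be centers of a fully enclosed sphere.
L, N = 100.0, 10_000
pts = rng.random((N, 3)) * L

def n_centers(r):
    return int(np.all((pts >= r) & (pts <= L - r), axis=1).sum())

# expected usable fraction is ((L - 2r)/L)^3, collapsing as r -> L/2
for r in (5, 20, 35, 45, 49):
    print(f"r={r:2d}  N_c={n_centers(r):5d}  fraction={n_centers(r) / N:.4f}")
```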
In Figs.\ref{FIGgammaR1-1}-\ref{FIGgammaR1-4} we show the behavior of the number of centers $N_c(r)$ as a function of scale, normalized by an arbitrary factor for the sake of clarity. The normalization is simple because at small scales $N_c(r) = N$, where $N$ is the number of points contained in a given VL sample: in fact at such small scales all points contribute to the statistics. One may note that, at a scale comparable to but smaller than the sample size, there is an abrupt decay of this quantity: this means that only few points contribute to the average at large scales. That systematic fluctuations are more important than statistical ones can be noticed from the behavior of the conditional density in Figs.\ref{FIGgammaR1-1}-\ref{FIGgammaR1-4}, by comparing the behaviors in the original sample (e.g. R1VL1) and in the two separate subsamples (e.g. R1$\_$1VL1 and R1$\_$2VL1). When the distance scale approaches the boundaries of the samples one may note that there are systematic variations which are larger than the (small) error bars derived from Eq.\ref{errgamma}. As already mentioned, in some cases there is evidence of a flatter behavior, while in other cases the conditional density shows a decay up to the sample boundaries which is slower than at smaller scales. This situation places a serious caveat on the interpretation of the large scale tail of the conditional density. The question is how to quantify the regime where systematic fluctuations are important and may affect the behavior of the conditional density. One may define a criterion for the statistical robustness of the volume average by imposing, for example, that $N_c(r)$ be larger than a certain value. While this can certainly give a useful indication, the problem of the volume average is more subtle.
In fact when $r\rightarrow R_s$ there can be enough points for $N_c(r)$ to exceed a given pre-defined value: however it may happen that all these points lie, for example, in a cluster located close to the sample center. In this situation the volume average is not properly performed, in the sense that all points ``see'' almost the same volume. A possibility to clarify such a situation has been proposed by Joyce et al. (1999). One may compute the average distance between the $N_c(r)$ centers at the scale $r$: \begin{equation} R_c(r) = \frac{1}{N_c(r)(N_c(r)-1)} \sum_{i,j=1}^{N_c(r)} |\vec{r}_i - \vec{r}_j| \end{equation} where $\vec{r}_i$ and $\vec{r}_j$ ($i \neq j$) are two of the $N_c(r)$ points. A criterion for the statistical validity of the volume average is then \begin{equation} R_c\ge2\times r \;, \end{equation} which requires that the average distance between sphere centers is larger than twice the scale at which the conditional density is computed, thus assuring the independence of the different terms in the average. The values of $R_c$ for the different samples are reported in Tab.\ref{tbl_VLSamplesProperties3} and this length scale is indicated as a vertical line in Figs.\ref{FIGgamma1}-\ref{FIGgamma4}. In practice all samples show an $R_c$ smaller than 40 Mpc/h, with the exception of R1VL3 and R1VL4 for which $R_c=54,\ 65$ Mpc/h respectively. However in these two samples the conditional density behaves differently at large scales (see Fig.\ref{gammavl3vl4}), in the sense that the change of slope occurs at different scales and thus at a different value of the average density. \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG17.eps} \end{center} \caption{Conditional density in spheres in the R1VL3 and R1VL4 samples, normalized to have the same amplitude at 1 Mpc/h.
One may see that the large scale behavior ($r> $30 Mpc/h) is different due to the effect of systematic fluctuations.} \label{gammavl3vl4} \end{figure} Thus it is very hard to draw conclusions about the correlation properties at such large scales. However we note that there is enough evidence that the signal is smoother on scales $>$ 40 Mpc/h and that sample-to-sample fluctuations, as well as the variations in radial counts (discussed in Section 2), are smaller, thus indicating a tendency toward a more uniform distribution. However these data do not unambiguously support homogeneity at scales of order $70$ Mpc/h, as Hogg et al. (2005) found by analyzing the LRG sample, because the change in correlation properties occurs at scales comparable to the scales $R_s$ and $R_c$. We conclude that these data support evidence for a change of slope, with a clear tendency for $\gamma <1$, but with an undefined value. These tests indicate that the availability of larger samples, provided, for example, by DR5, will allow one to understand these systematic variations. In particular, we may see that to study scales of order 100 Mpc/h, samples with $R_s \approx$ 300 Mpc/h are needed. The full SDSS data will provide us with such large and complete catalogs. \section{Correlation properties of cosmological N-body simulations} Gravitational clustering in the regime of strong fluctuations is usually studied through gravitational N-body simulations. The particles are not meant to describe galaxies but collision-less dark-matter mass tracers. During gravitational evolution complex non-linear dynamics build non-linear structures at small scales, while at large scales a linear amplification occurs, according to linear perturbation theory. Thus, while on large scales correlation properties do not change from the beginning --- apart from a simple linear scaling of amplitudes --- at small scales non-linear correlations are built.
Typically in these simulations non-linear clustering is formed up to scales of order of a few Mpc. At late times one can identify subsamples of points which trace the high density regions, and these would represent the sites for galaxy formation, whose statistical properties are ultimately compared with the ones found in galaxy samples. In order to study this problem we consider the GIF galaxy catalog (\cite{gif}) constructed from a $\Lambda$CDM simulation run by the Virgo consortium (\cite{virgo}). The way in which this is done is to first identify the halos, which represent almost spherical structures with a power-law density profile from their center. The number of galaxies belonging to each halo is set proportional to the total number of points belonging to the halo raised to a certain power. This procedure identifies points lying in high density regions of the dark-matter particles. One may assign to each point a luminosity and a color on the basis of a certain criterion which is not relevant for what follows (see \cite{sheth} and references therein). The resulting catalog is divided into two subsamples based on ``galaxy'' color B-I, as in Sheth et al. (2001): (brighter) red galaxies (for which B-I is redder than 1.8) and (fainter) blue galaxies (B-I bluer than 1.8). In summary, four samples of points may be considered: (i) the original dark matter particles with $N$=$256^3$ particles; (ii) all galaxies with $N$=15445; (iii) blue galaxies with $N$=11023; and (iv) red galaxies with $N$=4422. In order to understand the correlation properties in the sampled point distributions it is useful to study the behavior of the conditional density which, as already discussed, has a straightforward interpretation in terms of correlations: results are shown in Fig.\ref{gammasimu}.
\begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG18.eps} \end{center} \caption{Conditional density for the four samples of points selected in the simulation: the original dark matter (DM) field, all ``galaxies'' (ALL), blue galaxies (BLUE) and red galaxies (RED). The conditional density for dark matter particles (DM) has been normalized arbitrarily. The reference dashed-dotted line has a slope $\gamma=1.7$. The dashed line with $\gamma=1$, corresponding to the slope measured in the galaxy samples, is also reported.} \label{gammasimu} \end{figure} The red galaxies are responsible for the strong correlations observed in the full sample, as the conditional density is almost the same as for all galaxies at small scales. At large scales there is instead a fast decrease, as the sample average of red galaxies is smaller than the one of all galaxies (there are fewer objects). For red galaxies the sampling is local, i.e. their conditional density is (almost) invariant at small scales. Clearly, as there are globally fewer objects, the sample density of red galaxies is smaller than that of all galaxies. On the other hand blue galaxies present only some residual correlations at small scales, and they are more numerous than red galaxies. The small scale properties of these distributions can be studied by analyzing the NN probability distribution (see Fig.\ref{FIGnnnbs}). \begin{figure} \begin{center} \includegraphics*[angle=0, width=0.5\textwidth]{FIG19.eps} \end{center} \caption{Nearest-neighbor probability distribution for three point sets selected in the simulation (see discussion in the text): all ``galaxies'' (ALL), blue galaxies (BLUE) and red galaxies (RED).} \label{FIGnnnbs} \end{figure} One may note that blue galaxies have a bell-shaped distribution, typical of the case where correlations are very weak.
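The qualitative difference between a bell-shaped NN distribution (weak correlations) and one with a prominent small-scale tail (strong clustering) can be reproduced with two toy point sets; all parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

L, N = 100.0, 1200
poisson = rng.random((N, 3)) * L                 # weakly correlated (Poisson) set

# clustered set: points scattered tightly around a few parent centers
parents = rng.random((50, 3)) * L
clustered = (parents[rng.integers(0, 50, N)]
             + rng.normal(0.0, 1.0, (N, 3))) % L

def nn_distances(pts):
    # brute-force pairwise distances; fine for ~10^3 points
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-distance
    return d.min(axis=1)

d_pois = nn_distances(poisson)
d_clus = nn_distances(clustered)
print(f"median NN distance: Poisson {np.median(d_pois):.2f}, "
      f"clustered {np.median(d_clus):.2f}")
```

The clustered set piles up probability at small separations, mimicking the small-scale tail seen for the red and full mock samples.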
Instead, red and all galaxies present almost the same function, with a long small-scale tail, which is the typical feature indicating the presence of strong two-point correlations (see the discussion in Baertschiger and Sylos Labini, 2002). This situation is different from the one detected in the samples of DR4, as shown in Figs.\ref{FIGnn1}-\ref{FIGnn4}, where the NN probability distribution has the same shape for all the different samples considered. The main points we stress are the following: \begin{itemize} \item The slope of the conditional density in all the artificial samples considered here is different from the $\gamma=1.0 \pm 0.1$ measured in the real galaxy data. In particular for those mock samples (red galaxies, all galaxies and dark matter particles) where correlations are power-law, the slope is $\gamma=1.7\pm 0.1$ in the range [0.01,5] Mpc/h, while a clear transition toward homogeneity occurs at scales of order 10 Mpc/h. These different slopes may originate from the fact that we compare a measure in redshift space, in the case of real data, which can be affected by redshift distortions, with the mock catalogs, where the conditional density has been measured in real space. We will examine this point in more detail in a forthcoming paper. \item Small-scale properties, as detected by the NN probability distribution, are different in the real and artificial samples. \item The conditional densities of mock blue and red galaxies are different at all scales and blue galaxies show almost no correlations. \item Both mock red and blue galaxies show a well-defined transition to homogeneity at a scale of order 10 Mpc/h. As we have already mentioned, this is not the behavior observed in the data. In particular, the range of non-linear structures seems to be much larger in the real data than in the simulations.
\end{itemize} In conclusion, while the comparison between correlation properties of real galaxies and mock galaxy catalogs constructed from points selected in N-body simulations is usually performed through the analysis of the reduced two-point correlation function, here we have presented the comparison of the conditional density and of the NN probability distributions. We find that some important disagreements between data and simulations are evident when the behavior of these statistical quantities is considered. This is not the same conclusion that one may reach by analyzing the reduced correlation function $\xi(r)$: the reason is that in the estimation of $\xi(r)$ one uses the estimation of the sample average, which introduces a finite-size effect that may affect both the amplitude and the slope of this function (see e.g. Gabrielli et al., 2004 for a detailed discussion of this point). The estimation of the conditional density is less affected by finite-volume effects and the comparison between different samples is straightforward. Note that the data are analyzed in redshift space and the simulations in real space. However, given that velocities are typically smaller than $500$ km/s, the difference between real and redshift space cannot be accounted for by the effects of peculiar velocities on scales larger than 5 Mpc/h. The problem of the relation between real and redshift space, considering the finite size effects present when strong correlations characterize the data, has been discussed in Vasilyev, Baryshev \& Sylos Labini (2006). \section{Discussion and Conclusions} Our main results are the following: (i) In all VL samples we find that in the range of scales $0.5 \le r \lesssim 30$ Mpc/h the conditional density shows power-law correlations with a power-law index $\gamma =1.0\pm 0.1$. This result is in good agreement with the behavior found in other smaller samples by Sylos Labini et al. (1998), Joyce et al.
(1999) and in the SDSS LRG sample by Hogg et al. (2005), and with the correlation properties measured by Vasilyev, Baryshev \& Sylos Labini (2006) in the 2dFGRS. Note that we do not confirm the results of Zehavi et al. (2004), who found a departure from a power-law in the galaxy correlation function at a scale of order 1 Mpc/h: their analysis has been performed in real space while ours is in redshift space. In this range of scales nearest-neighbor correlations dominate the behavior of the conditional density, and thus also of the reduced correlation function, and for a detailed understanding of this regime a study of the nearest-neighbor statistics is necessary. In addition, we find neither a luminosity nor a color dependence of the galaxy conditional density in the regime where the statistics is robust. In this respect Zehavi et al. (2005) have considered the behavior of the reduced two-point correlation function, and concluded that there is a color (luminosity) dependence of galaxy correlations. This apparent disagreement can be understood by considering that the reduced two-point correlation function can be strongly affected by finite-size effects in the regime where the conditional density presents power-law correlations (see the discussion, e.g., in Joyce et al., 2005). Moreover, the results by Zehavi et al. (2005) have been obtained in real space: in Vasilyev, Baryshev \& Sylos Labini (2006) we discussed the kind of finite size effects which perturb the estimation of $\xi(r)$ when the conditional density has power-law correlations. (ii) In the range $30 \lesssim r \lesssim 100$ Mpc/h the situation is less clear: as we discussed, finite volume effects are important in this range of scales and systematic unaveraged fluctuations may affect the results.
We have presented several tests to show the role of finite volume effects and to determine the range of scales where they perturb the estimation of the conditional density, finding that in all but two samples the volume average is properly performed up to $R_c \approx 40$ Mpc/h. In the remaining two samples we have shown that systematic fluctuations persist up to their boundaries $R_s$. Thus in the range $30 \lesssim r \lesssim 100$ Mpc/h we find evidence for a more uniform distribution and hence a smaller power-law index ($\gamma <1$) in the conditional density. This is a stable result in all samples considered. However a detailed analysis of the behavior of the conditional density in all samples does not allow us to conclude either that there is a definitive crossover to homogeneity at a scale of order 70 Mpc/h, as Hogg et al. (2005) concluded by considering the LRG sample, or that there is a change of power-law index beyond 30 Mpc/h which remains stable up to the sample limits, i.e. up to 100 Mpc/h. Both possibilities are still open and will be clarified by forthcoming data releases of SDSS, as the solid angle is going to grow considerably. (iii) The comparison of mock galaxy catalogs, constructed from particle distributions extracted from cosmological N-body simulations, with real galaxy data outlines a problematic situation. On the one hand we have discussed the fact that the slope of the conditional density in the simulations is different from the one measured in real catalogs. On the other hand we have also stressed that, when constructing artificial galaxy samples from dark matter particles in N-body simulations, there are different behaviors in the conditional density according to the different selection criteria used, and thus to the different ways to assign ``luminosity'' and ``color'' to the artificial galaxies.
In any case, this behavior is not in agreement with the data, as in all the samples analyzed here the same slope of the conditional density is measured. The same situation is present when the NN probability distribution is considered. Moreover, in N-body simulations structures are considerably smaller than in real data, as shown by the definitive crossover to homogeneity at about 10 Mpc/h found in the N-body particle distributions, contrary to the galaxy case where the crossover may happen on much larger scales, of order 100 Mpc/h. It is worth noticing that we have used a very conservative statistical analysis which introduces important constraints on the way we treat the data. For example, if the distribution were uniform on scales smaller than the actual sample sizes, the conditional density estimation could be done for all points in the sample, even on large scales, not just the points near the center of the sample, because it could be assumed that the volume outside the survey region is statistically similar to the volume inside. This is the standard approach with conventional two-point statistics in the literature. On the other hand we have used, for example, periodic boundary conditions in the analysis of artificial simulations, as in this case the distribution is periodic, beyond the simulation box, by construction. However, as we do not know whether this is the case for the galaxy distribution, and actually we would like to test this point, we have used more conservative statistics to analyze the real data. This, instead of being a limitation, allows us to derive results about galaxy correlation properties which are unbiased by finite size effects. Indeed, when using less conservative methods, one is implicitly making the assumption that finite size effects, induced by long range correlations in the galaxy distribution, are negligible.
Here we instead test whether this is the case in the data we consider, and actually we find evidence that, because of the long range nature of galaxy correlations, there are subtle finite size effects which place a serious warning on the use of less conservative statistical methods. Having used more conservative statistics, we are able to obtain results which are less biased by finite size effects (which ultimately arise from the presence of large fluctuations represented by large scale structures) than the ones derived by a statistical analysis which makes use of some untested assumptions to derive its results. For example, we find that the exponent of the conditional density is $-1$ instead of $-1.7$, as derived through a more ``relaxed'' analysis at the same scales. The measurements of the conditional density have been performed in real space in the mock catalogs and in redshift space in the real samples, and this can be the origin of the different values of the correlation exponents. Whether this is the case, or whether a finite size effect is playing a crucial role, will be studied in a forthcoming paper. Finally we would like to briefly discuss our results in relation to theoretical models of fluctuations in standard cosmologies. It has been shown (see e.g. Gabrielli et al. 2004) that the only feature of the primordial correlations, defined in theoretical models like the cold dark matter (CDM) one, which can be detected in galaxy data is represented by the large scale tail of the reduced correlation function. In fact, in terms of the correlation function $\xi(r)$, CDM models present the following behavior: $\xi(r)$ is positive at small scales, crosses zero at a certain scale and then is negative, approaching zero with a tail which goes as $r^{-4}$ in the region corresponding to $P(k) \sim k$ (see e.g. Gabrielli et al. 2004).
The super-homogeneity (or Harrison-Zeldovich) condition says that the volume integral over all space of the correlation function is zero \begin{equation} \int_0^{\infty} d^3r\, \xi(r) = 0 \;. \end{equation} This means that there is a fine-tuned balance between small-scale positive correlations and large-scale negative anti-correlations. This is the behavior that one would like to detect in the data in order to confirm inflationary models. Up to now this search has been done through the analysis of the galaxy power spectrum (PS), which should scale as $P(k) \sim k$ at small $k$ (large scales). No observational test of this behavior has been provided yet. However for this case one should consider an additional complication. In standard models of structure formation galaxies result from a {\it sampling} of the underlying CDM density field: for instance one selects only the highest fluctuations of the field, which would represent the locations where galaxies will eventually form. It has been shown that sampling a super-homogeneous fluctuation field changes the nature of correlations (Durrer et al., 2003). The reason for this can be found in the super-homogeneity property of such a distribution: the sampling necessarily destroys the surface nature of the fluctuations, as it introduces a volume (Poisson-like) term in the mass fluctuations, giving rise to a Poisson-like PS on large scales, $P(k)\sim$ constant. The ``primordial'' form of the PS is thus not apparent in that which one would expect to measure from objects selected in this way. This conclusion should hold for any generic model of bias, and its quantitative importance has to be established in any given model (Durrer et al., 2003).
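A toy illustration of the super-homogeneity condition (not taken from the text: the spectrum $P(k) = k\,e^{-k}$ is chosen only because its Fourier transform is analytic, giving $\xi(r) = (3-r^2)/[\pi^2(1+r^2)^3]$). This $\xi(r)$ is positive at small $r$, negative at large $r$ with the $-1/(\pi^2 r^4)$ tail discussed above, and its volume integral vanishes:

```python
import numpy as np
from scipy.integrate import quad

# For the toy spectrum P(k) = k * exp(-k), the Fourier transform gives
# xi(r) = (3 - r^2) / (pi^2 * (1 + r^2)^3): positive at small r, zero at
# r = sqrt(3), and negative at large r with a -1/(pi^2 r^4) tail.
def xi(r):
    return (3.0 - r**2) / (np.pi**2 * (1.0 + r**2) ** 3)

# Super-homogeneity: the volume integral of xi over all space vanishes.
total, err = quad(lambda r: 4.0 * np.pi * r**2 * xi(r), 0.0, np.inf)
print(f"volume integral of xi = {total:.2e}  (quadrature error {err:.0e})")
print(f"xi(1) = {xi(1.0):+.4f}   xi(10) = {xi(10.0):+.2e}")
```

The cancellation between the positive core and the negative tail is the fine-tuned balance mentioned above; sampling the field would add a constant (Poisson-like) term to $P(k)$ at small $k$ and spoil exactly this cancellation.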
On the other hand one may show (Durrer et al., 2003) that the negative $r^{-4}$ tail in the correlation function does not change under sampling: on large enough scales, where in these models (anti-)correlations are small enough, the biased fluctuation field has a correlation function which is linearly amplified with respect to the underlying dark matter correlation function. For this reason the detection of such a negative tail would be the main confirmation of models of the primordial density field. This will be possible if, first, a clear determination of the homogeneity scale is obtained, and then if the data are statistically robust enough to allow the determination of the correlation when $\xi(r) \ll 1$. While Eisenstein et al. (2005) claimed to have measured $\xi(r) \approx 0.01$ at scales of order 100 Mpc/h in a sample of SDSS LRG galaxies, here we cannot confirm these results, as our analysis does not extend to such large scales with robust statistics. However, from the large fluctuations observed, for example in the behavior of the radial counts and in sample-to-sample variations of the conditional density at such large scales, we conclude that this result deserves more study, and perhaps much larger samples, to be confirmed. \section*{Acknowledgments} We thank Andrea Gabrielli, Michael Joyce and Luciano Pietronero for useful discussions and comments. Yu.V.B. and N.L.V. thank the ``Istituto dei Sistemi Complessi'' (CNR, Rome, Italy) for the kind hospitality during the writing of this paper. FSL acknowledges the financial support of the EC grant No. 517588 ``Statistical Physics for Cosmic Structures'' and the MIUR-PRIN05 project on ``Dynamics and Thermodynamics of systems with long range interactions''. Yu.V.B. and N.L.V. also acknowledge the partial financial support of Russian Federation grants NSh-8542.2006.2 and RNP.2.1.1.2852.
\section{Introduction} Sensing and tracking of a moving object/human by a robot is an important topic of research in the field of robotics and automation for enabling collaborative work environments~\cite{mcclure2009darpa}, including for applications such as fire fighting and exploration of unknown terrains \cite{murphy2004trial, penders2011robot, thrun2004autonomous}. In disaster management, robots can assist by tracking and following first-responders while the team explores an unknown environment~\cite{kumar2004robot}. To achieve this, staying in proximity to the first responders is the key. Another application context of this field of research is in the Leader--Follower collaborative robotics architecture~\cite{ren2008distributed} where a follower robot is required to track and follow a respective Leader. Robotic tracking is also required in smart home environments where robots assist humans in daily activities. In this paper, we focus on this class of tracking problems where the term \textbf{``tracking''} refers to the relative position sensing and control of a robot that is required to stay in proximity to an uncontrolled moving target such as a Leader robot or human. \textbf{{Our Contribution:}} We propose the Autonomous Rssi based RElative poSitioning and Tracking (ARREST), \emph{a purely Radio Signal Strength Information (RSSI) based single node RF sensing system for joint location, angle and speed estimation and \textbf{bounded distance tracking} of a target moving arbitrarily in 2-D that can be implemented using commodity hardware. } In our proposed system, the target, which we refer to as the \textbf{Leader}, carries an RF-emitting device that sends out periodic beacons. The tracking robot, which we refer to as the \textbf{TrackBot}, employs an off-the-shelf directional antenna, novel relative position and speed estimation algorithms, and a Linear Quadratic Gaussian (LQG) controller to measure the RSSI of the beacons and control its maneuvers. 
{Further, to evaluate the ARREST system in a range of large scale and uncontrolled environments, we developed an integrated Time Difference of Arrival (TDoA) based ground truth estimation system for line of sight (LOS) scenarios that can be easily extended to perform a range of large scale indoor and outdoor robotics experiments, without the need for a costly and permanent VICON~\cite{vicon} system.} \begin{figure}[!ht] \centering \includegraphics[width=0.9\linewidth]{robo_follower_fig/figures/robot_sch.jpg} \caption{ The TrackBot Prototype} \label{fig:robot} \end{figure} \textbf{Performance Evaluation Overview:} To analyze and evaluate the ARREST architecture, we develop a hardware prototype (detailed in Section~\ref{sec:hardware}) and perform a set of exhaustive real world experiments as well as emulations. We first perform a set of emulation experiments (detailed in Section~\ref{sec:emulation}) based on real world RSSI data traces collected in various environments. The emulations demonstrate that the TrackBot is able to estimate the target's location with decimeter-scale accuracy, and stay within $5m$ of the Leader (with $\geq 99\%$ probability and with bounded errors in estimations) as long as the Leader's speed is less than or equal to $3m/s$ and the TrackBot's speed is 1.8 times the Leader's speed. \emph{Next, using the same parameter setup as in the emulations, we perform a set of small scale real-world tracking experiments (detailed in Section~\ref{sec:real_exp}) in three representative environments: a cluttered indoor room, a long hallway, and a VICON~\cite{vicon} based robotic experiment facility.} {This is followed by a range of large-scale long duration experiments in four representative environments, detailed in Section~\ref{sec:real_exp_large}.} These experiments demonstrate the practicality of our ARREST architecture and validate the emulation results. 
Moreover, these experiments prove that our ARREST system works well (with $\geq70\%$ probability) in cluttered environments (even in the absence of line of sight) and identify some non-line of sight scenarios where our system can fail. {To improve the success rate of our ARREST system in severe non-line of sight (NLOS) situations, we propose a movement randomization technique, detailed in Section~\ref{sec:multipath_adapt}.} We also compare the ARREST system's performance for varying relative position estimation accuracies offered by different sensing modalities such as camera or infrared in Section~\ref{sec:modality}. \section{Related Works} The most popular class of tracking architectures employs vision and laser range finder systems~\cite{papanikolopoulos1993visual,jung2004detecting,Kleinehagenbrock1}. Researchers have proposed a class of efficient sampling and filtering algorithms for vision based tracking such as Kalman filtering and particle filtering~\cite{jung2004detecting,schulz2001}. There also exist some works that combine vision with range finders~\cite{Kleinehagenbrock1,WendaXu:2015ufa,prassler2001fast}. However, the effectiveness of these sensors crumbles when visibility deteriorates or direct line of sight does not exist~\cite{lindstrom2001detecting}. Moreover, the use of these types of sensors and the processing of their data, namely image processing, increases the form factor and power consumption of the robots, which inherently operate under power constraints. \emph{In contrast, our proposed RSSI based ARREST system can be developed with low-cost, small form-factor hardware and can be applied in scenarios with limited visibility and non-line-of-sight environments such as cluttered indoor environments and disaster rubble. } Another class of related works lies within the large body of works in the field of RF Localization in wireless sensor networks~\cite{han2013localization} where robots are employed for localizing static nodes. 
Graefenstein~\emph{et al.}~\cite{graefenstein2009wireless} employed a rotating antenna on a mobile robot to map the RSSI of a region and exploit the map to localize the static nodes. {Similar works have been proposed in the context of locating radio tagged fish or wild animals~\cite{tokekar2011active,tokekar2014multi,vander2014cautious}. The works of Zickler and Veloso~\cite{zickler2010rss}, and Oliveira \emph{et al.}~\cite{oliveira2014rssi} on RF-based relative localization are also noteworthy. In~\cite{twigg2012rss}, an RSSI-based static single radio source localization method is presented by Twigg \emph{et al.}, whereas the localization of multiple transient static radio sources is discussed in the work of Song, Kim and Yi~\cite{song2012simultaneous}.} Some researchers have also employed infrared~\cite{pugh2009fast} and ultrasound devices~\cite{rivard2008ultrasonic} for relative localization. One of the most recent significant works on relative localization, which is presented in~\cite{vasisht2016decimeter}, applies a MIMO-based system to localize a single node. {A simulation of an RSSI-based constant-distance following technique is demonstrated in~\cite{rssi_const_follow} where the leader movement path is predetermined and known to the Follower. \emph{However, unlike these works, the TrackBot in the ARREST system relies solely on RSSI data not only for the localization of the mobile Leader with an unknown movement pattern, but also for autonomous motion control with the goal of maintaining a bounded distance.} The closest state-of-the-art related to our work is presented in~\cite{min2014robotic}. In this work, the authors developed a system that follows the bearing of a directional antenna for effective communication. However, to our knowledge, the maintenance of guaranteed close proximity to the Leader was not discussed in~\cite{min2014robotic}, which is the most important goal in our work. 
Also, this work employs both RSSI and sonar to determine the orientation of the transmitter antenna. Lastly, compared to their proposed hardware solution, which is based on a large robot that carries a laptop as the controller, our solution is low-power, small in size, and requires much less processing power.} Regarding LQG-related works, Bertsekas~\cite{bertsekas1995dynamic} has demonstrated that an LQG controller can provide the optimal control of a robot along a known/pre-calculated path, when the uncertainty in the motion as well as the noise in observations are Gaussian. Extending this concept, Van Den Berg \emph{et al.}~\cite{van2011lqg,van2012lqg} and Tornero \emph{et al.}~\cite{tornero2001multirate} proposed LQG based robotic path planning solutions to deal with uncertainties and imperfect state observations. \emph{To the best of our knowledge, we are the first to combine RSSI-based relative position, angle, and speed estimation with the LQG controller for localizing and tracking a moving RF-emitting object. } \section{Problem Formulation} \label{sec:prob_form} In this section, we present the details of our tracking problem and our mathematical formulation based on both a 2D global frame of reference, $\mathcal{R}_{G}$, and the TrackBot's 2D local frame of reference at time $t$, $\mathcal{R}_{F}(t)$. Let the location of the \emph{Leader} at time $t$ be represented as {\small $\mathbf{X}_{L}(t)=(x_L(t),y_L(t))$} in $\mathcal{R}_{G}$. The \emph{Leader} follows an unknown path, $\mathcal{P}_L$. Similarly, let the position of the TrackBot at any time instant $t$ be denoted by {\small $\mathbf{X}_F(t)=(x_F(t),y_F(t))$}. The maximum speeds of the Leader and the TrackBot are $v_L^{max}$ and $v_F^{max}$, respectively. For simplicity, we discretize time with steps of $\delta t > 0$ and use the notation $n$ to refer to the $n^{th}$ time step, i.e., $t = n \cdot \delta t$. 
Let {\small $d[n]=||\mathbf{X}_L[n]-\mathbf{X}_F[n]||_2$} be the distance between the TrackBot and the Leader at time-slot $n$, where $||.||_2$ denotes the $L_2$ norm. Then, with $D_{th}$ denoting the max distance allowed between the Leader (L) and the TrackBot (F), the objective of tracking is to plan the TrackBot's path, $\mathcal{P}_F$, such that {\small$\mathbb{P} \left(d[n] \leq D_{th}\right) \approx 1 \ \ \forall n$} where $\mathbb{P}(.)$ denotes the probability. \begin{figure}[!ht] \centering \includegraphics[width=0.7\linewidth]{robo_follower_fig/figures/co_ordinate.pdf} \caption{Coordinate System Illustration} \label{fig:co_ordinate} \end{figure} Realistic deployment scenarios typically do not have a global frame of reference. Thus, we formulate a local frame of reference, $\mathcal{R}_{F}[n]$, with the origin representing the location of the TrackBot, {\small $\mathbf{X}_F[n]$}. Let the robot's forward and backward movements at any time instant $n$ be aligned with the X-axis of $\mathcal{R}_{F}[n]$. Also, let the direction perpendicular to the robot's forward and backward movements be aligned with the Y-axis of $\mathcal{R}_{F}[n]$. This local frame of reference is illustrated in Fig.~\ref{fig:co_ordinate}. Note that in our real system all measurements by the TrackBot are in $\mathcal{R}_{F}[n]$. In order to convert the position of the Leader in $\mathcal{R}_{F}[n]$ from $\mathcal{R}_{G}$ or vice versa for simulations and emulations, we need to apply coordinate transformations. Let the relative angular orientation of $\mathcal{R}_{F}[n]$ with respect to $\mathcal{R}_{G}$ be $\theta_{rot}[n]$ and the position of the Leader in $\mathcal{R}_{F}[n]$ be {\small $\mathbf{X}_{L}^{rel}[n]=(x_L^{rel}[n],y_L^{rel}[n])$}. 
Then: { \small \begin{equation} \begin{bmatrix} x_L[n] \\ y_L[n]\\ 1 \\ \end{bmatrix} = \begin{bmatrix} \cos (\theta_{rot}[n]) & -\sin (\theta_{rot}[n]) & x_F[n]\\ \sin (\theta_{rot}[n]) & \cos (\theta_{rot}[n]) & y_F[n]\\ 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x_L^{rel}[n] \\ y_L^{rel}[n]\\ 1 \\ \end{bmatrix} \end{equation} } \noindent and {$\theta_{rel}[n]=\arctan (y_L^{rel}[n]/x_L^{rel}[n])$} is the Leader's direction in {\small$\mathcal{R}_{F}[n]$}. To restate the objective of tracking in terms of the local coordinates, {$\mathbb{P} \left(d[n] \leq D_{th}\right) \approx 1 \ \ \forall n$} where { $d[n]=||\mathbf{X}_{L}^{rel}[n]||_2 = ({x_L^{rel}[n]}^2 + {y_L^{rel}[n]}^2)^{1/2}$.} \section{The ARREST System} In this section, we discuss our proposed system solution for RSSI based relative position sensing and tracking. In the ARREST system, the Leader is a robot or a human carrying a device that periodically transmits RF beacons, and the TrackBot is a robot carrying a directional, off-the-shelf RF receiver. As shown in Fig.~\ref{fig:arrest}, the ARREST architecture consists of three layers: Communication ANd Estimation (CANE), Control And STate update (CAST), and Physical RobotIc ControllEr (PRICE). In order to track the Leader, the TrackBot needs sufficiently accurate estimations of both the Leader's relative position ($\mathbf{X}_{L}^{rel}$) and relative speed ($v_{rel}$). Thus, at any time instant $[n]$, we define the state of the TrackBot as a 3-tuple: \textbf{{\small $\mathbf{\mathcal{S}}[n] = \begin{bmatrix} d^e[n] ,& v^{e}_{rel}[n] ,& \theta_{rel}^e[n] \end{bmatrix}$} where the superscript $e$ refers to the estimated values,} $d^e[n]=||\mathbf{X}_{L}^{rel}[n]||_{2}$ refers to the estimated distance at time $n$, $v^{e}_{rel}[n]$ refers to the relative speed of the TrackBot along the X-axis of $\mathcal{R}_F[n]$ with respect to the Leader, and $\theta_{rel}^e[n]$ refers to the angular orientation (in radians) of the Leader in $\mathcal{R}_F[n]$. 
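As a concrete illustration (our own sketch, not part of the paper's implementation), the local-to-global conversion above amounts to a rotation by $\theta_{rot}[n]$ followed by a translation by $\mathbf{X}_F[n]$; `atan2` is used in place of a plain $\arctan$ to resolve the quadrant ambiguity:

```python
import math

def leader_global_position(x_F, y_F, theta_rot, x_rel, y_rel):
    """Map the Leader's position from the TrackBot's local frame R_F[n]
    to the global frame R_G via the homogeneous transform in the text:
    a rotation by theta_rot followed by a translation by (x_F, y_F)."""
    x_L = math.cos(theta_rot) * x_rel - math.sin(theta_rot) * y_rel + x_F
    y_L = math.sin(theta_rot) * x_rel + math.cos(theta_rot) * y_rel + y_F
    return x_L, y_L

def relative_bearing(x_rel, y_rel):
    """Leader's direction theta_rel in the local frame; atan2 avoids the
    quadrant ambiguity of arctan(y/x)."""
    return math.atan2(y_rel, x_rel)
```

For example, a Leader one meter directly ahead of a TrackBot at $(1,2)$ facing $+90^\circ$ maps to the global point $(1,3)$.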
\textbf{CANE:} The function of the CANE layer is to measure RSSI values from the beacons and approximate the Leader's position relative to the TrackBot (i.e., $d^e[n]$ and $\theta_{rel}^e[n]$). The CANE layer is broken down into three modules: Wireless Communication and Sensing, Rotating Platform Assembly, and Relative Position Estimation. At the beginning of each time slot $n$, the Wireless Communication and Sensing module and the Rotating Platform Assembly perform a $360^\circ$ RSSI sweep by physically rotating the directional antenna while storing RSSI measurements of successful beacon receptions into the vector $\mathbf{r_v}[n]$. The Relative Position Estimation module uses $\mathbf{r_v}[n]$ to approximate the relative position of the Leader by leveraging pre-estimated directional gains of the antenna, detailed in Section~\ref{Sec:estimation}. \begin{figure}[!ht] \centering \includegraphics[width=0.9\linewidth]{robo_follower_fig/figures/system_arrest.pdf} \caption{ The ARREST Architecture} \label{fig:arrest} \end{figure} \textbf{CAST:} The functions of the CAST layer are to maintain the 3-tuple state estimates and to generate control commands based on current and past observations to send to the PRICE layer. {The CAST layer consists of two different modules: the Linear Quadratic Gaussian (LQG) Controller and the Strategic Speed Controller. We also have a special, case-specific module called Multipath Angle Correction for severely cluttered environments (explained further in Section~\ref{sec:multipath_adapt}).} The Strategic Speed Controller estimates the relative speed of the Leader by exploiting past and current state information and generates the speed control signal in conjunction with the LQG controller. The term \textbf{``Strategic''} is used to emphasize that we propose two different strategies, Optimistic and Pragmatic, for the relative speed approximation as well as speed control of the TrackBot (detailed in Section~\ref{sec:vel}). 
The LQG controller incorporates past state information, past control information, and relative position and speed approximations to: (1) generate the system's instantaneous state, (2) determine how much to rotate the TrackBot itself, and (3) determine the TrackBot's relative speed. The state information generated by the LQG controller is directly sent to the Strategic Speed Controller to calculate the absolute speed of the TrackBot. The details of our LQG controller formulation are discussed in Section~\ref{sec:lqg}. \textbf{PRICE:} The goal of the PRICE layer is to convert the control signals from the CAST layer into actual translational and rotational motions of the TrackBot. It consists of two modules: Movement Translator and Robot Chassis. The Movement Translator maps the control signals from the CAST layer to a series of platform-specific Robot Chassis motor control signals (detailed in Section~\ref{sec:hardware}). \subsection{Proposed LQG Formulation} \label{sec:lqg} In our proposed solution, we first formulate the movement control problem of the TrackBot as a discrete time Linear Quadratic Gaussian (LQG) control problem. An LQG controller is a combination of a Kalman Filter with a Linear Quadratic Regulator (LQR) that is proven to be the \textbf{optimal controller} for linear systems with Additive White Gaussian Noise (AWGN) and incomplete state information~\cite{athans1971role}. 
The linear system equations for any discrete LQG problem can be written as: {\small \begin{equation} \begin{split} &\mathbf{\mathcal{S}}[n+1]=A_n \mathbf{\mathcal{S}}[n]+B_n \mathbf{U}[n] +\mathbf{Z}[n] \\ &\mathbf{O}[n]=C_n \mathbf{\mathcal{S}}[n]+\mathbf{W}[n] \end{split} \end{equation} } \noindent where $A_n$ and $B_n$ are the state transition matrices, $\mathbf{U}[n]$ is the LQG control vector, $\mathbf{Z}[n]$ is the system noise, $\mathbf{O}[n]$ is the LQG system's observation vector, $C_n$ is the state-to-observation transformation matrix, and $\mathbf{W}[n]$ is the observation noise at time $n$. An LQG controller first predicts the next state based on the current state and the signals generated by the LQR. Next, it applies the system observations to update the estimates further and generates the control signals based on the updated state estimates. \textbf{In our case, {\small $\mathbf{O}[n]=\begin{bmatrix} d^{m}[n], & v_{rel}^{m}[n], & \theta_{rel}^{m}[n] \end{bmatrix}^T$ }(the superscript $m$ refers to measured values).} Moreover, in our case, {the state transition matrices $A_n=A$, $B_n=B$, $C_n=C$ are time-invariant and the time horizon is infinite as we do not have any control over the Leader's movements.} For an infinite time horizon LQG problem~\cite{bertsekas1995dynamic}, the cost function can be written as: {\small \begin{equation} \label{lqg:Cost} J=\lim_{N \rightarrow \infty} \frac{1}{N}\mathbb{E}\left(\sum_{n=0}^{N} \mathcal{S}[n]^T \mathbf{Q} \mathcal{S}[n]+ \mathbf{U}[n]^T \mathbf{H} \mathbf{U}[n] \right) \end{equation}} \noindent where $\mathbf{Q}\geq 0,\mathbf{H}>0$ are the weighting matrices. 
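To make the infinite-horizon machinery concrete, here is a toy one-dimensional sketch (our own illustration, not the paper's code) of how a steady-state feedback gain is obtained by iterating the discrete algebraic Riccati equation; the scalar values $a$, $b$, $q$, $h$ are arbitrary placeholders for the matrices above:

```python
def dare_gain(a, b, q, h, iters=200):
    """Iterate the scalar discrete algebraic Riccati equation
        P <- q + a*P*a - (a*P*b)^2 / (h + b*P*b)
    to its fixed point, then return the steady-state feedback gain
        L = (h + b*P*b)^-1 * b*P*a,
    so that the control law is u[n] = -L * s[n]."""
    P = q  # any positive initialization converges for a stabilizable pair
    for _ in range(iters):
        P = q + a * P * a - (a * P * b) ** 2 / (h + b * P * b)
    return (b * P * a) / (h + b * P * b)
```

With $a=b=q=h=1$, the Riccati fixed point is the golden ratio and the gain converges to $1/\varphi \approx 0.618$; the full controller would do the same with the $3\times 3$ matrices of the paper.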
The discrete time LQG controller for this optimization problem is: {\small \begin{equation} \begin{split} &\hat{\mathcal{S}}[n+1]=A\hat{\mathcal{S}}[n]+B\mathbf{U}[n] +K(\mathbf{O}[n+1]-C \{A\hat{\mathcal{S}}[n]+B\mathbf{U}[n]\})\\ &\mathbf{U}[n]=-L\hat{\mathcal{S}}[n] \qquad \mbox{and} \qquad \hat{\mathcal{S}}(0)=\mathbb{E}(\mathcal{S}(0)) \end{split} \end{equation}} \noindent where $\hat{}$ denotes estimates, $K$ is the Kalman gain which can be solved via the algebraic Riccati equation~\cite{lancaster1995algebraic}, and $L$ is the feedback gain matrix. In our system, the state transition matrix values are as follows: {\small \begin{equation} \mathbf{A}=\begin{bmatrix} 1 & -\delta t & 0 \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \mathbf{B}=\begin{bmatrix} 0 & -\delta t & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \mathbf{C}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \label{sec:state} \end{equation} } \noindent \emph{where $\delta t$ is the time granularity for the state update. Ideally, within $\delta t$, the TrackBot executes one set of movement control decisions while it also scans RSSI for the next set of control decisions (detailed in Sections~\ref{sec:emulation} and~\ref{sec:real_exp}).} Note that, to solve this optimization problem, we also require the covariance data for the noise, i.e., $ \Sigma_{WW}=\mathbb{E}(\mathbf{W} \mathbf{W}^T),$ and $\ \Sigma_{ZZ}=\mathbb{E}(\mathbf{Z} \mathbf{Z}^T)$. We assume the system noise, $Z[n]$, to be Gaussian and the measurement noise, $W[n]$, to be approximated as Gaussian. \begin{figure}[!ht] \centering \includegraphics[width=0.7\linewidth]{robo_follower_fig/figures/system2.pdf} \caption{Proposed LQG Controller System} \label{fig:system} \end{figure} Furthermore, we tweak the LQG controller to send out a rotational control signal after a state update and before generating the LQR control signals, $\mathbf{U}[n]$. 
The rotational control signal rotates the TrackBot assembly by ${\theta}^{e}_{rel}[n]$ and sets ${\theta}_{rel}^{e}[n]=0$. This is performed to align the robot toward the estimated direction of the Leader before calculating the movement speed. Thus, we use only the Kalman Filtering part of the LQG controller for angle/orientation control. The reason behind not using the full LQG controller for the TrackBot's orientation control lies in the fact that the LQG controller considers a sudden rapid change in direction ($\approx 180^\circ$) as noise and takes a while to correct the course of the TrackBot. More study of this problem is left as future work. A block diagram of our LQG control system model is presented in Fig.~\ref{fig:system}. \section{RSSI Based Relative Position and Speed Observations} \label{Sec:estimation} In this section, we discuss our methodologies to map the observed RSSI vector, $\mathbf{r_v}[n]$, into the controller observation vector, $\mathbf{O}[n]$. \subsection{Distance Observations} \label{sec:dist_est} The RSSI is well known to be a measure of distance if provided with sufficient transceiver statistics such as the transmitter power, the channel path loss exponent, and the fading characteristics. 
One of the standard equations for calculating the received power for an omnidirectional antenna is as follows~\cite{rappaport1996wireless}: { \begin{equation} \label{eqn:power} \begin{split} &P_{r,dBm}=P_{t,dBm}+G_{dB}-\mathcal{L}_{ref}-10\eta \log_{10} \frac{d^m[n]}{d_{ref}} + \psi\\ &P_{r,dBm}^{ref}=P_{t,dBm}+G_{dB}-\mathcal{L}_{ref} + \psi \\ &\implies \frac{d^m[n]}{d_{ref}} \approx 10^{\frac{\left(P_{r,dBm}^{ref} -P_{r,dBm} \right) }{10\cdot \eta}} \end{split} \end{equation} } \noindent where $P_{r,dBm}$ is the received power in dBm, $P_{t,dBm}$ is the transmitter power in dBm, $G_{dB}$ is the gain in dB, $\mathcal{L}_{ref}$ is the path loss at the reference distance $d_{ref}$ in dB, $\eta$ is the path loss exponent, $d^m[n]$ is the distance between the transmitter and receiver, $\psi$ is the random shadowing and multipath fading noise in dB, and $P_{r,dBm}^{ref}$ is the received power at reference distance ($d_{ref}$). Eqn.~\eqref{eqn:power} is also valid for the average received power for a directional antenna with an average gain of $G_{dB}$. {To calculate the received power for a particular direction $\theta$, we just need to replace $G_{dB}$ in \eqref{eqn:power} with the directional gain of the antenna, $G_{dB}(\theta)$.} To apply~\eqref{eqn:power} in ARREST, the TrackBot needs to learn the channel parameters such as the $\eta$, $\mathcal{L}_{ref}$, and $d_{ref}$. In our proposed system, we assume that the TrackBot has information about the initial distance to the Leader ($d^m(0)$) and the average received power ({\small $P_{r,dBm}^{ref}$}) at reference distance ($d_{ref}$) which we choose to be 1 meter. Furthermore, the directional gain, $G_{dB}(\theta)$, and the transmitter power, $P_{t,dBm}$, are known as a part of the system design process. 
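The inversion in the last line of the path-loss relation above can be sketched directly (a hedged illustration of the model, not the paper's calibrated code; the parameter values in the example are placeholders):

```python
def rssi_to_distance(p_r_dbm, p_ref_dbm, eta, d_ref=1.0):
    """Invert the log-distance path-loss model: given the measured
    received power p_r_dbm and the reference power p_ref_dbm at
    distance d_ref, return the estimated transmitter-receiver
    distance (same units as d_ref)."""
    return d_ref * 10.0 ** ((p_ref_dbm - p_r_dbm) / (10.0 * eta))
```

For instance, with a free-space-like exponent $\eta = 2$, a received power 20 dB below the 1 m reference corresponds to a distance of 10 m.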
Upon initialization of ARREST, the TrackBot performs an RSSI scan by rotating the antenna assembly to generate $\mathbf{r_v}(0)$ and harnesses the average received power ($P_{r,dBm}$) information to estimate the environment's $\eta$ as follows. { \begin{equation} \eta=\frac{P_{r,dBm}^{ref}-P_{r,dBm}}{10 \log_{10} \left( d^m(0)/d_{ref} \right)} \label{eqn:eta_est} \end{equation} } \noindent Next, the TrackBot applies the estimated $\eta$ and $P_{r,dBm}=\mathrm{avg}\{ \mathbf{r_v}[n]\}$ in~\eqref{eqn:power} to map $\mathbf{r_v}[n]$ to the observed distance to the Leader, $d^m[n]$. \subsection{Angle Observations} \label{sec:arrest_angle} One of the main components of our ARREST architecture is the observation of the Angle of Arrival (AoA) of RF beacons solely based on the RSSI data, $\mathbf{r_v}[n]$. There exist three different classes of RF based solutions to determine the AoA. The first class, \textbf{antenna array based approaches}, employs an array of antennas to determine the AoA by leveraging the phase differences among the signals received by the different antennas~\cite{xiong2013arraytrack}. The main difficulty of implementing this class is that very few multi-antenna off-the-shelf radios provide access to phase information. The second class, \textbf{multiple directional antenna based approaches}, employs at least two directional antennas oriented in different directions~\cite{jiang2012alrd} to determine AoA. In this class, the differences among RSSI values from all antennas are utilized to determine the AoA. However, utilizing current off-the-shelf antenna arrays or multiple directional antennas increases the cost, form factor, and complexity of a TrackBot implementation. 
{We avoid the multiple-directional-antenna option also because it requires a separate radio driver for each antenna as well as proper time synchronization.} Thus, we develop methods contributing to the third class of solutions, which is the use of a single, rotating antenna and the knowledge of the antenna's directional gain pattern to approximate the AoA of RF beacons. The core of these methods, called \emph{pattern correlation}, is to correlate the vector of RSSI measurements, $\mathbf{r_v}[n]$, with another vector representing the antenna's known, normalized gain pattern, $\mathbf{g_{abs}}$. At the beginning of each time slot $n$, the TrackBot performs a $360^\circ$ sweep of RSSI measurements to generate the vector, $\mathbf{r_v}[n]$. Then, $\mathbf{r_v}[n]$ is normalized: $\mathbf{g_{m}}=\mathbf{r_v}[n]-\max (\mathbf{r_v}[n])$. The TrackBot also generates different $\theta$-shifted versions of $\mathbf{g_{abs}}(\theta)$ as follows. { \begin{equation} \begin{split} &\mathbf{r_v}[n]=[ r_{-180},r_{-178.2},\cdots, r_{-1.8},r_{0},r_{1.8},\cdots r_{178.2}]\\ &\mathbf{g_{m}}=[ r'_{-180},r'_{-178.2},\cdots, r'_{-1.8},r'_{0},r'_{1.8},\cdots r'_{178.2}]\\ &\mathbf{g_{abs}}(\theta)=[ g_{(-180+\theta)}, \cdots, g_{(0+\theta)}, \cdots, g_{(178.2+\theta)}] \end{split} \label{corr_eqn} \end{equation} } \noindent where $r_{\phi}$ refers to the RSSI measurement, $g_{\phi}$ refers to the antenna gain, and $r'_{\phi}=r_{\phi}-\max \{ \mathbf{r_v}\}$ refers to the observed gain for the antenna orientation of $\phi^\circ$ with respect to the X-axis of $\mathcal{R}_{F}[n]$. The step size of $1.8^\circ$ is chosen based on our hardware implementation's constraints. \textbf{Thus, the possible antenna orientations ($\phi$) are limited to {\small $\Theta=\{-180,\cdots,-1.8,0,$ $\cdots,178.2\}$}.} Next, the TrackBot employs different pattern correlation methods for the AoA observation. Below, we describe three methods in increasing order of complexity. 
The first method was originally demonstrated in \cite{graefenstein2009wireless}. Through real world experimentation, we develop two additional improved methods. \subsubsection{Basic Correlation Method} \label{sec:aprox1} The first method of determining AoA correlates $\mathbf{g_{m}}$ with all $\theta$-shifted versions of $\mathbf{g_{abs}}$ and calculates the respective $L_2$ distances. The observed AoA is the $\theta$ at which the $L_2$ distance is the smallest: { \begin{equation} \begin{split} \theta_{rel}^{m} =\argmin_{\theta\in \Theta} \sum_{k\in \Theta} ||r'_{k}-g_{(k+\theta)}||_2 \cdot \mathbb{I}_{r'_{k}} \end{split} \label{corr_eqn1} \end{equation} } \noindent {\small $\mathbb{I}_{r'_{k}}$} is an indicator function to indicate whether the sample ${r'_{k}}$ exists or not, to account for missing samples in real experiments. \subsubsection{Clustering Method} While the first method works well if enough uniformly distributed samples ($\geq 100$ in our implementation) are collected within the $360^\circ$ scan, it fails in scenarios of sparse, non-uniform sampling ($<100$ samples), which occurs in practice due to packet loss caused by fading and interference from collocated WiFi devices. In real experiments (mainly indoors), the collected RSSI samples can be uniformly sparse or sometimes batched sparse (samples form clusters with large gaps ($\approx 30^\circ $) between them). \begin{defn} \emph{Angular Cluster:} An angular cluster ($\Lambda$) is a set of valid samples for a contiguous set of angles: {\small $\Lambda=\{k \,|\, \mathbb{I}_{r'_{k}}=1 \ \forall k \in \{\phi_f,\phi_f+1.8,\cdots,\phi_l-1.8, \phi_l\} \}$ } where {\small $\phi_f,\phi_l \in \Theta$} define the boundary of the cluster. \end{defn} \noindent To prevent undue bias from large-cardinality clusters that can cause errors in estimating the correlation, we assign a weight ($\omega_{k}$) to each sample ($k$) and use the pattern correlation method as follows. 
{ \begin{equation} \theta_{rel}^{m} =\argmin_{\theta \in \Theta} \sum_{k\in \Theta} \omega_{k} \cdot ||r'_{k}-g_{(k+\theta)}||_2 \cdot \mathbb{I}_{r'_{k}} \label{eqn:new_est} \end{equation} } \noindent In our weighting scheme, we assign $\omega_{k}=\frac{1}{|\Lambda|}$ where $k \in \Lambda$. Thus, the weights of the samples from a single cluster sum to $1$, i.e., the weight of each sample is defined by the angular cluster it belongs to. \subsubsection{Weighted Average Method} Based on real world experiments, we find that the angle observation based on~\eqref{corr_eqn1}, say $\theta_{m}^1$, gives reasonable error performance if the average cluster size, $\lambda_{a}$, is greater than the average gap size between clusters, $\mu_{a}$. Conversely, the angle observation based on~\eqref{eqn:new_est}, say $\theta_{m}^2$, is better if $\lambda_{a} \ll \mu_{a}$. Thus, as a trade-off between both the basic correlation method and the clustering method, we propose a weighted averaging method described below. { \begin{equation} \theta_{rel}^{m}= \begin{cases} \frac{\lambda_{a}}{\mu_{a}}\cdot \theta_{m}^1 + (1-\frac{\lambda_{a}}{\mu_{a}})\cdot \theta_{m}^2 &\mbox{if $\lambda_{a}\leq \mu_{a}$ }\\ \theta_{m}^1 &\mbox{if $\lambda_{a} > \mu_{a}$ } \end{cases} \label{eqn:comb_est} \end{equation} } \noindent \emph{In the rest of the paper, we use the weighted average method for angle observations.} We compare the performance of all three methods based on real world experiments in Section~\ref{sec:real_est_err}. \subsection{Speed Observations} \label{sec:vel} To fulfill the tracking objective, the TrackBot needs to adapt its speed of movement ($v_F[n]$) according to the Leader's speed ($v_L[n]$). 
In our ARREST architecture, the Strategic Speed Controller uses the relative position observations $(d^m[n],\theta^m_{rel}[n])$ from the CANE layer and the past LQG state estimates to determine the current relative speed, $v_{rel}^{m}[n]$, as well as the Leader's speed, $v_{L}^{m}[n]$. In this context, we employ two different observation strategies. The first strategy, which we refer to as the \emph{Optimistic strategy}, assumes that the Leader will be static for the next time slot and determines the relative speed as follows: { \begin{equation} \begin{split} &v^{m}_{rel}[n]=v^{e}_{rel}[n]-\frac{(d^m[n] -d^e[n]\cdot \cos (\theta_{rel}^m[n]))}{\delta t}\\ &v_{L}^{e}[n+1]=0 \end{split} \end{equation} } \noindent On the other hand, the \emph{Pragmatic Strategy} assumes that the Leader will continue traveling at the observed speed, $v_{L}^{m}[n]$. This strategy determines the relative speed as follows: { \begin{equation} \begin{split} & v_1 = \{d^m[n]\cdot \cos (\theta_{rel}^m[n]) - d^e[n]\}\\ & v_2 = \{d^m[n]\cdot \sin (\theta_{rel}^m[n])\}\\ &v_L[n]= \frac{\sqrt{v_1^2+v_2^2}}{\delta t} \\ &\theta_v[n]=\arctan \frac{v_2}{v_1} -\theta_{rel}^m[n] \\ &v_{L}^{e}[n+1]=v_{L}^{m}[n]=v_L[n] \cdot \cos (\theta_v[n])\\ &v^{m}_{rel}[n]=v_{F}[n]-v_{L}^{m}[n] \end{split} \end{equation} } \begin{figure}[!ht] \centering { \includegraphics[width=\linewidth]{robo_follower_fig/figures/vel_estimation.pdf}} \caption{ Illustration of the Relative Speed Observation} \label{fig:vel_estimate} \end{figure} \noindent For an illustration of the different components of this process, please refer to Fig.~\ref{fig:vel_estimate}. Next, the LQG controller uses the observation vector $\mathbf{O}[n]$ to decide the next state's relative speed, $v^{e}_{rel}[n+1]$, which is used by the Speed Controller to generate the TrackBot's actual speed for the next time step, $v_{F}[n+1]=v_{L}^{e}[n+1]+v^{e}_{rel}[n+1]$. Note that the speed of the TrackBot, $v_{F}[n]$, is exactly known to the TrackBot at any time $n$.
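The two strategies above reduce to a few lines each; the following is a minimal sketch of the equations (angles in radians, function names ours):

```python
import math

def optimistic_obs(d_m, theta_m, d_e, v_rel_e, dt):
    """Optimistic strategy: assume the Leader is static next slot.
    Returns (v_rel_m, v_L_next) for the observed relative speed and the
    predicted Leader speed (always 0)."""
    v_rel_m = v_rel_e - (d_m - d_e * math.cos(theta_m)) / dt
    return v_rel_m, 0.0

def pragmatic_obs(d_m, theta_m, d_e, v_F, dt):
    """Pragmatic strategy: infer the Leader's speed from its apparent
    displacement and project it onto the TrackBot's heading.
    Returns (v_rel_m, v_L_m)."""
    v1 = d_m * math.cos(theta_m) - d_e
    v2 = d_m * math.sin(theta_m)
    v_L = math.hypot(v1, v2) / dt
    theta_v = math.atan2(v2, v1) - theta_m
    v_L_m = v_L * math.cos(theta_v)      # projected Leader speed
    return v_F - v_L_m, v_L_m
```

The sketch uses `atan2` instead of the bare $\arctan\frac{v_2}{v_1}$ to stay well-defined when $v_1 \leq 0$.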
In addition to the different assumptions about the Leader's speed, the two strategies also differ in how they model the correlation between the distance and speed observation noise: the Optimistic strategy assumes that the noise in speed observations is uncorrelated with the noise in distance observations, whereas the Pragmatic strategy assumes a strong correlation between the distance and speed estimation noise. We compare the performance of both strategies based on emulation and real world experiments in Sections~\ref{sec:comparison_emulation} and~\ref{sec:comparison_real_world}, respectively. \section{TrackBot Prototype} \label{sec:hardware} \subsection{Hardware} We implemented a TrackBot with our ARREST architecture inside a real, low-cost robot prototype presented in Fig.~\ref{fig:robot}. For a concise description of our prototype, we list the hardware used for the implementation of each of the ARREST components in Table~\ref{tab:hardware_summary}. {We discuss details of the Time Difference of Arrival (TDOA) based localization system integrated with our ARREST architecture for ground truth estimation separately in Section~\ref{sec:real_exp_large}.} {\small \begin{table}[!h] \centering \caption{ARREST Hardware Implementation} \resizebox{\linewidth}{!} {\small \begin{tabular}{|m{0.025\linewidth}|m{0.275\linewidth}|m{0.7\linewidth}|} \hline & Module & Hardware\\ \hline & Wireless Communication and Sensing & OpenMote\cite{Openmote}; Rosewill Directional Antenna (Model RNX-AD7D) \\ \cline{2-3} \rotatebox{90}{\multirow{2}{*}{\colorbox{blue!30}{CANE}}} & Rotating Platform Assembly & Nema 17 (4-wire bipolar Stepper Motor); EasyDriver - Stepper Motor Driver; mbed NXP LPC1768~\cite{mbed_ref}\\ \cline{2-3} & Relative Position Estimation & mbed NXP LPC1768~\cite{mbed_ref}\\ \hline \multicolumn{2}{|l|}{\colorbox{blue!30}{CAST}} & mbed NXP LPC1768~\cite{mbed_ref}\\ \hline \parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{\colorbox{blue!30}{PRICE}}}} & Movement Translator &
mbed NXP LPC1768~\cite{mbed_ref} \\ \cline{2-3} & Robot Chassis & Baron-4WD Mobile Platform, L298N Stepper Motor Driver Controller Board, HC-SR04 Ultrasonic Sensor~\cite{ultra_1}\\ \hline \hline \end{tabular}} \resizebox{\linewidth}{!} {\small \begin{tabular}{|m{0.2\linewidth}|m{0.8\linewidth}|} \hline OpenMote~\cite{Openmote} & TI 32-bit CC2538 @ 32 MHz with 512KB Flash memory, 32KB RAM, 2.4GHz IEEE 802.15.4-based Transceiver connected via SMA plug \\ \hline mbed NXP-LPC1768~\cite{mbed_ref} $\mu$-processor & 32-bit ARM Cortex-M3 core @ 96MHz, 512KB FLASH, 32KB RAM; \textbf{Interfaces:} built-in Ethernet, USB Host and Device, CAN, SPI, I2C, ADC, DAC, PWM and other I/O interfaces \\ \hline Rosewill RNX-AD7D Directional Antenna & \textbf{Mode 1:} \textbf{Frequency:} 2.4GHz, \textbf{Max Gain:} 5dBi, \textbf{HPBW:} $70^\circ$ \textbf{Mode 2:} \textbf{Frequency:} 5GHz, \textbf{Max Gain:} 7dBi, \textbf{HPBW:} $50^\circ$\\ \hline Nema 17 Stepper Motor & \textbf{Dimension:} 1.65"x1.65"x1.57", \textbf{Step size:} 1.8 degrees (200 steps/rev), \textbf{Rated current:} 2A, \textbf{Rated resistance:} 1.1 Ohms\\ \hline HC-SR04~\cite{ultra_1} & \textbf{Operating Voltage:} 5V DC, \textbf{Operating Current:} 15mA, \textbf{Measure Angle:} $15^\circ$, \textbf{Ranging Distance:} 2cm - 4m\\ \hline \end{tabular}} \label{tab:hardware_summary} \end{table} } In the TrackBot prototype, the directional antenna and the OpenMote are mounted on top of a stepper motor using a plate. While we use two microprocessors (the OpenMote and the mbed), the system can be implemented using a single microprocessor. We choose two in this prototype to work around wiring issues and the lack of sufficient GPIO pins on the OpenMote. The OpenMote is only used for RF sensing while the mbed is used to implement the rest of the ARREST modules. For programming of the mbed, we use the mbed Real Time Operating System~\cite{rtos}.
The mbed sends control signals to the stepper motor to rotate it in precise steps of $1.8^\circ$. \emph{Consecutive $360^\circ$ antenna rotations alternate between clockwise and anti-clockwise because this (1) prevents any wire twisting between the mbed and OpenMote and (2) compensates for the stepper motor's movement errors.} The mbed communicates with the other H/W components via GPIO pins and High Level Data Link Control (HDLC) Protocol~\cite{gelenbe1978performance} based reliable serial line communication. In the current prototype, the maximum speed of the robot is $30cm/s$. Due to synchronization issues on the mbed when trying to simultaneously rotate the antenna and move the robot chassis, the antenna assembly sometimes does not return to its initial position after a complete rotation. To solve this issue while avoiding complex solutions (e.g., via a feedback-based offset control mechanism), the TrackBot instead first performs an RSSI scan and then moves the chassis. Ideally, the antenna can rotate $360^\circ$ in $1 s$ while collecting $200$ samples. However, we choose to slow the scan down to a duration of $2 s$ to cope with the occasional occurrence of sparse RSSI samples. Moreover, to keep the movement simple, the TrackBot first rotates to the desired direction and then moves straight with the desired speed. The wheels of the robot are controlled using PWM signals from the mbed with a period of $2 s$. We choose a $2 s$ period for robot rotation as one $2 s$ pulse width equates to a chassis rotation amount of $\approx 180^\circ$. We also choose the same period length ($2 s$) for forward movement, which caps the average speed of the robot at $60/6 =10 cm/s$ (including the $2 s$ of RSSI scan). The whole system is powered by five AA batteries, which can run for a total of $\approx 3-4$ hours.
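The $10cm/s$ cap above is simply the duty-cycled average over one decision cycle; a quick back-of-the-envelope check (the schedule values are the ones stated above):

```python
def effective_speed(move_speed_cm_s=30.0, scan_s=2.0, rotate_s=2.0, move_s=2.0):
    """Average ground speed over one scan/rotate/move decision cycle:
    the chassis only translates during the 'move' phase."""
    cycle_s = scan_s + rotate_s + move_s           # 6 s total
    return move_speed_cm_s * move_s / cycle_s      # 60 cm / 6 s
```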
{We also implemented a very simple obstacle avoidance mechanism by employing a single HC-SR04 range finder in the front bumper of the chassis and protection bumpers on the other sides. While moving forward, if the ultrasound detects an object at a distance of less than $10$cm, it stops the TrackBot's movement immediately.} The Leader node is currently implemented as an OpenMote transmitting beacons with the standard omnidirectional antenna and a transmit power of $7dBm$. For programming of the OpenMotes, {we use the RIOT operating system~\cite{riot,baccelli2013riot}. The Leader implementation is capable of transmitting $200$ packets/second.} \subsection{ARREST System Parameter Setup} \label{sec:lqg_param} We discuss here the choices of the LQG Controller parameters, namely $\mathbf{Q}$, $\mathbf{H}$, $\Sigma_{WW}$ and $\Sigma_{ZZ}$. \subsubsection{Cost Parameters Setup} In the cost function of the LQG, the matrix $\mathbf{Q}$ determines the weights of the different states in the overall cost, $J$. In our case, $\mathbf{Q}$ is a $3\times 3$ positive definite matrix with nonzero diagonal terms: {\small \begin{equation} \mathbf{Q}=\begin{bmatrix} Q_d & 0 & 0 \\ 0 & Q_v & 0 \\ 0 & 0 & Q_\theta \end{bmatrix} \end{equation} } \noindent Our main goal is to keep the distance as well as the relative angle as low as possible, with emphasis on the distance. From this perspective, the weights in increasing order should be $Q_v$, $Q_\theta$ and $Q_d$, respectively. Furthermore, focusing on one particular aspect such as the distance has detrimental effects on the other aspects. Thus, we perform a set of experiments to find a good trade-off between $Q_v$, $Q_\theta$ and $Q_d$, where we vary one parameter while keeping the rest fixed. For example, we vary the value of $Q_d$ while keeping $Q_v$ and $Q_\theta$ fixed.
Based on these experiments, we opt for the following settings: $Q_v=0.1$, $Q_\theta=1$ and $Q_d=10\cdot v_{L}^{max}$, where $v_{L}^{max}$ is the maximum speed of the Leader. With these settings, our system performs better than with any other explored settings. Furthermore, $\mathbf{H}$ is chosen to be a $3\times 3$ identity matrix. Note that the values of $\mathbf{Q}$ and $\mathbf{H}$ are strategy (Optimistic or Pragmatic) independent. \subsubsection{Noise Covariance Matrix Parameters Setup} The noise covariance matrices, $\Sigma_{WW}$ and $\Sigma_{ZZ}$, need to be properly set for good state estimation in the presence of noise and imperfect/partial state observations. The system noises are assumed to be i.i.d.\ normal random variables with $\Sigma_{ZZ}$ being a $3\times 3$ identity matrix. On the other hand, the observation noise covariance matrix requires separate settings for the different strategies. For the Optimistic strategy, we assume that the observation noises are uncorrelated, whereas, for the Pragmatic strategy, the distance estimation errors and the relative speed estimation errors are highly correlated, with variances proportional to $v_{L}^{max}$. A set of empirically determined values of $\Sigma_{WW}$ for the Optimistic and the Pragmatic strategies are as follows. { \small \begin{equation} \Sigma_{WW}^{Op}= \begin{bmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} , \Sigma_{WW}^{Pg}= \begin{bmatrix} 1 & v_{L}^{max} & 0 \\ v_{L}^{max} & (v_{F}^{max})^2 & 0 \\ 0 & 0 & 0.1 \end{bmatrix} \end{equation}} \noindent where $Op$ and $Pg$ refer to the Optimistic and the Pragmatic strategies, respectively. \section{Baseline Analysis via Emulation} \label{sec:emulation} In this section, we perform a thorough evaluation and set up the different parameters of the ARREST architecture, such as the LQG covariance matrices (discussed in Section~\ref{sec:lqg_param}), via a set of emulation experiments.
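For concreteness, the parameter choices of Section~\ref{sec:lqg_param} above can be collected in a single helper; the following is a minimal sketch (the function name is ours, the values are the empirically chosen ones stated above):

```python
import numpy as np

def lqg_params(v_L_max, v_F_max, strategy="Pg"):
    """Assemble the LQG cost and noise covariance matrices for the
    state (d, v_rel, theta), using the empirically chosen values."""
    Q = np.diag([10.0 * v_L_max, 0.1, 1.0])   # Q_d, Q_v, Q_theta
    H = np.eye(3)
    Sigma_ZZ = np.eye(3)                      # i.i.d. system noise
    if strategy == "Op":                      # Optimistic: uncorrelated
        Sigma_WW = np.diag([4.0, 2.0, 1.0])
    else:                                     # Pragmatic: correlated d/v noise
        Sigma_WW = np.array([[1.0,     v_L_max,       0.0],
                             [v_L_max, v_F_max ** 2,  0.0],
                             [0.0,     0.0,           0.1]])
    return Q, H, Sigma_WW, Sigma_ZZ
```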
We use the emulation experiment results as a baseline for our real-world experiments. \subsection{Method} { We employ our hardware prototypes, discussed in Section~\ref{sec:hardware}, to collect sets of RSSI data in cluttered indoor and outdoor environments for a set of representative distances, $\mathcal{D}$, and angles, $\Theta$. Next, we use the collected samples to interpolate the RSSI samples for any random configuration $\mathcal{C}=(d,\theta_{rel})$, where $d \in \mathbb{R}^+$ and $\theta_{rel} \in [-180, 180)$, as follows: $ r^e=r^s-10\cdot \eta \cdot \log_{10} (d/d_{near}) + \mathcal{N}(0,\sigma^2)$, where $r^s$ is a random sample for the configuration $\mathcal{C}_{near}=(d_{near},$ $\theta_{near})$ such that $d_{near}=\argmin_{d_i \in \mathcal{D}} | d_i - d| $ and $\theta_{near}=\argmin_{\theta_i \in {\Theta}}$ $ | \theta_i - \theta_{rel} | $. Note that we add an extra noise of variance $\sigma^2=2$ on top of the already noisy samples (with $\sigma^2 \approx 4$) for the configuration $\mathcal{C}_{near}$. To estimate $\eta$, we use \eqref{eqn:eta_est} to calculate $\eta_{ij}$ for each pair of distances $d_i,d_j\in\mathcal{D}$ and average them. We choose a value of $\delta t = 1s$ in \eqref{sec:state} to match the maximum achievable speed of our stepper motor since, ideally, the interval between any two consecutive movement control decisions can be $1s$, with the TrackBot carrying out each movement control decision within the respective $1s$ interval. } \subsection{The Optimistic Strategy vs. The Pragmatic Strategy} \label{sec:comparison_emulation} In this section, we compare the performance of the two proposed strategies, Optimistic and Pragmatic, and a Baseline algorithm. In the \textbf{Baseline algorithm}, the TrackBot estimates the relative position via the basic correlation method (discussed in Section~\ref{sec:aprox1}).
Once the direction is determined, the TrackBot rotates to align itself toward the estimated direction and then moves with a speed of $\min\{v_{F}^{max}, \frac{d^e[n]}{\delta t}\}$. In Fig.~\ref{fig:compare_maxvel_target}, we compare the average distance between the TrackBot and the Leader for varying $v_{L}^{max}$ while setting $v_F^{max}=1.8\cdot v_{L}^{max}$. Figure~\ref{fig:compare_maxvel_target} clearly demonstrates that the Pragmatic strategy performs better than the Optimistic strategy as well as the Baseline algorithm, due to the adaptability and accuracy of its speed information. The poor performance of the Optimistic strategy is due to its indifference towards the actual speed of the Leader, which causes the TrackBot to lag behind at higher velocities. Conversely, we compare the average distance between the Leader and the TrackBot for varying $v_{F}^{max}$, while the Leader's maximum speed is fixed at $v_{L}^{max}=1m/s$. The experiment outcomes, presented in Fig.~\ref{fig:compare_maxvel_tracker}, show that the performance of both strategies is comparable, while the Optimistic strategy outperforms the Pragmatic strategy for $v_{F}^{max}\geq 3 \cdot v_{L}^{max}$. The reason is that the Leader constantly changes its movement direction, while the TrackBot always travels along the straight line joining its own position and the last estimated position of the Leader, which may not coincide with the Leader's direction of movement. This results in oscillations in the movement pattern for the Pragmatic strategy, while the Optimistic strategy avoids oscillations since it assumes the Leader to be static. The Baseline approach performs worst because it neither adapts its speed nor takes past observations into account.
\begin{figure}[!ht] \centering \subfloat[Varying Leader Speed]{\label{fig:compare_maxvel_target}\includegraphics[width=0.8\linewidth,height=0.4\linewidth]{robo_follower_fig/compare/emulation_dist_max_vel_leader.pdf}}\, \subfloat[Varying TrackBot's Speed]{\label{fig:compare_maxvel_tracker}\includegraphics[width=0.8\linewidth,height=0.4\linewidth]{robo_follower_fig/compare/emulation_dist_max_vel_tracker.pdf}} \caption{ (a)-(b) Tracking Performance Comparison Among Different Speed Estimation Strategies} \label{fig:diff_path_dist} \end{figure} One more noticeable fact from Fig.~\ref{fig:compare_maxvel_tracker} is that if $v_{F}^{max}=v_{L}^{max}$, the tracking performance is the worst. This is quite intuitive because, for this speed configuration, the TrackBot is unable to compensate for any error or initial distance while the Leader constantly moves at a speed close to $v_{L}^{max}$. Thus, the relative speed needs to be positive for proper tracking. In order to find a lower bound on the TrackBot's speed requirement, we perform another set of experiments by varying $v_F^{max}$ from $v_{L}^{max}$ to $3\cdot v_{L}^{max}$. {Based on the results, we conclude that for $v_{F}^{max} \leq 1.6\cdot v_L^{max}$, the tracking system fails and the distance increases rapidly.} On the other hand, for $v_{F}^{max}>1.6\cdot v_{L}^{max}$ the performance remains the same. Thus, in our experimental setup, we opt for $v_{F}^{max}=1.8\cdot v_{L}^{max}$. \subsection{Absolute Distance Statistics} One main focus of our ARREST architecture is to guarantee {\small$\mathbb{P}\left(||\mathbf{X}_L[n]\right.$ $\left.-\mathbf{X}_F[n]||_2 \leq D_{th}\right) \approx 1 \ \ \forall n$}. The value of $D_{th}$ could be chosen as a function of $v_{L}^{max}$. However, according to our target application context, we select $D_{th}= 5m$ as we consider a distance of more than 5 meters to be large enough to lose track in an indoor environment.
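Given a logged trace of instantaneous distances, this guarantee can be checked empirically; a trivial sketch (the function name is ours):

```python
import numpy as np

def tracking_success_rate(dist_trace, d_th=5.0):
    """Empirical probability that the TrackBot stays within d_th meters
    of the Leader over a trace of instantaneous distances."""
    d = np.asarray(dist_trace, dtype=float)
    return float(np.mean(d <= d_th))
```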
With this constraint, we find that our present implementation of the ARREST system fails in the tracking/following objective if the Leader moves faster than 3m/s. In order to verify whether our ARREST architecture can guarantee the distance requirement for a Leader with $v_{L}^{max}\leq 3m/s$, we perform a set of emulations with $\delta t=1s$, where the Leader travels along a set of random paths. In all cases, the instantaneous distances between the TrackBot and the Leader during the emulation are less than $5m$ with probability $\approx 1$. The small nonzero probability of distances greater than $5m$ is due to randomness in the Leader's motion, including complete reversals of movement direction. \begin{figure}[!ht] \centering \subfloat[]{\label{fig:emu_error_dist}\includegraphics[width=0.36\linewidth,height=0.4\linewidth]{robo_follower_fig/emulation2/emulation_dist_error.pdf}} \subfloat[]{\label{fig:emu_error_angle}\includegraphics[width=0.32\linewidth,height=0.4\linewidth]{robo_follower_fig/emulation2/emulation_angle_error.pdf}} \subfloat[]{\label{fig:emu_error_speed}\includegraphics[width=0.32\linewidth,height=0.4\linewidth]{robo_follower_fig/emulation2/emulation_speed_error.pdf}} \caption{Emulation Based Performance: (a) Absolute Distance Estimation Errors (in m), (b) Absolute Angle Estimation Errors (in degrees), and (c) Absolute Speed Estimation Errors (in m/s) } \label{fig:error_dist1} \end{figure} \begin{figure*}[!ht] \centering \subfloat[]{\label{fig:diste_stat_in_real}\includegraphics[width=0.22\linewidth,height=0.22\linewidth]{robo_follower_fig/practical_exp/small_scale_policy_comp.pdf}}\qquad \subfloat[]{\label{fig:dist_error_stat_in_real}\includegraphics[width=0.22\linewidth,height=0.22\linewidth]{robo_follower_fig/practical_exp/small_scale_distance.pdf}}\qquad \subfloat[]{\label{fig:angle_error_stat_in_real}\includegraphics[width=0.22\linewidth,height=0.22\linewidth]{robo_follower_fig/practical_exp/small_scale_angle_comp.pdf}} \caption{ Real Experiment Based
Performance for Small Scale: (a) Absolute Distance in Meters, (b) Absolute Distance Estimation Error in Meters, and (c) Absolute Angle Estimation Error in Degrees} \label{fig:error_dist_practical} \end{figure*} \subsection{Estimation Errors} In order to learn the statistics of the different estimation errors, we perform a range of emulation experiments, where the Leader follows a set of random paths and $v_{L}^{max}\leq 3m/s$. In Fig.~\ref{fig:emu_error_dist}, we plot the empirical CDF of the absolute errors in the distance estimates maintained by our system. Figure~\ref{fig:emu_error_dist} clearly illustrates that the instantaneous errors are less than $100cm$ with very high probability ($\approx 90\%$), and that the absolute error values are bounded by $1.5m$. These statistics are reasonable for pure RSSI-based estimation systems (explained further in Section~\ref{sec: raw}). We also plot the CDF of the absolute angle estimation errors over the duration of the emulations in Fig.~\ref{fig:emu_error_angle}. It can be seen that the absolute angle errors are less than $40^\circ$ with high ($\approx 80\%$) probability, which is justified as the Half Power Beam Width (HPBW) of the antenna we are using is approximately $70^\circ$. Further improvements may be possible by using an antenna with greater directionality or other radios (such as UWB radios). The non-zero probability of the angle error being more than $40^\circ$ is again due to the random direction changes in the Leader's movements. Similarly, we analyze the absolute speed estimation errors in terms of their CDF, illustrated in Fig.~\ref{fig:emu_error_speed}. The absolute errors in the speed estimations of the Leader are less than $1m/s$ with $\approx 90\%$ probability.
{\small \begin{table}[!h] \centering \caption{Summary of Emulation Results} \resizebox{\linewidth}{!} {\small \begin{tabular}{|m{\linewidth}|} \hline $\Box$ Pragmatic Strategy performs best for $1.6 \cdot v_{L}^{max} < v_{F}^{max} <3\cdot v_{L}^{max}$ while Optimistic Strategy performs best for $v_{F}^{max}\geq 3\cdot v_{L}^{max}$\\ $\Box$ The ARREST system fails if $v_{L}^{max}>3m/s$.\\ $\Box$ For $v_{L}^{max}\leq 3m/s$ and $v_{F}^{max}=1.8 \cdot v_{L}^{max}$, the TrackBot stays within $5m$ of the Leader with probability $\approx100\%$.\\ $\Box$ Absolute distance estimation errors are $<100cm$ with probability $\approx90\%$ and $<150cm$ with probability $\approx 100\%$.\\ $\Box$ Absolute angle estimation errors are $<40^\circ$ with probability $\approx 80\%$.\\ $\Box$ Absolute speed estimation errors are less than $1m/s$ with probability $\approx 90\%$. \\ \hline \end{tabular} } \end{table} } \section{Real Experiment Results : Small Scale} \label{sec:real_exp} To analyze the performance of the ARREST architecture, we use the TrackBot prototype to perform a set of small scale experiments, followed by a range of large scale experiments. In this section, we present the results of our small-scale real-world experiments. \subsection{Method} Based on the valuable insights from the emulation results, we choose the TrackBot's speed to be at least 1.8X the Leader's speed. The TrackBot makes a decision every $6s$. Between each decision, the TrackBot takes $2s$ for both the antenna rotation and RSSI scan, $2s$ for the chassis rotation, and $2s$ for the chassis translation. However, in the state update equations, $\delta t=4s$ because the actual chassis movement takes place for only $4s$. With this setup, we perform a set of real tracking experiments in three different environments: $\Box $ A cluttered office space, illustrated in Fig.~\ref{fig:indoor_trace} ($\approx 10m \times 6m$), with many office desks, chairs, cabinets, and reflecting surfaces.
$\Box $ A hallway, illustrated in Fig.~\ref{fig:hallway_trace} ($\approx 18m$ long and $3m$ wide), with pillars as well as sharp corners. $\Box $ A VICON camera localization~\cite{vicon} based robot experiment facility, illustrated in Fig.~\ref{fig:vicon_trace} ($\approx 6m\times 6m $). For the first two environments, we use manual markings on the floor to localize both the Leader and the TrackBot. For the last environment, the VICON facility provides us with camera-based localization at millimeter-scale accuracy. We perform a set of experiments in each of these environments for an approximate total period of one month, with individual runs lasting $30$ minutes during different times of the day. For these experiments, the Leader is a human carrying an OpenMote transmitter. \subsection{The Optimistic Strategy vs. The Pragmatic Strategy} \label{sec:comparison_real_world} Similar to our emulation-based analysis, we perform a real-system-based comparison of the proposed speed adaptation strategies as well as the \textbf{Baseline Algorithm} (introduced in Section~\ref{sec:comparison_emulation}). However, in this set of experiments we do not vary the maximum speed of the TrackBot or the Leader due to prototype hardware limitations. Instead, we compare the absolute distance CDF statistics of these three strategies in Fig.~\ref{fig:diste_stat_in_real} for $v_{F}^{max}=10cm/s$ and $v_{F}^{max}=1.8\cdot v_{L}^{max}$. Figure~\ref{fig:diste_stat_in_real} validates that the Pragmatic strategy performs best among all three strategies when $v_{F}^{max}=1.8\cdot v_{L}^{max}$. Moreover, the Baseline strategy performs the worst due to its lack of speed adaptation as well as its lack of history incorporation. In summary, our real-experiment-based results concur with the emulation results.
\begin{figure*}[!ht] \centering \subfloat[Indoor]{\label{fig:indoor_trace}\includegraphics[width=0.32\linewidth, height=0.2\linewidth]{robo_follower_fig/practical_exp/indoor_trace.pdf}} \qquad \subfloat[Hallway]{\label{fig:hallway_trace}\includegraphics[width=0.32\linewidth, height=0.22\linewidth]{robo_follower_fig/practical_exp/hallway_trace.pdf}}\, \subfloat[Indoor (No Line of Sight)]{\label{fig:indoor_static}\includegraphics[width=0.32\linewidth, height=0.2\linewidth]{robo_follower_fig/practical_exp/indoor_static.pdf}} \qquad \subfloat[VICON System]{\label{fig:vicon_trace}\includegraphics[width=0.32\linewidth, height=0.22\linewidth]{robo_follower_fig/practical_exp/trace_vicon.pdf}} \caption{Full Path Traces from Small Scale Real World Experiments} \label{fig:practical_trace} \end{figure*} \subsection{Estimation Errors} \label{sec:real_est_err} To analyze the state estimation errors in our ARREST architecture, similar to the emulations, we perform a range of prototype-based experiments, where $v_{F}^{max} = 1.8\cdot v_{L}^{max}$ and the Leader follows a set of random paths. In Fig.~\ref{fig:dist_error_stat_in_real}, we plot the empirical CDF of the absolute errors in the distance estimates maintained by our TrackBot. Figure~\ref{fig:dist_error_stat_in_real} clearly illustrates that the instantaneous absolute errors in our distance estimates are $\leq 100cm$ with very high probability ($\approx90\%$), and are bounded by $1.5m$. These statistics are also reasonable for pure RSSI-based estimation systems and concur with the emulation results. Next, in Fig.~\ref{fig:angle_error_stat_in_real}, we compare the angle estimation error performance of the TrackBot for all three AoA observation methods introduced in Section~\ref{sec:arrest_angle}, where we intentionally introduce random sparsity in the RSSI measurements.
Figure~\ref{fig:angle_error_stat_in_real} illustrates that our proposed \textbf{clustering method} and \textbf{weighted average method} perform significantly better than the \textbf{basic correlation method}, which is expected since the first two take the clustered sparsity into account (detailed in Section~\ref{sec:arrest_angle}). The instantaneous absolute angle errors are less than $40^\circ$ with high probability ($\approx 90\%$) for all three methods, which is justified because the HPBW specification of the antenna is approximately $70^\circ$. Figure~\ref{fig:angle_error_stat_in_real} also illustrates that the weighted angle observation method slightly outperforms the clustering method for AoA observation. The apparent similarity between the performance of the {clustering method} and the {weighted average method} is attributed to the consistently smaller cluster sizes compared to the gap sizes ($\lambda_{a} \ll \mu_{a}$) in our experiments. \subsection{Tracking Performance} In Fig.~\ref{fig:indoor_trace}, we present a representative path trace from the experiments in the indoor scenario. Similarly, in Fig.~\ref{fig:hallway_trace} we present a real experiment instance in the Hallway. Lastly, Fig.~\ref{fig:vicon_trace} illustrates an example trace from the VICON system. All three figures illustrate that our system performs quite well in the respective scenarios and stays within $\approx 2m$ of the Leader for the duration of the experiments. These results suggest that our system works equally well in different environments: cluttered and uncluttered. To verify that further, we perform a set of experiments with a static Leader not in the line of sight of the TrackBot for $\geq 50\%$ of the TrackBot's path. \emph{Our TrackBot was able to find the Leader in $75\%$ of such experiments. } In Fig.~\ref{fig:indoor_static}, we present one instance of such an experiment. The main reason behind this success lies in the TrackBot's ability to leverage a good multipath signal (if one exists).
In the absence of a direct line of sight, the TrackBot first follows the most promising multipath component and, by doing so, eventually comes into line of sight with the Leader and follows the direct path from that point on. \emph{In most of these experiments ($\geq 90\%$), the TrackBot travels a total distance of less than 2X the distance traveled by the Leader. This implies that our system is efficient in terms of the energy consumed by robotic maneuvers. } Nonetheless, these small real-world experiments also point out that our current system does not work if there exists no strong/good multipath signal in NLOS situations, where ``strong multipath'' means that one multipath signal's power is significantly higher than the others'. We detail multipath related problems and our method of partly circumventing them in Section~\ref{sec:multipath_adapt}. {\small \begin{table}[!h] \centering \caption{Summary of Small Scale Real-World Experiments} \resizebox{\linewidth}{!} {\small \begin{tabular}{|m{\linewidth}|} \hline $\Box$ Pragmatic Strategy performs best for $1.8 \cdot v_{L}^{max} = v_{F}^{max}$.\\ $\Box$ Absolute distance estimation errors are $<100cm$ with probability $\approx90\%$ and $<150cm$ with probability $\approx 100\%$.\\ $\Box$ Absolute angle estimation errors are $<40^\circ$ with probability $\approx 90\%$.\\ $\Box$ Weighted average AoA observation method performs the best.\\ $\Box$ The TrackBot stays within $2m$ of the Leader with probability $\approx98\%$ in line of sight contexts.\\ $\Box$ The ARREST system works with probability $\approx 75\%$ for NLOS contexts, although it fails if no ``strong multipath'' exists.\\ \hline \end{tabular} } \end{table} } \section{Real Experiment Results : Large Scale} \label{sec:real_exp_large} \subsection{Method} The small scale experiments, presented in Section~\ref{sec:real_exp}, were limited in terms of deployment region ($\leq 60$ sq.
meters) due to the dimensions of the VICON system and the effort plus time required for large scale experiments with manual measuring/markings. To perform large scale, long duration evaluations, we integrated a version of a well-known Time Difference of Arrival based localization~\cite{savvides2001dynamic,maloney2000enchanced} ground truth system in our TrackBot. This helped us avoid the need for tedious manual markings and measurements. For more efficient experiments, we also developed a robotic leader, which we will refer to as the \textbf{LeaderBot} in this section, to act as both the Leader and the reference node for the TDoA localization system. The main idea behind TDoA systems is to use a reference node that transmits two different types of signals, say RF and Ultrasound, simultaneously. The localizing/receiver node then receives these two signals at different instants of time due to the propagation speed difference between RF and Ultrasound, say $\Delta c$. With proper timestamps, the receiver can calculate the time difference of arrival of these two signals, say $\Delta t$, to estimate the distance as $\Delta c \cdot \Delta t$. We extend this concept slightly further by placing both the receiver RF antenna and the ultrasound on the TrackBot's rotating platform. We rotate the platform in steps of $18^\circ$ (just a design choice) and perform TDoA based distance estimation for each orientation of the assembly. The TDoA system returns a valid measurement if and only if the assembly is oriented toward a direct line of sight or a reflected signal path. Assuming that there exists a line of sight, the orientation with the smallest TDoA corresponds to the actual angle between the LeaderBot and the TrackBot, and the value of the smallest TDoA corresponds to the distance. \subsection{LeaderBot and TDoA Ranging} The LeaderBot is built upon the commercially available small Pololu 3pi robot~\cite{pololu}.
\emph{In our LeaderBot, we use two Openmotes: one Openmote acts as the Leader beaconer \textbf{(Beacon Mote)} and operates on 802.15.4 channel 26; the other Openmote \textbf{(Range Mote)} is used to remotely control the 3pi robot's movements and to perform the TDoA based localization on 802.15.4 channel 25.} We use two Openmotes for a cleaner design and to avoid interference between the remote control and beaconing operations. We use a MB 1300 XL-MaxSonar-AE0~\cite{mbultra} as the ultrasound beaconer, powered by the 3pi robot. The LeaderBot is illustrated in Fig.~\ref{fig:3pi}. On the TrackBot, we also add a MB 1300 XL-MaxSonar-AE0~\cite{mbultra} ultrasound receiver on the rotating platform, alongside the directional antenna, to receive the ultrasound beacons. In these experiments, the TrackBot switches between the \textbf{Tracking mode} and the \textbf{Ranging mode} for ground truth estimation by switching its operating threads as well as the Openmote channel (since there is only one Openmote on the TrackBot). The step-by-step ranging method is as follows. \begin{enumerate}[leftmargin=*] \item Before ranging, the TrackBot and the LeaderBot finish their current movement step and stop. \item The TrackBot switches its channel from 26 (Tracking channel) to 25 (Ranging channel). \item The TrackBot's Openmote sends a ranging request (REQ) packet to the LeaderBot's Range Mote. \item Upon receipt of the REQ packet, the Range Mote and the LeaderBot prepare for ranging by temporarily switching off the remote control feature, and the Range Mote sends a Ready (RDY) packet to the TrackBot. \item Upon receiving the RDY packet, the TrackBot's Openmote turns on the ultrasound--RF ping receiving mode by setting flags in the MAC layer to prepare for interception of the packet, and sends a GO packet. \item Upon receiving the GO packet, the Range Mote on the 3pi sends exactly one RF packet and exactly one ultrasound ping at 42 kHz.
\item If both transmissions are received, the TrackBot's Openmote estimates the TDoA and sends it to the mbed, which then rotates the platform to the next orientation. If the TDoA process fails, the Openmote times out and returns 0 to the mbed. \item After rotating the platform by one step, the mbed directs the Openmote to repeat the procedure from Step 3 to Step 7. \item Once a full $360^\circ$ rotation of the platform is complete, the mbed processes the TDoA data to estimate the angle and the distance. The TrackBot's Openmote then switches back to channel 26 for Tracking mode. \end{enumerate} Before evaluating the ARREST system against the TDoA ground truth, we first evaluate the performance of the TDoA system itself. We found that the worst-case distance estimation errors of the TDoA system are on the order of $10-20$ cm, as illustrated in Fig.~\ref{fig:tdoa_performance_dist}. The angle estimation statistics presented in Fig.~\ref{fig:tdoa_performance_angle} demonstrate highly accurate angle estimation. The small chance of an $18^\circ$ error is explained by our choice of an $18^\circ$ ranging rotation step size. Thus, our TDoA system is accurate enough to serve as ground truth in line of sight situations. Nonetheless, we monitor the ranging outputs to trigger retries in case of very inaccurate outputs or momentary failures. Moreover, in non-line-of-sight situations we still rely on manual measurements, as the TDoA system fails in such scenarios.
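The per-revolution processing in Step 9 can be sketched as follows (a minimal illustration with our own naming, not the mbed code): given one TDoA reading per platform orientation, the orientation with the smallest valid reading yields the angle, and its value yields the distance.

```python
V_US = 343.0  # assumed speed of sound in air (m/s)

def scan_to_bearing(tdoa_by_step, step_deg=18):
    """Process one full-revolution TDoA scan.

    tdoa_by_step: one TDoA reading (seconds) per 18-degree orientation,
    with 0 marking a failed measurement (Step 7). Returns the estimated
    (angle_deg, distance_m), or None if every orientation failed.
    """
    valid = [(t, i) for i, t in enumerate(tdoa_by_step) if t > 0]
    if not valid:
        return None  # no valid reading in this revolution: trigger a retry
    t_min, i_min = min(valid)
    return i_min * step_deg, V_US * t_min
```

Returning `None` on an all-failed scan mirrors the retry behavior described above for momentary failures.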
\begin{figure}[!ht] \centering \subfloat[]{\label{fig:tdoa_performance_dist}\includegraphics[width=0.8\linewidth,height=0.5\linewidth]{robo_follower_fig/practical_exp/ultrasound_stat_dist.pdf}}\, \subfloat[]{\label{fig:tdoa_performance_angle}\includegraphics[width=0.8\linewidth,height=0.5\linewidth]{robo_follower_fig/practical_exp/ultrasound_stat_angle.pdf}} \caption{ TDoA based Localization System Performance: (a) Distance Estimation Errors and (b) Angle Estimation Errors} \label{fig:tdoa_performance} \end{figure} \begin{figure*}[!ht] \centering \subfloat[]{ \label{fig:3pi}\includegraphics[width=0.15\linewidth, height=0.25\linewidth]{robo_follower_fig/practical_exp/3pi.png}} \subfloat[]{\label{fig:practical_large_env_dist}\includegraphics[width=0.25\linewidth,height=0.25\linewidth]{robo_follower_fig/practical_exp/large_scale_distance.pdf}}\qquad \subfloat[]{\label{fig:practical_large_env_dist_error}\includegraphics[width=0.23\linewidth,height=0.25\linewidth]{robo_follower_fig/practical_exp/large_scale_dist_error.pdf}}\qquad \subfloat[]{\label{fig:practical_large_env_angle_error}\includegraphics[width=0.23\linewidth,height=0.25\linewidth]{robo_follower_fig/practical_exp/large_scale_angle_error.pdf}} \caption{ Real Experiment Based Performance for Large Scale: (a) 3pi LeaderBot, (b) Absolute Distance in Meters, (c) Absolute Distance Estimation Error in Meters, and (d) Absolute Angle Estimation Error in Degrees} \label{fig:error_dist_practical_large_env} \end{figure*} \subsection{Different Experimental Settings} With the aforementioned setup, we performed a range of experiments over several months, with each run lasting $1 - 2$ hours. For the ARREST setup, we use the Pragmatic policy with the weighted average angle estimation because of its superior performance in our emulations and small-scale experiments. The LQG setup is also kept the same as in the small-scale experiments.
To diversify the conditions, we performed experiments in four different classes of settings. $\Box$ Large ($\geq 15m \times 10m$) office rooms with many computers, reflective surfaces, and cluttered regions. $\Box$ Long hallways ($\approx 200m$ long and $5-10m$ wide) with many turns. $\Box$ Open ground-floor spaces ($\approx 30m \times 30m$) with pillars. $\Box$ Homelike environments with couches, furniture, and obstacles. \subsection{Performance Analysis} In Fig.~\ref{fig:practical_large_env_dist}, we present the statistics of the absolute distance between the TrackBot and the LeaderBot over the duration of the experiments in all four scenarios. Figure~\ref{fig:practical_large_env_dist} shows that the absolute distance is bounded by 3.5 meters in all four scenarios, which further corroborates our small-scale experiment results presented in Fig.~\ref{fig:error_dist_practical}. Another noticeable fact is that the ARREST system performs worst in the cluttered office scenario, which is expected given the many reflecting surfaces and obstacles. Similar statistics can be seen in the absolute LQG distance error plot presented in Fig.~\ref{fig:practical_large_env_dist_error}. Figure~\ref{fig:practical_large_env_dist_error} shows that the instantaneous absolute distance errors are $\leq 100cm$ with $\approx 90\%$ probability, except in the office scenario ($\approx 70\%$). {The comparatively higher distance errors in the office scenario are due to overestimation of distances in NLOS situations and in the presence of strong multipath signals. However, this does not affect the performance much, as a temporarily overestimated distance tends only to lead to a temporarily higher velocity of the TrackBot.} In summary, the distance error statistics are mostly similar to those from the small-scale experiments.
A similar pattern can be observed in the angle estimation error plots presented in Fig.~\ref{fig:practical_large_env_angle_error}. Again, the performance is worst in the office space. The open space performance is markedly better than in the other scenarios due to the absence of multipath signals. The instantaneous angle errors are less than $40^\circ$ with high probability ($\approx 85\%$) in the \textbf{overall statistics.} However, the scenario-specific error statistics (error being less than $40^\circ$) vary from $\approx 75\%$ probability in the indoor settings to $\approx 100\%$ probability in the outdoor settings. {This slight discrepancy between the small-scale and large-scale angle error performance is mainly due to the different environment settings, as evident from Fig.~\ref{fig:practical_large_env_angle_error} itself. In Fig.~\ref{fig:full_path_large_scale}, we present a sample illustrative trace of a large-scale hallway experiment, drawn based on manual reconstruction from a video recording and markings on the floor.} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{robo_follower_fig/practical_exp/large_hallway_trace_1.png} \caption{ Full Path Trace for a Sample Large Scale Experiment (Blue$\implies$ Leader, Red$\implies$ TrackBot)} \label{fig:full_path_large_scale} \end{figure} \subsection{Multipath Adaptation} \label{sec:multipath_adapt} As in the small-scale experiments, we perform a set of experiments with a static Leader that is not in the line of sight of the TrackBot for $\geq 50\%$ of the TrackBot's path. Owing to its ability to leverage a good multipath signal, the TrackBot was able to find the Leader in $70\%$ of the cases. However, we also notice that it fails dramatically if the TrackBot falls into a region with neither a direct path nor a strong multipath signal (i.e., where multiple multipath signals of similar strength exist).
To overcome this, we add a \textbf{Multipath Angle Correction} module in the CAST layer (refer to Fig.~\ref{fig:arrest}). This module triggers a randomized movement for a single LQG period if: (1) the TrackBot hits an obstacle for $3-4$ consecutive LQG periods, or (2) the LQG-estimated distance to the transmitter does not change much over $3-4$ consecutive periods. This policy steers the TrackBot in a random direction in the hope of escaping such a region. {However, we noticed that if the TrackBot keeps following randomized directions for consecutive LQG periods, the tracking performance suffers. Thus, we set a minimum time duration (five LQG periods in our implementation) between any two consecutive randomized movements. Note that all these timing choices were made empirically via a range of real experiments. \emph{With this strategy, we observed an improvement in the TrackBot's success rate from $\approx 70\%$ to $\approx 95\%$ in such scenarios. However, the trade-off is that the convergence in the case of a faraway Leader ($\geq 8m$) is now slower by $\approx 15 \%$.
} } {\small \begin{table}[!h] \centering \caption{Summary of Large Scale Real-World Experiments} \resizebox{\linewidth}{!} {\small \begin{tabular}{|m{\linewidth}|} \hline $\Box$ Absolute distance estimation errors are $<100cm$ with probability $\approx90\%$, except in cluttered office environments.\\ $\Box$ Average absolute angle estimation errors are $<40^\circ$ with probability $\approx 85\%$.\\ $\Box$ The TrackBot stays within $3.5m$ of the Leader with probability $\approx100\%$ in all tracking scenarios.\\ $\Box$ In NLOS scenarios, the addition of conditional randomization improves the success rate from $70\%$ to $95\%$ but slows the convergence by $\approx 15\%$ for a static, far-away Leader.\\ \hline \end{tabular} } \end{table} } \section{Miscellaneous} \subsection{Raw RSSI Data Analysis} \label{sec: raw} Based on all our evaluations, we conclude that the presence of multipath signals does not hamper the performance if there exists a direct line of sight. To justify this further and to gather more insight into the system's performance, we perform a raw RSSI data analysis and calculate the unfiltered error statistics. In Fig.~\ref{fig:raw_data_large_env_dist}, we plot the RSSI pattern based distance estimation error statistics, which demonstrate that the directional antenna pattern based distance estimates are accurate to within 1 meter with $90\%$ probability. On the other hand, Fig.~\ref{fig:raw_data_large_env_angle} shows that the RSSI pattern based angle estimation errors are less than $40^\circ$ with high probability ($\approx 80\%$), with some deviations due to multipath and random changes in movement directions. Again, note that an error of up to $40^\circ$ is acceptable given our choice of directional antenna. {\emph{We also perform a set of experiments in an anechoic chamber with controlled reflector positions.
While we do not present the respective plots due to page limitations, the statistics are very similar to those of Fig.~\ref{fig:raw_data_large_env} for a maximum separation distance of 5m.} } We also verify the performance of the RSSI based estimation for varying sampling rates. For this set of experiments, we fix the distance and angle between the TrackBot and the Leader and properly set the channel parameters before each experiment. Figure~\ref{fig:error_stat_in_real} presents the average angle errors and average distance estimation errors with $95\%$ confidence intervals for varying sampling rates. Figure~\ref{fig:error_stat_in_real_angle} shows that the angle estimation performance deteriorates as the sampling rate is decreased, as expected. The distance estimation accuracy, in contrast, does not vary much with the sampling rate. Our numbers may even appear better than those typically reported for RSSI based localization (where typical errors are $\approx 2m-5m$ or higher), but this is because each distance estimate uses the average of $40 - 200$ samples, one from each sample's respective antenna orientation. This analysis also suggests that we could use a sampling rate of $100$ samples/rev to achieve similar performance. Nonetheless, we stick with $200$ samples/rev, as we noticed a loss of up to $70-90$ samples per revolution in severe scenarios.
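The benefit of averaging many samples per revolution can be illustrated with a simplified log-distance path-loss inversion. This is purely illustrative: the actual system matches RSSI against the directional antenna's gain pattern, and the reference power and path-loss exponent below are hypothetical values, not measured parameters of our hardware.

```python
# Illustration only: the real system fits the directional antenna's RSSI
# pattern, not this simplified model, and P0/N below are made-up values.
P0 = -40.0  # hypothetical RSSI (dBm) at 1 m reference distance
N = 2.0     # hypothetical path-loss exponent

def distance_from_rssi(rssi_samples):
    """Invert the log-distance path-loss model on the averaged RSSI.

    Averaging the 40-200 samples collected over one platform revolution
    suppresses per-sample noise before the (noise-amplifying) exponential
    inversion, which is why the aggregate estimate beats single-sample RSSI
    ranging.
    """
    mean_rssi = sum(rssi_samples) / len(rssi_samples)
    return 10.0 ** ((P0 - mean_rssi) / (10.0 * N))
```

Because the inversion is exponential in the RSSI, a few dB of unaveraged noise translates into a multiplicative distance error, which matches the $2m-5m$ accuracies commonly reported for single-sample RSSI ranging.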
\begin{figure}[!ht] \centering \subfloat[]{ \label{fig:raw_data_large_env_dist} \includegraphics[width=0.8\linewidth,height=0.42\linewidth]{robo_follower_fig/practical_exp/raw_data_dist.pdf}}\, \subfloat[]{ \label{fig:raw_data_large_env_angle}\includegraphics[width=0.8\linewidth,height=0.42\linewidth]{robo_follower_fig/practical_exp/raw_data_angle.pdf}} \caption{Raw Data Analysis: (a) Distance Estimation Errors and (b) Angle Estimation Errors} \label{fig:raw_data_large_env} \end{figure} \subsection{Different Sensing Modalities} \label{sec:modality} While our proposed ARREST architecture employs purely RSSI based distance, angle, and speed estimations, the same architecture can easily be adapted to use other technologies such as cameras or infrared sensors. In such cases, we only need to modify the CANE layer of the ARREST architecture and feed the relative position approximations to the CAST layer. Each of these estimation technologies, i.e., camera based or RF based estimation, has different accuracies in terms of distance and angle estimation. To analyze the tracking performance of the ARREST system independently of the actual technology used in the CANE layer, we perform a set of simulation experiments in which we control the average errors in the distance and angle estimations. Figure~\ref{fig:control_error_dist} illustrates one instance of such experiments, where we fix the average angle error ($0$ in this case) and vary the average distance estimation error. {Figure~\ref{fig:control_error_dist} shows that positive estimation errors ($d_{org}-d^e>0$, where $d_{org}$ is the actual distance) have a more detrimental effect on the tracking performance than negative errors.} This is expected, as positive distance estimation errors imply always falling short in the movements, whereas negative errors imply overestimation and more aggressive movements. It is also noticeable that there exists an optimal value of the average distance estimation error.
The value of this optimal distance error depends on the maximum Leader speed as well as the average angle error. Next, in Fig.~\ref{fig:control_error_angle}, we plot the relation between the average tracking distance and the average angle error while the average distance error is kept at $0$. Intuitively, the best tracking performance is obtained for an average angle estimation error of $0$. Note that we do not control the speed error separately, as it is directly related to the angle and distance estimations. This analysis demonstrates the versatility of our ARREST architecture in tolerating a large range of estimation errors. More specifically, it tolerates up to $5m$ average distance error and $45^\circ$ average absolute angle error while still tracking successfully. This analysis also shows that, while the RSSI based system is not optimal, its performance is reasonable compared to the best possible ARREST system (with zero distance and angle estimation errors). \subsection{Some Challenges and Lessons Learned} \label{sec:lesson} Here we present two main challenges we faced in this project, along with our methods of overcoming them. \begin{figure}[!ht] \centering \subfloat[]{ \label{fig:error_stat_in_real_dist} \includegraphics[width=0.8\linewidth,height=0.42\linewidth]{robo_follower_fig/practical_exp/vary_sample_dist.pdf}}\, \subfloat[]{ \label{fig:error_stat_in_real_angle}\includegraphics[width=0.8\linewidth,height=0.42\linewidth]{robo_follower_fig/practical_exp/vary_sample_angle.pdf}} \caption{Estimation Performance for Varying Sampling Rate: (a) Distance Estimation Errors and (b) Angle Estimation Errors} \label{fig:error_stat_in_real} \end{figure} \emph{Rotating Platform Wire Twisting:} Continuous rotation in a single direction would twist the wires running to the platform; to overcome this challenge mechanically, we alternate the rotation direction between clockwise and anticlockwise.
Further, we opt for a system design in which every device on the rotating platform (in our case, only the ultrasound) communicates via the serial line between the Openmote (on the platform) and the mbed. To achieve this, we use multi-threading in RIOT OS and the HDLC protocol to allocate a dedicated thread and HDLC identifier to each peripheral device on the Openmote. Upon receipt of an HDLC packet from the Openmote, the mbed processes the HDLC identifier to identify the source device and performs the necessary operation. \emph{Missing Samples:} During the RSSI sampling, we noticed that the mbed received a very low number of samples from the Openmote, with chunks of missing samples. This was caused by the beaconer's buffer overflowing (due to continuous beaconing), interference from other devices, and losses in the communication between the Openmote and the mbed. To solve the beaconer buffer overflow issue, we added a periodic reset controller on the 3pi LeaderBot's mbed that resets the beaconer every two minutes via a GPIO pin. The interference- and noise-related missing samples were addressed by changing the beaconing from broadcast to unicast and by employing our proposed block based angle estimation (refer to Section~\ref{sec:arrest_angle}). We reduce the losses in the communication between the mbed and the Openmote by employing HDLC based packetized serial communication with a proper ACK mechanism.
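The identifier based demultiplexing described above can be sketched as follows. This is our own minimal illustration in Python, not the actual RIOT OS/mbed code; the identifier value and handler name are hypothetical.

```python
# Minimal sketch of per-device HDLC demultiplexing on the receiving side.
# Identifier values and handler names are hypothetical.
HANDLERS = {}

def register(hdlc_id):
    """Associate a handler with one HDLC identifier (one per peripheral)."""
    def wrap(fn):
        HANDLERS[hdlc_id] = fn
        return fn
    return wrap

@register(0x01)
def handle_ultrasound(payload):
    # In the real system this would forward the ultrasound reading to the
    # ranging logic; here we simply tag the payload with its source.
    return ("ultrasound", payload)

def dispatch(frame):
    """Route a received frame (identifier byte + payload) to its handler."""
    hdlc_id, payload = frame[0], frame[1:]
    handler = HANDLERS.get(hdlc_id)
    if handler is None:
        return None  # unknown identifier: drop the frame
    return handler(payload)
```

Keeping one identifier (and one thread on the Openmote side) per peripheral makes it straightforward to add further devices to the platform without changing the serial framing.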
\begin{figure*}[!ht] \centering \subfloat[]{\label{fig:control_error_dist} \includegraphics[width=0.35\linewidth,height=0.20\linewidth]{robo_follower_fig/compare/control_dist_error.pdf}} \qquad \subfloat[]{\label{fig:control_error_angle}\includegraphics[width=0.35\linewidth,height=0.20\linewidth]{robo_follower_fig/compare/control_angle_error.pdf}} \caption{ Performance of the ARREST System in Terms of Controlled Estimation Errors} \label{fig:control_error} \end{figure*} \section{Conclusion} While our proposed, purely RSSI based relative localization and tracking system for autonomously following an RF-emitting object achieves reasonable performance, several research questions remain that are beyond the scope of this work. First, we intend to develop a strategy with a proper trade-off between Optimism and Pragmatism, which could potentially improve the performance. Second, we want to make the system faster by employing the concept of compressive sampling, which could allow for continuous-time decision making. Third, we want to explore the optimal configuration options for our system as well as the optimality conditions for RF based tracking. Fourth, we intend to look into more structured randomization of the TrackBot's movements to improve performance in severe NLOS environments. Finally, we intend to explore the domains of game theory and robust control to see whether better or more robust predictions of the Leader's motion could improve the performance. { \bibliographystyle{IEEEtran}